--- abstract: 'Azimuthal single-spin asymmetries of lepto-produced pions and charged kaons were measured on a transversely polarized hydrogen target. Evidence for a naive-T-odd, transverse-momentum-dependent parton distribution function is deduced from non-vanishing Sivers effects for $ \pi^+ $, $\pi^0$, and $ K^\pm $, as well as in the difference of the $ \pi^+$ and $\pi^- $ cross sections.' author: - 'A. Airapetian' - 'N. Akopov' - 'Z. Akopov' - 'E.C. Aschenauer' - 'W. Augustyniak' - 'A. Avetissian' - 'E. Avetisyan' - 'A. Bacchetta' - 'B. Ball' - 'N. Bianchi' - 'H.P. Blok' - 'H. Böttcher' - 'C. Bonomo' - 'A. Borissov' - 'V. Bryzgalov' - 'J. Burns' - 'M. Capiluppi' - 'G.P. Capitani' - 'E. Cisbani' - 'G. Ciullo' - 'M. Contalbrigo' - 'P.F. Dalpiaz' - 'W. Deconinck' - 'R. De Leo' - 'L. De Nardo' - 'E. De Sanctis' - 'M. Diefenthaler' - 'P. Di Nezza' - 'J. Dreschler' - 'M. Düren' - 'M. Ehrenfried' - 'G. Elbakian' - 'F. Ellinghaus' - 'U. Elschenbroich' - 'R. Fabbri' - 'A. Fantoni' - 'L. Felawka' - 'S. Frullani' - 'D. Gabbert' - 'G. Gapienko' - 'V. Gapienko' - 'F. Garibaldi' - 'V. Gharibyan' - 'F. Giordano' - 'S. Gliske' - 'C. Hadjidakis' - 'M. Hartig' - 'D. Hasch' - 'G. Hill' - 'A. Hillenbrand' - 'M. Hoek' - 'Y. Holler' - 'I. Hristova' - 'Y. Imazu' - 'A. Ivanilov' - 'H.E. Jackson' - 'H.S. Jo' - 'S. Joosten' - 'R. Kaiser' - 'T. Keri' - 'E. Kinney' - 'A. Kisselev' - 'V. Korotkov' - 'V. Kozlov' - 'P. Kravchenko' - 'L. Lagamba' - 'R. Lamb' - 'L. Lapikás' - 'I. Lehmann' - 'P. Lenisa' - 'L.A. Linden-Levy' - 'A. López Ruiz' - 'W. Lorenzon' - 'X.-G. Lu' - 'X.-R. Lu' - 'B.-Q. Ma' - 'D. Mahon' - 'N.C.R. Makins' - 'S.I. Manaenkov' - 'L. Manfré' - 'Y. Mao' - 'B. Marianski' - 'A. Martinez de la Ossa' - 'H. Marukyan' - 'C.A. Miller' - 'Y. Miyachi' - 'A. Movsisyan' - 'M. Murray' - 'A. Mussgiller' - 'E. Nappi' - 'Y. Naryshkin' - 'A. Nass' - 'M. Negodaev' - 'W.-D. Nowak' - 'L.L. Pappalardo' - 'R. Perez-Benito' - 'P.E. Reimer' - 'A.R. Reolon' - 'C. Riedl' - 'K. Rith' - 'G. Rosner' - 'A. 
Rostomyan' - 'J. Rubin' - 'D. Ryckbosch' - 'Y. Salomatin' - 'F. Sanftl' - 'A. Schäfer' - 'G. Schnell' - 'K.P. Schüler' - 'B. Seitz' - 'T.-A. Shibata' - 'V. Shutov' - 'M. Stancari' - 'M. Statera' - 'J.J.M. Steijger' - 'H. Stenzel' - 'J. Stewart' - 'F. Stinzing' - 'S. Taroian' - 'A. Terkulov' - 'A. Trzcinski' - 'M. Tytgat' - 'A. Vandenbroucke' - 'P.B. van der Nat' - 'Y. Van Haarlem' - 'C. Van Hulse' - 'M. Varanda' - 'D. Veretennikov' - 'V. Vikhrov' - 'I. Vilardi' - 'C. Vogel' - 'S. Wang' - 'S. Yaschenko' - 'H. Ye' - 'Z. Ye' - 'S. Yen' - 'W. Yu' - 'D. Zeiler' - 'B. Zihlmann' - 'P. Zupranski' date: '****' title: 'Observation of the Naive-T-odd Sivers Effect in Deep-Inelastic Scattering' --- The ongoing experimental effort in spin-dependent high-energy scattering and attendant theoretical work continue to indicate that the spins of the quarks and gluons are not sufficient to explain the nucleon spin [@deFlorian:2008mr]. The investigation of the only remaining contribution, that of orbital angular momentum of the constituents, is clearly essential. Transverse-momentum-dependent parton distribution functions are recognized as a tool to study spin-orbit correlations, hence providing experimental observables for studying orbital angular momentum. One particular example is the Sivers function [$f_{1{T}}^{\perp}$]{} [@Sivers:1990cc], describing the correlation between the momentum direction of the struck quark and the spin of its parent nucleon. This correlation is commonly referred to as the Sivers effect. A non-vanishing [$f_{1{T}}^{\perp}$]{} contributes to, e.g., single-spin asymmetries (SSAs) in semi-inclusive deep-inelastic scattering (DIS) off transversely polarized protons, $ep^\uparrow \rightarrow e'hX$, where $h$ is a hadron detected in coincidence with the scattered lepton $e'$. For a long time, transverse SSAs had been assumed to be negligible in hard scattering processes.
They are odd under naive time reversal, i.e., time reversal of three-momenta and angular momenta, and thus require interference of amplitudes with different helicities and phases. In QED and perturbative QCD, these ingredients are suppressed [@PhysRev.143.1310; @Kane:1978nd]. Therefore, in semi-inclusive DIS they must be ascribed to the non-perturbative parts in the cross section, i.e., to specific parton distribution and fragmentation functions, commonly categorized as being naive-T-odd. The idea of a naive-T-odd quark distribution function goes back to an interpretation [@Sivers:1990cc] of large left-right asymmetries observed in pion production in the collision of unpolarized with transversely polarized nucleons [@Antille:1980th]. It was argued that such asymmetries could be attributed to a left-right asymmetry in the distribution of unpolarized quarks in transversely polarized nucleons, i.e., an asymmetry that exists before the pion is formed in the fragmentation process, and that does not vanish at high energies. A decade after an initial proof [@Collins:1993kk] that this distribution function, now termed the Sivers function, must vanish because of time-reversal invariance of QCD, it was realized through the pioneering work in Ref. [@Brodsky:2002cx] and subsequently in Refs. [@Collins:2002kn; @Ji:2002aa; @Belitsky:2002sm] that this proof applies only to transverse-momentum-integrated distribution functions. A gauge link, previously neglected in the definition of gauge-invariant distribution functions, invalidates the original proof for the case of transverse-momentum-dependent distribution functions. The gauge link provides the phase for the interference (required for naive-T-oddness), and can be interpreted as an interaction of the struck quark with the color field of the target remnant [@Burkardt:2003yg]. 
The inclusion of the gauge link has profound consequences for factorization proofs and for the concept of universality, which are of fundamental relevance for high-energy hadronic physics. A direct QCD prediction is a Sivers effect in the Drell–Yan process that has the opposite sign compared to the one in semi-inclusive DIS [@Collins:2002kn]. For hadron production in proton-proton collisions the situation is more intricate [@Bacchetta:2005rm], leading to a violation of standard factorization and universality, even for the case of unpolarized collisions [@Collins:2007nk]. Therefore, the study of the Sivers effect in semi-inclusive DIS and other processes is of utmost importance for our understanding of high-energy scattering involving hadrons. The Sivers effect has been related to the orbital motion of quarks inside a transversely polarized nucleon since the seminal work in Ref. [@Sivers:1990cc]. In the calculation of Ref. [@Brodsky:2002cx], it became clear that orbital angular momentum of quarks is needed for a non-vanishing Sivers effect as it arises through overlap integrals of wave-function components with different orbital angular momenta. However, no quantitative relation has yet been found between [$f_{1{T}}^{\perp}$]{} and the orbital angular momentum of quarks. One faces a similar quandary with the anomalous magnetic moment $\kappa$ of the nucleon: it also requires wave function components with non-vanishing quark orbital angular momentum without constraining the [*net*]{} orbital angular momentum [@Burkardt:2005km]. Indeed, [$f_{1{T}}^{\perp}$]{} involves overlap integrals between the same wave function components that also appear in the expressions for $\kappa$ as well as for the total angular momentum in the Ji relation [@Ji:1997ek] for the nucleon-spin decomposition [@Brodsky:2002cx; @Burkardt:2005km]. An interesting link between $\kappa$ and [$f_{1{T}}^{\perp}$]{} was suggested in Ref.
[@Burkardt:2002ks]: the sign of the quark-flavor contribution to $\kappa$ determines the sign of [$f_{1{T}}^{\perp}$]{} for that quark flavor. If the final-state interactions are attractive, as one would assume for the confining color force, a positive flavor contribution to $\kappa$ leads to a negative [$f_{1{T}}^{\perp}$]{}. (The sign and angle definitions follow the [*Trento Conventions*]{} [@Bacchetta:2004jz].) In semi-inclusive DIS, [$f_{1{T}}^{\perp}$]{} leads to SSAs in the distribution of hadrons in the azimuthal angle about the virtual-photon direction. In general, azimuthal SSAs provide important information not only about the Sivers function but also about other distribution and fragmentation functions. For example, transversity [@Ralston:1979ys], describing the distribution of transversely polarized quarks in transversely polarized nucleons, combined with the naive-T-odd Collins fragmentation function [@Collins:1993kk], also leads to SSAs. The keys to extracting different combinations of the various distribution and fragmentation functions are their different dependences on the two azimuthal angles $\phi$ and $\phi_S$ of the hadron momentum $\boldsymbol{P}_h$ and of the transverse component [$\V{S}_{T}$]{} of the target-proton spin, respectively, about the virtual-photon direction (cf. [@Bacchetta:2004jz]). The Sivers effect manifests itself as a $\sin(\phi-\phi_S)$ modulation in the azimuthal distribution [@Boer:1998nt]. In this Letter clear evidence for a non-vanishing Sivers function is reported. The $\sin(\phi-\phi_S)$ modulations in semi-inclusive DIS are measured for pions and charged kaons, as well as in the difference between the $\pi^+$ and $\pi^-$ cross sections, providing sensitivity to [$f_{1{T}}^{\perp}$]{} for both valence and sea quarks.
The data reported here were recorded during the 2002–2005 running period of the [<span style="font-variant:small-caps;">Hermes</span>]{} experiment using a transversely nuclear-polarized hydrogen gas target internal to the $27.6$ GeV [<span style="font-variant:small-caps;">Hera</span>]{} lepton ($e^+$ or $e^-$) storage ring at [<span style="font-variant:small-caps;">Desy</span>]{}. The open-ended target cell was fed by an atomic-beam source [@Nass:2003mk] based on Stern–Gerlach separation combined with radio-frequency transitions of hyperfine states. The nuclear spin direction was flipped at 1–3 min time intervals, while both nuclear polarization and the atomic fraction inside the target cell were continuously measured [@hermes:BRP_TGA]. The average magnitude of the proton-polarization component perpendicular to the lepton-beam direction was $0.725\pm 0.053$. Scattered leptons and coincident hadrons were detected by the [<span style="font-variant:small-caps;">Hermes</span>]{} spectrometer [@Ackerstaff:1998av]. Leptons were identified with an efficiency exceeding 98% and a hadron contamination of less than 1%. Charged hadrons with momentum $2\,\text{GeV} < |\boldsymbol{P}_h| < 15\,\text{GeV}$ were identified using a dual-radiator ring-imaging [Č]{}erenkov detector (RICH) [@Akopov:2000qi]. For this a hadron-identification algorithm was employed that takes into account the topology of the whole event, in contrast to the track-level algorithm in previous analyses [@Airapetian:2004tw]. Events were selected subject to the requirements $Q^2>1$ GeV$^2$, $W^2>10$ GeV$^2$, $0.1 < y < 0.95$, and $0.023<x< 0.4$, where $Q^2\equiv -q^2\equiv -(k-k')^2$, $W^2\equiv(P+q)^2$, $y\equiv (P\cdot q)/(P\cdot k)$, and $x\equiv Q^2/(2P\cdot q)$. Here, $P$, $k$, and $k'$ represent the four-momenta of the target proton, the incident lepton, and the scattered lepton, respectively. Coincident hadrons were accepted if $0.2<z<0.7$, where $z\equiv(P\cdot P_h)/(P\cdot q)$.
The cross section for semi-inclusive production of hadrons using an unpolarized lepton beam on a transversely polarized target can be written as [@Mulders:1996dh; @Boer:1998nt; @Bacchetta:2006tn] $$\begin{aligned} \sigma(\phi,\phi_S) \!\! &=& \!\! \sigma_{_{UU}} \lbrace 1+ 2\langle\cos\phi\rangle_{_{UU}}\cos\phi + 2\langle\cos2\phi\rangle_{_{UU}} \cos2\phi \nonumber\\ &+& \!\! \mid \!\! {\ensuremath{\V{S}_{T}}}\!\! \mid \! \left[ 2 \langle\sin(\phi\!-\!\phi_S)\rangle_{_{UT}} \sin(\phi-\phi_S)+ \ldots \right] \rbrace , \, \label{eq:sigma_UT}\end{aligned}$$ where five sine modulations contribute to the polarization-dependent part, but, for convenience, only the $\sin(\phi-\phi_S)$ modulation (the Sivers term), is written out explicitly. Here, the subscript $UT$ denotes unpolarized beam and transverse target polarization (with respect to the virtual-photon direction), while $\sigma_{_{UU}}$ represents the $\phi$-independent part of the polarization-independent cross section. The $\sin(\phi-\phi_S)$ amplitude can be interpreted in the quark-parton model as [@Boer:1998nt] $$\label{eq:QPM-sivers} 2\langle\sin{(\phi-\phi_S)}\rangle_{_{UT}} = -\frac{\sum_q e_q^2 \siversf{,q}(x,p_T^2) \otimes_{\mathcal W} D_1^q(z,K_T^2)}{\sum_q e_q^2 f_1^q(x,p_T^2) \otimes D_1^q(z,K_T^2)},$$ where the sums run over the quark flavors, the $e_q$ are the quark charges, and $f_1$ and $D_1$ are the spin-independent quark distribution and fragmentation functions, respectively. The symbol $\otimes$ ($\otimes_{\mathcal W}$) represents a (weighted) convolution integral over intrinsic and fragmentation transverse momenta [$\V{p}_{T}^{}$]{} and [$\V{K}_{\!T}$]{}, respectively. The amplitudes of the five sine modulations in Eq.  were extracted simultaneously to avoid cross contamination. 
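As a minimal illustration of how such an azimuthal amplitude can be obtained from event-level $(\phi,\phi_S)$ samples, the following toy sketch fits a lone $\sin(\phi-\phi_S)$ amplitude by unbinned maximum likelihood. All numbers and the single-modulation likelihood are invented for illustration; this is not the HERMES analysis code, which fits all modulations simultaneously.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy sketch of an unbinned maximum-likelihood amplitude extraction.
# All values are invented for illustration; NOT the HERMES fit.
rng = np.random.default_rng(1)
S_T, A_true, n_events = 0.725, 0.05, 200_000

# Accept-reject sampling from pdf(phi, phi_S) ~ 1 + S_T*A*sin(phi - phi_S).
phi = rng.uniform(0.0, 2.0 * np.pi, 4 * n_events)
phi_s = rng.uniform(0.0, 2.0 * np.pi, 4 * n_events)
w = 1.0 + S_T * A_true * np.sin(phi - phi_s)
keep = rng.uniform(0.0, 2.0, phi.size) < w   # envelope 2 >= max(w)
phi, phi_s = phi[keep][:n_events], phi_s[keep][:n_events]

def nll(a):
    # Negative log-likelihood; the (phi, phi_S) normalization is
    # amplitude-independent, so it can be dropped from the fit.
    return -np.sum(np.log(1.0 + S_T * a * np.sin(phi - phi_s)))

fit = minimize_scalar(nll, bounds=(-1.0, 1.0), method="bounded")
print(f"fitted sin(phi - phi_S) amplitude: {fit.x:.3f}")
```

Because the normalization of the probability density over $(\phi,\phi_S)$ does not depend on the amplitude, maximizing the summed logarithm of the modulation factor suffices; in the real extraction all five sine modulations are fitted at once precisely to avoid the cross contamination mentioned above.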
For this a maximum-likelihood fit was used [@Diefenthaler2009], with the data alternately binned in $x$, $z$, and $P_{h\perp}\equiv | \boldsymbol{P}_h - \frac{(\boldsymbol{P}_h \cdot \boldsymbol{q})\boldsymbol{q}}{|\boldsymbol{q}|^2}| $, but unbinned in $\phi$ and $\phi_S$. A sixth term, arising from the small but non-vanishing target-spin component that is longitudinal to the virtual-photon direction when the target is polarized perpendicular to the beam direction [@Diehl:2004], was also included in the fit. A scale uncertainty of 7.3% on the extracted Sivers amplitudes arises from the accuracy of the target-polarization determination. Inclusion in the fit of estimates [@Giordano:2009hi] for the $\cos\phi$ and $\cos2\phi$ amplitudes of the unpolarized cross section had negligible effects on the amplitudes extracted. Possible contributions [@Diehl:2004] to the amplitudes from the non-vanishing longitudinal target-spin component were estimated based on measurements of SSAs on longitudinally polarized protons [@Airapetian:1999tv; @Airapetian:2001eg] and included in the systematic uncertainty. Effects from the hadron identification using the RICH, the geometric acceptance, smearing due to detector resolution, and radiative effects are not corrected for in the data. Rather, the size of all these effects was estimated using a simulation tuned to the data, which involved a fully differential polynomial fit to the measured azimuthal amplitudes [@Pappalardo:2008zza]. The result was included in the systematic uncertainty and constitutes the largest contribution. ![\[fig:main-results-all\] Sivers amplitudes for pions, charged kaons, and the pion-difference asymmetry (as denoted in the panels) as functions of $x$, $z$, or ${P_{h\perp}}$. The systematic uncertainty is given as a band at the bottom of each panel. In addition there is a 7.3% scale uncertainty from the target-polarization measurement. 
](Figure1.eps) Based on a [<span style="font-variant:small-caps;">Pythia6</span>]{} Monte Carlo simulation [@PYTHIA6] tuned to [<span style="font-variant:small-caps;">Hermes</span>]{} data, the fraction of charged pions (kaons) stemming from the decay of exclusively produced vector mesons was estimated to be about 6–7% (2–3%). Among the contributions of all the vector mesons to the pion samples, that of the $\rho^0$ is dominant. A different observable, for which the contributions from exclusive $\rho^0$ mesons cancel, is the [*pion-difference asymmetry*]{} $$\label{eq:pion-yield-difference} A_{UT}^{\pi^+-\pi^-} \!(\phi, \phi_S) \equiv \frac{1}{ \mid \!\! {\ensuremath{\V{S}_{T}}}\!\! \mid} \frac{(\sigma_{{U}\uparrow}^{\pi^+}\!-\!\sigma_{{U}\uparrow}^{\pi^-}) - (\sigma_{{U}\downarrow}^{\pi^+} \!-\! \sigma_{{U}\downarrow}^{\pi^-})}{(\sigma_{{U}\uparrow}^{\pi^+}\!-\!\sigma_{{U}\uparrow}^{\pi^-}) + (\sigma_{{U}\downarrow}^{\pi^+}\!-\! \sigma_{{U}\downarrow}^{\pi^-})},$$ the SSA in the difference in the $\pi^+$ and $\pi^-$ cross sections for opposite target-spin states $\uparrow,\downarrow$. In addition, this asymmetry helps to isolate the valence-quark Sivers functions: under some assumptions, such as charge-conjugation and isospin symmetry among pion fragmentation functions, one can deduce from Eq.  that this SSA stems mainly from the difference $(\siversf{, d_{v}} - 4 \siversf{, u_{v}})$ in the Sivers functions for valence down and up quarks. The resulting Sivers amplitudes for pions, charged kaons, and for the pion-difference asymmetry are shown in Fig. \[fig:main-results-all\] as functions of $x$, $z$, or $P_{h\perp}$. They are positive and increase with increasing $z$, except for $\pi^-$, for which they are consistent with zero.
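The valence combination quoted for the pion-difference asymmetry can be made explicit in a short sketch, suppressing the transverse-momentum convolutions of Eq. (\ref{eq:QPM-sivers}) and writing $D_1^{\mathrm{fav}}\equiv D_1^{\pi^+,u}=D_1^{\pi^-,d}$ and $D_1^{\mathrm{unf}}\equiv D_1^{\pi^+,d}=D_1^{\pi^-,u}$ (and analogously for antiquarks) under charge-conjugation and isospin symmetry:

```latex
\begin{aligned}
\sum_q e_q^2\, f_{1T}^{\perp,q}\left(D_1^{\pi^+\!,\,q}-D_1^{\pi^-\!,\,q}\right)
&= \left(D_1^{\mathrm{fav}}-D_1^{\mathrm{unf}}\right)
   \left[\tfrac{4}{9}\left(f_{1T}^{\perp,u}-f_{1T}^{\perp,\bar u}\right)
   -\tfrac{1}{9}\left(f_{1T}^{\perp,d}-f_{1T}^{\perp,\bar d}\right)\right] \\
&= \tfrac{1}{9}\left(D_1^{\mathrm{fav}}-D_1^{\mathrm{unf}}\right)
   \left(4 f_{1T}^{\perp,u_v}-f_{1T}^{\perp,d_v}\right),
\end{aligned}
```

so that, with the overall minus sign of Eq. (\ref{eq:QPM-sivers}), the difference asymmetry is driven by $(f_{1T}^{\perp,d_v}-4f_{1T}^{\perp,u_v})$, as stated.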
In the case of $\pi^+$, $K^+$, and the pion-difference asymmetry, the data suggest a saturation of the amplitudes for ${P_{h\perp}}\gtrsim 0.4$ GeV and are consistent with the predicted linear decrease in the limit of ${P_{h\perp}}$ going to zero. ![\[fig:Q2studyVM\] Sivers amplitudes for $\pi^+$ (left) and $K^+$ (right) as functions of $z$ or ${P_{h\perp}}$, compared for two different ranges in $Q^2$ (high-$Q^2$ points are slightly shifted horizontally). The corresponding fraction of pions and kaons stemming from exclusive vector mesons, extracted from a Monte Carlo simulation, is provided in the bottom panels. ](Figure2a.eps "fig:") ![\[fig:Q2studyVM\] Sivers amplitudes for $\pi^+$ (left) and $K^+$ (right) as functions of $z$ or ${P_{h\perp}}$, compared for two different ranges in $Q^2$ (high-$Q^2$ points are slightly shifted horizontally). The corresponding fraction of pions and kaons stemming from exclusive vector mesons, extracted from a Monte Carlo simulation, is provided in the bottom panels. ](Figure2b.eps "fig:") ![\[fig:Q2studyTwist\] Sivers amplitudes for $\pi^+$ (left) and $K^+$ (right) as functions of $x$. The $Q^2$ range for each bin was divided into the two regions above and below $\langle Q^2(x_i)\rangle$ of that bin. In the bottom the average $ Q^2$ values are given for the two $Q^2$ ranges. ](Figure3a.eps "fig:") ![\[fig:Q2studyTwist\] Sivers amplitudes for $\pi^+$ (left) and $K^+$ (right) as functions of $x$. The $Q^2$ range for each bin was divided into the two regions above and below $\langle Q^2(x_i)\rangle$ of that bin. In the bottom the average $ Q^2$ values are given for the two $Q^2$ ranges. ](Figure3b.eps "fig:") In order to further examine the influence of exclusive vector-meson decay and other possible $\frac{1}{Q^2}$-suppressed contributions, several studies were performed. Raising the lower limit of $Q^2$ to 4 GeV$^2$ eliminates a large part of the vector-meson contribution. 
Because of strong correlations between $x$ and $Q^2$ in the data, this is presented only for the $z$ and ${P_{h\perp}}$ dependences. No influence of the vector-meson fraction on the asymmetries is visible as shown in Fig. \[fig:Q2studyVM\]. For the $x$ dependence shown in Fig. \[fig:Q2studyTwist\], each bin was divided into two $Q^2$ regions below and above the corresponding average $Q^2$ ($\langle Q^2(x_i)\rangle$) for that $x$ bin. While the averages of the kinematics integrated over in those $x$ bins do not differ significantly, the $\langle Q^2 \rangle$ values for the two $Q^2$ ranges change by a factor of about 1.7. The asymmetries do not change by as much as would have been expected for a sizable $\frac{1}{Q^2}$-suppressed contribution, e.g., the one from longitudinal photons to the spin-(in)dependent cross section. However, while the $\pi^+$ asymmetries for the two $Q^2$ regions are fully consistent, there is a hint of systematically smaller $K^+$ asymmetries in the large-$Q^2$ region. ![\[fig:piKstudy\] Difference of Sivers amplitudes for $K^+$ and $\pi^+$ as functions of $x$ for all $Q^2$ (left), and separated into “low-” and “high-$Q^2$” regions as done for Fig. \[fig:Q2studyTwist\].](Figure4.eps) An interesting facet of the data is the difference in the $\pi^+$ and $K^+$ amplitudes shown in Fig. \[fig:piKstudy\]. On the basis of $u$-quark dominance, i.e., the dominant contribution to $\pi^+$ and $K^+$ production from scattering off $u$-quarks, one might naively expect that the $\pi^+$ and $K^+$ amplitudes should be similar. The difference in the $\pi^+$ and $K^+$ amplitudes may thus point to a significant role of other quark flavors, e.g., sea quarks. Strictly speaking, even in the case of scattering solely off $u$-quarks, the fragmentation function $D_1$, contained in both the numerator and denominator in Eq. , does not cancel in general as it appears in convolution integrals. 
This can lead not only to additional $z$-dependences, but also to a difference in size of the Sivers amplitude for $\pi^+$ and $K^+$. Higher-twist effects in kaon production might also contribute to the difference observed: in the low-$Q^2$ region, where higher-twist effects should be more pronounced, the $\pi^+$ and $K^+$ amplitudes disagree at a confidence level of at least 90%, based on a Student’s $t$-test, while being statistically consistent in the high-$Q^2$ region. As scattering off $u$-quarks dominates in these data, the positive Sivers amplitudes for $\pi^+$ and $K^\pm$ suggest a large and negative Sivers function for $u$-quarks. This is supported by the positive amplitudes of the difference asymmetry, which is dominated by the contribution from valence $u$-quarks. The vanishing amplitudes for $\pi^-$ require cancellation effects, e.g., from a $d$-quark Sivers function opposite in sign to the $u$-quark Sivers function. In combination with deuteron data from the [<span style="font-variant:small-caps;">Compass</span>]{} collaboration [@Ageev:2006da], a large positive $d$-quark Sivers function can be deduced [@Vogelsang:2005cs]. These fits have yet to be updated with the final results presented here, as well as with preliminary proton data from [<span style="font-variant:small-caps;">Compass</span>]{} [@Levorato:2008tv]. In summary, non-zero Sivers amplitudes in semi-inclusive DIS were measured for production of $\pi^+$, $\pi^0$, and $K^\pm$, as well as for the pion-difference asymmetry. They can be explained by the non-vanishing naive-T-odd, transverse-momentum-dependent Sivers distribution function. This function also plays an important role in transverse single-spin asymmetries in $pp$ collisions, and is linked to orbital angular momentum of quarks inside the nucleon.
Although no quantitative conclusion about their orbital angular momentum can be inferred, the Sivers function provides important constraints on the nucleon wave function and thus indirectly on the total quark orbital angular momentum. For instance, in the approach of Ref. [@Burkardt:2003yg], the measured positive Sivers asymmetries for $\pi^+$ and $K^+$ mesons correspond to a positive contribution of $u$-quarks to the orbital angular momentum, under the assumption that the production of $\pi^+$ and $K^+$ mesons is dominated by scattering off $u$-quarks. We gratefully acknowledge the [<span style="font-variant:small-caps;">Desy</span>]{} management for its support, the staff at [<span style="font-variant:small-caps;">Desy</span>]{} and the collaborating institutions for their significant effort, and our national funding agencies and the EU RII3-CT-2004-506078 program for financial support.
--- abstract: 'We study statistical properties of the number of large earthquakes over the past century. We analyze the cumulative distribution of the number of earthquakes with magnitude larger than threshold $M$ in time interval $T$, and quantify the statistical significance of these results by simulating a large number of synthetic random catalogs. We find that in general, the earthquake record cannot be distinguished from a process that is random in time. This conclusion holds whether aftershocks are removed or not, except at magnitudes below $M=7.3$. At long time intervals ($T$ = 2-5 years), we find that statistically significant clustering is present in the catalog for lower magnitude thresholds ($M$ = 7-7.2). However, this clustering is due to a large number of earthquakes on record in the early part of the 20th century, when magnitudes are less certain.' title: 'Are megaquakes clustered?' --- Introduction ============ The number of powerful earthquakes worldwide has increased over the past decade (Fig. \[fig-nt\] (left)). This increase has prompted debate whether large earthquakes cluster in time \[[*Kerr*]{}, 2011\]. If so, this would have an impact on how seismic hazard is assessed worldwide. Multiple studies have investigated this question \[[*Bufe and Perkins*]{}, 2005; [*Brodsky*]{}, 2009; [*Michael*]{}, 2011; [*Shearer and Stark*]{}, 2012; [*Ammon et al.*]{}, 2011; [*Bufe and Perkins*]{}, 2011\]. Conclusions have been mixed, with some studies finding evidence of clustering \[[*Bufe and Perkins*]{}, 2005; 2011\], while others have concluded that earthquakes cannot be distinguished from a process that is random in time \[[*Michael*]{}, 2011; [*Shearer and Stark*]{}, 2012\]. ![image](fig1.pdf){width="39pc"} In parallel, recent studies show that earthquakes can be dynamically triggered by seismic waves \[[*Hill et al.*]{}, 1993; [*Gomberg et al.*]{}, 2004; [*Freed*]{}, 2005\]. 
It is not clear if large earthquakes can trigger other large earthquakes; one recent study did not find evidence of such triggering \[[*Parsons and Velasco*]{}, 2011\], although this remains an open question in seismology. If large earthquakes do cluster in time, this might suggest that large earthquakes can be dynamically triggered. We study the statistics of large ($M\geq7$) earthquakes from 1900-2011 to assess whether earthquakes deviate from random occurrence. We examine the catalog both with and without removal of aftershocks, and use transparent statistical measures to quantify the likelihood that a random process could produce the earthquake record. Data and Aftershock Removal =========================== Our statistical analysis uses the USGS PAGER catalog of large earthquakes \[[*Allen et al.*]{}, 2009\], supplemented with the Global CMT catalog through the end of 2011. The catalog consists of 1761 events with magnitude $M>7.0$. As can be seen from the magnitude-frequency plot in Fig. \[fig-nt\] (right), this catalog adheres to the ubiquitous Gutenberg-Richter law \[[*Gutenberg and Richter*]{}, 1954\], and is complete for magnitude $M>7.0$. The magnitudes in the PAGER catalog are a mix of magnitude types – the majority of events are given in moment magnitude, but events early in the century often use a different magnitude measure, such as surface wave magnitude. Because very large earthquakes are rare, any study of the statistics of this dataset is inherently limited by the small number of extremely powerful earthquakes on record. We have studied two additional catalogs, one compiled by [*Pacheco and Sykes*]{} \[1992\], and one based on the NOAA Significant Earthquake Database (National Geophysical Data Center/World Data Center (NGDC/WDC) Significant Earthquake Database, Boulder, CO, USA, available at http://www.ngdc.noaa.gov/hazard/earthqk.shtml). We find that the results depend on the catalog choice due to discrepancies in magnitude between the catalogs. 
Because PAGER contains more events, and the magnitudes in PAGER are the most consistent with the Gutenberg-Richter Law, we focus on PAGER in our analysis. A comprehensive study of the discrepancies between catalogs will be the subject of future work. While the PAGER catalog is the most complete record of large earthquakes, the data have limitations. First, because seismic instruments were relatively sparse in the first half of the 20th century, data for these events have larger uncertainties. Additionally, the data include aftershocks. Aftershock removal is not trivial, and it requires assumptions that cannot be tested rigorously due to limited data. We remove aftershocks by flagging any event within a specified time and distance window of a larger magnitude main shock \[[*Gardner and Knopoff*]{}, 1974\]. We use the time window from the original [*Gardner and Knopoff*]{} study. The distance window should be similar to the rupture length of the main shock. However, rupture length data do not exist for the entire catalog. Therefore, we must estimate the rupture length based on magnitude. This is problematic because the catalog contains multiple types of faulting (i.e. subduction megathrust, crustal strike-slip, etc.), each with a different typical rupture length for a given magnitude. For example, the 2002 $M=7.9$ Denali earthquake and the 2011 $M=9.0$ Tohoku Earthquake did not have substantially different rupture lengths \[[*Eberhart-Phillips et al.*]{}, 2003; [*Simons et al.*]{}, 2011\] despite a large difference in seismic moment. We use an empirical rupture length formula \[[*Wells and Coppersmith*]{}, 1994\], and choose to be conservative by doubling the [*Wells and Coppersmith*]{} subsurface rupture length estimate for reverse faulting.
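A window-based declustering pass of this kind can be sketched as follows. The time-window formula is a commonly quoted parametric fit to the Gardner and Knopoff windows, and the distance window is a doubled Wells and Coppersmith-style subsurface rupture-length estimate; both formulas, like the toy catalog, are illustrative assumptions rather than the exact values used in this study.

```python
import math

# Sketch of window-based aftershock removal in the spirit of
# Gardner & Knopoff (1974). Window parameters are illustrative
# placeholders, not necessarily the values used in the paper.
def time_window_days(m):
    # Parametric fit commonly quoted for the Gardner-Knopoff time windows.
    return 10 ** (0.032 * m + 2.7389) if m >= 6.5 else 10 ** (0.5409 * m - 0.547)

def distance_window_km(m):
    # Doubled subsurface rupture-length estimate (cf. Wells &
    # Coppersmith 1994, reverse faulting); illustrative only.
    return 2.0 * 10 ** (-2.42 + 0.58 * m)

def great_circle_km(lat1, lon1, lat2, lon2):
    # Haversine distance on a sphere of radius 6371 km.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(a))

def decluster(events):
    """events: time-ordered list of (t_days, lat, lon, mag).
    Flags an event as an aftershock if it falls inside the
    time/distance window of any earlier, larger event."""
    keep = []
    for i, (t, la, lo, m) in enumerate(events):
        is_aftershock = any(
            m2 > m
            and 0 <= t - t2 <= time_window_days(m2)
            and great_circle_km(la, lo, la2, lo2) <= distance_window_km(m2)
            for t2, la2, lo2, m2 in events[:i]
        )
        if not is_aftershock:
            keep.append((t, la, lo, m))
    return keep

# Toy catalog: an M8 main shock, a nearby M7.1 three days later,
# and an unrelated distant M7.4 more than a year later.
catalog = [(0.0, 38.3, 142.4, 8.0), (3.0, 38.5, 142.0, 7.1), (400.0, -35.8, -72.7, 7.4)]
print(len(decluster(catalog)))  # the M7.1 event is flagged as an aftershock
```

Enlarging the distance window removes more events; as discussed below, erring on the large side is the conservative choice for a clustering test, since leftover aftershocks would mimic clustering.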
We have studied various choices for this rupture length multiplicative factor, and find that doubling the rupture length estimate makes the rupture lengths large enough to be fairly conservative, but not so large as to excessively remove events from the catalog. This may remove some events from the catalog that are not aftershocks, but it will not bias our results by leaving many aftershocks in the catalog. After removal of aftershocks, the PAGER catalog is reduced to 1253 events. In this investigation, we first examine the entire catalog to draw as much information from the raw data as possible before introducing assumptions about aftershocks. Statistical Analysis ==================== Our study utilizes the cumulative probability distribution of the number of large earthquakes in a fixed time interval $Q_n$. The cumulative distribution gives the probability that there are at least $n$ earthquakes with magnitude of at least $M$ in a given time interval $T$, measured in months. We compare the observed frequency distribution $Q_n$ with the frequency distribution for a random Poisson process. Let the average number of large earthquakes in a time interval be $\alpha$. If large earthquakes are not correlated in time, then the probability $P^{{\rm rand}}_{n}$ that there are $n$ events during a time interval is $$P^{{\rm rand}}_{n}=\frac{\alpha^n}{n!}e^{-\alpha}.$$ The Poisson distribution is characterized by a single parameter, the average. We also note that the average and the variance are identical, $\langle n\rangle =\langle n^2\rangle -\langle n\rangle^2=\alpha$. The cumulative distribution for a Poissonian catalog $Q^{{\rm rand}}_n$ is given by the following sum: $$\label{qn} Q^{{\rm rand}}_{n}=\sum_{m=n}^\infty P^{\rm rand}_m=\sum_{m=n}^\infty\frac{\alpha^m}{m!}e^{-\alpha}.$$ Note that $Q^{\rm rand}_n$ depends on the choice of $M$ and $T$, as these determine the average event rate $\alpha$. 
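The tail sum in Eq. (\ref{qn}) can be evaluated without computing large factorials by building up the Poisson terms iteratively; a minimal sketch, using for illustration the $M\geq 7.0$ average of about 15.7 events per 12-month window:

```python
from math import exp

# Cumulative Poisson probability Q_n^rand = P(at least n events in a
# window), given the average rate alpha per window.  Computed as
# 1 - CDF(n-1) with the pmf updated iteratively (no large factorials).
def q_rand(n, alpha):
    p = exp(-alpha)           # P_0 = e^{-alpha}
    cdf = 0.0
    for m in range(n):        # accumulate P_0 ... P_{n-1}
        cdf += p
        p *= alpha / (m + 1)  # P_{m+1} = P_m * alpha / (m+1)
    return 1.0 - cdf

# Example: alpha = 15.7 events per 12-month window (M >= 7.0).
alpha = 15.7
for n in (0, 10, 16, 25):
    print(n, q_rand(n, alpha))
```

By construction `q_rand(0, alpha)` is 1 and the values decrease monotonically with `n`, mirroring the shape of the cumulative-distribution curves discussed next.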
We calculate $Q_n$ for the earthquake data, and compare the data with the expected distribution for a Poissonian catalog $Q^{{\rm rand}}_n$. Note that the cumulative distribution forms the basis of one of the statistical tests used in [*Shearer and Stark*]{} \[2012\], but here we explore many time bin sizes to see if the results depend on the choice of the time window. Figure \[fig-panel9\] (left) shows an example of the cumulative distribution plot for the raw PAGER catalog for $M=7$ and $T=12$ months. The cumulative distribution $Q_n$ quantifies the probability that a time window contains at least $n$ events. Thus, the curves always begin at $Q_0=1$, and decrease as $n$ increases. The final point on each plot corresponds to the maximum number of events observed in the chosen time window. ![image](fig2.pdf){width="39pc"} Figure \[fig-panel9\] (left) shows that the frequency of large earthquakes with $M\geq 7.0$ is roughly Poissonian below the average $\alpha=15.7$ events/year. However, the tail of the cumulative distribution is [*overpopulated*]{} with respect to the Poisson distribution. An overpopulated tail indicates that events are clustered in time. We perform this analysis for higher magnitude thresholds ($M=7.5$, $M=8$) and both longer and shorter time window sizes ($T=1$ month, $T=60$ months), and the results are shown in Fig. \[fig-panel9\] (right). The bins evenly divide the catalog into an integer number of fixed time windows: $T=1$ month corresponds to $112\times12=1344$ bins, and $T=12$ months corresponds to 112 bins. For $T=60$ months, the catalog cannot be evenly divided into 5 year bins. Therefore, it is instead divided into the closest integer number of bins (22), which means that the bin size is actually slightly larger than 60 months. We find that the catalog exhibits an overpopulated tail only for $M=7$. Within the $M=7$ data, the overpopulation is found for all $T$. 
This overpopulation is substantial: the observed tail probabilities can exceed the Poisson prediction by a few orders of magnitude. However, the catalog at $M=7.5$ and $M=8$ agrees very well with the prediction for a Poissonian catalog. This is remarkable, as even with a relatively small number of earthquakes, the data are in agreement with a random distribution. To quantify the statistical significance of the overpopulation, we utilize the normalized variance: $$V=\frac{\langle n^2\rangle-\langle n\rangle^2}{\langle n\rangle}.$$ An observed distribution with a strongly overpopulated tail necessarily has a large variance. Moreover, a value close to unity is expected for a catalog that is random in time, while a value larger than unity indicates clustering. Hence, the normalized variance $V$ is a convenient scalar measure of clustering. The normalized variance is shown as a function of $M$ and $T$ in Fig. \[fig-var\] (left), and confirms that at $M=7$ the catalog is clustered. In this analysis, we compute $V$ with many different bin sizes, ranging from 1 month up to approximately 5 years. In each case, the number of bins is chosen to be an integer so that we always utilize the entire catalog (i.e., the time bin size is not always an integer number of months). ![image](fig3.pdf){width="39pc"} To test whether the clustering observed in the data is statistically significant, we generate $10^6$ synthetic Poissonian catalogs with an average event rate given by $\alpha=1761/112$ events/year, the same as in the PAGER catalog. Each event is assigned a magnitude, drawn randomly from the actual catalog magnitudes with replacement. Using the $10^6$ Poissonian realizations of the earthquake catalog, we compute the average normalized variance $\bar{V}$ and the standard deviation of the normalized variance $\delta V$ as a function of $M$ and $T$. Conveniently, the normalized variance for an ensemble of synthetic random catalogs is approximately described by a normal distribution.
This makes the normalized variance useful for determining the statistical significance of the observed clustering. The normalized variance $V$ determined from the earthquake data can then be expressed as a number of standard deviations $\sigma$ above the ensemble mean, $$\sigma=\frac{V-\bar{V}}{\delta V}.$$ Since $V$ is normally distributed for an ensemble of random catalogs, a value of $V$ that exceeds $\bar{V}$ by several standard deviations indicates statistically significant clustering. The number of standard deviations above the mean $\sigma$ is shown as a function of $M$ and $T$ in Fig. \[fig-var\] (right). In the plot, red indicates statistically significant clustering, and blue indicates a variance consistent with a random catalog. This analysis verifies the results from the cumulative distribution: clustering is observed at low magnitudes ($M<7.3$), while no significant clustering is observed at higher magnitudes ($M\geq7.3$). This observation is robust over time bin sizes ranging from 1 month to 5 years. Note that while the normalized variance is much larger for $M=7$ and $T=60$ months than for $M=7$ and $T=1$ month, in both cases the normalized variance is several standard deviations above the mean. This is because there is more variability in the normalized variance for longer time bins – we find that $\delta V\sim T^{1/2}$, independent of the magnitude threshold. We stress that our analysis thus far relies on the complete earthquake record, which necessarily includes aftershocks. Hence, aftershock removal is not even necessary to demonstrate that the statistics of large earthquakes with magnitude $M>7.3$ show no significant clustering. We repeat the above analysis, with aftershocks removed, to test if the clustering observed for $M<7.3$ is due to aftershocks. The results of the cumulative distribution analysis with aftershocks removed are shown in Fig. \[fig-panel9declust\].
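The variance-based test just described can be sketched end-to-end in standard-library Python (the paper uses $10^6$ synthetic catalogs; the smaller ensemble and Knuth's Poisson sampler below are illustrative implementation choices):

```python
import math
import random
import statistics

def normalized_variance(counts):
    """V = (<n^2> - <n>^2) / <n> over per-window event counts.
    V ~ 1 signals a Poisson process; V > 1 signals temporal clustering."""
    return statistics.pvariance(counts) / statistics.mean(counts)

def poisson_draw(lam, rng):
    """Knuth's Poisson sampler, adequate for the modest rates used here."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def significance(v_obs, alpha, n_bins, n_synth=2000, seed=0):
    """sigma = (V - Vbar)/deltaV: standard deviations by which the observed
    normalized variance exceeds the mean of a synthetic Poisson ensemble."""
    rng = random.Random(seed)
    vs = []
    for _ in range(n_synth):
        counts = [poisson_draw(alpha, rng) for _ in range(n_bins)]
        if any(counts):
            vs.append(normalized_variance(counts))
    return (v_obs - statistics.mean(vs)) / statistics.pstdev(vs)
```

For a genuinely random catalog $\sigma$ fluctuates around zero, while clustered counts drive it to several standard deviations, mirroring Fig. \[fig-var\] (right).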
The catalog now closely follows the cumulative distribution for a Poissonian catalog for $M=7$, $T=1$ month, demonstrating that the clustering at short times and lower magnitudes is due to aftershocks. There is still overpopulation for $M=7$ at longer times. At higher magnitudes, many of the curves appear slightly underpopulated for large numbers of events. This could be due to our conservative aftershock removal procedure, which may have removed some independent events. ![The cumulative frequency distribution for the catalog with aftershocks removed at different threshold magnitudes and time intervals. Shown is $Q_n$ versus $n$, obtained using magnitude thresholds $M=7.0$ (top), $M=7.5$ (middle), and $M=8.0$ (bottom) and time intervals $T=1$ month (left), $T=12$ months (middle), and $T=60$ months (right). The solid lines indicate the expected distribution for a Poissonian catalog.[]{data-label="fig-panel9declust"}](fig4.pdf){width="19pc"} Calculations using synthetic catalogs and the variance measure $V$ confirm these results. Figure \[fig-vardeclust\] shows that the clustering observed for small magnitudes ($M<7.3$) and short times ($T<12$ months) no longer occurs once aftershocks are removed from the catalog. Interestingly, the clustering at longer time intervals ($T>24$ months) persists. Most likely, this clustering is due to the fact that there is a mismatch between the event rates in the first and the second halves of the century, the former being larger by about $20\%$. This can be seen in Fig. \[fig-nt\] (left, top), which shows several spikes in the number of $M\geq7$ events during the first half of the century. If we divide the catalog into two time periods (1900-1955 and 1956-2011), we find that each half of the data is consistent with random earthquake occurrence, with a different rate for each half. 
Because magnitude estimates early in the century are subject to larger uncertainties and may be systematically overestimated \[[*Engdahl and Villaseñor*]{}, 2002\], it is not clear if this clustering is real or due to less reliable data. ![image](fig5.pdf){width="39pc"} Conclusions =========== Our studies using the PAGER earthquake catalog demonstrate that the catalog cannot be distinguished from random earthquake occurrence. This is in agreement with several other recent studies \[[*Michael*]{}, 2011; [*Shearer and Stark*]{}, 2012\]. We do find evidence of clustering for $M=7$ and $T=2$-5 years, which was not identified in the other studies. However, we note that this clustering is due to a large number of events on record early in the 20th Century. For large events ($M>7.3$), the catalog with aftershocks is well described by a process that is random in time. This is because large aftershocks are rare, and there are relatively few large events in the catalog to begin with. Because clustering due to aftershocks, which is known to be present in the data, is not detectable by our statistical tests, it is possible that there is clustering in the catalog at large magnitudes that is obscured by the small amount of data. Future studies will examine the likelihood of identifying clustering in synthetic clustered catalogs given the small amount of data in the earthquake catalog. These findings underscore that we have very little megaquake data, due to limited instrumentation. Increases in the number of seismic and geodetic instruments in recent years have led not only to the improved identification and characterization of large earthquakes, but also to the discovery of novel slip behaviors such as low frequency earthquakes \[[*Katsumata and Kamaya*]{}, 2003\], very low frequency earthquakes \[[*Ito et al.*]{}, 2006\], slow slip events \[[*Dragert et al.*]{}, 2001\], and silent earthquakes \[[*Kawasaki et al.*]{}, 1995\].
Integrating observations of other types of events with earthquake data may prove to be the key to identifying causal links between events, providing a comprehensive picture of the interactions that may underlie the physics of great earthquakes. The USGS PAGER catalog is available on the web at http://earthquake.usgs.gov/earthquakes/pager/, and the Global CMT catalog is available at http://www.globalcmt.org/. We thank Terry Wallace, Thorne Lay, Charles Ammon, and Joan Gomberg for useful comments. This research has been supported by DOE grant DE-AC52-06NA25396 and institutional (LDRD) funding at Los Alamos. Allen, T. I., K. Marano, P. S. Earle, and D. J. Wald (2009), PAGER-CAT: A composite earthquake catalog for calibrating global fatality models, [*Seismol. Res. Lett.*]{}, *80*, 50-56. Ammon, C. J., R. C. Aster, T. Lay, and D. W. Simpson (2011), The Tohoku Earthquake and a 110-year Spatiotemporal Record of Global Seismic Strain Release, [*Seismol. Res. Lett.*]{}, *82*, 455. Brodsky, E. E. (2009), The 2004-2008 Worldwide Superswarm, [*Eos. Trans. AGU*]{}, Fall Meet. Suppl., *90*, S53B. Bufe, C. G., and D. M. Perkins (2005), Evidence for a Global Seismic-Moment Release Sequence, [*Bull. Seismol. Soc. Am.*]{}, *95*, 833-843. Bufe, C. G., and D. M. Perkins (2011), The 2011 Tohoku Earthquake: Resumption of Temporal Clustering of Earth’s Megaquakes, [*Seismol. Res. Lett.*]{}, *82*, 455. Dragert, H., K. Wang, and T. S. James (2001), A Silent Slip Event on the Deeper Cascadia Subduction Interface, [*Science*]{} [*292*]{}, 5521, 1525-1528. Eberhart-Phillips, D., et al. (2003), The 2002 Denali fault earthquake, Alaska: A large-magnitude, slip-partitioned event, [*Science*]{}, [*300*]{}, 1113-1118. Engdahl, E. R., and A. Villaseñor (2002), Global seismicity: 1900-1999, [*International Handbook of Earthquake and Engineering Seismology, Volume 81A*]{}, ISBN:0-12-440652-1, 665-690. Freed, A. M. (2005), Earthquake triggering by static, dynamic, and postseismic stress transfer, [*Ann.
Rev. Earth Planet. Sci.*]{} [*33*]{}, 335-367, doi:10.1146/annurev.earth.33.092203.122505. Gardner, J. K., and L. Knopoff (1974), Is the sequence of earthquakes in Southern California, with aftershocks removed, Poissonian? [*Bull. Seismol. Soc. Am.*]{}, [*64*]{}, 1363-1367. Gomberg, J., P. Bodin, K. Larson, and H. Dragert (2004), Earthquake nucleation by transient deformations caused by the M = 7.9 Denali, Alaska, earthquake, [*Nature*]{}, *427*, 621-624. Gutenberg, B., and C. F. Richter (1954), [*Seismicity of the Earth and Associated Phenomena*]{}, 2nd ed., Princeton University Press, Princeton. Hill, D. P., et al. (1993), Remote seismicity triggered by the M7.5 Landers, California earthquake of June 28, 1992, [*Science*]{}, [*260*]{}, 1617-1623. Ito, Y., K. Obara, K. Shiomi, S. Sekine, and H. Hirose (2006), Slow earthquakes coincident with episodic tremors and slow slip events, [*Science*]{}, [*26*]{}, 503-506. Katsumata, A., and N. Kamaya (2003), Low-frequency continuous tremor around the Moho discontinuity away from volcanoes in the southwest Japan, [*Geophys. Res. Lett.*]{}, [*30*]{}, 1020, doi:10.1029/2002GL015981. Kawasaki, I., et al. (1995), The 1992 Sanriku-oki, Japan, ultra-slow earthquake, [*J. Phys. Earth*]{}, [*43*]{}, 105-116. Kerr, R. A. (2011), More Megaquakes on the Way? That Depends on Your Statistics, [*Science*]{}, *332*, 411. Michael, A. J. (2011), Random Variability Explains Apparent Global Clustering of Large Earthquakes, [*Geophys. Res. Lett.*]{}, [*38*]{}, L21301, doi:10.1029/2011GL049443. Pacheco, J. F., and L. R. Sykes (1992), Seismic moment catalog of large shallow earthquakes, 1900 to 1989, [*Bull. Seismol. Soc. Am.*]{}, [*82*]{}, 1306-1349. Peng, Z., and J. Gomberg (2010), An integrated perspective of the continuum between earthquakes and slow-slip phenomena, [*Nat. Geosci.*]{}, *3*, 599-607. Shearer, P. M., and P. B. Stark (2012), The global risk of big earthquakes has not recently increased, [*Proc. Nat. Acad.
Sci.*]{}, [*109*]{}(3), 717-721. Simons, M., et al. (2011), The 2011 Magnitude 9.0 Tohoku-Oki Earthquake: Mosaicking the Megathrust from Seconds to Centuries, [*Science*]{}, [*332*]{}, 1421-1425. Parsons, T., and A. A. Velasco (2011), Absence of remotely triggered large earthquakes beyond the mainshock region, [*Nat. Geosci.*]{}, *4*, 312-316. Wells, D. L., and K. J. Coppersmith (1994), New empirical relationships among magnitude, rupture length, rupture width, rupture area, and surface displacement, [*Bull. Seismol. Soc. Am.*]{}, *84*, 1053-1069.
--- abstract: 'Evidence for the presence of high energy magnetic excitations in overdoped La$_{2-x}$Sr$_x$CuO$_4$ (LSCO) has raised questions regarding the role of spin-fluctuations in the pairing mechanism. If they remain present in overdoped LSCO, why does $T_c$ decrease in this doping regime? Here, using results for the dynamic spin susceptibility ${\rm Im}\chi(\bm{q},\omega)$ obtained from a determinantal quantum Monte Carlo (DQMC) calculation for the Hubbard model we address this question. We find that while high energy magnetic excitations persist in the overdoped regime, they lack the momentum to scatter pairs between the anti-nodal regions. It is the decrease in the spectral weight at large momentum transfer, not observed by resonant inelastic X-ray scattering (RIXS), which leads to a reduction in the $d$-wave spin-fluctuation pairing strength.' author: - 'Edwin W. Huang' - 'Douglas J. Scalapino' - 'Thomas A. Maier' - Brian Moritz - 'Thomas P. Devereaux' title: 'Decrease of d-wave pairing strength in spite of the persistence of magnetic excitations in the overdoped Hubbard model' --- Recent resonant inelastic X-ray scattering (RIXS) studies of La$_{2-x}$Sr$_x$CuO$_4$ (LSCO) have found that high energy magnetic excitations near the antiferromagnetic zone boundary are present across a wide range of doping in the LSCO phase diagram [@Dean2013; @Wakimoto2015; @Meyers2017]. In particular, while these excitations gradually soften and broaden in the overdoped region, they remain even as the superconducting transition temperature $T_c$ decreases. This raises questions regarding the role of spin fluctuations in the pairing mechanism [@Scalapino2012]. Specifically, if these magnetic excitations persist in the overdoped LSCO, what is responsible for the destruction of high temperature superconductivity? 
Here we discuss results for the dynamic spin susceptibility ${\rm Im}\chi(\bm{q},\omega)$, obtained from determinantal quantum Monte Carlo (DQMC) calculations for the doped 2D Hubbard model [@White1989; @Jia2014; @Kung2015]. We find that, similar to the RIXS studies, high-energy magnetic excitations persist into the overdoped regime. However, at large momentum transfers beyond the range observed by RIXS [@Ament2011], the spin-fluctuation spectral weight is reduced and hardened. We discuss the doping dependence of magnetic excitations for different momenta $\bm{q}$, distinguishing regions that promote $d$-wave pairing (near $\bm{q}=(\pi,\pi)$), regions that are indifferent to pairing (along the AF zone boundary), and regions that are detrimental to pairing (near the zone center). The overall reduction of strength as well as hardening of magnetic spectral weight near $(\pi,\pi)$ leads to a decrease in the strength of the $d$-wave pair coupling consistent with the suppression of superconductivity in the overdoped regime. The Hamiltonian for the Hubbard model appropriate for the hole doped cuprates has the usual near-neighbor hopping $t$, onsite interaction $U$, and a negative next-near-neighbor hopping $t'$: $$H = -t \sum_{\langle i j \rangle \sigma} c_{i\sigma}^\dagger c_{j\sigma} -t' \sum_{\langle\langle i j \rangle\rangle \sigma} c_{i\sigma}^\dagger c_{j\sigma} -\mu \sum_{i \sigma} n_{i\sigma} + U \sum_i n_{i\uparrow} n_{i\downarrow} \label{eq:1}$$ Here we will measure energies in units of $t$ and set $t'=-0.25$ and $U=6.5$. The chemical potential $\mu$ in Eq. (\[eq:1\]) is used to fix the doping. The DQMC calculations were carried out for an $8\times8$ lattice with 40 imaginary time slices of width $\Delta\tau = 0.1$, for an inverse temperature of $\beta = 4.0$. For each doping level, 200 independently seeded Markov chains are run, each with $10^6$ full spacetime sweeps for measurements.
![The spin-fluctuation spectral weight ${\rm Im}\chi(\bm{q},\omega)$ versus $\omega$ at a temperature $T=0.25t$ for several $\bm{q}$ values showing its evolution with doping. The high energy magnetic excitations at the BZ boundary $\bm{q}=(\pi,0)$ and part way out along the anti-nodal direction with $\bm{q}=(\pi/2,\pi/2)$ remain as the system is doped. However the spectral weight associated with the magnetic excitations at larger anti-nodal momentum transfers $\bm{q}=(3\pi/4,3\pi/4)$ and $(\pi,\pi)$ is reduced and shifted to higher frequencies.\[fig:1\]](fig1.pdf){width="7.5cm"} The imaginary time spin susceptibility is calculated directly from DQMC as $$\chi(\bm{q},\tau) = \sum_{\bm{r}} e^{-i \bm{q} \cdot \bm{r}} \frac{1}{N} \sum_{\bm{r'}} \left\langle S_z(\bm{r}+\bm{r'},\tau) S_z(\bm{r'},0)\right\rangle \label{eq:2}$$ where $S_z(\bm{r}) = \frac{1}{2}(n_{\bm{r}\uparrow} - n_{\bm{r}\downarrow})$ is the $z$ component of the spin at site $\bm{r}$. The real frequency susceptibility is related to the imaginary time susceptibility by $$\chi(\bm{q},\tau) = \int_0^\infty \frac{d\omega}{\pi} \frac{e^{-\tau\omega} + e^{-(\beta-\tau)\omega}} {1 - e^{-\beta\omega}} {\rm Im} \chi(\bm{q},\omega). \label{eq:chi}$$ Since inverting Eq. \[eq:chi\] is numerically ill-posed, we use Maximum Entropy analytic continuation [@Jarrell1996] to extract ${\rm Im}\chi(\bm{q},\omega)$ from the DQMC data. As described in Ref. [@Jarrell1996; @Gunnarsson2010], a model function based on the first moments of the data is used for the analytic continuation. The spin fluctuation spectral weight ${\rm Im}\chi(\bm{q},\omega)$ for some selected $\bm{q}$ values is plotted versus $\omega$ in Fig. \[fig:1\] for different dopings. For the half-filled system, the $\bm{q}=(\pi,\pi)$ response continues to increase and drop lower in frequency as $T$ decreases. 
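The forward map in Eq. (\[eq:chi\]) — from a trial spectrum ${\rm Im}\chi(\bm{q},\omega)$ to imaginary-time data $\chi(\bm{q},\tau)$ — can be sketched numerically as follows (trapezoid rule on a uniform grid; the grid and the broadened trial spectrum are illustrative choices, not the DQMC output):

```python
import math

def chi_tau(tau, beta, omegas, im_chi):
    """chi(q,tau) = (1/pi) * integral dw K(tau,w) Im chi(q,w), with the
    bosonic kernel K = (exp(-tau*w) + exp(-(beta-tau)*w)) / (1 - exp(-beta*w)).
    omegas: uniform grid of positive frequencies; im_chi: spectrum samples."""
    dw = omegas[1] - omegas[0]
    total = 0.0
    for i, (w, s) in enumerate(zip(omegas, im_chi)):
        kern = (math.exp(-tau * w) + math.exp(-(beta - tau) * w)) \
               / (1.0 - math.exp(-beta * w))
        weight = 0.5 * dw if i in (0, len(omegas) - 1) else dw
        total += weight * kern * s / math.pi
    return total

# Illustrative trial spectrum: a broadened peak near w = 1 (units of t)
beta = 4.0
omegas = [0.05 * i for i in range(1, 201)]          # avoid w = 0
spectrum = [w / ((w * w - 1.0) ** 2 + 0.25) for w in omegas]
```

One readily checks the bosonic symmetry $\chi(\tau)=\chi(\beta-\tau)$ of this kernel, a constraint that any valid analytic continuation must respect.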
However, for the doped system, the spectral weight is well developed at this temperature and the magnetic spin-fluctuation response evolves smoothly as the doping is increased. For large momentum transfers near $(\pi,\pi)$, the hole doping both reduces and shifts the spin-fluctuation spectral weight to higher frequencies. However, similar to the RIXS data, for smaller anti-nodal momentum transfers $\bm{q}=(\pi/2,\pi/2)$ or for momentum transfers along the nodal direction $\bm{q}=(\pi, 0)$, the peak in ${\rm Im}\chi(\bm{q},\omega)$ found in the DQMC calculations remains. To further illustrate the evolution of the calculated spin-fluctuation spectrum with doping, Fig. \[fig:2\] shows a plot of the peak in ${\rm Im}\chi(\bm{q},\omega)$ for different dopings versus $\bm{q}$ along the nodal and anti-nodal directions from zone center. The ends of the vertical bars mark the energies where ${\rm Im}\chi(\bm{q},\omega)$ has dropped to half of its maximum value. The unshaded region denotes the momentum transfer regime observed in the RIXS experiments. ![The peak in the spin-fluctuation spectral weight ${\rm Im}\chi(\bm{q},\omega)$ versus $\bm{q}$ for different dopings. Here to the right of $(0,0)$, $\bm{q}$ moves along the diagonal and to the left from $(0,0)$ to $(\pi,0)$. The shaded region at large momentum transfer marks a region which is not measured by the RIXS experiments of Refs [@Dean2013; @Meyers2017].\[fig:2\]](fig2.pdf){width="7.5cm"} From the results shown in Fig. \[fig:1\] and \[fig:2\], one can see that while doping leads to changes in the overall magnetic excitation spectrum, the AF excitations accessible to RIXS remain relatively unchanged with doping. There is a clear similarity between the experimental RIXS data for LSCO and the DQMC results. 
However, the region outside of the reach of transition metal $L-$edge RIXS near $(\pi,\pi)$, due to the overall scale of photon momenta, changes considerably and, as we will discuss, has an impact on the strength of $d$-wave pairing in the Hubbard model. ![(a) The strength $\lambda_d$ of the d-wave pairing interaction given by Eq. (\[eq:lambda\_d\]) , versus doping at $T=0.25t$ (b) The interaction vertex $\Gamma_d$, Eq. (\[eq:gammad\]), versus doping at $T=0.25t$.\[fig:3\]](fig3.pdf){width="7.5cm"} A measure of the strength of the spin-fluctuation $d$-wave pairing interaction in weak coupling [@Maier2007] is given by $$\lambda_d=-\frac{3}{2}U^2\left\langle\phi_d(\bm{k})\int^\infty_0\frac{d\omega}{\pi} \frac{{\rm Im}\chi(\bm{k}-\bm{k'},\omega)}{\omega}\phi_d(\bm{k'})\right\rangle_{\rm FS}\Big/\left\langle\phi^2_d(\bm{k})\right\rangle_{\rm FS} \label{eq:lambda_d}$$ Here $\phi_d(\bm{k})=(\cos k_x-\cos k_y)$ and the $\bm{k}$ averages are taken over a region of band energies $\pm0.5t$ around the Fermi surface. A plot of $\lambda_d$ versus doping is shown in Fig. \[fig:3\]a. Here one sees that this coupling strength decreases with doping. 
This same behavior is observed in a direct calculation of the correlated and uncorrelated $d$-wave pair-field susceptibilities [@White1989; @Moreo1991] and the corresponding interaction vertex, defined respectively as $$P_d = \int_0^\beta d\tau \frac{1}{N^2} \sum_{\bm{k},\bm{k'}} \phi_d(\bm{k}) \left\langle c_{-\bm{k}\downarrow}(\tau) c_{\bm{k}\uparrow}(\tau) c_{\bm{k'}\uparrow}^{\dagger}(0) c_{-\bm{k'}\downarrow}^{\dagger}(0) \right\rangle \phi_d(\bm{k'}) \label{eq:pd}$$ $$\overline{P}_d = \int_0^\beta d\tau \frac{1}{N^2} \sum_{\bm{k}} \phi_d^2(\bm{k}) \left\langle c_{-\bm{k}\downarrow}(\tau) c_{-\bm{k}\downarrow}^{\dagger}(0) \right\rangle \left\langle c_{\bm{k}\uparrow}(\tau) c_{\bm{k}\uparrow}^{\dagger}(0) \right\rangle \label{eq:pdbar}$$ $$\Gamma_d = \frac{1}{P_d} - \frac{1}{\overline{P}_d} \label{eq:gammad}$$ The interaction vertex $\Gamma_d$ provides another gauge of the $d$-wave pairing strength, with negative values indicating an attractive interaction. As plotted in Fig. \[fig:3\]b, this measure confirms the decrease of the pairing interaction upon doping similar to the behavior of $\lambda_d$ seen in Fig. \[fig:3\]a. The decrease of both $\lambda_d$ and $-\Gamma_d$ reflects the reduction and hardening of the spin-fluctuation spectral weight in the large momentum $\bm{q} \sim (\pi,\pi)$ transfer region marked by the shaded regions of Fig. \[fig:2\]. ![Plot of $F(\bm{q})$, Eq. (\[eq:F\]), normalized to its absolute value at $\bm{q} = (0,0)$, versus $(q_x,q_y)$ over the first Brillouin zone using the same $\phi_d(\bm{k})$ gap functions and cut-off around the Fermi surface as in Fig. \[fig:3\]. Momentum transfers near $(\pi,\pi)$ (red shaded region) lead to a positive contribution from the spin-fluctuations to the coupling strength $\lambda_d$. 
Spin-fluctuations with momentum transfers near $(0,0)$ (blue shaded region) give a negative contribution.\[fig:4\] ](fig4.pdf){width="7.5cm"} The high energy magnetic excitations seen by RIXS at the edge of the BZ in the anti-nodal direction as well as those seen along the nodal direction with $\bm{q} = (\pi/2,\pi/2)$ lack the momentum transfer to scatter pairs between the anti-nodal regions. This is illustrated in Fig. \[fig:4\], which shows a plot of the convolved $d$-wave form factor $$F(\bm{q})=-\left\langle\phi_d(\bm{k})\phi_d(\bm{k}+\bm{q})\right\rangle_{\rm FS} \label{eq:F}$$ for $\langle n \rangle = 0.9$. The pairing strength $\lambda_d$ given by Eq. (\[eq:lambda\_d\]) is proportional to a weighted average of ${\rm Re}\chi(\bm{q},\omega=0)$ with respect to $F(\bm{q})$. As shown in Fig. \[fig:4\], the spectral weight of the spin-fluctuations at large momentum transfers gives rise to the $d$-wave pairing, while that at small momentum transfers suppresses the pairing. The intermediate region, where the RIXS experiments find magnetic excitations, plays a marginal role, as earlier suggested in Ref. [@Meyers2017]. Regions near $(\pi,\pi)$, which are accessible via polarized inelastic neutron scattering, provide the dominant contribution to the strength of the d-wave pairing interaction. A closer inspection of the momentum regions accessible near the zone center by RIXS would also be useful in understanding the decrease of pairing strength. The DQMC results reported here and elsewhere are consistent with the weakening of spectral intensity and hardening of the spin excitations observed near the magnetic zone center $(\pi,\pi)$ in those measurements [@Wakimoto2004; @Wakimoto2005; @Wakimoto2007; @Lipscombe2007; @Wakimoto2015]. Thus we conclude that the evolution of the spin spectrum of excitations with doping in the Hubbard model is consistent with the existing data in the cuprates and can account for the reduction of $d$-wave pairing strength with doping.
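The form-factor average in Eq. (\[eq:F\]) can be sketched on a discretized Brillouin zone (the band parameters follow the text, $t'=-0.25t$; the chemical potential, the $64\times64$ grid, and the $\pm0.5t$ shell are illustrative choices, not tuned to reproduce $\langle n\rangle=0.9$):

```python
import math

def F_q(qx, qy, mu=-0.8, t=1.0, tp=-0.25, N=64, cut=0.5):
    """F(q) = -< phi_d(k) phi_d(k+q) >_FS, averaged over momenta whose
    band energy lies within +-cut*t of the Fermi surface of the t-t' band."""
    def eps(kx, ky):
        return (-2 * t * (math.cos(kx) + math.cos(ky))
                - 4 * tp * math.cos(kx) * math.cos(ky) - mu)
    def phi(kx, ky):
        return math.cos(kx) - math.cos(ky)
    acc, cnt = 0.0, 0
    for i in range(N):
        for j in range(N):
            kx, ky = 2 * math.pi * i / N, 2 * math.pi * j / N
            if abs(eps(kx, ky)) < cut * t:   # keep the Fermi-surface shell
                acc += phi(kx, ky) * phi(kx + qx, ky + qy)
                cnt += 1
    return -acc / cnt
```

Since $\phi_d(\bm{k}+(\pi,\pi))=-\phi_d(\bm{k})$ identically, $F(\pi,\pi)=\langle\phi_d^2\rangle_{\rm FS}>0$ while $F(0,0)=-\langle\phi_d^2\rangle_{\rm FS}<0$, reproducing the attractive and repulsive regions of Fig. \[fig:4\].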
Acknowledgments {#acknowledgments .unnumbered} =============== We acknowledge helpful discussions with Christian Mendl, Yvonne Kung, and Chunjing Jia. Computational work was performed on the Sherlock cluster at Stanford University. This work was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering, under Contract No. DE-AC02-76SF00515. A portion of this research was conducted at the Center for Nanophase Materials Sciences, which is a DOE Office of Science User Facility. [****,  ()](\doibase 10.1038/nmat3723) [****,  ()](\doibase 10.1103/PhysRevB.91.184513) [****,  ()](\doibase 10.1103/PhysRevB.95.075139) [****,  ()](\doibase 10.1103/RevModPhys.84.1383) [****,  ()](\doibase 10.1103/PhysRevB.40.506) [****,  ()](\doibase 10.1038/ncomms4314) [****,  ()](\doibase 10.1103/PhysRevB.92.195108) [****,  ()](\doibase 10.1103/RevModPhys.83.705) [****,  ()](\doibase 10.1016/0370-1573(95)00074-7) [****,  ()](\doibase 10.1103/PhysRevB.81.155107) [****,  ()](\doibase 10.1103/PhysRevB.75.134519) [****,  ()](\doibase 10.1103/PhysRevB.43.8211) [****,  ()](\doibase 10.1103/PhysRevLett.92.217004) [****,  ()](\doibase 10.1103/PhysRevB.72.064521) [****,  ()](\doibase 10.1103/PhysRevLett.98.247003) [****,  ()](\doibase 10.1103/PhysRevLett.99.067002)
--- abstract: 'Signal processing tasks as fundamental as sampling, reconstruction, minimum mean-square error interpolation and prediction can be viewed under the prism of reproducing kernel Hilbert spaces. Endowing this vantage point with contemporary advances in sparsity-aware modeling and processing, promotes the nonparametric basis pursuit advocated in this paper as the overarching framework for the confluence of kernel-based learning (KBL) approaches leveraging sparse linear regression, nuclear-norm regularization, and dictionary learning. The novel sparse KBL toolbox goes beyond translating sparse parametric approaches to their nonparametric counterparts, to incorporate new possibilities such as multi-kernel selection and matrix smoothing. The impact of sparse KBL to signal processing applications is illustrated through test cases from cognitive radio sensing, microarray data imputation, and network traffic prediction.' author: - | Juan Andrés Bazerque and Georgios B. Giannakis\ Dept. of ECE and Digital Technology Center\ Univ. of Minnesota, Minneapolis, MN 55455, USA title: | Nonparametric Basis Pursuit\ via Sparse Kernel-based Learning$^\dag$[^1] --- Introduction ============ Reproducing kernel Hilbert spaces (RKHSs) provide an orderly analytical framework for nonparametric regression, with the optimal kernel-based function estimate emerging as the solution of a regularized variational problem [@W90]. The pivotal role of RKHS is further appreciated through its connections to “workhorse” signal processing tasks, such as the Nyquist-Shannon sampling and reconstruction result that involves sinc kernels [@NS12]. Alternatively, spline kernels replace sinc kernels, when smoothness rather than bandlimitedness is to be present in the underlying function space [@U90]. Kernel-based function estimation can be also seen from a Bayesian viewpoint. 
RKHS and linear minimum mean-square error (LMMSE) function estimators coincide when the pertinent covariance matrix equals the kernel Gram matrix. This equivalence has been leveraged in the context of field estimation, where spatial LMMSE estimation, referred to as Kriging, is tantamount to two-dimensional RKHS interpolation [@C91]. Finally, RKHS-based function estimators can be linked with Gaussian processes (GPs), obtained upon defining their covariances via kernels [@RW06]. Yet another seemingly unrelated, but increasingly popular theme in contemporary statistical learning and signal processing, is that of matrix completion [@F02], where data organized in a matrix can have missing entries due to, e.g., limitations in the acquisition process. This article builds on the assertion that imputing missing entries amounts to interpolation, as in classical sampling theory, but with the low-rank constraint replacing that of bandlimitedness. From this point of view, RKHS interpolation emerges as the prudent framework for matrix completion that allows effective incorporation of a priori information via kernels [@ABEV09], including sparsity attributes. Recent advances in sparse signal recovery and regression motivate a sparse kernel-based learning (KBL) redux, which is the purpose and core of the present paper. Building blocks of sparse signal processing include the (group) least-absolute shrinkage and selection operator (Lasso) and its weighted versions [@HTF09], compressive sampling [@CT05], and nuclear norm regularization [@F02]. The common denominator behind these operators is the sparsity on a signal’s support that the $\ell_1$-norm regularizer induces. Exploiting sparsity for KBL leads to several innovations regarding the selection of multiple kernels [@MP05; @KY10], additive modeling [@RLLW09; @LZ06], collaborative filtering [@ABEV09], matrix and tensor completion via dictionary learning [@BMG12], as well as nonparametric basis selection [@BMG11].
In this context, the main contribution of this paper is a *nonparametric* basis pursuit (NBP) tool, unifying and advancing a number of *sparse* KBL approaches. Constrained by space limitations, a sample of applications stemming from such an encompassing analytical tool will be also delineated. Sparse KBL and its various forms contribute to computer vision [@SNPC12; @VB02], cognitive radio sensing [@BMG11], management of user preferences [@ABEV09], bioinformatics [@SL11], econometrics [@LZ06; @RLLW09], and forecasting of electric prices, load, and renewables (e.g., wind speed) [@kvlg2013isgt], to name a few. The remainder of the paper is organized as follows. Section II reviews the theory of RKHS in connection with GPs, describing the Representer Theorem and the kernel trick, and presenting the Nyquist-Shannon Theorem (NST) as an example of KBL. Section III deals with sparse KBL including sparse additive models (SpAMs) and multiple kernel learning (MKL) as examples of additive nonparametric models. NBP is introduced in Section IV, with a basis expansion model capturing the general framework for sparse KBL. Blind versions of NBP for matrix completion and dictionary learning are developed in Sections V and VI. Finally, Section VII presents numerical tests using real and simulated data, including RF spectrum measurements, expression levels in yeast, and network traffic loads. Conclusions are drawn in Section VIII, while most technical details are deferred to the Appendix. KBL Preliminaries {#KBL-prelims} ================= In this section, basic tools and approaches are reviewed to place known schemes for nonparametric (function) estimation under a common denominator. 
RKHS and the Representer Theorem -------------------------------- In the context of reproducing kernel Hilbert spaces (RKHS) [@W90], nonparametric estimation of a function $f:{\mathcal X}\rightarrow \mathbb R$ defined over a measurable space ${\mathcal X}$ is performed via interpolation of $N$ training points $\{(x_1,z_1),\ldots,(x_N,z_N)\},$ where $x_n\in {\mathcal X},$ and $z_n=f(x_n)+e_n\in \mathbb R$. For this purpose, a kernel function $k:{\mathcal X}\times{\mathcal X}\rightarrow\mathbb R$ selected to be *symmetric* and *positive definite,* specifies a linear space of interpolating functions $f(x)$ given by $${\mathcal H_{\mathcal X}}:=\left\{f(x)=\sum_{n=1}^\infty \alpha_n k(x_n,x):\ \alpha_n\in\mathbb R,\ x_n\in{\mathcal X},\ n\in\mathbb N\right\}. \nonumber$$ For many choices of $k(\cdot,\cdot)$, ${\mathcal H_{\mathcal X}}$ is exhaustive with respect to (w.r.t.) families of functions obeying certain regularity conditions. The spline kernel, for example, generates the Sobolev space of all low-curvature functions [@D77]. Likewise, the sinc kernel gives rise to the space of bandlimited functions. Space ${\mathcal H_{\mathcal X}}$ becomes a Hilbert space when equipped with the inner product $<f,f'>_{{\mathcal H_{\mathcal X}}}:=\sum_{n,n'=1}^\infty \alpha_n\alpha'_{n'}k(x_n,x'_{n'})$, and the associated norm is $\|f\|_{{\mathcal H_{\mathcal X}}}:=\sqrt{<f,f>_{{\mathcal H_{\mathcal X}}}}$.
A key result in this context is the so-termed Representer Theorem [@W90], which asserts that based on $\{(x_n,z_n)\}_{n=1}^N$, the optimal interpolator in ${\mathcal H_{\mathcal X}}$, in the sense of $$\vspace*{-0.2cm} \hat f=\arg\min_{f\in {\mathcal H_{\mathcal X}}} {\sum_{n=1}^N}(z_{n}-f(x_n))^2+\mu \|f\|^2_{{\mathcal H_{\mathcal X}}} \label{representer}$$ admits the finite-dimensional representation $$\hat f(x)=\sum_{n=1}^N\alpha_nk(x_n,x). \label{kernel_expansion}$$ This result is remarkable in its simplicity: functions in the space ${\mathcal H_{\mathcal X}}$ are composed of a countable but arbitrarily large number of kernels, whereas $\hat f$ is a combination of just a *finite* number of kernels centered at the training points. In addition, the regularizing term $\mu \|f\|^2_{{\mathcal H_{\mathcal X}}}$ controls smoothness, and thus reduces overfitting. After substituting into , the coefficients $\bm\alpha^T:=[\alpha_1,\ldots,\alpha_N]$ minimizing the regularized least-squares (LS) cost in are given by $\bm\alpha=(\mathbf K+\mu\mathbf I)^{-1}{\mathbf z}$, upon recognizing that $\|f\|_{{\mathcal H_{\mathcal X}}}^2:=\bm\alpha^T{\mathbf K}\bm\alpha$, and defining ${\mathbf z}^T:=[z_1,\ldots,z_N]$ as well as the kernel-dependent Gram matrix ${\mathbf K}\in\mathbb R^{N\times N}$ with entries $\mathbf K_{n,n'}:=k(x_n,x_{n'})$ ($\cdot^T$ stands for transposition). **Remark 1.** The finite-dimensional expansion also solves problems with more general fitting costs and regularizing terms.
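As a concrete illustration, the closed-form coefficients $\bm\alpha=(\mathbf K+\mu\mathbf I)^{-1}\mathbf z$ and the finite kernel expansion can be sketched in a few lines. The Gaussian kernel, its bandwidth, the data, and the regularization weight below are illustrative assumptions, not choices made in the text.

```python
import numpy as np

# Minimal sketch of the Representer Theorem solution: the RKHS
# interpolator has coefficients alpha = (K + mu*I)^{-1} z and
# f_hat(x) = sum_n alpha_n k(x_n, x).
def k(x, xp, h=0.1):
    # Gaussian kernel; an illustrative (symmetric, positive definite) choice
    return np.exp(-(x - xp) ** 2 / (2 * h ** 2))

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 20)                                  # points x_n
z = np.sin(2 * np.pi * x_train) + 0.1 * rng.standard_normal(20)  # samples z_n

K = k(x_train[:, None], x_train[None, :])        # Gram matrix K_{nn'}
mu = 1e-2
alpha = np.linalg.solve(K + mu * np.eye(20), z)  # regularized LS coefficients

def f_hat(x):
    """Finite kernel expansion around the training points."""
    return k(x, x_train) @ alpha

print(f_hat(0.25))   # close to sin(pi/2) = 1
```

Solving the linear system with `np.linalg.solve` avoids forming the explicit inverse; for large $N$, a Cholesky factorization of $\mathbf K+\mu\mathbf I$ is the standard choice.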
In its general form, the Representer Theorem asserts that the finite kernel expansion remains the solution of $$\vspace*{-0.2cm} \hat f=\arg\min_{f\in {\mathcal H_{\mathcal X}}} {\sum_{n=1}^N}\ell(z_{n},f(x_n))+\mu \Omega(\|f\|_{{\mathcal H_{\mathcal X}}}) \label{general_representer}$$ where the loss function $\ell(z_n, f(x_n))$ replacing the LS cost in can be selected to promote robustness (e.g., using the absolute value instead of the squared error); to serve application-dependent objectives (e.g., the Hinge loss in classification applications); or to accommodate non-Gaussian noise models when viewed from a Bayesian angle. On the other hand, the regularization term can be chosen as any increasing function $\Omega$ of the norm $\|f\|_{{\mathcal H_{\mathcal X}}},$ which will turn out to be crucial for introducing the notion of sparsity, as described in the ensuing sections. LMMSE, Kriging, and GPs ----------------------- Instead of the deterministic treatment of the previous subsection, the unknown $f(x)$ can be considered as a random process. The KBL estimate offered by the Representer Theorem has been linked with the LMMSE-based estimator of random fields $f(x)$, under the term Kriging [@C91]. To predict the value $\zeta=f(x)$ at an exploration point $x$ via Kriging, the predictor $\hat f(x)$ is modeled as a linear combination of noisy samples $z_n:=f(x_n)+\eta(x_n)$ at measurement points $\{x_n\}_{n=1}^N$; that is, $$\label{lmmse} \hat f(x)={\sum_{n=1}^N}\hat\beta_n z_n = \mathbf z^T\bm{\hat \beta}$$ where $\bm{\hat \beta}^T:=[\hat\beta_1,\ldots,\hat\beta_N]$ collects the expansion coefficients, and $\mathbf z^T:=[z_1,\ldots,z_N]$ collects the data. The MSE criterion is adopted to find the optimal $\bm{\hat \beta}:=\arg\min_{\bm\beta} E[f(x)-\mathbf z^T \bm \beta]^2$.
Solving the latter yields $\bm{\hat\beta}=\mathbf R_{{\mathbf z}{\mathbf z}}^{-1}\mathbf r_{{\mathbf z}\zeta}$, where $\mathbf R_{{\mathbf z}{\mathbf z}}:=E[{\mathbf z}{\mathbf z}^T]$ and $\mathbf r_{{\mathbf z}\zeta}:=E[\mathbf z f(x)]$. If $\eta(x)$ is zero-mean white noise with power $\sigma_\eta^2$, then $\mathbf R_{{\mathbf z}{\mathbf z}}$ and $\mathbf r_{\mathbf z\zeta}$ can be expressed in terms of the unobserved $\bm\zeta^T:=[f(x_1),\ldots,f(x_N)]$ as $\mathbf R_{{\mathbf z}{\mathbf z}}=\mathbf R_{\bm \zeta\bm\zeta}+\sigma_{\eta}^2\mathbf I$, where $\mathbf R_{\bm\zeta\bm\zeta}:=E[\bm\zeta\bm\zeta^T]$, and $\mathbf r_{\mathbf z\zeta}=\mathbf r_{\bm\zeta\zeta}$, with $\mathbf r_{\bm\zeta\zeta}:=E[\bm\zeta f(x)]$. Hence, the LMMSE estimate in takes the form $$\label{kriging} \hat f(x)=\mathbf z^T(\mathbf R_{\bm\zeta\bm\zeta}+\sigma_{\eta}^2\mathbf I)^{-1}\mathbf r_{\bm\zeta\zeta}={\sum_{n=1}^N}\alpha_n r(x,x_n)$$ where $\bm\alpha^T:={\mathbf z}^T(\mathbf R_{\bm\zeta\bm\zeta}+\sigma_{\eta}^2\mathbf I)^{-1},$ and the $n$-th entry of $\mathbf r_{\bm\zeta \zeta}$, denoted by $r(x_n, x):= E[f(x)f(x_n)]$, is indeed a function of the exploration point $x$ and the measurement point $x_n$. Comparing the Kriging estimate with the RKHS interpolator reveals that the two coincide when the kernel in is chosen equal to the covariance function $r(x,x')$ in . The linearity assumption in is unnecessary when $f(x)$ and $\eta(x)$ are modeled as zero-mean GPs [@RW06]. GPs are random processes in which instances of the field at arbitrary points are jointly Gaussian. Zero-mean GPs are specified by $\textrm{cov}(x,x'):=E[f(x)f(x')]$, which determines the covariance matrix of any vector comprising instances of the field, and thus its specific zero-mean Gaussian distribution.
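This coincidence is easy to verify numerically: with the covariance playing the role of the kernel and $\mu=\sigma_\eta^2$, the Kriging estimate and the RKHS estimate agree exactly. A sketch under illustrative choices of covariance, data, and exploration point:

```python
import numpy as np

# Check that the Kriging/LMMSE estimate z^T (R + sigma^2 I)^{-1} r
# coincides with the RKHS estimate alpha^T r, alpha = (K + mu I)^{-1} z,
# when r(x,x') = k(x,x') and mu = sigma_eta^2.
def k(x, xp, h=0.2):
    # illustrative covariance/kernel choice
    return np.exp(-(x - xp) ** 2 / (2 * h ** 2))

rng = np.random.default_rng(1)
xn = np.linspace(0, 1, 15)                       # measurement points
z = np.cos(3 * xn) + 0.05 * rng.standard_normal(15)
sigma2 = 1e-2                                    # noise power sigma_eta^2

R_zeta = k(xn[:, None], xn[None, :])             # R_{zeta zeta}
x = 0.37                                         # exploration point
r_vec = k(xn, x)                                 # r_{zeta zeta}(x)

# Kriging / LMMSE estimate
f_kriging = z @ np.linalg.solve(R_zeta + sigma2 * np.eye(15), r_vec)

# RKHS estimate with K = R_zeta and mu = sigma2
alpha = np.linalg.solve(R_zeta + sigma2 * np.eye(15), z)
f_rkhs = alpha @ r_vec

print(np.isclose(f_kriging, f_rkhs))   # True: the two estimators coincide
```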
In particular, the vector $\bm{\bar \zeta}^T:=[f(x), f(x_1), \ldots,f(x_N)]$ collecting the field at the exploration and measurement points is Gaussian, and so is the vector $\mathbf{\bar z}^T:=[f(x), f(x_1)+\eta(x_1), \ldots,f(x_N)+\eta(x_N)]=[\zeta,{\mathbf z}^T].$ Hence, the MMSE estimator, given by the expectation of $f(x)$ conditioned on ${\mathbf z}$, reduces to [@K01] $$\label{conditional_mean} \hat f(x)=E(f(x)|\mathbf z)={\mathbf z}^T \mathbf R_{{\mathbf z}{\mathbf z}}^{-1} \mathbf r_{{\mathbf z}\zeta} ={\sum_{n=1}^N}\alpha_n \textrm{cov}(x_n,x).$$ By comparing with , one deduces that the MMSE estimator of a GP coincides with the LMMSE estimator, hence with the RKHS estimator, when $\textrm{cov}(x,x')=k(x,x')$. The kernel trick ---------------- Analogous to the spectral decomposition of matrices, Mercer’s Theorem establishes that if the symmetric positive definite kernel is square-integrable, it admits a possibly infinite eigenfunction decomposition $k(x,x')=\sum_{i=1}^{\infty} \lambda_i e_i(x)e_i(x')$ [@W90], with $\langle e_i(x),e_{i'}(x)\rangle_{{\mathcal H_{\mathcal X}}}=\delta_{i-i'}$, where $\delta_{i}$ stands for Kronecker’s delta. Using the weighted eigenfunctions $\phi_i(x):=\sqrt{\lambda_i}e_i(x),\ i\in \mathbb N,$ a point $x\in {\mathcal X}$ can be mapped to a vector (sequence) $\bm\phi\in\mathbb R^{\infty}$ such that $\phi_i=\phi_i(x),\ i \in \mathbb N.$ This mapping interprets a kernel as an inner product in $\mathbb R^\infty$, since for two points $x,x'\in{\mathcal X},$ $k(x,x')=\sum_{i=1}^{\infty}\phi_i(x)\phi_i(x'):=\bm\phi^T(x)\bm\phi(x')$. Such an inner-product interpretation forms the basis for the *“kernel trick.”* The kernel trick allows approaches that depend on inner products of functions (given by infinite kernel expansions) to be recast and implemented using finite-dimensional covariance (kernel) matrices. A simple demonstration of this valuable property can be provided through kernel-based ridge regression.
Starting from the standard ridge estimator $\bm{\hat\beta}:=\arg\min_{\bm\beta\in \mathbb R^D} {\sum_{n=1}^N}(z_n-{\bm\phi}_n^T\bm \beta)^2+\mu\|\bm\beta\|^2$ for ${\bm\phi}_n \in \mathbb R^D$, and $\bm\Phi:=[\bm\phi_1,\ldots,\bm\phi_N]$, it is possible to rewrite and solve $\bm{\hat\beta}= \arg\min_{\bm\beta\in \mathbb R^D}\|{\mathbf z}-\bm\Phi^T\bm\beta\|^2+\mu\|\bm\beta\|^2=(\bm\Phi\bm\Phi^T+\mu\mathbf I)^{-1}\bm\Phi{\mathbf z}$. After $\bm{\hat\beta}$ is obtained in the training phase, it can be used for prediction of an ensuing $\hat z_{N+1}=\bm\phi_{N+1}^T\bm{\hat\beta}$ given $\bm\phi_{N+1}$. By using the matrix inversion lemma, $\hat z_{N+1}$ can be written as $\hat z_{N+1}=(1/\mu) {\bm\phi}_{N+1}^T\bm\Phi{\mathbf z}-(1/\mu){\bm\phi}_{N+1}^T\bm\Phi(\mu\bm I+\bm\Phi^T\bm\Phi)^{-1}\bm\Phi^T\bm\Phi{\mathbf z}$. Now, if $\bm\phi_n=\bm\phi(x_n)$ with $D=\infty$ is constructed from $x_n\in {\mathcal X}$ using eigenfunctions $\{\phi_i(x_n)\}_{i=1}^{\infty}$, then ${\bm\phi}^T_{N+1}\bm\Phi =\mathbf k^T(x_{N+1}):=[k(x_{N+1},x_1),\ldots,k(x_{N+1},x_N) ],$ and $\bm\Phi^T\bm\Phi={\mathbf K}$, which yields $$\begin{aligned} \nonumber \hat z_{N+1}&=(1/\mu) \mathbf k^T(x_{N+1})[\mathbf I- (\mu\bm I+{\mathbf K})^{-1}{\mathbf K}]{\mathbf z}\\ &=\mathbf k^T(x_{N+1})(\mu\bm I+{\mathbf K})^{-1}{\mathbf z}\label{kernel_trick}\end{aligned}$$ coinciding with , , and with the solution of . Expressing a linear predictor in terms of inner products only is instrumental for mapping it into its kernel-based version. Although the mapping entails the eigenfunctions $\{\phi_i(x)\}$, these are not explicitly present in , which is given solely in terms of $k(x,x')$. This is crucial since ${\bm\phi}$ can be infinite dimensional, which would render the method computationally intractable, and, more importantly, the explicit form of $\phi_i(x)$ may not be available. Use of the kernel trick was demonstrated in the context of ridge regression.
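For a finite-dimensional feature map, the primal/kernel equivalence can be verified directly. The sketch below checks that the primal ridge prediction and the kernelized prediction agree, with dimensions and data as illustrative assumptions:

```python
import numpy as np

# Kernel trick for ridge regression with a finite feature map (D < inf):
# the primal solution beta = (Phi Phi^T + mu I)^{-1} Phi z and the
# kernelized predictor z_hat = k^T (mu I + K)^{-1} z give the same
# prediction, by the matrix inversion (push-through) lemma.
rng = np.random.default_rng(2)
D, N = 5, 30
Phi = rng.standard_normal((D, N))        # columns phi_n = phi(x_n)
phi_new = rng.standard_normal(D)         # phi_{N+1}
z = rng.standard_normal(N)
mu = 0.1

beta = np.linalg.solve(Phi @ Phi.T + mu * np.eye(D), Phi @ z)
z_primal = phi_new @ beta                # primal prediction

K = Phi.T @ Phi                          # Gram matrix of inner products
k_vec = Phi.T @ phi_new                  # k(x_{N+1}, x_n) = phi_{N+1}^T phi_n
z_kernel = k_vec @ np.linalg.solve(mu * np.eye(N) + K, z)

print(np.isclose(z_primal, z_kernel))    # True
```

The kernel form trades a $D\times D$ system for an $N\times N$ one, which is what makes $D=\infty$ tractable.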
However, the trick can be used in any vectorial regression or classification method whose result can be expressed in terms of inner products only. One such example is offered by support vector machines, which find a kernel-based version of the optimal linear classifier in the sense of minimizing Vapnik’s $\epsilon$-insensitive or Hinge loss functions, and can be shown equivalent to the Lasso [@G98]. In a nutshell, the kernel trick provides a means of designing KBL algorithms, both for nonparametric function estimation \[cf. \] and for classification. KBL vis à vis Nyquist-Shannon Theorem ------------------------------------- Kernels can clearly be viewed as interpolating bases \[cf. \]. This viewpoint can be further appreciated if one considers the family of bandlimited functions ${\mathcal B_{\pi}}:= \{f\in \mathcal L^2({\mathcal X}):\ \int f(x)e^{-i\omega x}dx =0,\ \forall |\omega|>\pi\}$, where $\mathcal L^2$ denotes the class of square-integrable functions defined over ${\mathcal X}=\mathbb R$ (e.g., continuous-time, finite-power signals). The family ${\mathcal B_{\pi}}$ constitutes a linear space. Moreover, any $f\in{\mathcal B_{\pi}}$ can be generated as the linear combination (span) of ${\textrm{sinc}}$ functions; that is, $f(x)=\sum_{n\in \mathbb Z} f(n){\textrm{sinc}}(x-n)$. This is the cornerstone of signal processing, namely the NST for sampling and reconstruction, but it can also be viewed under the lens of RKHS with $k(x,x')={\textrm{sinc}}(x-x')$ as a reproducing kernel [@NS12]. The following properties (which are proved in the Appendix) elaborate further on this connection. **P1.** The ${\textrm{sinc}}$-kernel Gram matrix $\mathbf K\in\mathbb R^{N\times N}$ satisfies $\mathbf K\succeq \mathbf 0$.\ **P2.** The ${\textrm{sinc}}$ kernel decomposes over orthonormal eigenfunctions $\{\phi_{n}(x)={\textrm{sinc}}(x-n),\ n\in \mathbb Z\}$.\ **P3.** The RKHS norm is \[prop3\]$\|f\|_{{\mathcal H_{\mathcal X}}}^2=\int f^2(x) dx$.
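Property P1 can also be checked numerically. A short sketch, assuming scattered sample points and NumPy's normalized sinc convention $\mathrm{sinc}(x)=\sin(\pi x)/(\pi x)$, which matches bandlimitation to $|\omega|\leq\pi$:

```python
import numpy as np

# Numerical check of P1: the sinc-kernel Gram matrix with entries
# K_{nn'} = sinc(x_n - x_{n'}) is positive semidefinite.
# The sample points are illustrative.
rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 10, 25))           # scattered sample points
K = np.sinc(x[:, None] - x[None, :])          # Gram matrix, np.sinc = sin(pi x)/(pi x)

eigvals = np.linalg.eigvalsh(K)
print(eigvals.min() >= -1e-8)                 # True: K is PSD up to round-off
```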
P1 states that ${\textrm{sinc}}(x-x')$ qualifies as a kernel, while P2 characterizes the eigenfunctions used in the kernel trick, and P3 shows that the RKHS norm is the restriction of the $\mathcal L^2$ norm to ${\mathcal B_{\pi}}$. P1-P3 establish that the space of bandlimited functions ${\mathcal B_{\pi}}$ is indeed an RKHS. Any $f\in{\mathcal B_{\pi}}$ can thus be decomposed as a countable combination of eigenfunctions, where the coefficients and eigenfunctions obey the NST. Consequently, existence of eigenfunctions $\{\phi_n(x)\}$ spanning ${\mathcal B_{\pi}}$ is a direct consequence of ${\mathcal B_{\pi}}$ being an RKHS, and does not require the NST unless an explicit form for $\phi_n(x)$ is desired. Finally, strict adherence to the NST requires an infinite number of samples to reconstruct $f\in{\mathcal B_{\pi}}$. Alternatively, the Representer Theorem fits $f\in{\mathcal B_{\pi}}$ to a finite set of (possibly noisy) samples by regularizing the power of $f$. Sparse additive nonparametric modeling ====================================== The account of sparse KBL methods begins with the SpAM and MKL approaches. Both model the function to be learned as a sparse sum of nonparametric components, and both rely on the group Lasso to find it. The additive models considered in this section will naturally lend themselves to the general model for NBP introduced in Section IV, and used henceforth. SpAMs for High-Dimensional Models --------------------------------- Additive function models offer a generalization of linear regression to the nonparametric setup, on the premise of dealing with *the curse of dimensionality,* which is inherent to learning from high dimensional data [@HTF09]. Consider learning a multivariate function $f:{\mathcal X}\to \mathbb R$ defined over the Cartesian product ${\mathcal X}:={\mathcal X}_1\otimes \ldots\otimes{\mathcal X}_P$ of measurable spaces ${\mathcal X}_i$.
Let ${\mathbf x}^T:=[x_1,\ldots,x_P]$ denote a point in $\mathcal X$, $k_i$ the kernel defined over ${\mathcal X}_i\times {\mathcal X}_i$, and $\mathcal H_i$ its associated RKHS. Although $f(\mathbf x)$ can be interpolated from data via after substituting $\mathbf x$ for $x$, the fidelity of is severely degraded in high dimensions. Indeed, the accuracy of depends on the availability of nearby points ${\mathbf x}_n$, where the function is fit to the (possibly noisy) data $z_n$. But proximity of points ${\mathbf x}_n$ in high dimensions is challenged by the curse of dimensionality, demanding an excessively large dataset. For instance, consider positioning $N$ datapoints randomly in the hypercube $[0,1]^P$, repeatedly for $P$ growing unbounded and $N$ constant. Then $\lim_{P\to\infty}\min_{n\neq n'} E \|{\mathbf x}_n-{\mathbf x}_{n'}\|_\infty=1$; that is, the expected (sup-norm) distance between any two points approaches the side of the hypercube [@HTF09]. To overcome this problem, an additional modeling assumption is well motivated, namely constraining $f({\mathbf x})$ to the family of separable functions of the form $$f({\mathbf x})=\sum_{i=1}^P c_i(x_i)\label{additive}$$ with $c_i\in\mathcal H_i$ depending only on the $i$-th entry of ${\mathbf x}$, as in e.g., linear regression models $f_{\textrm{linear}}({\mathbf x}):=\sum_{i=1}^P \beta_i x_i$. With $f({\mathbf x})$ separable as in $\eqref{additive}$, the interpolation task is split into $P$ one-dimensional problems that are not affected by the curse of dimensionality. The additive form in is also amenable to subset selection, which yields a SpAM. As in sparse linear regression, SpAMs involve functions $f$ in that can be expressed using only a few entries of ${\mathbf x}$.
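The hypercube statement lends itself to a quick Monte Carlo check. The sketch below uses the sup-norm (so that the limiting distance is the side length 1), with $N$ and $P$ as illustrative assumptions:

```python
import numpy as np

# Monte Carlo illustration of the curse of dimensionality: with N fixed
# and P large, randomly placed points in [0,1]^P all end up roughly one
# side-length apart in the sup-norm.
rng = np.random.default_rng(4)
N, P = 10, 2000
X = rng.uniform(0, 1, (N, P))

# smallest pairwise sup-norm distance among the N points
dmin = min(np.max(np.abs(X[n] - X[m]))
           for n in range(N) for m in range(n + 1, N))
print(dmin)   # close to 1, the side of the hypercube
```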
Those can be learned using a variational version of the Lasso given by [@RLLW09] $$\label{nplasso} \hat f =\arg\min_{f\in\mathcal F_P}\frac{1}{2}\sum_{n=1}^N (z_n-f({\mathbf x}_n))^2+\mu\sum_{i=1}^P \|c_i\|_{\mathcal H_i}$$ where $\mathcal F_P:=\{f:{\mathcal X}\to\mathbb R:\ f({\mathbf x})=\sum_{i=1}^Pc_i(x_i)\}$. With $x_{ni}$ denoting the $i$th entry of $\mathbf x_n$, the Representer Theorem can be applied per component $c_i(x_i)$ in , yielding kernel expansions $\hat c_i(x_i)=\sum_{n=1}^N \gamma_{ni} k_i(x_{ni},x_i)$ with scalar coefficients $\{\gamma_{ni},\ i=1,\ldots,P, \ n=1,\ldots,N\}$. The fact that yields a SpAM is demonstrated by substituting these expansions back into and solving for ${{\bm \gamma}_{i}}^T:=[\gamma_{1i},\ldots,\gamma_{Ni}]$, to obtain $$\label{glasso_gamma} \{\hat{{\bm \gamma}_{i}}\}_{i=1}^P =\arg\hspace{-0.1cm}\min_{\{{{\bm \gamma}_{i}}\}_{i=1}^P}\frac{1}{2}\left\|\mathbf z -\textstyle{\sum_{i=1}^P} \mathbf K_i{{\bm \gamma}_{i}}\right\|_2^2+\mu\sum_{i=1}^P \|{{\bm \gamma}_{i}}\|_{\mathbf K_i}$$ where $\mathbf K_i$ is the Gram matrix associated with kernel $k_i$, and $\|\cdot \|_{\mathbf K_i}$ denotes the weighted $\ell_2$-norm $\|\bm\gamma_i \|_{\mathbf K_i}:=(\bm\gamma_i^T\mathbf K_i\bm\gamma_i)^{1/2}$. Nonparametric Lasso ------------------- Problem constitutes a weighted version of the group Lasso formulation for sparse linear regression. Its solution can be found either via block coordinate descent (BCD) [@RLLW09], or by substituting $\bm \gamma'_i=\mathbf K_i^{1/2} \bm\gamma_i$ and applying the alternating-direction method of multipliers (ADMM) [@BMG11], with convergence guaranteed by its convexity and the separable structure of its non-differentiable term [@TS09]. In any case, the group Lasso regularizes sub-vectors ${{\bm \gamma}_{i}}$ separately, effecting group-sparsity in the estimates; that is, some of the vectors $\hat{{\bm \gamma}_{i}}$ in end up being identically zero.
To gain intuition on this, can be rewritten using the change of variables $\mathbf K_i^{1/2}{{\bm \gamma}_{i}}=t_i\mathbf u_i$, with $t_i\geq 0$ and $\|\mathbf u_i\|=1$. It will be argued that if $\mu$ exceeds a threshold, then the optimal $t_i$ and thus $\hat{{\bm \gamma}_{i}}$ will be null. Focusing on the minimization of w.r.t. a particular sub-vector ${{\bm \gamma}_{i}}$, as in a BCD algorithm, the substitute variables $t_i$ and $\mathbf u_i$ should minimize $$\label{glasso_t} \frac{1}{2}\left\|\mathbf{ z}_i - \mathbf K_i^{1/2}t_i\mathbf u_i \right\|_2^2+\mu t_i$$ where $\mathbf{ z}_i:=\mathbf{ z}-\sum_{j\neq i} \mathbf K_j\bm\gamma_j.$ Minimizing over $t_i$ is a convex univariate problem whose solution lies either at the border of the constraint, or, at a stationary point; that is, $$\label{solving_ti} t_i=\max\left\{0,\frac{\mathbf z_i^T\mathbf K_i^{1/2}\mathbf u_i-\mu}{\mathbf u_i^T \mathbf K_i \mathbf u_i}\right\}.$$ The Cauchy-Schwarz inequality implies that $\mathbf z_i^T\mathbf K_i^{1/2}\mathbf u_i \leq \|\mathbf K_i^{1/2}\mathbf z_i\|$ holds for any $\mathbf u_i$ with $\|\mathbf u_i\|=1$. Hence, it follows from that if $\mu\geq \|\mathbf K_i^{1/2}\mathbf z_i\|$, then $t_i=0$, and thus $\bm {{\bm \gamma}_{i}}=\mathbf 0$. The sparsifying effect of on the additive model is now revealed. If $\mu$ is selected large enough, some of the optimal sub-vectors $\hat{{\bm \gamma}_{i}}$ will be null, and the corresponding functions $\hat c_i(x_i)=\sum_{n=1}^N \hat\gamma_{ni} k(x_{ni},x_i)$ will be identically zero in . Thus, estimation via provides a nonparametric counterpart of Lasso, offering the flexibility of selecting the most informative component-function regressors in the additive model. The separable structure postulated in facilitates subset selection in the nonparametric setup, and mitigates the problem of interpolating scattered data in high dimensions. 
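The zeroing threshold just derived can be verified numerically. The sketch below solves the group subproblem by proximal gradient (block soft-thresholding) and checks that the group is nulled exactly when $\mu\geq\|\mathbf K^{1/2}\mathbf z_i\|$; the Gram matrix and data are illustrative assumptions:

```python
import numpy as np

# Group subproblem in the substituted variable w = K^{1/2} gamma_i:
#   min_w (1/2)||z_i - K^{1/2} w||^2 + mu ||w||
# Its minimizer is w = 0 exactly when mu >= ||K^{1/2} z_i||.
def prox_group(v, tau):
    """Block soft-thresholding: shrink the whole vector toward zero."""
    nv = np.linalg.norm(v)
    return np.zeros_like(v) if nv <= tau else (1 - tau / nv) * v

def solve_group(A, z, mu, iters=3000):
    w = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the quadratic part
    for _ in range(iters):
        w = prox_group(w - step * A.T @ (A @ w - z), step * mu)
    return w

rng = np.random.default_rng(5)
K = rng.standard_normal((8, 8)); K = K @ K.T + np.eye(8)  # valid Gram matrix
ev, V = np.linalg.eigh(K)
Khalf = V @ np.diag(np.sqrt(ev)) @ V.T                    # symmetric K^{1/2}
z = rng.standard_normal(8)
thr = np.linalg.norm(Khalf @ z)                           # threshold ||K^{1/2} z||

w_zero = solve_group(Khalf, z, mu=1.1 * thr)  # above threshold -> null group
w_live = solve_group(Khalf, z, mu=0.1 * thr)  # below threshold -> active group
print(np.allclose(w_zero, 0), np.linalg.norm(w_live) > 0)  # True True
```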
However, such a model reduction may render inaccurate, in which case extra components depending on two or more variables can be added, turning into the ANOVA model [@LZ06]. Multi-Kernel Learning --------------------- Specifying the kernel that “shapes” ${\mathcal H_{\mathcal X}}$, and thus judiciously determines $\hat f$ in , is a prerequisite for KBL. Different candidate kernels $k_1,\ldots,k_P$ would produce different function estimates. Convex combinations can also be employed in , since elements of the convex hull $\mathcal K :=\{k={\sum_{i=1}^P}a_i k_i, \ a_i\geq 0,\ {\sum_{i=1}^P}a_i=1\}$ preserve the defining properties of kernels. A data-driven strategy to select “the best” $k\in \mathcal K$ is to incorporate the kernel as a variable in , that is [@KY10] $$\vspace*{-0.2cm} \hat f=\arg\min_{k\in \mathcal K , f\in {\mathcal H_{\mathcal X}}^k} {\sum_{n=1}^N}(z_{n}-f(x_n))^2+\mu \|f\|_{{\mathcal H_{\mathcal X}}^k} \label{representerK}$$ where the notation ${\mathcal H_{\mathcal X}}^k$ emphasizes dependence on $k$. Then, the following Lemma brings MKL to the ambit of sparse additive nonparametric models. \[lemma1\] Let $\{k_1,\ldots,k_P\}$ be a set of kernels and $k$ an element of their convex hull $\mathcal K$. Denote by ${\mathcal H_{i}}$ and ${\mathcal H_{\mathcal X}}^k$ the RKHSs corresponding to $k_i$ and $k$, respectively, and by ${\mathcal H_{\mathcal X}}$ the direct sum ${\mathcal H_{\mathcal X}}:= \mathcal H_1\oplus\ldots\oplus\mathcal H_P.$ It then holds that: 1. ${\mathcal H_{\mathcal X}}^k={\mathcal H_{\mathcal X}},\ \forall k\in\mathcal K$; and 2. $\forall\ f,\ \inf\{\|f\|_{{\mathcal H_{\mathcal X}}^k}:\ k\in \mathcal K\}=\min\{{\sum_{i=1}^P}\|c_i\|_{{\mathcal H_{i}}}:\ f={\sum_{i=1}^P}c_i,\ c_i\in {\mathcal H_{i}}\}$.
According to Lemma \[lemma1\], ${\mathcal H_{\mathcal X}}$ can replace ${\mathcal H_{\mathcal X}}^k$ in , rendering it equivalent to $$\begin{aligned} \hat f=&\arg\min_{f\in {\mathcal H_{\mathcal X}}} {\sum_{n=1}^N}(z_{n}-f(x_n))^2+\mu {\sum_{i=1}^P}\|c_i\|_{{\mathcal H_{i}}} \label{mkl}\\ \nonumber&{\textrm{s. to }}\{f={\sum_{i=1}^P}c_i,\ c_i\in {\mathcal H_{i}}, \ {\mathcal H_{\mathcal X}}:= \mathcal H_1\oplus\ldots\oplus\mathcal H_P \}.\end{aligned}$$ MKL as in resembles , differing in that the components $c_i(x)$ in depend on the same variable $x$. Taking this difference into account, is reducible to and thus solvable via BCD or ADMM, after substituting $k_i(x_n,x)$ for $k_i(x_{ni},x_i)$. On the other hand, a more general case of MKL is presented in [@MP05], where $\mathcal K$ is the convex hull of an infinite and possibly uncountable family of kernels. An example of MKL applied to wireless communications is offered in Section \[sec:applications\], where two different kernels are employed for estimating path-loss and shadowing propagation effects in a cognitive radio sensing paradigm. In the ensuing section, basis functions depending on a second variable $y$ will be incorporated to broaden the scope of the additive models just described. Nonparametric basis pursuit =========================== Consider a function $f:{\mathcal X}\times{\mathcal Y}\to\mathbb R$ over the Cartesian product of spaces ${\mathcal X}$ and ${\mathcal Y}$ with associated RKHSs ${\mathcal H_{\mathcal X}}$ and ${\mathcal H_{\mathcal Y}}$, respectively. Let $f$ abide by the bilinear expansion form $$f(x,y)=\sum_{i=1}^P c_i(x)b_i(y)\label{bilinear}$$ where $b_i:{\mathcal Y}\to\mathbb R$ can be viewed as bases, and $c_i:{\mathcal X}\to\mathbb R$ as expansion coefficient functions. Given a finite number of training data, learning $\{c_i,b_i\}$ under sparsity constraints constitutes the goal of the NBP approaches developed in the following sections.
The first method for sparse KBL of $f$ in is related to a *nonparametric* counterpart of basis pursuit, with the goal of fitting the function $f(x,y)$ to data, where $\{b_i\}$ are prescribed and $\{c_i\}$s are to be learned. The designer’s degree of confidence in the modeling assumptions is key to deciding whether $\{b_i\}$s should be prescribed or learned from data. If the prescribed $\{b_i\}$s are unreliable, model will be inaccurate and the performance of KBL will suffer. But neglecting the prior knowledge conveyed by $\{b_i\}$s may also be damaging. Parametric basis pursuit [@CDS98] points toward addressing this tradeoff by offering a compromise. A functional dependence $z=f(y)+e$ between input $y$ and output $z$ is modeled in [@CDS98] with an overcomplete set of bases $\{b_i(y)\}$ (a.k.a. regressors) as $$\begin{aligned} z={\sum_{i=1}^P}c_i b_i(y) + e,\ ~~~e\sim\mathcal N(0,\sigma^2).\end{aligned}$$ Certainly, leveraging an overcomplete set of bases $\{b_i(y)\}$ can accommodate uncertainty. The practical merits of basis pursuit, however, hinge on its capability to learn the few $\{b_i\}$s that “best” explain the given data. The crux of NBP, on the other hand, is to fit $f(x,y)$ with a basis expansion over the $y$ domain, but learn its dependence on $x$ through nonparametric means. Model comes in handy for this purpose, when $\{b_i(y)\}_{i=1}^P$ is a generally overcomplete collection of prescribed bases. With $\{b_i(y)\}_{i=1}^P$ known, $\{c_i(x)\}_{i=1}^P$ need to be estimated, and a kernel-based strategy can be adopted to this end. Accordingly, the optimal function $\hat f(x,y)$ is searched over the family $\mathcal F_b:=\{f(x,y)=\sum_{i=1}^Pc_i(x)b_i(y)\}$, which constitutes the feasible set for the NBP-tailored nonparametric Lasso \[cf.
\] $$\label{nbp} \hat f =\arg\min_{f\in \mathcal F_b}\sum_{n=1}^N (z_n-f(x_n,y_n))^2+\mu\sum_{i=1}^P \|c_i\|_{{\mathcal H_{\mathcal X}}}.$$ The Representer Theorem in its general form can be applied recursively to minimize w.r.t. each $c_i(x)$ at a time, rendering $\hat f$ expressible in terms of the kernel expansion as $\hat f(x,y)=\sum_{i=1}^P\sum_{n=1}^N\gamma_{in}k(x_n,x)b_i(y)$, where coefficients ${{\bm \gamma}_{i}}^T:=[\gamma_{i1},\ldots,\gamma_{iN}]$ are learned from data $\mathbf z^T:=[z_1,\ldots,z_N]$ via the group Lasso \[cf. \] $$\label{group_lasso_nbp} \min_{\{ {{\bm \gamma}_{i}}\in \mathbb R^N \}_{i=1}^P}\left\|\mathbf z-\textstyle{{\sum_{i=1}^P}} \mathbf K_i {{\bm \gamma}_{i}}\right\|^2+\mu\sum_{i=1}^P \|{{\bm \gamma}_{i}}\|_{\mathbf K}$$ with $\mathbf K_i:=\textrm{{Diag}}[b_i(y_1),\ldots,b_i(y_N)]\mathbf K$. As argued in Section III, the group Lasso in effects group-sparsity in the subvectors $\{{{\bm \gamma}_{i}}\}_{i=1}^{P}$. This property, inherited by , provides the capability of selecting bases in the nonparametric setup. Indeed, by zeroing ${{\bm \gamma}_{i}}$ the corresponding coefficient function $c_i(x)={\sum_{n=1}^N}\gamma_{in}k(x_n,x)$ is driven to zero, and correspondingly $b_i(y)$ drops from the expansion . **Remark 2.** A single kernel $k_{{\mathcal X}}$ and associated RKHS ${\mathcal H_{\mathcal X}}$ can be used for all components $c_i(x)$ in , since the summands in are differentiated through the bases. Specifically, for a common $\mathbf K$, a different $b_i(y)$ per coefficient $c_i(x)$ yields a distinct diagonal matrix $\textrm{{Diag}}[b_i(y_1),\ldots,b_i(y_N)]$, defining an individual ${\mathbf K}_i$ in that renders vector ${{\bm \gamma}_{i}}$ identifiable. This is a particular characteristic of , in contrast with and Lemma \[lemma1\], which are designed for, and require, multiple kernels.
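A compact sketch of this construction: group designs $\mathbf K_i=\textrm{Diag}[b_i(y_1),\ldots,b_i(y_N)]\mathbf K$ built from a common kernel, with the resulting group Lasso solved by proximal gradient after the substitution $\mathbf w_i=\mathbf K^{1/2}\bm\gamma_i$. The cosine bases, Gaussian kernel, synthetic data, and $\mu$ are illustrative assumptions; only the first basis is active in the simulated data.

```python
import numpy as np

# NBP as group Lasso: only b_1 generates the data, so the estimator
# should concentrate its energy on the first group.
rng = np.random.default_rng(6)
N, P = 40, 4
x = np.linspace(0, 1, N)
y = rng.uniform(0, 1, N)
bases = [np.cos(np.pi * i * y) for i in range(1, P + 1)]   # b_i(y_n)

K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.02)         # common Gram matrix
ev, V = np.linalg.eigh(K)
Khalf = V @ np.diag(np.sqrt(np.clip(ev, 0, None))) @ V.T   # symmetric K^{1/2}
A = [np.diag(b) @ Khalf for b in bases]                    # group designs

z = np.sin(2 * np.pi * x) * bases[0] + 0.01 * rng.standard_normal(N)

def prox(v, tau):   # block soft-thresholding
    nv = np.linalg.norm(v)
    return np.zeros_like(v) if nv <= tau else (1 - tau / nv) * v

W = np.zeros((P, N))
mu = 1.0
L = np.linalg.norm(np.hstack(A), 2) ** 2                   # Lipschitz constant
for _ in range(3000):
    r = sum(A[i] @ W[i] for i in range(P)) - z             # common residual
    for i in range(P):
        W[i] = prox(W[i] - (A[i].T @ r) / L, mu / L)

norms = np.linalg.norm(W, axis=1)
print(norms)   # the first (active) group should carry most of the energy
```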
**Remark 3.** The different sparse kernel-based approaches presented so far, namely SpAMs, MKL, and NBP, should not be viewed as competing but rather as complementary choices. Multiple kernels can be used in basis pursuit, and a separable model for $c_i(x)$ may be due in high dimensions. An NBP-MKL hybrid applied to spectrum cartography illustrates this point in Section \[sec:applications\], where bases are utilized for the frequency domain $\mathcal Y$. Blind NBP for matrix and tensor completion ========================================== A kernel-based matrix completion scheme will be developed in this section using a *blind* version of NBP, in which the bases $\{b_i\}$ will not be prescribed, but will be learned together with the coefficient functions $\{c_i\}$. The matrix completion task entails imputation of missing entries of a data matrix $\mathbf Z\in\mathbb R^{M\times N}$. Entries of an index matrix $\mathbf W\in\{0,1\}^{M\times N}$ specify whether datum $z_{mn}$ is available ($w_{mn}=1$), or missing ($w_{mn}=0$). Low rank of ${\mathbf Z}$ is a popular attribute that relates missing to available data, thus rendering the imputation task feasible. Low-rank matrix imputation is achieved by solving $$\begin{aligned} \label{low-rank} \hat{\mathbf Z}=\arg\min_{\mathbf A\in \mathbb R^{M\times N}}&\frac{1}{2}\|(\mathbf Z-\mathbf A)\odot\mathbf W\|_F^2 \textrm{ s. to rank}(\mathbf A)\leq P\end{aligned}$$ where $\odot$ stands for the Hadamard (element-wise) product. The low-rank constraint corresponds to an upper bound on the number of nonzero singular values of matrix $\mathbf A$, as given by its $\ell_0$-norm. Specifically, if $\mathbf s^T:=[s_1,\ldots,s_{\min\{M,N\}}]$ denotes the vector of singular values of $\mathbf A$, and the cardinality $|\{s_i\neq 0, \ i=1,\ldots, \min\{M,N\}\}|:=\|\mathbf s\|_0$ defines its $\ell_0$-norm, then the ball of radius $P$, namely $\|\mathbf s\|_0\leq P$, can replace $\textrm{rank}(\mathbf A)\leq P$ in .
The feasible set $\|\mathbf s\|_0\leq P$ is not convex because $\|\mathbf s\|_0$ is not a proper norm (it is not absolutely homogeneous), and solving requires a combinatorial search for the nonzero entries of $\mathbf s$. A convex relaxation is thus well motivated. If the $\ell_0$-norm is replaced by the $\ell_1$-norm, the corresponding ball $\|\mathbf s\|_1\leq P$ becomes the convex hull of the original feasible set. As the singular values of $\mathbf A$ are non-negative by definition, it follows that $\|\mathbf s\|_1=\sum_{i=1}^{\min\{M,N\}}s_i$. Since the sum of singular values equals the dual of the spectral ($\ell_2$-induced) norm of $\mathbf A$ [@BV04 p.637], $\|\mathbf s\|_1$ defines a norm over the matrix $\mathbf A$ itself, namely the nuclear norm of $\mathbf A$, denoted by $\|\mathbf A\|_*$. Upon substituting $\|\mathbf A\|_*$ for the rank, is further transformed to its Lagrangian form by placing the constraint in the objective as a regularization term, i.e., $$\begin{aligned} \label{nuclear norm} \hat{\mathbf Z}=\arg\min_{\mathbf A\in \mathbb R^{M\times N}}&\frac{1}{2}\|(\mathbf Z-\mathbf A)\odot\mathbf W\|_F^2+\mu\|\mathbf A\|_*.\end{aligned}$$ The next step towards kernel-based matrix completion relies on an alternative definition of $\|\mathbf A\|_*$. Consider bilinear factorizations of matrix $\mathbf A={\mathbf C}{\mathbf B}^T$ with $\mathbf B\in\mathbb R^{N\times P}$ and $\mathbf C\in\mathbb R^{M\times P}$, in which the constraint $\textrm{rank}(\mathbf A)\leq P$ is implicit. The nuclear norm of $\mathbf A$ can be redefined as (see e.g., [@MMG12]) $$\label{infimum} \|\mathbf A\|_*=\inf_{ \mathbf A=\mathbf C\mathbf B^T }{\frac{1}{2}(\|\mathbf B\|_F^2+\|\mathbf C\|_F^2)}.$$ The infimum in is attained by the singular value decomposition of $\mathbf A$.
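The variational characterization can be checked numerically: the SVD-based factors attain the nuclear norm exactly. A short sketch (the matrix is illustrative):

```python
import numpy as np

# Check the variational form of the nuclear norm:
# ||A||_* = inf over A = C B^T of (||B||_F^2 + ||C||_F^2)/2,
# attained at B = V Sigma^{1/2}, C = U Sigma^{1/2}.
rng = np.random.default_rng(7)
A = rng.standard_normal((6, 4))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
nuc = s.sum()                                   # ||A||_* = sum of singular values

B = Vt.T @ np.diag(np.sqrt(s))                  # N x P factor
C = U @ np.diag(np.sqrt(s))                     # M x P factor
bound = 0.5 * (np.linalg.norm(B, 'fro') ** 2 + np.linalg.norm(C, 'fro') ** 2)

print(np.allclose(C @ B.T, A), np.isclose(bound, nuc))  # True True
```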
Specifically, if $\mathbf A =\mathbf U \mathbf \Sigma\mathbf V^T $ with $\mathbf U$ and $\mathbf V$ unitary and $\mathbf \Sigma:=\textrm{diag}(\mathbf s),$ and if $\mathbf B$ and $\mathbf C$ are selected as $\mathbf B=\mathbf V\mathbf \Sigma^{1/2}$, and $\mathbf C=\mathbf U\mathbf \Sigma^{1/2}$, then $\frac{1}{2}(\|\mathbf B\|_F^2+\|\mathbf C\|_F^2)=\sum_{i=1}^{P} s_i = \|\mathbf A\|_*.$ Given , it is possible to rewrite as $$\begin{aligned} \label{nuclear norm_BC} \hat{\mathbf Z}=\arg\min_{\mathbf A={\mathbf C}{\mathbf B}^T}&\frac{1}{2}\|(\mathbf Z-\mathbf A)\odot\mathbf W\|_F^2+\frac{\mu}{2}(\|\mathbf B\|_F^2+\|\mathbf C\|_F^2).\end{aligned}$$ A formal proof of the equivalence between and can be found in [@MMG12]. Matrix completion in its factorized form can be reformulated in terms of and RKHSs. Following [@ABEV09], define spaces ${\mathcal X}:=\{1,\ldots,M\}$ and ${\mathcal Y}:=\{1,\ldots,N\}$ with associated kernels $k_{{\mathcal X}}(m,m')$ and $k_{{\mathcal Y}}(n,n'),$ respectively. Let $f(m,n)$ represent the $(m,n)$-th entry of the approximant matrix $\mathbf A$ in , and $P$ a prescribed overestimate of its rank. Consider estimating $f:{\mathcal X}\times{\mathcal Y}\to \mathbb R$ in over the family $\mathcal F :=\{f(m,n)={\sum_{i=1}^P}c_i(n)b_i(m),\ c_i\in {\mathcal H_{\mathcal X}},\ b_i\in{\mathcal H_{\mathcal Y}}\}$ via $$\begin{aligned} \hat f=\arg\min_{f\in\mathcal F} \frac{1}{2}&\sum_{m=1}^M{\sum_{n=1}^N}w_{mn}(z_{mn}-f(m,n))^2\nonumber\\ &+\frac{\mu}{2} {\sum_{i=1}^P}\left(\|c_i\|^2_{{\mathcal H_{\mathcal X}}}+\|b_i\|^2_{{\mathcal H_{\mathcal Y}}}\right) \label{nuclear_norm_delta}.\end{aligned}$$ If both kernels are selected as Kronecker delta functions, then coincides with . This equivalence is stated in the following lemma. 
\[lemma2\] Consider spaces ${\mathcal X}:=\{1,\ldots,M\},$ ${\mathcal Y}:=\{1,\ldots,N\}$ and kernels $k_{{\mathcal X}}(m,m'):=\delta(m-m')$ and $k_{{\mathcal Y}}(n,n'):=\delta(n-n')$ over the product spaces ${\mathcal X}\times{\mathcal X}$ and ${\mathcal Y}\times{\mathcal Y}$, respectively. Define functions $f:{\mathcal X}\times{\mathcal Y}\to \mathbb R$, $c_i:{\mathcal X}\to \mathbb R$, and $b_i:{\mathcal Y}\to \mathbb R, \ i=1,\ldots, P$, and matrices $\mathbf A \in \mathbb R^{M\times N}$, $\mathbf B \in \mathbb R^{N\times P},$ and $\mathbf C \in \mathbb R^{M\times P}.$ It holds that: 1. The RKHS ${\mathcal H_{\mathcal X}}$ (${\mathcal H_{\mathcal Y}}$) of functions over ${\mathcal X}$ (correspondingly ${\mathcal Y}$), associated with $k_{{\mathcal X}}$ ($k_{{\mathcal Y}}$), reduces to ${\mathcal H_{\mathcal X}}=\mathbb R^M$ (${\mathcal H_{\mathcal Y}}=\mathbb R^N$). 2. Problems , , and are equivalent upon identifying $f(m,n)=A_{mn}$, $b_i(n)=B_{ni}$, and $c_i(m)=C_{mi}.$ According to Lemma \[lemma2\], the intricacy of rewriting as in does not introduce any benefit when the kernel is selected as the Kronecker delta. But as will be argued next, the equivalence between these two estimators nicely generalizes the matrix completion problem to sparse KBL of missing data with arbitrary kernels. The separable structure of the regularization term in enables a finite-dimensional representation of the functions $$\begin{aligned} \nonumber \hat c_i(m)&=\sum_{m'=1}^M \gamma_{im'}k_{{\mathcal X}}(m',m),\ m=1,\ldots,M, \\ \hat b_i(n)&=\sum_{n'=1}^N \beta_{in'}{k_{\mathcal Y}}(n',n),\ n=1,\ldots,N.
\label{c_and_b}\end{aligned}$$ Optimal scalars $\{\gamma_{im}\}$ and $\{\beta_{in}\}$ are obtained by substituting into , and solving $$\begin{aligned} \min_{\substack{{\mathbf{\tilde C}}\in \mathbb R^{M\times P}\\{\mathbf{\tilde B}}\in \mathbb R^{N\times P}}}&\frac{1}{2}\|({\mathbf Z}-{\mathbf K_{\mathcal X}}{\mathbf{\tilde C}}{\mathbf{\tilde B}}^T{\mathbf K_{\mathcal Y}}^T)\odot{\mathbf W}\|_F^2\nonumber\\&+\frac{\mu}{2} \left[{\textrm{trace}}({\mathbf{\tilde C}}^T{\mathbf K_{\mathcal X}}{\mathbf{\tilde C}})+{\textrm{trace}}({\mathbf{\tilde B}}^T{\mathbf K_{\mathcal Y}}{\mathbf{\tilde B}})\right]\label{coefs_nmc}\end{aligned}$$ where matrix ${\mathbf{\tilde C}}$ (${\mathbf{\tilde B}}$) is formed with entries $\gamma_{im}$ ($\beta_{in}$). A Bayesian approach to kernel-based matrix completion is given next, followed by an algorithm to solve for ${\mathbf{\tilde B}}$ and ${\mathbf{\tilde C}}$. Bayesian Low-Rank Imputation and Prediction ------------------------------------------- To recast in a Bayesian framework, suppose that the available entries of ${\mathbf Z}$ obey the additive white Gaussian noise (AWGN) model ${\mathbf Z}=\mathbf A+\mathbf E,$ with $\mathbf E$ having entries independent and identically distributed (i.i.d.) according to the zero-mean Gaussian distribution $\mathcal N(0,\sigma^2)$. Matrix $\mathbf A$ is factorized as $\mathbf A={\mathbf C}{\mathbf B}^T$ without loss of generality (w.l.o.g.). Then, a Gaussian prior is assumed for each of the columns $\mathbf b_i$ and $\mathbf c_i$ of ${\mathbf B}$ and ${\mathbf C}$, respectively, $$\begin{aligned} \label{prior} \mathbf b_i &\sim \mathcal N(\mathbf 0,\mathbf R_B),\ \mathbf c_i \sim \mathcal N(\mathbf 0,\mathbf R_C) \end{aligned}$$ independent across $i,$ and with ${\textrm{trace}}(\mathbf R_B)={\textrm{trace}}(\mathbf R_C)$. Invariance across $i$ is justifiable, since columns are a priori interchangeable, while ${\textrm{trace}}(\mathbf R_B)={\textrm{trace}}(\mathbf R_C)$ is introduced w.l.o.g.
to remove the scalar ambiguity in $\mathbf A={\mathbf C}{\mathbf B}^T$. Under the AWGN model, and with priors , the maximum a posteriori (MAP) estimator of $\mathbf A$ given $\mathbf Z$ at the entries indexed by ${\mathbf W}$ takes the form \[cf. \] $$\begin{aligned} \min_{\substack{{\mathbf C}\in \mathbb R^{M\times P}\\{\mathbf B}\in \mathbb R^{N\times P}}}&\frac{1}{2}\|({\mathbf Z}-{\mathbf C}{\mathbf B}^T)\odot{\mathbf W}\|_F^2\nonumber\\&+\frac{\sigma^2}{2} \left[{\textrm{trace}}({\mathbf C}^T\mathbf R_C^{-1}{\mathbf C})+{\textrm{trace}}({\mathbf B}^T\mathbf R_B^{-1}{\mathbf B})\right].\label{bayesian_nmc}\end{aligned}$$ With $\mathbf R_C={\mathbf K_{\mathcal X}}$ and $\mathbf R_B={\mathbf K_{\mathcal Y}}$, and substituting ${\mathbf B}:={\mathbf K_{\mathcal Y}}{\mathbf{\tilde B}}$ and ${\mathbf C}:={\mathbf K_{\mathcal X}}{\mathbf{\tilde C}}$, the MAP estimator that solves coincides with the estimator solving for the coefficients of kernel-based matrix completion, provided that covariance and Gram matrices coincide. From this Bayesian perspective, the KBL matrix completion method provides a generalization of , which can accommodate a priori knowledge in the form of correlation across rows and columns of the incomplete ${\mathbf Z}$. With prescribed correlation matrices $\mathbf R_B$ and $\mathbf R_C$, can even perform smoothing and prediction. Indeed, if a column (or row) of ${\mathbf Z}$ is completely missing, can still find an estimate $\mathbf{ \hat Z}$ relying on the covariance between the missing and available columns. This feature is not available with , since the latter relies only on rank-induced collinearities, so it cannot reconstruct a missing column. The prediction capability is useful, for instance, in collaborative filtering [@ABEV09], where a group of users rates a collection of items, to enable inference of new-user preferences or items entering the system.
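To illustrate how the MAP criterion above can be minimized, here is a minimal NumPy sketch of cyclic closed-form column updates (our own naming and initialization, not the paper's exact Algorithm \[KML-table\]); it assumes a 0/1 sampling mask ${\mathbf W}$ and invertible kernel matrices, with $\mathbf R_C=\mathbf K_{\mathcal X}$, $\mathbf R_B=\mathbf K_{\mathcal Y}$, and $\mu=\sigma^2$:

```python
import numpy as np

def kmc_bcd(Z, W, K_X, K_Y, mu, P, iters=200, seed=0):
    """Cyclic closed-form column updates (BCD) for
       min 0.5*||(Z - C B^T) * W||_F^2
           + (mu/2)*[tr(C^T Kx^{-1} C) + tr(B^T Ky^{-1} B)].
    Assumes W is a 0/1 mask (so W*W = W)."""
    rng = np.random.default_rng(seed)
    M, N = Z.shape
    C = 0.1 * rng.standard_normal((M, P))
    B = 0.1 * rng.standard_normal((N, P))
    Kx_inv = np.linalg.inv(K_X)
    Ky_inv = np.linalg.inv(K_Y)
    for _ in range(iters):
        for i in range(P):
            # residual with the i-th rank-one term excluded
            Zi = Z - C @ B.T + np.outer(C[:, i], B[:, i])
            b = B[:, i]
            C[:, i] = np.linalg.solve(np.diag(W @ (b * b)) + mu * Kx_inv,
                                      (W * Zi) @ b)
            c = C[:, i]
            B[:, i] = np.linalg.solve(np.diag(W.T @ (c * c)) + mu * Ky_inv,
                                      (W * Zi).T @ c)
    return C, B

# demo on a small, fully observed, exactly low-rank matrix
rng = np.random.default_rng(7)
Z = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))
W = np.ones_like(Z)
C, B = kmc_bcd(Z, W, np.eye(6), np.eye(5), mu=1e-2, P=2, iters=500)
rel_err = np.linalg.norm(Z - C @ B.T) / np.linalg.norm(Z)
```

With identity Gram matrices this reduces to the Frobenius-regularized (nuclear-norm) factorization, which is why the fully observed low-rank demo above is recovered almost exactly.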
Additionally, the Bayesian reformulation provides an explicit interpretation for the regularization parameter $\mu=\sigma^2$ as the variance of the model error, which can thus be obtained from training data. The kernel-based matrix completion method is summarized in Algorithm \[KML-table\], which solves upon identifying $\mathbf R_C={\mathbf K_{\mathcal X}}$, $\mathbf R_B={\mathbf K_{\mathcal Y}}$, and $\sigma^2=\mu$, and solves after changing variables ${\mathbf B}:={\mathbf K_{\mathcal Y}}{\mathbf{\tilde B}}$ and ${\mathbf C}:={\mathbf K_{\mathcal X}}{\mathbf{\tilde C}}$ (compare with lines 13-14 in Algorithm \[KML-table\]). \[KML-table\] Detailed derivations of the updates in Algorithm \[KML-table\] are provided in the Appendix. For a high-level description, the columns of ${\mathbf B}$ and ${\mathbf C}$ are updated cyclically, solving via BCD iterations. This procedure converges to a stationary point of , which in principle does not guarantee global optimality. Fortunately, it can be established that local minima of are global minima, by transforming into a convex problem through the same change of variables proposed in [@MMG12] for the analysis of . This observation implies that Algorithm \[KML-table\] yields the global optimum of , and thus . The kernel-based matrix completion method here offers an alternative to [@ABEV09], where the low-rank constraint is introduced indirectly through the kernel trick. Furthermore, bypassing the nuclear norm and using instead, renders generalizable to tensor imputation [@BMG12]. Kernel-based dictionary learning ================================ Basis pursuit approaches advocate an overcomplete set of bases to cope with model uncertainty, thus learning from data the most concise subset of bases that represents the signal of interest. But the extensive set of candidate bases (a.k.a. dictionary) still needs to be prescribed.
The next step towards model-agnostic KBL is to learn the dictionary from data, along with the sparse regression coefficients. Under the sparse linear model $$\begin{aligned} \label{dictionary_model} \mathbf z_m = {\mathbf B}\bm \gamma_m+\mathbf e_m,\ m=1,\ldots,M\end{aligned}$$ with dictionary of bases ${\mathbf B}\in\mathbb R^{N\times P},$ and vector of coefficients $\bm \gamma_m\in\mathbb R^P$, the goal of dictionary learning is to obtain ${\mathbf B}$ and ${\mathbf C}:=[\bm\gamma_1,\ldots,\bm\gamma_M]^T$ from data ${\mathbf Z}:=[\mathbf z_1,\ldots,{\mathbf z}_M]^T$. A swift count of equations and unknowns yields $NP+MP$ scalar variables to be learned from $MN$ data (see Fig. \[fig:KDL\]). This goal is not achievable for an overcomplete design ($P>N$) unless sparsity of $\{\bm\gamma_m\}_{m=1}^M$ is exploited. Under proper conditions, it is possible to recover a sparse $\bm\gamma_m$ containing at most $S$ nonzero entries from a reduced number $N_s:=\theta S \log P\leq N$ of equations [@CT05], where $\theta$ is a proportionality constant. Hence, the number of equations needed to specify ${\mathbf C}$ reduces to $MN_s$, as represented by the darkened region of ${\mathbf Z}^T$ in Fig. \[fig:KDL\]. With $N_s<N$, it is then possible and crucial to collect a sufficiently large number $M$ of data vectors in order to ensure that $MN\geq NP+MN_s$, thus accommodating the additional $NP$ equations needed to determine ${\mathbf B}$, and enable learning of the dictionary. ![Comparison between KDL and NBP; (top) dictionary ${\mathbf B}$ and sparse coefficients $\bm\gamma_m$ for KDL, where $MN_s$ equations are sufficient to recover ${\mathbf C}$; (bottom) low-rank structure $\mathbf A={\mathbf C}{\mathbf B}^T$ presumed in KMC.
[]{data-label="fig:KDL"}](dictionary_learning_Ns.eps "fig:"){width="\linewidth"}\ Having collected sufficient training data, one possible approach to find ${\mathbf B}$ and ${\mathbf C}$ is to fit the data via the LS cost $\|{\mathbf Z}-{\mathbf C}{\mathbf B}^T\|_F^2$ regularized by the $\ell_1$-norm of ${\mathbf C}$ in order to effect sparsity in the coefficients [@KMRELS03]. This dictionary learning approach can be recast into the form of blind NBP by introducing the additional regularizing term $\lambda{\sum_{i=1}^P}\|c_i\|_1$, with $\|c_i\|_1:=\sum_{m=1}^M |c_i(m)|$. The new regularizer on functions $c_i:{\mathcal X}\to\mathbb R$ depends on their values at the measurement points $m$ only, and can be absorbed in the loss part of . Thus, the optimal $\{c_i\}$ and $\{b_i\}$ retain their finite expansion representations dictated by the Representer Theorem. Coefficients $\{\gamma_{mp},\beta_{np}\}$ must be adapted according to the new cost, and becomes $$\begin{aligned} \label{bayesian_DL}\min_{\substack{{\mathbf C}\in \mathbb R^{M\times P}\\{\mathbf B}\in \mathbb R^{N\times P}}}&\frac{1}{2}\|({\mathbf Z}-{\mathbf C}{\mathbf B}^T)\odot{\mathbf W}\|_F^2+\lambda\|{\mathbf C}\|_1\\\nonumber&+\frac{\sigma^2}{2} \left[{\textrm{trace}}({\mathbf B}^T\mathbf R_B^{-1}{\mathbf B})+{\textrm{trace}}({\mathbf C}^T\mathbf R_C^{-1}{\mathbf C})\right].\end{aligned}$$ **Remark 4.** Kernel-based dictionary learning (KDL) via inherits two attractive properties of kernel matrix completion (KMC), that is blind NBP, namely its flexibility to introduce a priori information through $\mathbf R_B$ and $\mathbf R_C$, as well as the capability to cope with missing data. While both KDL and KMC estimate bases $\{b_i\}$ and coefficients $\{c_i\}$ jointly, their difference lies in the size of the dictionary. As in principal component analysis, KMC presumes a low-rank model for the approximant $\mathbf A={\mathbf C}{\mathbf B}^T$, compressing signals $\{\mathbf z_m\}$ with $P'<M$ components (Fig.
\[fig:KDL\] (bottom)). Low rank of $\mathbf A$ is not required by the dictionary learning approach, where signals $\{\mathbf z_m\}$ are spanned by $P\geq M$ dictionary atoms $\{b_i\}$ (Fig. \[fig:KDL\] (top)), provided that each $\mathbf z_m$ is composed of only a few atoms. Algorithm \[KML-table\] can be modified to solve by replacing the update for column $\mathbf c_i$ in line 7 with the Lasso estimate $$\mathbf c_i:=\arg\min_{\mathbf c\in \mathbb R^M} \frac{1}{2}\mathbf c^T \mathbf H_i \mathbf c-\mathbf c^T({\mathbf W}\odot \mathbf{ Z}_i){\mathbf B}\mathbf e_i +\lambda\|\mathbf c\|_1.$$ The Bayesian interpretation of brings KDL close to [@XZC12], where a Bernoulli-Gaussian model for ${\mathbf C}$ accounts for its sparsity, and a Beta distribution is introduced for learning the distribution of ${\mathbf C}$ through hyperparameters. Although [@XZC12] assumes independent Gaussian variables across “time” samples in the underlying model for ${\mathbf C}$, generalization to correlated variables is straightforward. Bernoulli parameters controlling the sparsity of $c_{mp}$ are assumed invariant across $m$ in [@XZC12], which amounts to stationarity over $c_{mp}$. Sparse learning of temporally correlated data is studied also in [@ZR11], although the time-invariant model for the support of $\mathbf c_m$ does not lend itself to dictionary learning. Although dictionary learning can indeed be viewed as a blind counterpart of compressive sampling, its capability of recovering ${\mathbf B}$ and ${\mathbf C}$ from data is typically illustrated by examples rather than theoretical guarantees. Recent efforts on establishing identifiability and local optimality of dictionary learning can be found in [@GW11] and [@GS12]. A related KDL strategy has been proposed in [@SNPC12], where data and dictionary atoms are organized in classes, and the regularized learning criterion is designed to promote cohesion of atoms within a class.
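The Lasso column update replacing line 7 of Algorithm \[KML-table\] has no closed form, but can be computed with a few proximal-gradient (ISTA) iterations. The sketch below uses our own naming, writes the linear term generically as $\mathbf c^T\mathbf g$, and assumes $\mathbf H_i$ is symmetric positive semidefinite:

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_column(H, g, lam, iters=1000):
    """ISTA for min_c 0.5 c^T H c + c^T g + lam*||c||_1, H symmetric PSD."""
    L = max(np.linalg.eigvalsh(H).max(), 1e-12)  # Lipschitz constant of the gradient
    c = np.zeros_like(g)
    for _ in range(iters):
        c = soft_threshold(c - (H @ c + g) / L, lam / L)
    return c

# demo: with H = I the minimizer is the soft-thresholded vector soft(-g, lam)
c_star = lasso_column(np.eye(3), np.array([-3.0, -0.5, 2.0]), 1.0)
```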
Applications {#sec:applications} ============ Spectrum cartography via NBP and MKL ------------------------------------ Consider the setup in [@BMG11] with $N_c=100$ radios distributed over an area ${\mathcal X}$ of $100\times 100\textrm{m}^2$ to measure the ambient RF power spectral density (PSD) at $N_f=24$ frequencies equally spaced in the band from $2,400$MHz to $2,496$MHz, as specified by the IEEE 802.11 wireless LAN standard [@IEEE802]. The radios collaborate by sharing their $N=N_cN_f$ measurements with the goal of obtaining a map of the PSD across space and frequency, while specifying at the same time which of the $P=14$ frequency sub-bands are occupied. The wireless propagation is simulated according to the pathloss model affected by shadowing described in [@AP09], with parameters $n_p=3$, $\Delta_0=60$m, $\delta=25$m, $\sigma_X^2=25$dB, and with AWGN variance $\sigma_n^2=-10$dB. Fig. \[fig:power\_map\] depicts the distribution of power across space generated by two sources transmitting over bands $i=5$ and $i=8$ with center frequencies $2,432$MHz and $2,447$MHz, respectively. Fig. \[fig:simulated\_data\] shows the PSD as seen by a representative radio located at the center of ${\mathcal X}$. ![ Distribution of power across space generated by two sources.[]{data-label="fig:power_map"}](powermap.eps){width="4.8cm"} ![ PSD measurements at a representative location $x_n$.[]{data-label="fig:simulated_data"}](measurements65dB.eps){width="4.2cm"} Model is adopted for collaborative PSD sensing, with $x$ and $y$ representing the spatial and frequency variables, respectively. Bases $\{b_i\}$ are prescribed as Hann-windowed pulses in accordance with [@IEEE802], and the distribution of power across space per sub-band is given by $\{c_i(x)\}$ after interpolating the measurements obtained by the radios via .
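For concreteness, such windowed spectral bases could be generated as follows; the pulse shape, center frequencies, and bandwidth below are illustrative placeholders rather than the exact 802.11 spectral masks:

```python
import numpy as np

def hann_pulse(freqs, fc, bw):
    """Hann-windowed spectral pulse centered at fc with bandwidth bw
    (hypothetical shape, for illustration only)."""
    u = (freqs - fc) / bw
    p = np.zeros_like(freqs, dtype=float)
    inside = np.abs(u) <= 0.5
    p[inside] = 0.5 * (1.0 + np.cos(2.0 * np.pi * u[inside]))
    return p

freqs = np.linspace(2400.0, 2496.0, 24)    # the N_f = 24 measurement frequencies (MHz)
centers = np.linspace(2406.0, 2471.0, 14)  # illustrative centers for P = 14 sub-bands
Bases = np.stack([hann_pulse(freqs, fc, 22.0) for fc in centers], axis=1)
```

Each column of `Bases` then plays the role of one basis $b_i$ evaluated on the frequency grid.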
Two exponential kernels $k_r(x,x')=\exp(-\|x-x'\|^2/\theta_r^2),\ r=1,2$ with $\theta_1=10$m and $\theta_2=20$m are selected, and convex combinations of the two are considered as candidate interpolators $k(x,x')$. This MKL strategy is intended for capturing two different levels of resolution as produced by pathloss and shadowing. Correspondingly, each $c_i(x)$ is decomposed into two functions $c_{i1}(x)$ and $c_{i2}(x)$ which are regularized separately in $\eqref{nbp}$. Solving generates the PSD maps of Fig. \[fig:nbp\]. Only $\bm\gamma_5$ and $\bm\gamma_8$ in the solution to take nonzero values (more precisely $\bm\gamma_{5r}$ and $\bm\gamma_{8r},\ r=1,2$ in the MKL adaptation of ), which correctly reveals which frequency bands are occupied as shown in Fig. \[fig:nbp\] (first row). The estimated PSD across space is depicted in Fig. \[fig:nbp\] (second row) for each band respectively, and compared to the ground truth depicted in Fig. \[fig:nbp\] (third row). The multi-resolution components $c_{5r}(x)$ and $c_{8r}(x)$ are depicted in Fig. \[fig:nbp\] (last two rows), demonstrating how kernel $k_1$ captures the coarse pathloss distribution, while $k_2$ refines the map by revealing locations affected by shadowing. 
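The two candidate kernels and a convex combination of them are straightforward to form; the weight $\alpha$ below is an arbitrary placeholder (in the experiment the combination is effectively learned by the MKL estimator):

```python
import numpy as np

def exp_kernel(X, theta):
    """k(x, x') = exp(-||x - x'||^2 / theta^2) on the rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    return np.exp(-d2 / theta**2)

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 100.0, size=(20, 2))   # sensor locations in a 100x100 m area
K1 = exp_kernel(X, 10.0)                    # fine resolution (theta_1 = 10 m)
K2 = exp_kernel(X, 20.0)                    # coarse resolution (theta_2 = 20 m)
alpha = 0.3                                 # hypothetical combination weight
K = alpha * K1 + (1.0 - alpha) * K2
min_eig = np.linalg.eigvalsh(K).min()
```

A convex combination of positive-semidefinite kernels remains a valid kernel, which is what the eigenvalue check confirms numerically.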
![[NBP for spectrum cartography using MKL.]{}[]{data-label="fig:nbp"}](base5dB.eps "fig:"){width="3cm"} ![[NBP for spectrum cartography using MKL.]{}[]{data-label="fig:nbp"}](base8dB.eps "fig:"){width="3cm"}\ ![[NBP for spectrum cartography using MKL.]{}[]{data-label="fig:nbp"}](mapf5.eps "fig:"){width="2.9cm"} ![[NBP for spectrum cartography using MKL.]{}[]{data-label="fig:nbp"}](mapf8.eps "fig:"){width="2.9cm"}\ ![[NBP for spectrum cartography using MKL.]{}[]{data-label="fig:nbp"}](mapSf5.eps "fig:"){width="2.9cm"} ![[NBP for spectrum cartography using MKL.]{}[]{data-label="fig:nbp"}](mapSf8.eps "fig:"){width="2.9cm"}\ ![[NBP for spectrum cartography using MKL.]{}[]{data-label="fig:nbp"}](map1f5.eps "fig:"){width="2.9cm"} ![[NBP for spectrum cartography using MKL.]{}[]{data-label="fig:nbp"}](map1f8.eps "fig:"){width="2.9cm"}\ ![[NBP for spectrum cartography using MKL.]{}[]{data-label="fig:nbp"}](map2f5.eps "fig:"){width="2.9cm"} ![[NBP for spectrum cartography using MKL.]{}[]{data-label="fig:nbp"}](map2f8.eps "fig:"){width="2.9cm"} These results demonstrate the usefulness of model $\eqref{bilinear}$ for collaborative spectrum sensing, with bases abiding to [@IEEE802] and multi-resolution kernels. The sparse nonparametric estimator serves the purpose of revealing the occupied frequency bands, and capturing the PSD map across space per source. Compared to the spline-based approach in [@BMG11], the MKL adaptation of here provides the appropriate multi-resolution capability to capture pathloss and shadowing effects when interpolating the data across space. Completion of Gene Expression Data via Blind NBP ------------------------------------------------ The imputation method is tested here on microarray data described in [@SSB04]. Expression levels of yeast across $N_g=4,772$ genes sampled at $N=13$ time points during the cell cycle are considered. 
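Throughout this experiment, data losses are simulated by masking entries and accuracy is reported as a relative recovery error in dB. Both can be sketched as follows; restricting the error to the missing entries is our reading of the metric:

```python
import numpy as np

rng = np.random.default_rng(5)
M, N = 100, 13
Z = rng.standard_normal((M, N))                 # stand-in for the expression matrix
W = (rng.random((M, N)) < 0.10).astype(float)   # keep only 10% of the entries

def recovery_error_db(Zhat, Z, W):
    """Relative squared error on the missing entries, in dB (our assumed definition)."""
    miss = (W == 0)
    return 10.0 * np.log10(np.sum((Zhat - Z)[miss]**2) / np.sum(Z[miss]**2))

# sanity check: the trivial all-zeros estimate scores exactly 0 dB
err_zero = recovery_error_db(np.zeros_like(Z), Z, W)
```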
A subset of $M=100$ genes is extracted and their expression levels are organized in the matrix $\mathbf Z\in\mathbb R^{M\times N}$ depicted in Fig. \[fig:microarrays\] (left). Severe data losses are simulated by discarding $90\%$ of the entries of ${\mathbf Z}$, including the nearly $5\%$ actually missing data. According to the Bayesian model , it follows that $$\begin{aligned} E[{\mathbf Z}{\mathbf Z}^T]&=\theta\mathbf R_C+\sigma^2_e \mathbf I,\ ~~ E[{\mathbf Z}^T{\mathbf Z}]=\theta\mathbf R_B+\sigma^2_e \mathbf I~.\label{RZZ}\end{aligned}$$ To study the effect of hydrogen peroxide on the cell cycle arrest, two extra microarray datasets ${\mathbf Z}^{(1)},\ {\mathbf Z}^{(2)}\in\mathbb R^{M\times N}$, synchronized with ${\mathbf Z}$, are collected in [@SSB04]. These two matrices are employed to form an estimate of $ E[{\mathbf Z}{\mathbf Z}^T]$, which is used instead of $\mathbf R_C$ in after neglecting the noise term in . Since the presence of hydrogen peroxide in samples ${\mathbf Z}^{(1)}$ and ${\mathbf Z}^{(2)}$ induces cell cycle arrest, the correlation between samples across time in ${\mathbf Z}^{(1)}$ and ${\mathbf Z}^{(2)}$ is altered, and thus these samples are not appropriate for estimating $E[{\mathbf Z}^T{\mathbf Z}]$. Alternatively, the sample estimate of $E[{\mathbf Z}^T{\mathbf Z}]$ is formed with the microarray data of the $(N_g-M)\times N$ genes set aside, and then used in place of $\mathbf R_{B}$ in . Solving with the available data ($10\%$ of the total) as shown in Fig. \[fig:microarrays\] (second left) results in the matrix $\mathbf{\hat Z}$ depicted in Fig. \[fig:microarrays\] (second right), where the imputed missing data introduce an average recovery error of $-8$dB \[cf. Fig. \[fig:microarray\_curves\]\]. In producing $\mathbf{\hat Z}$, the smoothing capability of to recover completely missing rows of ${\mathbf Z}$ (amounting to 25 in this example) is corroborated. Missing rows cannot be recovered by nuclear norm regularization alone \[cf. 
\], even if ${\mathbf Z}$ is padded with expression levels of the discarded $N_g-M$ genes. Fig. \[fig:microarrays\] (right) presents this case, confirming that its performance degrades w.r.t. NBP, while Fig. \[fig:microarray\_curves\] illustrates the sensitivity of the estimation error to the cross-validated regularization parameter $\mu$ for both estimators. Similar degraded results are observed when imputing missing entries of ${\mathbf Z}$ using the impute.knn() and svdImpute() methods, as implemented in the R packages pcaMethods and BioConductor-impute. These two methods were applied to the padded ${\mathbf Z}$, after the requisite discarding of the 25 missing rows, resulting in recovery errors on the remaining missing entries at $-3.84$dB and $-0.12$dB (with parameter nPcs$=12$), respectively. ![[Microarray data completion; from left to right: original sample; $10\%$ available data; recovery via NBP; and recovery via nuclear-norm regularized LS.]{}[]{data-label="fig:microarrays"}](microarray.eps "fig:"){width="1.6cm"} ![[Microarray data completion; from left to right: original sample; $10\%$ available data; recovery via NBP; and recovery via nuclear-norm regularized LS.]{}[]{data-label="fig:microarrays"}](microarray_B.eps "fig:"){width="1.6cm"} ![[Microarray data completion; from left to right: original sample; $10\%$ available data; recovery via NBP; and recovery via nuclear-norm regularized LS.]{}[]{data-label="fig:microarrays"}](microarray_K.eps "fig:"){width="1.6cm"} ![[Microarray data completion; from left to right: original sample; $10\%$ available data; recovery via NBP; and recovery via nuclear-norm regularized LS.]{}[]{data-label="fig:microarrays"}](microarray_N.eps "fig:"){width="1.6cm"} ![[Relative recovery error in dB with $90\%$ missing data; comparison between blind NBP (KMC) and nuclear norm regularization.
]{}[]{data-label="fig:microarray_curves"}](errors_microarray_legend.eps){width="1\linewidth" height="0.6\linewidth"} Network Flow Prediction via Blind NBP ------------------------------------- The Abilene network in Fig. \[fig:abilene\_network\], a.k.a. Internet 2, comprising $11$ nodes and $M=30$ links [@Abilene], is utilized as a testbed for traffic load prediction. Aggregate link loads $z_{mn}$ are recorded at $5$-minute intervals on December 22, 2008, between 12:00am and 11:55pm; the morning samples are collected in the first $N/2=144$ columns of matrix ${\mathbf Z}\in \mathbb R^{M\times N}.$ These samples are then used to predict link loads hours ahead, by capitalizing on their mutual cross-correlation, the periodic correlation across days, and their interdependence across links as dictated by the network topology. The correlation matrix $E({\mathbf Z}{\mathbf Z}^T)$ represented in Fig. \[fig:correlation\_abilene\] is estimated with training samples collected during the two previous weeks, from December 8 to December 21, 2008, and substituted for $\mathbf R_C$ in according to . A singular point at 11:00am in the traffic curve, as depicted in black in Fig. \[fig:network\_prediction\], is reflected in the sharp transition noticed in Fig. \[fig:correlation\_abilene\]. On the other hand, $\mathbf R_B$ is not estimated but derived from the network structure. Supposing i.i.d. flows across the network, it holds that $E({\mathbf Z}^T{\mathbf Z})=\sigma_f^2\mathbf R^T\mathbf R$, where $\mathbf R$ represents the network routing matrix and $\sigma_f^2$ the flow variance. Thus, $\sigma_f^2\mathbf R^T\mathbf R$ was used instead of $\mathbf R_B$ in , with $\sigma^2_f$ adjusted to satisfy $\textrm{tr} (E[{\mathbf Z}^T{\mathbf Z}])=\textrm{tr}( E[{\mathbf Z}{\mathbf Z}^T])$. Fig. \[fig:network\_prediction\] shows link loads predicted by on December 22, 2008, for a representative link, along with the actually recorded samples for that day.
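Deriving the flow-induced covariance from the topology and matching the traces can be sketched as follows. The routing matrix and loads below are synthetic placeholders, and we use the load model $\mathbf z=\mathbf R\mathbf f$ with i.i.d. flows $\mathbf f$, under which the link-load covariance is $\sigma_f^2\mathbf R\mathbf R^T$ (our shape convention):

```python
import numpy as np

rng = np.random.default_rng(3)
M, F, N = 30, 50, 288                          # links, flows, time samples (illustrative)
R = (rng.random((M, F)) < 0.2).astype(float)   # hypothetical 0/1 routing matrix
Z = rng.standard_normal((M, N))                # stand-in for the recorded link loads

# scale sigma_f^2 so the model trace matches the sample trace of E[Z Z^T]
sample_cov = Z @ Z.T / N
sigma_f2 = np.trace(sample_cov) / np.trace(R @ R.T)
R_flow = sigma_f2 * (R @ R.T)                  # topology-derived covariance
```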
Prediction accuracy is compared in Fig. \[fig:network\_prediction\] to a baseline strategy comprising independent LMMSE estimators per link, which yield a relative prediction error $e_p=0.22$ aggregated across links, against $e_p=0.15$ that results from . Strong correlation among samples from 12:00am to 2:00pm \[cf. Fig. \[fig:correlation\_abilene\]\] renders LMMSE prediction accurate in this interval, relying on single-link data only. The benefit of considering the links jointly becomes apparent in the subsequent interval from 2:00pm to 11:55pm, where the traffic correlation with morning samples fades away and the network structure adds valuable information, in the form of $\mathbf R_B$, to stabilize prediction. ![Internet 2 network topology graph [@Abilene].[]{data-label="fig:abilene_network"}](abilene_map.eps){width="0.6\linewidth"} ![ Sample estimates of $E({\mathbf Z}{\mathbf Z}^T)$ for link loads across time are used to replace $\mathbf R_C$ and ${\mathbf K_{\mathcal Y}}$. []{data-label="fig:correlation_abilene"}](network_correlation.eps){width="0.6\linewidth"} ![[ Network prediction via KMC (blind NBP). Measured and predicted traffic on link $m=21$.]{}[]{data-label="fig:network_prediction"}](predictionBNBPmu10days15mu0P30l21.eps){width="0.9\linewidth"} Summary ======= A new methodology was outlined in this paper by cross-fertilizing sparsity-aware signal processing tools with kernel-based learning. It goes well beyond translating sparse vector regression techniques into their nonparametric counterparts, to generate a series of unique possibilities such as kernel selection or kernel-based matrix completion. The present article contributes to these efforts by advancing NBP as the cornerstone of sparse KBL, including blind versions that emerge as nonparametric nuclear norm regularization and dictionary learning. KBL was connected with GP analysis, promoting a Bayesian viewpoint where kernels convey prior information.
Alternatively, KBL can be regarded as an interpolation toolset through its connection with the NST, suggesting that the impact of the prior model choice is attenuated when the size of the dataset is large, especially when kernel selection is also incorporated. All in all, sparse KBL was envisioned as a fruitful research direction. Its impact on signal processing practice was illustrated through a diverse set of application paradigms. Appendix {#appendix .unnumbered} ======== Proofs of Properties P1-P3 {#proofs-of-properties-p1-p3 .unnumbered} -------------------------- 1\) If white noise $n(x): x\in \mathbb R$ is fed to an ideal low-pass filter with cutoff frequency $\omega_{\max}=\pi$, then $r(\xi):=E(z(x)z(x+\xi))={\textrm{sinc}}(\xi)$ is the autocorrelation of the output $z(x)$. Hence, $\mathbf K$ equals the covariance matrix of $\mathbf z^T:=[z(x_1),\ldots,z(x_N)]$, and as such $\mathbf K\succeq\mathbf 0$. 2\) Rewrite the kernel $f_{x'}(x):={\textrm{sinc}}(x-x')$ as a function parameterized by $x'$. Then, the NST applied to the bandlimited $f_{x'}(x)$ yields $f_{x'}(x)={\sum_{n\in\mathbb Z}}f_{x'}(n){\textrm{sinc}}(x-n)={\sum_{n\in\mathbb Z}}\phi_n(x')\phi_n(x)$. 3\) Upon defining $\alpha_n:=f(x_n)$, the reconstruction formula $f(x):={\sum_{n\in\mathbb Z}}f(n){\textrm{sinc}}(x-n)$ gives the kernel expansion of $f\in{\mathcal B_{\pi}}$. Hence, by definition of the RKHS norm $\|f\|^2_{{\mathcal H_{\mathcal X}}}={\sum_{n\in\mathbb Z}}\sum_{n'\in\mathbb Z} f(n){\textrm{sinc}}(n-n')f(n')$. Substituting the reconstructed $f(n)=\sum_{n'\in\mathbb Z}{\textrm{sinc}}(n-n')f(n')$ into the last equation yields $\|f\|_{{\mathcal H_{\mathcal X}}}^2={\sum_{n\in\mathbb Z}}f^2(n)$.
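Property P1 can also be checked numerically: with NumPy's normalized sinc, $\mathrm{sinc}(t)=\sin(\pi t)/(\pi t)$, the Gram matrix at arbitrary sample points is positive semidefinite up to round-off:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(-5.0, 5.0, 40))   # arbitrary (non-uniform) sample locations
K = np.sinc(x[:, None] - x[None, :])      # np.sinc(t) = sin(pi*t)/(pi*t)
min_eig = np.linalg.eigvalsh(K).min()     # smallest eigenvalue of the Gram matrix
```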
Design of Algorithm 1 {#design-of-algorithm-1 .unnumbered} --------------------- In order to rewrite the cost $\frac{1}{2}\|({\mathbf Z}-{\mathbf C}{\mathbf B}^T)\odot{\mathbf W}\|_F^2+\frac{\mu}{2} \left[\textrm{Tr}({\mathbf C}^T{\mathbf K_{\mathcal X}}^{-1}{\mathbf C})+\textrm{Tr}({\mathbf B}^T{\mathbf K_{\mathcal Y}}^{-1}{\mathbf B})\right]$ in terms of $\bci={\mathbf C}\bei$ and $\bbi={\mathbf B}\bei$, representing the $i$-th columns of matrices ${\mathbf C}$ and ${\mathbf B}$, respectively, define $\bbCi={\mathbf C}-\bci\bei^T$ and decompose ${\mathbf C}{\mathbf B}^T=\bbCi{\mathbf B}^T+\bci\bbi^T$. Then rewrite the cost as $$\begin{aligned} &\frac{1}{2}\|({\mathbf Z}_i-\bci\bbi^T)\odot{\mathbf W}\|_F^2+\frac{\mu}{2}\bci^T{\mathbf K_{\mathcal X}}^{-1}\bci\label{app:cost_ci}\end{aligned}$$ after defining ${\mathbf Z}_i:={\mathbf Z}-\bbCi{\mathbf B}^T$ and discarding regularization terms not depending on $\bci$. Let $\vv({\mathbf W})$ denote the vector operator that concatenates columns of ${\mathbf W}$, and $\mathbf D:=\dd[\mathbf x]$ the diagonal matrix operator such that $d_{ii}=x_i$. The Hadamard product can be bypassed by defining $\Dw:=\dd[\vv({\mathbf W})]$, substituting $\|\mathbf X\|_F=\|\vv(\mathbf X)\|_2$, and using the following identities $$\begin{aligned} \vv({\mathbf W}\odot \mathbf X)&=\Dw\vv(\mathbf X),\nonumber\\ \vv(\mathbf X_i\bbi^T)&=(\bbi\otimes \mathbf I_M)\vv(\mathbf X_i)\label{app:vec_otimes} \end{aligned}$$ with $\otimes$ representing the Kronecker product. Applying to yields $$\begin{aligned} \frac{1}{2}\|\Dw\vv({\mathbf Z}_i)-\Dw(\bbi\otimes \mathbf I_M)\bci\|_2^2+\frac{\mu}{2} \bci^T{\mathbf K_{\mathcal X}}^{-1}\bci\label{app:cost_cidiag}\end{aligned}$$ Equating the gradient of w.r.t.
$\bci$ to zero and solving for $\bci$ yields $$\begin{aligned} &\bci=\mathbf H_i^{-1} (\bbi^T\otimes \mathbf I_M)\Dw\vv({\mathbf Z}_i)\nonumber\\&\mathbf H_i:= (\bbi^T\otimes \mathbf I_M)\Dw\Dw (\bbi\otimes \mathbf I_M) +\mu {\mathbf K_{\mathcal X}}^{-1} \label{app:solution_ci_cmplx}\end{aligned}$$ It follows from that $(\bbi^T\otimes \mathbf I_M)\Dw\vv({\mathbf Z}_i)=({\mathbf W}\odot {\mathbf Z}_i)\bbi$, and it can be established by inspection that $(\bbi^T\otimes \mathbf I_M)\Dw\Dw (\bbi\otimes \mathbf I_M)=\sum_{n=1}^N b_{in}^2 \dd[\mathbf w_n]=\dd\left[ {\mathbf W}(\bbi\odot\bbi)\right]$, so that reduces to $\bci= \left( \dd\left[ {\mathbf W}(\bbi\odot\bbi)\right]+\mu{\mathbf K_{\mathcal X}}^{-1}\right)^{-1}({\mathbf W}\odot {\mathbf Z}_i)\bbi$, coinciding with the update for $\bci$ in Algorithm 1. The corresponding update for $\bbi$ follows from parallel derivations. [1]{} \[Online\]. Available: http://internet2.edu/observatory/archive/data-collections.html. *IEEE Standard for Info. Tech.-Telecomms. and Info. Exchange between Systems-Local and Metropolitan Area Nets., Part 11: Wir. LAN MAC and PHY Specifications,* IEEE Standard 802.11-2012, pp. 1-1184, Mar. 2012. J. Abernethy, F. Bach, T. Evgeniou, and J.-P. Vert, “A new approach to collaborative filtering: Operator estimation with spectral regularization,” *J. Machine Learning Res.,* vol. 10, pp. 803-826, Mar. 2009. P. Agrawal and N. Patwari, “Correlated link shadow fading in multihop wireless networks,” *IEEE Trans. on Wireless Comm.,* vol. 8, no. 8, pp. 4024-4036, Aug. 2009. S. Boyd and L. Vandenberghe, *Convex Optimization,* Cambridge University Press, 2004. J. A. Bazerque, G. Mateos, and G. B. Giannakis, “Group-Lasso on splines for spectrum cartography," *IEEE Trans. on Signal Proc.,* vol. 59, no. 10, pp. 4648-4663, Oct. 2011. J. A. Bazerque, G. Mateos, and G. B. Giannakis, “Nonparametric low-rank tensor imputation,” *IEEE Workshop on Stat. Signal Proc.,* Ann Arbor, MI, Aug. 5-8, 2012. E. J.
--- abstract: 'The Macaulay2 package `DecomposableSparseSystems` implements methods for studying and numerically solving decomposable sparse polynomial systems. We describe the structure of decomposable sparse systems and explain how the methods in this package may be used to exploit this structure, with examples.' address: - | T. Brysiewicz\ Max Planck Institute for Mathematics in the Sciences\ Inselstr. 22, 04103\ Leipzig, Germany - | J. I. Rodriguez\ Department of Mathematics\ University of Wisconsin\ Madison, WI 53706\ USA - | F. Sottile\ Department of Mathematics\ Texas A&M University\ College Station\ Texas  77843\ USA - | T. Yahl\ Department of Mathematics\ Texas A&M University\ College Station\ Texas  77843\ USA author: - Taylor Brysiewicz - Jose Israel Rodriguez - Frank Sottile - Thomas Yahl bibliography: - 'jsag.bib' title: Decomposable sparse polynomial systems --- Introduction ============ Améndola and Rodriguez [@AmendolaRodriguez] gave numerical methods to efficiently solve systems of sparse polynomial equations in a family, when that family is decomposable (Definition \[D:decomposable\]). A consequence of Esterov’s study of Galois groups of systems of sparse polynomial equations [@Esterov] is that for sparse systems, recognizing and computing a decomposition is algorithmic. Solving a decomposable sparse system reduces to solving two smaller sparse polynomial systems. In [@SDSS], we presented algorithms to detect and compute such decompositions, and a recursive algorithm exploiting decomposability for solving a decomposable sparse polynomial system using numerical homotopy continuation. The Macaulay2 package `DecomposableSparseSystems` implements methods for decomposable sparse polynomial systems. These include methods to detect decomposability, to compute a decomposition, and a recursive procedure to compute numerical solutions to a given decomposable sparse system. 
Detection and computation of decompositions use integer linear algebra, including computing a Smith normal form and the corresponding monomial changes of variables. Numerical homotopy continuation is used to compute solutions. When no further decompositions are possible, the algorithm solves multivariate systems using numerical software chosen by the user (default: `PHCPACK` [@V99]), and solves univariate polynomials using companion matrices. Using the methods in `DecomposableSparseSystems` to solve a decomposable system allows for quicker solving and more accurate solution counts than calling other solvers. One reason is that after each decomposition, the child systems always involve either fewer variables or polynomials of smaller degree. The cost of the methods in `DecomposableSparseSystems` is low as they rely only on linear algebra and numerical homotopy algorithms. Decomposable Sparse Polynomial Systems ====================================== A branched cover is a dominant map $\pi\colon X\to Y$ of irreducible varieties $X$ and $Y$ of the same dimension. There is a number $d$ (the degree of $\pi$) and an open dense subset $V$ of $Y$ such that $\pi^{-1}(v)$ consists of $d$ points for $v\in V$. When $d>1$, the branched cover is nontrivial. \[D:decomposable\] A branched cover $\pi\colon X\to Y$ is decomposable if it is a composition of nontrivial branched covers. That is, if there is a dense open subset $U\subset Y$ and a variety $Z$ such that $\pi^{-1}(U)\to U$ factors as $$\pi^{-1}(U)\ \longrightarrow\ Z\ \longrightarrow\ U\,,$$ with each map a nontrivial branched cover. $\diamond$ In general it is not easy to determine if a branched cover is decomposable, or even to compute a decomposition for a decomposable branched cover. (See [@AmendolaRodriguez Section 5.4] and [@SDSS Section 1.2] for examples and a discussion.) An integer vector $\alpha\in{\mathbb{Z}}^n$ is the exponent of a (Laurent) monomial ${{\color{RoyalBlue}x^\alpha}} := x_1^{\alpha_1}\dotsb x_n^{\alpha_n}$.
A (complex) linear combination of monomials $\sum c_\alpha x^\alpha$ is a (Laurent) polynomial. Monomials are multiplicative maps $({\mathbb{C}}^\times)^n\to{\mathbb{C}}^\times$ and polynomials are maps $({\mathbb{C}}^\times)^n\to{\mathbb{C}}$. For a finite set ${\mathcal{A}}\subset{\mathbb{Z}}^n$ of exponents, the set of all polynomials whose monomials have exponents contained in ${\mathcal{A}}$ (have support ${\mathcal{A}}$) forms the vector space [[${\mathbb{C}}^{\mathcal{A}}$]{}]{}. Given a list ${{{\mathcal{A}}_\bullet}}= ({\mathcal{A}}_1,\dotsc,{\mathcal{A}}_n)$ of finite subsets of ${\mathbb{Z}}^n$, write [[${\mathbb{C}}^{{{\mathcal{A}}_\bullet}}$]{}]{} for the vector space ${\mathbb{C}}^{{\mathcal{A}}_1}\oplus\dotsb\oplus{\mathbb{C}}^{{\mathcal{A}}_n}$ of lists $F=(f_1,\dotsc,f_n)$ of polynomials with $f_i$ having support ${\mathcal{A}}_i$. Such a list $F\in{\mathbb{C}}^{{{\mathcal{A}}_\bullet}}$ is a function $F\colon({\mathbb{C}}^\times)^n\to{\mathbb{C}}^n$, and $F=0$ is a system of sparse polynomials with support ${{{\mathcal{A}}_\bullet}}$ whose solutions are $F^{-1}(0)$. \[Ex:First\] Let ${{{\mathcal{A}}_\bullet}}= ({\mathcal{A}}_1,{\mathcal{A}}_2)$ be the pair of supports in ${\mathbb{Z}}^2$ illustrated in Figure \[Fig:one\].
![A pair of supports: ${\mathcal{A}}_1$ (left) and ${\mathcal{A}}_2$ (right).[]{data-label="Fig:one"}](figures/A1.eps "fig:") ![A pair of supports.[]{data-label="Fig:one"}](figures/A2.eps "fig:") The corresponding vector spaces of polynomials are $$\begin{aligned} {\mathbb{C}}^{{\mathcal{A}}_1} &=& \left\{a_1+a_2xy^2+a_3x^2y+a_4x^3y^3\mid a_i\in{\mathbb{C}}\right\}\,,\\ {\mathbb{C}}^{{\mathcal{A}}_2} &=& \left\{b_1+b_2y^3+b_3xy^2+b_4x^4y^2 \mid b_j\in{\mathbb{C}}\right\}\,,\end{aligned}$$ and ${\mathbb{C}}^{{{\mathcal{A}}_\bullet}}$ is the space of systems of the form $$F\ =\ \begin{pmatrix} a_1+a_2xy^2+a_3x^2y+a_4x^3y^3 \\ b_1+b_2y^3+b_3xy^2+b_4x^4y^2 \end{pmatrix}\,, \quad a_i,b_j\ \in\ {\mathbb{C}}\,.$$ In `DecomposableSparseSystems`, the family ${\mathbb{C}}^{{{\mathcal{A}}_\bullet}}$ is encoded by a list of matrices whose column vectors are the exponent vectors of each polynomial. Given a system $F\in{\mathbb{C}}^{{{\mathcal{A}}_\bullet}}$, these data can be extracted from a given system via the Macaulay2 function `exponents`. $\diamond$ The Bernstein-Kushnirenko Theorem [@Bernstein] provides a sharp upper bound on the number of solutions to a system of sparse polynomials. Denote the convex hull of a set ${\mathcal{A}}\subseteq{\mathbb{R}}^n$ by [[$\text{conv}({\mathcal{A}})$]{}]{}. Given a list of supports ${{{\mathcal{A}}_\bullet}}=({\mathcal{A}}_1,\dotsc,{\mathcal{A}}_n)$, let [[$\operatorname{{\rm MV}}({{{\mathcal{A}}_\bullet}})$]{}]{} be the mixed volume of the list $(\text{conv}({\mathcal{A}}_1),\dotsc,\text{conv}({\mathcal{A}}_n))$. Let ${{{\mathcal{A}}_\bullet}}$ be a list of $n$ finite subsets of ${\mathbb{Z}}^n$.
For $F\in{\mathbb{C}}^{{{\mathcal{A}}_\bullet}}$, the number of isolated solutions in $({\mathbb{C}}^\times)^n$ to the system $F=0$ is bounded above by $\operatorname{{\rm MV}}({{{\mathcal{A}}_\bullet}})$ and this bound is achieved for $F$ lying in a dense, open subset of ${\mathbb{C}}^{{{\mathcal{A}}_\bullet}}$. Define ${{\color{RoyalBlue}X_{{{\mathcal{A}}_\bullet}}}}\subset({\mathbb{C}}^\times)^n\times {\mathbb{C}}^{{{\mathcal{A}}_\bullet}}$ to be the set of pairs $(x,F)$ such that $F(x)=0$. For $F\in{\mathbb{C}}^{{{\mathcal{A}}_\bullet}}$, the fiber $\pi^{-1}(F)$ of the map $\pi\colon X_{{{\mathcal{A}}_\bullet}}\to{\mathbb{C}}^{{{\mathcal{A}}_\bullet}}$ consists of solutions to $F=0$. By the Bernstein-Kushnirenko Theorem the map $\pi$ has degree $\operatorname{{\rm MV}}({{{\mathcal{A}}_\bullet}})$. When $\operatorname{{\rm MV}}({{{\mathcal{A}}_\bullet}})\ge 1$, it is a branched cover. When the branched cover $\pi:X_{{{\mathcal{A}}_\bullet}}\to{\mathbb{C}}^{{{\mathcal{A}}_\bullet}}$ is decomposable, we say the sparse system $F\in{\mathbb{C}}^{{{\mathcal{A}}_\bullet}}$ is decomposable. Decomposability depends only on the support ${{{\mathcal{A}}_\bullet}}$ of a system. There are two transparent ways for a sparse system to decompose. ### Lacunary {#lacunary .unnumbered} A system $F\in{\mathbb{C}}^{{{\mathcal{A}}_\bullet}}$ is lacunary if there is a surjective monomial map $\Phi\colon({\mathbb{C}}^\times)^n\to({\mathbb{C}}^\times)^n$ such that $F = G\circ\Phi$ for some sparse polynomial system $G$. We require that $\Phi$ be nontrivial in that its kernel is not the identity subgroup. A lacunary system $F=G\circ\Phi=0$ can be solved by computing solutions, $z_1,\dotsc,z_d$, to the system $G=0$ and then computing the fibres $\Phi^{-1}(z_1),\dotsc,\Phi^{-1}(z_d)$. In appropriate coordinates, $\Phi$ is diagonal, and $\Phi^{-1}(z)$ is obtained by extracting roots of the components of $z$. \[Ex:lacunary\] Consider the following system with support from Example \[Ex:First\].
$$F(x,y) \ =\ \begin{pmatrix} 1-2xy^2+3x^2y-4x^3y^3\\ 2+3y^3+5xy^2+7x^4y^2 \end{pmatrix} \ =\ \begin{pmatrix} 0\\ 0 \end{pmatrix}$$ It is lacunary as it is the composition of the following maps. $$G(s,t)\ =\ \begin{pmatrix} 1-2st^2+3st-4s^2t^3\\ 2+3st^3+5st^2+7s^2t^2 \end{pmatrix}\,, \quad \Phi(x,y)\ =\ (x^3,x^{-1}y)\,.$$ This can be detected via the methods in `DecomposableSparseSystems`. The method `isLacunary` extracts the set of supports of the system and computes the Smith normal form of a matrix associated to these supports to determine whether the system is lacunary. $\diamond$ ### Triangular {#triangular .unnumbered} A system $F\in{\mathbb{C}}^{{{\mathcal{A}}_\bullet}}$ is triangular if there exists $k<n$ so that after a monomial change of variables, the system $F$ has the form $$F\ =\ (F_1(x_1,\dotsc,x_k),\dotsc,F_k(x_1,\dotsc,x_k),F_{k+1}(x_1,\dotsc,x_n),\dotsc,F_n(x_1,\dotsc,x_n))\,.$$ Solutions to triangular systems are computed by first computing the solutions $z_1,\dotsc,z_d$ of the square subsystem $(F_1,\dotsc,F_k)=0$. For each solution $z_i$, a residual system $(F_{k+1}(z_i,x_{k+1},\dotsc,x_n),\dotsc,F_n(z_i,x_{k+1},\dotsc,x_n))$ is obtained by substituting $z_i$ into the original system for the first $k$ variables. Solutions to the original system are obtained by solving the residual system and then applying a homotopy algorithm as described in [@SDSS]. Consider the system $$F(x,y)\ =\ \begin{pmatrix} y^2-2x+3x^2y\\ 2+3x^2y+5x^4y^2 \end{pmatrix} \ =\ \begin{pmatrix} 0\\ 0 \end{pmatrix}\,.$$ Figure \[Fig:two\] shows the supports. ![Triangular support.[]{data-label="Fig:two"}](figures/A3.eps "fig:") ![Triangular support.[]{data-label="Fig:two"}](figures/A4.eps "fig:") This system is triangular as the second polynomial is quadratic in the monomial $x^2y$. The method `isTriangular` detects this subsystem.
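The Smith-normal-form computation behind `isLacunary` is plain integer linear algebra and can be reproduced outside Macaulay2. For the lacunary example above, the invariant factors of the $2\times 6$ matrix of nonzero exponent vectors expose the index-3 sublattice of ${\mathbb{Z}}^2$; the following stdlib-only Python fragment is our sketch of that computation (via gcds of minors), not the package's code.

```python
from functools import reduce
from itertools import combinations
from math import gcd

# Nonzero exponent vectors of F in the lacunary example (the origin
# contributes nothing to the lattice the exponents span):
cols = [(1, 2), (2, 1), (3, 3), (0, 3), (1, 2), (4, 2)]

# Invariant factors of the 2 x 6 exponent matrix via gcds of minors:
# d1 = gcd of all entries, and d1 * d2 = gcd of all 2 x 2 minors.
d1 = reduce(gcd, (abs(e) for col in cols for e in col))
d2 = reduce(gcd, (abs(a[0] * b[1] - a[1] * b[0])
                  for a, b in combinations(cols, 2))) // d1

print([d1, d2])  # [1, 3]: the exponents span an index-3 sublattice of Z^2
```

An invariant factor exceeding 1 certifies lacunarity; here the lattice index $1\cdot 3 = 3$ agrees with the degree of $\Phi$ above, the absolute determinant of its exponent matrix $\left(\begin{smallmatrix}3&-1\\0&1\end{smallmatrix}\right)$.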
$\diamond$ A consequence of Esterov’s study of Galois groups of sparse polynomial systems [@Esterov] and Pirola and Schlesinger’s result that a branched cover is decomposable if and only if its Galois group is imprimitive [@PirolaSchlesinger] is that a sparse polynomial system is decomposable if and only if it is either lacunary or triangular. In each case, the solutions to the original system are computed via solutions to simpler systems. The methods in `DecomposableSparseSystems` iteratively decompose these sparse polynomial systems to efficiently solve them. Main method: solveDecomposableSystem ==================================== The main method implemented in the package `DecomposableSparseSystems` is `solveDecomposableSystem`, which implements Algorithm 9 in [@SDSS]. It takes as input a sparse polynomial system $F \in \mathbb{C}^{{{{\mathcal{A}}_\bullet}}}$ and outputs all solutions to $F=0$ in the algebraic torus. It recursively checks whether the input sparse polynomial system is decomposable, computes the decomposition, and then calls itself on each portion of the decomposition. When the input is not decomposable it solves multivariate polynomial systems with the numerical solver given by the option `Software` (default: `PHCPACK`) and it solves univariate polynomial systems using companion matrices. For complete details see [@SDSS Section 3.1]. Using the main method --------------------- Consider the system $$\begin{aligned} F\ =\ \begin{pmatrix} 2+xyz-x^2y\\ 4-y^2z+2xz^2-3x^2z\\ 1-yz^2-3xyz \end{pmatrix} \ =\ \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}.\end{aligned}$$ This system is supported on the triple ${{{\mathcal{A}}_\bullet}}=({\mathcal{A}}_1,{\mathcal{A}}_2,{\mathcal{A}}_3)$ shown in Figure \[Fig:triple\].
![Support of $F$: the triple ${{{\mathcal{A}}_\bullet}}$ (left), the coplanar supports ${\mathcal{A}}_1$ and ${\mathcal{A}}_3$ (center), and ${\mathcal{A}}_2$ (right).[]{data-label="Fig:triple"}](figures/3DSupport.eps "fig:") ![Support of $F$.[]{data-label="Fig:triple"}](figures/3DAC.eps "fig:") ![Support of $F$.[]{data-label="Fig:triple"}](figures/3DB.eps "fig:") The method `isDecomposable` determines that this system is decomposable. In particular, it is triangular with a subsystem indexed by the first and third polynomials. This can be observed in the figure as the supports ${\mathcal{A}}_1$ and ${\mathcal{A}}_3$ span a common plane. It is also lacunary, as the exponent vectors lie in the sublattice of ${\mathbb{Z}}^3$ of index 3 generated by the columns of $\left(\begin{smallmatrix}1&1&2\\ 1&0&0 \\ 1&2&1\end{smallmatrix}\right)$. The solutions to $F=0$ are found via the main method, `solveDecomposableSystem`. Our main method also accepts a two-argument input $\texttt{(A,C)}$ where $\texttt{A}$ is a list of matrices whose columns give the supports of a system of (Laurent) polynomial equations, and $\texttt{C}$ is a list whose $i$-th entry is the list of coefficients for the $i$-th polynomial equation. We demonstrate some of the other types of inputs here, and leave details to the documentation.
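The two-argument `(A,C)` encoding is easy to mirror in any language: one exponent matrix and one coefficient list per polynomial. The following Python sketch encodes and evaluates the system of Example \[Ex:lacunary\] (the helper `evaluate` is ours, purely illustrative, not a package function):

```python
# Each support is a list of exponent columns; coefficients are parallel lists.
A = [[(0, 0), (1, 2), (2, 1), (3, 3)],   # support of f1
     [(0, 0), (0, 3), (1, 2), (4, 2)]]   # support of f2
C = [[1, -2, 3, -4],
     [2, 3, 5, 7]]

def evaluate(A, C, x):
    """Evaluate the sparse system sum_a c_a * x^a at a point of the torus."""
    def mono(a):
        v = 1
        for xi, ai in zip(x, a):
            v *= xi ** ai          # Laurent exponents may be negative
        return v
    return [sum(c * mono(a) for a, c in zip(supp, coeffs))
            for supp, coeffs in zip(A, C)]

print(evaluate(A, C, (1, 1)))  # [-2, 17]
```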
Having this tolerance is necessary, as our methods are for Laurent polynomials with solutions in the complex torus $({\mathbb{C}}^\times)^n$, while the solvers we call may return solutions in ${\mathbb{C}}^n$ that are not in the torus. When set to `1`, the option `Verify` (default: `0`) significantly increases the probability that `solveDecomposableSystem` computes the correct number of solutions. It does this by checking that `Software` computes $\operatorname{{\rm MV}}({{{\mathcal{A}}_\bullet}})$ solutions to any system $F$ with support ${{{\mathcal{A}}_\bullet}}$, where $\operatorname{{\rm MV}}({{{\mathcal{A}}_\bullet}})$ is probabilistically determined using `mixedVolume` in the package `Polyhedra` [@Polyhedra]. If the mixed volume according to `Polyhedra` and the number of solutions do not agree, then the missing solutions are searched for using techniques related to those in `MonodromySolver` [@MonodromySolver]. Lastly, by setting $\texttt{Strategy}$ to $\texttt{FromGeneric}$, the user may compute the solutions to $F$ by first solving an internally generated random instance and then tracking a parameter homotopy [@LSY1989] from that instance to $F$. We conclude by using the options `Verify` and `Strategy` on an example with 6000 solutions.
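For bivariate systems the mixed-volume count used by `Verify` can be checked by hand, since in the plane $\operatorname{{\rm MV}}({\mathcal{A}}_1,{\mathcal{A}}_2) = \text{area}(P+Q)-\text{area}(P)-\text{area}(Q)$ for the Newton polygons $P$ and $Q$. The following stdlib-only Python sketch (our own sanity check, independent of `mixedVolume` in `Polyhedra`) applies this to the supports of Example \[Ex:First\]:

```python
from itertools import product

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(points):
    """Andrew's monotone chain: convex hull in counterclockwise order."""
    pts = sorted(set(points))
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(reversed(pts))
    return lower[:-1] + upper[:-1]

def area2(points):
    """Twice the Euclidean area of the convex hull (shoelace formula)."""
    h = hull(points)
    return abs(sum(h[i][0] * h[(i + 1) % len(h)][1]
                   - h[(i + 1) % len(h)][0] * h[i][1] for i in range(len(h))))

A1 = [(0, 0), (1, 2), (2, 1), (3, 3)]
A2 = [(0, 0), (0, 3), (1, 2), (4, 2)]
PQ = [(p[0] + q[0], p[1] + q[1]) for p, q in product(A1, A2)]  # Minkowski sum

mv = (area2(PQ) - area2(A1) - area2(A2)) // 2
print(mv)  # 15: the Bernstein-Kushnirenko bound for this family
```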
--- --- Muon bremsstrahlung photons converted in front of the DELPHI main tracker (TPC) in dimuon events at LEP1 were studied in two photon kinematic ranges: $0.2 < E_\gamma \leq 1$ GeV and transverse momentum with respect to the parent muon $p_T < 40$ MeV/$c$, and $1 < E_\gamma \leq 10$ GeV and $p_T < 80$ MeV/$c$. Good agreement of the observed photon rate with predictions from QED for the muon inner bremsstrahlung was found, contrary to the anomalous soft photon excess that has been observed recently in hadronic $Z^0$ decays. The obtained ratios of the observed signal to the predicted level of the muon bremsstrahlung are $1.06 \pm 0.12 \pm 0.07$ in the photon energy range $0.2 < E_\gamma \leq 1$ GeV and $1.04 \pm 0.09 \pm 0.12$ in the photon energy range $1 < E_\gamma \leq 10$ GeV. The bremsstrahlung dead cone is observed for the first time in direct photon production at LEP.
Introduction ============ Recent observation of anomalous soft photon production in hadronic $Z^0$ decays collected in the DELPHI experiment at LEP1 [@aspdel] has demonstrated the persistence of the soft photon anomaly found earlier in several fixed target experiments with high energy hadronic beams [@wa27; @na22; @wa83; @wa91; @wa102]. The photon kinematic range was defined in [@aspdel] as follows: $0.2 < E_{\gamma} \leq 1$ GeV, $p_T < 80$ MeV/$c$, the $p_T$ being the photon transverse momentum with respect to the parent jet direction. Though the reaction $e^+e^- \rightarrow Z^0 \rightarrow hadrons$ presents a distinct mechanism of hadron production as compared to [@wa27; @na22; @wa83; @wa91; @wa102], the observed soft photon production characteristics were found in [@aspdel] to be very close to those reported in [@wa27; @na22; @wa83; @wa91; @wa102], both for the measured production rate and for the observed ratio of the rate to the inner hadronic bremsstrahlung.
The latter was expected to be the main source of the direct soft photons in the kinematic ranges under study (see [@lan; @low; @gribov]), while the observed signals were found in [@aspdel; @wa27; @na22; @wa83; @wa91; @wa102] to be several times higher than the bremsstrahlung predictions. No theoretical explanation of this excess is available so far; reviews of the theoretical approaches to the problem can be found in [@pis; @lich] (see also the references \[13-33\] in [@aspdel]). From the experimental analysis, given the similarity of the soft photon production characteristics in both classes of experiments, the conclusion was drawn in [@aspdel] that the excess photons are created during the process of hadronization of quarks, i.e. their origin is strongly restricted to reactions of hadron production. If this ansatz is correct, good agreement should be found between theory and experiment for direct soft photon production in reactions of pure electroweak nature. What is the experimental situation in this field? The electron inner bremsstrahlung in $e^+ e^-$ collisions (initial state radiation, ISR) was an important (and rather inconvenient) effect at LEP, with which all the LEP experiments had to contend. No deviation of the ISR characteristics from those expected from theory was observed, either at $Z^0$ or at high energy (see for example the DELPHI studies [@isr]). Therefore the situation with the electron inner bremsstrahlung can be considered as showing good agreement between theory and experiment. On the other hand, tests of QED with the muon inner bremsstrahlung, which appears as final state radiation (FSR) in $e^+ e^- \rightarrow \mu^+ \mu^-$ events, were scarce at LEP. There were only two studies of photon production in $Z^0 \rightarrow \mu^+ \mu^-$ events at LEP1 [@delphimu; @opalmu] and a single study of $e^+ e^- \rightarrow \mu^+ \mu^-$ events at LEP2 [@delphimu2] [^1].
All these studies aimed at the separation of rather hard photons, isolated from the neighbouring muon. So, the DELPHI analysis of final state radiation from muons at LEP1 [@delphimu] was restricted to the photon kinematic range of $E_{\gamma} > 2$ GeV, $\theta_{\mu\gamma} > 5^{\circ}$, i.e. to the transverse momenta with respect to the muon direction $p_T > 174$ MeV/$c$. In [@delphimu2] the minimum value of the angle $\theta_{\mu\gamma}$ was increased up to $15^{\circ}$ (keeping the same photon energy threshold), tripling the minimum photon $p_T$. The OPAL analysis at LEP1 [@opalmu] used photons of $E_{\gamma} > 0.9$ GeV and $\theta_{\mu\gamma} > 200$ mrad, i.e. the photon transverse momenta with respect to the muon direction were $p_T > 179$ MeV/$c$. Thus, an analysis of the muon inner bremsstrahlung in the soft photon kinematic range close to that analyzed in [@aspdel] is completely missing at LEP. This motivated us to study the reaction $$e^+e^- \rightarrow Z^0 \rightarrow \mu^+ \mu^- n\gamma, ~~~~~~~~~n\geq 1$$ at LEP1 in a photon kinematic range similar to the one analyzed in [@aspdel] (with the photon transverse momentum being defined now with respect to the parent muon direction). In addition to the low energy (LE) band of $0.2 < E_{\gamma} \leq 1$ GeV explored in [@aspdel], a higher energy (HE) band of $ 1 < E_{\gamma} \leq 10$ GeV was also used in the analysis, being restricted however to the photons of small transverse momentum with respect to the parent muon direction, $p_T < 80$ MeV/$c$. The $p_T$ range of the LE band chosen for the definition of the bremsstrahlung signal was taken narrower in this work as compared to that in [@aspdel], namely $p_T < 40$ MeV/$c$. 
This choice was motivated by the fact that the photon angular variable used in this analysis, the photon polar angle relative to the parent muon direction, can be measured much more accurately as compared to the angular variable used in [@aspdel], the photon polar angle relative to the parent jet direction, and this confined most of the LE bremsstrahlung photons down to the mentioned $p_T$ range. The results obtained in this study are presented both uncorrected and corrected for the photon detection efficiency. The presentation of the uncorrected results is motivated by their better statistical accuracies and smaller systematic uncertainties in the absolute photon rates. Theoretical predictions for the muon inner bremsstrahlung ========================================================== In electroweak reactions like (1) the inner bremsstrahlung is a process of direct photon production calculated via purely QED machinery. The production rates for the bremsstrahlung photons from colliding $e^+ e^-$ (ISR) and from final $\mu^+ \mu^-$ pairs (FSR) in the $p_T$ range under study can be calculated at once using a universal formula descending from Low [@low] with a modification suggested by Haissinski [@hais]: $$\frac{dN_{\gamma}}{d^{3}\vec{k}} = \frac{\alpha}{(2 \pi)^2} \frac{1}{E_\gamma} \int d^3 \vec{p}_{\mu} \sum_{i,j} \eta_{i} \eta_{j} \frac{(\vec{p}_{i \bot} \cdot \vec{p}_{j \bot}) }{ ( P_{i} K ) ( P_{j} K )} \frac{ d N_{\mu}}{ d^{3} \vec{p}_{\mu} }$$ where $K$ and $\vec{k}$ denote photon four- and three-momenta, $P$ are the 4-momenta of the beam $e^+, e^-$ and the muon involved, and $\vec{p}_\mu$ is the 3-momentum of the muon; $\vec{p}_{i \bot} = \vec{p}_i-(\vec{n} \cdot \vec{p}_i) \cdot \vec{n}$ and  $\vec{n}$ is the photon unit vector, $\vec{n} = \vec{k}/k $; $\eta=1$ for the beam electron and for the outgoing $\mu^+$, $\eta=-1$ for the beam positron and for the outgoing $\mu^-$, and the sum is extended over both beam particles and the parent muon (formula (2) is 
presented in the form of the photon production rate per muon); the last factor in the integrand is a differential production rate of the parent muon. As can be seen, formula (2) is of the lowest (leading) order in $\alpha$. Higher order radiative corrections to it can be evaluated using exponentiated photon spectra in the LE and HE bands. In the accepted regions of low $p_T$ the effects of the exponentiation were found to be rather small, as considered in Sect. 6.2. To a great extent, formula (2) is used in this paper specifically to enable a comparison with the corresponding formula applied for the calculation of the inner hadronic bremsstrahlung in hadronic decays of $Z^0$ [@aspdel] (cf. the analogous formulae in [@wa27; @na22; @wa83; @wa91; @wa102]): $$\frac{dN_{\gamma}}{d^{3}\vec{k}} = \frac{\alpha}{(2 \pi)^2} \frac{1}{E_\gamma} \int d^3 \vec{p}_{1} . . . d^3 \vec{p}_{N} \sum_{i,j} \eta_{i} \eta_{j} \frac{(\vec{p}_{i \bot} \cdot \vec{p}_{j \bot}) }{ ( P_{i} K ) ( P_{j} K )} \frac{ d N_{h}}{ d^{3} \vec{p}_{1} ... d^{3} \vec{p}_{N}}$$ where $K$ and $\vec{k}$ denote again photon four- and three-momenta, $P$ are the 4-momenta of the beam $e^+, e^-$ and $N$ charged outgoing hadrons, and $\vec{p}_1$ ... $\vec{p}_N$ are the 3-momenta of the hadrons; $\eta=1$ for the beam electron and for positive outgoing hadrons, $\eta=-1$ for the beam positron and negative outgoing hadrons, and the sum is extended over all the $N+2$ charged particles involved; the last factor in the integrand is a differential hadron production rate (when calculating the photon production rate per jet only hadrons lying in the forward hemisphere of a given jet enter the sum). Calculations performed with formulae (2,3) show that the inner bremsstrahlung rate from one muon is approximately equal, in the kinematic region under study, to the predicted inner hadronic bremsstrahlung from a whole hadronic jet of a $Z^0$ hadronic decay. 
To a great extent, this is a consequence of the coherence of the photon radiation from the individual radiation sources, the charged hadrons produced in the fragmentation process. The contribution of the ISR to these rates is small, being below 1% in the photon kinematic range chosen for the analysis. This smallness is easy to understand: although the ISR from electron/positron beams is much more intense than the ISR from hadron beams in experiments [@wa27; @na22; @wa83; @wa91; @wa102], where it contributed a significant amount to the detected photon rate, all the extra photons in an experiment with colliding $e^+ e^-$ are emitted at very small polar angles with respect to the beam directions, with the angular distribution peaking at $\Theta_{\gamma} = \sqrt{3}/\Gamma$, where $\Gamma$ is a beam Lorentz factor ($\Gamma = 0.89 \times 10^5$ at the $Z^0$ peak), thus yielding few photons in the barrel region used in our analysis. The muon bremsstrahlung radiation (FSR) has the same angular behaviour of the photon production rate versus the photon polar angle relative to the parent muon direction (the photon production angle, $\theta_\gamma$), with $\Gamma$ being in this case a muon Lorentz factor. For the muons from $Z^0$ decays at rest the $\Gamma = 4.3 \times 10^2$ corresponds to the peak position at 4.0 mrad. Note that the position of the peak does not depend on the bremsstrahlung photon energy, since the dependences of the photon production rate on the photon energy and the photon production angle are factorized in formulae (2,3). The turnover of the muon bremsstrahlung angular distribution at the peak value and its vanishing at $\theta_{\gamma} \rightarrow 0$ is termed the dead cone effect. This behaviour is illustrated by Fig. 1a where the initial part of the production angle distribution for the FSR of the reaction (1) is shown, generated [^2] with formula (2). 
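The quoted Lorentz factors and the peak position follow from elementary kinematics; the short Python script below is our own cross-check using rounded PDG masses, not part of the analysis code:

```python
from math import sqrt

m_Z, m_mu, m_e = 91.19, 0.10566, 0.000511   # masses in GeV (rounded PDG values)

# Beam Lorentz factor at the Z^0 peak (E_beam = m_Z / 2):
gamma_beam = (m_Z / 2) / m_e
# Muon Lorentz factor for Z^0 -> mu+ mu- decays at rest:
gamma_mu = (m_Z / 2) / m_mu

# The bremsstrahlung angular distribution peaks at sqrt(3)/Gamma:
peak_mrad = sqrt(3) / gamma_mu * 1e3

print(f"{gamma_beam:.2e}")   # 8.92e+04, i.e. 0.89 x 10^5
print(f"{gamma_mu:.0f}")     # 432, i.e. 4.3 x 10^2
print(f"{peak_mrad:.1f}")    # 4.0 mrad
```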
The observation of the dead cone presents an experimental challenge requiring a highly accurate apparatus; the angular resolution of the opening angle between the measured muon and photon directions, which is necessary for the observation of the muon bremsstrahlung dead cone at LEP1, has to be of the order of 1$-$2 mrad. Experimental technique ====================== The DELPHI detector ------------------- The DELPHI detector is described in detail elsewhere [@delphi1; @delphi2]. The following is a brief description of the subdetector units relevant for this analysis. In the DELPHI reference frame the $z$ axis is taken along the direction of the $e^-$ beam. The angle $\Theta$ is the polar angle defined with respect to the $z$-axis, $\Phi$ is the azimuthal angle about this axis and $R$ is the distance from this axis. The TPC, the principal device used in this analysis, was the main tracker of the DELPHI detector; it covered the angular range from $20^{\circ}$ to $160^{\circ}$ in $\Theta$ and extended from 30 cm to 122 cm in $R$. It provided up to 16 space points for pattern recognition and ionization information extracted from 192 wires. The TPC together with other tracking devices (Vertex Detector, Inner Detector, Outer Detector and Forward Chambers) ensured a very good angular accuracy of the muon track reconstruction, which is a part of the overall angular resolution for the photon production angle. The distribution of the opening angles between the generated and reconstructed muon directions is shown in Fig. 1b; it can be characterized by the distribution mean of 0.42 mrad and its r.m.s. width of 0.37 mrad, which confines 90% of the entries to the 0$-$1 mrad interval. The identification of muons was based on the muon chambers (MUC) surrounding the detector, the hadron calorimeter (HCAL) and the electromagnetic calorimeter (High density Projection Chamber, HPC), as described in [@muid].
The Monte Carlo (MC) data set used in this analysis was produced with the DYMU3 generator [@dymu3]. Higher order radiative corrections to the reaction (1) total cross section were accounted for via the exponentiation procedure implemented in the generator. The generated dimuon events were passed through the DELPHI detector simulation program DELSIM [@delphi2]. Detection of photons -------------------- Photon conversions in front of the main DELPHI tracker (TPC) were reconstructed by an algorithm that examined the tracks reconstructed in the TPC. A search was made along each TPC track for the point where the tangent of the track trajectory points directly to the beam spot in the $R\Phi$ projection. Under the assumption that the opening angle of the electron-positron pair is zero, this point represented a possible photon conversion point at radius $R$. All tracks which had a solution $R$ more than one standard deviation away from the main vertex, as defined by the beam spot, were considered to be conversion candidates. If two oppositely charged conversion candidates were found with compatible conversion point parameters, they were linked together to form the converted photon. The following selection criteria were imposed: - the $\Phi$ difference between the two conversion points should be at most 30 mrad; - a possible difference between the polar angles $\Theta$ of the two tracks should be at most 15 mrad; - at least one of the tracks should have no associated hits in front of the reconstructed mean conversion radius. For the pairs fulfilling these criteria a $\chi^2$ was calculated from $\Delta \Theta, \Delta \Phi$ and the difference of the reconstructed conversion radii $\Delta R$ in order to find the best combinations in cases where there were ambiguous associations.
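Modeling a track in the $R\Phi$ projection as a circular arc makes the tangent-point search explicit: the tangent at a point $P$ passes through the beam spot $O$ exactly when the angle $OPC$ is a right angle ($C$ being the circle center), so the candidate conversion radius is $R=\sqrt{|OC|^2-\rho^2}$ for a track circle of radius $\rho$. The following Python fragment is our illustration of this geometric fact on toy numbers, not DELPHI reconstruction code:

```python
from math import sqrt

def conversion_radius(center, rho):
    """Distance from the beam spot (taken as the origin) to the point
    on a circular track where the tangent line passes through it."""
    d2 = center[0] ** 2 + center[1] ** 2
    assert d2 > rho ** 2, "beam spot inside the track circle: no tangent point"
    return sqrt(d2 - rho ** 2)

# Toy track circle in the R-Phi projection: center C, radius rho (cm).
C, rho = (20.0, 0.0), 10.0
R = conversion_radius(C, rho)

# Cross-check: the tangent point P has |OP| = R, |PC| = rho, and OP
# perpendicular to PC (Thales: P lies on the circle with diameter OC).
x = (C[0] ** 2 + R ** 2 - rho ** 2) / (2 * C[0])  # valid for C on the x-axis
y = sqrt(R ** 2 - x ** 2)
dot = x * (C[0] - x) + y * (C[1] - y)
print(round(R, 2), abs(dot) < 1e-9)  # 17.32 True
```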
A constrained fit was then applied to the electron-positron pair candidate, which forced a common conversion point with zero opening angle and collinearity between the momentum sum and the line from the beam spot to the conversion point. The photon detection efficiency, i.e. the conversion probability combined with the reconstruction efficiency, was determined with the hadronic MC data, since the converted photon sample in dimuon events was statistically insufficient for such a determination. The efficiencies were tabulated against three variables: $E_{\gamma}$, $\Theta_{\gamma}$ (the photon polar angle to the beam), and $\theta_{\gamma tk}$ (the photon opening angle to the closest track). The efficiency varied with the energy from zero at the 0.2 GeV detection threshold up to 4$-$6% at $E_{\gamma} \geq 1$ GeV, depending on the two other variables (for details see [@aspdel]). In order to reduce a possible difference in the reconstruction of the converted photons in the MC and the real data (originating from differences in the detector material distributions in the two data sets and from possible differences in their pattern recognition results) the recalibration procedure described in [@aspdel] was implemented, with the recalibration coefficients obtained with hadronic events. The angular precision of the photon direction reconstruction was studied using the dimuon MC events and was found to be of a Breit-Wigner shape, as expected for the superposition of many Gaussian distributions of varying width [@eadie]. The full widths ($\Gamma$’s) of the $\Delta \Theta_{\gamma}$ and $\Delta \Phi_{\gamma}$ distributions were $2.3 \pm 0.1$ mrad and $1.9 \pm 0.1$ mrad, respectively, for the combined 0.2$-$10 GeV interval (Figs. 1c, 1d). The full width of the distribution of the difference $\Delta \theta_{\gamma}$ between the generated and reconstructed muon-photon opening angles (which is the difference in the production angle $\theta_{\gamma}$ defined in Sect.
2 and therefore represents the overall angular resolution of the current analysis) was found to be $2.1 \pm 0.1$ mrad (Fig. 1e), thus providing a possibility for the observation of the muon bremsstrahlung dead cone. Moreover, this raw resolution can be improved substantially, though at the price of losing 50% of the converted photon statistics, by requiring the photon energy to exceed 1 GeV and the conversion radius to be greater than 25 cm. With these tighter cuts a resolution of 1.4 mrad (full width) was achieved; it was used in a particular case which required a high angular resolution and is described below (Sect. 7.3). The accuracy of the converted photon energy measurement was also studied with the dimuon MC events. In both energy bands it was at the level of $\pm$1.5% (the Breit-Wigner full width about 3%); this is illustrated by Fig. 1f, where the distribution of the relative difference between the generated and reconstructed photon energy is plotted for the LE photons. The resolution was checked with events of the (hadronic) real data by comparing the $\pi^0$ peak width of the $\gamma \gamma$ mass distribution from these data to the analogous one from the MC. Data selection ============== Selection of dimuon events --------------------------- The data selection was done with standard cuts aimed at the selection of dimuon events (cf. [@delphimu; @muid]), which are described below.
The consecutive application of these cuts reduced the MC sample of dimuon events by the factors indicated in parentheses: - the number of charged particles $N_{ch}$ had to be within the interval of $2 \leq N_{ch} \leq 5$, and the two highest momentum particles had to have $p > 15$ GeV/$c$   (0.894); - the polar angles of the two highest momentum particles had to be within the interval of 20$^{\circ} \leq \Theta \leq 160^{\circ}$   (0.962); - the impact parameters of the two highest momentum particles had to be less than 0.2 and 4.5 cm in the $R\Phi$ and $z$ projections, respectively   (0.993); - no additional charged particles with momenta greater than 10 GeV/$c$ were allowed, unless the fastest particle had a momentum greater than 40 GeV/$c$   (0.999); - the acollinearity of the two highest momentum particles had to be less than $10^{\circ}$   (0.989); - the two highest momentum particles had to be identified as muons using either the muon chambers (MUC), the hadron calorimeter (HCAL), or the electromagnetic calorimeter (HPC), by requiring associated hits in the muon chambers, or by energy deposition in the calorimeters consistent with a minimum ionizing particle   (0.825). The total reduction factor for the MC events was 0.696. A total of 122 812 real data (RD) events was selected with these cuts and compared to 373 918 selected MC events, corresponding, after the normalization of the equivalent MC luminosity to the integrated RD luminosity, to 121 000 expected events. Selection of photons --------------------- The standard selection of converted photons was done with the following cuts: - only converted photons with both $e^+, e^-$ arms reconstructed were considered; - 20$^{\circ} \leq \Theta_{\gamma} \leq 160^{\circ}$; - 5 cm $\leq R_{conv} \leq 50$ cm, where $R_{conv}$ means conversion radius; - 200 MeV $ < E_{\gamma} \leq 10$ GeV. 384 and 1097 converted photons were found with these cuts in the real data in the LE and HE energy bands, respectively.
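The converted-photon selection above is a simple sequence of cuts; a sketch (field names invented for illustration), including the LE/HE band assignment used throughout the analysis:

```python
def passes_photon_cuts(photon):
    """Sketch of the standard converted-photon selection listed above.

    'photon' is a dict with both-arms flag 'both_arms', polar angle
    'theta' (degrees), conversion radius 'r_conv' (cm) and energy 'e'
    (GeV); the field names are invented, not the DELPHI data model.
    """
    if not photon['both_arms']:
        return False
    if not 20.0 <= photon['theta'] <= 160.0:
        return False
    if not 5.0 <= photon['r_conv'] <= 50.0:
        return False
    return 0.2 < photon['e'] <= 10.0

def energy_band(photon):
    """LE (0.2-1 GeV) / HE (1-10 GeV) band assignment."""
    return 'LE' if photon['e'] <= 1.0 else 'HE'
```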
Of these, 127 and 265 photons are in the selected $p_T$ regions: $p_T <$ 40 MeV/$c$ for the LE band and $p_T <$ 80 MeV/$c$ for the HE band. For a particular analysis done to scrutinize the dead cone effect (described in Sect. 7.3), the photon energy was required to be between 1 and 10 GeV, and the conversion radius to be between 25 and 50 cm. Backgrounds ============ The following background sources within the $\mu^+ \mu^-$ event sample were considered: - External muon bremsstrahlung:\ the bremsstrahlung radiation from muons when they pass through the material of the experimental setup. - Secondary photons:\ when a high energy photon (of any origin) generates an $e^+ e^-$ pair in the detector material in front of the TPC, the pair particles may radiate (external) bremsstrahlung photons, which can enter our kinematic region. - “Degraded" photons:\ higher energy converted primary photons with degraded energy measurement due to the secondary emission of (external) bremsstrahlung by at least one of their electrons. DELSIM was invoked to reproduce these processes in the MC stream. Background photons (all labelled External Brems) were collected in the MC stream if any one of the following conditions was satisfied: - a given photon was absent at the event generator level, i.e. in the DYMU3 event record; - a given photon, found in the DYMU3 event record, migrated from outside a selected $p_T$ region into that region due to the energy degradation. $26.0 \pm 2.9$ and $61.5 \pm 4.5$ background photons (normalized to the RD statistics) were found in the selected $p_T$ regions: $p_T <$ 40 MeV/$c$ for the LE band and $p_T <$ 80 MeV/$c$ for the HE band, respectively. The background from $Z^0 \rightarrow \tau^+ \tau^-$ events was estimated using the MC data produced with the KORALZ 4.0 generator [@koralz] and passed through the full detector simulation and the analysis procedure.
The $\tau^+ \tau^-$ contamination of the dimuon event sample was found to be $1536 \pm 20$ events (1.3 % of the dimuon sample), which contain zero photons in the LE band and $1.3 \pm 0.6$ photons in the HE band, of which 0.3 photons would be in the $p_T < 80$ MeV/$c$ region. In what follows, this background was neglected. The cosmic ray background was estimated from the real data, studying events which originated close to the interaction point, but outside the limits allowed for selected events. In both energy bands its contribution to the photon rates was below 0.1%. The background from $Z^0 \rightarrow e^+ e^-$ events, tested with the BABAMC generator [@babamc] with the full detector simulation, was found to be vanishingly small. The same is valid for the 4-fermion backgrounds $Z^0 \rightarrow e^+ e^- \mu^+ \mu^-$ and $Z^0 \rightarrow \mu^+ \mu^- q\overline{q}$ tested with events produced with the generators [@excalibur; @jetset]. Systematic errors ================= Systematic uncertainties in the determination of the signal ----------------------------------------------------------- Since the converted photon sample in the dimuon event statistics collected by the DELPHI experiment during the LEP1 period was insufficient for the determination of the photon detection efficiencies and the recalibration coefficients, these were determined with hadronic events. It is therefore worthwhile to start the consideration of the systematic errors and their estimation with the uncertainties induced by these components of the analysis, as they were determined in [@aspdel]. The uncertainty due to a difference in the photon propagation and conversion in the detector material between the RD and its simulation in the MC, and an analogous difference in the pattern recognition, remaining after the recalibration procedure was applied (termed hardware systematics in [@aspdel]), was studied in [@aspdel] and evaluated to be 0.9% of the photon rate in the LE band and 2% in the HE band.
The systematic error for the photon detection efficiency[^3], after the recalibration procedure mentioned above has been applied, is a purely instrumental effect originating from the choice of the binning of the variables used for the efficiency parametrization, resolution effects, etc. In [@aspdel] it was found to range from 6% to 9% of the photon rate. These estimates were tested in the current study with the MC dimuon events by comparing the $p_T$ spectra of the DYMU3 inner bremsstrahlung photons transported through the DELPHI detector by DELSIM, with a subsequent simulation of their conversions, with the spectra of the same photons taken at the generator level and convoluted with the photon detection efficiency. In both energy bands the difference was below 5%, which was the level of the statistical accuracy of the test. This means that the aforementioned error due to the detection efficiency is likely to be overestimated in [@aspdel], or that it is really smaller in the muonic data, in particular due to a better angular resolution and narrower angular ranges in both energy bands. In what follows, the value of 5% is used as an estimate for the uncertainty of the efficiency calculation. The systematic errors originating from the influence of the $p_T$ resolution on the selection cuts were estimated from runs with the reconstructed photon energy and production angle randomly shifted according to the appropriate resolution function (taking into account the different angular resolutions in the LE and HE bands). The changes were found to be less than 0.3% of the photon rate in the LE band and less than 0.4% in the HE band. In what follows, the corresponding errors were neglected. The uncertainty of the background (BG) estimation is composed of the uncertainties coming from the DYMU3 generator, efficiency and hardware systematics, BG selection, and the procedure of the BG photon conversion simulation.
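The resolution-smearing test described above (randomly shifting energy and production angle and re-applying the $p_T$ cut) can be sketched as follows; Gaussian shifts are used here for simplicity, although Sect. 3.2 found the measured resolution functions to be closer to Breit-Wigner shapes:

```python
import math
import random

def smear_and_recount(photons, pt_cut, sigma_e_rel, sigma_theta, seed=12345):
    """Re-evaluate a p_T cut after randomly shifting energy and angle.

    'photons' is a list of (E_gamma [GeV], theta_gamma [rad]) pairs;
    sigma_e_rel is the relative energy resolution, sigma_theta the
    angular resolution in radians.  Returns the fraction of photons
    whose cut decision changed.
    """
    rng = random.Random(seed)
    changed = 0
    for e, th in photons:
        before = e * math.sin(th) < pt_cut
        e_s = e * (1.0 + rng.gauss(0.0, sigma_e_rel))
        th_s = th + rng.gauss(0.0, sigma_theta)
        after = e_s * math.sin(th_s) < pt_cut
        changed += (before != after)
    return changed / len(photons)
```

Photons far from the cut boundary are insensitive to the smearing, consistent with the sub-0.5% migration quoted above.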
The systematic error from the photon conversion simulation is considered to be equal to the systematic error of the photon detection in the MC stream before the recalibration is applied, i.e. it can be approximated by the recalibration corrections, which were within 3$-$4%. The systematic errors due to efficiency and hardware in the background estimation have strong positive correlations with the analogous components of the systematic error in the calculation of the real data photon rates (indeed they are of the same relative amplitude, but the background errors have to be reduced by factors of 3.9 and 3.3 in the LE and HE bands, respectively, when entering the final systematic error, since the background rates within the corresponding $p_T$ intervals constitute 25.7% and 30.2% of the RD$-$BG photon rates in the corresponding energy bands). Ignoring these correlations for the sake of clarity, we will consider all the background systematics components as independent and uncorrelated with the analogous components in the RD rates. Then, calculating the background systematic errors similarly to those for the RD and taking into account the suppression factors mentioned above, the systematic background uncertainties appear to be 1.4% and 2.2% relative to the signal rate in the respective energy band in the case of the uncorrected data, and 1.9% and 2.7% in the case of the data corrected for efficiency. The above systematic errors are summarized in Table 1. [**Table 1.**]{} Systematic uncertainties (in % of the photon rates in the $p_T$ ranges below 40 MeV/$c$ and 80 MeV/$c$ for the LE and HE photons, respectively) for the signal and the predicted muon inner bremsstrahlung. The total systematic error of each of the two energy band photon rates and signal-to-bremsstrahlung ratios is the quadratic sum of the corresponding individual errors, as quoted in Tables 2,3 below.
------------------ ----------------------------- -----------------------------
Component          Data uncorrected for          Data corrected for
                   the detection efficiency      the detection efficiency

                   LE band        HE band        LE band        HE band

                   **Signal**
Recalibration      0.9%           2.0%           0.9%           2.0%
Efficiency         -              -              5.0%           5.0%
Background         1.4%           2.2%           1.9%           2.7%

                   **Predicted Bremsstrahlung**
Efficiency         5.0%           5.0%           -              -
Formula (2)        4.0%           10.0%          4.0%           10.0%
------------------ ----------------------------- -----------------------------

Estimation of the accuracy of the bremsstrahlung predictions ------------------------------------------------------------ The estimation of the accuracy of the bremsstrahlung predictions resulting from formula (2) was done by comparing, in the corresponding $p_T$ ranges, the FSR rates obtained with this formula with those delivered by the DYMU3 generator, the uncertainty being taken as the difference between the two predictions. In the LE band this uncertainty was about 4%, in the HE band about 10%. These values are quoted in Table 1. These estimates agree well with the differences between the predictions for the muon inner bremsstrahlung rates obtained with formula (2) and those calculated with formulae which account for higher order radiative corrections, the calculations being performed with the non-exponentiated photon spectrum [@berends2] and with the exponentiated one [@yfs; @yellow]. In particular, the latter give 5.9% and 9.1% differences with formula (2) in the LE and HE bands, respectively. Note that when doing these calculations, the parameter $\beta$ which governs the bremsstrahlung photon spectrum [@hais] was obtained by integration of formula (1.2) in [@yfs] applying our $p_T$ cuts, i.e. within rather narrow angular ranges varying as a function of the photon energy according to the $p_T$ cuts imposed in the corresponding energy band. The $\beta$ values were found to be 0.0146 in the LE band and 0.0088 in the HE band, i.e.
considerably smaller than $\beta = 2 \alpha/\pi ~(\ln s/m_{\mu}^2-1) = 0.0582$, obtained by integration over all angles. The smallness of $\beta$ reduces the difference between the bremsstrahlung predictions for the exponentiated and non-exponentiated photon spectra. Results ======== Photon distributions for $\theta_{\gamma}, ~p_T$ and $p_T^2$ are presented both for the data and the background (left panels of Figs. 2-5), and for their difference (right panels of the figures). The latter distributions are accompanied by the calculated bremsstrahlung spectra according to Eq. (2), shown by triangles. To quantify the excess of the data over the background, the difference between them, which represents the measured muon inner bremsstrahlung, was integrated in the $p_T$ interval from 0 to 40 MeV/$c$ for the photons of the LE band, and from 0 to 80 MeV/$c$ for the photons of the HE band (these intervals correspond to the filled areas in panels d,f of Figs. 2-5), and the values obtained were defined as signals. However, these $p_T$ cuts were not applied when filling the angular distributions displayed in Figs. 2-6, in order to keep these distributions unbiased. Energy band   0.2 [$ < E_{\gamma} \leq 1$ GeV,   $p_T < $ 40 MeV/$c$]{} ----------------------------------------------------------------------- Photon distributions, uncorrected and corrected for the photon detection efficiency, are displayed in Figs. 2 and 3, respectively. The results for the signal rate are given in Table 2 together with the predictions for the muon bremsstrahlung and their ratios. [**Table 2.**]{} The signal (RD$-$Background), the predicted muon inner bremsstrahlung (both in units of $10^{-3} \gamma/\mu$) and their ratios in the $p_T < 40$ MeV/$c$ range for the photons from the LE band. The first errors are statistical, the second ones are systematic.
---------------------- ---------------------------------- ---------------------------------- Value     Data uncorrected for         Data corrected for         the detection efficiency         the detection efficiency     Signal   0.412$\pm 0.048\pm 0.007$     25.9$\pm 4.0 \pm 1.4$   Inner Bremsstrahlung   0.388$\pm 0.001\pm 0.025$     23.30$\pm 0.01\pm 0.93$   Signal/IB   1.06$\pm 0.12\pm 0.07 $     1.11$\pm 0.17\pm 0.07$   ---------------------- ---------------------------------- ---------------------------------- As can be seen from Table 2, the predicted and the measured muon bremsstrahlung rates agree well, within the measurement errors. The small differences in Signal/IB ratios between corrected and uncorrected data in Table 2 (and Table 3 below) arise from the non-uniformity of the efficiency reweighting factors. Energy band   1 [$ < E_{\gamma} \leq 10$ GeV,   $p_T < $ 80 MeV/$c$]{} ---------------------------------------------------------------------- Photon distributions, uncorrected and corrected for the photon detection efficiency, are displayed in Figs. 4 and 5, respectively. The results for the signal rate are given in Table 3 together with the predictions for the muon bremsstrahlung and their ratios. As can be seen from Table 3, the predicted and the measured muon bremsstrahlung rates agree well, within the measurement errors. The smaller values of the corrected experimental and predicted bremsstrahlung rates in the HE energy band as compared to those in the LE band (while the energy range factor following from formula (2), $\ln(E_{\gamma}^{max}/E_{\gamma}^{min})$, works in favour of the HE band with an enhancement factor of 1.43) are explained by a higher attenuation of the rates induced by the $p_T$ cut in the case of the bremsstrahlung photons from the HE band. 
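Two of the numerical factors quoted above can be checked directly: the energy-range enhancement $\ln(E_\gamma^{max}/E_\gamma^{min})$ of the HE band relative to the LE band, and the full-angle $\beta$ of Sect. 6 (the muon mass and $Z^0$ peak energy are standard values, inserted here only for the arithmetic):

```python
import math

# Energy-range factor ln(E_max/E_min) of formula (2) for each band
le_factor = math.log(1.0 / 0.2)      # LE band: 0.2-1 GeV
he_factor = math.log(10.0 / 1.0)     # HE band: 1-10 GeV
enhancement = he_factor / le_factor  # ~1.43, as quoted above

# Full-angle beta = 2*alpha/pi * (ln(s/m_mu^2) - 1) of Sect. 6
alpha = 1.0 / 137.036
sqrt_s = 91.19   # GeV, Z0 peak
m_mu = 0.10566   # GeV
beta = 2 * alpha / math.pi * (math.log((sqrt_s / m_mu) ** 2) - 1)  # ~0.0582
```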
[**Table 3.**]{} The signal (RD$-$Background), the predicted muon inner bremsstrahlung (both in units of $10^{-3} \gamma/\mu$) and their ratios in the $p_T < 80$ MeV/$c$ range for the photons from the HE band. The first errors are statistical, the second ones are systematic. ---------------------- ---------------------------------- ---------------------------------- Value     Data uncorrected for         Data corrected for         the detection efficiency         the detection efficiency     Signal   0.829$\pm 0.069 \pm 0.025$     21.1$ \pm 2.2 \pm 1.3 $   Inner Bremsstrahlung   0.794$\pm 0.001 \pm 0.089$     20.00$ \pm 0.01\pm 2.00$   Signal/IB   1.04$ \pm 0.09 \pm 0.12 $     1.06$ \pm 0.11 \pm 0.12$   ---------------------- ---------------------------------- ---------------------------------- Observation of the dead cone of the muon bremsstrahlung ------------------------------------------------------- The distributions of the photon production angles with a fine binning (1 mrad bin width) are shown in Figs. 6a,b for the combined sample of the converted photons from both energy bands. The distribution obtained after background subtraction (Fig. 6b) is accompanied by the calculated bremsstrahlung points. The displayed measured distributions are raw spectra, without any unfolding of the detector angular resolution; the bremsstrahlung spectra calculated with formula (2) were instead smeared by the resolution. We prefer to present the uncorrected measured distributions in order to demonstrate the independence of the obtained results from the correction procedure. As can be seen from the plots, the experimental points follow well the predicted bremsstrahlung distribution, showing a turnover at the expected bremsstrahlung peak position of 4 mrad. This is therefore an observation of the muon inner bremsstrahlung dead cone, for the first time in high energy physics experiments.
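The angular scale of the dead cone is set by the Lorentz factor of a muon carrying half the $Z^0$ energy; a quick arithmetic check of the $\Gamma = 430$ value used in Sect. 2 (beam energy and muon mass are standard values inserted here):

```python
e_mu = 91.19 / 2    # GeV: each muon carries half the Z0 energy
m_mu = 0.10566      # GeV: muon mass
gamma = e_mu / m_mu  # Lorentz factor, ~431 (cf. Gamma = 430 in Sect. 2)
theta2_turnover = 1.0 / gamma ** 2  # ~5.4e-6 rad^2, the dN/d(theta^2) turnover
```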
The observation reinforces the agreement between the experimental findings on the muon inner bremsstrahlung characteristics reported in this work and the QED predictions for the process. However, a deeper insight into the bremsstrahlung pattern can be obtained by considering, instead of the distribution $dN_\gamma/d\theta_\gamma$, the distribution $dN_\gamma/d\Omega$, where $d\Omega$ is a solid angle element. Such a distribution is free of kinematic suppression at polar angles $\theta_\gamma$ approaching zero, and the remaining suppression of the photon production rate at very small angles is a purely dynamic effect, similar to that mentioned in Sect. 2 for the hadrons inside a jet, namely a destructive interference between the radiation sources, though this time less straightforward: between the muon “before" and “after" the photon emission [^4]. The solid angle element $d\Omega$ is proportional to $d\cos \theta_\gamma$, which at small angles is, in turn, proportional to $d\theta_\gamma ^2$. The position of the $dN_\gamma/d\theta_\gamma ^2$ distribution turnover is predicted to be at $\theta_\gamma ^2 = 1/\Gamma ^2$ ($\Gamma = 430$, see Sect. 2), i.e. at $\theta_\gamma ^2 = 5.4 \times 10^{-6}$ rad$^2$. To observe this turnover an improved angular resolution was required; it was achieved at the level of 1.4 mrad with the additional cuts of Sect. 4.2, as noted in Sect. 3.2. The distribution $dN_\gamma/d\theta_\gamma ^2$ obtained with this resolution is shown in Fig. 6c, together with the bremsstrahlung predictions for this variable. Though the statistics are poor, the dip at $\theta_\gamma ^2 < 5\times 10^{-6}$ rad$^2$ is visible in this distribution, revealing the dynamical dead cone of the muon inner bremsstrahlung. In order to estimate the statistical significance of this observation the following procedure was undertaken. The initial part (about 20 bins) of the bremsstrahlung $\theta_\gamma ^2$ distribution shown in Fig.
6c, with the first two bins omitted, was fitted by a smooth curve (a polynomial of 4th or 5th order). The fitted curve was then extrapolated to zero, as shown in the figure, giving the value of $(5.64 \pm 0.27)~\gamma/5\times 10^{-6}$ rad$^2$ at the centre of the first bin of the distribution (the error reflects the variation in the fitting form and in the number of bins used in the fit). This value was assumed to represent the expected bremsstrahlung rate in the first bin of the distribution in a hypothetical situation in which the bremsstrahlung dynamical dead cone is absent. The number of real data photons in the first bin was 2, with the estimated background of $0.66 \pm 0.46$, thus giving a signal value in this bin of $(1.34 \pm 1.49)~\gamma /5\times 10^{-6}$ rad$^2$. Assuming a Poisson distribution for the signal photons, these numbers correspond to a probability of less than 4% for the absence of the bremsstrahlung dead cone. Comparison with the hadronic soft photon analysis ================================================= The main difference between the results of this analysis and the hadronic ones [@aspdel] is the absence of any substantial excess of the soft photon production over the predicted inner bremsstrahlung rate in this study, in contrast to [@aspdel], where the observed soft photon rate was found to exceed the bremsstrahlung predictions by a factor of about 4. The 95% CL upper limits on the excess factors which can be extracted from the results of this work are 1.29 in the LE band and 1.28 in the HE band. Another distinction between the two analyses is a substantial difference in the background levels and in the possible systematic effects. However, the code transporting photons through the DELPHI detector and simulating their conversions in the detector material (DELSIM), the photon reconstruction algorithm and the determination of its efficiency, together with the recalibration procedure, were common to the two analyses.
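The significance estimate of Sect. 7.3 can be cross-checked with an elementary Poisson calculation; this simplified version ignores the uncertainties on the extrapolated rate and on the background, so it need not reproduce the quoted bound exactly:

```python
import math

def poisson_cdf(k, mu):
    """P(N <= k) for a Poisson distribution with mean mu."""
    return math.exp(-mu) * sum(mu ** i / math.factorial(i) for i in range(k + 1))

# With no dead cone the first bin would contain 5.64 (extrapolated
# bremsstrahlung) + 0.66 (background) photons; 2 photons were observed.
p_no_dead_cone = poisson_cdf(2, 5.64 + 0.66)  # ~5%, of the order of the <4% quoted
```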
Thus the results of this work can also be considered as a cross-check of these procedures in the hadronic events study. On the other hand, the number of dimuon events collected during the LEP1 period is considerably smaller than the number of collected hadronic events, due to the smaller $Z^0$ dimuon branching ratio (by a factor of 20). As a result, in the current analysis the statistical errors are either substantially larger than the systematic ones (in the LE band) or comparable to them (in the HE band), while in [@aspdel] the total uncertainties of the measured photon rates are dominated by systematic errors; nevertheless it should be emphasized that the results of both analyses show clear signals of direct photons (even though the strength of the signal in [@aspdel] is not explained theoretically). Summary ======= The results of the analysis of final state radiation in $\mu^+ \mu^-$ decays of $Z^0$ events at LEP1 are reported in this work. The radiation was studied in the region of small transverse momenta with respect to the parent muon, $p_T < 40$ MeV/$c$ in the photon energy range $0.2 < E_\gamma \leq 1$ GeV (LE band), and $p_T < 80$ MeV/$c$ in the photon energy range $1 < E_\gamma \leq 10$ GeV (HE band). The photon rates uncorrected (corrected) for the photon detection efficiency were found to be, in units of $10^{-3} \gamma/\mu$, with the first error statistical and the second systematic: a) in the LE band: 0.412$ \pm 0.048 \pm 0.007 $ (25.9$\pm 4.0 \pm 1.4$), while the QED predictions for the muon inner bremsstrahlung were calculated to be 0.388$ \pm 0.001 \pm 0.025$ (23.30$\pm 0.01 \pm 0.93$); b) in the HE band: 0.829$\pm 0.069 \pm 0.025$ (21.1$ \pm 2.2 \pm 1.3 $), while the muon inner bremsstrahlung was calculated to be 0.794$\pm 0.001 \pm 0.089$ (20.00$ \pm 0.01\pm 2.00$).
The obtained ratios of the observed signal to the predicted level of the muon inner bremsstrahlung are then $1.06 \pm 0.12 \pm 0.07$ in the LE band and $1.04 \pm 0.09 \pm 0.12$ in the HE band (the uncorrected rates are used for these ratios, as they have a better statistical accuracy). Thus, the analysis shows a good agreement between the observed photon production rates and the QED predictions for the muon inner bremsstrahlung, both in differential (see Figs. 2-5) and integral (see Tables 2,3) form. This is in contrast with the anomalous soft photon production in hadronic decays of $Z^0$ reported earlier in [@aspdel]. The bremsstrahlung dead cone is observed for the first time in direct photon production in $Z^0$ decays in particular, and in the muon inner bremsstrahlung in high energy physics experiments in general, and is in good agreement with the predicted bremsstrahlung behaviour. Acknowledgements {#acknowledgements .unnumbered} ================ We thank Profs. K. Boreskov, J.E. Campagne, F. Dzheparov and A. Kaidalov for useful discussions.
We are greatly indebted to our technical collaborators, to the members of the CERN-SL Division for the excellent performance of the LEP collider, and to the funding agencies for their support in building and operating the DELPHI detector.\ We acknowledge in particular the support of\ Austrian Federal Ministry of Education, Science and Culture, GZ 616.364/2-III/2a/98,\ FNRS–FWO, Flanders Institute to encourage scientific and technological research in the industry (IWT) and Belgian Federal Office for Scientific, Technical and Cultural affairs (OSTC), Belgium,\ FINEP, CNPq, CAPES, FUJB and FAPERJ, Brazil,\ Ministry of Education of the Czech Republic, project LC527,\ Academy of Sciences of the Czech Republic, project AV0Z10100502,\ Commission of the European Communities (DG XII),\ Direction des Sciences de la Mati$\grave{\mbox{\rm e}}$re, CEA, France,\ Bundesministerium f$\ddot{\mbox{\rm u}}$r Bildung, Wissenschaft, Forschung und Technologie, Germany,\ General Secretariat for Research and Technology, Greece,\ National Science Foundation (NWO) and Foundation for Research on Matter (FOM), The Netherlands,\ Norwegian Research Council,\ State Committee for Scientific Research, Poland, SPUB-M/CERN/PO3/DZ296/2000, SPUB-M/CERN/PO3/DZ297/2000, 2P03B 104 19 and 2P03B 69 23(2002-2004),\ FCT - Fundação para a Ciência e Tecnologia, Portugal,\ Vedecka grantova agentura MS SR, Slovakia, Nr. 95/5195/134,\ Ministry of Science and Technology of the Republic of Slovenia,\ CICYT, Spain, AEN99-0950 and AEN99-0761,\ The Swedish Research Council,\ Particle Physics and Astronomy Research Council, UK,\ Department of Energy, USA, DE-FG02-01ER41155,\ EEC RTN contract HPRN-CT-00292-2002.\ [ References]{} DELPHI Collaboration, J. Abdallah et al., Eur. Phys. J. C [**47**]{}, 273 (2006) P.V. Chliapnikov et al., Phys. Lett. B [**141**]{}, 276 (1984) F. Botterweck et al., Z. Phys. C [**51**]{}, 541 (1991) S. Banerjee et al., Phys. Lett. B [**305**]{}, 182 (1993) A. Belogianni et al., Phys. Lett. 
B [**408**]{}, 487 (1997)\ A. Belogianni et al., Phys. Lett. B [**548**]{}, 122 (2002) A. Belogianni et al., Phys. Lett. B [**548**]{}, 129 (2002) L.D. Landau, I.Ya. Pomeranchuk, Dokl. Akad. Nauk SSSR [**92**]{}, 535, 735 (1953) (Papers No. 75 and 76 in the English edition of L.D. Landau collected works, Pergamon Press, New York, 1965) F. Low, Phys. Rev. [**110** ]{}, 974 (1958) V.N. Gribov, Sov. J. Nucl. Phys. [**5**]{}, 280 (1967) V. Balek, N. Pisútová and J. Pisút, Acta Phys. Pol. B [**21**]{}, 149 (1990) P. Lichard, Phys. Rev. D [**50**]{}, 6824 (1994) DELPHI Collaboration, J. Abdallah et al., [*Study of the Dependence of Direct Soft Photon Production on the Jet Characteristics in Hadronic $Z^0$ Decays*]{}, to be submitted to Eur. Phys. J. C. DELPHI Collaboration, P. Abreu et al., Eur. Phys. J. C [**16**]{}, 371 (2000)\ DELPHI Collaboration, J. Abdallah et al., Eur. Phys. J. C [**45**]{}, 589 (2006)\ DELPHI Collaboration, J. Abdallah et al., Eur. Phys. J. C [**46**]{}, 295 (2006) DELPHI Collaboration, P. Abreu et al., Z. Physik C [**65**]{}, 603 (1995) OPAL Collaboration, P.D. Acton et al., Phys. Lett. B [**273**]{}, 338 (1991) DELPHI Collaboration, P. Abreu et al., Phys. Lett. B [**380**]{}, 480 (1996) H.C. Ballagh et al., Phys. Rev. Lett. [**50**]{}, 1963 (1983)\ V.V. Ammosov et al., Sov. J. Nucl. Phys. [**47**]{}, 73 (1988) J. Haissinski, [*How to Compute in Practice the Energy Carried away by Soft Photons to all Orders in $\alpha$*]{}, LAL 87-11, 1987;\ http://ccdb4fs.kek.jp/cgi-bin/img$_-$index?8704270 DELPHI Collaboration, P. Aarnio et al., Nucl. Instr. and Meth. A [**303**]{}, 233 (1991) DELPHI Collaboration, P. Abreu et al., Nucl. Instr. and Meth. A [**378**]{}, 57 (1996) DELPHI Collaboration, P. Abreu et al., Nucl. Phys B [**367**]{}, 511 (1991)\ DELPHI Collaboration, P. Abreu et al., Nucl. Phys B [**417**]{}, 3 (1994)\ DELPHI Collaboration, P. Abreu et al., Nucl. Phys B [**418**]{}, 403 (1994) J.E. Campagne, R. Zitoun, Z. Phys. 
C [**43**]{}, 469 (1989)\ J.E. Campagne et al. in: Z Physics at LEP1, G. Altarelli, R. Kleiss and C. Verzegnassi eds., CERN Yellow Report No.89-08, 1989, vol.3, 3.2.5 W.T. Eadie et al., [*Statistical Methods in Experimental Physics*]{} (North-Holland, Amsterdam, 1982) p. 90 [**85**]{}, 437 (1995) S. Jadach, B.F.L. Ward, Z. Was, Comput. Phys. Commun. [**79**]{}, 503 (1994) F.A. Berends, R. Kleiss and W. Hollik, Nucl. Phys. B [**304**]{}, 712 (1988) F.A. Berends, R. Pittau, R. Kleiss, Comput. Phys. Commun. [**85**]{}, 1437 (1995) T. Sjöstrand, Comput. Phys. Commun. [**39**]{}, 347 (1986)\ T. Sjöstrand, M. Bengtsson, Comput. Phys. Commun. [**43**]{}, 367 (1987)\ T. Sjöstrand, [*JETSET 7.3 Program and Manual*]{}, CERN-TH/6488-92, 1992 F.A. Berends, R. Kleiss and S. Jadach, Nucl. Phys. B [**202**]{}, 63 (1982) D.R. Yennie, S.C. Frautschi, H. Suura, Ann. Phys. [**13**]{}, 379 (1961) R. Kleiss et al. in: Z Physics at LEP1, G. Altarelli, R. Kleiss and C. Verzegnassi eds., CERN Yellow Report No.89-08, 1989, vol.3, 2.2 L.D. Landau and E.M. Lifshitz, [*The Classical Theory of Fields*]{} (Elsevier, Amsterdam, Boston, Heidelberg, London, New York, Oxford, Paris, San Diego, San Francisco, Singapore, Sydney, Tokyo), 4th revised English edition, Sect. 73 J.D. Jackson, [*Classical Electrodynamics*]{} (John Wiley and Sons, Inc., New York, Chichester, Weinheim, Brisbane, Singapore, Toronto), 3rd edition, Sect. 14.3 [^1]: Outside the LEP experiments, a few studies of the muon inner bremsstrahlung have been done, see [@mubneu] and references therein. [^2]: The Monte Carlo data set of dimuon events described below was used as the input of the generation. [^3]: Note that when dealing with the data uncorrected for the detection efficiency the efficiency error is relevant to the bremsstrahlung predictions only (since bremsstrahlung spectra have to be convoluted with the efficiencies in this case).
On the contrary, when dealing with the corrected data the efficiency uncertainty has to be applied to the measured photon rates only. [^4]: In classical language, the radiation intensity into the solid angle $d\Omega$ vanishes when three vectors: the muon velocity, its acceleration, and the radiation unit vector happen, in particular, to be parallel, see for example [@ll; @jack].
--- abstract: 'The coefficients that appear in uniform asymptotic expansions for integrals are typically very complicated. In the existing literature, the majority of works give only the first two coefficients. In the limited number of papers where more coefficients are given, their evaluation near the coalescence points is normally highly numerically unstable. In this paper, we illustrate how well-known Cauchy-type integral representations can be used to compute the coefficients in a very stable and efficient manner. We discuss the cases: (i) two coalescing saddle points, (ii) two saddle points coalescing with two branch points, and (iii) a saddle point near an endpoint of the interval of integration. As a special case of (ii) we give a new uniform asymptotic expansion for Jacobi polynomials $P_n^{(\alpha,\beta)}(z)$ in terms of Laguerre polynomials $L_n^{(\alpha)}(x)$ as $n\to\infty$ that holds uniformly for $z$ near $1$. Several numerical illustrations are included.' author: - 'Sarah Farid Khwaja and Adri B. Olde Daalhuis[^1]' title: Computation of the coefficients appearing in the uniform asymptotic expansions of integrals --- Introduction ============ In this paper, we discuss approximations of integrals of the form $$\label{MainInt} F(\lambda,z)=\int_{\mathcal{C}}{\mathrm{e}}^{\lambda f(t,z)}g(t,z){\, \mathrm{d}}t,$$ where $\mathcal{C}$ is a contour in the complex plane and $\lambda$ is a large parameter. The critical points for this type of integral are the saddle points of $f(t,z)$, branch points of the integrand, and possibly endpoints of the contour of integration. These critical points will depend on the additional parameter $z$, and we assume that the $N$ relevant critical points coalesce when $z=z_*$. Via the so-called Bleistein method (see [@Blei66]), one can obtain large $\lambda$ asymptotic expansions that hold for $z$ in some neighbourhood of $z_*$. 
Typically, the coefficients in these uniform asymptotic expansions are very complicated, and most publications give only the first two coefficients. It has been known for more than two decades that it is possible to obtain relatively simple Cauchy-type integral representations for these coefficients; see for example [@OT94]. In a recent paper [@TW14], the remarkable exponentially convergent properties of the trapezoidal rule for integrals are discussed. In this paper, we discuss how this numerical method can be used to compute the coefficients in the uniform asymptotic expansions. With these results we make the uniform asymptotic expansions useful for the numerical evaluation of the integrals. The main steps of the Bleistein method are given in section \[Sect1\]. We note that the details in that section are not correct in all cases in which the Bleistein method can be used, but many cases are covered. The first step is to bring the integral into canonical form, and afterwards a special integration by parts will give us the uniform asymptotic expansion. With the method introduced in [@OT94] we obtain the Cauchy-type integral representations for these coefficients, and then we mention how the ideas of [@TW14] can be used to compute the coefficients numerically in a stable and efficient manner. The most well-known case of uniform asymptotics for integrals is the coalescence of two saddle points. We give the details in section \[Sect2\] and include as a numerical example the coefficients of the well-known uniform asymptotic expansion of the Bessel function $J_{\lambda}\left(\lambda z\right)$, as $\lambda\to\infty$ and $z$ near $1$. In that numerical illustration, we not only observe that our method works, but also that the exact representations of the coefficients are highly numerically unstable near the coalescence. In section \[Sect3\] we give full details for the case of the coalescence of two saddle points with two branch points. 
The main example is the Gauss hypergeometric function ${}_2F_1\left(a+\lambda,b-\lambda;c;(1-z)/2\right)$ in which again $\lambda\to\infty$. For the case $z$ near $1$, we derive a uniform asymptotic expansion in terms of Kummer $M$-functions. For the coefficients we obtain integral representations, but the results differ slightly from the ones in section \[Sect1\]. Hence, we give details on how these integral representations can be derived. Taking special values of the parameters $a$, $b$ and $c$, we obtain a new uniform asymptotic expansion for Jacobi polynomials $P_n^{(\alpha,\beta)}(z)$ in terms of Laguerre polynomials $L_n^{(\alpha)}(x)$ as $n\to\infty$ that holds uniformly for $z$ near $1$. The second case in section \[Sect3\] is for $z$ near $-1$. Since the details are very similar to the previous case, we give only a few details of the uniform asymptotic expansion of the Gauss hypergeometric function in terms of Kummer $U$-functions. Numerical illustrations are provided for both cases. The coalescence of a saddle point with an endpoint of the contour of integration is also important in many applications. We discuss this case in the final section of the paper, but we give only the main details. The Bleistein method {#Sect1} ==================== Before we can use the Bleistein method we have to bring integral (\[MainInt\]) into canonical form via a transformation $$\label{Transf} f(t,z)=p(\tau,\zeta)+p_{0}.$$ The new integrand should have a similar critical point structure, and $\zeta$ and $p_0$ are determined by the condition that the relevant critical points in the $t$-plane are mapped to the ones in the $\tau$-plane. Often the function $p(\tau,\zeta)$ is a polynomial in $\tau$, but a non-polynomial example will be included in this publication. 
The new integral representation is then of the form $$\label{Canon} F(\lambda,z)=\frac{{\mathrm{e}}^{\lambda p_0}}{2\pi{\mathrm{i}}}\int_{\widetilde{\mathcal{C}}}{\mathrm{e}}^{\lambda p(\tau,\zeta)}q(\tau,\zeta)G_{0}(\tau){\, \mathrm{d}}\tau,$$ in which $$\label{G0} G_0(\tau)=2\pi{\mathrm{i}}\frac{g(t,z)}{q(\tau,\zeta)}\frac{{\, \mathrm{d}}t}{{\, \mathrm{d}}\tau},$$ such that $$\label{Approximant} {\cal A}_n(\lambda,\zeta)=\frac{1}{2\pi{\mathrm{i}}}\int_{\widetilde{\mathcal{C}}}{\mathrm{e}}^{\lambda p(\tau,\zeta)}\tau^n q(\tau,\zeta){\, \mathrm{d}}\tau,\qquad n=0,\dots,N-1,$$ are the main approximants. In the case of two coalescing saddle points ${\cal A}_0(\lambda,\zeta)$ is an Airy function, $p(\tau,\zeta)$ is a cubic polynomial and $q(\tau,\zeta)=1$. See section \[Sect2\]. In other cases the function $q(\tau,\zeta)$ contains the branch-point behaviour of the non-exponential part of the integrand. See section \[Sect3\], especially (\[qG3\]). We note that the following details are not correct in all cases in which the Bleistein method can be used, but they cover many cases. We define for $s=0,1,2,\dots$, $$\begin{aligned} G_{s}(\tau)&=&\sum_{n=0}^{N-1}a_{s,n}\tau^{n}+p'(\tau,\zeta)H_{s}(\tau),\nonumber \\ G_{s+1}(\tau)&=&\frac{-1}{q(\tau,\zeta)}\frac{{\, \mathrm{d}}}{{\, \mathrm{d}}\tau}\left(q(\tau,\zeta)H_{s}(\tau)\right),\label{Gs}\end{aligned}$$ where $'$ indicates differentiation with respect to $\tau$. Then via integration by parts we have $$\label{UniExp} F(\lambda,z)={\mathrm{e}}^{\lambda p_0}\sum_{n=0}^{N-1}{\cal A}_n(\lambda,\zeta)\sum_{s=0}^{S-1}\frac{a_{s,n}}{\lambda^s}+R_S(\lambda,z),$$ where $$\label{Remainder} R_S(\lambda,z)=\lambda^{-S}\frac{{\mathrm{e}}^{\lambda p_0}}{2\pi{\mathrm{i}}}\int_{\widetilde{\mathcal{C}}}{\mathrm{e}}^{\lambda p(\tau,\zeta)}q(\tau,\zeta)G_{S}(\tau){\, \mathrm{d}}\tau.$$ Hence, the expansion in (\[UniExp\]) seems to have an asymptotic property. 
The coefficients $a_{s,n}$ can be expressed as $$\label{coeff.asn} a_{s,n}=\frac{1}{2\pi{\mathrm{i}}}\oint_{|\tau|=r} A_{s,n}(\tau)G_{0}(\tau){\, \mathrm{d}}\tau,$$ where $A_{s,n}(\tau)$ are simple rational functions that satisfy the following equation $$\label{S1An} A_{s+1,n}(\tau)=\frac{q(\tau)}{p'(\tau,\zeta)}\frac{{\, \mathrm{d}}}{{\, \mathrm{d}}\tau}\left(\frac{A_{s,n}(\tau)}{q(\tau)}\right).$$ The initial rational functions $A_{0,n}(\tau)$ will be determined on a case-by-case basis. The radius $r$ in (\[coeff.asn\]) has to be chosen in such a way that all the relevant critical points are encircled by the contour once in the positive sense. According to [@TW14] the right-hand side of $$\label{treff_eqn} \frac{1}{2\pi{\mathrm{i}}}\oint_{|\tau|=r} F(\tau){\, \mathrm{d}}\tau\approx\frac{1}{2M}\sum_{m=0}^{2M-1}w_{m}F(w_{m}),~~~{\rm where}~~ w_{m}=r{\mathrm{e}}^{\pi{\mathrm{i}}m/M},$$ converges exponentially fast to the left-hand side as $M\to\infty$, as long as $F(\tau)$ is analytic in an annulus $\widetilde{r}_{1}\leq|\tau|\leq\widetilde{r}_{2}$, with $\widetilde{r}_{1}<r<\widetilde{r}_{2}$. Applying this approximation to integral representation (\[coeff.asn\]), we obtain the approximation $$\label{a_s_n} a_{s,n}\approx \frac{1}{2M}\sum_{m=0}^{2M-1}w_{m}A_{s,n}(w_{m})G_{0}(w_{m}).$$ Since the $A_{s,n}(\tau)$ are simple rational functions, the approximation of, say, $a_{10,3}$ is not much harder than that of the first coefficient $a_{0,0}$. The main data that we need are $G_{0}(w_{m})$, $m=0,\dots,2M-1$. The function $G_0(\tau)$ is defined in (\[G0\]) and it involves the mapping (\[Transf\]). This nonlinear mapping usually causes multivaluedness issues in the complex plane when we have to determine the $t=t_m$ that corresponds to $\tau=w_m$. However, starting at $\tau=w_0=r$ it is relatively easy to control the multivaluedness when we move from point $\tau=w_{m}$ to $\tau=w_{m+1}$. One could use $t_m$ as an initial guess when one tries to determine $t_{m+1}$. 
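As a minimal illustration of (\[treff\_eqn\]) (a Python sketch, not part of the method itself; the pole location $a=0.3$ is an arbitrary test value), take $F(\tau)=1/(\tau-a)$ with $|a|<r$, for which the left-hand side equals the residue, $1$; the error of the $2M$-point rule then decays like $(a/r)^{2M}$.

```python
import cmath

def trap_cauchy(F, r, M):
    """2M-point trapezoidal rule for (1/(2*pi*i)) * oint_{|tau|=r} F(tau) dtau,
    i.e. (1/(2M)) * sum_m w_m * F(w_m) with w_m = r*exp(pi*i*m/M)."""
    total = 0j
    for m in range(2 * M):
        w = r * cmath.exp(cmath.pi * 1j * m / M)
        total += w * F(w)
    return total / (2 * M)

# Simple pole at tau = 0.3 inside |tau| = 1: the exact Cauchy integral is 1.
F = lambda tau: 1.0 / (tau - 0.3)
for M in (4, 8, 16):
    print(M, abs(trap_cauchy(F, 1.0, M) - 1.0))   # error ~ 0.3**(2*M)
```

Doubling $M$ roughly squares the error, which is the exponential convergence referred to above.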
In implementations of these results to approximate the integral via uniform asymptotic expansion (\[UniExp\]), it also makes sense to interchange the order of summation via $$\label{fulsum} \sum_{s=0}^{S-1}\frac{a_{s,n}}{\lambda^s}\approx \frac{1}{2M}\sum_{m=0}^{2M-1}w_{m}\widetilde{A}(w_{m})G_{0}(w_{m}),\quad {\rm where}\quad \widetilde{A}(\tau)=\sum_{s=0}^{S-1} \lambda^{-s}A_{s,n}(\tau).$$ The case of two coalescing saddle points {#Sect2} ======================================== Now we suppose that the integral (\[MainInt\]) has two saddle points located at $t=t_{\pm}$ which depend on a parameter $z$. In order to express this integral in its canonical form, we consider the following cubic transformation $$\label{cubic} f(t,z)=p(\tau,\zeta)+p_0=\ifrac{1}{3}\tau^{3}- \zeta \tau+ p_0,$$ suggested by Chester, Friedman and Ursell in 1957 in [@CFU53]. The saddle points $t=t_{\pm}$ should correspond to the saddle points of the cubic polynomial at $\tau=\pm \sqrt{ \zeta}$ in the complex $\tau$-plane. 
Thus we have $$\label{parameters} \ifrac{4}{3} \zeta^{3/2}=f(t_{-},z)-f(t_{+},z),\qquad 2p_0=f(t_{-},z)+f(t_{+},z).$$ Substituting the cubic transformation (\[cubic\]) in the integral (\[MainInt\]), we obtain $$\label{integral5} F(\lambda,\zeta)=\frac{{\mathrm{e}}^{\lambda p_0}}{2\pi{\mathrm{i}}}\int_{\mathcal{\widetilde{C}}}{\mathrm{e}}^{\lambda \left(\frac{1}{3}\tau^{3}- \zeta \tau\right)}G_{0}(\tau){\, \mathrm{d}}\tau,$$ where $\mathcal{\widetilde{C}}$ is the image of the contour $\mathcal{C}$ and $$\label{S2G0} G_{0}(\tau)=2\pi{\mathrm{i}}g(t)\frac{{\, \mathrm{d}}t}{{\, \mathrm{d}}\tau}.$$ In this case with $q(\tau)=1$, equations (\[Gs\]) take the following form $$\label{bm} G_{s}(\tau)=a_{s,0}+a_{s,1}\tau+\left(\tau^2- \zeta\right)H_{s}(\tau),\qquad G_{s+1}(\tau)=-H'_{s}(\tau).$$ It follows that $$\label{S2coeff} a_{s,0}=\frac{G_{s}(\sqrt{ \zeta})+G_{s}(-\sqrt{ \zeta})}{2},\qquad a_{s,1}=\frac{G_{s}(\sqrt{ \zeta})-G_{s}(-\sqrt{ \zeta})}{2\sqrt{ \zeta}}.$$ However, these representations are not very useful. For simplicity let us assume that we can deform the contour $\mathcal{\widetilde{C}}$ to a steepest descent path from $\infty e^{-\pi{\mathrm{i}}/3}$ to $\infty e^{\pi{\mathrm{i}}/3}$. 
If that is the case, then we obtain $$\begin{aligned} \label{approxAii} F(\lambda, \zeta)&=&{\mathrm{e}}^{p_0\lambda}\left(\Ai\left(\lambda^{2/3} \zeta\right)\sum_{s=0}^{S-1}\frac{a_{s,0}}{\lambda^{s+1/3}} -\Ai'\left(\lambda^{2/3} \zeta\right)\sum_{s=0}^{S-1}\frac{a_{s,1}}{\lambda^{s+2/3}}\right)\nonumber \\ &&+R_S(\lambda , \zeta),\end{aligned}$$ where $\Ai(x)$ is the [Airy function]{} and $\Ai'(x)$ is its derivative (see [@NIST:DLMF [§9.5(ii)](http://dlmf.nist.gov/9.5.ii)]), and where $$\label{UAI5} R_S(\lambda , \zeta)=\lambda^{-S}\frac{{\mathrm{e}}^{p_0\lambda}}{2\pi{\mathrm{i}}}\int_{\mathcal{\widetilde{C}}}{\mathrm{e}}^{\lambda \left(\frac{1}{3}\tau^{3}- \zeta \tau\right)}G_{S}(\tau){\, \mathrm{d}}\tau.$$ To obtain Cauchy-type integral representations for the coefficients $a_{s,n}$ we use [@OT94] and define $$\label{Asn} A_{0,0}(\tau)=\frac{\tau}{\tau^{2}-\zeta},\quad A_{0,1}(\tau)=\frac{1}{\tau^{2}-\zeta},\quad A_{s+1,n}(\tau)=\frac{1}{\tau^{2}-\zeta}\frac{{\, \mathrm{d}}}{{\, \mathrm{d}}\tau}A_{s,n}(\tau)$$ for $s=0,1,2,\dots$, and $n=0,1$. We note that, when these rational functions are used in integral representation (\[coeff.asn\]) for the coefficients, the contour of integration does not have to be a circle. It can be any contour that encircles the two saddle points $\tau=\pm\sqrt\zeta$ once in the positive sense, and such that all singularities of $G_0(\tau)$ are in the exterior of the contour. Integral representation (\[coeff.asn\]) can be used to compute the higher coefficients in the uniform asymptotic expansion. 
For example, for this case, we have $$A_{1,0}(\tau)=-\frac{\tau^2+\zeta}{\left(\tau^{2}-\zeta\right)^3},\qquad A_{1,1}(\tau)=\frac{-2\tau}{\left(\tau^{2}-\zeta\right)^3},$$ and thus, using (\[coeff.asn\]), we obtain $$\begin{aligned} \label{secondcoeff} a_{1,0}&=&\frac{G_{0}''(-\sqrt{\zeta})-G_{0}''(\sqrt{\zeta})}{8\zeta^{1/2}}+\frac{G_{0}'(\sqrt{\zeta})+G_{0}'(-\sqrt{\zeta})}{8\zeta}-\frac{a_{0,1}}{4\zeta},\nonumber\\ a_{1,1}&=&-\frac{G_{0}''(\sqrt{\zeta})+G_{0}''(-\sqrt{\zeta})}{8\zeta}+\frac{G_{0}'(\sqrt{\zeta})-G_{0}'(-\sqrt{\zeta})}{8\zeta^{3/2}}.\end{aligned}$$ These representations are numerically unstable when we try to compute these coefficients near $\zeta=0$. Note that when we compute these coefficients via the trapezoidal rule (\[a\_s\_n\]) we encounter no special problems when $\zeta$ is small, since the integration variable is bounded away from the origin. In [@TemmeV02] more of the rational functions $A_{s,n}(\tau)$ are computed and the results are combined with computer algebra and two-point Taylor series expansions to evaluate the coefficients $a_{s,n}$. Two-point Taylor series expansions of $G_0(\tau)$ are also discussed in [@TemmeL02]. The Cauchy integral representations for the coefficients in these expansions are slightly simpler than (\[coeff.asn\]), but substituting these expansions in (\[integral5\]) results in expansions that are more complicated than (\[approxAii\]). However, these Cauchy-type integral representations can also be combined with (\[treff\_eqn\]) to compute the coefficients numerically in a stable manner. The recent paper [@Dunster2017] deals with uniform asymptotic approximations in turning point problems, and its authors also consider the trapezoidal rule for Cauchy integrals. 
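The recursion (\[Asn\]) and the displayed formulas for $A_{1,0}$ and $A_{1,1}$ can be checked against each other numerically. In the sketch below (Python, with an arbitrarily chosen test value of $\zeta$), the $\tau$-derivative in the recursion is replaced by a central difference:

```python
import cmath

zeta = 0.3 + 0.1j   # arbitrary test value, away from the coalescence zeta = 0

# Initial rational functions and the stated closed forms for A_{1,n}
A00 = lambda t: t / (t**2 - zeta)
A01 = lambda t: 1.0 / (t**2 - zeta)
A10 = lambda t: -(t**2 + zeta) / (t**2 - zeta)**3
A11 = lambda t: -2.0 * t / (t**2 - zeta)**3

def recursion_step(A, t, h=1e-5):
    """A_{s+1,n}(t) = (d/dt A_{s,n})(t) / (t**2 - zeta); derivative by central difference."""
    return (A(t + h) - A(t - h)) / (2.0 * h) / (t**2 - zeta)

for t in (1.5, -0.8 + 0.6j, 2.0 - 1.0j):
    assert abs(recursion_step(A00, t) - A10(t)) < 1e-6
    assert abs(recursion_step(A01, t) - A11(t)) < 1e-6
```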
If we write (\[approxAii\]) as $$\label{Dunster} F(\lambda, \zeta)={\mathrm{e}}^{p_0\lambda}\left(\Ai\left(\lambda^{2/3} \zeta\right)A(\lambda, \zeta) -\Ai'\left(\lambda^{2/3} \zeta\right)B(\lambda, \zeta)\right),$$ then the $A(\lambda, \zeta)$ and $B(\lambda, \zeta)$ are so-called coefficient functions. It is these coefficient functions that are numerically computed in [@Dunster2017] via the trapezoidal rule. Numerical illustration ---------------------- As a more concrete example we use for the Bessel function the well-known integral representation (see [@NIST:DLMF [10.9.17](http://dlmf.nist.gov/10.9.E17)]) $$J_{\lambda}(\lambda z)=\frac{1}{2\pi{\mathrm{i}}}\int_{\infty-\pi{\mathrm{i}}}^{\infty+\pi{\mathrm{i}}}{\mathrm{e}}^{\lambda(z\sinh t-t)}{\, \mathrm{d}}t.$$ Using the notation in (\[MainInt\]), we have $g(t,z)=1/(2\pi{\mathrm{i}})$ and the function $f(t,z)=z\sinh t-t$ has saddle points at $t=\pm \arccosh(z^{-1})$ which coalesce when $z=1$. Since $f(t,z)$ is odd in $t$ it follows from (\[parameters\]) that $p_0=0$ and we have $$\begin{aligned} \label{Bparameters} \ifrac{2}{3}\zeta^{3/2}=&\arccosh(1/z)-\sqrt{1-z^{2}},\qquad &0<z\leq1,\nonumber\\ \ifrac{2}{3}\left(-\zeta\right)^{3/2}=&\sqrt{z^{2}-1}-\arccos(1/z),\qquad &z\geq 1.\end{aligned}$$ Furthermore, it also follows from (\[coeff.asn\]) that $a_{2s+1,0}=a_{2s,1}=0$. Uniform asymptotic expansion (\[approxAii\]) is well-known, see for example [@NIST:DLMF [10.20.4](http://dlmf.nist.gov/10.20.E4)]. In this special case, the coefficients are easy to compute via the methods explained in [@NIST:DLMF [§10.20](http://dlmf.nist.gov/10.20)], and we can compare our numerical results with the exact results. In the numerical illustration below, we take $z$ close to the coalescing point, and we observe that the ‘exact results’ are highly numerically unstable. 
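The relation (\[Bparameters\]) between $z$ and $\zeta$, with $\tfrac{2}{3}\zeta^{3/2}$ on the left-hand side for $0<z\leq1$, is straightforward to evaluate. A short sketch (standard library only) that reproduces the value of $\zeta$ quoted for $z=0.995$ below:

```python
import math

def zeta_bessel(z):
    """zeta(z) for the Airy-type expansion of J_lambda(lambda*z):
    (2/3)*zeta**(3/2)    = arccosh(1/z) - sqrt(1 - z**2)  for 0 < z <= 1,
    (2/3)*(-zeta)**(3/2) = sqrt(z**2 - 1) - arccos(1/z)   for z >= 1."""
    if z <= 1.0:
        return (1.5 * (math.acosh(1.0 / z) - math.sqrt(1.0 - z * z))) ** (2.0 / 3.0)
    return -((1.5 * (math.sqrt(z * z - 1.0) - math.acos(1.0 / z))) ** (2.0 / 3.0))

print(zeta_bessel(0.995))   # 0.00630908..., the value quoted for z = 0.995
```

Note that $\zeta$ passes smoothly through $0$ as $z$ passes through the coalescence point $z=1$.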
This numerical instability of the exact coefficients was also observed in [@Temme97], and two methods were introduced to compute the asymptotic expansion (\[approxAii\]). Both of these methods involve expansions in powers of $\zeta$, and are useful when $\zeta$ is small. We take $z=0.995$, close to the coalescing point at $1$; then $\zeta=0.00630908356$. In our approximation (\[a\_s\_n\]) we take $r=1$ and $M=30$. The results are displayed in Table \[table:table1\]. Note that even with such a relatively small $M$ we already obtain $26$ digits of precision in the first two coefficients. Even for $a_{10,0}$ we still have $10$ digits of precision, and this reduces to $4$ digits for $a_{11,1}$. However, increasing $M$ from $30$ to $40$, we obtain $23$ digits of precision for $a_{11,1}$. This illustrates the observation in [@TW14] that the trapezoidal rule for integrals converges exponentially fast. \[table:table1\] The case of coalescence of two saddle points with two branch points {#Sect3} =================================================================== In this section, we consider the Gauss hypergeometric function $$\label{hyp} \hyp{a+\lambda}{b-\lambda}{c}{\frac{1-z}{2}},$$ where $\lambda \to \infty$. The results are related to the limits $$\label{S3limits} \lim_{\lambda,\mu\to\infty}\hyp{\lambda}{\mu}{\nu+1}{\frac{x}{\lambda\mu}}=\lim_{\mu\to\infty}M\left(\mu,\nu+1,\frac{x}{\mu}\right) =\frac{\Gamma(\nu+1)}{x^{\nu/2}} I_\nu\left(2\sqrt{x}\right).$$ From these limits it follows that we expect interesting behaviour when the variable is small, that is, in the case of (\[hyp\]), when $z$ is close to $1$. In [@KwOD14] the authors derive a large $\lambda$ asymptotic expansion in terms of modified Bessel functions. In that paper the derivation was based on integral representation (\[S3Int1\]) in which two saddle points coalesce with two branch points as $z\to 1$. 
One of the branch points was ignored since the integrand was exponentially small near that point, and by considering just the coalescence of two saddle points with one branch point, a uniform asymptotic expansion in terms of modified Bessel functions could be obtained. That expansion was already known from the theory of differential equations, see [@Jones01]. Here we deal with the same integral representation, but now we really consider the coalescence of two saddle points with two branch points. The uniform asymptotic expansion that we derive in subsection \[S3a\] is in terms of the Kummer $M$-function, and holds uniformly for $z$ close to 1. Hence, the result is connected to the first equality sign in (\[S3limits\]). The asymptotic expansion will break down near the singularity of (\[hyp\]), that is, at $z=-1$. For $z$ close to $-1$, we give in subsection \[S3d\] a uniform asymptotic expansion in terms of the Kummer $U$-function. Since the derivation is very similar to the previous subsections we only give the main details. 
$z$ close to $1$ {#S3a} ---------------- For $0<z<1$, we start with the following integral representation (combine [15.8.1](http://dlmf.nist.gov/15.8.E1) with [15.6.3](http://dlmf.nist.gov/15.6.E3) in [@NIST:DLMF]) $$\label{S3Int1} \hyp{a+\lambda}{b-\lambda}{c}{\frac{1-z}{2}}=\frac{L}{2 \pi{\mathrm{i}}}\int_{\infty}^{(0+)}\frac{{\mathrm{e}}^{(\lambda+a-c) \pi{\mathrm{i}}}\left(\tau+1\right)^{b-c-\lambda}}{\tau^{\lambda+a-c+1}\left(\tau+\frac{1+z}{2}\right)^{b-\lambda} }{\, \mathrm{d}}\tau,$$ where $$\label{S3L} L=\frac{\Gamma(c)\Gamma(\lambda+1+a-c)}{ \Gamma(\lambda +a)}.$$ Using $\tau=e^{\pi{\mathrm{i}}}t$ in (\[S3Int1\]), we obtain $$\label{S3Int2} \hyp{a+\lambda}{b-\lambda}{c}{\frac{1-z}{2}}=\frac{L}{2 \pi{\mathrm{i}}}\int_{-\infty}^{(0+)}{\mathrm{e}}^{\lambda f(t)} g(t) {\, \mathrm{d}}t,$$ where $$\label{S3IntDef} f(t)=\ln\left(\frac{\frac{1+z}{2}-t}{1-t}\right)-\ln t,\qquad g(t)=\frac{t^{c-a-1}\left(1-t\right)^{b-c}}{\left(\frac{1+z}{2}-t\right)^{b}}.$$ Here the path of integration starts at ${\mathrm{e}}^{-\pi{\mathrm{i}}}\infty$, encircles $0$ once in the positive direction, and returns to ${\mathrm{e}}^{\pi{\mathrm{i}}}\infty$. The points $1$ and $\frac{z+1}{2}$ lie outside the contour of integration. For $f(t)$ we choose branch cuts between $t=\frac{1+z}{2}$ and $t=1$, and along the negative real axis. Using $z=\cos \theta$, the saddle points are located at $$\label{C2Definespa} t_{\pm}=\frac{1+{\mathrm{e}}^{\pm {\mathrm{i}}\theta}}{2}.$$ The branch points of the phase function are $t=0$, $t=1$ and $t=\frac{z+1}{2}$. Note that these saddle points coalesce when $\theta=0$ with two branch points at $t=1$. To obtain a uniform asymptotic expansion, we use the transformation $$\label{S3p} f(t)=p(\tau,\zeta)+p_0=\ln\left(\frac{\tau-2\zeta}{\tau}\right)+\tau+p_0.$$ We take $\zeta=1-\cos\sigma$. For the function $p(\tau,\zeta)$ the saddle points are located at $$\tau_{\pm}=1-{\mathrm{e}}^{\pm {\mathrm{i}}\sigma},$$ and we will insist that these correspond to $t=t_{\pm}$. 
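As a quick consistency check on this transformation (a sketch with an arbitrarily chosen sample value of $\sigma$): the points $\tau_{\pm}=1-{\mathrm{e}}^{\pm{\mathrm{i}}\sigma}$ are zeros of $p'(\tau,\zeta)$, which here reduces to the vanishing of the quadratic $\tau^{2}-2\zeta\tau+2\zeta$, since $\tau_{+}+\tau_{-}=\tau_{+}\tau_{-}=2\zeta$.

```python
import cmath, math

sigma = 0.7                      # arbitrary sample value
zeta = 1.0 - math.cos(sigma)     # zeta = 1 - cos(sigma)

def dp(tau):
    """p'(tau, zeta) for p(tau, zeta) = ln((tau - 2*zeta)/tau) + tau."""
    return 1.0 / (tau - 2.0 * zeta) - 1.0 / tau + 1.0

for sign in (1, -1):
    tau = 1.0 - cmath.exp(sign * 1j * sigma)
    assert abs(dp(tau)) < 1e-12                              # saddle point of p
    assert abs(tau**2 - 2.0 * zeta * tau + 2.0 * zeta) < 1e-12
```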
This gives us $$\label{S3fttau} f(t_{\pm})=\mp{\mathrm{i}}\theta=1\mp{\mathrm{i}}\sigma-{\mathrm{e}}^{\pm {\mathrm{i}}\sigma}+p_0,$$ from which we obtain $p_0=-\zeta$ and $\theta=\sigma+\sin \sigma$. With the transformation (\[S3p\]), we obtain integral representation $$\label{S3Int3} \hyp{a+\lambda}{b-\lambda}{c}{\frac{1-z}{2}}=\frac{L{\mathrm{e}}^{-\lambda\zeta}}{2 \pi{\mathrm{i}}}\int_{-\infty}^{(0+,2\zeta+)}\frac{{\mathrm{e}}^{\lambda\tau}\left(\tau-2\zeta\right)^{\lambda-b}}{\tau^{\lambda-b+c}}G_{0}(\tau){\, \mathrm{d}}\tau,$$ with $$\label{qG3} q(\tau)=\frac{\tau^{b-c}}{\left(\tau-2\zeta\right)^b},\qquad G_{0}(\tau)=\left(\frac{\tau-2\zeta}{\frac{1+z}{2}-t}\right)^{b}\left(\frac{1-t}{\tau}\right)^{b-c}t^{c-a-1}\frac{{\, \mathrm{d}}t}{{\, \mathrm{d}}\tau},$$ where we will need $$\label{dttau} \frac{{\, \mathrm{d}}t}{{\, \mathrm{d}}\tau}(\tau_\pm)=\sqrt{\frac{p''(\tau_{\pm},\zeta)}{f''(t_{\pm})}},$$ which can be obtained via l’Hôpital’s rule. Hence, we need $$\label{fpdd} f''(t_{\pm})=\pm {\mathrm{i}}\frac{4{\mathrm{e}}^{\mp {\mathrm{i}}\theta}}{ \sin \theta},\qquad p''(\tau_{\pm},\zeta)=\pm {\mathrm{i}}\frac{\sin \sigma}{1-\cos \sigma}.$$ Now (\[Gs\]) reads $$\begin{aligned} G_{s}(\tau)&=&a_{s,0}+a_{s,1}\tau+\frac{\tau^2-2\zeta\tau+2\zeta}{\tau\left(\tau-2\zeta\right)}H_{s}(\tau)\label{S3Gs1}\\ G_{s+1}(\tau)&=&-\frac{\left(\tau-2\zeta\right)^b}{\tau^{b-c}}\frac{{\, \mathrm{d}}}{{\, \mathrm{d}}\tau}\left(\frac{\tau^{b-c}}{\left(\tau-2\zeta\right)^b}H_{s}(\tau)\right),\label{S3Gs2}\end{aligned}$$ where $$\label{anbn} a_{s,1}=\frac{G_{s}(\tau_{+})-G_{s}(\tau_{-})}{\tau_+-\tau_-},\qquad a_{s,0}=G_{s}(\tau_{+})-\tau_{+} a_{s,1}.$$ For $s=0$, we have $$G_{0}(\tau_{\pm})={\mathrm{e}}^{\pm {\mathrm{i}}\left(\left(\frac{c}{2}-b\right)\sigma+(b-a)\frac{\theta}{2}\right)}R,$$ and $$R=\left(\frac{2\sin(\sigma/2)}{\sin(\theta/2)}\right)^{c-\frac{1}{2}}\sqrt{\cos\left(\frac{\sigma}{2}\right)}\left(\cos\left(\frac{\theta}{2}\right)\right)^{c-a-b-\frac{1}{2}}.$$ Thus the first 
two coefficients are $$\begin{aligned} \label{First2} a_{0,0}&=&\frac{\cos\left(\left(b-\frac{c-1}{2}\right)\sigma-(b-a)\frac{\theta}{2}\right)}{\cos \left(\sigma/2\right)} R,\nonumber\\ a_{0,1}&=&\frac{\sin\left(\left(b-\frac{c}{2}\right)\sigma-(b-a)\frac{\theta}{2}\right)}{\sin \sigma} R.\end{aligned}$$ Using the integral representation [@NIST:DLMF [13.4.13](http://dlmf.nist.gov/13.4.E13)] for the Kummer $M$-function we obtain $$\begin{aligned} \label{C2int2aa} \hyp{a+\lambda}{b-\lambda}{c}{\frac{1-z}{2}}&\sim& L{\mathrm{e}}^{-\lambda\zeta} \left(\frac{\lambda^{c-1}}{\Gamma(c)}{ M}(b-\lambda,c,2\zeta\lambda)\sum_{s=0}^\infty \frac{a_{s,0}}{\lambda^s}\right.\nonumber\\ &&\left.+\frac{\lambda^{c-2}}{\Gamma(c-1)}{ M}(b-\lambda,c-1,2\zeta\lambda) \sum_{s=0}^\infty \frac{a_{s,1}}{\lambda^s}\right),\nonumber\\ &&\end{aligned}$$ as $|\lambda|\to\infty$. This expansion is constructed so that it holds uniformly for $z$ near $1$. It will hold for bounded $z$ that is also bounded away from $-1$, but it is probably not very practical when $z$ moves too far from $1$, since the tracking of the multivaluedness is not straightforward. 
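In an implementation, one first computes $\theta=\arccos z$ and then inverts $\theta=\sigma+\sin\sigma$; since $\sigma+\sin\sigma$ is increasing, a simple bisection suffices. The sketch below reproduces the value $\zeta=0.025536930$ that is used for $z=0.9$ in the numerical illustration at the end of this section.

```python
import math

def sigma_zeta(z):
    """Solve theta = sigma + sin(sigma) with theta = arccos(z), 0 < z < 1,
    by bisection (sigma + sin(sigma) is increasing), and return (sigma, zeta)
    with zeta = 1 - cos(sigma)."""
    theta = math.acos(z)
    lo, hi = 0.0, theta   # root lies in [0, theta] since sigma + sin(sigma) >= sigma
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid + math.sin(mid) < theta:
            lo = mid
        else:
            hi = mid
    sigma = 0.5 * (lo + hi)
    return sigma, 1.0 - math.cos(sigma)

sigma, zeta = sigma_zeta(0.9)
print(zeta)   # matches the quoted zeta = 0.025536930 for z = 0.9
```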
In terms of the ‘Olver’ functions (see [@NIST:DLMF [13.2.3](http://dlmf.nist.gov/13.2.E3) and [15.2.2](http://dlmf.nist.gov/15.2.E2)]) this result reads $$\begin{aligned} \label{C3int3} &&\Ohyp{a+\lambda}{b-\lambda}{c}{\frac{1-z}{2}}\nonumber\\ &&\qquad\quad\sim \frac{\Gamma(\lambda+1+a-c)}{ \Gamma(\lambda +a){\mathrm{e}}^{\lambda\zeta}} \left({\bf M}(b-\lambda,c,2\zeta\lambda)\sum_{s=0}^\infty \frac{a_{s,0}}{\lambda^{s+1-c}}\right.\nonumber\\ &&\qquad\qquad\qquad\qquad\qquad\quad\left.+{\bf M}(b-\lambda,c-1,2\zeta\lambda) \sum_{s=0}^\infty \frac{a_{s,1}}{\lambda^{s+2-c}}\right).\end{aligned}$$ Taking in (\[C2int2aa\]) $a={\alpha+\beta+1}$, $b=0$, $c=\alpha+1$ and $\lambda=n$ will give us a uniform asymptotic approximation for the Jacobi polynomials $$\label{Jacobi} P_n^{(\alpha,\beta)}(z)=\frac{(\alpha+1)_n}{n!}\hyp{-n}{n+\alpha+\beta+1}{\alpha+1}{{\frac{1-z}2}},$$ in terms of Laguerre polynomials $$\label{Laguerre} L_n^{(\alpha)}(x)=\frac{(\alpha+1)_n}{n!}M\left(-n,\alpha+1,x\right).$$ We can present the result as $$\begin{aligned} \label{JacobiLaguerre} P_n^{(\alpha,\beta)}(z)&\sim&\frac{\Gamma(n+\beta+1){\mathrm{e}}^{-n\zeta}}{\Gamma(n+\alpha+\beta+1)} \left(L_n^{(\alpha)}(2\zeta n)\sum_{s=0}^\infty \frac{a_{s,0}}{n^{s-\alpha}}\right.\nonumber\\ &&\qquad\qquad\quad\left.+(n+\alpha)L_n^{(\alpha-1)}(2\zeta n)\sum_{s=0}^\infty \frac{a_{s,1}}{n^{s-\alpha+1}}\right),\end{aligned}$$ as $n\to\infty$ uniformly for $z$ near $1$. Coefficients {#S3b} ------------ In order to find the coefficients $a_{s,n}$ in (\[C2int2aa\]), we make use of the method introduced in [@OT94] and propose the following result. 
Let $$\label{S3A0B0} A_{0,0}(\tau,\zeta)=\frac{\tau-2\zeta}{\tau^{2}-2\zeta \tau+2\zeta},\qquad A_{0,1}(\tau,\zeta)=\frac{1}{\tau^{2}-2\zeta \tau+2\zeta},$$ and for $s=0,1,2,\dots$, $n=0,1$, let $$\label{S3An} A_{s+1,n}(\tau,\zeta)=\frac{\left(\tau-2\zeta\right)^{1-b}\tau^{b-c+1}}{\tau^{2}-2\zeta \tau+2\zeta}\frac{{\, \mathrm{d}}}{{\, \mathrm{d}}\tau}\left(\frac{\left(\tau-2\zeta\right)^{b}}{\tau^{b-c}}A_{s,n}(\tau,\zeta)\right).$$ Then the coefficients $a_{s,1}$ have integral representation (\[coeff.asn\]) with $n=1$, but for $a_{s,0}$ we have $$\label{AB0s1} a_{s,0}=\frac{1}{2\pi{\mathrm{i}}}\int_{\mathcal{C}}G_{0}(\tau)\left(A_{s,0}(\tau,\zeta)+(c-1)A_{s-1,1}(\tau,\zeta)\right){\, \mathrm{d}}\tau.$$ Since $G_{0}(\tau)$ is analytic near the saddle points $\tau_{\pm}$, we can write the first expression in (\[anbn\]) as $$\begin{aligned} \label{As1} a_{s,1}&=&\frac{1}{\tau_+ -\tau_-}\frac{1}{2\pi{\mathrm{i}}}\int_{\mathcal{C}}\left(\frac{G_{s}(\tau)}{\tau-\tau_{+}}-\frac{G_{s}(\tau)}{\tau-\tau_{-}}\right){\, \mathrm{d}}\tau\nonumber \\ &=&\frac{1}{2\pi{\mathrm{i}}}\int_{\mathcal{C}}G_{s}(\tau)A_{0,1}(\tau,\zeta){\, \mathrm{d}}\tau,\end{aligned}$$ and combining this result with the second expression in (\[anbn\]) gives us $$\begin{aligned} \label{As0} a_{s,0}&=&\frac{1}{1+{\mathrm{e}}^{{\mathrm{i}}\sigma}}\frac{1}{2\pi{\mathrm{i}}}\int_{\mathcal{C}}\left(\frac{G_{s}(\tau)}{\tau-\tau_{+}}+{\mathrm{e}}^{{\mathrm{i}}\sigma}\frac{G_{s}(\tau)}{\tau-\tau_{-}}\right){\, \mathrm{d}}\tau\nonumber \\ &=&\frac{1}{2\pi{\mathrm{i}}}\int_{\mathcal{C}}G_{s}(\tau)A_{0,0}(\tau,\zeta){\, \mathrm{d}}\tau,\end{aligned}$$ where $\mathcal{C}$ is a simple closed contour which encircles the points $\tau_{\pm}$. 
Using (\[S3Gs2\]) with $s$ replaced by $s-1$ and then integration by parts one obtains $$\begin{aligned} \label{As-10} a_{s,0}&=&\frac{1}{2\pi{\mathrm{i}}}\int_{\mathcal{C}}\frac{\tau^{b-c}}{\left(\tau-2\zeta\right)^{b}}H_{s-1}(\tau)\frac{{\, \mathrm{d}}}{{\, \mathrm{d}}\tau}\left(\frac{\left(\tau-2\zeta\right)^{b}}{\tau^{b-c}}A_{0,0}(\tau,\zeta)\right){\, \mathrm{d}}\tau\nonumber \\ &=&\frac{1}{2\pi{\mathrm{i}}}\int_{\mathcal{C}}\frac{\tau^{2}-2\zeta \tau+2\zeta}{\tau\left(\tau-2\zeta\right)}H_{s-1}(\tau)A_{1,0}(\tau,\zeta){\, \mathrm{d}}\tau\nonumber \\ &=&\frac{1}{2\pi{\mathrm{i}}}\int_{\mathcal{C}}\left(G_{s-1}(\tau)-a_{s-1,0}-a_{s-1,1}\tau\right)A_{1,0}(\tau,\zeta){\, \mathrm{d}}\tau,\end{aligned}$$ where in the second line we are using (\[S3An\]) and in the third line we use (\[S3Gs1\]). Since $A_{1,0}(\tau,\zeta)\sim (c-1)/\tau^2$ as $\tau\to\infty$ we have $$\label{As-11} a_{s,0}=(c-1)a_{s-1,1}+\frac{1}{2\pi{\mathrm{i}}}\int_{\mathcal{C}}G_{s-1}(\tau)A_{1,0}(\tau,\zeta){\, \mathrm{d}}\tau.$$ We can continue with this process and use the fact that for $s=2,3,4,\dots$, we have $A_{s,0}(\tau,\zeta)={\mathcal{O}}\left(\tau^{-3}\right)$ as $\tau\to\infty$. The result is $$\label{A0s} a_{s,0}=(c-1)a_{s-1,1}+\frac{1}{2\pi{\mathrm{i}}}\int_{\mathcal{C}}G_{0}(\tau)A_{s,0}(\tau,\zeta){\, \mathrm{d}}\tau.$$ Hence, this result differs from (\[coeff.asn\]) in the case $n=0$. For the case $n=1$ the details are very similar, but since for $s=1,2,3,\dots$, we have $A_{s,1}(\tau,\zeta)={\mathcal{O}}\left(\tau^{-3}\right)$, as $\tau\to\infty$, we do obtain (\[coeff.asn\]). Combining that result with (\[A0s\]) will give us (\[AB0s1\]). Numerical illustration {#S3c} ---------------------- We check our approximation for the coefficients by using them in uniform asymptotic approximation (\[C2int2aa\]). We take $a=b=c=\frac12$, $z=0.9$ and $\lambda=20$. The corresponding $\zeta=0.025536930$. In the calculation of the coefficients via (\[a\_s\_n\]) we take again $M=30$ and $r=1$. 
The results are displayed in Table \[table:table2\]. As mentioned before, in most publications on uniform asymptotics for integrals, the authors give only $a_{0,0}$ and $a_{0,1}$, as we do in (\[First2\]). Here we illustrate that it is now possible to take many more terms and obtain much better approximations. \[table:table2\] $z$ close to $-1$ {#S3d} ------------------ For $-1<z<0$, we start with integral representation (\[S3Int1\]) which we present as $$\label{S4Int1} \hyp{a+\lambda}{b-\lambda}{c}{\frac{1-z}{2}}=\frac{L}{2\pi{\mathrm{i}}}\int_{\infty}^{(0+)}{\mathrm{e}}^{\lambda f(t)}g(t) {\, \mathrm{d}}t ,$$ where $\Re(a+\lambda)>0$, $c-a-\lambda\neq 1,2,3,\cdots$, and where $$\label{C2bDefineff} f(t)=\ln\left(1+\frac{1+z}{2t}\right)-\ln \left(1+t\right),\qquad g(t)=\frac{t^{c-a-1}\left(1+t\right)^{b-c}}{\left(\frac{1+z}{2}+t\right)^{b}},$$ and $$\label{S4L} L=\frac{\Gamma(c)\Gamma(\lambda+1+a-c)}{{\mathrm{e}}^{\left(c-a-\lambda\right)\pi{\mathrm{i}}}\Gamma(\lambda+a)}.$$ We choose the branch cuts of the phase function $f(t)$ between the points $t=-\frac{1+z}{2}$ and $t=0$, and along the half-line $t<-1$. Using $z=-\cos \theta$, the saddle points are located at $$\label{C2Definespb} t_{\pm}=\frac{{\mathrm{e}}^{\pm {\mathrm{i}}\theta}-1}{2}.$$ When $\theta=0$, we have $z=-1$, and the two saddle points and two of the branch points coalesce at $t=0$. To obtain a uniform asymptotic expansion, we use the transformation $$\label{S4p} f(t)=p(\tau,\zeta)+p_0=\ln\left(\frac{\tau+2\zeta}{\tau}\right)-\tau+p_0.$$ We take $\zeta=1-\cos\sigma$. For the function $p(\tau,\zeta)$ the saddle points are located at $$\tau_{\pm}={\mathrm{e}}^{\pm {\mathrm{i}}\sigma}-1,$$ and we will insist that these correspond to $t=t_{\pm}$. Again this will give us (\[S3fttau\]) and again we have $p_0=-\zeta$ and $\theta=\sigma+\sin \sigma$. 
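The same quick consistency check as in the previous case (again a sketch with an arbitrarily chosen sample value of $\sigma$) confirms that $\tau_{\pm}={\mathrm{e}}^{\pm{\mathrm{i}}\sigma}-1$ are zeros of $p'(\tau,\zeta)$, now given by the vanishing of the quadratic $\tau^{2}+2\zeta\tau+2\zeta$.

```python
import cmath, math

sigma = 0.7                    # arbitrary sample value
zeta = 1.0 - math.cos(sigma)

def dp(tau):
    """p'(tau, zeta) for p(tau, zeta) = ln((tau + 2*zeta)/tau) - tau."""
    return 1.0 / (tau + 2.0 * zeta) - 1.0 / tau - 1.0

for sign in (1, -1):
    tau = cmath.exp(sign * 1j * sigma) - 1.0
    assert abs(dp(tau)) < 1e-12                              # saddle point of p
    assert abs(tau**2 + 2.0 * zeta * tau + 2.0 * zeta) < 1e-12
```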
With the transformation (\[S4p\]), we obtain the integral representation $$\begin{aligned} \label{S3Int3a} &&\hyp{a+\lambda}{b-\lambda}{c}{\frac{1-z}{2}}=\frac{L{\mathrm{e}}^{-\lambda\zeta}}{2\pi{\mathrm{i}}}\int_{\infty}^{(0+)}\frac{{\mathrm{e}}^{-\lambda\tau}\left(\tau+2\zeta\right)^{\lambda-b}}{\tau^{\lambda-c+a+1}}G_{0}(\tau){\, \mathrm{d}}\tau,\end{aligned}$$ with $$q(\tau)=\frac{\tau^{c-a-1}}{\left(\tau+2\zeta\right)^b},\qquad G_{0}(\tau)=\frac{\left(t/\tau\right)^{c-a-1}}{\left({1+t}\right)^{c-b}}\left(\frac{\tau+2\zeta}{\frac{1+z}{2}+t}\right)^{b}\frac{{\, \mathrm{d}}t}{{\, \mathrm{d}}\tau}.$$ The functions $G_s(\tau)$ are defined via (\[Gs\]), and the coefficients $a_{s,n}$ are defined in (\[anbn\]). Again $\frac{{\, \mathrm{d}}t}{{\, \mathrm{d}}\tau}(\tau_\pm)$ follows from (\[dttau\]) and (\[fpdd\]). Thus for $s=0$ we have $$G_{0}(\tau_{\pm})={\mathrm{e}}^{\pm {\mathrm{i}}\left((b-a)\frac{\theta}{2}-\left(b+c-a-1\right)\frac{\sigma}{2}\right)}R,$$ where $$R=\left(\frac{2\sin(\sigma/2)}{\sin(\theta/2)}\right)^{b+a-c+\frac{1}{2}}\sqrt{\cos\left(\frac{\sigma}{2}\right)}\left(\cos\left(\frac{\theta}{2}\right)\right)^{\frac{1}{2}-c}.$$ For the first two coefficients we again have the exact representations $$\begin{aligned} \label{S4a00} a_{0,0}&=&\frac{\cos\left(\left(b-a\right)\frac{\theta}{2}-(b-a+c)\frac{\sigma}{2}\right)}{\cos \left(\frac{\sigma}{2}\right)} R,\nonumber\\ a_{0,1}&=&\frac{\sin\left(\left(b-a\right)\frac{\theta}{2}-(b-a+c-1)\frac{\sigma}{2}\right)}{\sin \sigma} R.\end{aligned}$$ To find the coefficients $a_{s,n}$ we define the rational functions $$\label{S4A0B0} A_{0,0}(\tau,\zeta)=\frac{\tau+2\zeta}{\tau^{2}+2\zeta \tau+2\zeta},\qquad A_{0,1}(\tau,\zeta)=\frac{1}{\tau^{2}+2\zeta \tau+2\zeta},$$ and the other rational functions follow again from (\[S1An\]). 
For the case $n=1$ integral representation (\[coeff.asn\]) still holds, and for the case $n=0$ we have $$\label{S4AB0s} a_{s,0}=\frac{1}{2\pi{\mathrm{i}}}\int_{\mathcal{C}}G_{0}(\tau)\left(A_{s,0}(\tau,\zeta)+(a+b-c)A_{s-1,1}(\tau,\zeta)\right){\, \mathrm{d}}\tau.$$ The details of the derivation are similar to the previous case. Combining integral representation [@NIST:DLMF [13.4.14](http://dlmf.nist.gov/13.4.E14)] with transformation [@NIST:DLMF [13.2.40](http://dlmf.nist.gov/13.2.E40)] gives us $$\begin{aligned} \label{S4C2int2a} &&\hyp{a+\lambda}{b-\lambda}{c}{\frac{1-z}{2}}\nonumber\\ &&\quad\sim \frac{\Gamma(c){\mathrm{e}}^{-\lambda\zeta}}{ \Gamma(a+\lambda)}\left( U(b-\lambda,a+b-c+1,2\zeta\lambda)\sum_{s=0}^\infty \frac{a_{s,0}}{\lambda^{s+c-a-b}}\right.\nonumber\\ &&\qquad\left.-(\lambda+a-c)U(b-\lambda,a+b-c,2\zeta\lambda)\sum_{s=0}^\infty \frac{a_{s,1}}{\lambda^{s+c-a-b+1}}\right),\end{aligned}$$ as $|\lambda|\to\infty$. Numerical illustration {#S3e} ---------------------- We check our approximation for the coefficients by using them in the uniform asymptotic approximation (\[S4C2int2a\]). We take $a=b=c=\frac12$, $z=-0.9$ and $\lambda=20$. The corresponding value is $\zeta=0.025536930$. In the calculation of the coefficients via (\[a\_s\_n\]) we again take $M=30$ and $r=1$. The results are displayed in Table \[table:table3\]. 
\[table:table3\] Saddle point near the end point of the interval {#Sect4} =============================================== According to [@Temme15 Chapter 22], the canonical form is $$\label{S5canon} F_\beta(\lambda,\zeta)=\frac1{\Gamma(\beta)}\int_0^\infty \tau^{\beta-1}{\mathrm{e}}^{\lambda\left(\zeta\tau-\frac12\tau^2\right)} G_0(\tau){\, \mathrm{d}}\tau,$$ and the integration by parts trick is $$\begin{aligned} \label{S5Gs} G_{s}(\tau)&=&a_{s,0}+a_{s,1}\tau+(\tau-\zeta)\tau H_{s}(\tau),\nonumber \\ G_{s+1}(\tau)&=&\tau^{1-\beta}\frac{{\, \mathrm{d}}}{{\, \mathrm{d}}\tau}\left(\tau^{\beta}H_{s}(\tau)\right).\end{aligned}$$ Note that this differs slightly from (\[Gs\]). Using (\[S5Gs\]) in (\[S5canon\]) produces the expansion $$\begin{aligned} \label{S5Uniform} F_\beta(\lambda,\zeta)&\sim& {\mathrm{e}}^{\zeta^2\lambda/4}\left(U\left(\beta-\ifrac12,-\zeta\sqrt\lambda\right)\sum_{s=0}^\infty \frac{a_{s,0}}{\lambda^{(2s+\beta)/2}}\right.\nonumber\\ &&\qquad\qquad\left.+\beta U\left(\beta+\ifrac12,-\zeta\sqrt\lambda\right)\sum_{s=0}^\infty \frac{a_{s,1}}{\lambda^{(2s+\beta+1)/2}}\right),\end{aligned}$$ where for the parabolic cylinder function $U(a,z)$ we have used integral representation [@NIST:DLMF [12.5.1](http://dlmf.nist.gov/12.5.E1)]. The coefficients are defined via $$\begin{aligned} \label{S5asGs} a_{s,0}=G_s(0),\qquad a_{s,1}=\frac{G_s(\zeta)-G_s(0)}{\zeta}.\end{aligned}$$ Again the integral representations (\[coeff.asn\]) hold, where in this case we have $$\begin{aligned} \label{S5As} A_{0,0}(\tau,\zeta)&=&\frac1\tau,\qquad A_{0,1}(\tau,\zeta)=\frac1{\tau\left(\tau-\zeta\right)},\nonumber\\ A_{s+1,n}(\tau,\zeta)&=&\frac{\tau^{\beta-1}}{\zeta-\tau} \frac{{\, \mathrm{d}}}{{\, \mathrm{d}}\tau}\left(\tau^{1-\beta}A_{s,n}(\tau,\zeta)\right),\end{aligned}$$ for $s=0,1,2,\dots$, $n=0,1$. Hence, in the case that $|\zeta|<r$, the coefficients can be computed numerically via (\[a\_s\_n\]). For more details on this case, see [@Temme15 Chapter 22]. 
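A quick way to sanity-check the canonical form (\[S5canon\]) and the structure of (\[S5Uniform\]) is the trivial case $\beta=1$, $G_0(\tau)\equiv 1$: then $a_{0,0}=1$, $a_{0,1}=0$ and $H_0=0$, so all later coefficients vanish and the expansion terminates, leaving $F_1(\lambda,\zeta)={\mathrm{e}}^{\zeta^2\lambda/4}\,U(\tfrac12,-\zeta\sqrt\lambda)\,\lambda^{-1/2}$, which reduces to a complementary error function by completing the square. A minimal numerical check, assuming SciPy is available (for this trivial $G_0$ the series is exact, so this is an identity rather than an approximation):

```python
import math
from scipy.integrate import quad
from scipy.special import erfc

lam, zeta = 20.0, 0.5

# Canonical integral with beta = 1 and G_0 = 1.
lhs, _ = quad(lambda t: math.exp(lam * (zeta * t - 0.5 * t * t)), 0.0, math.inf)

# The single surviving expansion term, with U(1/2, x) written via erfc.
rhs = math.exp(0.5 * lam * zeta ** 2) * math.sqrt(math.pi / (2.0 * lam)) \
      * erfc(-zeta * math.sqrt(lam / 2.0))

print(lhs, rhs)  # the two values agree to quadrature accuracy
```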
Acknowledgments {#acknowledgments .unnumbered} =============== This research was supported by a research grant (GRANT11863412/70NANB15H221) from the National Institute of Standards and Technology.

N. Bleistein, Uniform asymptotic expansions of integrals with stationary point near algebraic singularity, *Comm. Pure Appl. Math.* 19:353–370 (1966). MR 0204943.

A. B. Olde Daalhuis and N. M. Temme, Uniform Airy-type expansions of integrals, *SIAM J. Math. Anal.* 25:304–321 (1994). MR 1266561.

L. N. Trefethen and J. A. C. Weideman, The exponentially convergent trapezoidal rule, *SIAM Rev.* 56:385–458 (2014). MR 3245858.

T. M. Dunster, A. Gil, and J. Segura, Computation of asymptotic expansions of turning point problems via Cauchy’s integral formula: Bessel functions, *Constructive Approximation* (2017), 1–31.

C. Chester, B. Friedman, and F. Ursell, An extension of the method of steepest descents, *Proc. Cambridge Philos. Soc.* 53:599–611 (1957). MR 0090690.

F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller, and B. V. Saunders, eds., NIST Digital Library of Mathematical Functions, <http://dlmf.nist.gov/>, Release 1.0.12 of 2016-09-09.

R. Vidunas and N. M. Temme, Symbolic evaluation of coefficients in Airy-type asymptotic expansions, *J. Math. Anal. Appl.* 269:317–331 (2002). MR 1907888.

J. L. López and N. M. Temme, Two-point Taylor expansions of analytic functions, *Stud. Appl. Math.* 109:297–311 (2002). MR 1934653.

N. M. Temme, Numerical algorithms for uniform Airy-type asymptotic expansions, *Numer. Algorithms* 15:207–225 (1997). MR 1475178.

S. Farid Khwaja and A. B. Olde Daalhuis, Uniform asymptotic expansions for hypergeometric functions with large parameters IV, *Anal. Appl. (Singap.)* 12:667–710 (2014). MR 3277949.

D. S. Jones, Asymptotics of the hypergeometric function, *Math. Methods Appl. Sci.* 24:369–389, Applied mathematical analysis in the last century (2001). MR 1821932.

N. M. Temme, Asymptotic methods for integrals, *Series in Analysis*, vol. 6, World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2015. MR 3328507.

[^1]: Address for correspondence: Dr A. B. Olde Daalhuis, School of Mathematics, University of Edinburgh, Edinburgh, EH9 3FD; email: A.OldeDaalhuis@ed.ac.uk
--- abstract: 'The identification of strategic business partnerships can potentially provide competitive advantages for businesses; however, this task can be challenging. To help with this task, this study presents a similarity model between businesses that considers the opinions of users on content shared by businesses on social media. Thus, this model captures significant virtual relationships among businesses that are generated by users in the virtual world. Besides, we propose an algorithm for detecting business communities in the considered model. We also propose an algorithm to identify possible business outliers in the detected communities, which could represent an automatic way to identify non-obvious relations that might deserve the particular attention of business owners. By exploring approximately 280 million user reactions on Facebook, we show that our results could favor the development of, for example, a new strategic business partnership recommendation service.' address: - ' , , , ' - ' , , , ' author: - - - title: Towards Business Partnership Recommendation Using User Opinion on Facebook --- Introduction ============ Strategic business partnerships are essential for various reasons. For instance, they can provide a competitive advantage for a business. A partnership with a true win-win intention could provide the edge a business needs to surpass its competitors. However, a poorly thought-out partnership can hinder instead of help, making this procedure challenging [@bergquist1995building; @elmuti2001overview]. Recently, businesses from different segments have been exploring social media for various purposes, including marketing. That involves producing and sharing content on social media platforms to promote a service or product, aiming to achieve branding goals [@hoffman2010can; @tuten2017social]. Thus, social media platforms have also become crucial for customer relationship management, among other things [@ferrari2016content; @chaffey2016global]. 
Those interactions in social media generate a considerable amount of customer-business relationship data. Exploring these data could be an interesting alternative to complement traditional business analyses, such as market analysis and market segmentation, which typically do not scale easily. Since the collection of customer-business relationship data on social media can be cheaper and faster, their proper analysis enables a market study in possibly shorter time and with less effort [@culotta2016mining]. We know that many factors, such as tastes and opinions, could affect customers’ preferences for businesses [@trainor2014social; @agnihotri2016social; @hudson2013impact]. We believe that users’ preferences for specific businesses could be implicitly manifested in the actions performed on the content shared by those companies on social media. In some social media platforms, such as Facebook, users can react to the companies’ content. Thus, these reactions could be a proxy to capture these implicit preferences. Preferences implicitly manifested by users through actions in social media were also assumed to exist in previous works [@cranshaw2012livehoods; @silva2014you; @Mueller2017; @brito18]. To contribute to the task of identifying strategic business partnerships, this study aims to find significant virtual relationships among businesses that are generated spontaneously by users in social media. The first step towards that is proposing a similarity model between businesses that considers the opinions, i.e., reactions, of users on content shared by businesses on social media. For that, we use public data from social media. Particularly, we explored more than 280 million public user reactions collected from Facebook’s users about businesses in Curitiba, PR, Brazil. Facebook, among many other social media platforms, has been chosen because it is a popular platform among customers and companies [@ferrari2016content]. 
Besides, we also provide an iterative algorithm for detecting business communities in the presented model. We found that it enables the identification of business communities that have a surprising similarity regarding the categories of the businesses inside each community. Even without considering any information about the businesses themselves, all communities have strong semantic similarities among the businesses that compose them, indicating that our approach has the potential to extract cohesive communities of businesses. We also propose an automatic approach to extract possible business outliers in those communities. We observe that non-obvious relationships between businesses could be extracted by using that approach, which might deserve the particular attention of business owners. Our results favor the development of new services and applications, for example, a new strategic business partnership recommendation service that can be useful for entrepreneurs and business owners. This service can contribute to sales improvement, as well as to keeping companies more sustainable in the market. We organize the rest of the study as follows. Section \[sec:relatedWork\] presents some of the main related work to this study. Section \[sec:datacollection\] describes the particularities of the data collected, which are used in the model proposed in Section \[sec:modelanalysis\]. Section \[sec:results\] presents the results obtained by this study. Finally, Section \[sec:conclusion\] concludes the paper and points out future work. Related Work {#sec:relatedWork} ============ According to Mukhopadhyay [@mukhopadhyay2018opinion], with the advent of online technologies and social media, individuals are increasingly sharing their views and opinions through the internet, which are influencing and affecting business, sociopolitical, and personal contexts. 
A considerable amount of effort has been made by the academic community with the objective of extracting relevant information from social media, which is evidenced by the constant growth of the literature. For instance, the average annual growth rate of the number of publications over the last five years is around $0.15$ at Scopus and $0.37$ at the Web of Science Core Collection[^1], and a similar trend is observed in other scientific databases. The information from social media can be used for different purposes [@SilvaCSUR2019], ranging from mobility understanding [@hudson2013impact; @anaBeyondSights] and well-being improvement [@schwartz2013characterizing; @DeChoudhury2016] to city semantics understanding [@aiello2016chatty; @santosWI2018] and gender behavior study [@Mueller2017; @Magno2014]. Some of the studies in this direction have implications for existing and new businesses. For instance, Cheng et al. [@cheng2011exploring] presented spatiotemporal analyses of users’ displacement, exploring a dataset from a Foursquare-like system, i.e., a social media platform for location sharing. Their results can support decisions about where and when to invest resources in a new business. By observing what people eat and drink in location-based social networks, Silva et al. [@silva2014you] developed an approach to identify boundaries between different cultures, which could be useful, for example, to businesses from a particular country that desire to verify the compatibility of cultural preferences across different markets. Cranshaw et al. [@cranshaw2012livehoods] introduced the concept of Livehoods, which are regions of a city partitioned and grouped by the similarity of users’ behaviors. This behavioral targeting study may also be relevant for strategic decisions in companies. The geographic interaction characteristics of online public opinion propagation were also studied by Ai et al. [@ai2017national]. Barbier et al. 
[@barbier2011understanding] proposed a method to understand the behavior and dynamics of online groups, showing that their results could have practical business implications, such as a better understanding of customers and influence propagation. Yang et al. [@yang2017structural] presented the concept of core-periphery structures, that is, a two-class partition of nodes (one class is the core, and the other is the periphery), and provided empirical evidence that social communities always have this type of structure. This is an important study on the classic task of community detection in social network analysis, which has implications for the different influences among users within the network. Thus, core-periphery structures enable the identification of an influential actor, a business in our case, in a community. Wu et al. [@wu2016simple] were also concerned with studying a community detection task, presenting a new framework for detecting parallel communities, called SIMPLifying and Ensembling (SIMPLE). Similarly, Liu et al. [@liu2017markov] used the Markov network for discovering latent links, i.e., links which are not directly observable but rather inferred, among people in social networks. Data analysis of social media based on community detection was also used by Alamsyah et al. [@alamsyah2017social], and some challenges of community detection in social media were presented by Tang and Liu [@tang2010understanding]. Social community detection studies are important because they can be applied to networks of different types. In fact, this study presents a business network demonstrating the utility of social community detection in the business context. Grizane and Jurgelane [@grizane2017social] reinforce the importance of social media as a marketing tool for business, and present a model assessing the benefits of investing financial resources in social media. Mahony et al. 
[@mahony2018if] investigate the adoption of social media by small businesses, showing that the use of social media can bring benefits not only to large companies but also to small and medium businesses. Kafeza et al. [@kafeza2017exploiting] also focused on assessing business process performance using social media analysis and community detection methods, but with the differential of examining how communities change over time. An interactive community mapping and detection scheme to reveal the dynamics of communities’ evolution around an event is proposed by Giatsoglou et al. [@giatsoglou2015user]. This approach to identifying static and evolutionary communities over time is of particular interest in investigating business dynamism. Dynamic evolution over time was also studied by Pepin et al. [@pepin2017visual], who presented an approach based on a graph model easily adaptable to an interactive visualization. Visual analysis can be useful, for example, in the presentation of business communities and how they change over the years. Considering that the contents of social media are dynamic, Palsetia et al. [@palsetia2014excavating] presented an approach for detecting social communities based on posts, comments, and tweets of users. Their approach is, in some aspects, similar to Algorithm \[alg:businessCommunity\] presented in this study: they iteratively remove communities from the main graph, making the detection of the current community not influenced by previously detected communities. Closer to the proposal of this study, two studies were developed to help entrepreneurs and decision makers find the best place in a city to open a new business, with the core decision process based on social media data [@lin2016goldmine; @karamshuk2013geo]. Regarding the study of Lin et al. 
[@lin2016goldmine], business information, such as business type, location, and check-ins, is collected from public pages of Facebook in order to recommend the best places in the city of Singapore to open a new business. Similarly, Karamshuk et al. [@karamshuk2013geo] collected information from Foursquare also aiming to recommend better locations for businesses, but in contrast to the static data explored in the study of Lin et al., Karamshuk et al. also considered users’ mobility. The authors of this present study have previously performed a study in this direction [@diegoCourb18]. To the best of our knowledge, [@diegoCourb18] differs from all other studies available in the literature, since it aims to identify virtual relationships among businesses that are generated spontaneously by users in social media. To achieve this goal, that study proposes a new way of modeling these relationships, as well as a strategy to extract relevant connections among businesses. Also, it presents a strategy for detecting business communities in the proposed model. The present study significantly builds upon our previous work [@diegoCourb18]. First, this study proposes a new approach to extract business outliers in the detected communities. With it, we observe that non-obvious relationships between businesses could be extracted, improving the discussion and analysis of the detected communities. Also, we present important properties of our dataset, and we discuss more details about the proposed model, for instance, key statistics of the model. It is important to note that our study could be used in conjunction with some of the previous efforts. For instance, the model proposed by Grizane and Jurgelane [@grizane2017social] could be used in conjunction with the model proposed in this research (explained in Section \[sec:modelanalysis\]), in order to enrich the information provided to entrepreneurs about the impacts of the use of social media in business. 
Data Collection and Processing {#sec:datacollection} ============================== Data Choice {#sec:DataChoice} ----------- The decision of what data to collect is important to support further analysis, as well as to meet possible limitations imposed on the data collection process. For this study, data were collected from Facebook. As the purpose of the model is to identify virtual relationships among businesses exploring user reaction data on Facebook, the data were chosen considering our objectives and what is publicly available on Facebook. Table \[tab:datastructure\] displays the structure of the considered data. We could choose similar information from other social media platforms; however, this assessment is outside the scope of the present study.

| **Business Data** | Example | **User Reaction Data** | Example |
|---------------------|-----------------------|----------------------------------|-----------------|
| Business ID | 166765230043005 | User ID | 154573625550 |
| Business Name | Rubiane Frutos do Mar | User Name | Fulano de Tal |
| Location | -25.516122, 49.231571 | Business ID that User Reacted to | 166765230043005 |
| Category | Seafood Restaurant | Reaction Type | *Like* |
| Number of Check-ins | 38627 | | |
| Number of Fans | 15532 | | |
| Average Evaluation | 4.6 | | |

: Structure of the considered data.[]{data-label="tab:datastructure"}

The first column of Table \[tab:datastructure\], called *Business Data*, represents data referring to the businesses themselves, such as their geographical location, their category (i.e., the market sector in which the business operates), and so forth. Therefore, each business in the dataset has all the information described in the first column. The second column contains *User Reaction Data* for businesses in our dataset. Each reaction expressed on Facebook comes from a user and refers to a particular business. 
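As an illustration only, one record of each column of Table \[tab:datastructure\] could be represented by simple typed records; the field names below are ours, not Facebook’s:

```python
from dataclasses import dataclass

@dataclass
class Business:
    """One entry of the Business Data column of Table [tab:datastructure]."""
    business_id: str
    name: str
    location: tuple    # (latitude, longitude)
    category: str
    checkins: int
    fans: int
    avg_evaluation: float

@dataclass
class Reaction:
    """One entry of the User Reaction Data column."""
    user_id: str
    user_name: str
    business_id: str   # business the user reacted to
    reaction_type: str # e.g. "Like"

r = Reaction("154573625550", "Fulano de Tal", "166765230043005", "Like")
print(r.reaction_type)  # Like
```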
User reaction data makes it possible to create similarity connections among businesses, primarily by using the common reactions expressed by users regarding two or more businesses, as explained in Section \[sec:modelanalysis\]. The business data is used mainly to extract names and categories of the businesses, which assists the reaction collection and the evaluation of the results presented in Section \[sec:results\]. All information collected is open and publicly available on the Facebook platform. More details can be found on the Facebook Graph API[^2]. Data Collection --------------- ![Illustration of the data collection process.[]{data-label="fig:DataCollMethod"}](DataCollectionEnglish.png){width="95.00000%"} Figure \[fig:DataCollMethod\] illustrates the main steps performed in the collection process. The data were collected using the Facebook Graph API[^3]; all business data collected are located in the city of Curitiba, Brazil, and all user reaction data are from November to December of 2017. First, we collected data referring to the first column of Table \[tab:datastructure\], i.e., *Business Data*. The Facebook Graph API requires a geographical coordinate and a radius in meters, and returns results considering the coordinate entered as the center of a circle with the informed radius, returning up to $800$ results per search, that is, up to $800$ businesses in this case. Since several regions of the studied city may have more than this number of businesses, to increase the chance of getting most businesses of all regions of Curitiba, we considered twenty-one different geographic searches throughout the city. Each geographic search has a radius of $2000$ meters and is centered in a different region of Curitiba, as shown in Figure \[fig:DataCollGrid\]. Figure \[fig:HeatMap\] shows a heatmap representing the number of businesses found in different regions by the collection process just described. 
The redder the color, the more businesses were found in that particular location. As we can see, the central region of the city has a reddish coloration, indicating that more businesses were collected in that region, as expected. Despite that, it is also possible to notice that the resulting dataset includes businesses spread all around the city of Curitiba. Figure \[fig:mapLike\] in Appendix \[app:reacDatabase\] shows how user reactions are distributed across regions of the city; as expected, the number of reactions is larger in the city center. ![Geographic searches considered in the business search process.[]{data-label="fig:DataCollGrid"}](DataCollectionGrid.png){width="\textwidth"} ![Heatmap representing the number of businesses found in different regions by the business search process.[]{data-label="fig:HeatMap"}](HeatMap_Business.png){width="\textwidth"} After obtaining the results of the geographical searches (*Business Data* in Table \[tab:datastructure\]), containing basic data of the businesses in Curitiba, the reactions of the users (*User Reaction Data* in Table \[tab:datastructure\]) were collected from the business pages previously obtained. For that, we obtained the reactions to the posts on the business pages. Because some businesses’ pages have hundreds of posts and others have millions, we only collected reactions to the first one hundred posts. There are five types of reactions available on Facebook, namely *Like, Angry, Wow, Sad and Thankful*; we included all types in the database. We collected a total of $1,986$ georeferenced pages and approximately $280$ million user reactions related to those pages. Appendix \[app:reacDatabase\] presents supplementary information about this dataset. 
Figure \[fig:compLike\] shows the top twenty businesses regarding the number of user reactions; in addition, Figure \[fig:catLike\] shows the top twenty business categories concerning the number of user reactions. Data cleaning ------------- After obtaining all the data, an automatic cleaning procedure was performed to increase the consistency of the dataset, consequently increasing the consistency of the final results. We performed three main steps: - Removal of duplicate records; - Removal of inconsistent records (e.g., unnamed pages or pages without location); - Removal of pages that do not represent businesses (for example, a public square) and their reactions. After the cleaning process, the dataset is left with $1,926$ pages, all representing businesses, and approximately $260$ million reactions. Modeling and Strategies for Data Analysis {#sec:modelanalysis} ========================================= Overview -------- The main steps employed in this study to achieve the proposed objectives can be described in a framework illustrated in Figure \[fig:Fluxograma\]. The framework inputs are the *Clean Data*, described in Section \[sec:datacollection\], and a *Target Business*, the business chosen to be analyzed. As outputs, we obtain the *Egonet of the Target Business*, a network of the most relevant direct connections of the target business, and the *Business Tagged Communities*, tagged communities of businesses of which the target business is part. All non-standard businesses inside communities, considered outliers, are tagged. We provide more details about those outputs next. ![General view of the proposed analysis framework for identifying virtual relations among businesses with social media data.[]{data-label="fig:Fluxograma"}](MethodologyFramework.png){width="95.00000%"} Business Relationship Graph {#sec:graph} --------------------------- With the obtained data, we can then create a model to represent the virtual relations among businesses. 
The model is an undirected graph in which vertices represent businesses, and weighted edges represent relations between two businesses. This relationship is built by looking at the reactions of users in common between any two businesses. The more common reactions two businesses have in proportion to their total reactions, the stronger the relationship between them. Thus, we weight an edge by the Jaccard index of the sets of users who reacted to each business, representing an index of affinity or similarity between the two sets. More formally, let $ B=\{b_1,b_2,...,b_{n_b}\}$ be the set of all businesses, where $ n_b $ is the total number of businesses in the dataset, and let $ U_i $ be the set of all users who reacted to the *i-th* business. Thus, the graph is defined as in (\[eq:Graph\]): $$\label{eq:Graph} BusinessGraph = (V,E,W),$$ where vertices are businesses, $ V = B $; edges exist if businesses have a minimum number of users’ reactions in common, $ E = \{(i,j) : |U_i \cap U_j|>lowerBound\} $; and the weights of the edges are given, as in Equation (\[eq:GraphWeight\]), by the *Jaccard Index*: $$\label{eq:GraphWeight} W(i,j) = \begin{cases} \frac{|U_i \cap U_j|}{|U_i \cup U_j|} & \text{if } (i,j) \in E \\ 0 & \text{if } (i,j) \notin E \end{cases}$$ Filters and Graph Consistency ----------------------------- In order to increase the consistency of the information about the graph structure, we consider two essential filtering steps: a reaction filter and a weak edge filter. First, the reaction filter eliminates negative reactions (of the types *Angry* and *Sad*), because for possible partnerships between businesses what matters are positive reactions. Then, the reaction filter eliminates users who do not frequently express themselves about businesses in $ B $. 
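The graph defined by Equations (\[eq:Graph\]) and (\[eq:GraphWeight\]) can be built directly from the reaction sets $U_i$. A minimal sketch (names are illustrative; the real pipeline would also apply the filters described in this section):

```python
from itertools import combinations

def business_graph(reactions, lower_bound):
    """Weighted edges per Equations (eq:Graph) and (eq:GraphWeight).

    reactions: dict mapping business id -> set of ids of users who reacted.
    Returns a dict mapping (i, j) -> Jaccard weight, keeping only pairs
    with more than lower_bound reacting users in common.
    """
    weights = {}
    for i, j in combinations(sorted(reactions), 2):
        common = len(reactions[i] & reactions[j])
        if common > lower_bound:
            weights[(i, j)] = common / len(reactions[i] | reactions[j])
    return weights

# Toy example: businesses "a" and "b" share 3 of their 5 users overall.
toy = {"a": {1, 2, 3, 4}, "b": {2, 3, 4, 5}, "c": {9}}
print(business_graph(toy, lower_bound=1))  # {('a', 'b'): 0.6}
```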
Thus, users with two or fewer reactions are eliminated from the model. On the other hand, the filter also eliminates users with too many reactions, for two main reasons: (i) the number of edges $ a $ that a user with $ m $ reactions generates in the graph grows quadratically, $ a = \frac{m(m-1)}{2} $; and (ii) users with too many reactions could be robots (*bots*), a problem that appears in several Web systems [@tasse2017state]. Figure \[fig:UserLikeDistribution\] depicts the distribution of the number of users by number of reactions. After the reaction filtering process, the resulting dataset has 220 million reactions. ![Distribution of the number of users by number of reactions.[]{data-label="fig:UserLikeDistribution"}](Distr_LikesEnglish.png){width="95.00000%"} Second, the weak edge filter removes possible noise in the graph structure by removing edges classified as weak. We classify an edge as weak by performing a random experiment, in which the observed common reactions are randomly distributed among all possible edges in the graph, forming a random graph. The edges of the original structure with weights similar to those of the experiment’s graph can then be considered weak edges. The expected value and the variance of the weight of any edge in the random graph (with binomial distribution) are, respectively, given by Equations (\[eq:ExpectedEdge\]) and (\[eq:VarianceEdge\]): $$\label{eq:ExpectedEdge} \mu = E[X] = \frac{n_r}{n_c}$$ $$\label{eq:VarianceEdge} \sigma^2 = Var[X] = \frac{n_r}{n_c} \big(1-\frac{1}{n_c}\big)$$ where $ n_r=\frac{\sum_{i,j:i \neq j} |U_i \cap U_j|}{2} $, $ n_c = \frac{n_b (n_b-1)}{2} $, and $ n_b $ is the number of businesses in the dataset. In this way, an edge between businesses $ i $ and $ j $ is weak if $ |U_i \cap U_j| \leq lowerBound $, with $lowerBound$ as defined in Equation (\[eq:LowerBound\]): $$\label{eq:LowerBound} lowerBound = \mu + 3\sigma$$ For the data collected in this study, the calculated values were $ \mu = 120.289 $ and $ \sigma = 10.96 $. 
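These formulas translate directly into code. In the sketch below, $n_r$ is back-computed from the reported $\mu$ purely for illustration, since the raw pairwise totals are not listed in the text:

```python
import math

def weak_edge_threshold(n_r, n_b):
    """lowerBound = mu + 3*sigma, per Equations (eq:ExpectedEdge)-(eq:LowerBound).

    n_r: half the sum over business pairs of their common reacting users;
    n_b: number of businesses (graph vertices).
    """
    n_c = n_b * (n_b - 1) / 2            # number of possible edges
    mu = n_r / n_c                       # expected common users per edge
    sigma = math.sqrt((n_r / n_c) * (1.0 - 1.0 / n_c))
    return mu + 3.0 * sigma

n_b = 1926
n_c = n_b * (n_b - 1) // 2
# With mu around 120.289 the cut lands near 153.19, consistent with the
# threshold reported in the text.
print(weak_edge_threshold(120.289 * n_c, n_b))
```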
Thus, $ lowerBound = 153.195$. Considering the dataset of this study, $ \num[group-separator={,}]{978410} $ edges were eliminated, resulting in a total of $ \num[group-separator={,}]{223939} $ edges in the graph with a lower probability of being random noise. Detection of Business Communities {#subsec:communitiesdetection} --------------------------------- Given a consistent network of business relationships, an essential step in achieving the study's goal is to detect business communities. Since the business graph has a diameter of $4$, and is therefore not sparse given its numbers of nodes and edges, community detection algorithms that search for cliques or dense subgraphs with optimal solutions, such as the Clique Percolation Method, have a very high computational complexity; therefore, they are not applicable in this study. Raghavan et al. [@raghavan2007near] proposed a community detection algorithm based on label propagation, which iteratively uses the exchange of labels between adjacent vertices in a way that promotes convergence of labels. This algorithm operates in almost linear time, which makes it tractable for dense graphs. For the problem addressed here, communities should have at least four businesses (*minSize*) and at most thirty businesses (*maxSize*), since huge communities lose cohesion in possible recommendations. Therefore, an iterative algorithm, Algorithm \[alg:businessCommunity\], is proposed for the detection of communities of businesses. The inputs of this algorithm are the business graph (*BusinessGraph*), the minimum size (*minSize*), and the maximum size (*maxSize*) of the communities, and the output is a set of business communities.
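A simplified, dependency-free sketch of the procedure follows: a plain label-propagation core in the spirit of Raghavan et al., plus the outer size-filtering loop of Algorithm \[alg:businessCommunity\]. The weak-edge pruning between rounds is omitted here, and all names are ours:

```python
import random
from collections import Counter, defaultdict

def label_propagation(adj, max_iter=100, seed=0):
    """Asynchronous label propagation: each vertex repeatedly adopts the
    label most common among its neighbours until no vertex can improve.

    adj: dict mapping node -> set of neighbour nodes.
    Returns a list of communities (sets of nodes).
    """
    rng = random.Random(seed)
    labels = {v: v for v in adj}
    for _ in range(max_iter):
        changed = False
        order = list(adj)
        rng.shuffle(order)
        for v in order:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            if counts.get(labels[v], 0) < best:
                labels[v] = rng.choice(
                    [lab for lab, c in counts.items() if c == best])
                changed = True
        if not changed:
            break
    groups = defaultdict(set)
    for v, lab in labels.items():
        groups[lab].add(v)
    return list(groups.values())

def detect_business_communities(adj, min_size=4, max_size=30, rounds=10):
    """Outer loop of the algorithm (simplified): keep size-bounded
    communities and re-run propagation on the remaining vertices.
    The paper additionally prunes the weakest edges before each round."""
    kept = []
    for _ in range(rounds):
        comms = label_propagation(adj)
        in_range = [c for c in comms if min_size <= len(c) <= max_size]
        if not in_range:
            break
        kept.extend(in_range)
        leftover = set(adj) - set().union(*in_range)
        adj = {v: adj[v] & leftover for v in leftover}
    return kept
```

Because labels only travel along edges, communities never span disconnected parts of the graph, which is the property that keeps the detected groups cohesive.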
The algorithm performs the following key steps: - Detection of communities with the algorithm described in [@raghavan2007near]; - Business communities with size between *minSize* and *maxSize* are saved for the final return; - The union of the detected communities not saved for return composes a new graph, named *G*; - From this graph *G*, weak edges are cut, forming the graph for the next iteration. \[alg:businessCommunity\] $G \leftarrow BusinessGraph$\ $allCommunities \leftarrow \emptyset$\ $minEdge \leftarrow \min_{i,j \in G} W(i,j)$\ $counter \leftarrow 1$\ $allCommunities$; The communities detected according to Algorithm \[alg:businessCommunity\] are subgraphs that tend to be dense (many edges), so businesses within the same community have greater cohesion than businesses randomly chosen in the graph. This cohesion is formed by spontaneous user reactions, without additional information that could introduce a bias into the detected communities. Clustering of Business Communities {#sec:clusteringBusCom} ---------------------------------- This section describes the steps to cluster business communities by their similarity. As we are dealing with Facebook data, the clustering process considers the categories of businesses given by Facebook; however, similar processes could be applied to data from any other social media platform. Facebook classifies all businesses into one of seven categories: *Interest; Community Organization; Media; Public Figure; Businesses; Non-Business Places*; and *Other*. All of them have subcategories, but the largest number of subcategories ($ 22 $), as well as the greatest diversity, is found under the *Business* category. Therefore, we considered all subcategories within *Interest, Community Organization, Media, Public Figure, Non-Business Places,* and *Other* as their parent categories. For instance, all subcategories of *Interest* are transformed into *Interest*. For the *Business* category, all subcategories have been considered.
Under each of them, there are sub-subcategories. The sub-subcategories of *Business* were disregarded, as this level of specialization was not considered interesting for the analysis carried out. The *Advertising or Marketing* subcategory of *Business* has, for example, the sub-subcategories *Advertising Agency* and *Copywriting Service*. In this case, all sub-subcategories within *Advertising or Marketing* are considered *Advertising or Marketing*. We performed the same procedure for all other subcategories within *Business*. ![The business category names obtained after the renaming process.[]{data-label="fig:FaceCategories"}](FacebookCategories.png){width="\textwidth"} ![Examples of non-normalized feature vector for each community.[]{data-label="fig:CatVectors"}](CategoryVectors.png){width="\textwidth"} After this process, a total of $ 28 $ business category names ($ 6 $ from the parent categories, and $ 22 $ subcategories from *Business*) were obtained, as illustrated in Figure \[fig:FaceCategories\]. We then built a feature vector with these $ 28 $ categories and counted, for each community, the number of occurrences of businesses in each of the $ 28 $ categories, as in Figure \[fig:CatVectors\]. These values are then normalized based on the maximum number of occurrences in a given feature for each community. With this feature vector (represented formally as $ vector(c) $ for a given community $ c $), it is possible to identify the similarity of different communities and perform a clustering process. The clustering algorithm used was k-means with the Euclidean distance [@hartigan1975clustering]. For choosing the $ k $ parameter of the k-means algorithm, several different values were tested, and the value with the smallest sum of squared errors was chosen as the best fit for the data. For the dataset presented in this paper, the best fit was $ k = 8 $; this value is therefore kept throughout this study.
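A dependency-free sketch of the clustering step: plain Lloyd's k-means with Euclidean distance over the normalized community vectors, returning the sum of squared errors used for choosing $k$. Any standard k-means implementation would serve; the code and names below are ours:

```python
import random

def kmeans(vectors, k, iters=50, seed=0):
    """Plain Lloyd's k-means (a stand-in for any standard implementation).
    Returns (assignments, centroids, sum of squared errors)."""
    rng = random.Random(seed)
    centroids = [list(v) for v in rng.sample(vectors, k)]
    assign = [0] * len(vectors)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, v in enumerate(vectors):
            assign[i] = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(v, centroids[c])))
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [v for i, v in enumerate(vectors) if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members)
                                for col in zip(*members)]
    sse = sum(sum((a - b) ** 2 for a, b in zip(v, centroids[assign[i]]))
              for i, v in enumerate(vectors))
    return assign, centroids, sse
```

The $k$ selection described above would then amount to evaluating several values and keeping the one judged the best fit by its SSE.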
Formally, k-means receives the set of all communities (called $allCommunities $ as in Algorithm \[alg:businessCommunity\]) and returns a set of clusters, where clusters are disjoint non-empty subsets of $ allCommunities $: $$clusters = kmeans(allCommunities),$$ $\forall{cl_1, cl_2 \in clusters ; cl_1 \neq cl_2}, \quad cl_1 \cap cl_2 = \emptyset, \quad cl_1 \neq \emptyset \quad \textrm{and} \quad cl_2 \neq \emptyset.$ Business Outlier Detection {#sec:outlierdetec} -------------------------- To detect possible business outliers inside business communities, a clustering of business communities, as described in Section \[sec:clusteringBusCom\], first has to be performed. With the clusters of business communities in hand, the detection is based on *cluster centroids*, which represent the common proportion of business categories for each cluster and are a central piece of the business outlier detection. At a high level of abstraction, if communities differ significantly from their cluster centroids, they have some outlier businesses inside them. Formally, the cluster centroid is the average vector ($ \in {\rm I\!R^{28}} $) among all the communities of that cluster, as in Equation \[eq:centroid\]: $$\label{eq:centroid} \forall cl \in clusters, \quad centroid(cl) = \frac{\sum_{c\in cl}{vector(c)}}{|cl|}$$ Cluster centroids are used to build *cluster signatures*. A cluster signature is a set of the most representative business categories for a particular cluster. Each category (dimension) inside the cluster centroid vector represents a certain percentage of all categories present in the vector, so that all percentages sum up to 100%. Therefore, picking the categories with the greatest percentages until their percentages sum up to a threshold ($>50\%$) makes them the set of the most representative categories for that cluster.
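The signature construction just described (picking the largest categories until the threshold is met) can be sketched as follows. The greedy formulation and the names are ours, an approximation of Algorithm \[alg:getSignature\]:

```python
def cluster_signature(centroid, threshold=0.7):
    """Greedily pick the largest categories of a cluster centroid until
    their share of the total mass reaches `threshold` (0.5 < threshold <= 1).

    centroid: dict mapping category name -> centroid value for that dimension.
    Returns the set of most representative categories.
    """
    total = sum(centroid.values())
    signature, covered = set(), 0.0
    for category, value in sorted(centroid.items(),
                                  key=lambda kv: kv[1], reverse=True):
        if covered / total >= threshold:
            break
        signature.add(category)
        covered += value
    return signature
```

With a 70% threshold, a centroid dominated by a few categories yields a small, highly representative signature.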
For example, a cluster could have as its signature the categories *Food and Beverage*, *Shopping and Retail*, and *Entertainment*, because they meet a 70% threshold defined in a particular application. More formally, Algorithm \[alg:getSignature\] specifies how to capture a *cluster signature* given two inputs: a vector $v \in {\rm I\!R^n}$, representing the centroid, where $n$ is the total number of categories; and a $threshold \in \, ]0.5,1] $, a number representing the minimum percentage considered a majority. As a result, Algorithm \[alg:getSignature\] returns a set containing the greatest dimensions, i.e., categories, of the vector $v$ that correspond to the closest possible number to $threshold*100\%$ of all dimensions. \[alg:getSignature\] $signature$\ Finally, Algorithm \[alg:outlier\] is responsible for tagging all businesses whose categories are not included in their corresponding cluster signature, which are therefore considered outliers. This algorithm takes as input a set of clusters and returns the same input structure, but with additional outlier tags. Note that it first calls Algorithm \[alg:getSignature\] to get each cluster signature, and then tags businesses whose categories are not in their cluster signature. Note that each community has a vector (as in Figure \[fig:CatVectors\]) and each cluster has a centroid (as in Equation \[eq:centroid\]); both are captured in Algorithm \[alg:outlier\] as $vector(community)$ and $centroid(cl)$, respectively. \[alg:outlier\]\ Results {#sec:results} ======== The two outputs of the framework are the tagged communities, i.e., communities whose businesses are tagged as outliers or not, that include the target business (informed by the user), and the *egonet* of the target business, consisting of a subgraph of all edges of the target business (see Figure \[fig:Fluxograma\]).
As individual businesses can have considerably large *egonets*, the *egonet* size in this study was limited to a maximum of seven adjacent vertices (those with the strongest edges) plus the target business. Note that this parameter could be adjusted for each case under study. ![Partial view of the business graph on the map of Curitiba.[]{data-label="fig:FaceGraphMap"}](FaceMap.png){width="85.00000%"} Figure \[fig:FaceGraphMap\] illustrates the complete graph, constructed by the proposed framework, shown on a map of the city of Curitiba. To ease the visualization, we only show edges with more than $ 1,500 $ common reactions. The thicker the edge, the stronger the relationship between nodes. We place each node in the graph according to the location of the business it represents. Note that there are vertices far from the city center with a considerably high density of edges going towards the city center, indicating strong activity also in outlying neighborhoods. An important detail was captured during the analysis. The nodes “Prefeitura de Curitiba” (N1) and “RPC” (N2) have a strong influence on the business graph (as shown in Appendix \[app:reacDatabase\] by Tables \[tab:graphNodeRanking\] and \[tab:graphEdgeRanking\]). N1 represents the city hall of Curitiba, and it is a very popular Facebook page [^4] not only in the city of Curitiba but also nationally. N2 is the largest TV channel in Curitiba. Both strongly impact the analysis carried out by this study: N1 does not represent a business, and N2 is an obvious TV media partnership. The influence of their edges on the business relationship graph potentially hides interesting smaller business relationships. In order to favor more valuable relationships, the analysis was performed without both nodes. As this graph is quite large, the extraction of useful information becomes complicated for human eyes, justifying the extraction of communities and *egonets*.
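Extracting an *egonet* limited to the strongest edges, as described above, can be sketched as follows (names are ours; `adj` maps each business to a dict of neighbours and edge weights):

```python
def egonet(adj, target, max_neighbors=7):
    """Return the subgraph induced by the target business and its
    `max_neighbors` strongest neighbours (by edge weight)."""
    strongest = sorted(adj[target], key=lambda n: adj[target][n],
                       reverse=True)[:max_neighbors]
    keep = set(strongest) | {target}
    # Keep only edges whose both endpoints survive the cut.
    return {v: {u: w for u, w in adj[v].items() if u in keep} for v in keep}
```

Raising or lowering `max_neighbors` adjusts the egonet size for each case under study, as noted in the text.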
After running Algorithm \[alg:businessCommunity\] with the parameters considered, $ 144 $ communities were detected, each containing from $ 4 $ to $ 30 $ businesses located in the city of Curitiba. Figure \[fig:LeisureCommunity\] illustrates a community containing entertainment businesses (e.g., “Blood Rock Bar” and “SSCWB - Shinobi Spirit”) and food businesses (e.g., “Ca’dore Comida Descomplicada”), so these are businesses united by the “leisure” context. The two communities illustrated in Figures \[fig:FashionCommunity\] and \[fig:FashionCommunity2\] both have businesses bound together by a context that can be called “fashion”, because they contain businesses from the beauty salon sector (e.g., “Studio Andressa Mega Hair” and “Cheias de Charme Costméticos”), modeling agencies (e.g., “Nk Agencia de Modelos” and “South Models Parana”), and fashion stores (e.g., “TONY JEANS” and “Zandra Bolsas”). We can note that, even though neither the business network construction (see Section \[sec:graph\]) nor Algorithm \[alg:businessCommunity\] used any information about the businesses themselves, all communities detected have a strong semantics that binds the businesses together inside each community. The business category clustering analysis can illustrate those contexts in a more general view, considering all communities detected. The clustering step, then, unites all similar communities, by business categories, into eight different clusters (for $ k = 8 $, as discussed in Section \[sec:clusteringBusCom\]). To illustrate the clusters, eight word clouds of the businesses’ categories inside each cluster were generated, shown in Figure \[fig:wordcloudsGrupos\]. In addition, Figure \[fig:clusterLike\] shows the number of user reactions in each cluster. In those word clouds, we did not perform the subcategory renaming process used for k-means, so we considered the original names. Note the surprising similarity between the categories in each group.
For example, Cluster 1 is related to leisure, containing predominantly food, drink, and entertainment businesses; Cluster 2 contains mostly businesses related to beauty and style; while Cluster 3 is more related to establishments offering automotive products and services. This analysis shows the existence of a predominant context in each community. Taking into account the information in Figure \[fig:wordcloudsGrupos\] and Figure \[fig:clusterLike\], it is possible to notice that the most popular contexts are leisure and food, represented by Cluster 1, followed by shopping malls, represented by Cluster 7. ![A chart presenting the number of user reactions per cluster.[]{data-label="fig:clusterLike"}](ClusterLikes.png){width="\textwidth"} Knowing that communities tend to have a predominant business context, outliers, i.e., businesses outside the predominant type of business, can be useful for decision makers. Next, we present results in this direction, following the procedures described in Section \[sec:outlierdetec\]. The community illustrated in Figure \[fig:FashionCommunity2\], which is a fashion-related community (its predominant context), has one outlier inside it: the business called “Grupo AllCross” (tagged in red). This business is a health plan consultancy, which is not part of the “fashion” context and is thus correctly identified as an outlier by Algorithm \[alg:outlier\]. Outliers cannot be ignored in the results presented here, as they might represent non-trivial potential business partnerships. Although outliers are not part of the dominant context, they still have strong connections to businesses from that context. Figure \[fig:EgonetRubiane\] shows the *egonet* of Rubiane, a seafood restaurant, which was arbitrarily chosen for analysis, and Figure \[fig:FoodCommunity\] shows a detected community in which Rubiane is included.
On the one hand, having the business’ *egonet*, it is possible to visualize the direct connections that the target business has with other businesses. On the other hand, having communities, it is possible to notice connections that may not be direct to the target business. Since these non-direct connections are within a community (detected by Algorithm \[alg:businessCommunity\]), they are cohesive (a dense subgraph) and may represent possible non-trivial partnerships for the business under evaluation. For example, the company “Quintal do Monge” does not appear in Rubiane’s *egonet*, shown in Figure \[fig:EgonetRubiane\], but it appears in a community in which Rubiane is also included, shown in Figure \[fig:FoodCommunity\]. Also in Figure \[fig:FoodCommunity\], notice that the business called “Cannes Turismo” is a tourism-related business and was tagged as an outlier by Algorithm \[alg:outlier\]. Rubiane, for instance, could use these results to increase its sales by creating business partnerships, such as selling products and services together with the businesses found in the results, as well as through marketing partnerships and joint marketing campaigns. For the restaurant analyzed here, we observe that competitors appeared in the same community, for example, “Braseirinho Frutos do Mar”. In the case of restaurants, this could be explained by the fact that users tend to attend several restaurants, some of which may be of the same type. However, this is not a problem for the proposed approach, since it is the entrepreneur who decides the best strategy for exploiting the results. Note that a partnership could even be made with competing establishments; however, these cases deserve special attention.
Conclusion and Future Work {#sec:conclusion} ========================== The approach introduced in this study aims to provide a new way of identifying significant and non-trivial relations between businesses, which could ease the laborious task of recommending strategic business partnerships. This study shows, using large-scale data from Facebook, that the proposed approach could be an important building block for the development of new applications and services, including a business partnership recommender. For instance, we observe the existence of competing businesses in the same community. As one of the possible implications of this study is to contribute to identifying new business partnerships, it may be interesting to determine a way to detect whether two businesses are competitors, to improve the performance of possible recommendations. In addition, it is important to perform a qualitative evaluation with business owners or decision makers of the studied businesses. This is essential to understand how to better exploit the results in practice. Another direction is to consider the temporality of the reactions, to evaluate, for example, the temporal correlation in the communities. Competing interests {#competing-interests .unnumbered} =================== The authors declare that they have no competing interests. Author’s contributions {#authors-contributions .unnumbered} ====================== D.P.T. and T.H.S. designed the model and the computational framework and analyzed the data. D.P.T. carried out the implementation. D.P.T., A.T.F., and T.H.S. wrote the manuscript with input from all authors. Acknowledgements {#acknowledgements .unnumbered} ================ The authors would like to thank Lucca Rawlyk, Fernanda Gubert, and Erik Almeida for their valuable help in this work. This study was partially supported by the project CNPq-URBCOMP (process 403260/2016-7), CAPES, CNPq, and Fundacao Araucaria.
Raghavan, U.N., Albert, R., Kumara, S.: Near linear time algorithm to detect community structures in large-scale networks. Physical Review E 76(3), 036106 (2007) Hartigan, J.A.: Clustering Algorithms. Wiley, New York (1975) Content marketing and brand engagement on social media: a study of Facebook’s posts in the ecommerce industry in Brazil. PhD thesis, FGV - Fundação Getúlio Vargas (2016) Global social media research summary 2016. Smart Insights: Social Media Marketing (2016) The national geographic characteristics of online public opinion propagation in China based on WeChat network. GeoInformatica, 1–24 (2018) Markov-network based latent link analysis for community detection in social behavioral interactions. Applied Intelligence, 1–16 (2017) Supplementary information on business reaction database and business relationship graph {#app:reacDatabase} ======================================================================================= ![Histogram of the top twenty companies in terms of user reaction number. 
[]{data-label="fig:compLike"}](CompanyLikes.png){width=".85\textwidth"} ![Histogram of the top twenty categories in terms of user reaction number. The names are the original ones and do not reflect the renaming process shown in figure \[fig:FaceCategories\].[]{data-label="fig:catLike"}](CategoryLikes.png){width=".85\textwidth"} ![A map showing the amount of user reactions per region considered in the collection process presented in Section \[sec:datacollection\].[]{data-label="fig:mapLike"}](MapLikes.jpg){width=".55\textwidth"} **Ranking** **Business Name** **Number of Connected Edges** ------------- ------------------------------------------ ------------------------------- 1 Prefeitura de Curitiba 1396 2 RPC 1357 3 Portal Banda B 1329 4 Clube Atlético Paranaense 1273 5 Aliança Móveis 1264 6 Shopping Mueller 1236 7 Dr. Freeze 1200 8 CWB Brasil 1200 9 Curitiba Comedy Club 1199 10 Shopping Palladium - Curitiba 1196 11 ParkShopping Barigüi 1196 12 Disk Ingressos 1164 13 Ripa na Chulipa 1148 14 Taco el Pancho 1148 15 Integra - Cursos & Treinamentos Curitiba 1138 16 Saia Justa Vestidos 1132 17 Cosmeticos Curitiba Centro 1100 18 Leve Sabor 1100 19 Zapata Bar 1099 20 Universidade Tuiuti do Paraná - UTP 1093 21 Espaço Gourmet Escola de Gastronomia 1086 22 Pele Morena Lingerie 1086 23 Invisible Braces Ortodontia 1083 24 Mais 55 1077 25 Daju 1077 26 La Passion Palladium 1074 27 Vida Leve Refeições Saudáveis 1071 28 Vivá 1063 29 Blood Rock Bar 1061 30 Rádio 98FM Curitiba 1052 31 Bar Quermesse 1047 32 Seven Entretenimento 1047 33 Power Airsoft 1045 34 Bar e Restaurante Hora Extra 1035 35 Hallorino Jr 1032 36 PolloShop 1025 37 Nell Carmo Hair Stylist 1025 38 BandNews FM Curitiba 1020 39 Cantinho da Bica 1010 40 Rádio Mundo Livre FM 1000 41 Império da Pizza 995 42 DOCK Premium 991 43 Autoescola Bello 991 44 A Barateira Feirão de Calçados 989 45 Hospital Pequeno Príncipe 988 46 Barone Consignações 985 47 Tatica Imoveis 984 48 Shopping Total 981 49 Casa da Bruxa 973 50 Plus Santé 
Emergências Médicas Ltda 973 : Business graph nodes ranked by number of connections.[]{data-label="tab:graphNodeRanking"} **Ranking** **Edge Node** **Edge Node** **Weight** ------------- ------------------------------------------ ---------------------------------------- -------- 1 RPC Prefeitura de Curitiba 127958 2 Portal Banda B Prefeitura de Curitiba 82904 3 Loja Us Store Vissothi 75629 4 Portal Banda B RPC 74084 5 Clube Atlético Paranaense Prefeitura de Curitiba 73948 6 Associação Evangelizar É Preciso TV Evangelizar 72184 7 Loja Us Store Pantufas.com.br 57844 8 Clube Atlético Paranaense RPC 56555 9 Shopping Mueller Prefeitura de Curitiba 54823 10 Clube Atlético Paranaense Portal Banda B 54437 11 ParkShopping Barigüi Prefeitura de Curitiba 51522 12 Curitiba Cult Prefeitura de Curitiba 50463 13 Shopping Palladium - Curitiba Prefeitura de Curitiba 49198 14 BandNews FM Curitiba Prefeitura de Curitiba 46635 15 Universidade Tuiuti do Paraná - UTP Prefeitura de Curitiba 45870 16 Vissothi Pantufas.com.br 45213 17 Arquitêta - Arquitetura & Arte Prefeitura de Curitiba 45194 18 Blood Rock Bar Prefeitura de Curitiba 44101 19 Rádio Mundo Livre FM Prefeitura de Curitiba 44033 20 ParkShopping Barigüi Shopping Mueller 43942 21 Saveiro Brasil ADRENALINA MX 42377 22 Dr. Freeze Prefeitura de Curitiba 41481 23 Ripa na Chulipa Aliança Móveis 41188 24 CWB Brasil Prefeitura de Curitiba 41041 25 Portal Banda B Aliança Móveis 40995 26 Bella Morena Boutique Rua Teffé 40642 27 Shopping Mueller Aliança Móveis 39563 28 Pele Morena Lingerie Aliança Móveis 38903 29 RPC Aliança Móveis 38812 30 Prefeitura de Curitiba Curitiba Comedy Club 38638 31 Shopping Mueller RPC 38469 32 Prefeitura de Curitiba Aliança Móveis 38298 33 Shopping Palladium - Curitiba RPC 38193 34 Pantufas.com.br Saveiro Brasil 38159 35 Hallorino Jr RPC 37498 36 ParkShopping Barigüi RPC 37350 37 Hospital Pequeno Príncipe Prefeitura de Curitiba 36904 38 Associação Evangelizar É Preciso Exper. de Deus com padre Reg. 
Manzotti 36505 39 Disk Ingressos Prefeitura de Curitiba 36248 40 Integra - Cursos & Treinamentos Curitiba Aliança Móveis 35842 41 Espaço Gourmet Escola de Gastronomia Prefeitura de Curitiba 35641 42 CWB Brasil RPC 35437 43 RPC BandNews FM Curitiba 35045 44 Saia Justa Vestidos Aliança Móveis 34837 45 Portal Banda B CWB Brasil 34823 46 Portal Banda B Shopping Palladium - Curitiba 34641 47 TV Evangelizar Exper. de Deus com padre Reg. Manzotti 34596 48 Invisible Braces Ortodontia Aliança Móveis 33152 49 Aliança Móveis Barone Consignações 33150 50 Portal Banda B Shopping Mueller 32969 : Business graph edges ranked by weight.[]{data-label="tab:graphEdgeRanking"} [^1]: These values represent average growth using the term “Social Media”, and “Social Media” and “Business” in conjunction. [^2]: https://developers.facebook.com. [^3]: https://developers.facebook.com/docs/graph-api/overview. [^4]: https://www.facebook.com/PrefsCuritiba.
--- abstract: | We present the results of the Virgo high-resolution CO survey (ViCS) obtained with the Nobeyama Millimeter-wave Array (NMA). This survey was made in the course of a long-term project at Nobeyama from 1999 December through 2002 April. The objects were selected from Virgo cluster members, considering CO richness from single dish flux, mild inclination, and lack of strong tidal perturbations. The central $1{'}$ regions ($\sim 4.7$ kpc) of 15 spiral galaxies were observed with resolutions of $2-5{''}$ and $10-20{\rm \,km\, s^{-1}}$, and sensitivities of $\sim 20\,{\rm mJy \, beam^{-1}}$ for a $10{\rm \,km\, s^{-1}}$ channel. The objects lie at the same distance of the Virgo cluster (16.1 Mpc), which is advantageous for comparisons among individual galaxies. We describe the details of observations and data reduction, and present an atlas of integrated CO intensity maps, velocity fields and position-velocity diagrams along the major axes. The molecular gas morphology in the Virgo galaxies shows a wealth of variety, not specifically depending on the Hubble types. Several galaxies show strong concentration of gas in the central few kpc region, where the CO morphology shows either “single-peak” or “twin-peaks”. Morphology of more extended CO components can be classified into “arm-type”, “bar-type”, and “amorphous-type”. [**Key Words**]{} : galaxies: spiral — galaxies: ISM — galaxies: structure — galaxies: Virgo author: - | Y. Sofue$^1$, J. Koda$^{1,2,3}$, H. Nakanishi$^1$, S. Onodera$^1$, K. Kohno$^1$,\ A. Tomita$^4$, and S. K. Okumura$^3$\ \ [1. Institute of Astronomy, University of Tokyo, Mitaka, Tokyo 181-0015]{}\ [2. National Astronomical Observatory, Mitaka, Tokyo 181-8588]{}\ [3. Nobeyama Radio Observatory, Minamimaki-mura, Nagano 384-1305]{}\ [4. 
Faculty of Education, Wakayama University, Wakayama 640-8510]{}\ \ [Email: sofue@ioa.s.u-tokyo.ac.jp]{} date: title: | The Virgo High-Resolution CO Survey I.\ – CO Atlas – --- Introduction ============ CO-line observations play an essential role in studying the kinematics and interstellar physics in the central regions of spiral galaxies, where the interstellar matter is mostly in the molecular-gas phase and is strongly concentrated (Sofue et al. 1995; Honma et al. 1995). There have been numerous observations of nearby galaxies in CO line emission with single dish telescopes (Young & Scoville 1991; Braine et al. 1993; Young et al. 1995; Nishiyama & Nakai 2001). Large-scale CO line surveys of the Virgo galaxies have been obtained using the FCRAO 14-m telescope at an angular resolution of 45$''$ by Kenney & Young (1988), and the BTL 7-m telescope by Stark et al. (1986). These surveys with single dishes were made with angular resolutions of tens of arcsec, and have given information about the global structure of molecular disks in Virgo and nearby galaxies. Interferometer observations at high angular resolutions are crucial for studying detailed molecular disk structures within the central few hundred parsecs (Sargent & Welch 1993). High spectral resolution is also crucial to investigate the detailed kinematics of the central gas disks. Both high spatial and high spectral resolutions provide us with precise velocity fields and rotation curves, which are the basis for deriving fundamental parameters such as the mass distribution, bars and related shock phenomena, the triggering mechanism of starbursts, and/or the fueling mechanism of massive black holes. Interferometer observations have often been performed to investigate the individuality of each galactic center and its activity. Recently, some large surveys of nearby galaxies have started to be reported.
The Nobeyama mm-wave Array (NMA) and the Owens Valley Radio Observatory (OVRO) mm-wave array have been used since the 1990s to map the central regions of nearby spiral galaxies in the CO line at a typical angular resolution of $4{''}$ (Sakamoto et al. 1999a). The Berkeley-Illinois-Maryland Association Survey of Nearby Galaxies (BIMA SONG) has mapped 44 nearby galaxies at typical resolutions of $6''$ (Regan et al. 2001). Interferometer observations of several nearby galaxies have also been conducted, motivated by various interests, such as bars (e.g., Kenney et al. 1992; Regan et al. 1999), star formation (e.g., Wong & Blitz 2002), and nuclear activity (e.g., Baker 1999; Sakamoto et al. 1999a; Kohno et al. 1999; Schinnerer et al. 1999). The ViCS (Virgo high-resolution CO survey) project with the NMA has been performed in order to obtain a homogeneous high angular- and spectral-resolution database for a large number of CO-bright Virgo Cluster spirals in the ${{\rm ^{12}CO} (J=1-0)}$ line. Angular resolutions were $\sim 3{''}$ after reduction with the conventional CLEAN procedure with natural weighting. The major scientific motivation was to investigate the detailed central kinematics of the galaxies, particularly the innermost rotation curves from analyses of position-velocity diagrams across the nuclei, which would be effective for detecting central compact massive objects. The data are also useful for investigation of the kinematics and ISM physics of the central molecular disks, and of environmental effects in the cluster environment. An advantage of observing the Virgo Cluster galaxies is their almost identical distance, which has been accurately determined to be 16.1 Mpc ($1{''}$ corresponds to 78 pc) by the Cepheid calibrations (Ferrarese et al. 1996). Since our target galaxies lie within 2 Mpc from the Virgo center, M87, the distance ambiguity will be at most 15%, mostly less than 10%.
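The quoted angular scale ($1''$ corresponding to 78 pc at 16.1 Mpc) follows from the small-angle approximation; a quick check:

```python
import math

ARCSEC_PER_RADIAN = 180.0 * 3600.0 / math.pi  # about 206265

def arcsec_to_pc(theta_arcsec, distance_mpc=16.1):
    """Linear size (in pc) subtended by an angle at a given distance,
    using the small-angle approximation."""
    return theta_arcsec / ARCSEC_PER_RADIAN * distance_mpc * 1e6
```

At the Virgo distance this gives about 78 pc per arcsecond, so the $2-5{''}$ beams correspond to a few hundred parsecs.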
The accurate distance will enable us to estimate physical quantities rather precisely, such as the CO and dynamical masses, and the linear scales of gas disks. The ViCS results will be published in a series of papers. In this paper we describe the overall observations and reduction, and present an atlas of the central molecular disks of Virgo galaxies. In the forthcoming papers we will describe more details of the observations, analyses, and results for individual galaxies, as well as rotation curves and central kinematics, investigations of the ISM physics, and comparisons with observations at other wavelengths. The database will be made available for public use on our web page. Observations and Reduction ========================== Target Galaxies --------------- The target galaxies in the survey have been selected from the list of spiral galaxies of the FCRAO CO-line survey by Kenney & Young (1988) by the following criteria. 1. [The sources were chosen in the order of CO line peak antenna temperatures $T_{\rm A}^{\star}({\rm peak})$ at the optical centers. Twenty-eight galaxies with peak antenna temperatures above $\rm 20\,mK$ were selected from the 42 galaxies of Kenney & Young (1988).]{} 2. [Inclination angles were limited to $25{^\circ}\leq i \leq 75{^\circ}$ in order to investigate central gas dynamics. This criterion excluded NGC 4293 ($i=76{^\circ}$), NGC 4302 ($i=90{^\circ}$), NGC 4312 ($i=78{^\circ}$), and NGC 4710 ($i=90{^\circ}$).]{} 3. [Galaxies with morphological type S0, i.e. NGC 4293, NGC 4710 and NGC 4438, were excluded.]{} 4. [Interacting galaxies were excluded by the criterion that the galaxies have no companion within an $8'$ radius. The pairs NGC 4568/4567, NGC 4298/4302, and NGC 4647 were excluded.]{} 5. [Galaxies peculiar in optical images, i.e. NGC 4438 and NGC 4064, were excluded.]{} 6. [Galaxies observed with the NMA since 1994 were excluded. NGC 4321 and NGC 4527 have been observed by Sakamoto et al. (1995) and Sofue et al. 
(1999), respectively.]{} Sixteen galaxies were selected on the basis of these criteria, and we have observed 15 galaxies except NGC 4450. The targets are listed in Table 1, which also summarizes the morphological type, B-band total magnitude, optical size, inclination, position angle from optical isophotal contours, and nuclear activity from optical spectroscopy (Ho et al. 1997a,b). The table list also the CO-line peak temperature, integrated intensity, intensity-weighted mean velocity, and velocity width from the single dish CO-line observations (Kenney & Young 1988), The selection criterion 1 by the peak antenna temperature was applied because of higher probability of detection in a single channel. We note that the FCRAO survey with 45$''$ resolution (Kenney & Young 1988) shows that all galaxies, except one, are centrally CO peaked. Their data show that the peak temperature and maximum CO intensity have approximately a linear correlation, and that the CO scale radius is an increasing function of maximum CO intensity. This implies that our selection by peak temperature is approximately equivalent to a selection by total integrated CO intensity, and hence by total CO luminosity for their equal distances. Hence, our target galaxies, and some that were not selected by the reason that they were already observed by NMA, would represent the most CO luminous Virgo galaxies. — Table 1 — Observations ------------ We have performed aperture synthesis observations of the ${{\rm ^{12}CO} (J=1-0)}$ line emission from the 15 Virgo galaxies listed in Table 1 in the course of a long-term project during the winter seasons of 2000 (1999 Dec. -2000 Apr.), 2001 (2000 Dec.-2001 Apr.) and 2002 (2001 Dec.-2002 Apr.). We made the observations in three available configurations: AB (long baselines), C (medium) and D (short) configurations. The visibility data covered projected baselines from about 10 to 351 m. 
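The field of view and the largest detectable scale quoted below follow from the dish diameter and the shortest baseline. A back-of-the-envelope sketch (the 1.2 λ/D primary-beam coefficient is a common rule of thumb, not a measured NMA value, so the numbers are approximate):

```python
import math

C = 299792458.0                      # speed of light, m/s
NU = 115.271e9                       # 12CO(J=1-0) rest frequency, Hz
RAD_TO_ARCSEC = 180.0 * 3600.0 / math.pi

lam = C / NU                         # observing wavelength, ~2.6 mm

# Primary beam FWHM ~ 1.2 lambda / D for a 10-m dish (rule-of-thumb coefficient)
fov = 1.2 * lam / 10.0 * RAD_TO_ARCSEC
# Largest well-imaged scale ~ lambda / B_min for the ~10 m shortest baseline
theta_max = lam / 10.0 * RAD_TO_ARCSEC

print(f"field of view ~ {fov:.0f} arcsec, largest detectable scale ~ {theta_max:.0f} arcsec")
```

Both figures come out close to the $65''$ field of view and $\sim 54''$ maximum detectable size stated in the text.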
The NMA consisted of six 10-m antennas, providing a field of view with an FWHP beam width of $65{''}$ at 115 GHz. Since interferometric observations sample data in Fourier space, the range of collected Fourier components, or baselines, determines the detectable sizes of objects. In our observations, the antenna size limited the minimum projected baseline length, restricting the largest detectable size to about $54''$. Thus our data may miss some flux from extended components of the objects. Table 2 lists the observation periods and array configurations, the observed central frequencies, and the positions of the pointing centers as well as the phase-reference centers for the individual galaxies.

— Table 2 —

The antennas were equipped with tunerless SIS receivers, which had receiver noise temperatures of about 30 K in double sideband, and the typical system noise temperatures were about 400 K in single sideband. We used a digital spectro-correlator system (Okumura et al. 2000), which had two spectroscopic modes (bandwidths of 512 and 1024 MHz); we used the mode covering 512 MHz ($1331 {\rm \,km\, s^{-1}}$) with 256 channels at a resolution of 2 MHz ($5.2 {\rm \,km\, s^{-1}}$). The nearby radio point source 3C 273 was used as the flux and phase calibrator, and was observed every 20 minutes. The bandpass response across the channels was also calibrated using the 3C 273 data. The intrinsic flux density of 3C 273 at the observing frequency was calibrated for each observing run (typically 5 days) using planets (Mars, etc.). The flux of 3C 273 varied gradually between 9 and 12 Jy during the three years of observations. The uncertainty in the absolute flux scale for each observing run was $\sim \pm 15$%, which applies to all results presented in this paper.

Reduction
---------

The raw data were calibrated using UVPROC-II, a first-stage reduction package (Tsutsumi et al. 1997), and were Fourier-transformed using the NRAO Astronomical Image Processing System (AIPS).
We applied the CLEAN method with natural weighting to obtain three-dimensional data cubes (RA, DEC, ${V_{\rm LSR}}$). The intensity data were averaged in bins of 2 to 6 of the original spectro-correlator channels ($10.4 - 31.2 {\rm \,km\, s^{-1}}$), and the channel increments were set to 2 to 4 channels ($10.4 - 20.8 {\rm \,km\, s^{-1}}$). The intensity scale at this stage was in Jy per synthesized beam, which can be converted to brightness temperature in Kelvin. The resultant synthesized beam sizes range from $2''$ to $5{''}$. The typical rms noise, scaled to a $10{\rm \,km\, s^{-1}}$ channel, was ${\rm 20\,mJy \, beam^{-1}}$. Table 3 lists the resultant parameters of the data cubes for the individual galaxies.

— Table 3 —

In addition to the above set of reduction parameters, we CLEANed the data with tapered and uniform weighting functions, which provided low-resolution ($\sim5''$) and high-resolution ($\sim1''$) maps, respectively. We will present those maps in separate papers discussing the individual galaxies. We calculated the fractions of the single-dish flux recovered by our aperture synthesis observations, which are listed in Table 3. We first corrected the data cubes for the primary beam attenuation, convolved them with a Gaussian single-dish beam of FWHM $45{''}$ (comparable to that of the FCRAO survey), and took the flux at the pointing center of the FCRAO observations. The recovered fluxes were typically 80%, indicating that almost all the flux within the field of view was recovered. However, a few galaxies showed exceptionally low or high recovered fluxes. The recovered flux of NGC 4535 was about twice the FCRAO flux. We analyzed the raw data carefully several times: we made UV data sets for the C, D, C+D, and AB+C+D array configurations, which were obtained in independent observing periods (Table 2). We then CLEANed them separately, but obtained about the same flux for all the configurations.
The rms noises of the reduced cubes and maps are also comparable to those of the other galaxies observed in the same periods. Hence, we conclude that the flux calibration was correct. The flux disagreement would be possible if the FCRAO flux was significantly underestimated and ours was overestimated by about 15%, both within the measurement errors. The recovered flux of NGC 4548 was only 16% of the FCRAO flux. This may have happened because the missing components are larger than the maximum detectable size of our observations (§2.2). Alternatively, it could be due to very low brightness of the extended components. In fact, the FCRAO flux is as weak as 6.7 K ${\rm \,km\, s^{-1}}$ in the $45''$ beam. If the remaining 84% is extended within the $45''$ beam, the intensity for our beam ($2''.6 \times 2''.0$) would be 5.6 K ${\rm \,km\, s^{-1}}$. For an assumed line width of about 200 ${\rm \,km\, s^{-1}}$, the expected brightness is only 30 mK, which is much below our detection limit. We checked for continuum sources in the galaxies by making channel maps over a wide range of velocity. The channel maps for the individual galaxies are shown in figure A1 in the Appendix, where the outermost channels can be used to check the continuum fluxes. As figure A1 shows, no significant continuum source has been detected in any of the observed galaxies at the present sensitivity, which is typically an rms noise of 20 mJy beam$^{-1}$, as listed in Table 3. Nevertheless, a galaxy such as NGC 4579, which contains an AGN, could have some continuum emission, so we applied a deeper continuum check to this galaxy. We CLEANed the 2 MHz $\times$ 256-channel data cube of NGC 4579 by binning every 32 channels ($168 {\rm \,km\, s^{-1}}$). However, no continuum source stronger than 10 mJy was found in the outermost-velocity channels ($\pm \sim 500 {\rm \,km\, s^{-1}}$), where no CO line emission is expected.
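The brightness-temperature figures above can be checked with the standard Rayleigh-Jeans conversion for a Gaussian beam, $T_{\rm B}\,[{\rm K}] \approx 1.222\times10^3\, S\,[{\rm mJy\,beam^{-1}}] / (\nu^2\,[{\rm GHz}]\; \theta_{\rm maj}\theta_{\rm min}\,[''])$; a sketch, taking the beam size from the NGC 4548 example above:

```python
def jybeam_to_kelvin(s_mjy, nu_ghz, bmaj_arcsec, bmin_arcsec):
    """Rayleigh-Jeans brightness temperature of a Gaussian synthesized beam:
    T_B [K] = 1.222e3 * S [mJy/beam] / (nu^2 [GHz] * bmaj["] * bmin["])."""
    return 1.222e3 * s_mjy / (nu_ghz ** 2 * bmaj_arcsec * bmin_arcsec)

# A 20 mJy/beam channel rms in a 2.6" x 2.0" beam at 115.27 GHz:
print(round(jybeam_to_kelvin(20.0, 115.27, 2.6, 2.0), 2))  # -> 0.35 (K)
```

A per-channel 3σ limit of order 1 K is thus indeed far above the ~30 mK expected surface brightness of the extended component.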
The CO Atlas of Virgo Spirals
=============================

CO Intensity Maps
-----------------

Figure 1 (top-left panels) shows the optical appearance of the observed galaxies in $5'\times5'$ areas taken from the STScI Digitized Sky Survey (DSS) second-generation blue images. The inset squares show the areas of the CO maps. We present the total integrated intensity maps of the CO emission in the bottom-left panels. The intensity maps were obtained using the AIPS task 'MOMNT', which integrates the intensity over velocity only where the intensity exceeds a threshold level. The threshold level was taken to be 2 to 3 times the rms noise in the data cube. Channel maps of the observed galaxies are shown in the Appendix, and will be discussed in more detail in the forthcoming papers of this series on the individual galaxies. The bottom-right panels show the intensity-weighted velocity fields for $1'\times 1'$ regions (our field of view is $65{''}$ FWHP at 115 GHz), and the top-right panels show position-velocity diagrams along the major axes; for NGC 4254 and NGC 4402, $80''\times80''$ regions are shown. The primary-beam correction has not been applied in these maps. The intensity scales in the maps are given as brightness temperature in Kelvin, rather than in the $\rm Jy\,beam^{-1}$ scale directly derived from interferometric observations, for convenience in comparing with single-dish observations and in converting to the molecular hydrogen column density. We measured the peak CO intensities in these maps and list them in Table 5, together with the CO peak brightness temperatures as read from the data cubes.

— Table 5 —

Figure 2 shows the CO intensity maps on the same angular and linear scales. Each box covers a $1' \times 1'$ region, corresponding to 4.7 kpc $\times$ 4.7 kpc for the assumed Virgo distance of 16.1 Mpc (Ferrarese et al. 1996). Figure 3 shows the velocity fields corresponding to figure 2.
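The clipped integration performed by MOMNT can be illustrated with a toy cube (a simplified stand-in for the AIPS task: a plain threshold mask, with none of MOMNT's smoothing options):

```python
import numpy as np

def clipped_moment0(cube, dv_kms, rms, nsigma=3.0):
    """Integrated intensity (moment 0): sum the channels times the channel
    width, counting only pixels above nsigma times the rms noise."""
    clipped = np.where(cube > nsigma * rms, cube, 0.0)
    return clipped.sum(axis=0) * dv_kms

rng = np.random.default_rng(1)
rms = 0.3                                      # K per 10 km/s channel
cube = rng.normal(0.0, rms, size=(20, 8, 8))   # (velocity, y, x) noise cube
cube[8:12, 4, 4] += 5.0                        # inject a 4-channel, 5 K line

m0 = clipped_moment0(cube, dv_kms=10.0, rms=rms)
print(m0[4, 4])   # ~200 K km/s at the source; essentially zero elsewhere
```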
In figure 4 we plot the CO intensity maps on the Virgo Cluster region, where each map has been enlarged by a factor of 50 relative to its real angular size.

— Fig. 1 —
— Fig. 2 —
— Fig. 3 —
— Fig. 4 —

Velocity Fields
---------------

Figure 1 (bottom-right panels) shows the intensity-weighted velocity fields for the observed $1'\times 1'$ regions, the same regions as for the integrated-intensity maps in the bottom-left panels. For NGC 4254 and NGC 4402, $80''\times80''$ regions are presented. Figure 3 shows the velocity fields on the same angular scales. The general pattern of the velocity fields in figures 1 and 3 is a symmetric spider diagram, indicating regular circular rotation of the CO disk. Slight non-circular streaming motions, such as those due to spiral arms and bars, are superposed on the regular rotation. However, some galaxies show strong non-circular motions; NGC 4569 has an extremely large deviation from circular rotation, indicating either high-velocity streaming or large-amplitude warping in the central disk.

Position-Velocity (PV) diagrams
-------------------------------

Position-velocity (PV) diagrams are shown in figure 1 (top right). These diagrams were made by slicing the data cubes along the optical major axes with appropriate widths for the individual galaxies. Most galaxies show a steeply rising rotation velocity within the central 100 to 200 pc in radius. Such a sudden rise of the rotation velocity in the close vicinity of the nucleus had not been clearly detected in lower-resolution observations. One of the major purposes of the present CO survey was to obtain the central rotation curves in order to investigate possible central massive cores, which have been found in many nearby galaxies (Sofue et al. 1999; Takamiya & Sofue 2000; Sofue & Rubin 2001; Sofue et al. 2001; Koda et al. 2002). In a separate paper (Sofue et al. 2003a), we describe the results of detailed analyses of the PV diagrams and the derivation of accurate rotation curves by applying a new iteration method (Takamiya & Sofue 2002), and discuss the central mass distribution.

Uni-scale Atlas of CO Intensities
---------------------------------

In order to give an overview of the general characteristics of the distribution of molecular gas (CO intensity) in the observed galaxies, it is helpful to compare the galaxies in a unified scheme. In figure 2 we present the observed $I_{\rm CO}$ distributions on the same angular and intensity scales. The image sizes are all $1.'0 \times 1.'0$, corresponding to 4.68 kpc $\times$ 4.68 kpc for the assumed distance of 16.1 Mpc. The contours are drawn at the same levels of 5, 10, 20, 40, 80, and 160 $\rm K \, {\rm \,km\, s^{-1}}$ for all galaxies.

Uni-scale Atlas of Velocity Fields
----------------------------------

Figure 3 shows the same as figure 2, but for the distributions of intensity-weighted velocities. The velocity fields generally show a 'spider diagram' pattern, indicating circular rotation of the molecular disk. The rotation velocity rises rapidly within the central few hundred parsecs, which is seen more clearly in the position-velocity diagrams in figure 1. In many galaxies, the spider diagrams are more or less distorted, indicating either non-circular streaming motions or warping of the gas disk.

Sky Plot of CO Maps on the Virgo Cluster region
-----------------------------------------------

In figure 4 we plot the $I_{\rm CO}$ maps on the sky area of the Virgo Cluster, in a manner similar to the plot of HI maps by Cayatte et al. (1990). The angular scales of the individual galaxies are enlarged by a factor of 50. The CO distributions appear not to be strongly correlated with the distance from the center of the Cluster at M87, which is marked by a cross.
This property is very different from that of the HI gas distribution: the HI disks of inner-cluster galaxies are usually strongly distorted and are often truncated by the ram pressure of the intracluster medium (Cayatte et al. 1990), whereas the central CO disks are not strongly perturbed by the ambient gas in the cluster, probably because they lie deep in their galactic potentials.

The ViCS Data Base
------------------

The calibrated and reduced data presented in this paper will be made available on our web page, in the form of FITS-formatted cubes and maps and of gif-formatted images, at the URL http://www.ioa.s.u-tokyo.ac.jp/radio/virgo/.

Central Positions and CO Distributions
======================================

Central Positions {#sec:dycen}
-----------------

In order to determine the central positions, we fitted a disk model with a Brandt-type rotation curve (Brandt 1965) to the intensity-weighted velocity fields using the AIPS task GAL. Since this task assumes pure circular rotation, we used only the central several arcseconds for the fitting, outside of which the isovelocity contours indicate some deviations from circular rotation. The iteration in the task can occasionally give different results for different initial guesses. We checked this error by varying the initial guess over appropriate ranges, and confirmed that the task provides dynamical centers accurate to about $1{''}$ in most cases. The derived central positions are listed in Table 4. The center positions thus obtained coincided with the NED center positions within an arcsecond in most cases. However, in cases where the central CO distribution is not smooth, or the velocity field is strongly perturbed, so that the above dynamical centers are not reliable, we determined the central positions from the literature. The center of NGC 4212, which shows a patchy CO distribution (Figure 1), was adopted from the optical observations by Cotton et al. (1999), and coincides with our CO emission peak within an error of $\sim1{''}$. NGC 4569 shows strong non-circular motions in the velocity field and the position-velocity diagram; we adopted the central position determined by Sakamoto et al. (1999) from their lower-resolution CO interferometry. The center of NGC 4579 was taken to coincide with the position of the unresolved radio continuum source (Ho and Ulvestad 2001).

— Table 4 —

Radial Distribution of CO gas
-----------------------------

Figure 5 displays the azimuthally averaged radial profiles of the CO-line intensity, in units of K ${\rm \,km\, s^{-1}}$, as projected on the galaxy disks, i.e., corrected for the inclinations. To make these plots, the integrated-intensity maps without clipping were corrected for the primary beam attenuation, and we applied the AIPS task IRING around the central positions derived in §\[sec:dycen\]. We fixed the inclination and position angles of each galaxy to those derived from optical observations (Table 1). The sampling intervals in the plots were set to $0''.5$; however, the effective sampling intervals are equal to the beam widths. The number of effective sampling points for the fit increases proportionally to the radius $r$, and hence the statistical error decreases proportionally to $r^{-1/2}$. The intensity error at each point in a map is given by $\Delta I = \Delta T \times N_{\rm c}^{1/2}\times \Delta V$ ($\sim 15$ K ${\rm \,km\, s^{-1}}$), where $\Delta T (\sim 0.3$ K), $N_{\rm c} (\sim 25)$, and $\Delta V (\sim 10 {\rm \,km\, s^{-1}})$ are the rms noise, the number of channels within the expected velocity width, and the velocity interval, respectively, as given in Table 3. Therefore, the typical error of the profiles is given approximately by $\sim 15(r/{\rm beam~width})^{-1/2}$ K ${\rm \,km\, s^{-1}}$.

— Fig. 5 —

The intensity distributions in the central 10 to 15$''$ regions are approximately exponential, with scale radii of 5 to 10$''$, or 400 to 800 pc.
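The error budget above can be written out explicitly; a sketch using the quoted numbers ($\Delta T \approx 0.3$ K, $N_{\rm c} \approx 25$, $\Delta V \approx 10$ km s$^{-1}$; the 3" beam width is illustrative):

```python
import math

def intensity_error(dT=0.3, n_chan=25, dv=10.0):
    """Per-point integrated-intensity error: dI = dT * sqrt(N_c) * dV (K km/s)."""
    return dT * math.sqrt(n_chan) * dv

def profile_error(r_arcsec, beam_arcsec=3.0):
    """Azimuthally averaged error, decreasing as (r / beam width)^(-1/2)."""
    return intensity_error() * (r_arcsec / beam_arcsec) ** -0.5

print(round(intensity_error(), 1))  # -> 15.0 (K km/s), as quoted in the text
print(round(profile_error(12.0), 1))  # -> 7.5 at r = 4 beam widths
```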
These scale radii are a few times smaller than those derived by Regan et al. (2001). Since our survey has three times higher resolution ($2''$) than theirs ($6''$), we may be detecting the central cusps of the CO distributions. In most cases, the regions outside 20$''$ are not significantly detected, except in some cases with disk components. Three of our target galaxies, NGC 4254, 4501, and 4569, have previously been observed in the CO line at lower sensitivity and resolution (Sakamoto et al. 1999). Our radial profiles for NGC 4501 and 4569 are consistent with those from the previous observations. The profile for NGC 4254 is also roughly consistent with the previous result, although there are some small-scale deviations. The ellipse fit gives a quantitative representation of the radial profiles including the outskirts. However, the fit loses linear resolution, depending strongly on the inclination, because it uses the data in the minor-axis direction with equal weight. Figure 5 shows that many galaxies have a strong concentration of CO gas within the central 10$''$ (0.8 kpc) radius, while some have a plateau or a dip at the center. We discuss such galaxies in §5 as the central-/single-peak and twin-peaks types, respectively.

Molecular gas morphology
========================

The molecular gas distributions show a wealth of variety. Although it is difficult to categorize them in a simple way, we can identify some characteristic types among the central gas distributions. Many galaxies have a high concentration of CO gas in the central few kpc, where the CO morphology shows either a single peak or twin peaks. The more extended components show a greater variety of morphologies, which can be classified into arm, bar, and amorphous types.

Central gas distributions
-------------------------

1. Central peak and/or single peak: Many galaxies show a strong concentration of molecular gas around the nucleus, insofar as the present maps are concerned.
NGC 4212, NGC 4419, NGC 4501, NGC 4535, and NGC 4536 are examples. The typical size of these central peaks is about 200 - 400 pc. In most cases the peaks are single at the present resolution, and we call these "single peaks". Figure 6 shows the CO intensity distributions in the central $20'' \times 20''$ regions (1.6 kpc square) of the central-/single-peak galaxies. The central-/single-peak galaxies constitute a considerable fraction of the observed galaxies. Note that the sample selection was made by peak antenna temperature in the FCRAO survey with a 45$''$ beam. Hence, the present maps could carry a selection effect favoring galaxies with higher peak temperatures. However, as argued in §2, our sample selection is approximately equivalent to a selection by CO luminosity. We may therefore consider the statistics of the presently observed galaxies significant for discussing the general types of central CO morphology of the most CO-luminous Virgo galaxies.

— Fig. 6 —

2. Twin peaks: A typical example of the twin-peaks molecular gas distribution is seen in NGC 4303, which shows two offset, open molecular arms along the optical bar, ending at a molecular ring with two peaks. Kenney et al. (1992) have reported both single-peak and twin-peaks types. They selected four barred galaxies with strong CO and FIR emission, and showed that the barred galaxies have twin peaks in CO, likely as a consequence of bar-induced inflow. Insofar as the present data set is concerned, which includes galaxies of random types with the CO emission concentrated in the central $45''$ (3.5 kpc) regions, twin-peaks galaxies constitute rather a small fraction. Kenney et al. (1992) showed that the separations between twin peaks are about 200 - 400 pc, whereas our single-peak galaxies do not show double peaks even on the same scale ($3''\sim200\,{\rm pc}$).
Note, however, that this classification may depend on the spatial resolution; a single peak at our resolution may turn out to consist of finer inner structures at higher resolution. In fact, NGC 4501 appears to be of the single-peak type in the present atlas, whereas a higher-resolution image shows a small patchy ring with a diameter of $\sim 3''$ (Onodera et al. 2003: private communication). PV diagrams may also apparently show spatially unresolved double peaks with separated velocities: a single-peak galaxy may show two peaks on a PV diagram, at the positive and negative terminal-velocity ends at the turnover radii between the central rising and outer flat rotation curves, even when the gas distribution has no spatially separated double peaks (Sofue et al. 1999; Sakamoto et al. 1999a). Hence, PV diagrams may not be used for the spatial morphological classification.

Description of Individual Galaxies
==================================

We describe the CO properties of the individual galaxies as obtained from the present observations.

NGC 4192
--------

An extremely bright CO peak is observed at the center, which classifies this galaxy as a "central-peak" type, although the peak might be resolved into two peaks at higher resolution; the classification thus depends on the resolution. The central peak is surrounded by a bright, highly inclined disk with about the same inclination as the optical disk. The position-velocity diagram shows a very high rotation velocity, whose maximum reaches almost $250 {\rm \,km\, s^{-1}}$. However, the central CO peak rotates more slowly, at about $100 {\rm \,km\, s^{-1}}$.

NGC 4212
--------

The CO distribution consists of a molecular core and extended straight arms in the direction of the major axis. The core is shifted from the map center toward the NE by a few arcseconds. The velocity field and PV diagram indicate that the rotation is rigid-body-like, while the core shows a steeper velocity gradient.
NGC 4254
--------

The central molecular gas distribution shows a bar-like elongation, although no optical bar feature is seen in the visual-band images. The CO intensity has a slight depression at the dynamical center, which coincides with the nucleus. Two well-developed spiral arms wind out from the bar ends toward the south and north. The south-eastern arm bifurcates into a tightly wound, dense molecular arm with an almost zero pitch angle. Hence the molecular disk has three arms, and the arms correlate well with the optical dark lanes. The velocity field shows a regular spider pattern, indicating circular rotation of the disk, on which small-amplitude streaming motions due to the spiral arms are superposed. The PV diagram shows a sharp rise in the central few arcseconds, indicating a massive core, beyond which the velocity increases gradually. The overall distribution and kinematics agree with the previous low-resolution observations (Sakamoto et al. 1999a). A detailed study of this galaxy, including consideration of the ram-pressure effect of the intracluster medium, is presented in Sofue et al. (2003b).

NGC 4303
--------

CO gas is highly concentrated in the nuclear disk within a radius $r \sim 8''$ (600 pc). The nuclear disk comprises the "twin peaks" at its eastern and western edges, and there appears to be a diffuse central component around the nucleus between the twin peaks. Two prominent bisymmetric spiral arms, or offset ridges, wind out from these twin peaks and extend toward the north and south along the dark lanes in the optical bar. The PV diagram along the major axis (north-south) indicates a rise of the rotation velocity to 160 - 180 ${\rm \,km\, s^{-1}}$ within $r \sim 2''$ (160 pc). Our result is consistent with the high-resolution observations with the OVRO interferometer by Schinnerer et al. (2002). A detailed description of this galaxy is given in a separate paper of this series by Koda et al. (2003: private communication).
NGC 4402
--------

The CO intensity distribution shows a high-density nuclear molecular disk of $r \sim 10''$. The nuclear disk is surrounded by a more extended molecular disk of radius $\sim 30''$ (2 kpc). This outer disk appears to consist of two spiral arms: one extends to the west from the southern edge of the nuclear disk, tracing the dark lane, and the other extends toward the east from the north-eastern edge of the nuclear disk. The velocity field shows a usual spider diagram with some streaming motions superposed in the molecular ring/arms. The PV diagram shows a nuclear component and the outer ring/arms.

NGC 4419
--------

The CO gas is strongly concentrated in a central disk of $5''$ radius, which is associated with an elongated outer disk component. Kenney et al. (1990) also reported this concentrated CO distribution. The outer molecular disk is lopsided toward the north-west. The PV diagram indicates that the central component has a rotation velocity as high as 150 ${\rm \,km\, s^{-1}}$ within a 5$''$ radius, which is followed by a gradually rising disk rotation.

NGC 4501
--------

A nuclear concentration of molecular gas of radius $3''$ is remarkable, as reported by Sakamoto et al. (1999a). This galaxy is classified as the "single-peak" type, although the peak intensity is not particularly high compared with the other typical single peaks. The central peak is surrounded by an extended component elongated in the SE to NW direction, with the SE end, at a 5$''$ radius, being brighter. Two prominent molecular arms run at $r \sim 20''$. The north-eastern arm is much stronger than the south-western arm. Both arms are associated with the dark lanes along the optical spiral arms. The velocity field and PV diagram indicate a sharp rise of the rotation velocity in the nuclear disk. A detailed description, and modeling by a spiral-shock accretion mechanism, are given in Onodera et al. (2003: private communication).
NGC 4535
--------

The molecular gas shows a strong concentration in the central region of $\sim 6''$ radius. This galaxy is a typical "single-peak" type. Offset bars extend from the central disk toward the NE and SW, coinciding with the optical dark lanes in the bar. The velocity field shows a usual spider diagram with the zero-velocity node at a position angle of $90\deg$, coinciding with the optical minor axis. However, the CO arms along the dark lanes show some non-circular streaming velocities, indicating inflow along the arms. The PV diagram shows a sharply rising, but rigid-body-like, behavior within the central molecular disk.

NGC 4536
--------

This is also a typical "single-peak" type galaxy, with the molecular gas concentrated in a nuclear disk of $\sim 10''$ radius and an unresolved compact core at the nucleus. The velocity field shows a spider diagram, and the PV diagram indicates that the rotation velocity rises to 200 ${\rm \,km\, s^{-1}}$ within the central $2''$. There appears to be no strong non-circular streaming motion.

NGC 4548
--------

This is a typical barred galaxy. The CO emission is very weak compared with the other galaxies. The CO distribution is highly concentrated near the nucleus, being centrally peaked, and no extended emission is detected. The recovered flux is only 16% of the FCRAO flux in the $45''$ beam (§2.3), and the remaining 84% (5.6 K ${\rm \,km\, s^{-1}}$) could be due to very extended components with sizes greater than our maximum detectable size (54$''$).

NGC 4569
--------

The molecular gas is highly concentrated within a $\sim 1$ kpc radius. The CO intensity distribution is elongated in the same direction as the optical major axis, and has two peaks with a depression at the nucleus, consistent with the earlier CO map (Sakamoto et al. 1999a). Thus the central molecular morphology of this galaxy may be classified as twin peaks at the present resolution.
However, a higher-resolution CO map reveals that the apparent two peaks coincide with the two ends of an elliptical molecular ring, and that they are not associated with so-called offset ridges (Nakanishi et al. 2003). The velocity field is strongly disturbed from circular rotation, and the PV diagram indicates significant 'forbidden' velocities. Nakanishi et al. (2003) discuss the kinematics of this galaxy in detail, and try to explain these features using two models: non-circular motion and warping of the inner disk. They conclude that the disturbed velocity field and the forbidden velocities in the PV diagram are naturally explained by non-circular motion. Helfer et al. (2001) have reported extended CO emission in their wide-field mosaic image. Jogee et al. (2001) have also presented a high-resolution CO image of this galaxy.

NGC 4571
--------

No significant detection of the CO line was obtained for this galaxy, either in the channel maps or in an integrated-intensity map. Our resultant rms was $11.5\,{\rm K}\,{\rm \,km\, s^{-1}}$ for a $130 {\rm \,km\, s^{-1}}$ width, which is greater than the $3.3\,{\rm K}\,{\rm \,km\, s^{-1}}$ from the single-dish observations (Table 1). Since the map shows only noise, we do not present the result.

NGC 4579
--------

The CO distribution is elongated in the east-west direction, displaced by about $30\deg$ from the optical bar axis. There are two major CO peaks with asymmetric peak intensities, which are associated with symmetric spiral features, as reported by Kohno et al. (1999). The velocity field shows a rotation velocity higher than 200 ${\rm \,km\, s^{-1}}$ in the central few arcseconds, which is more clearly visible in the position-velocity diagram.

NGC 4654
--------

This galaxy is known for its lopsided structure in the optical as well as HI disks, most likely due to the ram-pressure effect of the intracluster gas, blowing from the northwest (Phookun and Mundy 1995).
The CO distribution is also lopsided, in the same direction as the tails in the HI and optical images. The lopsided CO distribution suggests that the ram-pressure effect is not negligible even in the central molecular disk. Moreover, the CO distribution is more elongated than the optical/HI disks. The velocity structure is rigid-body-like, with the rotation velocity increasing mildly with radius. Such rotation characteristics are exceptional among the presently observed PV diagrams.

NGC 4689
--------

Like the optical spiral arm features, the CO intensity distribution is amorphous, patchy, and widely extended. There appear to be neither spiral arms nor bars in CO, and no central peak is found. The peak $I_{\rm CO}$ amounts to only $\sim 24 {\rm \,K\, {\rm \,km\, s^{-1}}}$, the lowest among the observed galaxies. The velocity field indicates a regular rotation pattern, but the central rise of the rotation velocity is mild, as indicated by the PV diagram.

Summary and Discussion
======================

We have carried out a high-resolution CO-line survey of 15 Virgo spiral galaxies using the Nobeyama Millimeter-wave Array in the AB, C, and D configurations, and have presented the results in the form of integrated intensities, velocity fields, and position-velocity diagrams along the major axes. The galaxies were sampled from the CO-brightest galaxies in the list of Kenney & Young (1988) without any further bias. The CO properties may be compared with each other without ambiguity in the linear scale, as the distance to the Virgo Cluster galaxies can safely be taken to be 16.1 Mpc from the Cepheid calibration (Ferrarese et al. 1996). Owing to this homogeneity, our data will be useful for investigating correlations between the CO properties and other characteristics. We will discuss the correlation between the centrally peaked CO distributions and nuclear activity in a forthcoming paper of this series.
In the second paper of this series, we will derive accurate rotation curves by analyzing the position-velocity diagrams, and will discuss the dynamical properties of the central regions of the Virgo galaxies, as well as detailed mass distributions. We summarize the results obtained in this paper as follows, and discuss their implications below. The mean radial profiles of the molecular gas distribution in the inner 10 to 15$''$ radius regions are approximately exponential, with $e$-folding scale radii of 400 to 700 pc. In some galaxies, more extended disk components are detected with larger scale radii, although the present interferometric data are not suited to determining the disk radii precisely. A careful inspection of the intensity maps shows that the observed intensity distributions exhibit a variety of morphologies, which may be classified as follows. The centrally concentrated components fall into two types: the central-peak (or single-peak) type and the twin-peaks type; the latter shows plateau-like radial profiles near the center. The more extended components can be classified into spiral-arm, bar, and amorphous types. It is of particular interest to consider what causes this variety of molecular gas morphology, as it could be intimately related to the activity in the centers of galaxies. Twin peaks of molecular gas at the ends of a pair of bisymmetric offset molecular bars (dark lanes) along the optical bar have been noted in several barred galaxies in relation to mechanisms that fuel interstellar gas toward the central regions (Kenney et al. 1992; Sakamoto et al. 1999a). NGC 4303 is a typical case of the twin-peaks type, and its structure is well explained by the bar-potential and galactic-shock hypothesis (Schinnerer et al. 2002; Koda et al. 2003, private communication).
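The approximately exponential radial profiles quoted above can be characterized by fitting $I(r) = I_0\,e^{-r/r_e}$ to an azimuthally averaged profile; since $\ln I$ is linear in $r$, an ordinary least-squares fit on $\ln I$ suffices. A minimal sketch on synthetic data (the profile values below are illustrative, not survey measurements):

```python
import math

def fit_exponential_profile(radii_pc, intensities):
    """Fit I(r) = I0 * exp(-r / r_e) by least squares on ln(I).

    ln I = ln I0 - r / r_e is linear in r, so the e-folding scale
    radius r_e follows from the slope of an ordinary linear fit.
    """
    ys = [math.log(i) for i in intensities]
    n = len(radii_pc)
    mx = sum(radii_pc) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in radii_pc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(radii_pc, ys))
    slope = sxy / sxx                      # equals -1 / r_e
    intercept = my - slope * mx            # equals ln I0
    return math.exp(intercept), -1.0 / slope

# Synthetic profile with r_e = 550 pc, the middle of the observed range
radii = [100.0 * k for k in range(1, 12)]                # 100-1100 pc
profile = [300.0 * math.exp(-r / 550.0) for r in radii]  # illustrative values
i0, r_e = fit_exponential_profile(radii, profile)
```

Applied to the observed profiles, a fit of this kind yields the $e$-folding radii of 400 to 700 pc quoted above.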
However, the fraction of galaxies having “twin peaks” is not particularly high, at least at the present resolution ($2-4''$, or 150 - 300 pc). On the other hand, “central-peak” or “single-peak” galaxies make up a larger fraction: five of the fifteen galaxies in the present survey at our resolution, namely NGC 4192 (SAB(s)ab), NGC 4419 (SB(s)a), NGC 4501 (SA(rs)b), NGC 4535 (SAB(s)c), and NGC 4536 (SAB(rs)bc). Twin peaks are often interpreted as a consequence of the characteristic gaseous orbits in a bar potential: the x1 and x2 orbits in a bar intersect each other near the inner Lindblad resonance, causing the gas to collide at the two intersection points and thereby producing the twin peaks. Although this simple interpretation of the twin peaks is attractive, the current study has shown that the single peak is more common than the twin peaks. The origin of the single peak is an interesting subject, although more careful simulations of the gas dynamics are needed; here we speculate on a possible mechanism. First of all, the twin peaks are not the final, stable configuration of the gas in a bar. The self-gravity of the gas will cause the gas structures to collapse; in particular, clumps as large as the twin peaks will be gravitationally unstable and will drive further infall of gas into the center through friction among the clouds (Wada & Habe 1992). Moreover, if a central massive object exists, the friction due to the stronger shearing motion will accelerate the accretion toward the nucleus (Fukuda et al. 1998). In fact, most of the galaxies show a very steep rise of the rotation curve in the central $\sim 100$ pc region, indicating the existence of massive compact cores of mass $10^8 -10^9 M_\odot$ around the nuclei (Sofue et al. 2003a). Hence, the central single peak may form after the twin peaks have developed.
The above mechanism could work even if a galaxy shows no prominent bar in optical/infrared photographs, because even a very weak bisymmetric distortion of the disk potential can cause non-circular motion of the gas (Koda & Wada 2003). Onodera et al. (2003, private communication) discuss another possible mechanism, in which stellar spiral arms produce a central single peak, for the case of NGC 4501, which indeed has no bar but shows continuous spiral structure from the disk to the nucleus.

Acknowledgements: The observations were performed as a long-term project from 1999 December to 2002 April at the Nobeyama Radio Observatory (NRO) of the National Astronomical Observatories of Japan. We are indebted to the staff of NRO for their help during the observations. We thank T. Takamiya and M. Hidaka for their help with the observations and reductions, and A. Kawamura and M. Honma for their help with the observations. The data reduction was performed with the NRAO AIPS package. We made use of the data archive of the NASA/IPAC Extragalactic Database (NED). J. K. was financially supported by a Research Fellowship of the Japan Society for the Promotion of Science (JSPS) for Young Scientists.

[**References**]{}

Baker, A. J. 1999, in The Physics and Chemistry of the Interstellar Medium, ed. V. Ossenkopf, J. Stutzki, & G. Winnewisser (GCA-Verlag, Herdecke), 30
Braine, J., Combes, F., Casoli, F., Dupraz, C., Gérin, M., Klein, M., Wielebinski, R., & Brouillet, N. 1993, A&AS, 97, 1791
Cayatte, V., van Gorkom, J. H., Balkowski, C., & Kotanyi, C. 1990, AJ, 100, 604
Cotton, W. D., Condon, J. J., & Arbizzani, E. 1999, ApJS, 125, 409
Ferrarese, L., Freedman, W. L., Hill, R. J., Saha, A., Madore, B. F., et al. 1996, ApJ, 464, 568
Fukuda, H., Wada, K., & Habe, A. 1998, MNRAS, 295, 463
Helfer, T., Regan, M. W., Thornley, M., Wong, T., Sheth, K., Vogel, S. N., Bock, D. C. J., Blitz, L., & Harris, A. 2001, Ap&SS, 276, 1131
Ho, L. C., Filippenko, A. V., & Sargent, W. L. W. 1997a, ApJS, 112, 315
Ho, L. C., Filippenko, A.
V., & Sargent, W. L. W. 1997b, ApJ, 487, 591
Ho, L. C., & Ulvestad, J. S. 2001, ApJS, 133, 77
Honma, M., Sofue, Y., & Arimoto, N., A&A, 304, 1
Hummel, E., Davies, R. D., Pedlar, A., Wolstencroft, R. D., & van der Hulst, J. M. 1988, A&A, 199, 91
Jogee, S., Baker, A. J., Sakamoto, K., Scoville, N. Z., & Kenney, J. D. P. 2001, in The Central Kiloparsec of Starbursts and AGN: The La Palma Connection, ASP Conf. Ser. Vol. 249, ed. J. H. Knapen, J. E. Beckman, I. Shlosman, & T. J. Mahoney (ASP, San Francisco), 612
Kenney, J. D., & Young, J. S. 1988, ApJS, 66, 261
Kenney, J. D. P., Young, J. S., Hasegawa, T., & Nakai, N. 1990, ApJ, 353, 460
Kenney, J. D. P., Wilson, C. D., Scoville, N. Z., Devereux, N. A., & Young, J. S. 1992, ApJ, 395, L79
Koda, J., Sofue, Y., Kohno, K., Nakanishi, H., Onodera, S., Okumura, S. K., & Irwin, J. A. 2002, ApJ, 573, 105
Koda, J., & Wada, K. 2003, A&A, in press
Kohno, K., Kawabe, R., & Vila-Vilaró, B. 1999, in The Physics and Chemistry of the Interstellar Medium, ed. V. Ossenkopf, J. Stutzki, & G. Winnewisser (GCA-Verlag, Herdecke), 34
Kohno, K., Vila-Vilaró, B., Kawabe, R., Sakamoto, S., Ishizuki, S., & Matsushita, S. 2002, PASJ, submitted
Nakanishi, H., Sofue, Y., Koda, J., & Onodera, S. 2003, PASJ, submitted
Nishiyama, K., & Nakai, N. 2001, PASJ, 53, 713
Nishiyama, K., Nakai, N., & Kuno, N. 2001, PASJ, 53, 757
Okumura, S. K., Momose, M., Kawaguchi, N., et al. 2000, PASJ, 52, 339
Phookun, B., & Mundy, L. G. 1995, ApJ, 453, 154
Regan, M. W., Thornley, M. D., Helfer, T. T., et al. 2001, ApJ, 561, 218
Saikia, D. J., Junor, W., Cornwell, T. J., Muxlow, T. W. B., & Shastri, P. 1990, MNRAS, 245, 408
Sakamoto, K., Okumura, S. K., Ishizuki, S., & Scoville, N. Z. 1999a, ApJS, 124, 403
Sakamoto, K., Okumura, S. K., Ishizuki, S., & Scoville, N. Z. 1999b, ApJ, 525, 691
Schinnerer, E., Eckart, A., & Tacconi, L. J. 1999, ApJ, 524, L5
Schinnerer, E., Maciejewski, W., Scoville, N. Z., & Moustakas, L. A. 2002, ApJ, in press
Sargent, A. I., & Welch, W. J.
1993, ARA&A, 31, 297
Sofue, Y., Honma, M., & Arimoto, N. 1995, A&A, 296, 33
Sofue, Y., Tomita, A., Honma, M., & Tutui, Y. 1999, PASJ, 51, 737
Sofue, Y., Koda, J., Nakanishi, H., & Onodera, S. M. 2003a, PASJ, submitted
Sofue, Y., Koda, J., Nakanishi, H., Onodera, S., & Hidaka, M. 2003b, PASJ, in this volume
Sofue, Y., Koda, J., Kohno, K., Okumura, S. K., Honma, M., Kawamura, A., & Irwin, J. A. 2001, ApJ, 547, L115
Sofue, Y., & Rubin, V. 2001, ARA&A, 39, 137
Sofue, Y., Tutui, Y., Honma, M., Tomita, A., Takamiya, T., Koda, J., & Takeda, Y. 1999, ApJ, 523, 136
Stark, A. A., Knapp, G. R., Bally, J., Wilson, R. W., Penzias, A. A., & Rowe, H. 1986, ApJ, 310, 660
Takamiya, T., & Sofue, Y. 2000, ApJ, 534, 670
Takamiya, T., & Sofue, Y. 2002, ApJ Letters, submitted
Terashima, Y., Iyomoto, N., Ho, L. C., & Ptak, A. F. 2002, ApJS, 139, 1
Wada, K., & Habe, A. 1992, MNRAS, 258, 82
Wong, T., & Blitz, L. 2002, ApJ, 569, 157
Young, J. S., & Scoville, N. Z. 1991, ARA&A, 29, 581
Young, J. S., Xie, S., Tacconi, L., et al. 1995, ApJS, 98, 219

[ccccccc]{} & & & & Frequency &\
NGC & Config.
& Year & & (GHz) & Reference\
(1) & (2) & (3) & RA(J2000) & DEC(J2000) & (5) & (6)\
4192 & AB+C+D & 2000-2002 & 12 13 48.30 & +14 54 02.9 & 115.350000 & 1\
4212 & C+D & 2000-2001 & 12 15 39.11 & +13 54 04.8 & 115.286574 & 2\
4254 & AB+C+D & 2000 & 12 18 50.03 & +14 24 52.8 & 114.355710 & 3\
4303 & AB+C+D & 2000 & 12 21 54.87 & +04 28 24.9 & 114.659250 & 2\
4402 & AB+C+D & 2001-2002 & 12 26 07.06 & +13 06 45.7 & 115.200000 & 1\
4419 & AB+C+D & 2000-2002 & 12 26 56.43 & +15 02 51.1 & 115.286574 & 1\
4501 & AB+C+D & 2001-2002 & 12 31 59.14 & +14 25 12.9 & 114.407000 & 4\
4535 & AB+C+D & 2000-2001 & 12 34 20.25 & +08 11 52.2 & 114.507290 & 5\
4536 & AB+C+D & 2001 & 12 34 27.07 & +02 11 18.3 & 114.586000 & 5\
4548 & AB+C+D & 2001 & 12 35 26.40 & +14 29 47.0 & 115.098000 & 3\
4569 & AB+C+D & 2000-2002 & 12 36 49.82 & +13 09 45.8 & 115.286574 & 4\
4571 & C+D & 2001-2002 & 12 36 56.40 & +14 13 02.0 & 115.138000 & 3\
4579 & AB+C+D & 2001-2002 & 12 37 43.53 & +11 49 05.4 & 114.710000 & 6\
4654 & AB+C+D & 2000-2002 & 12 43 55.74 & +13 07 44.2 & 114.811630 & 1\
4689 & C+D & 2000-2002 & 12 47 45.60 & +13 45 46.0 & 114.659250 & 3\
Notes. — Col.(1): Galaxy name. Col.(2): NMA configurations used for the observations. Col.(3): Year of the observations. Col.(4): Pointing position, which also served as the phase tracking center. Col.(5): Observing frequency. Col.(6): References for positions: (1) Condon et al. 1990; (2) Saikia et al. 1994; (3) NED; (4) Sakamoto et al. 1999; (5) Hummel et al. 1987; (6) Kohno et al.
1999

[ccccccccccccc]{} & & & & & & [$T_b$ for ${\rm 1Jy\,beam^{-1}}$]{} & $f_{45{''}}$\
NGC & & & & $N_c$ & & (K) & (%)\
(1) & (${''}$) & (${''}$) & (${^\circ}$) & & (3) & (4) & (5) & [(mJy/beam)]{} & (mK) & (7) & (8)\
4192 & 2.4 & 1.9 & 158 && 20.8 & 20.8 & 23 & 18 & 363 & 20.2 & 47\
4212 & 4.0 & 3.7 & 149 && 20.8 & 10.4 & 22 & 28 & 174 & 6.2 & 76\
4254 & 3.0 & 2.3 & 148 && 20.8 & 10.4 & 22 & 25 & 333 & 13.3 & 102\
4303 & 2.8 & 1.9 & 27 && 10.4 & 10.4 & 17 & 21 & 363 & 17.3 & 97\
4402 & 2.8 & 2.3 & 166 && 20.8 & 10.4 & 20 & 26 & 371 & 14.3 & 106\
4419 & 3.5 & 2.7 & 159 && 10.4 & 10.4 & 38 & 26 & 253 & 9.7 & 83\
4501 & 5.6 & 3.7 & 160 && 10.4 & 10.4 & 46 & 17 & 75 & 4.4 & 85\
4535 & 3.1 & 2.6 & 164 && 10.4 & 10.4 & 23 & 21 & 240 & 11.4 & 208\
4536 & 2.5 & 1.8 & 173 && 10.4 & 10.4 & 34 & 17 & 347 & 20.4 & 68\
4548 & 2.6 & 2.0 & 154 && 31.2 & 15.6 & 19 & 22 & 389 & 17.7 & 16\
4569 & 4.5 & 3.1 & 146 && 10.4 & 10.4 & 37 & 25 & 165 & 6.6 & 111\
4571 & 3.8 & 2.5 & 154 && —- & —- & 0 & 33 & 319 & 9.7 & —\
4579 & 4.5 & 3.5 & 152 && 20.8 & 10.4 & – & 22 & 128 & 5.8 & 31\
4654 & 5.2 & 3.7 & 149 && 20.8 & 10.4 & 23 & 22 & 105 & 4.8 & 142\
4689 & 5.2 & 4.2 & 134 && 20.4 & 10.4 & 17 & 20 & 84 & 4.2 & 68\
Notes. — Col.(1): Galaxy name. Col.(2): Major and minor axis sizes and position angle of the synthesized beam. Col.(3) and (4): Integrated velocity width of a channel, and the sampling width between channels of the cube. Col.(5): Number of channels in which emission is detected. Col.(6): Rms noise scaled to a ${\rm 10 \, km\,s^{-1}}$ channel. Col.(7): Equivalent antenna temperature for ${\rm 1 \, Jy\,beam^{-1}}$. Col.(8): Fraction of the single-dish flux recovered by the aperture synthesis observations. Single-dish data are from Kenney & Young (1988).
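Column (7) above, the equivalent antenna temperature for $1\,{\rm Jy\,beam^{-1}}$, follows from the Rayleigh-Jeans relation for a Gaussian beam; the numerical constant $1.222\times10^3$ absorbs $c^2/2k$ and the $\pi/(4\ln 2)$ beam solid-angle factor. A sketch of the conversion:

```python
def tb_per_jansky_beam(freq_ghz, bmaj_arcsec, bmin_arcsec):
    """Brightness temperature (K) equivalent to 1 Jy/beam for a Gaussian beam.

    From the Rayleigh-Jeans law, T_b = S c^2 / (2 k nu^2 Omega) with the
    Gaussian beam solid angle Omega = pi * bmaj * bmin / (4 ln 2), which
    reduces to T_b [K] = 1.222e3 * S[mJy] / (nu[GHz]^2 * bmaj * bmin ["]).
    """
    s_mjy = 1000.0  # 1 Jy/beam expressed in mJy
    return 1.222e3 * s_mjy / (freq_ghz ** 2 * bmaj_arcsec * bmin_arcsec)

# NGC 4192: a 2.4" x 1.9" beam at 115.35 GHz gives ~20 K per Jy/beam,
# matching Col. (7) of the table above
tb_4192 = tb_per_jansky_beam(115.35, 2.4, 1.9)
```

The same call with the NGC 4212 beam ($4.0''\times3.7''$ at 115.286574 GHz) reproduces the tabulated 6.2 K.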
[cccc]{} & &\
NGC & & Reference\
(1) & RA(J2000) & DEC(J2000) & (3)\
4192 & 12 13 48.29 & +14 54 01.9 & 1\
4212 & 12 15 39.40 & +13 54 04.6 & 2\
4254 & 12 18 49.61 & +14 24 59.6 & 1\
4303 & 12 21 54.94 & +04 28 25.6 & 1\
4402 & 12 26 07.45 & +13 06 44.7 & 1\
4419 & 12 26 56.40 & +15 02 50.2 & 1\
4501 & 12 31 59.12 & +14 25 13.3 & 1\
4535 & 12 34 20.35 & +08 11 52.2 & 1\
4536 & 12 34 27.08 & +02 11 17.1 & 1\
4548 & 12 35 26.44 & +14 29 47.4 & 1\
4569 & 12 36 49.82 & +13 09 45.8 & 3\
4579 & 12 37 43.53 & +11 49 05.5 & 4\
4654 & 12 43 56.67 & +13 07 36.1 & 1\
4689 & 12 50 15.86 & +13 29 27.4 & 1\
Notes. — Col.(1): Galaxy name. Col.(2): Adopted central position. Col.(3): Reference for the position: (1) this study, dynamical center derived with the AIPS/GAL package; (2) Cotton et al. 1999; (3) Sakamoto et al. 1999; (4) Ho & Ulvestad 2001.

[cccc]{}\
NGC & $T_{\rm b, peak}$ & $I_{\rm CO, peak}$ & $I_{\rm CO, peak}\times {\rm cos} i$\
& \[K\] & \[$\rm K {\rm \,km\, s^{-1}}$\] & \[$\rm K {\rm \,km\, s^{-1}}$\]\
4569 & 3.54 & 586 & 266\
4419 & 5.71 & 585 & 229\
4536 & 5.87 & 575 & 224\
4303 & 6.48 & 436 & 395\
4535 & 5.17 & 377 & 275\
4192 & 3.26 & 348 & 96\
4402 & 4.06 & 224 & 58\
4501 & 1.14 & 210 & 111\
4548 & 1.46 & 123 & 98\
4212 & 1.38 & 115 & 78\
4254 & 3.88 & 110 & 97\
4579 & 0.78 & 84 & 67\
4654 & 1.15 & 44 & 27\
4689 & 0.62 & 24 & 21\
Each value contains a systematic error of about $\pm 15$ %. Inclination angles from the optical isophotes (Table 1) are assumed.

Figure Captions

PS figures are available at http://www.ioa.s.u-tokyo.ac.jp/radio/virgo

Fig. 1. Atlas of the observed Virgo galaxies. The top-left panels show DSS second-generation blue optical images, each covering a $5'\times5'$ area. ${{\rm ^{12}CO} (J=1-0)}$ observations were obtained for the central $1'\times1'$ regions. The bottom panels show the observed CO intensity distributions (left) and the corresponding velocity fields (center).
Position-velocity diagrams along the major axes are shown in the top-right panels. Parameters of the galaxies and the displayed areas are indicated at the bottom for each galaxy. The indicated RA and Dec are in J2000; north is to the top, and east to the left.

\(a) [**NGC 4192**]{}: (tl) DSS b-band $5'\times5'$; $i=74^\circ$; $PA= 155$. (bl) Ico: $1'\times 1'$; cl = 20 $\times$ (1, 2, ..., 10, 12, 14, 16, 18) K km s$^{-1}$. (tr) PVD: $1' \times 3''$, PA 155; cl = 0.1 $\times$ (1, 2, ..., 10, 12, ..., 22) K. (br) V-field: $1'\times 1'$; cl = -400 to 50 km s$^{-1}$, every 50 km s$^{-1}$.

\(b) [**NGC 4212**]{}: (tl) DSS b-band $5'\times5'$; $i=47^\circ$; $PA= 75$. (bl) Ico: $1'\times 1'$; cl = 50 $\times$ (0.5, 1, 2, 3, ..., 10) K km s$^{-1}$. (tr) PVD: $1' \times 3''$, PA 75; cl = 0.5 $\times$ (0.25, 1, 2, ..., 10) K. (br) V-field: $1'\times 1'$; cl = -200 to 40 km s$^{-1}$, every 20 km s$^{-1}$.

\(c) [**NGC 4254**]{}: (tl) DSS b-band $5'\times5'$; $i=42$; PA=45. (bl) Ico: $80''\times 80''$; cl = 10 $\times$ (1, 2, ..., 12) K km s$^{-1}$. (tr) PVD: $1' \times 3''$, PA 45; cl = 0.132 $\times$ (1, 2, ..., 12) K. (br) V-field: $1'\times 1'$; cl = 2300 to 2530 km s$^{-1}$, every 20 km s$^{-1}$.

\(d) [**NGC 4303**]{}: (tl) DSS b-band $5'\times5'$; $i=25$; PA=0. (bl) Ico: $1'\times 1'$; Beam $2''.80\times 1''.90$; cl = 25 $\times$ (1, 2, ..., 12) K km s$^{-1}$. (tr) PVD: $1' \times 3''$, PA 340; cl = 0.5 $\times$ (1, 2, ..., 12) K. (br) V-field: $1'\times 1'$; cl = 1500 to 1600 km s$^{-1}$, every 10 km s$^{-1}$.

\(e) [**NGC 4402**]{}: (tl) DSS b-band $5'\times5'$; $i=75^\circ$; $PA= 90$. (bl) Ico: $80''\times 80''$; cl = 20 $\times$ (1, 2, 3, ..., 10) K km s$^{-1}$. (tr) PVD: $80'' \times 3''$, PA 90; cl = 0.3 $\times$ (1, 2, ..., 12) K. (br) V-field: $80''\times 80''$; cl = 60 to 340 km s$^{-1}$, every 20 km s$^{-1}$.

\(f) [**NGC 4419**]{}: (tl) DSS b-band $5'\times5'$; $i=67^\circ$; $PA= 133$. (bl) Ico: $1'\times 1'$; cl = 50 $\times$ (0.4, 0.8, 1.2, 2, 3, ..., 12) K km s$^{-1}$. (tr) PVD: $1' \times 5''$, PA 133; cl = 0.5 $\times$ (0.5, 1, 2, ..., 10) K. (br) V-field: $1'\times 1'$; cl = -400 to 0 km s$^{-1}$, every 20 km s$^{-1}$.
\(g) [**NGC 4501**]{}: (tl) DSS b-band $5'\times5'$; $i=58^\circ$; $PA= 140$. (bl) Ico: $80''\times 80''$; cl = 10 $\times$ (0.5, 1.0, 1.5, 2, 3, ..., 10, 12, ..., 20) K km s$^{-1}$. (tr) PVD: $1' \times 10''$, PA 140; cl = 0.1 $\times$ (0.5, 1, 2, ..., 10) K. (br) V-field: $80''\times 80''$; cl = 200 to 500 km s$^{-1}$, every 50 km s$^{-1}$.

\(h) [**NGC 4535**]{}: (tl) DSS b-band $5'\times5'$; $i=43$; PA=0. (bl) Ico: $1'\times 1'$; Beam $2''.80\times 1''.90$; cl = 10 $\times$ (1, 2, 4, 6, ..., 10, 15, 20, 25) K km s$^{-1}$. (tr) PVD: $1' \times 10''$, PA 0; cl = 0.5 $\times$ (1, 2, ..., 12) K. (br) V-field: $1'\times 1'$; cl = 1800 to 2100 km s$^{-1}$, every 10 km s$^{-1}$.

\(i) [**NGC 4536**]{}: (tl) DSS b-band $5'\times5'$; $i=67^\circ$; $PA= 116$. (bl) Ico: $1'\times 1'$; cl = 20 $\times$ (1, 2, 4, 6, ..., 10, 15, 20, 25, 30) K km s$^{-1}$. (tr) PVD: $1' \times 5''$, PA 116; cl = 0.5 $\times$ (0.25, 1, 2, ..., 10) K. (br) V-field: $1'\times 1'$; cl = 1600 to 2000 km s$^{-1}$, every 50 km s$^{-1}$.

\(j) [**NGC 4548**]{}: (tl) DSS b-band $5'\times5'$; $i=37^\circ$; $PA= 150$. (bl) Ico: $1'\times 1'$; cl = 10 $\times$ (1, 2, 3, ..., 12) K km s$^{-1}$. (tr) PVD: $1' \times 5''$, PA 150; cl = 0.15 $\times$ (1, 2, ..., 10) K. (br) V-field: $1'\times 1'$; cl = 300 to 700 km s$^{-1}$, every 50 km s$^{-1}$.

\(k) [**NGC 4569**]{}: (tl) DSS b-band $5'\times5'$; $i=63^\circ$; $PA= 23$. (bl) Ico: $1'\times 1'$; cl = 50 $\times$ (0.25, 0.5, 1, 2, ..., 12) K km s$^{-1}$. (tr) PVD: $1' \times 5''$, PA 160; cl = 0.2 $\times$ (1, 2, ..., 15) K. (br) V-field: $1'\times 1'$; cl = -380 to 0 km s$^{-1}$, every 20 km s$^{-1}$.

\(l) [**NGC 4579**]{}: (tl) DSS b-band $5'\times5'$; $i=37^\circ$; $PA= 60$. (bl) Ico: $1'\times 1'$; cl = 20 $\times$ (1, 2, 3, ..., 7) K km s$^{-1}$. (tr) PVD: $1' \times 5''$, PA 90; cl = 0.25 $\times$ (1, 2, ..., 6) K. (br) V-field: $1'\times 1'$; cl = 1320 to 1600 km s$^{-1}$, every 20 km s$^{-1}$.

\(m) [**NGC 4654**]{}: (tl) DSS b-band $5'\times5'$; $i=51^\circ$; $PA= 128$. (bl) Ico: $1'\times 1'$; cl = 5 $\times$ (1, 2, ..., 10) K km s$^{-1}$. (tr) PVD: $1' \times 3''$, PA 128; cl = 0.1 $\times$ (1, 2, ..., 10) K. (br) V-field: $1'\times 1'$; cl = 960 to 1120 km s$^{-1}$, every 20 km s$^{-1}$.
\(n) [**NGC 4689**]{}: (tl) DSS b-band $5'\times5'$; $i=30^\circ$; $PA= 160$. (bl) Ico: $1'\times 1'$; cl = 2.5 $\times$ (1, 2, 3, ..., 10) K km s$^{-1}$. (tr) PVD: $1' \times 5''$, PA 160; cl = 0.05 $\times$ (1, 2, ..., 10) K. (br) V-field: $1'\times 1'$; cl = 1500 to 1700 km s$^{-1}$, every 20 km s$^{-1}$.

Fig. 2. Integrated ${{\rm ^{12}CO} (J=1-0)}$-intensity maps of the observed galaxies on the same angular scale. The image sizes are $1.'0 \times 1.'0$, or 4.68 kpc $\times 4.68$ kpc for an assumed distance of 16.1 Mpc. The contours are drawn at 5, 10, 20, 40, 80, and 160 K ${\rm \,km\, s^{-1}}$.

Fig. 3. CO-line velocity fields of the observed galaxies on the same angular scale as in figure 2. Contours are drawn every 20 ${\rm \,km\, s^{-1}}$ relative to the systemic velocity, which is indicated by thick white contours. Darker shading represents redshifted gas, and lighter shading blueshifted gas.

Fig. 4. Sky plot of ${I_{\rm CO}}$ over the Virgo Cluster area. Each map is enlarged to 50 times its real angular size. The position of M87 is marked by a cross.

Fig. 5. Radial profiles of the face-on CO intensity obtained by ellipse fitting. The plotted radius extends to $40''$. The primary-beam attenuation has been corrected.

Fig. 6. CO intensity distributions in the central $20''\times 20''$ regions of the “central/single-peak” galaxies. Contour levels are 20 $\times$ (1, 2, ..., 10, 12, ..., 20, 25, ..., 40) K ${\rm \,km\, s^{-1}}$.

[**Appendix**]{}

We show the channel maps of the individual galaxies in Figure A1, in particular to confirm that no significant continuum emission has been detected.

Fig. A1. Channel maps of the $^{12}$CO ($J=1-0$) line emission of the Virgo galaxies. The intensity scale is in Kelvin of brightness temperature. Contours are drawn at $2^n$ times the lowest-contour value ($n=1, 2, 3, \ldots$). The lowest contour level (cl) is indicated for the individual galaxies.

NGC 4192: Lowest cl = 0.5 K.
NGC 4212: Lowest cl = 0.25 K.
NGC 4254: Lowest cl = 0.5 K.
NGC 4303: Lowest cl = 1.0 K.
NGC 4402: Lowest cl = 0.5 K.
NGC 4419: Lowest cl = 0.5 K.
NGC 4501: Lowest cl = 0.20 K.
NGC 4535: Lowest cl = 0.5 K.
NGC 4536: Lowest cl = 0.5 K.
NGC 4548: Lowest cl = 0.25 K.
NGC 4569: Lowest cl = 0.5 K.
NGC 4579: Lowest cl = 1.0 K.
NGC 4654: Lowest cl = 0.5 K.
NGC 4689: Lowest cl = 0.125 K.
--- abstract: | We present the second multi-frequency radio detection of a reverse shock in a $\gamma$-ray burst. By combining our extensive radio observations of the [*Fermi*]{}-LAT GRB 160509A at $z = 1.17$ up to $20$ days after the burst with [*Swift*]{} X-ray observations and ground-based optical and near-infrared data, we show that the afterglow emission comprises distinct reverse shock and forward shock contributions: the reverse shock emission dominates in the radio band at $\lesssim10$ days, while the forward shock emission dominates in the X-ray, optical, and near-infrared bands. Through multi-wavelength modeling, we determine a circumburst density of ${\ensuremath{n_{0}}}\approx10^{-3}$ [${\rm cm}^{-3}$]{}, supporting our previous suggestion that a low-density circumburst environment is conducive to the production of long-lasting reverse shock radiation in the radio band. We infer the presence of a large excess X-ray absorption column, $N_{\rm H} \approx 1.5\times10^{22}$ [${\rm cm}^{-2}$]{}, and a high rest-frame optical extinction, $A_{\rm V}\approx3.4$ mag. We identify a jet break in the X-ray light curve at ${\ensuremath{t_{\rm jet}}}\approx6$ d, and thus derive a jet opening angle of ${\ensuremath{\theta_{\rm jet}}}\approx4\degr$, yielding a beaming-corrected kinetic energy and radiated $\gamma$-ray energy of ${\ensuremath{E_{\rm K}}}\approx4\times10^{50}$ erg and ${\ensuremath{E_{\gamma}}}\approx1.3\times10^{51}$ erg (1–$10^4$ keV, rest frame), respectively. Consistency arguments connecting the forward and reverse shocks suggest a deceleration time of ${\ensuremath{t_{\rm dec}}}\approx 460$ s $\approx T_{90}$, a Lorentz factor of $\Gamma({\ensuremath{t_{\rm dec}}})\approx330$, and a reverse shock to forward shock fractional magnetic energy density ratio of ${\ensuremath{R_{\rm B}}}\equiv\epsilon_{\rm B,RS}/\epsilon_{\rm B,FS}\approx8$. author: - 'Tanmoy Laskar, Kate D. Alexander, Edo Berger, Wen-fai Fong, Raffaella Margutti, Isaac Shivvers, Peter K. 
G. Williams, Drejc Kopa[č]{}, Shiho Kobayashi, Carole Mundell, Andreja Gomboc, WeiKang Zheng, Karl M. Menten, Melissa L. Graham, and Alexei V. Filippenko'
title: A Reverse Shock in GRB 160509A
---

Introduction
============

Long-duration $\gamma$-ray bursts (GRBs) are produced during the catastrophic collapse of massive stars [@mw99], their immense luminosity likely powered by relativistic outflows launched from a compact central engine [@pir05]. However, the nature of the central engine launching the outflow and the mechanism producing the collimated, relativistic jet remain two urgent open questions, with models ranging from baryon-dominated to Poynting-flux-dominated jets, and from nascent black holes to magnetars as the central engine [see @kz15 for a review]. A direct means of probing the outflow, and thus the nature of the central engine, is the study of synchrotron radiation from the reverse shock (RS), expected when the ejecta first begin to interact with the surrounding medium [@mr93; @sp99]. Consistency arguments between the synchrotron spectra of the forward shock (FS) and the RS at the time the RS has just crossed the ejecta (the deceleration time, ${\ensuremath{t_{\rm dec}}}$) allow a measurement of the ejecta Lorentz factor and of the ejecta magnetization, i.e., the ratio of the fractional magnetic field energy density of the RS-shocked ejecta to that of the FS-shocked circumburst medium. Theoretically predicted to produce optical flashes on $\sim$ hour timescales, reverse shocks were expected to be easily observable with the rapid X-ray localizations enabled by [*Swift*]{}. However, this signature has been seen in only a few cases in the [*Swift*]{} era, despite optical follow-up observations as early as a few minutes after $\gamma$-ray triggers [see @jkck+14 for a review]. The dearth of bright optical flashes suggests that RS emission may instead be easier to observe at longer wavelengths [@mmg+07; @lbz+13; @kmk+15].
We have therefore initiated a program at the Karl G. Jansky Very Large Array (VLA) for radio RS studies, and here present the detection of a reverse shock in the [*Fermi*]{} [GRB 160509A]{}. Combining our radio observations with X-ray data from [*Swift*]{} and ground-based optical/near-infrared (NIR) observations, we perform detailed modeling of the afterglow in a robust statistical framework to derive the properties of the relativistic ejecta. Following GRB 130427A [@lbz+13; @pcc+14], this is the second GRB for which multi-frequency radio observations enable detailed characterization of the RS emission. All magnitudes are in the AB system [@og83], times are relative to the LAT trigger time, and uncertainties are reported at 68% ($1\sigma$) confidence, unless otherwise noted.

GRB Properties and Observations {#text:GRB_Properties_and_Observations}
===============================

High-energy: [*Fermi*]{}
------------------------

GRB 160509A was discovered by the [*Fermi*]{} Large Area Telescope [LAT; @aaa+09] on 2016 May 09 at 08:59:04.36 UTC [@gcn19403]. The burst also triggered the [*Fermi*]{} Gamma-ray Burst Monitor [GBM; @gcn19411]. The burst duration in the 50–300 keV GBM band is $T_{90} = 369.7\pm0.8$ s, with a 10 keV–1 MeV fluence of $(1.790\pm0.002)\times10^{-4}$ erg [${\rm cm}^{-2}$]{}.

X-ray: [*Swift*]{}/XRT {#text:data_analysis:XRT}
----------------------

The [*Swift*]{} X-ray Telescope [XRT; @bhn+05] began tiled observations of the LAT error circle 2 hr after the GRB. A fading X-ray transient was discovered at RA = 20h47m00.72s, Dec = +76d06$'$28.6$''$ (J2000), with an uncertainty radius of 15 [90% containment; @gcn19406; @gcn19407; @gcn19408].[^1] The count rate light curve exhibits a break at $\approx4\times10^{4}$ s.
We checked for spectral evolution across the break by extracting XRT PC-mode spectra using the on-line tool on the [*Swift*]{} website [@ebp+07; @ebp+09] [^2] in the intervals $7.3\times10^3$ s to $3.7\times10^4$ s (spectrum 1) and $4.3\times10^4$ s to $1.3\times10^6$ s (spectrum 2). We employ `HEASOFT` (v6.18) and the corresponding calibration files to fit the spectra, assuming a photoelectrically absorbed power-law model with the Galactic neutral hydrogen absorption column fixed at $N_{\rm H, Gal} = 2.12\times10^{21}~{\ensuremath{{\rm cm}^{-2}}}$ [@wsb+13], and tying the value of the intrinsic absorption in the host galaxy, $N_{\rm H, int}$, to be the same between the two spectra, since we do not expect any evolution in the intrinsic absorption with time. We find only marginal evidence for spectral evolution, with $\Gamma = 2.01\pm0.05$ in the first spectrum and $\Gamma = 2.12\pm0.05$ in the second. Fixing the two epochs to have the same spectral index, we obtain $\Gamma_{\rm X} = 2.07\pm0.04$ and an intrinsic absorption column, $N_{\rm H, int}=(1.52\pm0.13)\times10^{22}$ [${\rm cm}^{-2}$]{}. We use this value of $\Gamma_{\rm X}$ (corresponding to a spectral index[^3] of $\beta_{\rm X}=-1.07\pm0.04$) and an associated counts-to-flux ratio of $6.5\times10^{-11}$ erg [${\rm cm}^{-2}$]{} s$^{-1}$ ct$^{-1}$ to convert the count rate to flux density, $f_{\nu}$, at 1 keV.

Optical/NIR {#text:data_analysis:optical}
-----------

Ground-based observations at Gemini-North beginning at 5.75 hr uncovered a faint source ($r^{\prime}=23.52\pm0.15$ mag, $z^{\prime}=21.35\pm0.30$ mag) consistent with the XRT position [@gcn19410]. Subsequent observations with the Discovery Channel Telescope (DCT) $\approx 1.03$ d after the LAT trigger showed that the source had faded since the Gemini observations, confirming it as the afterglow [@gcn19416].
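The conversion from a band-integrated flux to a flux density at 1 keV depends only on the photon index; note that for $f_\nu \propto \nu^{\beta}$, $\beta = 1 - \Gamma$. A sketch of this conversion, assuming the standard XRT 0.3–10 keV band (the band actually underlying the quoted counts-to-flux ratio is not stated here):

```python
def fnu_1kev_mjy(band_flux_cgs, gamma, e1_kev=0.3, e2_kev=10.0):
    """Flux density at 1 keV (mJy) for a power law with photon index gamma.

    The energy spectrum f_E = N * E^(1-gamma) (erg cm^-2 s^-1 keV^-1) is
    normalized so that its integral over [e1, e2] equals the band flux;
    f_E(1 keV) = N, and 1 keV = 2.418e17 Hz, 1 mJy = 1e-26 cgs.
    """
    a = 2.0 - gamma
    n = band_flux_cgs * a / (e2_kev ** a - e1_kev ** a)
    return n / 2.418e17 / 1e-26

# Spectral index in the f_nu ~ nu^beta convention used in the text
beta_x = 1.0 - 2.07                  # the quoted beta_X = -1.07

# Flux density at 1 keV implied by a 1 ct/s rate and the quoted
# counts-to-flux ratio, under the band assumption above
fnu = fnu_1kev_mjy(6.5e-11, 2.07)
```

This is an illustrative consistency check, not the calibration chain actually applied to the XRT data.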
The red color in the Gemini observations, $r^{\prime}-z^{\prime}\approx2.1$ mag, indicated either a high redshift or a significant amount of dust extinction within the host galaxy. Gemini-North $J$- and $K$-band imaging at $\approx 1.2$ d revealed an NIR counterpart with $J\sim16.6$ mag and $K\sim19.7$ mag [Vega magnitudes; @gcn19419].[^4] Spectroscopic observations with Gemini-North at $\approx 1.2$ d yielded a single emission line identified as \[O II\] $\lambda$3727 at $z=1.17$, other identifications being ruled out by the absence of additional lines in the spectrum [@gcn19419]. At this redshift, the inferred isotropic-equivalent $\gamma$-ray energy in the 1–$10^4$ keV rest-frame energy band is ${\ensuremath{E_{\gamma,\rm iso}}}=(5.76\pm0.05)\times10^{53}$ erg. We observed [GRB 160509A]{} using Keck-I/LRIS [@occ+95] beginning at $\approx28.2$ d in the $g$ and $R$ bands with integration times of 972 s and 900 s, respectively. We calibrated the data using a custom LRIS pipeline, and performed photometry using Starfinder [@dbb+00] relative to SDSS stars in the field, obtaining $g^{\prime} = 25.39\pm0.12$ mag and $r^{\prime} = 24.18\pm0.35$ mag at 28.19 d.

[ccc]{}
0.351 & 8.5 & $43.8 \pm 29.1$\
0.351 & 11.0 & $50.6 \pm 27.4$\
0.363 & 5.0 & $78.2 \pm 23.9$\
0.363 & 7.4 & $90.8 \pm 18.6$\
…& …& …

Radio {#text:data_analysis:radio}
-----

We observed the afterglow with the VLA starting at 0.36 d. We tracked the flux density of the afterglow over multiple epochs spanning 1.2 to 33.5 GHz, using 3C48, 3C286, and 3C147 as flux and bandpass calibrators, and J2005+7752 as the gain calibrator. We carried out the data reduction using the Common Astronomy Software Applications (CASA) package, and list the results of our VLA monitoring campaign in Table \[tab:data:VLA\].
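The quoted ${\ensuremath{E_{\gamma,\rm iso}}}$ can be checked, to within cosmology and $k$-correction differences, from the GBM fluence and the redshift alone. A sketch assuming flat $\Lambda$CDM with $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_m = 0.3$ (not necessarily the parameters adopted in the analysis), and neglecting the $k$-correction into the 1–$10^4$ keV rest-frame band:

```python
import math

def luminosity_distance_cm(z, h0=70.0, omega_m=0.3):
    """Luminosity distance in flat LambdaCDM, in cm.

    d_L = (1 + z) * (c / H0) * Integral_0^z dz' / E(z'), with
    E(z) = sqrt(Omega_m (1+z)^3 + Omega_Lambda), via Simpson's rule.
    """
    c_km_s = 2.998e5
    omega_l = 1.0 - omega_m
    n = 1000                               # even number of intervals
    h = z / n
    total = 0.0
    for i in range(n + 1):
        e = math.sqrt(omega_m * (1.0 + i * h) ** 3 + omega_l)
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        total += w / e
    d_c_mpc = (c_km_s / h0) * total * h / 3.0
    return (1.0 + z) * d_c_mpc * 3.086e24  # 1 Mpc = 3.086e24 cm

# GBM fluence and redshift quoted above; no k-correction applied
z, fluence = 1.17, 1.79e-4                 # erg cm^-2
d_l = luminosity_distance_cm(z)
e_iso = 4.0 * math.pi * d_l ** 2 * fluence / (1.0 + z)
```

This returns $\approx6\times10^{53}$ erg, within $\sim10$-15% of the quoted $(5.76\pm0.05)\times10^{53}$ erg; the residual difference reflects the bandpass correction and the adopted cosmology.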
Multi-wavelength Modeling {#text:modeling}
=========================

Basic Considerations {#text:basic_considerations}
--------------------

We interpret the observed behavior of the afterglow from the radio to the X-rays in the framework of the standard synchrotron model, described by three break frequencies (the self-absorption frequency, ${\ensuremath{\nu_{\rm a}}}$, the characteristic synchrotron frequency, ${\ensuremath{\nu_{\rm m}}}$, and the cooling frequency, ${\ensuremath{\nu_{\rm c}}}$) and an overall flux normalization, allowing for two possibilities for the density profile of the circumburst medium: the ISM profile [$\rho={\rm const}$; @spn98] and the wind profile [$\rho \propto r^{-2}$; @cl00].

### X-rays – location of the cooling frequency and a jet break

We fit the [*Swift*]{} XRT light curve as a power law with two temporal breaks. The first break occurs at $t_{\rm b,1} = 0.37\pm0.14$ d, when the decline rate steepens from $\alpha_{\rm X,1}=-0.51\pm0.12$ to $\alpha_{\rm X, 2} = -1.27\pm0.11$ ($\Delta \alpha_{12} = -0.76\pm0.17$). This steepening does not have a simple explanation in the standard synchrotron model (for instance, the passage of [$\nu_{\rm c}$]{} steepens the light curve by only $\Delta \alpha = -0.25$). It is possible that the X-ray data before $t_{\rm b,1}$ are part of a plateau phase, which is commonly observed among GRB X-ray afterglows [@nkg+06], and we therefore do not consider the X-ray observations before $\approx0.35$ d in the remainder of our analysis.
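The light-curve model described above can be reproduced with a piecewise power law that is continuous across each break; a sketch parameterized by the measured first-break values (illustrative, not the fitting code used in the analysis):

```python
def broken_power_law(t, f1, breaks, slopes):
    """Piecewise power-law light curve F(t), continuous across each break.

    `breaks` holds the break times (days) and `slopes` the temporal
    indices of successive segments (one more slope than breaks); f1 is
    the flux at t = 1 d extrapolated along the first segment.
    """
    f, t0 = f1, 1.0
    for tb, a in zip(breaks, slopes):
        if t < tb:
            return f * (t / t0) ** a
        f *= (tb / t0) ** a        # carry the normalization across the break
        t0 = tb
    return f * (t / t0) ** slopes[-1]

# First measured break: alpha steepens from -0.51 to -1.27 at 0.37 d
lc = lambda t: broken_power_law(t, 1.0, [0.37], [-0.51, -1.27])
```

Additional breaks are handled by extending the `breaks` and `slopes` lists; fitting would then optimize these parameters against the XRT count-rate data.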
At $t_{\rm b,2} = 5.4\pm2.3$ d, the light curve steepens again to $\alpha_{\rm X,3}=-2.2\pm0.3$ ($\Delta\alpha_{23}=-0.9\pm0.3$), suggestive of a jet break. Since ${\ensuremath{\nu_{\rm m}}}\propto t^{-1.5}$ is expected to be below the X-ray band at this time and the post-break decay rate at $\nu > {\ensuremath{\nu_{\rm m}}}$ is $t^{-p}$, we determine that the energy index of non-thermal electrons, $p\approx2.2$ [@sph99]. For this value of $p$, we expect a spectral slope of $\beta_{\rm X} \approx -1.1$ or $\beta_{\rm X} \approx -0.6$ for ${\ensuremath{\nu_{\rm c}}}< {\ensuremath{\nu_{\rm X}}}$ and ${\ensuremath{\nu_{\rm c}}}> {\ensuremath{\nu_{\rm X}}}$, respectively. The measured X-ray spectral index of $\beta_{\rm X} = -1.07\pm0.04$ requires the former, whereupon we expect $\alpha_{\rm X} = (2-3p)/4 \approx -1.2$. This is consistent with the measured value of $\alpha_{\rm X,2} = -1.27\pm0.11$. Thus, we conclude that the X-ray light curve and spectrum are both consistent with $p\approx2.2$ and ${\ensuremath{\nu_{\rm c}}}< {\ensuremath{\nu_{\rm X}}}$. We note that in this regime the X-ray light curve does not distinguish between the ISM and wind models. ### Optical/NIR – Extinction and Host Flux {#text:basic_considerations:opt} At the time of the Gemini $z^{\prime}$- and $r^{\prime}$-band observations (0.24 d), the X-ray to $z^{\prime}$-band spectral index is flat, $\beta_{\rm ox}=-0.11\pm0.06$, while the $z^{\prime}$-$r^{\prime}$ spectral index is extremely steep, $\beta_{\rm zr} = -5.4\pm1.1$. Given the moderate redshift of the burst, the only explanation for these observations is a large amount of extinction along the sight-line through the GRB host galaxy, suppressing the optical flux. 
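The closure relations invoked in this argument are easily tabulated: for slow cooling and $\nu > {\ensuremath{\nu_{\rm c}}}$, $\beta = -p/2$ and $\alpha = (2-3p)/4$ (identical for the ISM and wind profiles in this regime), while after a jet break $\alpha = -p$. A sketch evaluating them at $p = 2.2$:

```python
def closure_above_nu_c(p):
    """Slow-cooling indices for nu > nu_c, with f_nu ~ nu^beta t^alpha.

    beta = -p / 2; alpha = (2 - 3p) / 4 holds for both the ISM and
    wind density profiles in this spectral regime.
    """
    return -p / 2.0, (2.0 - 3.0 * p) / 4.0

def alpha_post_jet_break(p):
    """Temporal index after a jet break for nu > nu_m: f_nu ~ t^-p."""
    return -p

beta, alpha = closure_above_nu_c(2.2)     # (-1.1, -1.15)
alpha_jet = alpha_post_jet_break(2.2)     # -2.2
```

These give $\beta = -1.1$, $\alpha = -1.15$, and $\alpha_{\rm jet} = -2.2$, consistent with the measured $-1.07\pm0.04$, $-1.27\pm0.11$, and $-2.2\pm0.3$, respectively.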
On the other hand, the spectral index between the DCT $r^{\prime}$- and $g$-band observations at $\approx1$ d is $\beta_{\rm gr}=-1.9\pm0.6$, significantly shallower than $\beta_{\rm zr}$, while the $r^{\prime}$-band light curve before $\approx 1$ d declines as $\alpha_{\rm r}=-0.33\pm0.02$, shallower than expected in the standard afterglow model. Together, these observations indicate a significant contribution to the afterglow photometry from the host galaxy. This is confirmed by our Keck $g$- and $R$-band observations at $\approx28$ d, which yield flux densities similar to the DCT observations at $\approx1$ d. We find that modeling the $r^{\prime}$-band light curve as a sum of a power-law and a constant yields $\alpha_{\rm r} = -1.09\pm0.45$, with the additive constant $f_{\nu,\rm r} = 0.75\pm0.10$ $\mu$Jy. We note that whereas the light curve decay rate at ${\ensuremath{\nu_{\rm m}}}< \nu < {\ensuremath{\nu_{\rm c}}}$ is expected to provide diagnostic power for the circumburst density profile, the paucity of optical data and the large uncertainty in the optical decay rate for this event preclude such a discrimination. In the detailed modeling (Section \[text:FS\]) we fit for the host galaxy flux density in all optical/NIR filters, together with the optical extinction along the line of sight through the host.

### Radio – Multiple Components

The radio spectral energy distribution (SED) at 4.06 d exhibits a clear peak at $\approx8.4$ GHz with a flux density of $\approx1.2$ mJy. At this time, the measured X-ray flux density is $f_{\nu,\rm X} = (6.3\pm1.9)\times10^{-4}$ mJy. Fitting the radio data with a broken power-law and extrapolating to the X-rays, we find that the expected X-ray flux density is at least two orders of magnitude lower than observed (Figure \[fig:160509A\_radioxrtsed\]). This suggests that the radio and X-ray emission at 4.06 d arise from separate processes.
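This flux deficit is simple to quantify. The sketch below extrapolates the 4.06-d radio SED above the peak to an assumed fiducial X-ray frequency (1 keV, i.e., $\approx2.4\times10^{17}$ Hz) using the measured spectral index:

```python
nu_peak, f_peak = 8.4e9, 1.2     # radio SED peak at 4.06 d: Hz, mJy
beta_radio = -0.79               # measured spectral index above the peak
nu_x = 2.418e17                  # 1 keV in Hz (assumed fiducial X-ray frequency)

# power-law extrapolation of the radio SED to the X-ray band
f_x_predicted = f_peak * (nu_x / nu_peak) ** beta_radio   # ~1.5e-6 mJy
f_x_observed = 6.3e-4                                     # measured, mJy

# the observed flux exceeds the extrapolation by a factor of several hundred,
# i.e., "at least two orders of magnitude"
deficit = f_x_observed / f_x_predicted
```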
Further, we note that the radio spectral index above 10 GHz at 10 d is $\beta_{\rm radio}(10~\rm d)= 0.1\pm0.2$, in contrast to the spectral index above the peak at 4.06 d, $\beta_{\rm radio}(4.06~\rm d) = -0.79\pm0.02$. Since such a hardening of the spectral index is not expected in the standard synchrotron model, we propose that the radio peak at $4.06$ d has faded to reveal a fainter underlying component at 10 d. We show this underlying emission to be consistent with the FS in Section \[text:FS\].

To summarize, the X-ray spectral index and light curve are consistent with a forward shock origin for the X-ray emission with $p\approx2.2$ and ${\ensuremath{\nu_{\rm c}}}< {\ensuremath{\nu_{\rm X}}}$. The radio spectrum at 4.06 d cannot be extrapolated to match the observed X-ray flux at this time, suggesting that the radio and X-ray emission arise from separate processes. The radio peak at 4.06 d fades to reveal an underlying power-law continuum, which we ascribe to the FS. Finally, there is insufficient information in the afterglow observations to constrain the circumburst density profile.
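The X-ray consistency check summarized here rests on the standard slow-cooling closure relations; a minimal sketch (convention $f_{\nu}\propto t^{\alpha}\nu^{\beta}$, spherical ISM forward shock):

```python
def closure_indices(p, above_cooling):
    """Return (beta, alpha) for slow-cooling synchrotron emission.

    above_cooling=True  -> nu > nu_c:        beta = -p/2,     alpha = (2 - 3p)/4
    above_cooling=False -> nu_m < nu < nu_c: beta = -(p-1)/2, alpha = 3(1 - p)/4
    """
    if above_cooling:
        return -p / 2.0, (2.0 - 3.0 * p) / 4.0
    return -(p - 1.0) / 2.0, 3.0 * (1.0 - p) / 4.0

# p = 2.2 gives beta = -1.1 and alpha = -1.15, matching the measured
# beta_X = -1.07 +/- 0.04 and alpha_X,2 = -1.27 +/- 0.11
beta_x, alpha_x = closure_indices(2.2, above_cooling=True)
```

Above $\nu_{\rm c}$ the same indices hold for a wind medium, which is why the X-ray data alone cannot discriminate between the two density profiles.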
![image](epoch1_model_radio){width="31.00000%"} ![image](epoch2_model_radio){width="31.00000%"} ![image](epoch3_model_radio){width="31.00000%"}

![image](epoch4_model_radio){width="31.00000%"} ![image](epoch5_model_radio){width="31.00000%"} ![image](epoch6_model_radio){width="31.00000%"}

![image](epoch7_model_radio){width="31.00000%"} ![image](epoch8_model_radio){width="31.00000%"} ![image](epoch9_model_radio){width="31.00000%"}

Table \[tab:params\]: Best-fit synchrotron model parameters.

Reverse shock parameters (at 1.13 d):

| Parameter | Value |
|---|---|
| ${\ensuremath{\nu_{\rm a,RS}}}$ | $2.5\times10^{10}$ Hz |
| ${\ensuremath{\nu_{\rm m,RS}}}$ | $1.5\times10^{10}$ Hz |
| ${\ensuremath{\nu_{\rm c,RS}}}$ | $4\times10^{11}$ Hz |
| ${\ensuremath{f_{\nu, \rm m,RS}}}$ | $9$ mJy |

Forward shock parameters, ISM model (MCMC):

| Parameter | Value |
|---|---|
| $p$ | $2.39\pm0.03$ |
| ${\ensuremath{\epsilon_{\rm e}}}$ | $0.84^{+0.06}_{-0.08}$ |
| ${\ensuremath{\epsilon_{\rm B}}}$ | $0.11^{+0.07}_{-0.05}$ |
| ${\ensuremath{n_{0}}}$ | $(8.6\pm2.2)\times10^{-4}$ ${\rm cm}^{-3}$ |
| ${\ensuremath{E_{\rm K,iso}}}$ | $\left(18.7^{+5.4}_{-2.6}\right)\times10^{52}$ erg |
| ${\ensuremath{A_{\rm V}}}$ | $3.35^{+0.08}_{-0.07}$ mag |
| ${\ensuremath{t_{\rm jet}}}$ | $5.7^{+0.6}_{-0.5}$ d |
| $f_{\nu, \rm host, \it g}$ | $0.29~\mu$Jy |
| $f_{\nu, \rm host, \it r}$ | $0.88~\mu$Jy |
| $f_{\nu, \rm host, \it z}$ | $9.0~\mu$Jy |
| $f_{\nu, \rm host, \it J}$ | $11.9~\mu$Jy |
| $f_{\nu, \rm host, \it K}$ | $28.8~\mu$Jy |
| ${\ensuremath{\theta_{\rm jet}}}$ | $3.89^{+0.14}_{-0.16}\degr$ |
| ${\ensuremath{E_{\rm K}}}$ | $\left(4.4^{+1.1}_{-0.7}\right)\times10^{50}$ erg |
| ${\ensuremath{E_{\gamma}}}$ | $(1.3\pm0.1)\times10^{51}$ erg |
| ${\ensuremath{\nu_{\rm a,FS}}}$ | $1.2\times10^{7}$ Hz |
| ${\ensuremath{\nu_{\rm m,FS}}}$ | $8.7\times10^{14}$ Hz |
| ${\ensuremath{\nu_{\rm c,FS}}}$ | $3.2\times10^{15}$ Hz |
| ${\ensuremath{f_{\nu, \rm m,FS}}}$ | $1.6$ mJy |

Forward shock parameters, wind model (fiducial):

| Parameter | Value |
|---|---|
| $p$ | $2.11$ |
| ${\ensuremath{\epsilon_{\rm e}}}$ | $0.60$ |
| ${\ensuremath{\epsilon_{\rm B}}}$ | $0.40$ |
| ${\ensuremath{A_{*}}}$ | $5.3\times10^{-3}$ |
| ${\ensuremath{E_{\rm K,iso}}}$ | $3.0\times10^{53}$ erg |
| ${\ensuremath{A_{\rm V}}}$ | $4.1$ mag |
| ${\ensuremath{t_{\rm jet}}}$ | $5.5$ d |
| $f_{\nu, \rm host, \it g}$ | $0.26~\mu$Jy |
| $f_{\nu, \rm host, \it r}$ | $0.86~\mu$Jy |
| $f_{\nu, \rm host, \it z}$ | $7.2~\mu$Jy |
| $f_{\nu, \rm host, \it J}$ | $15.7~\mu$Jy |
| $f_{\nu, \rm host, \it K}$ | $66.4~\mu$Jy |
| ${\ensuremath{\theta_{\rm jet}}}$ | $1.6\degr$ |
| ${\ensuremath{E_{\rm K}}}$ | $1.3\times10^{50}$ erg |
| ${\ensuremath{E_{\gamma}}}$ | $(2.2\pm0.2)\times10^{50}$ erg |
| ${\ensuremath{\nu_{\rm a,FS}}}$ | $1.2\times10^7$ Hz |
| ${\ensuremath{\nu_{\rm m,FS}}}$ | $1.2\times10^{14}$ Hz |
| ${\ensuremath{\nu_{\rm c,FS}}}$ | $1.1\times10^{16}$ Hz |
| ${\ensuremath{f_{\nu, \rm m,FS}}}$ | $1.6$ mJy |

The Reverse Shock {#text:RS}
-----------------

We construct a model SED for the radio to X-ray emission at 1.13 days comprising two emission components: (1) a FS (Section \[text:FS\]), which peaks between the radio and optical bands, fits the NIR to X-ray SED, and provides negligible contribution in the radio band, and (2) a RS (this section), which fits the radio SED and provides negligible contribution at higher frequencies. The synchrotron parameters of the RS are listed in Table \[tab:params\]. We find that this combined RS plus FS model completely describes the observed SED at 1.13 days (Figure \[fig:160509A\_radioxrtsed\]). We evolve both emission components to the epochs of our radio observations. The evolution of the RS spectrum depends on whether the shock is Newtonian or relativistic in the frame of the unshocked ejecta, and is determined by the evolution of the ejecta Lorentz factor with radius, quantified by the parameter $g$: $\Gamma\propto R^{-g}\propto t^{-g/(1+2g)}$.
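The deceleration exponent implied by this parametrization is a one-line evaluation (the association of $g\approx2$ with a Newtonian and $g\approx3$ with a relativistic RS follows the discussion below):

```python
def gamma_exponent(g):
    """Temporal exponent of the ejecta Lorentz factor: Gamma ∝ t^(-g/(1+2g))."""
    return -g / (1.0 + 2.0 * g)

# Newtonian RS, g ~ 2:     Gamma ∝ t^(-2/5)
# relativistic RS, g ~ 3:  Gamma ∝ t^(-3/7)
```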
This was first measured observationally for GRB 130427A, where a value of $g\approx5$ was inferred for a Newtonian RS [@lbz+13]. We find that evolving the RS SED for [GRB 160509A]{} with $g \approx 2$ matches the observed radio spectrum well from 0.36 d to 10 d. This value of $g$ closely matches the predicted value of $g\approx2.2$ from numerical calculations of the RS evolution for a Newtonian RS [@ks00]. A value of $g\approx3$ expected for a relativistic RS is ruled out by the observed evolution of the radio SED, providing the second direct measurement of $g$, and the first observational confirmation of the numerical theory. The radio peak ascribed to the RS emission fades faster than expected from the RS model after $\approx5$ d. We note that this coincides with the time of the jet break in the X-ray light curve (Section \[text:basic\_considerations\]). The standard FS jet break is a combination of geometrical effects that take place when the FS Lorentz factor drops to $\Gamma\approx \theta_{\rm jet}^{-1}$: the observer sees the edge of the jet and the swept-up material begins to expand sideways [@rho99; @dcrgl12; @gp12]. In the case of the RS, the ejecta internal energy drops rapidly after the RS crossing and the local sound velocity in the ejecta is expected to be sub-relativistic. Thus, we expect the lateral expansion to be fairly slow, resulting in no change in the dynamics or the scaling of the RS break frequencies across the jet break. The geometric effect is expected to dominate, resulting in a change in the RS peak flux scaling by $\Gamma_{\rm RS}^2$ at ${\ensuremath{t_{\rm jet}}}$. Setting the RS jet break time to 5.2 d as derived from a preliminary fit to the FS (Section \[text:FS\]), we find that the resultant evolution of the RS SED fits all subsequent radio observations well (Figure \[fig:160509A\_radioseds\]). Finally, we note that ${\ensuremath{\nu_{\rm c,RS}}}$ passes through the NIR at $\approx 3\times10^{-2}$ d in this model.
After this time, we do not expect observable RS emission in the optical/NIR. This is consistent with the earliest available $R$-band observation [$R<19.5$ mag at $6.5\times10^{-2}$ d; @gcn19409], and with all subsequent optical/NIR data.

The Forward Shock {#text:FS}
-----------------

To model the FS emission we employ the framework of synchrotron radiation from relativistic shocks, including the effects of inverse Compton cooling [@se01; @gs02]. The parameters of the fit are the kinetic energy ([$E_{\rm K,iso}$]{}), the density ([$n_{0}$]{}), the electron energy index ($p$), and the fraction of the shock energy given to electrons ([$\epsilon_{\rm e}$]{}) and magnetic fields ([$\epsilon_{\rm B}$]{}). We use the Small Magellanic Cloud (SMC) extinction curve to model the extinction ([$A_{\rm V}$]{}) in the GRB host galaxy [@pei92], and include the flux density of the host in the $grzJK$ bands ($f_{\nu, \rm host}$), together with the jet break time (${\ensuremath{t_{\rm jet}}}$), as additional free parameters. The afterglow observations in this case do not allow us to directly determine the circumburst density profile, and both ISM and wind-like environments have been inferred for GRBs in the past [e.g. @pk02; @yhsf03; @cfh+10; @cfh+11; @skb+11]. However, we find that consistency arguments between the FS and RS SEDs at the deceleration time provide meaningful results in the ISM case, but not in the wind case. We therefore focus on the ISM model in the remainder of the article, and discuss the wind model briefly in Section \[text:wind\]. We fit all available photometry with a combination of the RS and FS contributions. A least-squares analysis provides the starting point, from which we find a FS jet break time of ${\ensuremath{t_{\rm jet}}}\approx5.2$ d. We fix the RS jet break time to this value.
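For orientation, the dependence of the FS spectrum on these fit parameters can be sketched with the rough ISM-medium normalizations of Sari, Piran & Narayan (1998); the actual fits use the more detailed Granot & Sari (2002) spectra, and redshift and inverse-Compton corrections are omitted here, so the prefactors are order-of-magnitude only. The luminosity distance input `dL28` is an assumed fiducial value.

```python
def fs_breaks_ism(t_d, E52, n0, eps_e, eps_B, dL28=1.0):
    """Approximate FS break frequencies and peak flux for an ISM medium.

    t_d: observer time (days); E52: E_K,iso in units of 1e52 erg;
    n0: circumburst density (cm^-3); dL28: luminosity distance in 1e28 cm.
    Order-of-magnitude normalizations after Sari, Piran & Narayan (1998);
    redshift factors and inverse-Compton cooling are neglected.
    """
    nu_m = 5.7e14 * eps_B ** 0.5 * eps_e ** 2 * E52 ** 0.5 * t_d ** -1.5     # Hz
    nu_c = 2.7e12 * eps_B ** -1.5 * E52 ** -0.5 * n0 ** -1.0 * t_d ** -0.5   # Hz
    f_max = 1.1e5 * eps_B ** 0.5 * E52 * n0 ** 0.5 * dL28 ** -2.0            # microJy
    return nu_m, nu_c, f_max

# best-fit ISM parameters from the MCMC analysis, evaluated at t = 1 d
nu_m, nu_c, f_max = fs_breaks_ism(1.0, 18.7, 8.6e-4, 0.84, 0.11)
```

With these inputs $\nu_{\rm m}\sim6\times10^{14}$ Hz, within a factor of $\sim$2 of the tabulated $8.7\times10^{14}$ Hz, while $\nu_{\rm c}$ comes out high by a factor of several, as expected when inverse-Compton cooling ($Y\approx2.4$, which suppresses $\nu_{\rm c}$ by $(1+Y)^{2}\approx12$) is neglected.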
To efficiently sample parameter space and to uncover correlations between the parameters, we then carry out a Markov Chain Monte Carlo (MCMC) analysis using <span style="font-variant:small-caps;">emcee</span> [@fhlg13]. Our analysis methods are described in detail by [@lbt+14]. The resultant marginalized posterior density functions are summarized in Table \[tab:params\] and Figure \[fig:hists\]. Correlation functions between the four physical parameters are plotted in Figure \[fig:corrplots\]. In our best-fit model ($\chi^2 = 16.4$ for 12 degrees of freedom), the FS transitions from fast cooling to slow cooling at $\approx0.3$ d, while the Compton Y-parameter is $\approx2.4$, indicating that inverse-Compton cooling is moderately significant.

![image](160509A_ISM_hist_p){width="31.00000%"} ![image](160509A_ISM_hist_ee){width="31.00000%"} ![image](160509A_ISM_hist_eb){width="31.00000%"}

![image](160509A_ISM_hist_n0){width="31.00000%"} ![image](160509A_ISM_hist_E52){width="31.00000%"} ![image](160509A_ISM_hist_AV){width="31.00000%"}

![image](160509A_ISM_hist_tjet){width="31.00000%"} ![image](160509A_ISM_hist_thetajet){width="31.00000%"} ![image](160509A_ISM_hist_Ecor){width="31.00000%"}

![image](160509A_ISM_corrplot_n0_E52){width="31.00000%"} ![image](160509A_ISM_corrplot_ee_E52){width="31.00000%"} ![image](160509A_ISM_corrplot_ee_n0){width="31.00000%"}

![image](160509A_ISM_corrplot_eb_E52){width="31.00000%"} ![image](160509A_ISM_corrplot_eb_n0){width="31.00000%"}
![image](160509A_ISM_corrplot_eb_ee){width="31.00000%"}

Discussion {#text:discussion}
==========

Self-consistency of RS and FS models {#text:consistency}
------------------------------------

In the standard synchrotron model, the break frequencies of the RS and FS spectra are expected to be related at [$t_{\rm dec}$]{}: ${\ensuremath{\nu_{\rm c,RS}}}/{\ensuremath{\nu_{\rm c,FS}}}\sim {\ensuremath{R_{\rm B}}}^{-3/2}$, ${\ensuremath{\nu_{\rm m,RS}}}/{\ensuremath{\nu_{\rm m,FS}}}\sim {\ensuremath{R_{\rm B}}}^{1/2}\Gamma_0^{-2}$, and ${\ensuremath{f_{\nu, \rm m,RS}}}/{\ensuremath{f_{\nu, \rm m,FS}}}\sim\Gamma_0{\ensuremath{R_{\rm B}}}^{1/2}$, where $\Gamma_0$ is the bulk Lorentz factor at [$t_{\rm dec}$]{}, and ${\ensuremath{R_{\rm B}}}\equiv\epsilon_{\rm B, RS}/\epsilon_{\rm B, FS}$ is the ejecta magnetization parameter [@gkg+08; @hk13]. The three relations above then provide three constraints that can be solved exactly for [$t_{\rm dec}$]{}, $\Gamma_0$, and [$R_{\rm B}$]{}. For our best-fit FS+RS model, we find ${\ensuremath{t_{\rm dec}}}\approx 460$ s $\approx T_{90}$, $\Gamma_0\approx 330$, and ${\ensuremath{R_{\rm B}}}\approx8$. We note that the derived values of ${\ensuremath{E_{\rm K,iso}}}$, ${\ensuremath{n_{0}}}$, ${\ensuremath{\theta_{\rm jet}}}$, and $\Gamma_0$ can be used to derive a jet break time for the RS using the relation ${\ensuremath{t_{\rm jet}}}= 110(1+z)(E_{\rm K,iso,52}/{\ensuremath{n_{0}}})^{1/3}{\ensuremath{\theta_{\rm jet}}}^{5/2}\Gamma_0^{-1/6}$ d [@glz+13]. Using the best-fit FS model, we find $t_{\rm jet, RS} \approx 3.4$ d, which is slightly earlier than the FS jet break time, as expected.
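Inverting these three ratio relations is straightforward algebra. The sketch below is a round-trip check using the values of $\Gamma_0$ and $R_{\rm B}$ quoted in the text; it is not a re-derivation from the fitted spectra, which also requires scaling the measured break frequencies back to $t_{\rm dec}$.

```python
def invert_rs_fs_ratios(rc, rm, rf):
    """Invert the RS/FS ratio relations at t_dec:

        rc = nu_c,RS / nu_c,FS ~ R_B^(-3/2)
        rm = nu_m,RS / nu_m,FS ~ R_B^(1/2) * Gamma0^(-2)
        rf = f_m,RS  / f_m,FS  ~ Gamma0 * R_B^(1/2)

    Returns (R_B, Gamma0 from the flux ratio, Gamma0 from the nu_m ratio);
    agreement between the two Gamma0 estimates is a consistency check.
    """
    R_B = rc ** (-2.0 / 3.0)
    gamma0_from_f = rf / R_B ** 0.5
    gamma0_from_m = (R_B ** 0.5 / rm) ** 0.5
    return R_B, gamma0_from_f, gamma0_from_m

# forward ratios for Gamma0 = 330, R_B = 8 (the values inferred in the text)
gamma0, R_B = 330.0, 8.0
rc = R_B ** -1.5
rm = R_B ** 0.5 / gamma0 ** 2
rf = gamma0 * R_B ** 0.5
```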
The difference between this value and our assumed value of $\approx5.2$ d in Section \[text:RS\] only marginally affects the fit at one of the epochs (4.06 d) in Figure \[fig:160509A\_radioseds\]. A fully consistent solution requires bootstrapping the FS and RS parameters together, and we defer such an analysis to future work.

Low-density Environments and the RS
-----------------------------------

In our previous work on GRB 130427A, we suggested that a slow-cooling RS is more likely to produce detectable radio emission [@lbz+13]. Since ${\ensuremath{\nu_{\rm c,RS}}}/{\ensuremath{\nu_{\rm c,FS}}}\propto{\ensuremath{n_{0}}}^{-4/3}$ at [$t_{\rm dec}$]{}, a low-density environment may be a requisite factor for observing long-lasting RS emission [@kob00; @rz16]. We find a low circumburst density in the context of long-lasting reverse shock emission for [GRB 160509A]{}, lending credence to this hypothesis. However, we also note that additional considerations such as high ${\ensuremath{f_{\nu, \rm m,RS}}}$ or late deceleration times may also contribute to stronger RS signatures; therefore, the detectability of a RS remains a complex question [@kmk+15].

Wind Model {#text:wind}
----------

Since the available afterglow observations do not distinguish strongly between a wind and ISM model, we also provide the parameters for a fiducial wind model (Table \[tab:params\]). For this model, the spectrum transitions from fast cooling to slow cooling at 0.17 d, and the spectral break frequencies at 1 d are within a factor of $\approx 3$ of the values derived for the ISM model in Section \[text:FS\]. We note that the value of $g\approx2$ for the RS remains plausible in the wind environment as well and, therefore, the RS parameters derived in Section \[text:RS\] remain reasonable. Combining the RS and FS parameters for the wind model, we find ${\ensuremath{t_{\rm dec}}}\approx 170$ s, $\Gamma_0\approx34$, and ${\ensuremath{R_{\rm B}}}\approx0.05$.
The low value of $\Gamma_0$, the low inferred magnetization, and finding ${\ensuremath{t_{\rm dec}}}\lesssim T_{90}$, together argue against the wind model [@feh93; @wl95].

Neutral Hydrogen Column Density and Extinction
----------------------------------------------

A correlation between the neutral hydrogen column derived from X-ray absorption and the line-of-sight extinction, $N_{\rm H} \approx 2\times10^{21}{\ensuremath{{\rm cm}^{-2}}}({\ensuremath{A_{\rm V}}}/{\rm mag})$, has been observed for the Milky Way [@ps95; @go09]. However, the majority of GRB afterglows exhibit lower values of ${\ensuremath{A_{\rm V}}}$ than would be expected from this correlation [e.g., @gw01; @sfa+04; @zwm+10; @zbm+13a]. We note that the extinction of GRB afterglows by their host galaxy is often well fit with an SMC extinction curve [as we also do here; @jcg+15]. We therefore derive a corresponding correlation for the SMC using the relation between $N_{\rm H}$ and $E(B-V)$ from [@wxw12] and the mean $R_{\rm V}=2.74$ for the SMC bar from [@gcm+03], obtaining $\log\left(N_{\rm H}/{\ensuremath{{\rm cm}^{-2}}}\right) = 21.95\pm0.36 + \log\left({\ensuremath{A_{\rm V}}}/{\rm mag}\right)$. For $N_{\rm H} \approx 1.5\times10^{22}$ [${\rm cm}^{-2}$]{}, this gives $\log({\ensuremath{A_{\rm V}}}/{\rm mag}) = 0.23\pm0.36$, or ${\ensuremath{A_{\rm V}}}=1.7^{+2.2}_{-1.0}$ mag, while the MW correlation gives $A_{\rm V}=(7.6\pm0.7)$ mag. Our observed value of ${\ensuremath{A_{\rm V}}}= 3.35^{+0.08}_{-0.07}$ mag is, therefore, intermediate between the values expected from the two relations.

Conclusions {#text:conclusions}
===========

We present a detailed multi-wavelength study of the [*Fermi*]{}-LAT GRB 160509A at $z = 1.17$. Our VLA observations spanning $0.36$–$20$ days after the burst clearly reveal the presence of multiple spectral components in the radio afterglow.
We identify the two spectral components as arising from the forward and reverse shock, and from a joint analysis of the two emission components, we conclude:

- The reverse shock dominates in the radio before $\approx 10$ d, and the forward shock dominates in the X-ray and optical/NIR.

- The evolution of the reverse shock spectrum requires a Lorentz factor index, $g\approx2$, consistent with theoretical predictions for a Newtonian RS. We derive a deceleration time of $460$ s, a Lorentz factor of $\Gamma_0\approx330$ at the deceleration time, and an ejecta magnetization of ${\ensuremath{R_{\rm B}}}\approx8$.

- The afterglow observations do not strongly constrain the density profile of the circumburst environment. However, the RS-FS consistency relations yield a very low Lorentz factor in the wind environment.

- We derive a circumburst density of ${\ensuremath{n_{0}}}\approx10^{-3}$ ${\rm cm}^{-3}$, supporting the hypothesis that a low density environment may be a requisite factor in producing a slow-cooling and long-lasting RS.

This work follows on our previous successful identification and characterization of a reverse shock in GRB 130427A, and highlights the importance of rapid-response radio observations in the study of the properties and dynamics of GRB ejecta.

T.L. is a Jansky Fellow of the National Radio Astronomy Observatory. E.B. acknowledges support from NSF grant AST-1411763 and NASA ADA grant NNX15AE50G. W.F. is supported by NASA through Einstein Postdoctoral Fellowship grant number PF4-150121. A.V.F.’s group at UC Berkeley has received generous financial assistance from Gary and Cynthia Bengier, the Richard and Rhoda Goldman Fund, the Christopher R. Redlich Fund, the TABASGO Foundation, NSF grant AST-1211916, and NASA/[*Swift*]{} grant NNX12AD73G. This work was supported in part by the NSF under grant No. PHYS-1066293; A.V.F. thanks the Aspen Center for Physics for its hospitality during the black holes workshop in June 2016.
This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. Some of the data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration, and was made possible by the generous financial support of the W.M. Keck Foundation. VLA observations were taken as part of our VLA Large Program 15A-235 (PI: E. Berger). The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.

Atwood, W. B., et al. 2009, , 697, 1071
Burrows, D. N., et al. 2005, , 120, 165
Cenko, S. B., et al. 2011, , 732, 29
Cenko, S. B., et al. 2010, , 711, 641
, S. B., Troja, E., & Tegler, S. 2016, GRB Coordinates Network, 19416
Chevalier, R. A., & Li, Z.-Y. 2000, , 536, 195
De Colle, F., Ramirez-Ruiz, E., Granot, J., & Lopez-Camara, D. 2012, , 751, 57
Diolaiti, E., Bendinelli, O., Bonaccini, D., Close, L., Currie, D., & Parmeggiani, G. 2000, , 147, 335
, P. A. 2016, GRB Coordinates Network, 19406
Evans, P. A., et al. 2009, , 397, 1177
Evans, P. A., et al. 2007, , 469, 379
Fenimore, E. E., Epstein, R. I., & Ho, C. 1993, , 97, 59
Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, , 125, 306
Galama, T. J., & Wijers, R. A. M. J. 2001, , 549, L209
Gao, H., Lei, W.-H., Zou, Y.-C., Wu, X.-F., & Zhang, B. 2013, New Astronomy Reviews, 57, 141
Gomboc, A., et al. 2008, , 687, 443
Gordon, K. D., Clayton, G. C., Misselt, K. A., Landolt, A. U., & Wolff, M. J. 2003, , 594, 279
Granot, J., & Piran, T. 2012, , 421, 570
Granot, J., & Sari, R. 2002, , 568, 820
Güver, T., & Özel, F. 2009, , 400, 2050
Harrison, R., & Kobayashi, S. 2013, , 772, 101
, L., de Ugarte Postigo, A., & Thoene, C. 2016, GRB Coordinates Network, 19409
Japelj, J., et al. 2015, , 579, A74
Japelj, J., et al. 2014, , 785, 84
, J. A. 2016, GRB Coordinates Network, 19407
, J. A., Roegiers, T. G. R., Osborne, J. P., & Page, K. L., et al. 2016, GRB Coordinates Network, 19408
Kobayashi, S. 2000, , 545, 807
Kobayashi, S., & Sari, R. 2000, , 542, 819
Kopac, D., et al. 2015, , 806, 179
Kumar, P., & Zhang, B. 2015, , 561, 1
Laskar, T., et al. 2014, , 781, 1
Laskar, T., et al. 2013, , 776, 119
, A. J., Tanvir, N. R., Cenko, S. B., & Perley, D. 2016, GRB Coordinates Network, 19410
, F., Bissaldi, E., Bregeon, J., McEnery, J., Ohno, M., & Zhu, S. 2016, GRB Coordinates Network, 19403
MacFadyen, A. I., & Woosley, S. E. 1999, , 524, 262
Meszaros, P., & Rees, M. J. 1993, , 405, 278
Mundell, C. G., et al. 2007, , 660, 489
Nousek, J. A., et al. 2006, , 642, 389
Oke, J. B., et al. 1995, , 107, 375
Oke, J. B., & Gunn, J. E. 1983, , 266, 713
Panaitescu, A., & Kumar, P. 2002, , 571, 779
Pei, Y. C. 1992, , 395, 130
Perley, D. A., et al. 2014, , 781, 37
Piran, T. 2005, Rev. Mod. Phys., 76, 1143
Predehl, P., & Schmitt, J. H. M. M. 1995, , 293
Resmi, L., & Zhang, B. 2016, ArXiv e-prints
Rhoads, J. E. 1999, , 525, 737
, O. J., Fitzpatrick, G., & Veres, P. 2016, GRB Coordinates Network, 19411
Sari, R., & Esin, A. A. 2001, , 548, 787
Sari, R., & Piran, T. 1999, , 520, 641
Sari, R., Piran, T., & Halpern, J. P. 1999, , 519, L17
Sari, R., Piran, T., & Narayan, R. 1998, , 497, L17
Schulze, S., et al. 2011, , 526, A23
Stratta, G., Fiore, F., Antonelli, L. A., Piro, L., & De Pasquale, M. 2004, , 608, 846
, N. R., et al. 2016, GRB Coordinates Network, 19419
Welty, D. E., Xue, R., & Wong, T. 2012, , 745, 173
Willingale, R., Starling, R. L. C., Beardmore, A. P., Tanvir, N. R., & O’Brien, P. T. 2013, , 431, 394
Woods, E., & Loeb, A. 1995, , 453, 583
Yost, S. A., Harrison, F. A., Sari, R., & Frail, D. A. 2003, , 597, 459
Zafar, T., Watson, D. J., Malesani, D., Vreeswijk, P. M., Fynbo, J. P. U., Hjorth, J., Levan, A. J., & Michalowski, M. J. 2010, , 515, A94
Zauderer, B. A., et al. 2013, , 767, 161

[^1]: <http://www.swift.ac.uk/xrt_positions/00020607/>

[^2]: <http://www.swift.ac.uk/xrt_spectra/00020607/>

[^3]: We use the convention $f_{\nu}\propto t^{\alpha}\nu^{\beta}$.

[^4]: In the absence of reported uncertainties, we assume an uncertainty of 0.3 mag, corresponding to a $3\sigma$ detection.
---
abstract: 'The big bounce singularity of a simple 5D cosmological model is studied. Contrary to the standard big bang space-time singularity, this big bounce singularity is found to be an event horizon at which the scale factor and the mass density of the universe are finite, while the pressure undergoes a sudden transition from negative infinity to positive infinity. By using a coordinate transformation it is also shown that the universe undergoes a deflationary contraction before the bounce. Measured in proper-time, the universe may have existed for an infinitely long time.'
author:
- Lixin Xu
- Hongya Liu
- Beili Wang
title: 'Big Bounce Singularity of a Simple Five-Dimensional Cosmological Model '
---

[^1]

The inflationary cosmology can resolve three important problems of the standard big bang models: the galaxy formation problem, the horizon problem, and the flatness problem. However, there are other deep questions of cosmology which inflation does not resolve [@1]: What occurred at the initial singularity? Does time exist before the big bang? These issues have been popular in cosmology for a long time. Tolman [@2] first discussed an oscillating cosmological model within the framework of general relativity. He pointed out that the main difficulty of such an oscillating model is that the universe has to pass through a cosmological singularity on each bounce, and during each cycle, enormous inhomogeneities would undoubtedly be generated. This is the so-called entropy problem of the oscillating models. Recently, an ekpyrotic cosmological model was presented by Khoury *et al.* within the framework of the brane world scenario [@3; @4]. According to this model, our big bang universe emerges from a collision between two branes. When the two branes collide inelastically and bounce off one another, brane kinetic energy is partially converted into matter and radiation and our universe begins to expand.
In the ekpyrotic model the universe undergoes a single transition from contraction to expansion [@3; @4]. Drawing on this idea, Steinhardt and Turok presented a cyclic model in which the universe undergoes an endless sequence of cosmic cycles, each of which begins with a “big bang” and ends with a “big crunch” [@1]. It was argued that in both the ekpyrotic and cyclic model all major cosmological problems may be resolved without any use of inflation [@1; @3; @4]. In this letter, we will discuss the big bounce cosmological model presented recently by Liu and Wesson [@5]. This model differs from Tolman’s oscillating model as well as the cyclic model in that the universe in the big bounce model undergoes a single transition from contraction to expansion. It also differs from the ekpyrotic model in that, before the bounce, the big bounce universe undergoes a deflationary contraction from an empty de Sitter vacuum [@5]. We will focus on a simple exact solution of the big bounce model and study what occurred at and before the bounce. The idea of extra spatial dimensions comes from the attempt to unify gravity with the other interactions. The space-time-matter (STM) theory developed by Wesson and coworkers is inspired by the unification of matter and geometry [@6; @7]. In this theory, our 4D space-time is embedded in a 5D Ricci-flat manifold, and the 4D matter fields are assumed to be “induced” from pure geometry in 5D. Mathematically, the STM theory strongly relies on Campbell’s theorem, which states that any solution of the N-dimensional Einstein equations can locally be embedded in a Ricci-flat (N+1)-dimensional manifold [@8]. It has been shown that the STM theory agrees with all the classical tests of general relativity in the solar system [@9], and it also gives physically interesting effects such as a new (fifth) force [@10]. There is an equivalence between the STM theory and brane models [@11]. Recently, the bounce cosmology has been used to construct brane models [@12].
Within the framework of the STM theory, an exact 5D cosmological solution was given by Liu and Mashhoon in 1995 [@13]. Then, in 2001, Liu and Wesson restudied the solution and showed that it describes a cosmological model with a big bounce as opposed to a big bang [@5]. The 5D metric of this solution reads $$dS^{2}=B^{2}dt^{2}-A^{2}\left( \frac{dr^{2}}{1-kr^{2}}+r^{2}d\Omega ^{2}\right) -dy^{2} \label{5-metric}$$ where $d\Omega ^{2}\equiv \left( d\theta ^{2}+\sin ^{2}\theta d\phi ^{2}\right) $ and $$\begin{aligned} B &=&\frac{1}{\mu }\frac{\partial A}{\partial t}\equiv \frac{\dot{A}}{\mu } \nonumber \\ A^{2} &=&\left( \mu ^{2}+k\right) y^{2}+2\nu y+\frac{\nu ^{2}+K}{\mu ^{2}+k}. \label{A-B}\end{aligned}$$ Here $\mu =\mu (t)$ and $\nu =\nu (t)$ are two arbitrary functions of $t$, $ k $ is the 3D curvature index $\left( k=\pm 1,0\right) $, and $K$ is a constant. This solution satisfies the 5D vacuum equations $R_{AB}=0$. So we have $$I_{1}\equiv R=0,\text{ \ }I_{2}\equiv R^{AB}R_{AB}=0,\text{ \ } I_{3}=R_{ABCD}R^{ABCD}=\frac{72K^{2}}{A^{8}}\text{ }, \label{3-invar}$$ which shows that $K$ determines the curvature of the 5D manifold. Using the 4D part of the 5D metric (\[5-metric\]) to calculate the 4D Einstein tensor, one obtains $$\begin{aligned} ^{(4)}G_{0}^{0} &=&\frac{3\left( \mu ^{2}+k\right) }{A^{2}}\text{ }, \nonumber \\ ^{(4)}G_{1}^{1} &=&^{(4)}G_{2}^{2}=^{(4)}G_{3}^{3}=\frac{2\mu \dot{\mu}}{A \dot{A}}+\frac{\mu ^{2}+k}{A^{2}}. \label{einstein}\end{aligned}$$ Suppose the induced matter is a perfect fluid with density $\rho $ and pressure $p$ moving with a 4-velocity $u^{\alpha }\equiv dx^{\alpha }/ds,$ i.e., $$^{(4)}T_{\alpha \beta }=\left( \rho +p\right) u_{\alpha }u_{\beta }-pg_{\alpha \beta }. \label{energy}$$ So $u^{\alpha }=(u^{0},0,0,0)$ and $u^{0}u_{0}=1$. 
Substituting (\[einstein\]) and (\[energy\]) into the 4D Einstein equations $^{\left( 4\right) }G_{\alpha \beta }=^{\left( 4\right) }T_{\alpha \beta }$, we find that

$$\begin{aligned} \rho &=&\frac{3\left( \mu ^{2}+k\right) }{A^{2}}, \nonumber \\ p &=&-\frac{2\mu \dot{\mu}}{A\dot{A}}-\frac{\mu ^{2}+k}{A^{2}}\text{ }. \label{dens-pres}\end{aligned}$$

The solutions given in equations (\[5-metric\])-(\[dens-pres\]) contain two arbitrary functions $\mu \left( t\right) $ and $\nu \left( t\right) $ and are, therefore, quite general. As soon as the two functions $\mu \left( t\right) $ and $\nu \left( t\right) $ are given, the solutions are fixed. On the other hand, if the coordinate $t$ and the equation of state $p=p\left( \rho \right) $ are fixed, the solution can also be fixed. In this letter, to illustrate the bounce properties explicitly, we will use the former to fix the solution. That is, we let

$$\begin{aligned} k &=&0,\text{ \ \ \ }K=1, \nonumber \\ \nu \left( t\right) &=&t_{c}\left/ t\right. ,\text{ \ \ }\mu \left( t\right) =t^{-1\left/ 2\right. }, \label{functions}\end{aligned}$$

where $t_{c}$ is a constant. Substituting (\[functions\]) into (\[A-B\]) and (\[dens-pres\]), we have

$$\begin{aligned} A^{2} &=&t\left[ 1+\left( \left( y+t_{c}\right) \left/ t\right. \right) ^{2} \right] \nonumber \\ B^{2} &=&\frac{1}{4}\left[ 1-\left( \left( y+t_{c}\right) \left/ t\right. \right) ^{2}\right] ^{2}\left[ 1+\left( \left( y+t_{c}\right) \left/ t\right. \right) ^{2}\right] ^{-1} \label{f-A-B}\end{aligned}$$

and

$$\begin{aligned} \rho &=&\frac{3}{t^{2}\left[ 1+\left( \left( y+t_{c}\right) \left/ t\right. \right) ^{2}\right] } \nonumber \\ p &=&\frac{2}{t^{2}\left[ 1-\left( \left( y+t_{c}\right) \left/ t\right. \right) ^{2}\right] }-\frac{1}{t^{2}\left[ 1+\left( \left( y+t_{c}\right) \left/ t\right. \right) ^{2}\right] }. \label{f-dens-pres}\end{aligned}$$

Equations (\[f-A-B\]) and (\[f-dens-pres\]) constitute a simple exact solution.
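The behavior of this solution near its minimum can be checked numerically. The sketch below evaluates $A^{2}$, $B^{2}$, $\rho$, and $p$ from equations (\[f-A-B\]) and (\[f-dens-pres\]) on an arbitrary hypersurface; the values $y=0.5$, $t_{c}=1$ are illustrative choices, not from the original text.

```python
def bounce_solution(t, y, tc=1.0):
    """A^2, B^2, rho and p of the exact solution with k = 0, K = 1,
    mu = t^(-1/2), nu = tc/t."""
    r = ((y + tc) / t) ** 2
    A2 = t * (1.0 + r)
    B2 = 0.25 * (1.0 - r) ** 2 / (1.0 + r)
    rho = 3.0 / (t ** 2 * (1.0 + r))
    p = 2.0 / (t ** 2 * (1.0 - r)) - 1.0 / (t ** 2 * (1.0 + r))
    return A2, B2, rho, p

y, tc = 0.5, 1.0
tb = abs(y + tc)    # bounce time t_b = |y + t_c|

# just after the bounce: A^2 -> 2 t_b and rho -> 3/(2 t_b^2) stay finite,
# B -> 0, while p diverges (positive for t > t_b, negative for t < t_b)
A2b, B2b, rho_b, _ = bounce_solution(tb * (1.0 + 1e-9), y, tc)
```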
From (\[f-A-B\]) we can show that on a given $y=constant$ hypersurface the scale factor $A\left( t,y\right) $ has a minimum point at $$t=\left| y+t_{c}\right| \equiv t_{b}\text{ ,}$$ at which we have $$A\left| _{_{t=t_{b}}}\right. =\left( 2t_{b}\right) ^{1\left/ 2\right. }\text{ , \ }B\left| _{_{t=t_{b}}}\right. =0\text{ , \ \ }\dot{A}\left| _{_{t=t_{b}}}\right. =0\text{ .}$$ So at the bounce point $t=t_{b}$ the three invariants in equation (\[3-invar\]) remain finite. This means that there is no space-time singularity in the big bounce model. From equation (\[f-dens-pres\]) we can see that at the bounce point $t=t_{b}$ the pressure undergoes a transition from negative infinity to positive infinity, which corresponds to a phase transition of the matter, i.e., a matter singularity. For a radially moving photon we have $ds^{2}=0$, so $\left( dr\left/ dt\right. \right) \left| _{t=t_{b}}\right. =0 $. This implies that $B=0$ corresponds to an event horizon. For illustration, we plot the 3D graph of the evolution of the scale factor $A\left( t,y\right) $ in Figure 1. ![Evolution of the scale factor $A\left( t,y\right) =\protect\sqrt{t \left[ 1+\left( \left( y+t_{c}\right) \left/ t\right. \right) ^{2}\right] }$ with $t_{c}=1$.](bounce3.eps){width="2.5in" height="2.5in"} From Figure 1 we see that according to the $t$-coordinate, the universe evolves smoothly across its minimum at $t=t_{b}$. This strongly suggests that time, and the arrow of time, exist before the big bounce. However, we notice that $t$ is not the proper-time. To verify this, we need a coordinate transformation from $t$ to the proper-time $\tau $. Now we let $t=t_{b}=\left| y+t_{c}\right| $ correspond to $\tau =0$, and we let the arrow of the $\tau $-coordinate point in the same direction as the $t$ -coordinate.
Then from (\[5-metric\]) and (\[f-A-B\]), the coordinate transformation reads $$\int_{0}^{\tau }d\tau =\int_{t_{b}}^{t}\left| B\right| dt=\frac{1}{2} \int_{t_{b}}^{t}\left( \left| 1-\left( t_{b}\left/ t\right. \right) ^{2}\right| \cdot \left( 1+\left( t_{b}\left/ t\right. \right) ^{2}\right) ^{-1\left/ 2\right. }\right) dt. \label{integ}$$ The integration of (\[integ\]) gives $$\tau \left( t\right) =\frac{1}{2}\times \left\{ \begin{array}{c} -\sqrt{t^{2}+t_{b}^{2}}+t_{b}\ln \left( \frac{t}{t_{b}+\sqrt{t^{2}+t_{b}^{2}} }\right) +C\text{ \ },\text{ \ for }0<t\leqslant t_{b} \\ \sqrt{t^{2}+t_{b}^{2}}-t_{b}\ln \left( \frac{t}{t_{b}+\sqrt{t^{2}+t_{b}^{2}}} \right) -C\text{ \ },\text{ \ \ \ \ for }t_{b}\leq t\leq +\infty \end{array} \right. , \label{transfor}$$ where $$C=t_{b}\left[ \sqrt{2}-\ln \left( \sqrt{2}-1\right) \right] . \nonumber$$ In the coordinate transformation (\[transfor\]), we find that there is a one-to-one correspondence between $t$ and $\tau $, and as $t$ varies from zero to infinity, the proper-time $\tau $ varies from negative infinity to positive infinity. We also find that $$\lim\limits_{t\rightarrow t_{b}^{-}}\frac{d\tau }{dt}=\lim_{t\rightarrow t_{b}^{+}}\frac{d\tau }{dt}=0. \nonumber$$ This means that the proper-time joins together smoothly at the bounce point. The transformation (\[transfor\]) is shown in Figure 2, in which we have set $t_{b}=1$ without loss of generality. ![Coordinate transformation between the coordinate-time $t$ and the proper-time $\protect\tau $.](time.eps){width="2.5in" height="2.5in"} Now we discuss the evolution of the scale factor $A$ versus the proper-time $\tau $. For simplicity, we consider it in an approximate way as follows. For $0<t\ll t_{b}$ (corresponding to $-\infty <\tau \ll 0$), we keep only the leading term in (\[transfor\]), so we get $$\begin{aligned} 2\tau &\sim &t_{b}\ln t \nonumber \\ A &\sim &t_{b}t^{-\frac{1}{2}}\sim t_{b}e^{-\tau \left/ t_{b}\right.
} \label{as-tau}\end{aligned}$$ Now the 5D line element of (\[5-metric\]) reads $$dS^{2}\approx d\tau ^{2}-t_{b}^{2}e^{-2\tau \left/ t_{b}\right. }\left( dr^{2}+r^{2}d\Omega ^{2}\right) -dy^{2}\text{ .} \label{as-5-metric}$$ The 4D part of equation (\[as-5-metric\]) is in fact the de Sitter metric, which can be interpreted as having $\rho =0$ and $\Lambda =3\left/ t_{b}^{2}\right. $. In equation (\[as-5-metric\]), the scale factor is an exponential function of proper-time $\tau $ and corresponds to a deflationary stage of the universe. In equation (\[f-A-B\]), let $t\rightarrow \infty $ (corresponding to $\tau \rightarrow \infty $); then $A\rightarrow t^{1\left/ 2\right. }$ and $B\rightarrow 1\left/ 2\right. $, and the universe expands as in the standard Friedmann-Robertson-Walker (FRW) model for the radiation dominated epoch. At $t=t_{b}$, the scale factor reaches its minimum $A=\left( 2t_{b}\right) ^{1\left/ 2\right. }$, which corresponds to the bounce point. Using (\[f-A-B\]), (\[f-dens-pres\]) and (\[integ\]) we can show that $$\begin{aligned} \lim_{\tau \rightarrow 0^{-}}\frac{dA}{d\tau } &=&-\frac{1}{\sqrt{t_{b}}} \text{, \ \ }\lim_{\tau \rightarrow 0^{+}}\frac{dA}{d\tau }=\frac{1}{\sqrt{ t_{b}}}\text{,} \nonumber \\ \lim_{\tau \rightarrow 0^{-}}p &=&-\infty \text{, \ \ \ }\lim_{\tau \rightarrow 0^{+}}p=+\infty \text{ .} \label{limit}\end{aligned}$$ So the scale factor $A$ expressed in terms of the proper-time $\tau $ is continuous but not smooth at the bounce point. Meanwhile, the pressure undergoes a transition from negative infinity to positive infinity. This implies that a matter singularity exists at the bounce point. When $t\rightarrow 0$, $\tau \rightarrow -\infty $, which means that according to the proper-time $\tau $, the universe has existed for an infinitely long time. Consequently, as time elapses in the range $\left( 0,t_{b}\right) $, the universe contracts and crunches, driven by negative pressure.
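The closed form (\[transfor\]) and the one-sided limits in (\[limit\]) can also be checked numerically. The sketch below (ours, not from the letter) integrates $\left| B\right| dt$ on a fixed-$y$ hypersurface with $t_{b}=1$:

```python
import math

t_b = 1.0  # bounce time t_b = |y + t_c| on a fixed-y hypersurface

def B_abs(t):
    # |B| from (f-A-B): B^2 = (1/4)[1-(t_b/t)^2]^2 [1+(t_b/t)^2]^(-1)
    x = (t_b / t) ** 2
    return 0.5 * abs(1.0 - x) / math.sqrt(1.0 + x)

def tau_closed(t):
    # Piecewise closed form (transfor), with C = t_b[sqrt(2) - ln(sqrt(2)-1)]
    s = math.sqrt(t**2 + t_b**2)
    C = t_b * (math.sqrt(2.0) - math.log(math.sqrt(2.0) - 1.0))
    core = s - t_b * math.log(t / (t_b + s))
    return 0.5 * (core - C) if t >= t_b else 0.5 * (C - core)

def tau_numeric(t, n=100000):
    # Signed trapezoidal approximation of the integral in (integ)
    h = (t - t_b) / n
    total = 0.5 * (B_abs(t_b) + B_abs(t))
    total += sum(B_abs(t_b + i * h) for i in range(1, n))
    return total * h

for t in (0.25, 0.5, 2.0, 5.0):
    assert abs(tau_closed(t) - tau_numeric(t)) < 1e-5

def dA_dtau(t):
    # dA/dtau = (dA/dt)/|B|, with A^2 = t[1 + (t_b/t)^2]
    x = (t_b / t) ** 2
    return 0.5 * (1.0 - x) / (math.sqrt(t * (1.0 + x)) * B_abs(t))

# One-sided limits (limit): dA/dtau -> -1/sqrt(t_b) and +1/sqrt(t_b)
assert abs(dA_dtau(t_b * (1 - 1e-9)) + 1.0 / math.sqrt(t_b)) < 1e-6
assert abs(dA_dtau(t_b * (1 + 1e-9)) - 1.0 / math.sqrt(t_b)) < 1e-6
print("(transfor) and (limit) verified numerically")
```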
At $t=t_{b}$ the universe reaches its minimum, and then bounces off, driven by positive pressure, with creation of radiation and matter. At that time the universe has a finite density $\rho =3/\left( 2t_{b}^{2}\right) $. From $t_{b}$ to now, the universe expands. In summary, the bounce singularity of a simple 5D cosmological model is studied. We point out that the bounce singularity with $A\neq 0$ and $B\equiv \dot{A}\left/ \mu \right. =0$ is not a space-time singularity. It is just a phase transition from de Sitter space to a FRW space. At the bounce point, the scale factor $A$ is continuous but not smooth with respect to the proper-time $\tau $, and the pressure has a jump from negative infinity to positive infinity, which corresponds to a matter singularity and may represent a phase transition as in the inflationary models [@14]. We point out that the $B=0$ singularity is an event horizon, as in the Schwarzschild solution. According to the proper-time, the whole bounce scenario can be described as follows. Our universe has existed for an infinitely long time. Before the bounce, the universe undergoes a deflationary contraction. As it approaches the bounce point, it undergoes a crunch driven by an infinite negative pressure, and then it bounces off driven by an infinite positive pressure and accompanied by creation of radiation and matter. After the bounce, the universe expands asymptotically as in the standard FRW models. **Acknowledgements** We thank Paul Wesson and Guowen Peng for comments. This work was supported by NSF of P.R. China under Grants 19975007 and 10273004. [99]{} P. J. Steinhardt and N. Turok, Science **296** (2002) 1436, hep-th/0111030. R. C. Tolman, *Relativity, Thermodynamics and Cosmology* (Clarendon Press, Oxford, 1934). J. Khoury, B. A. Ovrut, P. J. Steinhardt and N. Turok, Phys. Rev. D **64** (2001) 123522, hep-th/0103239. J. Khoury, B. A. Ovrut, N. Seiberg, P. J. Steinhardt and N. Turok, Phys. Rev.
D **65** (2002) 086007, hep-th/0108187. H. Y. Liu and P. S. Wesson, Astrophys. J. **562** (2001) 1, gr-qc/0107093. P. S. Wesson, *Space-Time-Matter* (World Scientific Publishing Co. Pte. Ltd., 1999). J. M. Overduin and P. S. Wesson, Phys. Reports **283** (1997) 303. J. E. Campbell, *A Course of Differential Geometry* (Clarendon Press, Oxford, 1926). D. Kalligas, P. S. Wesson and C. W. F. Everitt, Astrophys. J. **439** (1995) 548; H. Y. Liu and J. M. Overduin, Astrophys. J. **538** (2000) 386. B. Mashhoon, P. S. Wesson and H. Y. Liu, Gen. Rel. Grav. **30** (1998) 555; P. S. Wesson, B. Mashhoon, H. Y. Liu and M. N. Sajko, Phys. Lett. B **456** (1999) 34; H. Y. Liu and B. Mashhoon, Phys. Lett. A **272** (2000) 26. J. Ponce de Leon, Mod. Phys. Lett. A **16** (2001) 2291-2304, gr-qc/0111011. H. Y. Liu, Phys. Lett. B **560** (2003) 149-154, hep-th/0206198. H. Y. Liu and B. Mashhoon, Ann. Phys. **4** (1995) 565. L. Kofman, A. Linde and V. Mukhanov, JHEP **0210** (2002) 057, hep-th/0206088; X. Z. Li, D. J. Liu and J. G. Hao, Chin. Phys. Lett. **20** (2) (2003) 192; Y. Zhang, Chin. Phys. Lett. **19** (10) (2002) 1569; G. H. Yang, Y. Jiang and Y. S. Duan, Chin. Phys. Lett. **18** (5) (2001) 631. [^1]: hyliu@dlut.edu.cn
--- author: - 'Petros Drineas [^1]' - 'Michael W. Mahoney [^2]' title: 'Lectures on Randomized Numerical Linear Algebra[^3] ' --- Introduction {#sxn:intro} ============ Matrices are ubiquitous in computer science, statistics, and applied mathematics. An $m \times n$ matrix can encode information about $m$ objects (each described by $n$ features), or the behavior of a discretized differential operator on a finite element mesh; an $n \times n$ positive-definite matrix can encode the correlations between all pairs of $n$ objects, or the edge-connectivity between all pairs of $n$ nodes in a social network; and so on. Motivated largely by technological developments that generate extremely large scientific and Internet data sets, recent years have witnessed exciting developments in the theory and practice of matrix algorithms. Particularly remarkable is the use of *randomization*—typically assumed to be a property of the input data due to, e.g., noise in the data generation mechanisms—as an algorithmic or computational resource for the development of improved algorithms for fundamental matrix problems such as matrix multiplication, least-squares (LS) approximation, low-rank matrix approximation, etc. Randomized Numerical Linear Algebra (RandNLA) is an interdisciplinary research area that exploits randomization as a computational resource to develop improved algorithms for large-scale linear algebra problems. From a foundational perspective, RandNLA has its roots in theoretical computer science (TCS), with deep connections to mathematics (convex analysis, probability theory, metric embedding theory) and applied mathematics (scientific computing, signal processing, numerical linear algebra). From an applied perspective, RandNLA is a vital new tool for machine learning, statistics, and data analysis. 
Well-engineered implementations have already outperformed highly-optimized software libraries for ubiquitous problems such as least-squares regression, with good scalability in parallel and distributed environments. Moreover, RandNLA promises a sound algorithmic and statistical foundation for modern large-scale data analysis. This chapter serves as a self-contained, gentle introduction to three fundamental RandNLA algorithms: randomized matrix multiplication, randomized least-squares solvers, and a randomized algorithm to compute a low-rank approximation to a matrix. As such, this chapter has strong connections with many areas of applied mathematics, and in particular it has strong connections with several other chapters in this volume. Most notably, this includes that of G. Martinsson, who uses these methods to develop improved low-rank matrix approximation solvers [@pcmi-chapter-martinsson]; R. Vershynin, who develops probabilistic tools that are used in the analysis of RandNLA algorithms [@pcmi-chapter-vershynin]; J. Duchi, who uses stochastic and randomized methods in a complementary manner for more general optimization problems [@pcmi-chapter-duchi]; and M. Maggioni, who uses these methods as building blocks for more complex multiscale methods [@pcmi-chapter-maggioni]. We start this chapter with a review of basic linear algebraic facts in Section \[sxn:introla\]; we review basic facts from discrete probability in Section \[sxn:dp\]; we present a randomized algorithm for matrix multiplication in Section \[chapter:MM\]; we present a randomized algorithm for least-squares regression problems in Section \[sxn:main:regression\]; and finally we present a randomized algorithm for low-rank approximation in Section \[sxn:main:lowrank\]. We conclude this introduction by noting that [@Drineas2016; @Mah-mat-rev_BOOK] might also be of interest to a reader who wants to go through other introductory texts on RandNLA. 
Linear Algebra {#sxn:introla} ============== In this section, we present a brief overview of basic linear algebraic facts and notation that will be useful in this chapter. We assume basic familiarity with linear algebra (e.g., inner/outer products of vectors, basic matrix operations such as addition, scalar multiplication, transposition, upper/lower triangular matrices, matrix-vector products, matrix multiplication, matrix trace, etc.). Basics. {#sxn:labasics} ------- We will entirely focus on matrices and vectors over the *reals*. We will use the notation ${\boldsymbol{x}}\in {\mathbb{R}^{n}}$ to denote an $n$-dimensional vector: notice the use of bold latin *lowercase* letters for vectors. Vectors will always be assumed to be column vectors, unless explicitly noted otherwise. The vector of all zeros will be denoted as ${\boldsymbol{0}}$, while the vector of all ones will be denoted as ${\boldsymbol{1}}$; dimensions will be implied from context or explicitly included as a subscript. We will use bold latin *uppercase* letters for matrices, e.g., ${\mathbf{A}}\in \mathbb{R}^{m \times n}$ denotes an $m \times n$ matrix ${\mathbf{A}}$. We will use the notation ${\mathbf{A}}_{i*}$ to denote the $i$-th row of ${\mathbf{A}}$ as a row vector and ${\mathbf{A}}_{*i}$ to denote the $i$-th column of ${\mathbf{A}}$ as a column vector. The (square) identity matrix will be denoted as ${\boldsymbol{I}}_n$ where $n$ denotes the number of rows and columns. Finally, we use ${\boldsymbol{e}}_i$ to denote the $i$-th column of ${\boldsymbol{I}}_n$, i.e., the $i$-th *canonical* vector. **Matrix Inverse.** A matrix ${\mathbf{A}}\in{\mathbb{R}^{n\times n}}$ is nonsingular or invertible if there exists a matrix ${\mathbf{A}}^{-1} \in {\mathbb{R}^{n \times n}}$ such that $${\mathbf{A}}{\mathbf{A}}^{-1}={\boldsymbol{I}}_{n\times n}={\mathbf{A}}^{-1}{\mathbf{A}}.$$ The inverse exists when all the columns (or all the rows) of ${\mathbf{A}}$ are linearly independent. 
In other words, there does not exist a non-zero vector ${\boldsymbol{x}}\in {\mathbb{R}^{n}}$ such that ${\mathbf{A}}{\boldsymbol{x}}= {\boldsymbol{0}}$. Standard properties of the inverse include: $({\mathbf{A}}^{-1})^\top = ({\mathbf{A}}^{\top})^{-1} = {\mathbf{A}}^{-\top}$ and $({\mathbf{A}}{\boldsymbol{B}})^{-1} = {\boldsymbol{B}}^{-1}{\mathbf{A}}^{-1}$. **Orthogonal matrix.** A matrix ${\mathbf{A}}\in{\mathbb{R}^{n\times n}}$ is orthogonal if ${\mathbf{A}}^\top = {\mathbf{A}}^{-1}$. Equivalently, for all $i$ and $j$ between one and $n$, $${\mathbf{A}}_{*i}^\top{\mathbf{A}}_{*j} = \begin{cases} 0, & \text{if }i\neq j\\ 1, & \text{if } i = j \end{cases}.$$ The same property holds for the rows of ${\mathbf{A}}$. In words, the columns (rows) of ${\mathbf{A}}$ are pairwise orthogonal and normal vectors. **QR Decomposition.** Any matrix ${\mathbf{A}}\in {\mathbb{R}^{n\times n}}$ can be decomposed into the product of an orthogonal matrix and an upper triangular matrix as: $${\mathbf{A}}= {\boldsymbol{Q}}{\boldsymbol{R}},$$ where ${\boldsymbol{Q}}\in {\mathbb{R}^{n\times n}}$ is an orthogonal matrix and ${\boldsymbol{R}}\in {\mathbb{R}^{n\times n}}$ is an upper triangular matrix. The QR decomposition is useful in solving systems of linear equations, has computational complexity ${O(n^3)}$, and is numerically stable. To solve the linear system ${\mathbf{A}}{\boldsymbol{x}}= {\boldsymbol{b}}$ using the QR decomposition we first premultiply both sides by ${\boldsymbol{Q}}^\top$, thus getting ${\boldsymbol{Q}}^\top{\boldsymbol{Q}}{\boldsymbol{R}}{\boldsymbol{x}}={\boldsymbol{R}}{\boldsymbol{x}}= {\boldsymbol{Q}}^\top{\boldsymbol{b}}$. Then, we solve ${\boldsymbol{R}}{\boldsymbol{x}}= {\boldsymbol{Q}}^\top{\boldsymbol{b}}$ using backward substitution [@GVL96]. Norms. ------ Norms are used to measure the size or mass of a matrix or, relatedly, the length of a vector. 
They are functions that map an object from ${\mathbb{R}^{m \times n}}$ (or ${\mathbb{R}^{n}}$) to ${\mathbb{R}^{}}$. Formally: Any function, $\|\cdot\|$: ${\mathbb{R}^{m\times n}} \rightarrow {\mathbb{R}^{}}$ that satisfies the following properties is called a [**norm**]{}: 1. Non-negativity: $\|{\mathbf{A}}\|\geq 0$; $\|{\mathbf{A}}\|= 0$ if and only if ${\mathbf{A}}= {\boldsymbol{0}}$. 2. Triangle inequality: $\|{\mathbf{A}}+{\boldsymbol{B}}\| \leq \|{\mathbf{A}}\|+\|{\boldsymbol{B}}\|$. 3. Scalar multiplication: $\|\alpha {\mathbf{A}}\| = |\alpha|\|{\mathbf{A}}\|$, for all $\alpha\in{\mathbb{R}^{}}$. The following properties are easy to prove for any norm: $\|-{\mathbf{A}}\| = \|{\mathbf{A}}\|$ and $$|\|{\mathbf{A}}\|-\|{\boldsymbol{B}}\||\leq \|{\mathbf{A}}-{\boldsymbol{B}}\|.$$ The latter property is known as the reverse triangle inequality. Vector norms. ------------- Given ${\boldsymbol{x}}\in {\mathbb{R}^{n}}$ and an integer $p\geq 1$, we define the vector $p$-norm as: $$\PNorm{{\boldsymbol{x}}} = \left(\sum_{i=1}^n|x_i|^p\right)^{1/p}.$$ The most common vector $p$-norms are: - One norm: $\ONorm{{\boldsymbol{x}}} = \sum_{i=1}^n|x_i|$. - Euclidean (two) norm: ${\mbox{}\|{\boldsymbol{x}}\|_2} = \sqrt{\sum_{i=1}^n|x_i|^2} = \sqrt{{\boldsymbol{x}}^\top{\boldsymbol{x}}}$. - Infinity (max) norm: $\VINorm{{\boldsymbol{x}}} = \max_{1\leq i \leq n}|x_i|$. Given ${\boldsymbol{x}},{\boldsymbol{y}}\in {\mathbb{R}^{n}}$ we can bound the inner product ${\boldsymbol{x}}^\top{\boldsymbol{y}}= \sum_{i=1}^n x_iy_i$ using $p$-norms. The Cauchy-Schwarz inequality states that: $$|{\boldsymbol{x}}^\top{\boldsymbol{y}}| \leq{\mbox{}\|{\boldsymbol{x}}\|_2}{\mbox{}\|{\boldsymbol{y}}\|_2}.$$ In words, it gives an upper bound for the inner product of two vectors in terms of the Euclidean norm of the two vectors.
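A quick numerical illustration (ours, not part of the original lectures): the vector $p$-norm definitions and the Cauchy-Schwarz bound can be checked with NumPy on random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10)
y = rng.standard_normal(10)

# Vector p-norms computed directly from the definitions
one_norm = np.sum(np.abs(x))      # ||x||_1
two_norm = np.sqrt(np.sum(x**2))  # ||x||_2
inf_norm = np.max(np.abs(x))      # ||x||_inf

# They agree with NumPy's built-in implementations
assert np.isclose(one_norm, np.linalg.norm(x, 1))
assert np.isclose(two_norm, np.linalg.norm(x, 2))
assert np.isclose(inf_norm, np.linalg.norm(x, np.inf))

# Cauchy-Schwarz: |x^T y| <= ||x||_2 ||y||_2
assert abs(x @ y) <= np.linalg.norm(x) * np.linalg.norm(y)
```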
Hölder’s inequality states that $$|{\boldsymbol{x}}^\top{\boldsymbol{y}}|\leq\ONorm{{\boldsymbol{x}}}\VINorm{{\boldsymbol{y}}}\quad \text{and}\quad |{\boldsymbol{x}}^\top{\boldsymbol{y}}|\leq\VINorm{{\boldsymbol{x}}}\ONorm{{\boldsymbol{y}}}.$$ The following inequalities between common vector $p$-norms are easy to prove: $$\begin{aligned} \VINorm{{\boldsymbol{x}}}\leq \ONorm{{\boldsymbol{x}}} &\leq n\VINorm{{\boldsymbol{x}}},\\ {\mbox{}\|{\boldsymbol{x}}\|_2}\leq \ONorm{{\boldsymbol{x}}} &\leq \sqrt{n}{\mbox{}\|{\boldsymbol{x}}\|_2},\\ \VINorm{{\boldsymbol{x}}}\leq {\mbox{}\|{\boldsymbol{x}}\|_2} &\leq \sqrt{n}\VINorm{{\boldsymbol{x}}}.\end{aligned}$$ Also, ${\mbox{}\|{\boldsymbol{x}}\|_2}^2 = {\boldsymbol{x}}^T{\boldsymbol{x}}$. We can now define the notion of orthogonality for a pair of vectors and state the Pythagorean theorem. \[thm:pythagoras\] Two vectors ${\boldsymbol{x}},{\boldsymbol{y}}\in {\mathbb{R}^{n}}$ are orthogonal, i.e., ${\boldsymbol{x}}^\top{\boldsymbol{y}}= 0$, if and only if $${\mbox{}\|{\boldsymbol{x}}\pm{\boldsymbol{y}}\|_2^2} ={\mbox{}\|{\boldsymbol{x}}\|_2^2}+{\mbox{}\|{\boldsymbol{y}}\|_2^2}.$$ Theorem \[thm:pythagoras\] is also known as the Pythagorean Theorem. Another interesting property of the Euclidean norm is that it does not change after pre(post)-multiplication by a matrix with orthonormal columns (rows). Given a vector ${\boldsymbol{x}}\in{\mathbb{R}^{n}}$ and a matrix ${\boldsymbol{V}}\in {\mathbb{R}^{m\times n}}$ with $m \ge n$ and ${\boldsymbol{V}}^\top{\boldsymbol{V}}= {\boldsymbol{I}}_n$: $${\mbox{}\|{\boldsymbol{V}}{\boldsymbol{x}}\|_2} ={\mbox{}\|{\boldsymbol{x}}\|_2}\quad \mbox{and} \quad {\mbox{}\|{\boldsymbol{x}}^T{\boldsymbol{V}}^T\|_2} ={\mbox{}\|{\boldsymbol{x}}\|_2}.$$ Induced matrix norms. 
--------------------- Given a matrix ${\mathbf{A}}\in {\mathbb{R}^{m\times n}}$ and an integer $p\geq 1$ we define the matrix $p$-norm as: $$\PNorm{{\mathbf{A}}} = \max_{{\boldsymbol{x}}\neq 0}{\frac{\PNorm{{\mathbf{A}}{\boldsymbol{x}}}}{{\mbox{}\|{\boldsymbol{x}}\|_p }}} = \max_{{\mbox{}\|{\boldsymbol{x}}\|_p } = 1}{\PNorm{{\mathbf{A}}{\boldsymbol{x}}}}.$$ The most frequent matrix $p$-norms are: - One norm: the maximum absolute column sum, $$\ONorm{{\mathbf{A}}} = \max_{1\leq j\leq n}\sum_{i=1}^m{|{\mathbf{A}}_{ij}|} = \max_{1\leq j \leq n}\ONorm{{\mathbf{A}}{\boldsymbol{e}}_j}.$$ - Infinity norm: the maximum absolute row sum, $$\VINorm{{\mathbf{A}}} = \max_{1\leq i\leq m}\sum_{j=1}^n{|{\mathbf{A}}_{ij}|} = \max_{1\leq i \leq m}\ONorm{{\mathbf{A}}^\top {\boldsymbol{e}}_i}.$$ - Two (or spectral) norm: $${\mbox{}\|{\mathbf{A}}\|_2} = \max_{{\mbox{}\|{\boldsymbol{x}}\|_2} = 1}\|{\mathbf{A}}{\boldsymbol{x}}\|_2 = \max_{{\mbox{}\|{\boldsymbol{x}}\|_2} = 1} \sqrt{{\boldsymbol{x}}^\top{\mathbf{A}}^\top{\mathbf{A}}{\boldsymbol{x}}} .$$ This family of norms is named “induced” because they are realized by a non-zero vector ${\boldsymbol{x}}$ that varies depending on ${\mathbf{A}}$ and $p$. Thus, there exists a unit norm vector (unit norm in the $p$-norm) ${\boldsymbol{x}}$ such that $\PNorm{{\mathbf{A}}} = \PNorm{{\mathbf{A}}{\boldsymbol{x}}}.$ The induced matrix $p$-norms follow the submultiplicativity laws: $$\PNorm{{\mathbf{A}}{\boldsymbol{x}}} \leq \PNorm{{\mathbf{A}}}\PNorm{{\boldsymbol{x}}}\qquad \mbox{and} \qquad \PNorm{{\mathbf{A}}{\boldsymbol{B}}}\leq \PNorm{{\mathbf{A}}}\PNorm{{\boldsymbol{B}}}.$$ Furthermore, matrix $p$-norms are invariant to permutations: $\PNorm{{\boldsymbol{P}}{\mathbf{A}}{\boldsymbol{Q}}} = \PNorm{{\mathbf{A}}},$ where ${\boldsymbol{P}}$ and ${\boldsymbol{Q}}$ are permutation matrices of appropriate dimensions.
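In NumPy, the induced one, infinity, and two norms correspond to `ord=1`, `ord=np.inf`, and `ord=2` in `np.linalg.norm`; the short sketch below (ours, not from the original text) checks the definitions and the submultiplicativity law numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 5))

# One norm: maximum absolute column sum
assert np.isclose(np.linalg.norm(A, 1), np.max(np.sum(np.abs(A), axis=0)))
# Infinity norm: maximum absolute row sum
assert np.isclose(np.linalg.norm(A, np.inf), np.max(np.sum(np.abs(A), axis=1)))
# Two (spectral) norm: largest singular value
assert np.isclose(np.linalg.norm(A, 2), np.linalg.svd(A, compute_uv=False)[0])

# Submultiplicativity: ||A B||_p <= ||A||_p ||B||_p
for p in (1, 2, np.inf):
    assert np.linalg.norm(A @ B, p) <= np.linalg.norm(A, p) * np.linalg.norm(B, p) + 1e-12
```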
Also, if we consider the matrix with permuted rows and columns $${\boldsymbol{P}}{\mathbf{A}}{\boldsymbol{Q}}= \begin{pmatrix}{\boldsymbol{B}}& {\mathbf{A}}_{12}\\ {\mathbf{A}}_{21} & {\mathbf{A}}_{22} \end{pmatrix} ,$$ then the norm of the submatrix is related to the norm of the full unpermuted matrix as follows: $\PNorm{{\boldsymbol{B}}}\leq \PNorm{{\mathbf{A}}}.$ The following relationships between matrix $p$-norms are relatively easy to prove. Given a matrix ${\mathbf{A}}\in {\mathbb{R}^{m\times n}}$, $$\begin{aligned} \frac{1}{\sqrt{n}}\VINorm{{\mathbf{A}}} \leq {\mbox{}\|{\mathbf{A}}\|_2} \leq\sqrt{m}\VINorm{{\mathbf{A}}}, \\ \frac{1}{\sqrt{m}}\ONorm{{\mathbf{A}}} \leq {\mbox{}\|{\mathbf{A}}\|_2} \leq\sqrt{n}\ONorm{{\mathbf{A}}}.\end{aligned}$$ It is also the case that $\ONorm{{\mathbf{A}}^\top} = \VINorm{{\mathbf{A}}}$ and $\VINorm{{\mathbf{A}}^\top} =\ONorm{{\mathbf{A}}}.$ While transposition affects the infinity and one norm of a matrix, it does not affect the two norm, i.e., ${\mbox{}\|{\mathbf{A}}^\top\|_2} = {\mbox{}\|{\mathbf{A}}\|_2}$. Also, the matrix two-norm is not affected by pre(post)- multiplication with matrices whose columns (rows) are orthonormal vectors: ${\mbox{}\|{\boldsymbol{U}}{\mathbf{A}}{\boldsymbol{V}}^\top\|_2} = {\mbox{}\|{\mathbf{A}}\|_2},$ where ${\boldsymbol{U}}$ and ${\boldsymbol{V}}$ are orthonormal matrices (${\boldsymbol{U}}^T{\boldsymbol{U}}={\boldsymbol{I}}$ and ${\boldsymbol{V}}^T{\boldsymbol{V}}={\boldsymbol{I}}$) of appropriate dimensions. The Frobenius norm. {#sxn:review:FNorm} ------------------- The Frobenius norm is not an induced norm, as it belongs to the family of Schatten norms (to be discussed in Section \[sxn:schatten\]). 
Given a matrix ${\mathbf{A}}\in {\mathbb{R}^{m\times n}}$, we define the Frobenius norm as: $$\FNorm{{\mathbf{A}}} = \sqrt{\sum_{j=1}^n\sum_{i=1}^m{{\mathbf{A}}_{ij}^2}} = \sqrt{\Trace{{\mathbf{A}}^\top{\mathbf{A}}}},$$ where $\Trace{\cdot}$ denotes the matrix trace (where, recall, the trace of a square matrix is defined to be the sum of the elements on the main diagonal). Informally, the Frobenius norm measures the variance or variability (which can be given an interpretation of size or mass) of a matrix. Given a vector ${\boldsymbol{x}}\in {\mathbb{R}^{n}}$, its Frobenius norm is equal to its Euclidean norm, i.e., $\FNorm{{\boldsymbol{x}}} = {\mbox{}\|{\boldsymbol{x}}\|_2}$. Transposition of a matrix ${\mathbf{A}}\in {\mathbb{R}^{m\times n}}$ does not affect its Frobenius norm, i.e., $\FNorm{{\mathbf{A}}}=\FNorm{{\mathbf{A}}^\top}$. Similar to the two norm, the Frobenius norm does not change under permutations or pre(post)- multiplication with a matrix with orthonormal columns (rows): $$\FNorm{{\boldsymbol{U}}{\mathbf{A}}{\boldsymbol{V}}^T}= \FNorm{{\mathbf{A}}},$$ where ${\boldsymbol{U}}$ and ${\boldsymbol{V}}$ are orthonormal matrices (${\boldsymbol{U}}^T{\boldsymbol{U}}={\boldsymbol{I}}$ and ${\boldsymbol{V}}^T{\boldsymbol{V}}={\boldsymbol{I}}$) of appropriate dimensions. 
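The trace identity in the definition and the orthonormal invariance just stated can be checked numerically; a small NumPy sketch (ours, not part of the original lectures):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))

# Frobenius norm equals the square root of trace(A^T A)
fro = np.linalg.norm(A, 'fro')
assert np.isclose(fro, np.sqrt(np.trace(A.T @ A)))

# Invariance under pre-multiplication by a matrix with orthonormal columns:
# take Q from a QR factorization of a random square matrix
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
assert np.isclose(np.linalg.norm(Q @ A, 'fro'), fro)
```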
The two and the Frobenius norm can be related by: $${\mbox{}\|{\mathbf{A}}\|_2} \leq \FNorm{{\mathbf{A}}} \leq \sqrt{\rm{rank}({\mathbf{A}})}{\mbox{}\|{\mathbf{A}}\|_2}\leq\sqrt{\min{\{m,n\}}}{\mbox{}\|{\mathbf{A}}\|_2}.$$ The Frobenius norm satisfies the so-called strong sub-multiplicativity property, namely: $$\FNorm{{\mathbf{A}}{\boldsymbol{B}}}\leq {\mbox{}\|{\mathbf{A}}\|_2}\FNorm{{\boldsymbol{B}}} \quad \mbox{and} \quad \FNorm{{\mathbf{A}}{\boldsymbol{B}}}\leq \FNorm{{\mathbf{A}}}{\mbox{}\|{\boldsymbol{B}}\|_2}.$$ Given ${\boldsymbol{x}}\in {\mathbb{R}^{m}}$ and ${\boldsymbol{y}}\in {\mathbb{R}^{n}}$, the Frobenius norm of their outer product is equal to the product of the Euclidean norms of the two vectors forming the outer product: $$\FNorm{{\boldsymbol{x}}{\boldsymbol{y}}^\top} ={\mbox{}\|{\boldsymbol{x}}\|_2}{\mbox{}\|{\boldsymbol{y}}\|_2}.$$ Finally, we state a matrix version of the Pythagorean theorem. \[l\_pyth\] Let ${\mathbf{A}},{\boldsymbol{B}}\in\mathbb{R}^{m\times n}$. If ${\mathbf{A}}^T{\boldsymbol{B}}={\boldsymbol{0}}$ then $$\|{\mathbf{A}}+{\boldsymbol{B}}\|_F^2=\|{\mathbf{A}}\|_F^2+\|{\boldsymbol{B}}\|_F^2.$$ The Singular Value Decomposition. {#sxn:review:SVD} --------------------------------- The Singular Value Decomposition (SVD) is the most important matrix decomposition and exists for every matrix. Given a matrix ${\mathbf{A}}\in{\mathbb{R}^{m\times n}}$, we define its full SVD as: $${\mathbf{A}}= {\boldsymbol{U}}{\boldsymbol{\Sigma}}{\boldsymbol{V}}^T = \sum_{i=1}^{\min\{m,n\}}\sigma_i{\boldsymbol{u}}_i{\boldsymbol{v}}_i^\top ,$$ where ${\boldsymbol{U}}\in{\mathbb{R}^{m\times m}}$ and ${\boldsymbol{V}}\in{\mathbb{R}^{n\times n}}$ are orthogonal matrices that contain the left and right singular vectors of ${\mathbf{A}}$, respectively, and ${\boldsymbol{\Sigma}}\in {\mathbb{R}^{m\times n}}$ is a diagonal matrix, with the singular values of ${\mathbf{A}}$ in decreasing order on the diagonal. 
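A minimal numerical illustration of the full SVD (ours, using NumPy): reconstruct ${\mathbf{A}}$ from its factors, and check the orthogonality of ${\boldsymbol{U}}$ and ${\boldsymbol{V}}$ and the ordering of the singular values.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))

# Full SVD: A = U Sigma V^T with U (m x m) and V (n x n) orthogonal
U, s, Vt = np.linalg.svd(A, full_matrices=True)
Sigma = np.zeros((5, 3))
Sigma[:3, :3] = np.diag(s)

assert np.allclose(U @ Sigma @ Vt, A)              # reconstruction
assert np.allclose(U.T @ U, np.eye(5))             # U orthogonal
assert np.allclose(Vt @ Vt.T, np.eye(3))           # V orthogonal
assert np.all(s[:-1] >= s[1:]) and np.all(s >= 0)  # decreasing, non-negative
assert np.linalg.matrix_rank(A) == np.sum(s > 1e-12)
```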
We will often use ${\boldsymbol{u}}_i$ (respectively, ${\boldsymbol{v}}_j$), $i=1,\ldots, m$ (respectively, $j=1,\ldots, n$) to denote the columns of the matrix ${\boldsymbol{U}}$ (respectively, ${\boldsymbol{V}}$). Similarly, we will use $\sigma_i$, $i=1,\ldots, \min\{m,n\}$ to denote the singular values: $$\sigma_1\geq \sigma_2\geq\cdots\geq \sigma_{\min\{m,n\}}\geq 0.$$ The singular values of ${\mathbf{A}}$ are non-negative and their number is equal to $\min\{m,n\}$. The number of non-zero singular values of ${\mathbf{A}}$ is equal to the rank of ${\mathbf{A}}$. Due to orthonormal invariance, we get: $${\boldsymbol{\Sigma}}_{{\boldsymbol{P}}{\mathbf{A}}{\boldsymbol{Q}}^T} = {\boldsymbol{\Sigma}}_{{\mathbf{A}}},$$ where ${\boldsymbol{P}}$ and ${\boldsymbol{Q}}$ are orthonormal matrices (${\boldsymbol{P}}^T{\boldsymbol{P}}={\boldsymbol{I}}$ and ${\boldsymbol{Q}}^T{\boldsymbol{Q}}={\boldsymbol{I}}$) of appropriate dimensions. In words, the singular values of ${\boldsymbol{P}}{\mathbf{A}}{\boldsymbol{Q}}^T$ are the same as the singular values of ${\mathbf{A}}$. The following inequalities involving the singular values of the matrices ${\mathbf{A}}$ and ${\boldsymbol{B}}$ are important. First, if both ${\mathbf{A}}$ and ${\boldsymbol{B}}$ are in $\mathbb{R}^{m \times n}$, for all $i=1,\ldots, \min\{m,n\}$, $$\begin{aligned} \label{eqn:svineq1}\abs{\sigma_i({\mathbf{A}})-\sigma_i({\boldsymbol{B}})}\leq{\mbox{}\|{\mathbf{A}}-{\boldsymbol{B}}\|_2}.\end{aligned}$$ Second, if ${\mathbf{A}}\in \mathbb{R}^{p \times m}$ and ${\boldsymbol{B}}\in \mathbb{R}^{m \times n}$, for all $i=1,\ldots, \min\{m,n\}$, $$\begin{aligned} \label{eqn:svineq2}\sigma_i({\mathbf{A}}{\boldsymbol{B}})\leq \sigma_1({\mathbf{A}})\sigma_i({\boldsymbol{B}}),\end{aligned}$$ where, recall, $ \sigma_1({\mathbf{A}}) = {\mbox{}\|{\mathbf{A}}\|_2} $. We are often interested in keeping only the non-zero singular values and the corresponding left and right singular vectors of a matrix ${\mathbf{A}}$.
Given a matrix ${\mathbf{A}}\in{\mathbb{R}^{m\times n}}$ with ${\mbox{rank}({\mathbf{A}})} = \rho$, its thin SVD can be defined as follows. Given a matrix ${\mathbf{A}}\in{\mathbb{R}^{m\times n}}$ of rank $\rho \leq \min\{m,n\}$, we define its thin SVD as: $${\mathbf{A}}= \underbrace{{\boldsymbol{U}}}_{m\times \rho}\underbrace{{\boldsymbol{\Sigma}}}_{\rho\times \rho}\underbrace{{\boldsymbol{V}}^\top}_{\rho\times n} = \sum_{i=1}^\rho\sigma_i{\boldsymbol{u}}_i{\boldsymbol{v}}_i^\top,$$ where ${\boldsymbol{U}}\in{\mathbb{R}^{m\times \rho}}$ and ${\boldsymbol{V}}\in{\mathbb{R}^{n\times \rho}}$ are matrices with pairwise orthonormal columns (i.e., ${\boldsymbol{U}}^T{\boldsymbol{U}}={\boldsymbol{I}}$ and ${\boldsymbol{V}}^T{\boldsymbol{V}}={\boldsymbol{I}}$) that contain the left and right singular vectors of ${\mathbf{A}}$ corresponding to the non-zero singular values; ${\boldsymbol{\Sigma}}\in {\mathbb{R}^{\rho\times \rho}}$ is a diagonal matrix with the non-zero singular values of ${\mathbf{A}}$ in decreasing order on the diagonal. If ${\mathbf{A}}$ is a nonsingular matrix, we can compute its inverse using the SVD: $${\mathbf{A}}^{-1} = ({\boldsymbol{U}}{\boldsymbol{\Sigma}}{\boldsymbol{V}}^\top)^{-1} = {\boldsymbol{V}}{\boldsymbol{\Sigma}}^{-1}{\boldsymbol{U}}^\top.$$ (If ${\mathbf{A}}$ is nonsingular, then it is square and full rank, in which case the thin SVD is the same as the full SVD.) The SVD is so important since, as is well-known, the best rank-$k$ approximation to any matrix can be computed via the SVD. Let ${\mathbf{A}}={\boldsymbol{U}}{{\boldsymbol{\Sigma}}}{\boldsymbol{V}}^\top \in {\mathbb{R}^{m\times n}}$ be the thin SVD of ${\mathbf{A}}$; let $k<{\mbox{rank}({\mathbf{A}})}=\rho$ be an integer; and let ${\mathbf{A}}_k = \sum_{i=1}^k\sigma_i{\boldsymbol{u}}_i{\boldsymbol{v}}_i^\top = {\boldsymbol{U}}_k {\boldsymbol{\Sigma}}_k {\boldsymbol{V}}_k^T$. 
Then, $$\sigma_{k+1} = \min_{{\boldsymbol{B}}\in{\mathbb{R}^{m\times n}},\ \rm{rank}({\boldsymbol{B}})=k}{\mbox{}\|{\mathbf{A}}-{\boldsymbol{B}}\|_2} = {\mbox{}\|{\mathbf{A}}-{\mathbf{A}}_k\|_2}$$ and $$\sum_{j=k+1}^\rho\sigma_j^2 = \min_{{\boldsymbol{B}}\in{\mathbb{R}^{m\times n}},\ \rm{rank}({\boldsymbol{B}})=k}{\mbox{}\|{\mathbf{A}}-{\boldsymbol{B}}\|_F^2} = {\mbox{}\|{\mathbf{A}}-{\mathbf{A}}_k\|_F^2}.$$ In words, the above theorem states that if we seek a rank $k$ approximation to a matrix ${\mathbf{A}}$ that minimizes the two or the Frobenius norm of the “error” matrix, i.e., of the difference between ${\mathbf{A}}$ and its approximation, then it suffices to keep the top $k$ singular values of ${\mathbf{A}}$ and the corresponding left and right singular vectors. We will often use the following notation: let ${\boldsymbol{U}}_k \in \mathbb{R}^{m \times k}$ (respectively, ${\boldsymbol{V}}_k \in \mathbb{R}^{n \times k}$) denote the matrix of the top $k$ left (respectively, right) singular vectors of ${\mathbf{A}}$; and let ${\boldsymbol{\Sigma}}_k \in \mathbb{R}^{k \times k}$ denote the diagonal matrix containing the top $k$ singular values of ${\mathbf{A}}$. Similarly, let ${\boldsymbol{U}}_{k,\perp} \in \mathbb{R}^{m \times (\rho-k)}$ (respectively, ${\boldsymbol{V}}_{k,\perp} \in \mathbb{R}^{n \times (\rho-k)}$) denote the matrix of the bottom $\rho-k$ nonzero left (respectively, right) singular vectors of ${\mathbf{A}}$; and let ${\boldsymbol{\Sigma}}_{k,\perp} \in \mathbb{R}^{(\rho-k) \times (\rho-k)}$ denote the diagonal matrix containing the bottom $\rho-k$ singular values of ${\mathbf{A}}$. Then, $$\label{eqn:svdnotation} {\mathbf{A}}_k={\boldsymbol{U}}_k{\boldsymbol{\Sigma}}_k{\boldsymbol{V}}_k^T \quad \mbox{and} \quad {\mathbf{A}}_{k,\perp}={\mathbf{A}}-{\mathbf{A}}_k = {\boldsymbol{U}}_{k,\perp}{\boldsymbol{\Sigma}}_{k,\perp}{\boldsymbol{V}}_{k,\perp}^T.$$ SVD and Fundamental Matrix Spaces. 
---------------------------------- Any matrix ${\mathbf{A}}\in {\mathbb{R}^{m\times n}}$ defines four fundamental spaces: The Column Space of ${\mathbf{A}}$ : This space is spanned by the columns of ${\mathbf{A}}$: $$\rm{range}({\mathbf{A}}) = \{{\boldsymbol{b}}: {\mathbf{A}}{\boldsymbol{x}}= {\boldsymbol{b}},\quad {\boldsymbol{x}}\in {\mathbb{R}^{n}}\} \subset {\mathbb{R}^{m}}.$$ The Null Space of ${\mathbf{A}}$ : This space is spanned by all vectors ${\boldsymbol{x}}\in{\mathbb{R}^{n}}$ such that $ {\mathbf{A}}{\boldsymbol{x}}= {\boldsymbol{0}}$: $$\rm{null}({\mathbf{A}}) = \{{\boldsymbol{x}}: {\mathbf{A}}{\boldsymbol{x}}= {\boldsymbol{0}}\} \subset {\mathbb{R}^{n}}.$$ The Row Space of ${\mathbf{A}}$ : This space is spanned by the rows of ${\mathbf{A}}$: $$\rm{range}({\mathbf{A}}^\top) = \{{\boldsymbol{d}}: {\mathbf{A}}^\top{\boldsymbol{y}}= {\boldsymbol{d}},\quad {\boldsymbol{y}}\in {\mathbb{R}^{m}}\} \subset {\mathbb{R}^{n}}.$$ The Left Null Space of ${\mathbf{A}}$ : This space is spanned by all vectors ${\boldsymbol{y}}\in{\mathbb{R}^{m}}$ such that $ {\mathbf{A}}^\top{\boldsymbol{y}}= {\boldsymbol{0}}$: $$\rm{null}({\mathbf{A}}^\top) = \{{\boldsymbol{y}}: {\mathbf{A}}^\top{\boldsymbol{y}}= {\boldsymbol{0}}\} \subset {\mathbb{R}^{m}}.$$ The SVD reveals orthogonal bases for all these spaces. 
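These four subspaces can be explored numerically. The following small NumPy sketch (the specific matrix is an arbitrary example of ours) exhibits an element of the column space and an element of the null space of a rank-deficient matrix:

```python
import numpy as np

# A rank-deficient 4x3 matrix: its third column is the sum of the first two,
# so rank(A) = 2 and the null space is nontrivial.
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.],
              [2., 0., 2.]])
rho = np.linalg.matrix_rank(A)

# Any b = A x lies in the column space of A, so appending b as an extra
# column does not increase the rank.
x = np.array([3., -1., 2.])
b = A @ x

# z satisfies A z = 0, so z lies in the null space of A.
z = np.array([1., 1., -1.])

assert rho == 2
assert np.linalg.matrix_rank(np.column_stack([A, b])) == 2
assert np.allclose(A @ z, 0.0)
```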
Given a matrix ${\mathbf{A}}\in {\mathbb{R}^{m\times n}}$, with ${\mbox{rank}({\mathbf{A}})} = \rho$, its SVD can be written as: $${\mathbf{A}}= \begin{pmatrix}{\boldsymbol{U}}_\rho & {\boldsymbol{U}}_{\rho,\perp}\end{pmatrix}\begin{pmatrix}{\boldsymbol{\Sigma}}_\rho & {\boldsymbol{0}}\\{\boldsymbol{0}}& {\boldsymbol{0}}\end{pmatrix}\begin{pmatrix}{\boldsymbol{V}}_\rho^\top\\{\boldsymbol{V}}_{\rho,\perp}^\top\end{pmatrix}.$$ It is easy to prove that: $${\mbox{range}({\mathbf{A}})} = {\mbox{range}({\boldsymbol{U}}_\rho)},$$ $${\mbox{null}({\mathbf{A}})} = {\mbox{range}({\boldsymbol{V}}_{\rho,\perp})},$$ $${\mbox{range}({\mathbf{A}}^\top)} ={\mbox{range}({\boldsymbol{V}}_\rho)},$$ $${\mbox{null}({\mathbf{A}}^\top)} = {\mbox{range}({\boldsymbol{U}}_{\rho,\perp})}.$$ The column space of ${\mathbf{A}}$ is orthogonal to the null space of ${\mathbf{A}}^\top$ and their direct sum is ${\mathbb{R}^{m}}$. The column space of ${\mathbf{A}}^\top$ is orthogonal to the null space of ${\mathbf{A}}$ and their direct sum is ${\mathbb{R}^{n}}$. Matrix Schatten norms. {#sxn:schatten} ---------------------- The matrix Schatten norms are a special family of norms that are defined on the vector containing the singular values of a matrix. Given a matrix ${\mathbf{A}}\in{\mathbb{R}^{m\times n}}$ with singular values $\sigma_1\geq\dots\geq\sigma_\rho >0$, we define the Schatten $p$-norm as: $$\|{\mathbf{A}}\|_p = \left(\sum_{i=1}^\rho\sigma_i^p\right)^{\frac{1}{p}}.$$ Common Schatten norms of a matrix ${\mathbf{A}}\in {\mathbb{R}^{m\times n}}$ are: Schatten one-norm : The nuclear norm, i.e., the sum of the singular values. Schatten two-norm : The Frobenius norm, i.e., the square root of the sum of the squares of the singular values. Schatten infinity-norm : The two norm, i.e., the largest singular value. Schatten norms are orthogonally invariant, submultiplicative, and satisfy Hölder’s inequality. The Moore-Penrose pseudoinverse. 
{#sxn:review:MP} -------------------------------- A generalized notion of the well-known matrix inverse is the Moore-Penrose pseudoinverse. Formally, given a matrix ${\mathbf{A}}\in\mathbb{R}^{m \times n}$, a matrix ${\mathbf{A}}^\dagger$ is the Moore-Penrose pseudoinverse of ${\mathbf{A}}$ if it satisfies the following properties: 1. ${\mathbf{A}}{\mathbf{A}}^\dagger{\mathbf{A}}= {\mathbf{A}}$. 2. ${\mathbf{A}}^\dagger{\mathbf{A}}{\mathbf{A}}^\dagger = {\mathbf{A}}^\dagger$. 3. $({\mathbf{A}}{\mathbf{A}}^\dagger)^\top = {\mathbf{A}}{\mathbf{A}}^\dagger$. 4. $({\mathbf{A}}^\dagger{\mathbf{A}})^\top = {\mathbf{A}}^\dagger{\mathbf{A}}$. Given a matrix ${\mathbf{A}}\in{\mathbb{R}^{m\times n}}$ of rank $\rho$ and its thin SVD $${\mathbf{A}}= \sum_{i=1}^\rho \sigma_i{\boldsymbol{u}}_i{\boldsymbol{v}}_i^\top,$$ its Moore-Penrose pseudoinverse ${\mathbf{A}}^{\dagger}$ is $${{\mathbf{A}}}^\dagger = \sum_{i=1}^\rho\frac{1}{\sigma_i}{\boldsymbol{v}}_i{\boldsymbol{u}}_i^\top.$$ If a matrix ${\mathbf{A}}\in{\mathbb{R}^{n\times n}}$ has full rank, then ${\mathbf{A}}^{\dagger} = {\mathbf{A}}^{-1}$. If a matrix ${\mathbf{A}}\in{\mathbb{R}^{m\times n}}$ has full column rank, then ${\mathbf{A}}^{\dagger}{\mathbf{A}}= {\boldsymbol{I}}_n$, and ${\mathbf{A}}{\mathbf{A}}^{\dagger}$ is a projection matrix onto the column span of ${\mathbf{A}}$; while if it has full row rank, then ${\mathbf{A}}{\mathbf{A}}^{\dagger} = {\boldsymbol{I}}_m$, and ${\mathbf{A}}^{\dagger}{\mathbf{A}}$ is a projection matrix onto the row span of ${\mathbf{A}}$. 
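The SVD-based construction and the four defining properties are easy to verify numerically. The following NumPy sketch builds the pseudoinverse from the thin SVD of a random full-column-rank matrix and checks it against NumPy's `np.linalg.pinv`, which computes the same object:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))   # full column rank (with probability one)

# Pseudoinverse via the thin SVD: A^+ = V diag(1/sigma_i) U^T.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_pinv = Vt.T @ np.diag(1.0 / s) @ U.T

# The four Moore-Penrose properties.
assert np.allclose(A @ A_pinv @ A, A)
assert np.allclose(A_pinv @ A @ A_pinv, A_pinv)
assert np.allclose((A @ A_pinv).T, A @ A_pinv)
assert np.allclose((A_pinv @ A).T, A_pinv @ A)

# Full column rank: A^+ A = I_3, and A A^+ projects onto range(A).
assert np.allclose(A_pinv @ A, np.eye(3))
assert np.allclose(A_pinv, np.linalg.pinv(A))
```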
A particularly important property regarding the pseudoinverse of the product of two matrices is the following: for matrices ${\boldsymbol{Y}}_1\in\mathbb{R}^{m\times p}$ and ${\boldsymbol{Y}}_2\in\mathbb{R}^{p\times n}$, satisfying ${\mbox{rank}({\boldsymbol{Y}}_1)}={\mbox{rank}({\boldsymbol{Y}}_2)}$, [@Bjo15 Theorem 2.2.3] states that $$\begin{aligned} \label{eqn:pinv} \left({\boldsymbol{Y}}_1{\boldsymbol{Y}}_2\right)^{\dagger} = {\boldsymbol{Y}}_2^{\dagger}{\boldsymbol{Y}}_1^{\dagger}.\end{aligned}$$ (We emphasize that the condition on the ranks is crucial: while the inverse of the product of two nonsingular matrices always equals the product of their inverses in reverse order, the analogous statement does not hold in full generality for the Moore-Penrose pseudoinverse [@Bjo15].) The fundamental spaces of the Moore-Penrose pseudoinverse are connected with those of the actual matrix. Given a matrix ${\mathbf{A}}$ and its Moore-Penrose pseudoinverse ${\mathbf{A}}^\dagger$, the column space of ${\mathbf{A}}^\dagger$ is given by: $$\rm{range}({\mathbf{A}}^\dagger) = \rm{range}({\mathbf{A}}^\top{\mathbf{A}}) = \rm{range}({\mathbf{A}}^\top) ,$$ and it is orthogonal to the null space of ${\mathbf{A}}$. The null space of ${\mathbf{A}}^\dagger$ is given by: $$\rm{null}({\mathbf{A}}^\dagger) = \rm{null}({\mathbf{A}}{\mathbf{A}}^\top) = \rm{null}({\mathbf{A}}^\top) ,$$ and it is orthogonal to the column space of ${\mathbf{A}}$. References. ----------- We refer the interested reader to [@Strang88; @GVL96; @TrefethenBau97; @Bjo15] for additional background on linear algebra and matrix computations, as well as to [@Stewart90; @Bhatia97] for additional background on matrix perturbation theory. Discrete Probability {#sxn:dp} ==================== In this section, we present a brief overview of discrete probability. 
More advanced results (in particular, Bernstein-type inequalities for real-valued and matrix-valued random variables) will be introduced in the appropriate context later in the chapter. It is worth noting that most of RandNLA builds upon simple, fundamental principles of discrete (instead of continuous) probability. Random experiments: basics. --------------------------- A random experiment is any procedure that can be infinitely repeated and has a well-defined set of possible outcomes. Typical examples are the roll of a die or the toss of a coin. The sample space $\Omega$ of a random experiment is the set of all possible outcomes of the random experiment. If the random experiment only has two possible outcomes (e.g., success and failure), then it is often called a Bernoulli trial. In discrete probability, the sample space $\Omega$ is finite. (We will *not* cover countably or uncountably infinite sample spaces in this chapter.) An event is any subset of the sample space $\Omega$. Clearly, the set of all possible events is the powerset (the set of all possible subsets) of $\Omega$, often denoted as $2^{\Omega}$. As an example, consider the following random experiment: toss a coin three times. Then, the sample space $\Omega$ is $$\Omega = \{HHH, HHT, HTH, HTT, THH, THT, TTH, TTT\}$$ and an event ${\mathcal E}$ could be described in words as “the output of the random experiment was either all heads or all tails”. Then, ${\mathcal E} = \{HHH,TTT\}.$ The *probability measure* or *probability function* maps the (finite) sample space $\Omega$ to the interval $[0,1]$. Formally, let the function ${\mbox{}{\bf{Pr}}\left[\omega\right]}$ for all $\omega \in \Omega$ be a function whose domain is $\Omega$ and whose range is the interval $[0,1]$. 
This function has the so-called normalization property, namely $$\sum_{\omega \in \Omega} {\mbox{}{\bf{Pr}}\left[\omega\right]} = 1.$$ If $\mathcal E$ is an event, then $$\label{eqn:setprop1} {\mbox{}{\bf{Pr}}\left[\mathcal E\right]} = \sum_{\omega \in {\mathcal E}} {\mbox{}{\bf{Pr}}\left[\omega\right]},$$ namely the probability of an event is the sum of the probabilities of its elements. It follows that the probability of the empty event (the event ${\mathcal E}$ that corresponds to the empty set) is equal to zero, whereas the probability of the event $\Omega$ (clearly $\Omega$ itself is an event) is equal to one. Finally, the uniform probability function is defined as ${\mbox{}{\bf{Pr}}\left[\omega\right]} = 1/\abs{\Omega}$, for all $\omega \in \Omega$. Properties of events. --------------------- Recall that events are sets and thus set operations (union, intersection, complementation) are applicable. Assuming finite sample spaces and using Eqn. (\[eqn:setprop1\]), it is easy to prove the following property for the union of two events ${\mathcal E}_1$ and ${\mathcal E}_2$: $${\mbox{}{\bf{Pr}}\left[{\mathcal E}_1 \cup {\mathcal E}_2\right]} = {\mbox{}{\bf{Pr}}\left[{\mathcal E}_1\right]} + {\mbox{}{\bf{Pr}}\left[{\mathcal E}_2\right]} - {\mbox{}{\bf{Pr}}\left[{\mathcal E}_1\cap {\mathcal E}_2\right]}.$$ This property follows from the well-known inclusion-exclusion principle for set union and can be generalized to more than two sets and thus to more than two events. Similarly, one can prove that ${\mbox{}{\bf{Pr}}\left[\bar{\mathcal E}\right]} = 1-{\mbox{}{\bf{Pr}}\left[{\mathcal E}\right]}.$ In the above, $\bar{\mathcal E}$ denotes the complement of the event $\mathcal E$. Finally, it is trivial to see that if ${\mathcal E}_1$ is a subset of ${\mathcal E}_2$ then ${\mbox{}{\bf{Pr}}\left[{\mathcal E}_1\right]} \leq {\mbox{}{\bf{Pr}}\left[{\mathcal E}_2\right]}.$ The union bound. 
---------------- The union bound is a fundamental result in discrete probability and can be used to bound the probability of a union of events without any special assumptions on the relationships between the events. Indeed, let ${\mathcal E}_i$ for all $i=1,\ldots,n$ be events defined over a finite sample space $\Omega$. Then, the union bound states that $${\mbox{}{\bf{Pr}}\left[\bigcup_{i=1}^n{\mathcal E}_i\right]} \leq \sum_{i=1}^n {\mbox{}{\bf{Pr}}\left[{\mathcal E}_i\right]}.$$ The proof of the union bound is quite simple and can be done by induction, using the inclusion-exclusion principle for two sets that was discussed in the previous section. Disjoint events and independent events. --------------------------------------- Two events ${\mathcal E}_1$ and ${\mathcal E}_2$ are called *disjoint* or *mutually exclusive* if their intersection is the empty set, i.e., if $${\mathcal E}_1 \cap {\mathcal E}_2 = \emptyset .$$ This can be generalized to any number of events by requiring that the events are pairwise disjoint. Two events ${\mathcal E}_1$ and ${\mathcal E}_2$ are called *independent* if the occurrence of one does not affect the probability of the other. Formally, they must satisfy $${\mbox{}{\bf{Pr}}\left[{\mathcal E}_1 \cap {\mathcal E}_2\right]} = {\mbox{}{\bf{Pr}}\left[{\mathcal E}_1\right]}\cdot {\mbox{}{\bf{Pr}}\left[{\mathcal E}_2\right]}.$$ This can be generalized to more than two events by requiring mutual independence, i.e., that the product rule holds for every subcollection of the events (a condition that is strictly stronger than pairwise independence). Conditional probability. ------------------------ For any two events ${\mathcal E}_1$ and ${\mathcal E}_2$, the conditional probability ${\mbox{}{\bf{Pr}}\left[{\mathcal E}_1|{\mathcal E}_2\right]}$ is the probability that ${\mathcal E}_1$ occurs given that ${\mathcal E}_2$ occurs. 
Formally, $${\mbox{}{\bf{Pr}}\left[{\mathcal E}_1|{\mathcal E}_2\right]} = \frac{{\mbox{}{\bf{Pr}}\left[{\mathcal E}_1\cap{\mathcal E}_2\right]}}{{\mbox{}{\bf{Pr}}\left[{\mathcal E}_2\right]}}.$$ Obviously, the probability of ${\mathcal E}_2$ in the denominator must be non-zero for this to be well-defined. The well-known Bayes rule states that for any two events ${\mathcal E}_1$ and ${\mathcal E}_2$ such that ${\mbox{}{\bf{Pr}}\left[{\mathcal E}_1\right]}>0$ and ${\mbox{}{\bf{Pr}}\left[{\mathcal E}_2\right]}>0$, $${\mbox{}{\bf{Pr}}\left[{\mathcal E}_2|{\mathcal E}_1\right]} = \frac{{\mbox{}{\bf{Pr}}\left[{\mathcal E}_1|{\mathcal E}_2\right]}{\mbox{}{\bf{Pr}}\left[{\mathcal E}_2\right]}}{{\mbox{}{\bf{Pr}}\left[{\mathcal E}_1\right]}}.$$ Using the Bayes rule and the fact that the sample space $\Omega$ can be partitioned as $\Omega = {\mathcal E}_2 \cup \overline{{\mathcal E}_2}$, it follows that $${\mbox{}{\bf{Pr}}\left[{\mathcal E}_1\right]} = {\mbox{}{\bf{Pr}}\left[{\mathcal E}_1|{\mathcal E}_2\right]}{\mbox{}{\bf{Pr}}\left[{\mathcal E}_2\right]} + {\mbox{}{\bf{Pr}}\left[{\mathcal E}_1|\overline{{\mathcal E}_2}\right]}{\mbox{}{\bf{Pr}}\left[\overline{{\mathcal E}_2}\right]}.$$ For the latter decomposition, the probability of ${\mathcal E}_2$ must lie in the open interval $(0,1)$, so that conditioning on both ${\mathcal E}_2$ and $\overline{{\mathcal E}_2}$ is well-defined. We can now revisit the notion of independent events. Indeed, for any two events ${\mathcal E}_1$ and ${\mathcal E}_2$ such that ${\mbox{}{\bf{Pr}}\left[{\mathcal E}_1\right]}>0$ and ${\mbox{}{\bf{Pr}}\left[{\mathcal E}_2\right]}>0$ the following statements are equivalent: 1. ${\mbox{}{\bf{Pr}}\left[{\mathcal E}_1|{\mathcal E}_2\right]} = {\mbox{}{\bf{Pr}}\left[{\mathcal E}_1\right]}$, 2. ${\mbox{}{\bf{Pr}}\left[{\mathcal E}_2|{\mathcal E}_1\right]} = {\mbox{}{\bf{Pr}}\left[{\mathcal E}_2\right]}$, and 3. ${\mbox{}{\bf{Pr}}\left[{\mathcal E}_1 \cap {\mathcal E}_2\right]} = {\mbox{}{\bf{Pr}}\left[{\mathcal E}_1\right]}{\mbox{}{\bf{Pr}}\left[{\mathcal E}_2\right]}$. 
Recall that the last statement was the definition of independence in the previous section. Random variables. ----------------- Random variables are *functions* mapping the sample space $\Omega$ to the real numbers $\mathbb{R}$. Note that even though they are called variables, in reality they are functions. Let $\Omega$ be the sample space of a random experiment. A formal definition for the random variable $X$ would be as follows: let $\alpha\in\mathbb{R}$ be a real number (not necessarily positive) and note that the function $$X^{-1}\left(\alpha\right) = \left\{\omega \in \Omega\ :\ X\left(\omega\right)=\alpha\right\}$$ returns a subset of $\Omega$ and thus is an event. Therefore, the function $X^{-1}\left(\alpha\right)$ has a probability. We will abuse notation and write: $${\mbox{}{\bf{Pr}}\left[X = \alpha\right]}$$ instead of the more proper notation ${\mbox{}{\bf{Pr}}\left[X^{-1}\left(\alpha\right)\right]}$. This function of $\alpha$ is of great interest and it is easy to generalize as follows: $${\mbox{}{\bf{Pr}}\left[X \leq \alpha\right]} = {\mbox{}{\bf{Pr}}\left[\bigcup_{\alpha^{\prime} \in (-\infty,\alpha]} X^{-1}\left(\alpha^{\prime}\right)\right]} = {\mbox{}{\bf{Pr}}\left[\left\{\omega \in \Omega\ :\ X\left(\omega\right)\leq \alpha\right\}\right]}.$$ Probability mass function and cumulative distribution function. --------------------------------------------------------------- Two common functions associated with random variables are the probability mass function (PMF) and the cumulative distribution function (CDF). The first measures the probability that a random variable takes a particular value $\alpha\in\mathbb{R}$, and the second measures the probability that a random variable takes any value at most $\alpha\in\mathbb{R}$. 
Given a random variable $X$ and a real number $\alpha$, the *probability mass function (PMF)* is the function $f(\alpha) = {\mbox{}{\bf{Pr}}\left[X=\alpha\right]}.$ Given a random variable $X$ and a real number $\alpha$, the *cumulative distribution function (CDF)* is the function $F(\alpha) = {\mbox{}{\bf{Pr}}\left[X\leq\alpha\right]}.$ It is obvious from the above definitions that $F(\alpha) = \sum_{x\leq \alpha} f(x).$ Independent random variables. ----------------------------- Following the notion of independence for events, we can now define the notion of independence for random variables. Indeed, two random variables $X$ and $Y$ are independent if for all reals $a$ and $b$, $${\mbox{}{\bf{Pr}}\left[X=a\ \mbox{and}\ Y=b\right]} = {\mbox{}{\bf{Pr}}\left[X=a\right]}\cdot{\mbox{}{\bf{Pr}}\left[Y=b\right]}.$$ Expectation of a random variable. --------------------------------- Given a random variable $X$, its expectation ${\mbox{}{\bf{E}}\left[X\right]}$ is defined as $${\mbox{}{\bf{E}}\left[X\right]} = \sum_{x \in X(\Omega)} x \cdot {\mbox{}{\bf{Pr}}\left[X=x\right]}.$$ In the above, $X(\Omega)$ is the image of the random variable $X$ over the sample space $\Omega$; recall that $X$ is a function. That is, the sum is over the range of the random variable $X$. Alternatively, ${\mbox{}{\bf{E}}\left[X\right]}$ can be expressed in terms of a sum over the domain of $X$, i.e., over $\Omega$. For finite sample spaces $\Omega$, such as those that arise in discrete probability, we get $${\mbox{}{\bf{E}}\left[X\right]} = \sum_{\omega \in \Omega} X(\omega) {\mbox{}{\bf{Pr}}\left[\omega\right]}.$$ We now discuss fundamental properties of the expectation. 
The most important property is linearity of expectation: for any random variables $X$ and $Y$ and real number $\lambda$, $$\begin{aligned} {\mbox{}{\bf{E}}\left[X+Y\right]} &= {\mbox{}{\bf{E}}\left[X\right]} +{\mbox{}{\bf{E}}\left[Y\right]},\ \mbox{and}\\ {\mbox{}{\bf{E}}\left[\lambda X\right]} &= \lambda{\mbox{}{\bf{E}}\left[X\right]}. \end{aligned}$$ The first property generalizes to any finite sum of random variables and does not need any assumptions on the random variables involved in the summation. If two random variables $X$ and $Y$ are independent then we can manipulate the expectation of their product as follows: $${\mbox{}{\bf{E}}\left[XY\right]} = {\mbox{}{\bf{E}}\left[X\right]}\cdot{\mbox{}{\bf{E}}\left[Y\right]}.$$ Variance of a random variable. ------------------------------ Given a random variable $X$, its variance ${\mbox{}{\bf{Var}}\left[X\right]}$ is defined as $${\mbox{}{\bf{Var}}\left[X\right]} = {\mbox{}{\bf{E}}\left[\left(X-{\mbox{}{\bf{E}}\left[X\right]}\right)^2\right]}.$$ In words, the variance measures the average of the square of the difference $X - {\mbox{}{\bf{E}}\left[X\right]}$. The standard deviation is the square root of the variance and is often denoted by $\sigma$. It is easy to prove that $${\mbox{}{\bf{Var}}\left[X\right]} = {\mbox{}{\bf{E}}\left[X^2\right]} - {\mbox{}{\bf{E}}\left[X\right]}^2.$$ This obviously implies $${\mbox{}{\bf{Var}}\left[X\right]} \leq {\mbox{}{\bf{E}}\left[X^2\right]},$$ which is often all we need in order to get an upper bound for the variance. Unlike the expectation, the variance does not have a linearity property, unless the random variables involved are independent. Indeed, if the random variables $X$ and $Y$ are independent, then $$\begin{aligned} {\mbox{}{\bf{Var}}\left[X+Y\right]} = {\mbox{}{\bf{Var}}\left[X\right]} +{\mbox{}{\bf{Var}}\left[Y\right]}. \end{aligned}$$ The above property generalizes to sums of more than two random variables, assuming that all involved random variables are pairwise independent. 
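As a concrete numerical illustration of these definitions, the expectation and variance of a fair six-sided die can be computed by direct enumeration over the sample space (a small Python sketch using exact rational arithmetic):

```python
from fractions import Fraction

# Fair six-sided die: uniform probability function over Omega = {1,...,6}.
omega = range(1, 7)
p = Fraction(1, 6)

E = sum(x * p for x in omega)        # E[X] = 7/2
E2 = sum(x * x * p for x in omega)   # E[X^2] = 91/6
var = E2 - E * E                     # Var[X] = E[X^2] - E[X]^2 = 35/12

assert E == Fraction(7, 2)
assert var == Fraction(35, 12)
```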
Also, for any real $\lambda$, $$\begin{aligned} {\mbox{}{\bf{Var}}\left[\lambda X\right]} = \lambda^2{\mbox{}{\bf{Var}}\left[X\right]}. \end{aligned}$$ Markov’s inequality. {#sxn:review:Markov} -------------------- Let $X$ be a non-negative random variable; for any $\alpha > 0$, $${\mbox{}{\bf{Pr}}\left[X\geq \alpha\right]}\leq \frac{{\mbox{}{\bf{E}}\left[X\right]}}{\alpha}.$$ This is a very simple inequality to apply and only needs an upper bound for the expectation of $X$. An equivalent formulation is the following: let $X$ be a non-negative random variable; for any $k > 1$, $${\mbox{}{\bf{Pr}}\left[X\geq k\cdot {\mbox{}{\bf{E}}\left[X\right]}\right]}\leq \frac{1}{k},$$ or, equivalently, $${\mbox{}{\bf{Pr}}\left[X< k\cdot {\mbox{}{\bf{E}}\left[X\right]}\right]}\geq 1-\frac{1}{k}.$$ In words, the probability that a non-negative random variable is at least $k$ times its expectation is at most $1/k$. In order to prove Markov’s inequality, we will show that, for any $t > 0$, $${\mbox{}{\bf{Pr}}\left[X \geq t\right]} \leq \frac{{\mbox{}{\bf{E}}\left[X\right]}}{t};$$ the formulations above follow by setting $t = k\cdot{\mbox{}{\bf{E}}\left[X\right]}$. In order to prove this inequality, we define the following indicator function $$f(X) = \begin{cases} 1, & \text{if }X \geq t\\ 0, & \text{otherwise} \end{cases}$$ with expectation: $$\begin{aligned} {\mbox{}{\bf{E}}\left[f(X)\right]} = 1 \cdot {\mbox{}{\bf{Pr}}\left[X \geq t\right]} + 0 \cdot {\mbox{}{\bf{Pr}}\left[X < t\right]}={\mbox{}{\bf{Pr}}\left[X \geq t\right]}.\end{aligned}$$ Clearly, since $X$ is non-negative, the function definition implies $f(X) \leq \frac{X}{t}$. Taking expectation on both sides: $${\mbox{}{\bf{E}}\left[f(X)\right]} \leq {\mbox{}{\bf{E}}\left[\frac{X}{t}\right]} = \frac{{\mbox{}{\bf{E}}\left[X\right]}}{t}.$$ Thus, $${\mbox{}{\bf{Pr}}\left[X \geq t\right]} \leq \frac{{\mbox{}{\bf{E}}\left[X\right]}}{t}.$$ Hence, we conclude the proof of Markov’s inequality. The Coupon Collector Problem. 
{#sxn:couponcollector} ----------------------------- Suppose there are $m$ types of coupons and we seek to collect them in independent trials, where in each trial the probability of obtaining any one coupon is $1/m$ (uniform). Let $X$ denote the number of trials that we need in order to collect at least one coupon of each type. Then, one can prove that [@MotwaniRaghavan95 Section 3.6]: $$\begin{aligned} {\mbox{}{\bf{E}}\left[X\right]} &= m \ln m + \Theta\left(m\right),\ \mbox{and}\\ {\mbox{}{\bf{Var}}\left[X\right]} &= \frac{\pi^2}{6} m^2 + \Theta\left(m \ln m\right). \end{aligned}$$ The occurrence of the additional $\ln m$ factor in the expectation is common in sampling-based approaches that attempt to recover $m$ different types of objects using sampling in independent trials. Such factors will appear in many RandNLA sampling-based algorithms. References. ----------- There are numerous texts covering discrete probability; most of the material in this chapter was adapted from [@MotwaniRaghavan95]. Randomized Matrix Multiplication {#chapter:MM} ================================ Our first randomized algorithm for a numerical linear algebra problem is a simple, sampling-based approach to approximate the product of two matrices ${\mathbf{A}}\in \mathbb{R}^{m \times n}$ and ${\boldsymbol{B}}\in \mathbb{R}^{n \times p}$. This randomized matrix multiplication algorithm is at the heart of all of the RandNLA algorithms that we will discuss in this chapter, and indeed all of RandNLA more generally. It is of interest both pedagogically and in its own right, and it is also used in an essential way in the analysis of the least squares approximation and low-rank approximation algorithms discussed below. 
We start by noting that the product ${\mathbf{A}}{\boldsymbol{B}}$ may be written as the sum of $n$ rank-one matrices: $${\mathbf{A}}{\boldsymbol{B}}= \sum_{t=1}^{n} \underbrace{{\mathbf{A}}_{*t} {\boldsymbol{B}}_{t*}}_{\in \mathbb{R}^{m \times p}}, \label{AB_sum_rank_one}$$ where each of the summands is the *outer product* of a column of ${\mathbf{A}}$ and the corresponding row of ${\boldsymbol{B}}$. Recall that the standard definition of matrix multiplication states that the $(i,j)$-th entry of the matrix product ${\mathbf{A}}{\boldsymbol{B}}$ is equal to the *inner product* of the $i$-th row of ${\mathbf{A}}$ and the $j$-th column of ${\boldsymbol{B}}$, namely $$({\mathbf{A}}{\boldsymbol{B}})_{ij} = {\mathbf{A}}_{i*} {\boldsymbol{B}}_{*j} \in \mathbb{R}.$$ It is easy to see that the two definitions are equivalent. However, when matrix multiplication is formulated as in Eqn. (\[AB\_sum\_rank\_one\]), a simple randomized algorithm to approximate the product ${\mathbf{A}}{\boldsymbol{B}}$ suggests itself: in independent identically distributed (i.i.d.) trials, randomly sample (and appropriately rescale) a few rank-one matrices from the $n$ terms in the summation of Eqn. (\[AB\_sum\_rank\_one\]); and then output the sum of the (rescaled) terms as an estimator for ${\mathbf{A}}{\boldsymbol{B}}$. Consider the <span style="font-variant:small-caps;">RandMatrixMultiply</span> algorithm (Algorithm \[fig:BasicMatrixMultiplicationAlgorithm\]), which makes this simple idea precise. 
When this algorithm is given as input two matrices ${\mathbf{A}}$ and ${\boldsymbol{B}}$, a probability distribution $\left\{ p_k \right\}_{k=1}^{n}$, and a number $c$ of column-row pairs to choose, it returns as output an estimator for the product ${\mathbf{A}}{\boldsymbol{B}}$ of the form $$\sum_{t=1}^{c} \frac{1}{cp_{i_t}} {\mathbf{A}}_{*i_t} {\boldsymbol{B}}_{i_t *}.$$ Equivalently, the above estimator can be thought of as the product of the two matrices ${\boldsymbol{C}}$ and ${\boldsymbol{R}}$ formed by the <span style="font-variant:small-caps;">RandMatrixMultiply</span> algorithm, where ${\boldsymbol{C}}$ consists of $c$ (rescaled) columns of ${\mathbf{A}}$ and ${\boldsymbol{R}}$ consists of the corresponding (rescaled) rows of ${\boldsymbol{B}}$. Observe that $${\boldsymbol{C}}{\boldsymbol{R}}= \sum_{t=1}^{c} {\boldsymbol{C}}_{*t} {\boldsymbol{R}}_{t*} = \sum_{t=1}^{c} \left(\sqrt{\frac{1}{cp_{i_t}}} {\mathbf{A}}_{*i_t}\right)\left( \sqrt{\frac{1}{cp_{i_t}}}{\boldsymbol{B}}_{i_t *}\right) = \frac{1}{c} \sum_{t=1}^{c} \frac{1}{p_{i_t}} {\mathbf{A}}_{*i_t} {\boldsymbol{B}}_{i_t *}.$$ Therefore, the procedure used for sampling and scaling column-row pairs in the <span style="font-variant:small-caps;">RandMatrixMultiply</span> algorithm corresponds to sampling and rescaling terms in Eqn. (\[AB\_sum\_rank\_one\]). The analysis of RandNLA algorithms has benefited enormously from formulating algorithms using the so-called *sampling-and-rescaling matrix formalism*. Let’s define the sampling-and-rescaling matrix ${\boldsymbol{S}}\in {\mathbb{R}^{n \times c}}$ to be a matrix with ${\boldsymbol{S}}_{i_t t} = 1/\sqrt{cp_{i_t}}$ if the $i_t$-th column of ${\mathbf{A}}$ is chosen in the $t$-th trial (all other entries of ${\boldsymbol{S}}$ are set to zero). 
Then $${\boldsymbol{C}}= {\mathbf{A}}{\boldsymbol{S}}\mbox{ and } {\boldsymbol{R}}= {\boldsymbol{S}}^{T} {\boldsymbol{B}},$$ so that $ {\boldsymbol{C}}{\boldsymbol{R}}= {\mathbf{A}}{\boldsymbol{S}}{\boldsymbol{S}}^{T} {\boldsymbol{B}}\approx {\mathbf{A}}{\boldsymbol{B}}$. Obviously, the matrix ${\boldsymbol{S}}$ is very sparse, having a single non-zero entry per column, for a total of $c$ non-zero entries, and so it is not explicitly constructed and stored by the algorithm. The choice of the sampling probabilities $\left\{ p_k \right\}_{k=1}^{n}$ in the <span style="font-variant:small-caps;">RandMatrixMultiply</span> algorithm is very important. As we will prove in Lemma \[lem:multexpvar\], the estimator returned by the <span style="font-variant:small-caps;">RandMatrixMultiply</span> algorithm is (in an element-wise sense) unbiased, regardless of our choice of the sampling probabilities. However, a natural notion of the *variance* of our estimator (see Theorem \[lem:basicmult\] for a precise definition) is minimized when the sampling probabilities are set to $$p_k = \frac{\FNorm{{\mathbf{A}}_{*k} {\boldsymbol{B}}_{k*}}}{\sum_{k'=1}^n\FNorm{{\mathbf{A}}_{*k'} {\boldsymbol{B}}_{k'*}}} = \frac{{\mbox{}\|{\mathbf{A}}_{*k}\|_2} {\mbox{}\|{\boldsymbol{B}}_{k*}\|_2}}{\sum_{k'=1}^n{\mbox{}\|{\mathbf{A}}_{*k'}\|_2} {\mbox{}\|{\boldsymbol{B}}_{k'*}\|_2}}.$$ In words, the best choice when sampling rank-one matrices from the summation of Eqn. (\[AB\_sum\_rank\_one\]) is to select rank-one matrices that have larger Frobenius norms with higher probabilities. This is equivalent to selecting column-row pairs that have larger (products of) Euclidean norms with higher probability. This approach for approximating matrix multiplication has several advantages. First, it is conceptually simple. 
Second, since the heart of the algorithm involves matrix multiplication of smaller matrices, it can use any algorithms that exist in the literature for performing the desired matrix multiplication. Third, this approach does not tamper with the sparsity of the input matrices. Finally, the algorithm can be easily implemented in one pass over the input matrices ${\mathbf{A}}$ and ${\boldsymbol{B}}$, given the sampling probabilities $\left\{ p_k \right\}_{k=1}^{n}$. See [@dkm_matrix1 Section 4.2] for a detailed discussion regarding the implementation of the <span style="font-variant:small-caps;">RandMatrixMultiply</span> algorithm in the pass-efficient and streaming models of computation. Analysis of the [R[AND]{}M[ATRIX]{}M[ULTIPLY]{}]{} algorithm. {#sxn:matmult:analysis} ------------------------------------------------------------- This section provides upper bounds for ${\mbox{}\|{\mathbf{A}}{\boldsymbol{B}}-{\boldsymbol{C}}{\boldsymbol{R}}\|_F^2}$, the squared Frobenius norm of the error matrix, where ${\boldsymbol{C}}$ and ${\boldsymbol{R}}$ are the outputs of the <span style="font-variant:small-caps;">RandMatrixMultiply</span> algorithm. Our first lemma proves that the expectation of the $(i,j)$-th element of the estimator ${\boldsymbol{C}}{\boldsymbol{R}}$ is equal to the $(i,j)$-th element of the exact product ${\mathbf{A}}{\boldsymbol{B}}$, regardless of the choice of the sampling probabilities. It also bounds the variance of the $(i,j)$-th element of the estimator, which does depend on our choice of the sampling probabilities. \[lem:multexpvar\] Let ${\boldsymbol{C}}$ and ${\boldsymbol{R}}$ be constructed as described in the <span style="font-variant:small-caps;">RandMatrixMultiply</span> algorithm. 
Then, $${\mbox{}{\bf{E}}\left[({\boldsymbol{C}}{\boldsymbol{R}})_{ij}\right]}=({\mathbf{A}}{\boldsymbol{B}})_{ij}$$ and $${\mbox{}{\bf{Var}}\left[({\boldsymbol{C}}{\boldsymbol{R}})_{ij}\right]} \leq \frac{1}{c}\sum_{k=1}^n \frac{{\mathbf{A}}_{ik}^2 {\boldsymbol{B}}_{kj}^2}{p_k}.$$ Fix some pair $i,j$. For $t=1,\dots,c$, define $ X_t = \left( \frac{ {\mathbf{A}}_{*i_t}{\boldsymbol{B}}_{i_t *} }{ cp_{i_t} } \right)_{ij} = \frac{ {\mathbf{A}}_{ii_t}{\boldsymbol{B}}_{i_tj} }{ cp_{i_t} } $. Thus, for any $t$, $${\mbox{}{\bf{E}}\left[X_t\right]} = \sum_{k=1}^n p_k \frac{ {\mathbf{A}}_{ik}{\boldsymbol{B}}_{kj} }{ cp_k } = \frac{1}{c} \sum_{k=1}^{n}{\mathbf{A}}_{ik}{\boldsymbol{B}}_{kj} = \frac{1}{c}({\mathbf{A}}{\boldsymbol{B}})_{ij}.$$ Since we have $({\boldsymbol{C}}{\boldsymbol{R}})_{ij} = \sum_{t = 1}^{c}X_t$, it follows that $${\mbox{}{\bf{E}}\left[({\boldsymbol{C}}{\boldsymbol{R}})_{ij}\right]} = {\mbox{}{\bf{E}}\left[\sum_{t = 1}^{c}X_t\right]} = \sum_{t = 1}^{c}{\mbox{}{\bf{E}}\left[X_t\right]} = ({\mathbf{A}}{\boldsymbol{B}})_{ij}.$$ Hence, ${\boldsymbol{C}}{\boldsymbol{R}}$ is an unbiased estimator of ${\mathbf{A}}{\boldsymbol{B}}$, regardless of the choice of the sampling probabilities. 
Using the fact that $({\boldsymbol{C}}{\boldsymbol{R}})_{ij}$ is the sum of $c$ independent random variables, we get $${\mbox{}{\bf{Var}}\left[({\boldsymbol{C}}{\boldsymbol{R}})_{ij}\right]} = {\mbox{}{\bf{Var}}\left[\sum_{t=1}^c X_t\right]} = \sum_{t=1}^c{\mbox{}{\bf{Var}}\left[X_t\right]}.$$ Using $ {\mbox{}{\bf{Var}}\left[X_t\right]} \leq {\mbox{}{\bf{E}}\left[X_t^2\right]}= \sum_{k=1}^n \frac{ {\mathbf{A}}_{ik}^2 {\boldsymbol{B}}_{kj}^2 }{ c^2 p_k }, $ we get $${\mbox{}{\bf{Var}}\left[({\boldsymbol{C}}{\boldsymbol{R}})_{ij}\right]} = \sum_{t=1}^c{\mbox{}{\bf{Var}}\left[X_t\right]} \leq c \sum_{k=1}^n \frac{ {\mathbf{A}}_{ik}^2 {\boldsymbol{B}}_{kj}^2 }{ c^2 p_k } = \frac{1}{c} \sum_{k=1}^n \frac{{\mathbf{A}}_{ik}^2 {\boldsymbol{B}}_{kj}^2}{p_k},$$ which concludes the proof of the lemma. Our next result bounds the expectation of the Frobenius norm of the error matrix ${\mathbf{A}}{\boldsymbol{B}}-{\boldsymbol{C}}{\boldsymbol{R}}$. Notice that this error metric depends on our choice of the sampling probabilities $\left\{ p_k \right\}_{k=1}^{n}$. \[lem:basicmult\] Construct ${\boldsymbol{C}}$ and ${\boldsymbol{R}}$ using the <span style="font-variant:small-caps;">RandMatrixMultiply</span> algorithm and let ${\boldsymbol{C}}{\boldsymbol{R}}$ be an approximation to ${\mathbf{A}}{\boldsymbol{B}}$. 
Then, $${\mbox{}{\bf{E}}\left[{\mbox{}\|{\mathbf{A}}{\boldsymbol{B}}-{\boldsymbol{C}}{\boldsymbol{R}}\|_F^2}\right]} \leq \sum_{k=1}^n \frac{{\mbox{}\|{\mathbf{A}}_{*k}\|_2^2}{\mbox{}\|{\boldsymbol{B}}_{k*}\|_2^2}}{c p_k}.$$ Furthermore, if $$p_k = \frac{ {\mbox{}\|{\mathbf{A}}_{*k}\|_2} {\mbox{}\|{\boldsymbol{B}}_{k*}\|_2} }{ \sum_{k^\prime=1}^n {\mbox{}\|{\mathbf{A}}_{*k^\prime}\|_2} {\mbox{}\|{\boldsymbol{B}}_{k^\prime *}\|_2} } , \label{optimal_probs}$$ for all $k = 1,\ldots,n$, then $${\mbox{}{\bf{E}}\left[{\mbox{}\|{\mathbf{A}}{\boldsymbol{B}}-{\boldsymbol{C}}{\boldsymbol{R}}\|_F^2}\right]} \leq \frac{1}{c}\left( \sum_{k=1}^n {\mbox{}\|{\mathbf{A}}_{*k}\|_2}{\mbox{}\|{\boldsymbol{B}}_{k*}\|_2} \right)^2.$$ This choice for $\left\{ p_k \right\}_{k=1}^{n}$ minimizes ${\mbox{}{\bf{E}}\left[{\mbox{}\|{\mathbf{A}}{\boldsymbol{B}}-{\boldsymbol{C}}{\boldsymbol{R}}\|_F^2}\right]}$, among possible choices for the sampling probabilities. First of all, since ${\boldsymbol{C}}{\boldsymbol{R}}$ is an unbiased estimator of ${\mathbf{A}}{\boldsymbol{B}}$, ${\mbox{}{\bf{E}}\left[({\mathbf{A}}{\boldsymbol{B}}-{\boldsymbol{C}}{\boldsymbol{R}})_{ij}\right]} = 0$. Thus, $${\mbox{}{\bf{E}}\left[{\mbox{}\|{\mathbf{A}}{\boldsymbol{B}}-{\boldsymbol{C}}{\boldsymbol{R}}\|_F^2}\right]} = \sum_{i=1}^m \sum_{j=1}^p {\mbox{}{\bf{E}}\left[\left({\mathbf{A}}{\boldsymbol{B}}-{\boldsymbol{C}}{\boldsymbol{R}}\right)_{ij}^2\right]} = \sum_{i=1}^m \sum_{j=1}^p {\mbox{}{\bf{Var}}\left[({\boldsymbol{C}}{\boldsymbol{R}})_{ij}\right]} .$$ Using Lemma \[lem:multexpvar\], we get $$\begin{aligned} {\mbox{}{\bf{E}}\left[{\mbox{}\|{\mathbf{A}}{\boldsymbol{B}}-{\boldsymbol{C}}{\boldsymbol{R}}\|_F^2}\right]} &\leq \frac{1}{c}\sum_{k=1}^n \frac{1}{p_k} \left(\sum_i {\mathbf{A}}_{ik}^2\right)\left(\sum_j {\boldsymbol{B}}_{kj}^2\right) \\ &= \frac{1}{c}\sum_{k=1}^n \frac{1}{p_k} {\mbox{}\|{\mathbf{A}}_{*k}\|_2^2} {\mbox{}\|{\boldsymbol{B}}_{k*}\|_2^2}. \end{aligned}$$ Let $p_k$ be as in Eqn. 
(\[optimal\_probs\]); then $$\begin{aligned} {\mbox{}{\bf{E}}\left[{\mbox{}\|{\mathbf{A}}{\boldsymbol{B}}-{\boldsymbol{C}}{\boldsymbol{R}}\|_F^2}\right]} \leq \frac{1}{c}\left(\sum_{k=1}^n {\mbox{}\|{\mathbf{A}}_{*k}\|_2} {\mbox{}\|{\boldsymbol{B}}_{k*}\|_2}\right)^2. \end{aligned}$$ Finally, to prove that the aforementioned choice for the $\left\{ p_k \right\}_{k=1}^{n}$ minimizes the quantity ${\mbox{}{\bf{E}}\left[{\mbox{}\|{\mathbf{A}}{\boldsymbol{B}}-{\boldsymbol{C}}{\boldsymbol{R}}\|_F^2}\right]}$, define the function $$f(p_1,\dots p_n) = \sum_{k=1}^n \frac{1}{p_k}{\mbox{}\|{\mathbf{A}}_{*k}\|_2^2}{\mbox{}\|{\boldsymbol{B}}_{k*}\|_2^2},$$ which characterizes the dependence of ${\mbox{}{\bf{E}}\left[{\mbox{}\|{\mathbf{A}}{\boldsymbol{B}}-{\boldsymbol{C}}{\boldsymbol{R}}\|_F^2}\right]}$ on the $p_k$’s. In order to minimize $f$ subject to $\sum_{k=1}^n p_k =1$, we can introduce the Lagrange multiplier $\lambda$ and define the function $$g(p_1,\dots p_n) = f(p_1,\dots p_n) + \lambda\left(\sum_{k=1}^n p_k-1\right) .$$ We then have the minimum at $$0 = \frac{\partial g}{\partial p_k} = \frac{-1}{p_k^2}{\mbox{}\|{\mathbf{A}}_{*k}\|_2^2}{\mbox{}\|{\boldsymbol{B}}_{k*}\|_2^2} + \lambda .$$ Thus, $$p_k = \frac{{\mbox{}\|{\mathbf{A}}_{*k}\|_2}{\mbox{}\|{\boldsymbol{B}}_{k*}\|_2}}{\sqrt{\lambda}} = \frac{{\mbox{}\|{\mathbf{A}}_{*k}\|_2}{\mbox{}\|{\boldsymbol{B}}_{k*}\|_2}} {\sum_{k^\prime=1}^n{\mbox{}\|{\mathbf{A}}_{*k^\prime}\|_2}{\mbox{}\|{\boldsymbol{B}}_{k^\prime *}\|_2}} ,$$ where the second equality comes from solving for $\sqrt{\lambda}$ in $\sum_{k=1}^{n} p_k = 1$. These probabilities are minimizers of $f$ because $\frac{\partial^2 g}{{\partial p_k}^2} > 0$ for all $k$. 
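The estimator and the optimal sampling probabilities just derived are straightforward to implement. The following Python/NumPy sketch (our own illustration, with arbitrary small dimensions; it is not the chapter's formal pseudocode) draws $c$ column-row outer products with the probabilities of Eqn. (\[optimal\_probs\]) and rescales them; averaging many independent estimates illustrates the unbiasedness proven above, and the empirical mean squared Frobenius error respects the bound of the theorem.

```python
import numpy as np

def rand_matrix_multiply(A, B, c, rng):
    """Estimate A @ B from c sampled column-row outer products,
    using the optimal probabilities p_k ~ ||A_{*k}||_2 ||B_{k*}||_2."""
    w = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = w / w.sum()
    idx = rng.choice(A.shape[1], size=c, replace=True, p=p)
    s = 1.0 / np.sqrt(c * p[idx])
    C = A[:, idx] * s                # sampled, rescaled columns of A
    R = B[idx, :] * s[:, None]       # matching rescaled rows of B
    return C @ R                     # unbiased estimator of A @ B

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
B = rng.standard_normal((6, 3))
c, trials = 10, 3000

estimates = [rand_matrix_multiply(A, B, c, rng) for _ in range(trials)]
mean_est = np.mean(estimates, axis=0)   # close to A @ B by unbiasedness
emp_mse = np.mean([np.linalg.norm(A @ B - E, "fro") ** 2 for E in estimates])
# the theorem's bound: (1/c) * (sum_k ||A_{*k}||_2 ||B_{k*}||_2)^2
bound = (np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)).sum() ** 2 / c
```

On such toy instances `emp_mse` sits strictly below `bound`: the true expectation is smaller than the bound by $\|{\mathbf{A}}{\boldsymbol{B}}\|_F^2/c$, the term dropped when bounding ${\mbox{}{\bf{Var}}\left[X_t\right]}$ by ${\mbox{}{\bf{E}}\left[X_t^2\right]}$ in the proof.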
We conclude this section by pointing out that we can apply Markov’s inequality to the expectation bound of Theorem \[lem:basicmult\] in order to get bounds for the Frobenius norm of the error matrix ${\mathbf{A}}{\boldsymbol{B}}-{\boldsymbol{C}}{\boldsymbol{R}}$ that hold with constant probability. We refer the reader to [@dkm_matrix1 Section 4.4] for a tighter analysis, which uses a martingale argument to establish stronger concentration of the Frobenius norm of the error matrix around its mean, with a better dependence on the failure probability than Markov’s inequality provides. Analysis of the algorithm for nearly optimal probabilities. {#sxn:matmult:analysis:probs} ----------------------------------------------------------- We now discuss three different choices for the sampling probabilities that are easy to analyze and will be useful in this chapter. We summarize these results in the following list; all three bounds can be easily proven following the proof of Theorem \[lem:basicmult\]. Nearly optimal probabilities, depending on both ${\mathbf{A}}$ and ${\boldsymbol{B}}$ : Let the $\left\{ p_k \right\}_{k=1}^{n}$ satisfy $$\label{eqn:appopt1} \sum_{k=1}^n p_k=1 \quad \mbox{and} \quad p_k \ge \frac{ \beta{\mbox{}\|{\mathbf{A}}_{*k}\|_2} {\mbox{}\|{\boldsymbol{B}}_{k*}\|_2} }{ \sum_{k^\prime=1}^n {\mbox{}\|{\mathbf{A}}_{*k^\prime}\|_2} {\mbox{}\|{\boldsymbol{B}}_{k^\prime *}\|_2} },$$ for some positive constant $\beta \le 1$.
Then, $$\label{eqn:appopt1result} {\mbox{}{\bf{E}}\left[{\mbox{}\|{\mathbf{A}}{\boldsymbol{B}}-{\boldsymbol{C}}{\boldsymbol{R}}\|_F^2}\right]} \leq \frac{1}{\beta c}\left( \sum_{k=1}^n {\mbox{}\|{\mathbf{A}}_{*k}\|_2}{\mbox{}\|{\boldsymbol{B}}_{k*}\|_2} \right)^2.$$ Nearly optimal probabilities, depending only on ${\mathbf{A}}$ : Let the $\left\{ p_k \right\}_{k=1}^{n}$ satisfy $$\label{eqn:appopt2} \sum_{k=1}^n p_k=1 \quad \mbox{and} \quad p_k \ge \frac{ \beta{\mbox{}\|{\mathbf{A}}_{*k}\|_2^2}}{{\mbox{}\|{\mathbf{A}}\|_F^2}},$$ for some positive constant $\beta \le 1$. Then, $$\label{eqn:appopt2result} {\mbox{}{\bf{E}}\left[{\mbox{}\|{\mathbf{A}}{\boldsymbol{B}}-{\boldsymbol{C}}{\boldsymbol{R}}\|_F^2}\right]} \leq \frac{1}{\beta c}{\mbox{}\|{\mathbf{A}}\|_F^2}{\mbox{}\|{\boldsymbol{B}}\|_F^2}.$$ Nearly optimal probabilities, depending only on ${\boldsymbol{B}}$ : Let the $\left\{ p_k \right\}_{k=1}^{n}$ satisfy $$\label{eqn:appopt3} \sum_{k=1}^n p_k=1 \quad \mbox{and} \quad p_k \ge \frac{ \beta{\mbox{}\|{\boldsymbol{B}}_{k*}\|_2^2}}{{\mbox{}\|{\boldsymbol{B}}\|_F^2}},$$ for some positive constant $\beta \le 1$. Then, $$\label{eqn:appopt3result} {\mbox{}{\bf{E}}\left[{\mbox{}\|{\mathbf{A}}{\boldsymbol{B}}-{\boldsymbol{C}}{\boldsymbol{R}}\|_F^2}\right]} \leq \frac{1}{\beta c}{\mbox{}\|{\mathbf{A}}\|_F^2}{\mbox{}\|{\boldsymbol{B}}\|_F^2}.$$ We note that, from the Cauchy-Schwarz inequality, $$\left( \sum_{k=1}^n {\mbox{}\|{\mathbf{A}}_{*k}\|_2}{\mbox{}\|{\boldsymbol{B}}_{k*}\|_2} \right)^2 \leq {\mbox{}\|{\mathbf{A}}\|_F^2}{\mbox{}\|{\boldsymbol{B}}\|_F^2},$$ and thus the bound of Eqn. (\[eqn:appopt1result\]) is generally better than the bounds of Eqns. (\[eqn:appopt2result\]) and (\[eqn:appopt3result\]). See [@dkm_matrix1 Section 4.3, Table 1] for other sampling probabilities and respective error bounds that might be of interest. Bounding the two norm.
{#sxn:chapter1:spectral} ---------------------- In both applications of the <span style="font-variant:small-caps;">RandMatrixMultiply</span> algorithm that we will discuss in this chapter (see least-squares approximation and low-rank matrix approximation in Sections \[sxn:main:regression\] and \[sxn:main:lowrank\], respectively), we will be particularly interested in approximating the product ${\boldsymbol{U}}^T {\boldsymbol{U}}$, where ${\boldsymbol{U}}$ is a tall-and-thin matrix, by sampling (and rescaling) a few rows of ${\boldsymbol{U}}$. (The matrix ${\boldsymbol{U}}$ will be a matrix spanning the column space or the “important” part of the column space of some other matrix of interest.) It turns out that, without loss of generality, we can focus on the special case where ${\boldsymbol{U}}\in \mathbb{R}^{n \times d}$ ($n \gg d$) is a matrix with orthonormal columns (i.e., ${\boldsymbol{U}}^T{\boldsymbol{U}}= {\boldsymbol{I}}_d$). Then, if we let ${\boldsymbol{R}}\in \mathbb{R}^{c \times d}$ be a sample of $c$ (rescaled) rows of ${\boldsymbol{U}}$ constructed using the <span style="font-variant:small-caps;">RandMatrixMultiply</span> algorithm, and note that the corresponding $c$ (rescaled) columns of ${\boldsymbol{U}}^T$ form the matrix ${\boldsymbol{R}}^T$, then Theorem \[lem:basicmult\] implies that $$\label{eqn:UUT1} {\mbox{}{\bf{E}}\left[{\mbox{}\|{\boldsymbol{U}}^T{\boldsymbol{U}}-{\boldsymbol{R}}^T{\boldsymbol{R}}\|_F^2}\right]}={\mbox{}{\bf{E}}\left[{\mbox{}\|{\boldsymbol{I}}_d-{\boldsymbol{R}}^T{\boldsymbol{R}}\|_F^2}\right]} \leq \frac{d^2}{\beta c}.$$ In the above, we used the fact that ${\mbox{}\|{\boldsymbol{U}}\|_F^2}=d$. 
For the above bound to hold, it suffices to use sampling probabilities $p_k$ ($k=1,\ldots, n$) that satisfy $$\label{eqn:appopt4} \sum_{k=1}^n p_k=1 \quad \mbox{and} \quad p_k \ge \frac{ \beta{\mbox{}\|{\boldsymbol{U}}_{k*}\|_2^2}}{d}.$$ (The quantities ${\mbox{}\|{\boldsymbol{U}}_{k*}\|_2^2}$ are known as leverage scores [@Mah-mat-rev_BOOK]; the probabilities given by Eqn. (\[eqn:appopt4\]) are nearly optimal in the sense of Eqn. (\[eqn:appopt1\]), i.e., they approximate, up to the factor $\beta$, the optimal probabilities for approximating the matrix product of Eqn. (\[eqn:UUT1\]).) Applying Markov’s inequality to the bound of Eqn. (\[eqn:UUT1\]) and setting $$\label{eqn:cval1} c = \frac{10d^2}{\beta\epsilon^2},$$ we get that, with probability at least 9/10, $$\label{eqn:UUT2} \FNorm{{\boldsymbol{U}}^T{\boldsymbol{U}}-{\boldsymbol{R}}^T{\boldsymbol{R}}}=\FNorm{{\boldsymbol{I}}_d-{\boldsymbol{R}}^T{\boldsymbol{R}}}\leq \epsilon.$$ Clearly, the above equation also implies a two-norm bound. Indeed, with probability at least 9/10, $${\mbox{}\|{\boldsymbol{U}}^T{\boldsymbol{U}}-{\boldsymbol{R}}^T{\boldsymbol{R}}\|_2} = {\mbox{}\|{\boldsymbol{I}}_d-{\boldsymbol{R}}^T{\boldsymbol{R}}\|_2} \leq \epsilon$$ by setting $c$ to the value of Eqn. (\[eqn:cval1\]). In the remainder of this section, we will state and prove a theorem that also guarantees ${\mbox{}\|{\boldsymbol{U}}^T{\boldsymbol{U}}-{\boldsymbol{R}}^T{\boldsymbol{R}}\|_2}\leq\epsilon$, while setting $c$ to a value that is *smaller* than the one in Eqn. (\[eqn:cval1\]). For related concentration techniques, see the chapter by Vershynin in this volume [@pcmi-chapter-vershynin]. \[thm:theorem7correct\] Let ${\boldsymbol{U}}\in {\mathbb{R}^{n \times d}}$ ($n \gg d$) satisfy ${\boldsymbol{U}}^T {\boldsymbol{U}}= {\boldsymbol{I}}_d$.
Construct ${\boldsymbol{R}}$ using the <span style="font-variant:small-caps;">RandMatrixMultiply</span> algorithm and let the sampling probabilities $\left\{ p_k \right\}_{k=1}^{n}$ satisfy the conditions of Eqn. (\[eqn:appopt4\]), for all $k=1,\ldots, n$ and some constant $\beta \in (0,1]$. Let $\epsilon \in (0,1)$ be an accuracy parameter and let $$\label{eqn:CboundAppendix} c \geq \frac{96d}{\beta \epsilon^2}\ln \left(\frac{96d}{\beta \epsilon^2 \sqrt{\delta}}\right). $$ Then, with probability at least $1-\delta$, $${\mbox{}\|{\boldsymbol{U}}^T{\boldsymbol{U}}-{\boldsymbol{R}}^T{\boldsymbol{R}}\|_2}={\mbox{}\|{\boldsymbol{I}}_d-{\boldsymbol{R}}^T{\boldsymbol{R}}\|_2}\leq \epsilon.$$ Prior to proving the above theorem, we state a matrix-Bernstein inequality that is due to Oliveira [@Oli10 Lemma 1]. \[lem:oliveira\] Let ${\boldsymbol{x}}^1,{\boldsymbol{x}}^2,\ldots,{\boldsymbol{x}}^c$ be independent, identically distributed copies of a $d$-dimensional random vector ${\boldsymbol{x}}$ with $${\mbox{}\|{\boldsymbol{x}}\|_2}\leq M \qquad \mbox{and} \qquad {\mbox{}\|{\mbox{}{\bf{E}}\left[{\boldsymbol{x}}{\boldsymbol{x}}^T\right]}\|_2}\leq 1.$$ Then, for any $\alpha > 0$, $${\mbox{}\|\frac{1}{c}\sum_{i=1}^c {\boldsymbol{x}}^i {{\boldsymbol{x}}^i}^T-{\mbox{}{\bf{E}}\left[{\boldsymbol{x}}{\boldsymbol{x}}^T\right]}\|_2}\leq \alpha$$ holds with probability at least $$1-\left(2c\right)^2\exp\left(-\frac{c\alpha^2}{16M^2+8M^2\alpha}\right).$$ This inequality essentially bounds the probability that the matrix $\frac{1}{c}\sum_{i=1}^c {\boldsymbol{x}}^i {{\boldsymbol{x}}^i}^T$ deviates significantly from its expectation. This deviation is measured with respect to the two norm (namely the largest singular value) of the error matrix.
(of Theorem \[thm:theorem7correct\]) Define the random *row* vector ${\boldsymbol{y}}\in {\mathbb{R}^{d}}$ as $$ {\mbox{}{\bf{Pr}}\left[{\boldsymbol{y}}= \frac{1}{\sqrt{p_k}}{\boldsymbol{U}}_{k*}\right]} = p_k \ge \frac{\beta{\mbox{}\|{\boldsymbol{U}}_{k*}\|_2^2}}{d}, $$ for $k=1,\ldots, n$. In words, ${\boldsymbol{y}}$ is set to be the (rescaled) $k$-th row of ${\boldsymbol{U}}$ with probability $p_k$. Thus, the matrix ${\boldsymbol{R}}$ has rows $\frac{1}{\sqrt{c}}{\boldsymbol{y}}^1,\frac{1}{\sqrt{c}}{\boldsymbol{y}}^2,\ldots,\frac{1}{\sqrt{c}}{\boldsymbol{y}}^c$, where ${\boldsymbol{y}}^1,{\boldsymbol{y}}^2,\ldots,{\boldsymbol{y}}^c$ are $c$ independent copies of ${\boldsymbol{y}}$. Using this notation, it follows that $$\label{eqn:expectyyt} {\mbox{}{\bf{E}}\left[{\boldsymbol{y}}^T{\boldsymbol{y}}\right]} = \sum_{k=1}^{n} p_k (\frac{1}{\sqrt{p_k}}{\boldsymbol{U}}_{k*}^T)(\frac{1}{\sqrt{p_k}}{\boldsymbol{U}}_{k*}) = {\boldsymbol{U}}^T{\boldsymbol{U}}= {\boldsymbol{I}}_d. $$ Also, $$ {\boldsymbol{R}}^T{\boldsymbol{R}}= \frac{1}{c}\sum_{t=1}^c \underbrace{{{\boldsymbol{y}}^t}^T {\boldsymbol{y}}^t}_{\mathbb{R}^{d \times d}}. $$ For this vector ${\boldsymbol{y}}$, let $M$ be an upper bound on its norm; since ${\boldsymbol{y}}$ takes the value $\frac{1}{\sqrt{p_k}}{\boldsymbol{U}}_{k*}$ with probability $p_k$, it suffices that $$\label{eqn:defM} M \ge \max_{1 \le k \le n}\frac{1}{\sqrt{p_k}}{\mbox{}\|{\boldsymbol{U}}_{k*}\|_2} \ge {\mbox{}\|{\boldsymbol{y}}\|_2}. $$ Notice that from Eqn. (\[eqn:expectyyt\]) we immediately get ${\mbox{}\|{\mbox{}{\bf{E}}\left[{\boldsymbol{y}}^T{\boldsymbol{y}}\right]}\|_2} = {\mbox{}\|{\boldsymbol{I}}_d\|_2} = 1$. Applying Lemma \[lem:oliveira\] (with ${\boldsymbol{x}}= {\boldsymbol{y}}^T$), we get $$\label{eqn:ExpectBound} {\mbox{}\|{\boldsymbol{R}}^T{\boldsymbol{R}}-{\boldsymbol{U}}^T{\boldsymbol{U}}\|_2} < \epsilon, $$ with probability at least $1-\left(2c\right)^2 \exp\left(-\frac{c\epsilon^2}{16M^2 + 8M^2 \epsilon}\right)$.
Let $\delta$ be the failure probability of Theorem \[thm:theorem7correct\]; we seek an appropriate value of $c$ in order to guarantee $\left(2c\right)^2 \exp\left(-\frac{c\epsilon^2}{16M^2 + 8M^2 \epsilon}\right) \leq \delta$. Equivalently, we need to satisfy $$\frac{c}{\ln \left(2c/\sqrt{\delta}\right)} \geq \frac{2}{\epsilon^2}\left(16M^2 + 8M^2\epsilon\right).$$ Combine Eqns. (\[eqn:defM\]) and (\[eqn:appopt4\]) to get $M^2 \leq {\mbox{}\|{\boldsymbol{U}}\|_F^2}/\beta=d/\beta$. Recall that $\epsilon < 1$ to conclude that it suffices to choose a value of $c$ such that $$\frac{c}{\ln \left(2c/\sqrt{\delta}\right)} \geq \frac{48d}{\beta\epsilon^2},$$ or, equivalently, $$\frac{2c/\sqrt{\delta}}{\ln \left(2c/\sqrt{\delta}\right)} \geq \frac{96d}{\beta\epsilon^2\sqrt{\delta}}.$$ We now use the fact that for any $\eta \geq 4$, if $x \geq 2\eta \ln \eta$, then $x/\ln x \geq \eta$. Let $x = 2c/\sqrt{\delta}$, let $\eta = 96d /\left(\beta \epsilon^2\sqrt{\delta}\right)$, and note that $\eta \geq 4$ since $d\geq 1$ and $\beta$, $\epsilon$, and $\delta$ are at most one. Thus, it suffices to set $$\frac{2c}{\sqrt{\delta}} \geq 2 \frac{96 d}{\beta \epsilon^2\sqrt{\delta}}\ln \left(\frac{96 d}{\beta \epsilon^2\sqrt{\delta}}\right),$$ which concludes the proof of the theorem. Let $\delta = 1/10$ and let $\epsilon$ and $\beta$ be constants. Then, we can compare the bound of Eqn. (\[eqn:cval1\]) with the bound of Eqn. (\[eqn:CboundAppendix\]) of Theorem \[thm:theorem7correct\]: both values of $c$ guarantee the same accuracy $\epsilon$ and the same success probability (say 9/10). However, asymptotically, the bound of Theorem \[thm:theorem7correct\] holds by setting $c = O(d\ln d)$, while the bound of Eqn. (\[eqn:cval1\]) holds by setting $c=O(d^2)$. Thus, the bound of Theorem \[thm:theorem7correct\] is much better. 
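As a small numerical sanity check on this discussion, the experiment below (our own illustration, with arbitrary dimensions; $\beta = 1$ since the exact leverage-score probabilities of Eqn. (\[eqn:appopt4\]) are used) samples rescaled rows of a matrix with orthonormal columns and verifies that ${\boldsymbol{R}}^T{\boldsymbol{R}}$ is close to ${\boldsymbol{I}}_d$ in the two norm for a modest number of samples.

```python
import numpy as np

def sample_rows(U, c, rng):
    """Sample c rescaled rows of U (orthonormal columns) with the exact
    leverage-score probabilities p_k = ||U_{k*}||_2^2 / d, i.e., beta = 1."""
    n, d = U.shape
    p = np.sum(U ** 2, axis=1) / d            # leverage scores sum to d
    idx = rng.choice(n, size=c, replace=True, p=p)
    return U[idx, :] / np.sqrt(c * p[idx])[:, None]

rng = np.random.default_rng(0)
n, d, c = 2 ** 16, 16, 4000
U, _ = np.linalg.qr(rng.standard_normal((n, d)))   # U^T U = I_d
R = sample_rows(U, c, rng)
spectral_err = np.linalg.norm(np.eye(d) - R.T @ R, 2)
```

With exact leverage-score probabilities, every realization of the random row ${\boldsymbol{y}}$ in the proof above satisfies $\|{\boldsymbol{y}}\|_2^2 = d$ exactly, so $M^2 = d$; empirically the observed `spectral_err` is far below one even though $c \ll n$.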
By the Coupon Collector Problem (see Section \[sxn:couponcollector\]), sampling-based approaches necessitate at least $\Omega(d\ln d)$ samples, thus making our algorithm asymptotically optimal. We should note, however, that deterministic methods exist (see [@Srivastava2010]) that achieve the same bound with $c=O(d/\epsilon^2)$ samples. We made no effort to optimize the constants in the expression for $c$ in Eqn. (\[eqn:CboundAppendix\]). Better constants are known, by using tighter matrix-Bernstein inequalities. For a state-of-the-art bound see, for example, [@Holodnak2015 Theorem 5.1]. References. ----------- Our presentation in this chapter follows closely the derivations in [@dkm_matrix1]; see [@dkm_matrix1] for a detailed discussion of prior work on this topic. We also refer the interested reader to [@Holodnak2015] and references therein for more recent work on randomized matrix multiplication. RandNLA Approaches for Regression Problems {#sxn:main:regression} ========================================== In this section, we will present a simple randomized algorithm for least-squares regression. In many applications in mathematics and statistical data analysis, it is of interest to find an approximate solution to a system of linear equations that has no exact solution. For example, let a matrix ${\mathbf{A}}\in {\mathbb{R}^{n \times d}}$ and a vector ${\boldsymbol{b}}\in {\mathbb{R}^{n}}$ be given. If $n \gg d$, there will not in general exist a vector ${\boldsymbol{x}}\in {\mathbb{R}^{d}}$ such that ${\mathbf{A}}{\boldsymbol{x}}={\boldsymbol{b}}$, and yet it is often of interest to find a vector ${\boldsymbol{x}}$ such that ${\mathbf{A}}{\boldsymbol{x}}\approx {\boldsymbol{b}}$ in some precise sense. 
The method of least squares, whose original formulation is often credited to Gauss and Legendre, accomplishes this by minimizing the sum of squares of the elements of the residual vector, i.e., by solving the optimization problem $$\label{eqn:orig_ls_prob} \mathcal{Z} = \min_{{\boldsymbol{x}}\in {\mathbb{R}^{d}}} {\mbox{}\|{\mathbf{A}}{\boldsymbol{x}}- {\boldsymbol{b}}\|_2}.$$ The minimum $\ell_2$-norm vector among those satisfying Eqn. (\[eqn:orig\_ls\_prob\]) is $$\label{eqn:xopt_orig_ls_prob} {\boldsymbol{x}}_{opt} = {\mathbf{A}}^{\dagger}{\boldsymbol{b}},$$ where ${\mathbf{A}}^{\dagger}$ denotes the Moore-Penrose generalized inverse of the matrix ${\mathbf{A}}$. This solution vector has a very natural statistical interpretation as providing an optimal estimator among all linear unbiased estimators, and it has a very natural geometric interpretation as providing an orthogonal projection of the vector ${\boldsymbol{b}}$ onto the span of the columns of the matrix ${\mathbf{A}}$. Recall that to minimize the quantity in Eqn. (\[eqn:orig\_ls\_prob\]), we can set the derivative of ${\mbox{}\|{\mathbf{A}}{\boldsymbol{x}}-{\boldsymbol{b}}\|_2^2}=({\mathbf{A}}{\boldsymbol{x}}-{\boldsymbol{b}})^T({\mathbf{A}}{\boldsymbol{x}}-{\boldsymbol{b}})$ with respect to ${\boldsymbol{x}}$ equal to zero, from which it follows that the minimizing vector ${\boldsymbol{x}}_{opt}$ is a solution of the so-called normal equations $$\label{eqn:normal_eqn} {\mathbf{A}}^T {\mathbf{A}}{\boldsymbol{x}}_{opt}={\mathbf{A}}^T{\boldsymbol{b}}.$$ Computing ${\mathbf{A}}^T{\mathbf{A}}$, and thus computing ${\boldsymbol{x}}_{opt}$ in this way, takes $O(nd^2)$ time, assuming $n \ge d$. Geometrically, Eqn. (\[eqn:normal\_eqn\]) means that the residual vector ${\boldsymbol{b}}^{\perp}={\boldsymbol{b}}-{\mathbf{A}}{\boldsymbol{x}}_{opt}$ is required to be orthogonal to the column space of ${\mathbf{A}}$, i.e., ${{\boldsymbol{b}}^{\perp}}^T{\mathbf{A}}=0$. 
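These equivalent characterizations of ${\boldsymbol{x}}_{opt}$ are easy to verify numerically. The following Python/NumPy snippet (our own illustration; the dimensions are arbitrary) solves a small over-constrained problem both via the normal equations and via the Moore-Penrose pseudoinverse, and checks that the residual is orthogonal to the column space of ${\mathbf{A}}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
A = rng.standard_normal((n, d))      # full column rank, n >> d
b = rng.standard_normal(n)

# Solve the normal equations A^T A x = A^T b (Eqn. (normal_eqn)); O(nd^2).
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Closed form x_opt = A^+ b of Eqn. (xopt_orig_ls_prob); NumPy's pinv
# computes the Moore-Penrose pseudoinverse from the SVD of A.
x_pinv = np.linalg.pinv(A) @ b

# The residual b - A x_opt is orthogonal to the column space of A.
residual = b - A @ x_pinv
```

Both routes return the same vector here because ${\mathbf{A}}$ has full column rank; for ill-conditioned ${\mathbf{A}}$ the normal-equations route is the one to avoid, as discussed next.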
While solving the normal equations squares the condition number of the input matrix (and thus is typically not recommended in practice), direct methods (such as the QR decomposition, see Section \[sxn:labasics\]) also solve the problem of Eqn. (\[eqn:orig\_ls\_prob\]) in $O(nd^2)$ time, assuming that $n \geq d$. Finally, an alternative expression for the vector ${\boldsymbol{x}}_{opt}$ of Eqn. (\[eqn:xopt\_orig\_ls\_prob\]) emerges by leveraging the SVD of ${\mathbf{A}}$. If ${\mathbf{A}}= {\boldsymbol{U}}_A{\boldsymbol{\Sigma}}_A {\boldsymbol{V}}_A^T$ denotes the SVD of ${\mathbf{A}}$, then $${\boldsymbol{x}}_{opt}={\boldsymbol{V}}_A{\boldsymbol{\Sigma}}_A^{-1}{\boldsymbol{U}}_A^T{\boldsymbol{b}}={\mathbf{A}}^{\dagger}{\boldsymbol{b}}.$$ Computing ${\boldsymbol{x}}_{opt}$ in this way also takes $O(nd^2)$ time, again assuming $n \ge d$. In this section, we will describe a randomized algorithm that will provide accurate relative-error approximations to the minimal $\ell_2$-norm solution vector ${\boldsymbol{x}}_{opt}$ of Eqn. (\[eqn:xopt\_orig\_ls\_prob\]) faster than these “exact” algorithms for a large class of over-constrained least-squares problems. The Randomized Hadamard Transform. {#sxn:RHT} ---------------------------------- The Randomized Hadamard Transform was introduced in [@Ailon2009] as one step in the development of a fast version of the Johnson-Lindenstrauss lemma. 
Recall that the $n \times n$ Hadamard matrix (assuming $n$ is a power of two) $\widetilde{{\boldsymbol{H}}}_n$ may be defined recursively as follows: $$\widetilde{{\boldsymbol{H}}}_n = \left[ \begin{array}{cc} \widetilde{{\boldsymbol{H}}}_{n/2} & \widetilde{{\boldsymbol{H}}}_{n/2} \\ \widetilde{{\boldsymbol{H}}}_{n/2} & -\widetilde{{\boldsymbol{H}}}_{n/2} \end{array}\right] , \qquad \mbox{with} \qquad \widetilde{{\boldsymbol{H}}}_2 = \left[ \begin{array}{cc} +1 & +1 \\ +1 & -1 \end{array}\right].$$ We can now define the *normalized* Hadamard transform ${\boldsymbol{H}}_n$ as $(1/\sqrt{n})\widetilde{{\boldsymbol{H}}}_n$; it is easy to see that ${\boldsymbol{H}}_n{\boldsymbol{H}}_n^T={\boldsymbol{H}}_n^T{\boldsymbol{H}}_n={\boldsymbol{I}}_n$. Now consider a diagonal matrix ${\boldsymbol{D}}\in \mathbb{R}^{n \times n}$ such that ${\boldsymbol{D}}_{ii}$ is set to +1 with probability $1/2$ and to $-1$ with probability $1/2$. The product ${\boldsymbol{H}}{\boldsymbol{D}}$ is the *Randomized Hadamard Transform* and has three useful properties. First, when applied to a vector, it “spreads out” the mass/energy of that vector, in the sense of providing a bound for the largest element, or infinity norm, of the transformed vector. Second, computing the product ${\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{x}}$ for any vector ${\boldsymbol{x}}\in \mathbb{R}^n$ takes $O(n\log_2 n)$ time. Even better, if we only need to access, say, $r$ elements in the transformed vector, then those $r$ elements can be computed in $O(n \log_2 r )$ time. We will expand on the latter observation in Section \[sxn:lsruntime\], where we will discuss the running time of the proposed algorithm. Third, the Randomized Hadamard Transform is an orthogonal transformation, since ${\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{D}}^T{\boldsymbol{H}}^T={\boldsymbol{H}}^T{\boldsymbol{D}}^T{\boldsymbol{D}}{\boldsymbol{H}}={\boldsymbol{I}}_n$. The main algorithm and main theorem.
{#sxn:sampling:result1} ------------------------------------ We are now ready to provide an overview of the <span style="font-variant:small-caps;">RandLeastSquares</span> algorithm (Algorithm \[alg:alg\_sample\_fast\]). Let the matrix product ${\boldsymbol{H}}{\boldsymbol{D}}$ denote the $n \times n$ Randomized Hadamard Transform discussed in the previous section. (For simplicity, we restrict our discussion to the case that $n$ is a power of two, although this restriction can easily be removed by using variants of the Randomized Hadamard Transform [@Mah-mat-rev_BOOK].) Our algorithm is a *preconditioned random sampling algorithm*: after premultiplying ${\mathbf{A}}$ and ${\boldsymbol{b}}$ by ${\boldsymbol{H}}{\boldsymbol{D}}$, our algorithm samples uniformly at random $r$ constraints from the preprocessed problem. (See Eqn. (\[eqn:rvaluefinal\]), as well as the remarks after Theorem \[thm:alg\_sample\_fast\] for the precise value of $r$.) Then, this algorithm solves the least squares problem on just those sampled constraints to obtain a vector $\tilde{{\boldsymbol{x}}}_{opt} \in {\mathbb{R}^{d}}$ such that Theorem \[thm:alg\_sample\_fast\] is satisfied. Formally, we will let ${\boldsymbol{S}}\in {\mathbb{R}^{n \times r}}$ denote a sampling-and-rescaling matrix specifying which of the $n$ (preprocessed) constraints are to be sampled and how they are to be rescaled. This matrix is initially empty and is constructed as described in the <span style="font-variant:small-caps;">RandLeastSquares</span> algorithm. (We are describing this algorithm in terms of the matrix ${\boldsymbol{S}}$, but as with the <span style="font-variant:small-caps;">RandMatrixMultiply</span> algorithm, we do not need to construct it explicitly in an actual implementation [@AMT10].) 
Then, we can consider the problem $$\tilde{\mathcal{Z}} = \min_{{\boldsymbol{x}}\in {\mathbb{R}^{d}}} {\mbox{}\|{\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\mathbf{A}}{\boldsymbol{x}}- {\boldsymbol{S}}^T{\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{b}}\|_2} ,$$ which is a least squares approximation problem involving only the $r$ constraints, where the $r$ constraints are uniformly sampled from the matrix ${\mathbf{A}}$ after the preprocessing with the Randomized Hadamard Transform. The minimum $\ell_2$-norm vector $\tilde{{\boldsymbol{x}}}_{opt} \in {\mathbb{R}^{d}}$ among those that achieve the minimum value $\tilde{\mathcal{Z}}$ in this problem is $$\tilde{{\boldsymbol{x}}}_{opt} = \left({\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\mathbf{A}}\right)^{\dagger}{\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{b}},$$ which is the output of the <span style="font-variant:small-caps;">RandLeastSquares</span> algorithm. One can prove (and the proof is provided below) the following theorem about this algorithm. \[thm:alg\_sample\_fast\] Suppose ${\mathbf{A}}\in {\mathbb{R}^{n \times d}}$ is a matrix of rank $d$, with $n$ being a power of two. Let ${\boldsymbol{b}}\in {\mathbb{R}^{n}}$ and let $\epsilon \in (0,1)$. Run the <span style="font-variant:small-caps;">RandLeastSquares</span> algorithm with $$\label{eqn:rvaluefinal} r = \max\left\{48^2 d \ln\left(40nd\right)\ln\left(100^2d \ln \left(40nd\right)\right), 40d\ln(40nd)/\epsilon\right\} $$ and return $\tilde{{\boldsymbol{x}}}_{opt}$. Then, with probability at least .8, the following two claims hold: first, $\tilde{{\boldsymbol{x}}}_{opt}$ satisfies $${\mbox{}\|{\mathbf{A}}\tilde{{\boldsymbol{x}}}_{opt}-{\boldsymbol{b}}\|_2} \le (1+\epsilon) \mathcal{Z} ,$$ where, recall, $\mathcal{Z}$ is given in Eqn.
(\[eqn:orig\_ls\_prob\]); and, second, if we assume that ${\mbox{}\|{\boldsymbol{U}}_A {\boldsymbol{U}}_A^T {\boldsymbol{b}}\|_2} \ge \gamma {\mbox{}\|{\boldsymbol{b}}\|_2}$ for some $\gamma \in (0,1]$, then $\tilde{{\boldsymbol{x}}}_{opt}$ satisfies $${\mbox{}\|{\boldsymbol{x}}_{opt}-\tilde{{\boldsymbol{x}}}_{opt}\|_2} \leq \sqrt{\epsilon}\left(\kappa({\mathbf{A}})\sqrt{\gamma^{-2}-1}\right){\mbox{}\|{\boldsymbol{x}}_{opt}\|_2}.$$ Finally, $$n(d+1) + 2n(d+1) \log_2 \left(r + 1\right) + O(rd^2)$$ time suffices to compute the solution $\tilde{{\boldsymbol{x}}}_{opt}$. It is worth noting that the claims of Theorem \[thm:alg\_sample\_fast\] can be made to hold with probability $1-\delta$, for any $\delta>0$, by repeating the algorithm $\left\lceil \ln(1/\delta)/\ln(5)\right\rceil$ times. Also, we note that if $n$ is not a power of two we can pad ${\mathbf{A}}$ and ${\boldsymbol{b}}$ with all-zero rows in order to satisfy the assumption; this process at most doubles the size of the input matrix. Assuming that $d \leq n \leq e^d$, and using $\max\{a_1,a_2\} \leq a_1 + a_2$, we get that $$r = \mathcal{O} \left( d(\ln d)(\ln n) + \frac{d \ln n}{\epsilon} \right).$$ Thus, the running time of the <span style="font-variant:small-caps;">RandLeastSquares</span> algorithm becomes $$\mathcal{O} \left( nd\ln \frac{d}{\epsilon} + d^3 (\ln d)(\ln n) + \frac{d^3 \ln n}{\epsilon} \right) .$$ Assuming that $n/\ln n = \Omega(d^2)$, the above running time reduces to $$\mathcal{O}\left(nd \ln \frac{d}{\epsilon} + \frac{nd \ln d}{\epsilon}\right).$$ For fixed $\epsilon$, these improve the standard $O(nd^2)$ running time of traditional deterministic algorithms. It is worth noting that improvements over the standard $O(nd^2)$ time could be derived with weaker assumptions on $n$ and $d$. However, for the sake of clarity of presentation, we only focus on the above setting. 
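To make the overall recipe concrete, here is a minimal end-to-end sketch in Python/NumPy (our own illustration, not the formal <span style="font-variant:small-caps;">RandLeastSquares</span> pseudocode: it implements the normalized Hadamard transform via the simple in-place fast Walsh-Hadamard recursion, samples rows uniformly after preconditioning, and omits the common rescaling factor $\sqrt{n/r}$, which does not affect the minimizer; the padding step and the value of $r$ from the theorem are replaced by a small fixed $r$).

```python
import numpy as np

def fwht(x):
    """Normalized fast Walsh-Hadamard transform along axis 0;
    the leading dimension must be a power of two. O(n log n) per column."""
    x = np.array(x, dtype=float)
    n, h = x.shape[0], 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            x[i:i + h] = a + x[i + h:i + 2 * h]
            x[i + h:i + 2 * h] = a - x[i + h:i + 2 * h]
        h *= 2
    return x / np.sqrt(n)

def rand_least_squares(A, b, r, rng):
    """Precondition with the Randomized Hadamard Transform H D, sample r
    constraints uniformly, and solve the small least-squares problem."""
    n = A.shape[0]
    signs = rng.choice([-1.0, 1.0], size=n)        # the diagonal of D
    HDA, HDb = fwht(signs[:, None] * A), fwht(signs * b)
    idx = rng.choice(n, size=r, replace=True)      # uniform row sampling
    x_tilde, *_ = np.linalg.lstsq(HDA[idx], HDb[idx], rcond=None)
    return x_tilde

rng = np.random.default_rng(0)
n, d, r = 1024, 5, 200
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

x_opt, *_ = np.linalg.lstsq(A, b, rcond=None)      # exact solution
x_tilde = rand_least_squares(A, b, r, rng)
ratio = np.linalg.norm(A @ x_tilde - b) / np.linalg.norm(A @ x_opt - b)
```

On this toy instance the sketched residual is within a few percent of the optimal one, even though only $r = 200$ of the $n = 1024$ preprocessed constraints are kept.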
The matrix ${\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}$ can be viewed in one of two equivalent ways: as a random preprocessing or random preconditioning, which “uniformizes” the leverage scores of the input matrix ${\mathbf{A}}$ (see Lemma \[lem:HU\] for a precise statement), followed by a uniform sampling operation; or as a Johnson-Lindenstrauss style random projection, which preserves the geometry of the entire span of ${\mathbf{A}}$, rather than just a discrete set of points (see Lemma \[lem:sample\_lem20pf\] for a precise statement). RandNLA algorithms as preconditioners. {#sxn:precond} -------------------------------------- Stepping back, recall that the <span style="font-variant:small-caps;">RandLeastSquares</span> algorithm may be viewed as preconditioning the input matrix ${\mathbf{A}}$ and the target vector ${\boldsymbol{b}}$ with a carefully-constructed data-independent random matrix ${\boldsymbol{X}}$. (Since the analysis of the <span style="font-variant:small-caps;">RandLowRank</span> algorithm, our main algorithm for low-rank matrix approximation, in Section \[sxn:main:lowrank\] below, boils down to very similar ideas as the analysis of the <span style="font-variant:small-caps;">RandLeastSquares</span> algorithm, the ideas underlying the following discussion also apply to the <span style="font-variant:small-caps;">RandLowRank</span> algorithm.) For our random sampling algorithm, we let ${\boldsymbol{X}}= {\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}$, where ${\boldsymbol{S}}$ is a matrix that represents the sampling operation and ${\boldsymbol{H}}{\boldsymbol{D}}$ is the Randomized Hadamard Transform. Thus, we replace the least squares approximation problem of Eqn. (\[eqn:orig\_ls\_prob\]) with the least squares approximation problem $$\label{eqn:orig_ls_prob_Xrotated} \tilde{\mathcal{Z}} = \min_{{\boldsymbol{x}}\in {\mathbb{R}^{d}}} {\mbox{}\|{\boldsymbol{X}}({\mathbf{A}}{\boldsymbol{x}}- {\boldsymbol{b}})\|_2}. 
$$ We explicitly compute the solution to the above problem using a traditional deterministic algorithm, e.g., by computing the vector $$\label{eqn:xopt_orig_ls_prob_Xrotated} \tilde{{\boldsymbol{x}}}_{opt} = \left({\boldsymbol{X}}{\mathbf{A}}\right)^{\dagger}{\boldsymbol{X}}{\boldsymbol{b}}.$$ Alternatively, one could use standard iterative methods such as the Conjugate Gradient Normal Residual method, which can produce an $\epsilon$-approximation to the optimal solution of Eqn. (\[eqn:orig\_ls\_prob\_Xrotated\]) in $O(\kappa({\boldsymbol{X}}{\mathbf{A}}) rd \ln(1/\epsilon))$ time, where $\kappa({\boldsymbol{X}}{\mathbf{A}})$ is the condition number of ${\boldsymbol{X}}{\mathbf{A}}$ and $r$ is the number of rows of ${\boldsymbol{X}}{\mathbf{A}}$. This was indeed the strategy implemented in the popular Blendenpik/LSRN approach [@AMT10]. We now state and prove a lemma that establishes sufficient conditions on *any* matrix ${\boldsymbol{X}}$ such that the solution vector $\tilde{{\boldsymbol{x}}}_{opt}$ to the least squares problem of Eqn. (\[eqn:orig\_ls\_prob\_Xrotated\]) will satisfy the relative-error bounds of Theorem \[thm:alg\_sample\_fast\]. Recall that the SVD of ${\mathbf{A}}$ is ${\mathbf{A}}={\boldsymbol{U}}_A {\boldsymbol{\Sigma}}_A {\boldsymbol{V}}_A^T$. In addition, for notational simplicity, we let ${\boldsymbol{b}}^{\perp} = {\boldsymbol{U}}_A^{\perp}{{\boldsymbol{U}}_A^{\perp}}^{T}{\boldsymbol{b}}$ denote the part of the right hand side vector ${\boldsymbol{b}}$ lying outside of the column space of ${\mathbf{A}}$. The two conditions that we will require for the matrix ${\boldsymbol{X}}$ are: $$\begin{aligned} \label{eqn:lemma1_ass1} & & \sigma_{min}^2 \left( {\boldsymbol{X}}{\boldsymbol{U}}_A \right) \ge 1/\sqrt{2} \mbox{; and} \\ \label{eqn:lemma1_ass2} & & {\mbox{}\|{\boldsymbol{U}}_A^T {\boldsymbol{X}}^T {\boldsymbol{X}}{\boldsymbol{b}}^{\perp}\|_2^2} \le \epsilon \mathcal{Z}^2/2 ,\end{aligned}$$ for some $\epsilon \in (0,1)$.
Several things should be noted about these conditions. - First, although Condition (\[eqn:lemma1\_ass1\]) only states that $\sigma_i^2({\boldsymbol{X}}{\boldsymbol{U}}_A)\geq 1/\sqrt{2}$, for all $i =1,\ldots, d$, our randomized algorithm satisfies $\abs{1-\sigma_i^2({\boldsymbol{X}}{\boldsymbol{U}}_A)} \le 1-1/\sqrt{2}$, for all $i =1,\ldots, d$. This is equivalent to $${\mbox{}\|I-{\boldsymbol{U}}_A^T{\boldsymbol{X}}^T {\boldsymbol{X}}{\boldsymbol{U}}_A \|_2} \le 1-1/\sqrt{2}.$$ Thus, one should think of ${\boldsymbol{X}}{\boldsymbol{U}}_A$ as an approximate isometry. - Second, the lemma is a deterministic statement, since it makes no explicit reference to a particular randomized algorithm and since ${\boldsymbol{X}}$ is not assumed to be constructed from a randomized process. Failure probabilities will enter later when we show that our randomized algorithm constructs an ${\boldsymbol{X}}$ that satisfies Conditions (\[eqn:lemma1\_ass1\]) and (\[eqn:lemma1\_ass2\]) with some probability. - Third, Conditions (\[eqn:lemma1\_ass1\]) and (\[eqn:lemma1\_ass2\]) define what has come to be known as a *subspace embedding*, since it is an embedding that preserves the geometry of the entire column space of the matrix ${\mathbf{A}}$. Such a subspace embedding can be *oblivious* (meaning that it is constructed without knowledge of the input matrix, as with random projection algorithms) or *non-oblivious* (meaning that it is constructed from information in the input matrix, as with data-dependent nonuniform sampling algorithms). This style of analysis represented a major advance in RandNLA algorithms, since it permitted much stronger bounds to be obtained than had been possible with previous methods. See [@DMMS11] for the journal version (which was a combination and extension of two previous conference papers) of the first paper to use this style of analysis.
- Fourth, Condition (\[eqn:lemma1\_ass2\]) simply states that ${\boldsymbol{X}}{\boldsymbol{b}}^{\perp}={\boldsymbol{X}}{\boldsymbol{U}}_A^{\perp}{{\boldsymbol{U}}_A^{\perp}}^{T}{\boldsymbol{b}}$ remains approximately orthogonal to ${\boldsymbol{X}}{\boldsymbol{U}}_A$. Clearly, before applying ${\boldsymbol{X}}$, it holds that ${\boldsymbol{U}}_A^T{\boldsymbol{b}}^{\perp}=0$. - Fifth, although Condition (\[eqn:lemma1\_ass2\]) depends on the right hand side vector ${\boldsymbol{b}}$, the <span style="font-variant:small-caps;">RandLeastSquares</span> algorithm will satisfy it without using any information from ${\boldsymbol{b}}$. (See Lemma \[lem:sample\_lem40pf\] below.) Given Conditions (\[eqn:lemma1\_ass1\]) and (\[eqn:lemma1\_ass2\]), we can establish the following lemma. \[lem:suff\_cond\] Consider the overconstrained least squares approximation problem of Eqn. (\[eqn:orig\_ls\_prob\]) and let the matrix ${\boldsymbol{U}}_A \in {\mathbb{R}^{n \times d}}$ contain the top $d$ left singular vectors of ${\mathbf{A}}$. Assume that the matrix ${\boldsymbol{X}}$ satisfies Conditions (\[eqn:lemma1\_ass1\]) and (\[eqn:lemma1\_ass2\]) above, for some $\epsilon \in (0,1)$. 
Then, the solution vector $\tilde{{\boldsymbol{x}}}_{opt}$ to the least squares approximation problem (\[eqn:orig\_ls\_prob\_Xrotated\]) satisfies: $$\begin{aligned} \label{eqn:lemma1_eq3} {\mbox{}\|{\mathbf{A}}\tilde{{\boldsymbol{x}}}_{opt}-{\boldsymbol{b}}\|_2} &\le& (1+\epsilon) \mathcal{Z} \mbox{, and} \\ \label{eqn:lemma1_eq4} {\mbox{}\|{\boldsymbol{x}}_{opt}-\tilde{{\boldsymbol{x}}}_{opt}\|_2} &\leq& \frac{1}{\sigma_{min}({\mathbf{A}})}\sqrt{\epsilon}\mathcal{Z} .\end{aligned}$$ Let us first rewrite the down-scaled regression problem induced by ${\boldsymbol{X}}$ as $$\begin{aligned} \nonumber \min_{{\boldsymbol{x}}\in {\mathbb{R}^{d}}} {\mbox{}\| {\boldsymbol{X}}{\boldsymbol{b}}- {\boldsymbol{X}}{\mathbf{A}}{\boldsymbol{x}}\|_2^2} &=& \min_{{\boldsymbol{x}}\in {\mathbb{R}^{d}}} {\mbox{}\|{\boldsymbol{X}}{\mathbf{A}}{\boldsymbol{x}}-{\boldsymbol{X}}{\boldsymbol{b}}\|_2^2}\\ \label{eqn:ds1} &=& \min_{{\boldsymbol{y}}\in {\mathbb{R}^{d}}} {\mbox{}\| {\boldsymbol{X}}{\mathbf{A}}({\boldsymbol{x}}_{opt}+{\boldsymbol{y}})-{\boldsymbol{X}}( {\mathbf{A}}{\boldsymbol{x}}_{opt}+{\boldsymbol{b}}^{\perp})\|_2^2} \\ \nonumber &=& \min_{{\boldsymbol{y}}\in {\mathbb{R}^{d}}} {\mbox{}\|{\boldsymbol{X}}{\mathbf{A}}{\boldsymbol{y}}-{\boldsymbol{X}}{\boldsymbol{b}}^{\perp}\|_2^2} \\ \label{eqn:ds2} &=& \min_{{\boldsymbol{z}}\in {\mathbb{R}^{d}}} {\mbox{}\|{\boldsymbol{X}}{\boldsymbol{U}}_A{\boldsymbol{z}}-{\boldsymbol{X}}{\boldsymbol{b}}^{\perp}\|_2^2}.\end{aligned}$$ Eqn. (\[eqn:ds1\]) follows since ${\boldsymbol{b}}={\mathbf{A}}{\boldsymbol{x}}_{opt}+{\boldsymbol{b}}^{\perp}$ and Eqn. (\[eqn:ds2\]) follows since the columns of the matrix ${\mathbf{A}}$ span the same subspace as the columns of ${\boldsymbol{U}}_A$. Now, let ${\boldsymbol{z}}_{opt} \in {\mathbb{R}^{d}}$ be such that ${\boldsymbol{U}}_A {\boldsymbol{z}}_{opt} = {\mathbf{A}}(\tilde{{\boldsymbol{x}}}_{opt}-{\boldsymbol{x}}_{opt})$.
Using this value for ${\boldsymbol{z}}_{opt}$, we will prove that ${\boldsymbol{z}}_{opt}$ is a minimizer of the above optimization problem, as follows: $$\begin{aligned} \nonumber{\mbox{}\|{\boldsymbol{X}}{\boldsymbol{U}}_A {\boldsymbol{z}}_{opt} - {\boldsymbol{X}}{\boldsymbol{b}}^{\perp}\|_2^2} &=& {\mbox{}\|{\boldsymbol{X}}{\mathbf{A}}(\tilde{{\boldsymbol{x}}}_{opt}-{\boldsymbol{x}}_{opt}) - {\boldsymbol{X}}{\boldsymbol{b}}^{\perp}\|_2^2}\\ \nonumber&=& {\mbox{}\|{\boldsymbol{X}}{\mathbf{A}}\tilde{{\boldsymbol{x}}}_{opt} - {\boldsymbol{X}}{\mathbf{A}}{\boldsymbol{x}}_{opt} - {\boldsymbol{X}}{\boldsymbol{b}}^{\perp}\|_2^2}\\ \label{eqn:pdch51}&=& {\mbox{}\|{\boldsymbol{X}}{\mathbf{A}}\tilde{{\boldsymbol{x}}}_{opt} - {\boldsymbol{X}}{\boldsymbol{b}}\|_2^2}\\ \nonumber&=& \min_{{\boldsymbol{x}}\in {\mathbb{R}^{d}}} {\mbox{}\|{\boldsymbol{X}}{\mathbf{A}}{\boldsymbol{x}}-{\boldsymbol{X}}{\boldsymbol{b}}\|_2^2} \\ \nonumber &=& \min_{{\boldsymbol{z}}\in {\mathbb{R}^{d}}} {\mbox{}\|{\boldsymbol{X}}{\boldsymbol{U}}_A{\boldsymbol{z}}-{\boldsymbol{X}}{\boldsymbol{b}}^{\perp}\|_2^2}. \end{aligned}$$ Eqn. (\[eqn:pdch51\]) follows since ${\boldsymbol{b}}={\mathbf{A}}{\boldsymbol{x}}_{opt}+{\boldsymbol{b}}^{\perp}$ and the last equality follows from Eqn. (\[eqn:ds2\]).
Thus, by the normal equations (\[eqn:normal\_eqn\]), we have that $$\label{eqn:ds-normal} ({\boldsymbol{X}}{\boldsymbol{U}}_A)^T{\boldsymbol{X}}{\boldsymbol{U}}_A {\boldsymbol{z}}_{opt} = ({\boldsymbol{X}}{\boldsymbol{U}}_A)^T {\boldsymbol{X}}{\boldsymbol{b}}^{\perp}.$$ Taking the norm of both sides and observing that under Condition (\[eqn:lemma1\_ass1\]) we have $\sigma_i(({\boldsymbol{X}}{\boldsymbol{U}}_A)^T {\boldsymbol{X}}{\boldsymbol{U}}_A) = \sigma_i^2({\boldsymbol{X}}{\boldsymbol{U}}_A) \ge 1/\sqrt{2}$, for all $i$, it follows that $$\label{eqn:z-norm1} {\mbox{}\|{\boldsymbol{z}}_{opt}\|_2^2} / 2 \le {\mbox{}\|({\boldsymbol{X}}{\boldsymbol{U}}_A)^T{\boldsymbol{X}}{\boldsymbol{U}}_A{\boldsymbol{z}}_{opt}\|_2^2} = {\mbox{}\| ({\boldsymbol{X}}{\boldsymbol{U}}_A)^T {\boldsymbol{X}}{\boldsymbol{b}}^{\perp} \|_2^2}.$$ Using Condition (\[eqn:lemma1\_ass2\]) we observe that $$\label{eqn:z-norm2} {\mbox{}\|{\boldsymbol{z}}_{opt}\|_2^2} \le \epsilon\mathcal{Z}^2.$$ To establish the first claim of the lemma, let us rewrite the norm of the residual vector as $$\begin{aligned} {\mbox{}\| {\boldsymbol{b}}- {\mathbf{A}}\tilde{{\boldsymbol{x}}}_{opt} \|_2^2} \nonumber &=& {\mbox{}\| {\boldsymbol{b}}- {\mathbf{A}}{\boldsymbol{x}}_{opt} + {\mathbf{A}}{\boldsymbol{x}}_{opt} - {\mathbf{A}}\tilde{{\boldsymbol{x}}}_{opt} \|_2^2} \\ \label{eqn:pfCeq1} &=& {\mbox{}\| {\boldsymbol{b}}- {\mathbf{A}}{\boldsymbol{x}}_{opt} \|_2^2} + {\mbox{}\| {\mathbf{A}}{\boldsymbol{x}}_{opt} - {\mathbf{A}}\tilde{{\boldsymbol{x}}}_{opt} \|_2^2} \\ \label{eqn:pfCeq2} &=& \mathcal{Z}^{2} + {\mbox{}\|-{\boldsymbol{U}}_A {\boldsymbol{z}}_{opt}\|_2^2} \\ \label{eqn:pfCeq3} &\leq& \mathcal{Z}^{2} + \epsilon \mathcal{Z}^{2} ,\end{aligned}$$ where Eqn. (\[eqn:pfCeq1\]) follows by the Pythagorean theorem, since ${\boldsymbol{b}}- {\mathbf{A}}{\boldsymbol{x}}_{opt} = {\boldsymbol{b}}^\perp$ is orthogonal to ${\mathbf{A}}$ and consequently to ${\mathbf{A}}({\boldsymbol{x}}_{opt} - \tilde{{\boldsymbol{x}}}_{opt})$; Eqn.
(\[eqn:pfCeq2\]) follows by the definition of ${\boldsymbol{z}}_{opt}$ and $\mathcal{Z}$; and Eqn. (\[eqn:pfCeq3\]) follows by Eqn. (\[eqn:z-norm2\]) and the fact that ${\boldsymbol{U}}_A$ has orthonormal columns. The first claim of the lemma follows since $\sqrt{1+\epsilon} \le 1+\epsilon$. To establish the second claim of the lemma, recall that ${\mathbf{A}}({\boldsymbol{x}}_{opt}-\tilde{{\boldsymbol{x}}}_{opt})={\boldsymbol{U}}_A{\boldsymbol{z}}_{opt}$. If we take the norm of both sides of this expression, we have that $$\begin{aligned} {\mbox{}\| {\boldsymbol{x}}_{opt}-\tilde{{\boldsymbol{x}}}_{opt} \|_2^2} \label{eqn:pfDeq1} &\leq& \frac {{\mbox{}\|{\boldsymbol{U}}_A{\boldsymbol{z}}_{opt}\|_2^2}} {\sigma_{min}^2({\mathbf{A}})} \\ \label{eqn:pfDeq2} &\leq& \frac {\epsilon\mathcal{Z}^2} {\sigma_{min}^2({\mathbf{A}})},\end{aligned}$$ where Eqn. (\[eqn:pfDeq1\]) follows since $\sigma_{min}({\mathbf{A}})$ is the smallest singular value of ${\mathbf{A}}$ and since the rank of ${\mathbf{A}}$ is $d$; and Eqn. (\[eqn:pfDeq2\]) follows by Eqn. (\[eqn:z-norm2\]) and the orthonormality of the columns of ${\boldsymbol{U}}_A$. Taking the square root, the second claim of the lemma follows. If we make no assumption on ${\boldsymbol{b}}$, then Eqn. (\[eqn:lemma1\_eq4\]) from Lemma \[lem:suff\_cond\] may provide a weak bound in terms of ${\mbox{}\|{\boldsymbol{x}}_{opt}\|_2}$. If, on the other hand, we make the additional assumption that a constant fraction of the norm of ${\boldsymbol{b}}$ lies in the subspace spanned by the columns of ${\mathbf{A}}$, then Eqn. (\[eqn:lemma1\_eq4\]) can be strengthened. Such an assumption is reasonable, since a least-squares problem is typically of practical interest only if at least some part of ${\boldsymbol{b}}$ lies in the subspace spanned by the columns of ${\mathbf{A}}$.
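The sufficient conditions and the resulting bounds of Lemma \[lem:suff\_cond\] can be checked numerically. The sketch below does so using a rescaled Gaussian stand-in for ${\boldsymbol{X}}$ (an assumption made to avoid the Hadamard machinery; all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, r = 4096, 8, 2048

A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + rng.standard_normal(n)

# Exact solution and the quantities appearing in the lemma
x_opt, *_ = np.linalg.lstsq(A, b, rcond=None)
U, _, _ = np.linalg.svd(A, full_matrices=False)   # U = U_A (n x d)
b_perp = b - U @ (U.T @ b)                        # part of b outside range(A)
Z = np.linalg.norm(b_perp)

# Rescaled Gaussian stand-in for the sketching matrix X
X = rng.standard_normal((r, n)) / np.sqrt(r)

# Condition (1): X U_A is an approximate isometry
cond1 = np.linalg.norm(np.eye(d) - (X @ U).T @ (X @ U), 2)
# Condition (2): X b_perp stays nearly orthogonal to range(X U_A);
# cond2 plays the role of epsilon/2 in the lemma
cond2 = np.linalg.norm(U.T @ (X.T @ (X @ b_perp))) ** 2 / Z ** 2

# Conclusion of the lemma: the sketched solution has a near-optimal residual
x_tilde = np.linalg.pinv(X @ A) @ (X @ b)
eps = 2 * cond2
assert np.linalg.norm(A @ x_tilde - b) <= (1 + eps) * Z + 1e-9
```

For these sizes, `cond1` is typically well below the $1 - 1/\sqrt{2} \approx 0.29$ threshold and `cond2` is a few times $d/r$.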
\[lem:suff\_cond2\] Using the notation of Lemma \[lem:suff\_cond\], and additionally assuming that ${\mbox{}\|{\boldsymbol{U}}_A {\boldsymbol{U}}_A^T{\boldsymbol{b}}\|_2} \geq \gamma{\mbox{}\|{\boldsymbol{b}}\|_2}$, for some fixed $\gamma \in (0,1]$, it follows that $${\mbox{}\|{\boldsymbol{x}}_{opt}-\tilde{{\boldsymbol{x}}}_{opt}\|_2} \leq \sqrt{\epsilon}\left(\kappa({\mathbf{A}})\sqrt{\gamma^{-2}-1}\right){\mbox{}\|{\boldsymbol{x}}_{opt}\|_2} .$$ Since ${\mbox{}\|{\boldsymbol{U}}_A {\boldsymbol{U}}_A^T{\boldsymbol{b}}\|_2} \geq \gamma{\mbox{}\|{\boldsymbol{b}}\|_2}$, it follows that $$\begin{aligned} \mathcal{Z}^2 \nonumber &=& {\mbox{}\|{\boldsymbol{b}}\|_2^2} - {\mbox{}\|{\boldsymbol{U}}_A {\boldsymbol{U}}_A^T {\boldsymbol{b}}\|_2^2} \\ \nonumber &\leq& (\gamma^{-2}-1) {\mbox{}\|{\boldsymbol{U}}_A {\boldsymbol{U}}_A^T {\boldsymbol{b}}\|_2^2} \\ \nonumber &\leq& {\sigma_{\max}^{2}({\mathbf{A}})}(\gamma^{-2}-1){\mbox{}\|{\boldsymbol{x}}_{opt}\|_2^2} .\end{aligned}$$ This last inequality follows from ${\boldsymbol{U}}_A {\boldsymbol{U}}_A^T{\boldsymbol{b}}= {\mathbf{A}}{\boldsymbol{x}}_{opt}$, which implies $${\mbox{}\|{\boldsymbol{U}}_A {\boldsymbol{U}}_A^T {\boldsymbol{b}}\|_2} = {\mbox{}\|{\mathbf{A}}{\boldsymbol{x}}_{opt}\|_2} \leq {\mbox{}\|{\mathbf{A}}\|_2} {\mbox{}\|{\boldsymbol{x}}_{opt}\|_2} = \sigma_{\max}\left({\mathbf{A}}\right){\mbox{}\|{\boldsymbol{x}}_{opt}\|_2}.$$ By combining this with Eqn. (\[eqn:lemma1\_eq4\]) of Lemma \[lem:suff\_cond\], the lemma follows. The proof of Theorem \[thm:alg\_sample\_fast\]. 
{#sxn:ls:thmproof} ----------------------------------------------- To prove Theorem \[thm:alg\_sample\_fast\], we adopt the following approach: we first show that the Randomized Hadamard Transform has the effect of preprocessing or preconditioning the input matrix to make the leverage scores approximately uniform; and we then show that Conditions (\[eqn:lemma1\_ass1\]) and (\[eqn:lemma1\_ass2\]) can be satisfied by sampling uniformly from the preconditioned input. The theorem will then follow from Lemma \[lem:suff\_cond\]. ### The effect of the Randomized Hadamard Transform. {#the-effect-of-the-randomized-hadamard-transform. .unnumbered} We start by stating a lemma that quantifies the manner in which ${\boldsymbol{H}}{\boldsymbol{D}}$ approximately “uniformizes” information in the left singular subspace of the matrix ${\mathbf{A}}$; this will allow us to sample uniformly and apply our randomized matrix multiplication results from Section \[chapter:MM\] in order to analyze the proposed algorithm. We state the lemma for a general $n \times d$ orthogonal matrix ${\boldsymbol{U}}$ such that ${\boldsymbol{U}}^T {\boldsymbol{U}}= {\boldsymbol{I}}_d$. \[lem:HU\] Let ${\boldsymbol{U}}$ be an $n \times d$ orthogonal matrix and let the product ${\boldsymbol{H}}{\boldsymbol{D}}$ be the $n \times n$ Randomized Hadamard Transform of Section \[sxn:RHT\]. Then, with probability at least $.95$, $$\begin{aligned} \label{eqn:lem:HU_eqn2} {\mbox{}\|\left({\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}\right)_{i*}\|_2^2} &\leq& \frac{2d\ln(40nd)}{n},\qquad \text{ for all } i =1,\ldots, n . \end{aligned}$$ The following well-known inequality [@Hoeffding1963 Theorem 2] will be useful in the proof. (See also the chapter by Vershynin in this volume [@pcmi-chapter-vershynin] for related results.) \[lem:hoef\] Let $X_i$, $i=1,\ldots, n$ be independent random variables with finite first and second moments such that, for all $i$, $a_i \leq X_i \leq b_i$.
Then, for any $t >0$, $${\mbox{}{\bf{Pr}}\left[\abs{\sum_{i=1}^n X_i - \sum_{i=1}^n {\mbox{}{\bf{E}}\left[X_i\right]}} \geq nt\right]} \leq 2\exp{\left(-\frac{2n^2t^2}{\sum_{i=1}^{n}(a_i - b_i)^2}\right)}.$$ Given this lemma, we now provide the proof of Lemma \[lem:HU\]. (of Lemma \[lem:HU\]) Consider $\left({\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}\right)_{ij}$ for some $i$, $j$ (recalling that $i=1,\ldots, n$ and $j = 1,\ldots, d$). Recall that ${\boldsymbol{D}}$ is a diagonal matrix; then, $$\left({\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}\right)_{ij} = \sum_{\ell=1}^n {\boldsymbol{H}}_{i\ell}{\boldsymbol{D}}_{\ell \ell} {\boldsymbol{U}}_{\ell j} = \sum_{\ell=1}^n {\boldsymbol{D}}_{\ell \ell} \left({\boldsymbol{H}}_{i\ell} {\boldsymbol{U}}_{\ell j}\right)=\sum_{\ell=1}^n X_{\ell}.$$ Let $X_{\ell}={\boldsymbol{D}}_{\ell \ell} \left({\boldsymbol{H}}_{i\ell} {\boldsymbol{U}}_{\ell j}\right)$ be our set of $n$ (independent) random variables. By the construction of ${\boldsymbol{D}}$ and ${\boldsymbol{H}}$, it is easy to see that ${\mbox{}{\bf{E}}\left[X_{\ell}\right]}=0$; also, $$\abs{X_{\ell}} = \abs{{\boldsymbol{D}}_{\ell \ell} \left({\boldsymbol{H}}_{i\ell} {\boldsymbol{U}}_{\ell j}\right)}\leq \frac{1}{\sqrt{n}}\abs{{\boldsymbol{U}}_{\ell j}}.$$ Applying Lemma \[lem:hoef\], we get $${\mbox{}{\bf{Pr}}\left[\abs{\left({\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}\right)_{ij}} \geq nt\right]}\leq 2\exp{\left(-\frac{2n^3t^2}{4\sum_{\ell=1}^{n}{\boldsymbol{U}}_{\ell j}^2}\right)}= 2\exp{\left(-n^3t^2/2\right)}.$$ In the last equality we used the fact that $\sum_{\ell=1}^{n}{\boldsymbol{U}}_{\ell j}^2=1$, i.e., that the columns of ${\boldsymbol{U}}$ are unit-length. 
Let the right-hand side of the above inequality be equal to $\delta$ and solve for $t$ to get $${\mbox{}{\bf{Pr}}\left[\abs{\left({\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}\right)_{ij}} \geq \sqrt{\frac{2 \ln(2/\delta)}{n}}\right]}\leq \delta.$$ Let $\delta = 1/(20nd)$ and apply the union bound over all $nd$ possible index pairs $(i,j)$ to get that, with probability at least 1-1/20=0.95, for all $i,j$, $$\abs{\left({\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}\right)_{ij}} \leq \sqrt{\frac{2 \ln(40nd)}{n}}.$$ Thus, $$\label{eqn:eqP1} {\mbox{}\|\left({\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}\right)_{i*}\|_2^2} = \sum_{j=1}^d \left({\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}\right)_{ij}^2 \leq \frac{2d\ln(40nd)}{n} $$ for all $i =1,\ldots, n$, which concludes the proof of the lemma. ### Satisfying Condition (\[eqn:lemma1\_ass1\]). {#satisfying-conditioneqnlemma1_ass1. .unnumbered} We next prove the following lemma, which states that all the singular values of ${\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A$ are close to one, and in particular that Condition (\[eqn:lemma1\_ass1\]) is satisfied by the <span style="font-variant:small-caps;">RandLeastSquares</span> algorithm. The proof of this Lemma \[lem:sample\_lem20pf\] essentially follows from our results in Theorem \[thm:theorem7correct\] for the <span style="font-variant:small-caps;">RandMatrixMultiply</span> algorithm (for approximating the product of a matrix and its transpose). \[lem:sample\_lem20pf\] Assume that Eqn. (\[eqn:lem:HU\_eqn2\]) holds. If $$\label{eqn:rvalue} r \geq 48^2 d \ln\left(40nd\right)\ln\left(100^2d \ln \left(40nd\right)\right) , $$ then, with probability at least .95, $$\abs{1 - \sigma_i^2\left({\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\right)} \leq 1-\frac{1}{\sqrt{2}}$$ holds for all $i =1,\ldots, d$. 
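The “uniformizing” effect of Lemma \[lem:HU\] can be observed directly. The sketch below implements a fast Walsh-Hadamard transform from scratch, applies ${\boldsymbol{H}}{\boldsymbol{D}}$ to a deliberately “spiky” orthogonal matrix ${\boldsymbol{U}}$ (whose leverage scores are maximally nonuniform), and checks the bound of Eqn. (\[eqn:lem:HU\_eqn2\]); the sizes are illustrative:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (unnormalized, Sylvester ordering)."""
    x = x.copy()
    n = x.shape[0]
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x

rng = np.random.default_rng(2)
n, d = 1024, 4

# A "spiky" orthogonal basis: the first d standard basis vectors, so one
# leverage score equals 1 (maximally nonuniform before preprocessing).
U = np.eye(n)[:, :d]

D = rng.choice([-1.0, 1.0], size=n)   # random diagonal sign flips
HDU = np.column_stack([fwht(D * U[:, j]) for j in range(d)]) / np.sqrt(n)

row_norms = (HDU ** 2).sum(axis=1)    # squared row norms of H D U
bound = 2 * d * np.log(40 * n * d) / n
# Here every entry of HDU is +/- 1/sqrt(n), so every squared row norm
# equals d/n exactly, far below the bound of the lemma.
```

Note that the row norms are perfectly flat in this extreme case; for a generic ${\boldsymbol{U}}$ they are merely close to uniform, which is all the lemma asserts.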
(of Lemma \[lem:sample\_lem20pf\]) Note that for all $i =1,\ldots, d$, $$\begin{aligned} \abs{1 - \sigma_i^2\left({\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\right)} \nonumber &=& \abs{\sigma_i\left({\boldsymbol{U}}_A^T {\boldsymbol{D}}{\boldsymbol{H}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\right) - \sigma_i\left({\boldsymbol{U}}_A^T {\boldsymbol{D}}{\boldsymbol{H}}^T {\boldsymbol{S}}{\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\right)} \\ \label{eqn:eqX31} &\leq& {\mbox{}\|{\boldsymbol{U}}_A^T {\boldsymbol{D}}{\boldsymbol{H}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A - {\boldsymbol{U}}_A^T {\boldsymbol{D}}{\boldsymbol{H}}^T {\boldsymbol{S}}{\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\|_2}. $$ In the above, we used the fact that ${\boldsymbol{U}}_A^T {\boldsymbol{D}}{\boldsymbol{H}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A = {\boldsymbol{I}}_d$ and inequality (\[eqn:svineq1\]) that was discussed in our Linear Algebra review in Section \[sxn:review:SVD\]. We now view ${\boldsymbol{U}}_A^T {\boldsymbol{D}}{\boldsymbol{H}}^T {\boldsymbol{S}}{\boldsymbol{S}}^T{\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A$ as an approximation to the product of two matrices, namely ${\boldsymbol{U}}_A^T {\boldsymbol{D}}{\boldsymbol{H}}^T=\left({\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\right)^T$ and ${\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A$, constructed by randomly sampling and rescaling columns of $\left({\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\right)^T$. Thus, we can leverage Theorem \[thm:theorem7correct\]. More specifically, consider the matrix $\left({\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\right)^T$. 
Obviously, since ${\boldsymbol{H}}$ and ${\boldsymbol{D}}$ are orthogonal matrices and ${\boldsymbol{U}}_A$ has orthonormal columns, ${\mbox{}\|{\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\|_2}=1$ and $\FNorm{{\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A}=\FNorm{{\boldsymbol{U}}_A}=\sqrt{d}$. Let $\beta = \left(2\ln(40nd)\right)^{-1}$; since we assumed that Eqn. (\[eqn:lem:HU\_eqn2\]) holds, we note that the columns of $\left({\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A \right)^T$, which correspond to the rows of ${\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A$, satisfy $$\label{eqn:unif_prob_OK} \frac{1}{n} \ge \beta \frac{{\mbox{}\|\left({\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\right)_{i*}\|_2^2}}{{\mbox{}\|{\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\|_F^2}}, \qquad \text{ for all } i =1,\ldots, n .$$ Thus, applying Theorem \[thm:theorem7correct\] with $\beta = \left(2\ln(40nd)\right)^{-1}$, $\epsilon = 1 - 1/\sqrt{2}$, and $\delta = 1/20$ implies that $${\mbox{}\|{\boldsymbol{U}}_A^T {\boldsymbol{D}}{\boldsymbol{H}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A - {\boldsymbol{U}}_A^T{\boldsymbol{D}}{\boldsymbol{H}}^T {\boldsymbol{S}}{\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\|_2} \leq 1-\frac{1}{\sqrt{2}}$$ holds with probability at least $1-1/20=.95$. For the above bound to hold, we need $r$ to assume the value of Eqn. (\[eqn:rvalue\]). Finally, we note that since ${\mbox{}\|{\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\|_F^2}=d \geq 1$, the assumption of Theorem \[thm:theorem7correct\] on the Frobenius norm of the input matrix is always satisfied. Combining the above with inequality (\[eqn:eqX31\]) concludes the proof of the lemma. ### Satisfying Condition (\[eqn:lemma1\_ass2\]). {#satisfying-conditioneqnlemma1_ass2. .unnumbered} We next prove the following lemma, which states that Condition (\[eqn:lemma1\_ass2\]) is satisfied by the <span style="font-variant:small-caps;">RandLeastSquares</span> algorithm.
The proof of this Lemma \[lem:sample\_lem40pf\] again essentially follows from our bounds for the <span style="font-variant:small-caps;">RandMatrixMultiply</span> algorithm from Section \[chapter:MM\] (except here it is used for approximating the product of a matrix and a vector). \[lem:sample\_lem40pf\] Assume that Eqn. (\[eqn:lem:HU\_eqn2\]) holds. If $r \geq 40d\ln(40nd)/\epsilon$, then, with probability at least .9, $${\mbox{}\| \left({\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\right)^{T}{\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{b}}^{\perp} \|_2^2} \leq \epsilon \mathcal{Z}^{2}/2.$$ (of Lemma \[lem:sample\_lem40pf\]) Recall that ${\boldsymbol{b}}^{\perp} = {\boldsymbol{U}}_A^{\perp}{{\boldsymbol{U}}_A^{\perp}}^{T}{\boldsymbol{b}}$ and that ${\mathcal Z} = {\mbox{}\|{\boldsymbol{b}}^{\perp}\|_2}$. We start by noting that since ${\mbox{}\|{\boldsymbol{U}}_A^T {\boldsymbol{D}}{\boldsymbol{H}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{b}}^{\perp}\|_2^2}={\mbox{}\|{\boldsymbol{U}}_A^T {\boldsymbol{b}}^{\perp}\|_2^2}=0$ it follows that $${\mbox{}\| \left({\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\right)^{T}{\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{b}}^{\perp} \|_2^2} = {\mbox{}\|{\boldsymbol{U}}_A^T {\boldsymbol{D}}{\boldsymbol{H}}^T {\boldsymbol{S}}{\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{b}}^{\perp} - {\boldsymbol{U}}_A^T {\boldsymbol{D}}{\boldsymbol{H}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{b}}^{\perp}\|_2^2} .$$ Thus, we can view $\left({\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\right)^{T}{\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{b}}^{\perp}$ as approximating the product of two matrices, $\left({\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\right)^{T}$ and ${\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{b}}^{\perp}$, by randomly sampling columns from 
$\left({\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\right)^T$ and rows (elements) from ${\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{b}}^{\perp}$. Note that the sampling probabilities are uniform and do not depend on the norms of the columns of $\left({\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\right)^{T}$ or the rows of ${\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{b}}^{\perp}$. We will apply the bounds of Eqn. (\[eqn:appopt2result\]), after arguing that the assumptions of Eqn. (\[eqn:appopt2\]) are satisfied. Indeed, since we condition on Eqn. (\[eqn:lem:HU\_eqn2\]) holding, the rows of ${\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A$ (which of course correspond to columns of $\left({\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\right)^T$) satisfy $$\frac{1}{n} \ge \beta \frac{{\mbox{}\|\left({\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\right)_{i*}\|_2^2}}{{\mbox{}\|{\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\|_F^2}}, \qquad \text{ for all } i =1,\ldots, n,$$ for $\beta = \left(2\ln(40nd)\right)^{-1}$. Thus, Eqn. (\[eqn:appopt2result\]) implies $${\mbox{}{\bf{E}}\left[ {\mbox{}\|\left({\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\right)^{T}{\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{b}}^{\perp} \|_2^2} \right]} \leq \frac{1}{\beta r}{\mbox{}\|{\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\|_F^2}{\mbox{}\|{\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{b}}^{\perp}\|_2^2} = \frac{d{\mathcal Z}^2}{\beta r}.$$ In the above we used ${\mbox{}\|{\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\|_F^2} = d$ and ${\mbox{}\|{\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{b}}^{\perp}\|_2} = {\mathcal Z}$ (since ${\boldsymbol{H}}{\boldsymbol{D}}$ is orthogonal). Markov’s inequality now implies that with probability at least .9, $${\mbox{}\|\left({\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{U}}_A\right)^{T}{\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{b}}^{\perp} \|_2^2} \leq \frac{10d{\mathcal Z}^2}{\beta r}.$$ Setting $r \geq 20d/(\beta\epsilon)$ and using the value of $\beta$ specified above concludes the proof of the lemma.
### Completing the proof of Theorem \[thm:alg\_sample\_fast\]. {#completing-the-proof-of-theoremthmalg_sample_fast. .unnumbered} The theorem follows since Lemmas \[lem:sample\_lem20pf\] and \[lem:sample\_lem40pf\] establish that the sufficient conditions of Lemma \[lem:suff\_cond\] hold. In more detail, we now complete the proof of Theorem \[thm:alg\_sample\_fast\]. First, let ${\mathcal E}_{(\ref{eqn:lem:HU_eqn2})}$ denote the event that Eqn. (\[eqn:lem:HU\_eqn2\]) holds; clearly, ${\mbox{}{\bf{Pr}}\left[{\mathcal E}_{(\ref{eqn:lem:HU_eqn2})}\right]} \geq .95$. Second, let ${\mathcal E}_{\ref{lem:sample_lem20pf},\ref{lem:sample_lem40pf}|(\ref{eqn:lem:HU_eqn2})}$ denote the event that both Lemmas \[lem:sample\_lem20pf\] and \[lem:sample\_lem40pf\] hold conditioned on ${\mathcal E}_{(\ref{eqn:lem:HU_eqn2})}$ holding. Then, $$\begin{aligned} {\mbox{}{\bf{Pr}}\left[{\mathcal E}_{\ref{lem:sample_lem20pf},\ref{lem:sample_lem40pf}|(\ref{eqn:lem:HU_eqn2})}\right]} &= 1 - {\mbox{}{\bf{Pr}}\left[\overline{{\mathcal E}_{\ref{lem:sample_lem20pf},\ref{lem:sample_lem40pf}|(\ref{eqn:lem:HU_eqn2})}}\right]}\\ &= 1 - \bf{Pr}\left(\left(\mbox{Lemma \ref{lem:sample_lem20pf} does not hold} | {\mathcal E}_{(\ref{eqn:lem:HU_eqn2})}\right)\right. \\ &\hspace{15mm} \textbf{OR}\left.\left(\mbox{Lemma \ref{lem:sample_lem40pf} does not hold} | {\mathcal E}_{(\ref{eqn:lem:HU_eqn2})}\right)\right)\\ &\ge 1 - {\mbox{}{\bf{Pr}}\left[\left(\mbox{Lemma \ref{lem:sample_lem20pf} does not hold} | {\mathcal E}_{(\ref{eqn:lem:HU_eqn2})}\right)\right]}\\ &\hspace{15mm} -{\mbox{}{\bf{Pr}}\left[\left(\mbox{Lemma \ref{lem:sample_lem40pf} does not hold} | {\mathcal E}_{(\ref{eqn:lem:HU_eqn2})}\right)\right]}\\ &\geq 1 - .05 - .1 = .85. \end{aligned}$$ In the above, $\overline{\mathcal E}$ denotes the complement of event ${\mathcal E}$. In the first inequality we used the union bound and in the second inequality we leveraged the bounds for the failure probabilities of Lemmas \[lem:sample\_lem20pf\] and \[lem:sample\_lem40pf\], given that Eqn. (\[eqn:lem:HU\_eqn2\]) holds.
We now let ${\mathcal E}$ denote the event that both Lemmas \[lem:sample\_lem20pf\] and \[lem:sample\_lem40pf\] hold, without any a priori conditioning on event ${\mathcal E}_{(\ref{eqn:lem:HU_eqn2})}$; we will bound ${\mbox{}{\bf{Pr}}\left[\mathcal E\right]}$ as follows: $$\begin{aligned} {\mbox{}{\bf{Pr}}\left[\mathcal E\right]} &=& {\mbox{}{\bf{Pr}}\left[{\mathcal E} | {\mathcal E}_{(\ref{eqn:lem:HU_eqn2})}\right]}\cdot {\mbox{}{\bf{Pr}}\left[{\mathcal E}_{(\ref{eqn:lem:HU_eqn2})}\right]} +{\mbox{}{\bf{Pr}}\left[{\mathcal E} | \overline{{\mathcal E}_{(\ref{eqn:lem:HU_eqn2})}}\right]}\cdot {\mbox{}{\bf{Pr}}\left[\overline{{\mathcal E}_{(\ref{eqn:lem:HU_eqn2})}}\right]}\\ &\geq& {\mbox{}{\bf{Pr}}\left[{\mathcal E} | {\mathcal E}_{(\ref{eqn:lem:HU_eqn2})}\right]}\cdot {\mbox{}{\bf{Pr}}\left[{\mathcal E}_{(\ref{eqn:lem:HU_eqn2})}\right]}\\ &=& {\mbox{}{\bf{Pr}}\left[{\mathcal E}_{\ref{lem:sample_lem20pf},\ref{lem:sample_lem40pf}|(\ref{eqn:lem:HU_eqn2})} | {\mathcal E}_{(\ref{eqn:lem:HU_eqn2})}\right]}\cdot {\mbox{}{\bf{Pr}}\left[{\mathcal E}_{(\ref{eqn:lem:HU_eqn2})}\right]}\\ &\geq& .85\cdot .95 \geq .8. $$ In the first inequality we used the fact that all probabilities are positive. The above derivation immediately bounds the success probability of Theorem \[thm:alg\_sample\_fast\]. Combining Lemmas \[lem:sample\_lem20pf\] and \[lem:sample\_lem40pf\] with the structural results of Lemma \[lem:suff\_cond\] and setting $r$ as in Eqn. (\[eqn:rvaluefinal\]) concludes the proof of the accuracy guarantees of Theorem \[thm:alg\_sample\_fast\]. The running time of the [R[AND]{}L[EAST]{}S[QUARES]{}]{} algorithm. {#sxn:lsruntime} ------------------------------------------------------------------- We now discuss the running time of the <span style="font-variant:small-caps;">RandLeastSquares</span> algorithm. First of all, by the construction of ${\boldsymbol{S}}$, the number of non-zero entries in ${\boldsymbol{S}}$ is $r$. 
In Step $6$ we need to compute the products ${\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\mathbf{A}}$ and ${\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{b}}$. Recall that ${\mathbf{A}}$ has $d$ columns and thus the running time of computing both products is equal to the time needed to apply ${\boldsymbol{S}}^T {\boldsymbol{H}}{\boldsymbol{D}}$ on $(d+1)$ vectors. In order to apply ${\boldsymbol{D}}$ on $(d+1)$ vectors in ${\mathbb{R}^{n}}$, $n(d+1)$ operations suffice. In order to estimate how many operations are needed to apply ${\boldsymbol{S}}^T {\boldsymbol{H}}$ on $(d+1)$ vectors, we use the following analysis that was first proposed in [@AL09 Section 7]. Let ${\boldsymbol{x}}$ be any vector in $\mathbb{R}^n$; multiplying ${\boldsymbol{H}}$ by ${\boldsymbol{x}}$ can be done as follows: $$\begin{aligned} \begin{pmatrix} {\boldsymbol{H}}_{n/2} & {\boldsymbol{H}}_{n/2} \\ {\boldsymbol{H}}_{n/2} & -{\boldsymbol{H}}_{n/2} \end{pmatrix} \begin{pmatrix} {\boldsymbol{x}}_1 \\ {\boldsymbol{x}}_2 \end{pmatrix} = \begin{pmatrix} {\boldsymbol{H}}_{n/2}({\boldsymbol{x}}_1 + {\boldsymbol{x}}_2) \\ {\boldsymbol{H}}_{n/2}({\boldsymbol{x}}_1-{\boldsymbol{x}}_2) \end{pmatrix}.\end{aligned}$$ Let $T(n)$ be the number of operations required to perform this operation for $n$-dimensional vectors. Then, $$T(n) = 2T(n/2) + n,$$ and thus $T(n) = O(n \log n)$. 
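This divide-and-conquer identity translates directly into code. The following sketch multiplies ${\boldsymbol{H}}_n$ by a vector using exactly this recursion and counts the additions and subtractions performed, confirming that $T(n) = 2T(n/2) + n$ solves to $n\log_2 n$ (the variable and function names are ours, for illustration):

```python
import numpy as np

def hadamard_times(x):
    """Compute H_n x via H_n = [[H_{n/2}, H_{n/2}], [H_{n/2}, -H_{n/2}]].

    Returns the product and the number of additions/subtractions used.
    """
    n = x.shape[0]
    if n == 1:
        return x.copy(), 0
    half = n // 2
    x1, x2 = x[:half], x[half:]
    top, ops1 = hadamard_times(x1 + x2)   # H_{n/2} (x1 + x2)
    bot, ops2 = hadamard_times(x1 - x2)   # H_{n/2} (x1 - x2)
    # The two vector additions above cost n operations at this level
    return np.concatenate([top, bot]), ops1 + ops2 + n

n = 256
x = np.random.default_rng(3).standard_normal(n)
y, ops = hadamard_times(x)
# Solving T(n) = 2T(n/2) + n with T(1) = 0 gives T(n) = n log2(n)
assert ops == n * int(np.log2(n))  # 256 * 8 = 2048
```

The same recursion with the subsampling matrix ${\boldsymbol{S}}$ folded in, as in the next display, is what yields the sharper $2n\log_2(r+1)$ count.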
We can now include the sub-sampling matrix ${\boldsymbol{S}}$ to get $$\begin{aligned} \begin{pmatrix} {\boldsymbol{S}}_1 & {\boldsymbol{S}}_2 \end{pmatrix} \begin{pmatrix} {\boldsymbol{H}}_{n/2} & {\boldsymbol{H}}_{n/2} \\ {\boldsymbol{H}}_{n/2} & -{\boldsymbol{H}}_{n/2} \end{pmatrix} \begin{pmatrix} {\boldsymbol{x}}_1 \\ {\boldsymbol{x}}_2 \end{pmatrix} = {\boldsymbol{S}}_1 {\boldsymbol{H}}_{n/2}({\boldsymbol{x}}_1 + {\boldsymbol{x}}_2) + {\boldsymbol{S}}_2 {\boldsymbol{H}}_{n/2} ({\boldsymbol{x}}_1 - {\boldsymbol{x}}_2).\end{aligned}$$ Let ${\mbox{nnz}(\cdot)}$ denote the number of non-zero entries of its argument. Then, $$T(n,{\mbox{nnz}({\boldsymbol{S}})}) = T(n/2, {\mbox{nnz}({\boldsymbol{S}}_1)}) + T(n/2,{\mbox{nnz}({\boldsymbol{S}}_2)}) + n.$$ From standard methods in the analysis of recursive algorithms, we can now use the fact that $r={\mbox{nnz}({\boldsymbol{S}})}={\mbox{nnz}({\boldsymbol{S}}_1)}+{\mbox{nnz}({\boldsymbol{S}}_2)}$ to prove that $$T(n,r) \leq 2n \log_2(r+1).$$ Towards that end, let $r_1={\mbox{nnz}({\boldsymbol{S}}_1)}$ and let $r_2={\mbox{nnz}({\boldsymbol{S}}_2)}$. Then, $$\begin{aligned} T(n,r) &= T(n/2, r_1) + T(n/2,r_2) + n\\ &\leq 2\frac{n}{2} \log_2(r_1+1)+2\frac{n}{2} \log_2(r_2+1)+n\log_2 2\\ &= n \log_2 (2(r_1+1)(r_2+1))\\ &\leq n \log_2 (r+1)^2\\ &= 2n \log_2 (r+1). $$ The last inequality follows from simple algebra using $r=r_1+r_2$. Thus, at most $2n(d+1)\log_2 \left(r+1\right)$ operations are needed to apply ${\boldsymbol{S}}^T{\boldsymbol{H}}{\boldsymbol{D}}$ on $d+1$ vectors. After this preprocessing, the <span style="font-variant:small-caps;">RandLeastSquares</span> algorithm must compute the pseudoinverse of an $r \times d$ matrix, or, equivalently, solve a least-squares problem on $r$ constraints and $d$ variables. This operation can be performed in $O(rd^2)$ time since $r \geq d$. Thus, the entire algorithm runs in time $$n(d+1) + 2n(d+1) \log_2 \left(r + 1\right) +\mathcal{O}\left(rd^2 \right).$$ References. 
----------- Our presentation in this chapter follows closely the derivations in [@DMMS11]; see [@DMMS11] for a detailed discussion of prior work on this topic. We also refer the interested reader to [@AMT10; @Woodruff2014] for follow-up work on randomized solvers for least-squares problems. A RandNLA Algorithm for Low-rank Matrix Approximation {#sxn:main:lowrank} ===================================================== In this section, we will present a simple randomized matrix algorithm for low-rank matrix approximation. Algorithms to compute low-rank approximations to matrices have been of paramount importance historically in scientific computing (see, for example, [@Saad2011] for traditional numerical methods based on subspace iteration and Krylov subspaces to compute such approximations) as well as more recently in machine learning and data analysis. RandNLA has pioneered an alternative approach, by applying random sampling and random projection algorithms to construct such low-rank approximations with provable accuracy guarantees; see [@dkm_matrix2] for early work on the topic and [@Mah-mat-rev_BOOK; @HMT09_SIREV; @Woodruff2014; @MD2016] for overviews of more recent approaches. Specifically, we will present and analyze a simple algorithm to approximate the top $k$ left singular vectors of a matrix ${\mathbf{A}}\in \mathbb{R}^{m \times n}$. Many RandNLA methods for low-rank approximation boil down to variants of this basic technique; see, e.g., the chapter by Martinsson in this volume [@pcmi-chapter-martinsson]. Unlike the previous section on RandNLA algorithms for regression problems, no particular assumptions will be imposed on $m$ and $n$; indeed, ${\mathbf{A}}$ could be a square matrix. The main algorithm and main theorem. {#sxn:sampling:result2} ------------------------------------ Our main algorithm is quite simple and again leverages the Randomized Hadamard Transform of Section \[sxn:RHT\]. 
Indeed, let the matrix product ${\boldsymbol{H}}{\boldsymbol{D}}$ denote the $n \times n$ Randomized Hadamard Transform. First, we *postmultiply* the input matrix ${\mathbf{A}}\in \mathbb{R}^{m \times n}$ by $\left({\boldsymbol{H}}{\boldsymbol{D}}\right)^T$, thus forming a new matrix ${\mathbf{A}}{\boldsymbol{D}}{\boldsymbol{H}}\in \mathbb{R}^{m \times n}$.[^4] Then, we sample (uniformly at random) $c$ columns from the matrix ${\mathbf{A}}{\boldsymbol{D}}{\boldsymbol{H}}$, thus forming a *smaller* matrix ${\boldsymbol{C}}\in \mathbb{R}^{m \times c}$. Finally, we use a Ritz-Rayleigh type procedure to construct approximations $\tilde{{\boldsymbol{U}}}_k \in \mathbb{R}^{m \times k}$ to the top $k$ left singular vectors of ${\mathbf{A}}$ from ${\boldsymbol{C}}$; these approximations lie within the column space of ${\boldsymbol{C}}$. See the <span style="font-variant:small-caps;">RandLowRank</span> algorithm (Algorithm \[alg:alg\_randlowrank\]) for a detailed description of this procedure, using a sampling-and-rescaling matrix ${\boldsymbol{S}}\in \mathbb{R}^{n \times c}$ to form the matrix ${\boldsymbol{C}}$. Theorem \[thm:relerrLowRank\] is our main quality-of-approximation result for the <span style="font-variant:small-caps;">RandLowRank</span> algorithm. \[thm:relerrLowRank\] Let ${\mathbf{A}}\in \mathbb{R}^{m \times n}$, let $k$ be a rank parameter, and let $\epsilon \in (0,1/2]$. 
If we set $$\label{eqn:cval5} c \geq c_0\frac{k\ln n}{\epsilon^2} \left(\ln\frac{k}{\epsilon^2}+\ln\ln n\right),$$ (for a fixed constant $c_0$) then, with probability at least .85, the <span style="font-variant:small-caps;">RandLowRank</span> algorithm returns a matrix $\tilde{{\boldsymbol{U}}}_k \in \mathbb{R}^{m \times k}$ such that $$\label{eqn:thm_relerrLowRank} \FNorm{{\mathbf{A}}-\tilde{{\boldsymbol{U}}}_k\tilde{{\boldsymbol{U}}}_k^T{\mathbf{A}}} \le (1+\epsilon) \FNorm{{\mathbf{A}}-{\boldsymbol{U}}_k{\boldsymbol{U}}_k^T{\mathbf{A}}} = (1+\epsilon) \FNorm{{\mathbf{A}}-{\mathbf{A}}_k}.$$ (Here, ${\boldsymbol{U}}_k \in \mathbb{R}^{m \times k}$ contains the top $k$ left singular vectors of ${\mathbf{A}}$). The running time of the <span style="font-variant:small-caps;">RandLowRank</span> algorithm is $O(mnc)$. We discuss the dimensions of the matrices in steps 6-9 of the <span style="font-variant:small-caps;">RandLowRank</span> algorithm. One can think of the matrix ${\boldsymbol{C}}\in \mathbb{R}^{m \times c}$ as a “sketch” of the input matrix ${\mathbf{A}}$. Notice that $c$ is (up to $\ln\ln$ factors and ignoring constant terms like $\epsilon$ and $\delta$) $O(k \ln k \ln n)$; the rank of ${\boldsymbol{C}}$ (denoted by $\rho_C$) is at least $k$, i.e., $\rho_C \geq k$. The matrix ${\boldsymbol{U}}_C$ has dimensions $m \times \rho_C$ and the matrix ${\boldsymbol{W}}$ has dimensions $\rho_C \times n$. Finally, the matrix ${\boldsymbol{U}}_{W,k}$ has dimensions $\rho_C \times k$ (by our assumption on the rank of ${\boldsymbol{W}}$). Recall that the *best* rank-$k$ approximation to ${\mathbf{A}}$ is equal to ${\mathbf{A}}_k={\boldsymbol{U}}_k{\boldsymbol{U}}_k^T{\mathbf{A}}$. 
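To make the pipeline concrete, here is a compact NumPy sketch (our illustration, with assumed dimensions and seed) of the <span style="font-variant:small-caps;">RandLowRank</span> steps; for simplicity it forms the Hadamard matrix explicitly, so it does not achieve the fast running time, and it assumes $n$ is a power of two.

```python
import numpy as np

def rand_low_rank(A, k, c, rng):
    """Illustrative RandLowRank: postmultiply A by the randomized Hadamard
    transform D H, sample c columns uniformly (rescaled by sqrt(n/c)),
    then a Ritz-Rayleigh step extracts k orthonormal directions."""
    m, n = A.shape
    H = np.array([[1.0]])
    while H.shape[0] < n:                       # Sylvester construction of H
        H = np.block([[H, H], [H, -H]])
    H /= np.sqrt(n)                             # normalized, so D H is orthogonal
    D = rng.choice([-1.0, 1.0], size=n)         # random signs (diagonal of D)
    idx = rng.integers(0, n, size=c)            # uniform column sampling
    C = ((A * D) @ H)[:, idx] * np.sqrt(n / c)  # C = A D H S
    U_C, _ = np.linalg.qr(C)                    # orthonormal basis of span(C)
    U_W, _, _ = np.linalg.svd(U_C.T @ A, full_matrices=False)
    return U_C @ U_W[:, :k]                     # approx. top-k left sing. vectors
```

On an exactly rank-$k$ input, the returned subspace reproduces ${\mathbf{A}}$ up to floating-point error once the sampled columns span the column space of ${\mathbf{A}}{\boldsymbol{D}}{\boldsymbol{H}}$.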
In words, Theorem \[thm:relerrLowRank\] argues that the <span style="font-variant:small-caps;">RandLowRank</span> algorithm returns a set of $k$ orthonormal vectors that are excellent approximations to the top $k$ left singular vectors of ${\mathbf{A}}$, in the sense that projecting ${\mathbf{A}}$ on the subspace spanned by $\tilde{{\boldsymbol{U}}}_k$ returns a matrix whose residual error is close to that of ${\mathbf{A}}_k$. We emphasize that the $O(mnc)$ running time of the <span style="font-variant:small-caps;">RandLowRank</span> algorithm is due to the Ritz-Rayleigh type procedure in steps (7)-(9). These steps guarantee that the proposed algorithm returns a matrix $\tilde{{\boldsymbol{U}}}_k$ with *exactly* $k$ columns that approximates the top $k$ left singular vectors of ${\mathbf{A}}$. The results of [@pcmi-chapter-martinsson] focus (in our parlance) on the matrix ${\boldsymbol{C}}$, which can be constructed much faster (see Section \[sxn:ch4:runningtime\]), in $O(mn\log_2 c)$ time, but has *more than* $k$ columns. One can bound the error term $\FNorm{{\mathbf{A}}-{\boldsymbol{C}}{\boldsymbol{C}}^{\dagger}{\mathbf{A}}}=\FNorm{{\mathbf{A}}-{\boldsymbol{U}}_C{\boldsymbol{U}}_C^T{\mathbf{A}}}$ to prove that the column span of ${\boldsymbol{C}}$ contains good approximations to the top $k$ left singular vectors of ${\mathbf{A}}$. Repeating the <span style="font-variant:small-caps;">RandLowRank</span> algorithm $\lceil\ln(1/\delta)/\ln5\rceil$ times and keeping the matrix $\tilde{{\boldsymbol{U}}}_k$ that minimizes the error $\FNorm{{\mathbf{A}}-\tilde{{\boldsymbol{U}}}_k\tilde{{\boldsymbol{U}}}_k^T{\mathbf{A}}}$ reduces the failure probability of the algorithm to at most $\delta$, for any $\delta\in(0,1)$. 
As with the sampling process in the <span style="font-variant:small-caps;">RandLeastSquares</span> algorithm, the operation represented by ${\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}$ in the <span style="font-variant:small-caps;">RandLowRank</span> algorithm can be viewed in one of two equivalent ways: either as a random preconditioning followed by a uniform sampling operation; or as a Johnson-Lindenstrauss style random projection. (In particular, informally, the <span style="font-variant:small-caps;">RandLowRank</span> algorithm “works” for the following reason. If a matrix is well-approximated by a low-rank matrix, then there is redundancy in the columns (and/or rows), and thus random sampling “should” be successful at selecting a good set of columns. That said, just as with the <span style="font-variant:small-caps;">RandLeastSquares</span> algorithm, there may be some columns that are more important to select, e.g., that have high leverage. Thus, using a random projection, which transforms the input to a new basis where the leverage scores of different columns are uniformized, amounts to preconditioning the input such that uniform sampling is appropriate.) The value $c$ is essentially[^5] equal to $O((k/\epsilon^2)\ln(k/\epsilon)\ln n)$. For constant $\epsilon$, this grows as a function of $k\ln k$ and $\ln n$. Similar bounds can be proven for many other random projection algorithms (using different values for $c$) and not just the Randomized Hadamard Transform. Well-known alternatives include random Gaussian matrices, the Randomized Discrete Cosine Transform, sparsity-preserving random projections, etc. Which variant is most appropriate in a given situation depends on the sparsity structure of the matrix, the noise properties of the data, the model of data access, etc. See [@Mah-mat-rev_BOOK; @Woodruff2014] for an overview of similar results. 
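As a small illustration of such an alternative (ours, with a dense Gaussian sketching matrix standing in for ${\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}$, and assumed dimensions and noise level), one can check that the column span of ${\mathbf{A}}{\boldsymbol{Z}}$ captures a noisy low-rank matrix nearly as well as the best approximation of the same rank:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k, c = 60, 40, 3, 10
# Rank-k signal plus small noise.
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n)) \
    + 1e-3 * rng.standard_normal((m, n))
Z = rng.standard_normal((n, c))   # Gaussian sketching matrix instead of D H S
Q, _ = np.linalg.qr(A @ Z)        # c orthonormal columns spanning A Z
err = np.linalg.norm(A - Q @ (Q.T @ A))
# err can never beat the best rank-c error and, here, sits near the noise floor.
```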
One can generalize the <span style="font-variant:small-caps;">RandLowRank</span> algorithm to work with the matrix $({\mathbf{A}}{\mathbf{A}}^T)^t{\mathbf{A}}{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}$ for integer $t\geq 0$. This results in subspace iteration; if all intermediate iterates (for $t=0,1,\ldots$) are kept, a Krylov subspace is formed. See [@MM2015; @DrineasIKM16] and references therein for a detailed treatment and analysis of such methods. (See also the chapter by Martinsson in this volume [@pcmi-chapter-martinsson] for related results.) The remainder of this section will focus on the proof of Theorem \[thm:relerrLowRank\]. Our proof strategy will consist of three steps. First (Section \[s\_aux1\]), we will prove that: $${\mbox{}\|{\mathbf{A}}-\tilde{\boldsymbol{U}}_k\tilde{\boldsymbol{U}}_k^T{\mathbf{A}}\|_F^2} \leq {\mbox{}\|{\mathbf{A}}_k-{\boldsymbol{U}}_C {\boldsymbol{U}}_C^T{\mathbf{A}}_k\|_F^2} + {\mbox{}\|{\mathbf{A}}_{k,\perp}\|_F^2}.$$ The above inequality allows us to manipulate the easier-to-bound term ${\mbox{}\|{\mathbf{A}}_k-{\boldsymbol{U}}_C {\boldsymbol{U}}_C^T{\mathbf{A}}_k\|_F^2}$ instead of the term ${\mbox{}\|{\mathbf{A}}-\tilde{\boldsymbol{U}}_k\tilde{\boldsymbol{U}}_k^T{\mathbf{A}}\|_F^2}$. Second (Section \[s\_aux2\]), to bound this term, we will use a structural inequality that is central (in this form or mild variations) in many RandNLA low-rank approximation algorithms and their analyses. 
Indeed, we will argue that $$\begin{aligned} {\mbox{}\|{\mathbf{A}}_k-{\boldsymbol{U}}_C {\boldsymbol{U}}_C^T{\mathbf{A}}_k\|_F^2} &\leq2{\mbox{}\|({\mathbf{A}}-{\mathbf{A}}_k){\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}(({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})^{\dagger}-({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})^T)\|_F^2}\\ &\hspace{15mm}+2{\mbox{}\|({\mathbf{A}}-{\mathbf{A}}_k){\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})^T\|_F^2}.\end{aligned}$$ Third (Section \[s\_aux3\]), we will use results from Section \[chapter:MM\] to bound the two terms on the right-hand side of the above inequality. An alternative expression for the error. {#s\_aux1} ---------------------------------------- The <span style="font-variant:small-caps;">RandLowRank</span> algorithm approximates the top $k$ left singular vectors of ${\mathbf{A}}$, i.e., the matrix ${\boldsymbol{U}}_k \in \mathbb{R}^{m \times k}$, by the orthonormal matrix $\tilde{\boldsymbol{U}}_k \in \mathbb{R}^{m \times k}$. Bounding $\|{\mathbf{A}}-\tilde{\boldsymbol{U}}_k\tilde{\boldsymbol{U}}_k^T{\mathbf{A}}\|_F$ directly seems hard, so we present an alternative expression that is easier to analyze and that also reveals an interesting insight for $\tilde{{\boldsymbol{U}}}_k$. We will prove that the matrix $\tilde{\boldsymbol{U}}_k \tilde{\boldsymbol{U}}_k^T {\mathbf{A}}$ is the best rank-$k$ approximation to ${\mathbf{A}}$ (with respect to the Frobenius norm[^6]) that lies within the column space of the matrix ${\boldsymbol{C}}$. This optimality property is guaranteed by the Ritz-Rayleigh type procedure implemented in Steps 7-9 of the <span style="font-variant:small-caps;">RandLowRank</span> algorithm. 
\[lem:restate\] Let ${\boldsymbol{U}}_C$ be a basis for the column span of ${\boldsymbol{C}}$ and let $\tilde{{\boldsymbol{U}}}_k$ be the output of the <span style="font-variant:small-caps;">RandLowRank</span> algorithm. Then $${\mathbf{A}}-\tilde{\boldsymbol{U}}_k\tilde{\boldsymbol{U}}_k^T{\mathbf{A}}= {\mathbf{A}}-{\boldsymbol{U}}_C \left({\boldsymbol{U}}_C^T{\mathbf{A}}\right)_k.$$ In addition, ${\boldsymbol{U}}_C ({\boldsymbol{U}}_C^T{\mathbf{A}})_k$ is the best rank-$k$ approximation to ${\mathbf{A}}$, with respect to the Frobenius norm, that lies within the column span of the matrix ${\boldsymbol{C}}$, namely $$\label{eqn:opt1} {\mbox{}\|{\mathbf{A}}-{\boldsymbol{U}}_C ({\boldsymbol{U}}_C^T{\mathbf{A}})_k\|_F^2} = \min_{{\mbox{rank}({\boldsymbol{Y}})}\leq k}{\mbox{}\|{\mathbf{A}}- {\boldsymbol{U}}_C {\boldsymbol{Y}}\|_F^2}.$$ Recall that $\tilde{{\boldsymbol{U}}}_k={\boldsymbol{U}}_C{\boldsymbol{U}}_{W,k}$, where ${\boldsymbol{U}}_{W,k}$ is the matrix of the top $k$ left singular vectors of ${\boldsymbol{W}}= {\boldsymbol{U}}_C^T{\mathbf{A}}$. Thus, ${\boldsymbol{U}}_{W,k}$ spans the same range as ${\boldsymbol{W}}_k$, the best rank-$k$ approximation to ${\boldsymbol{W}}$, i.e., ${\boldsymbol{U}}_{W,k}{\boldsymbol{U}}_{W,k}^T={\boldsymbol{W}}_k {\boldsymbol{W}}_k^{\dagger}$. Therefore $$\begin{aligned} {\mathbf{A}}-\tilde{\boldsymbol{U}}_k\tilde{\boldsymbol{U}}_k^T{\mathbf{A}}&= {\mathbf{A}}-{\boldsymbol{U}}_C {\boldsymbol{U}}_{W,k}{\boldsymbol{U}}_{W,k}^T{\boldsymbol{U}}_C^T{\mathbf{A}}\\ &= {\mathbf{A}}-{\boldsymbol{U}}_C {\boldsymbol{W}}_k {\boldsymbol{W}}_k^{\dagger} {\boldsymbol{W}}= {\mathbf{A}}-{\boldsymbol{U}}_C {\boldsymbol{W}}_k.\end{aligned}$$ The last equality follows from ${\boldsymbol{W}}_k{\boldsymbol{W}}_k^{\dagger}$ being the orthogonal projector onto the range of ${\boldsymbol{W}}_k$. 
In order to prove the optimality property of the lemma, we simply observe that $$\begin{aligned} {\mbox{}\|{\mathbf{A}}-{\boldsymbol{U}}_C ({\boldsymbol{U}}_C^T{\mathbf{A}})_k\|_F^2} &= {\mbox{}\|{\mathbf{A}}-{\boldsymbol{U}}_C{\boldsymbol{U}}_C^T{\mathbf{A}}+ {\boldsymbol{U}}_C{\boldsymbol{U}}_C^T{\mathbf{A}}- {\boldsymbol{U}}_C ({\boldsymbol{U}}_C^T{\mathbf{A}})_k\|_F^2}\\ &={\mbox{}\|({\boldsymbol{I}}-{\boldsymbol{U}}_C{\boldsymbol{U}}_C^T){\mathbf{A}}+ {\boldsymbol{U}}_C({\boldsymbol{U}}_C^T{\mathbf{A}}- ({\boldsymbol{U}}_C^T{\mathbf{A}})_k)\|_F^2}\\ &={\mbox{}\|({\boldsymbol{I}}-{\boldsymbol{U}}_C{\boldsymbol{U}}_C^T){\mathbf{A}}\|_F^2} + {\mbox{}\|{\boldsymbol{U}}_C({\boldsymbol{U}}_C^T{\mathbf{A}}- ({\boldsymbol{U}}_C^T{\mathbf{A}})_k)\|_F^2}\\ &={\mbox{}\|({\boldsymbol{I}}-{\boldsymbol{U}}_C{\boldsymbol{U}}_C^T){\mathbf{A}}\|_F^2} + {\mbox{}\|{\boldsymbol{U}}_C^T{\mathbf{A}}- ({\boldsymbol{U}}_C^T{\mathbf{A}})_k\|_F^2}.\end{aligned}$$ The second to last equality follows from Matrix Pythagoras (Lemma \[l\_pyth\]) and the last equality follows from the orthonormality of the columns of ${\boldsymbol{U}}_C$. The second statement of the lemma is now immediate since $({\boldsymbol{U}}_C^T{\mathbf{A}})_k$ is the best rank-$k$ approximation to ${\boldsymbol{U}}_C^T{\mathbf{A}}$ and thus any other matrix ${\boldsymbol{Y}}$ of rank at most $k$ would result in a larger Frobenius norm error. Lemma \[lem:restate\] shows that Eqn. (\[eqn:thm\_relerrLowRank\]) in Theorem \[thm:relerrLowRank\] can be proven by bounding $\|{\mathbf{A}}-{\boldsymbol{U}}_C \left({\boldsymbol{U}}_C^T{\mathbf{A}}\right)_k\|_F$. Next, we transition from the best rank-$k$ approximation of the projected matrix $({\boldsymbol{U}}_C^T{\mathbf{A}})_k$ to the best rank-$k$ approximation ${\mathbf{A}}_k$ of the original matrix. 
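Before doing so, the identity established in Lemma \[lem:restate\] is easy to check numerically. The snippet below is our illustration (an arbitrary column sample stands in for ${\boldsymbol{C}}$, with small assumed dimensions); it verifies that projecting ${\mathbf{A}}$ onto the span of $\tilde{\boldsymbol{U}}_k={\boldsymbol{U}}_C{\boldsymbol{U}}_{W,k}$ reproduces ${\boldsymbol{U}}_C({\boldsymbol{U}}_C^T{\mathbf{A}})_k$:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, c, k = 30, 20, 8, 3
A = rng.standard_normal((m, n))
C = A[:, rng.choice(n, size=c, replace=False)]  # any sketch C works here
U_C, _ = np.linalg.qr(C)                        # orthonormal basis of span(C)
U_W, s, Vt = np.linalg.svd(U_C.T @ A, full_matrices=False)
W_k = (U_W[:, :k] * s[:k]) @ Vt[:k]             # best rank-k approx. of W = U_C^T A
U_tilde = U_C @ U_W[:, :k]
# Lemma: A - U_tilde U_tilde^T A = A - U_C (U_C^T A)_k.
assert np.allclose(U_tilde @ (U_tilde.T @ A), U_C @ W_k)
```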
First (recall the notation introduced in Section \[sxn:review:SVD\]), we split $$\begin{aligned} \label{e_Ai} {\mathbf{A}}={\mathbf{A}}_k+ {\mathbf{A}}_{k,\perp},\ \text{where}\ {\mathbf{A}}_k={\boldsymbol{U}}_k{\boldsymbol{\Sigma}}_k{\boldsymbol{V}}_k^T \ \text{and}\ {\mathbf{A}}_{k,\perp}={\boldsymbol{U}}_{k,\perp}{\boldsymbol{\Sigma}}_{k,\perp}{\boldsymbol{V}}_{k,\perp}^T.\end{aligned}$$ \[l\_aux2\] Let ${\boldsymbol{U}}_C$ be an orthonormal basis for the column span of the matrix ${\boldsymbol{C}}$ and let $\tilde{{\boldsymbol{U}}}_k$ be the output of the <span style="font-variant:small-caps;">RandLowRank</span> algorithm. Then, $${\mbox{}\|{\mathbf{A}}-\tilde{\boldsymbol{U}}_k\tilde{\boldsymbol{U}}_k^T{\mathbf{A}}\|_F^2} \leq {\mbox{}\|{\mathbf{A}}_k-{\boldsymbol{U}}_C {\boldsymbol{U}}_C^T{\mathbf{A}}_k\|_F^2} + {\mbox{}\|{\mathbf{A}}_{k,\perp}\|_F^2}.$$ The optimality property in Eqn. (\[eqn:opt1\]) in Lemma \[lem:restate\] and the fact that ${\boldsymbol{U}}_C^T{\mathbf{A}}_k$ has rank at most $k$ imply $$\begin{aligned} {\mbox{}\|{\mathbf{A}}-\tilde{\boldsymbol{U}}_k\tilde{\boldsymbol{U}}_k^T{\mathbf{A}}\|_F^2} &= {\mbox{}\|{\mathbf{A}}-{\boldsymbol{U}}_C \left({\boldsymbol{U}}_C^T{\mathbf{A}}\right)_k\|_F^2}\nonumber\\ &\leq {\mbox{}\|{\mathbf{A}}-{\boldsymbol{U}}_C {\boldsymbol{U}}_C^T{\mathbf{A}}_k\|_F^2} \nonumber\\ &= {\mbox{}\|{\mathbf{A}}_k-{\boldsymbol{U}}_C{\boldsymbol{U}}_C^T{\mathbf{A}}_k\|_F^2} + {\mbox{}\|{\mathbf{A}}_{k,\perp}\|_F^2}.\end{aligned}$$ The last equality follows from Lemma \[l\_pyth\]. A structural inequality. {#s_aux2} ------------------------ We now state and prove a structural inequality that will help us bound ${\mbox{}\|{\mathbf{A}}_k-{\boldsymbol{U}}_C {\boldsymbol{U}}_C^T{\mathbf{A}}_k\|_F^2}$ (the first term in the error bound of Lemma \[l\_aux2\]). This structural inequality, or minor variants of it, underlie nearly all RandNLA algorithms for low-rank matrix approximation [@MD2016]. 
To understand this structural inequality, recall that, given a matrix ${\mathbf{A}}\in \mathbb{R}^{m \times n}$, many RandNLA algorithms seek to construct a “sketch” of ${\mathbf{A}}$ by post-multiplying ${\mathbf{A}}$ by some “sketching” matrix ${\boldsymbol{Z}}\in \mathbb{R}^{n \times c}$, where $c$ is much smaller than $n$. (In particular, this is precisely what the <span style="font-variant:small-caps;">RandLowRank</span> algorithm does.) Thus, the resulting matrix ${\mathbf{A}}{\boldsymbol{Z}}\in \mathbb{R}^{m \times c}$ is much smaller than the original matrix ${\mathbf{A}}$, and the interesting question is the approximation guarantees that it offers. A common approach is to explore how well ${\mathbf{A}}{\boldsymbol{Z}}$ spans the principal subspace of ${\mathbf{A}}$, and one metric of accuracy is some norm of the error matrix ${\mathbf{A}}_k - ({\mathbf{A}}{\boldsymbol{Z}})({\mathbf{A}}{\boldsymbol{Z}})^{\dagger}{\mathbf{A}}_k$, where $({\mathbf{A}}{\boldsymbol{Z}})({\mathbf{A}}{\boldsymbol{Z}})^{\dagger} {\mathbf{A}}_k$ is the projection of ${\mathbf{A}}_k$ onto the subspace spanned by the columns of ${\mathbf{A}}{\boldsymbol{Z}}$. (See Section \[sxn:review:MP\] for the definition of the Moore-Penrose pseudoinverse of a matrix.) The following structural result offers a means to bound the Frobenius norm of the error matrix ${\mathbf{A}}_k - ({\mathbf{A}}{\boldsymbol{Z}})({\mathbf{A}}{\boldsymbol{Z}})^{\dagger}{\mathbf{A}}_k$. \[lem:mainlemma\_general\] Given ${\mathbf{A}}\in \mathbb{R}^{m \times n}$, let ${\boldsymbol{Z}}\in \mathbb{R}^{n \times c}$ ($c \geq k$) be any matrix such that ${\boldsymbol{V}}_k^T{\boldsymbol{Z}}\in \mathbb{R}^{k \times c}$ has rank $k$. Then, $${\mbox{}\|{\mathbf{A}}_k - ({\mathbf{A}}{\boldsymbol{Z}})({\mathbf{A}}{\boldsymbol{Z}})^{\dagger}{\mathbf{A}}_k\|_F^2} \leq {\mbox{}\|\left({\mathbf{A}}-{\mathbf{A}}_k\right){\boldsymbol{Z}}({\boldsymbol{V}}_k^T{\boldsymbol{Z}})^{\dagger}\|_F^2}. 
\label{eqn:struct-cond-low-rank-gen}$$ Lemma \[lem:mainlemma\_general\] holds for *any* matrix ${\boldsymbol{Z}}$, regardless of whether ${\boldsymbol{Z}}$ is constructed deterministically or randomly. In the context of RandNLA, typical constructions of ${\boldsymbol{Z}}$ would represent a random sampling or random projection operation, like the matrix ${\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}$ used in the <span style="font-variant:small-caps;">RandLowRank</span> algorithm. The lemma actually holds for any unitarily invariant norm, including the spectral and nuclear norms of a matrix [@MD2016]. See [@MD2016] for a detailed discussion of such structural inequalities and their history. Lemma \[lem:mainlemma\_general\] immediately suggests a proof strategy for bounding the error of RandNLA algorithms for low-rank matrix approximation: identify a sketching matrix ${\boldsymbol{Z}}$ such that ${\boldsymbol{V}}_k^T{\boldsymbol{Z}}$ has full rank; and, at the same time, bound the relevant norms of $\left({\boldsymbol{V}}_k^T{\boldsymbol{Z}}\right)^\dagger$ and $({\mathbf{A}}-{\mathbf{A}}_k) {\boldsymbol{Z}}$. (of Lemma \[lem:mainlemma\_general\]) First, note that $$({\mathbf{A}}{\boldsymbol{Z}})^{\dagger}{\mathbf{A}}_k = \mbox{argmin}_{{\boldsymbol{X}}\in \mathbb{R}^{c \times n}} {\mbox{}\|{\mathbf{A}}_k - \left({\mathbf{A}}{\boldsymbol{Z}}\right){\boldsymbol{X}}\|_F^2}.$$ This equation follows by viewing the optimization problem as least-squares regression with multiple right-hand sides. Interestingly, this property holds for any unitarily invariant norm, but the proof is involved; see Lemma 4.2 of [@DrineasIKM16] for a detailed discussion. This implies that instead of bounding ${\mbox{}\|{\mathbf{A}}_k - ({\mathbf{A}}{\boldsymbol{Z}})({\mathbf{A}}{\boldsymbol{Z}})^{\dagger}{\mathbf{A}}_k\|_F^2}$, we can replace $\left({\mathbf{A}}{\boldsymbol{Z}}\right)^{\dagger}{\mathbf{A}}_k$ with any other $c \times n$ matrix and the equality with an inequality. 
In particular, we replace $\left({\mathbf{A}}{\boldsymbol{Z}}\right)^{\dagger} {\mathbf{A}}_k$ with $\left({\mathbf{A}}_k{\boldsymbol{Z}}\right)^{\dagger} {\mathbf{A}}_k$: $$\begin{aligned} \nonumber {\mbox{}\|{\mathbf{A}}_k - ({\mathbf{A}}{\boldsymbol{Z}})({\mathbf{A}}{\boldsymbol{Z}})^{\dagger}{\mathbf{A}}_k\|_F^2} \nonumber &\leq {\mbox{}\| {\mathbf{A}}_k - {\mathbf{A}}{\boldsymbol{Z}}\left({\mathbf{A}}_k {\boldsymbol{Z}}\right)^{\dagger} {\mathbf{A}}_k\|_F^2}.\end{aligned}$$ This suboptimal choice for ${\boldsymbol{X}}$ is essentially the “heart” of our proof: it allows us to manipulate and further decompose the error term, thus making the remainder of the analysis feasible. Use ${\mathbf{A}}= {\mathbf{A}}-{\mathbf{A}}_k+{\mathbf{A}}_k$ to get $$\begin{aligned} \nonumber {\mbox{}\|{\mathbf{A}}_k - ({\mathbf{A}}{\boldsymbol{Z}})({\mathbf{A}}{\boldsymbol{Z}})^{\dagger}{\mathbf{A}}_k\|_F^2} &\leq {\mbox{}\| {\mathbf{A}}_k - ({\mathbf{A}}- {\mathbf{A}}_k+ {\mathbf{A}}_k) {\boldsymbol{Z}}\left({\mathbf{A}}_k{\boldsymbol{Z}}\right)^{\dagger} {\mathbf{A}}_k\|_F^2} \\ \nonumber &= {\mbox{}\|{\mathbf{A}}_k-{\mathbf{A}}_k{\boldsymbol{Z}}({\mathbf{A}}_k{\boldsymbol{Z}})^{\dagger} {\mathbf{A}}_k-({\mathbf{A}}-{\mathbf{A}}_k){\boldsymbol{Z}}({\mathbf{A}}_k{\boldsymbol{Z}})^{\dagger} {\mathbf{A}}_k\|_F^2}\\ \nonumber &= {\mbox{}\|({\mathbf{A}}-{\mathbf{A}}_k){\boldsymbol{Z}}({\mathbf{A}}_k{\boldsymbol{Z}})^{\dagger} {\mathbf{A}}_k\|_F^2}. 
\end{aligned}$$ To derive the last equality, we used $$\begin{aligned} \nonumber {\mathbf{A}}_k - {\mathbf{A}}_k{\boldsymbol{Z}}({\mathbf{A}}_k{\boldsymbol{Z}})^{\dagger} {\mathbf{A}}_k &= {\mathbf{A}}_k - {\boldsymbol{U}}_k{\boldsymbol{\Sigma}}_k{\boldsymbol{V}}_k^T{\boldsymbol{Z}}({\boldsymbol{U}}_k{\boldsymbol{\Sigma}}_k{\boldsymbol{V}}_k^T{\boldsymbol{Z}})^{\dagger} {\mathbf{A}}_k \\ \label{eq31} &= {\mathbf{A}}_k - {\boldsymbol{U}}_k{\boldsymbol{\Sigma}}_k({\boldsymbol{V}}_k^T{\boldsymbol{Z}}) ({\boldsymbol{V}}_k^T{\boldsymbol{Z}})^{\dagger}{\boldsymbol{\Sigma}}_k^{-1}{\boldsymbol{U}}_k^T {\mathbf{A}}_k \\ \label{eq32} &= {\mathbf{A}}_k - {\boldsymbol{U}}_k{\boldsymbol{U}}_k^T {\mathbf{A}}_k={\boldsymbol{0}}.\end{aligned}$$ In Eqn. (\[eq31\]), we used Eqn. (\[eqn:pinv\]) and the fact that both matrices ${\boldsymbol{V}}_k^T{\boldsymbol{Z}}$ and ${\boldsymbol{U}}_k{\boldsymbol{\Sigma}}_k$ have rank $k$. Since ${\boldsymbol{V}}_k^T{\boldsymbol{Z}}$ has full row rank $k$, we also have $({\boldsymbol{V}}_k^T{\boldsymbol{Z}}) ({\boldsymbol{V}}_k^T{\boldsymbol{Z}})^{\dagger}={\boldsymbol{I}}_k$, which yields Eqn. (\[eq32\]). Finally, the fact that ${\boldsymbol{U}}_k{\boldsymbol{U}}_k^T {\mathbf{A}}_k={\mathbf{A}}_k$ concludes the derivation and the proof of the lemma. Completing the proof of Theorem \[thm:relerrLowRank\]. {#s\_aux3} ------------------------------------------------------ In order to complete the proof of the relative error guarantee of Theorem \[thm:relerrLowRank\], we will carry out the strategy outlined at the end of Section \[sxn:sampling:result2\]. 
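As an aside, the structural inequality of Lemma \[lem:mainlemma\_general\] is straightforward to sanity-check numerically; the snippet below is our illustration, using a Gaussian sketching matrix ${\boldsymbol{Z}}$ and small assumed dimensions:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, c, k = 25, 20, 6, 3
A = rng.standard_normal((m, n))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = (U[:, :k] * s[:k]) @ Vt[:k]                # best rank-k approximation
Z = rng.standard_normal((n, c))                  # generic sketching matrix
assert np.linalg.matrix_rank(Vt[:k] @ Z) == k    # hypothesis of the lemma
AZ = A @ Z
lhs = np.linalg.norm(A_k - AZ @ (np.linalg.pinv(AZ) @ A_k)) ** 2
rhs = np.linalg.norm((A - A_k) @ Z @ np.linalg.pinv(Vt[:k] @ Z)) ** 2
assert lhs <= rhs + 1e-9                         # the structural inequality
```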
First, recall from Lemma \[l\_aux2\] that $$\label{eqn:drin1} {\mbox{}\|{\mathbf{A}}-\tilde{\boldsymbol{U}}_k\tilde{\boldsymbol{U}}_k^T{\mathbf{A}}\|_F^2} \leq {\mbox{}\|{\mathbf{A}}_k-{\boldsymbol{U}}_C {\boldsymbol{U}}_C^T{\mathbf{A}}_k\|_F^2} + {\mbox{}\|{\mathbf{A}}-{\mathbf{A}}_{k}\|_F^2},$$ so it suffices to bound the first term on the right-hand side of the above inequality. To do so, we will apply the structural result of Lemma \[lem:mainlemma\_general\] to the matrix $${\boldsymbol{\Phi}}= {\mathbf{A}}{\boldsymbol{D}}{\boldsymbol{H}},$$ with ${\boldsymbol{Z}}={\boldsymbol{S}}$, where the matrices ${\boldsymbol{D}}$, ${\boldsymbol{H}}$, and ${\boldsymbol{S}}$ are constructed as described in the <span style="font-variant:small-caps;">RandLowRank</span> algorithm. Lemma \[lem:mainlemma\_general\] states that if ${\boldsymbol{V}}_{\Phi,k}^T{\boldsymbol{S}}$ has rank $k$, then $${\mbox{}\|{\boldsymbol{\Phi}}_k - ({\boldsymbol{\Phi}}{\boldsymbol{S}})({\boldsymbol{\Phi}}{\boldsymbol{S}})^{\dagger}{\boldsymbol{\Phi}}_k\|_F^2}\leq {\mbox{}\|({\boldsymbol{\Phi}}-{\boldsymbol{\Phi}}_k){\boldsymbol{S}}({\boldsymbol{V}}_{\Phi,k}^T{\boldsymbol{S}})^{\dagger}\|_F^2}. \label{eqn:struct-cond-low-rank-gen:phi}$$ Here, we used ${\boldsymbol{V}}_{\Phi,k} \in \mathbb{R}^{n \times k}$ to denote the matrix of the top $k$ right singular vectors of ${\boldsymbol{\Phi}}$. Recall from Section \[sxn:RHT\] that ${\boldsymbol{D}}{\boldsymbol{H}}$ is an orthogonal matrix and thus the left singular vectors and the singular values of the matrices ${\mathbf{A}}$ and ${\boldsymbol{\Phi}}={\mathbf{A}}{\boldsymbol{D}}{\boldsymbol{H}}$ are identical. 
The right singular vectors of the matrix ${\boldsymbol{\Phi}}$ are simply the right singular vectors of ${\mathbf{A}}$, rotated by ${\boldsymbol{D}}{\boldsymbol{H}}$, namely $${\boldsymbol{V}}_{{\boldsymbol{\Phi}}}^T={\boldsymbol{V}}^T{\boldsymbol{D}}{\boldsymbol{H}},$$ where ${\boldsymbol{V}}$ (respectively, ${\boldsymbol{V}}_{{\boldsymbol{\Phi}}}$) denotes the matrix of the right singular vectors of ${\mathbf{A}}$ (respectively, ${\boldsymbol{\Phi}}$). Thus, ${\boldsymbol{\Phi}}_k={\mathbf{A}}_k{\boldsymbol{D}}{\boldsymbol{H}}$, ${\boldsymbol{\Phi}}-{\boldsymbol{\Phi}}_k=({\mathbf{A}}-{\mathbf{A}}_k){\boldsymbol{D}}{\boldsymbol{H}}$, and ${\boldsymbol{V}}_{\Phi,k}={\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{V}}_{k}$ (equivalently, ${\boldsymbol{V}}_{\Phi,k}^T={\boldsymbol{V}}_{k}^T{\boldsymbol{D}}{\boldsymbol{H}}$). Using all the above, we can rewrite Eqn. (\[eqn:struct-cond-low-rank-gen:phi\]) as follows: $${\mbox{}\|{\mathbf{A}}_k - ({\mathbf{A}}{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})({\mathbf{A}}{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})^{\dagger}{\mathbf{A}}_k\|_F^2}\leq {\mbox{}\|({\mathbf{A}}-{\mathbf{A}}_k){\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}({\boldsymbol{V}}_{k}^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})^{\dagger}\|_F^2}. \label{eqn:struct-cond-low-rank-gen:phi2}$$ In the above derivation, we used unitary invariance to drop a ${\boldsymbol{D}}{\boldsymbol{H}}$ term from the Frobenius norm. 
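Since ${\boldsymbol{D}}{\boldsymbol{H}}$ is orthogonal, this bookkeeping can be verified directly in a few lines (our check, with $n$ assumed to be a power of two):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 8
H = np.array([[1.0]])
while H.shape[0] < n:                    # Sylvester construction of H
    H = np.block([[H, H], [H, -H]])
H /= np.sqrt(n)                          # normalized Hadamard matrix
D = np.diag(rng.choice([-1.0, 1.0], size=n))
A = rng.standard_normal((5, n))
Phi = A @ D @ H
assert np.allclose(H @ H.T, np.eye(n))   # D H is orthogonal
# A and Phi = A D H share their singular values (and left singular vectors).
assert np.allclose(np.linalg.svd(A, compute_uv=False),
                   np.linalg.svd(Phi, compute_uv=False))
```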
Recall that ${\mathbf{A}}_{k,\perp}={\mathbf{A}}-{\mathbf{A}}_k$; we now proceed to manipulate the right-hand side of the above inequality as follows[^7]: $$\begin{aligned} \nonumber{\mbox{}\|{\mathbf{A}}_{k,\perp}{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})^{\dagger}\|_F^2} \\ \nonumber &\hspace{-20mm}={\mbox{}\|{\mathbf{A}}_{k,\perp}{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}(({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})^{\dagger}-({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})^T+({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})^T)\|_F^2}\\ \label{eqn:ch4ppp5}&\hspace{-20mm}\leq 2{\mbox{}\|{\mathbf{A}}_{k,\perp}{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}(({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})^{\dagger}-({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})^T)\|_F^2}\\ \label{eqn:ch4ppp6}&\hspace{-10mm}+ 2{\mbox{}\|{\mathbf{A}}_{k,\perp}{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})^T\|_F^2}.\end{aligned}$$ We now proceed to bound the terms in (\[eqn:ch4ppp5\]) and (\[eqn:ch4ppp6\]) separately. Our first order of business, however, will be to quantify the manner in which the Randomized Hadamard Transform approximately uniformizes information in the top $k$ right singular vectors of ${\mathbf{A}}$. ### The effect of the Randomized Hadamard Transform. 
{#the-effect-of-the-randomized-hadamard-transform.-1 .unnumbered} Here, we state a lemma that quantifies the manner in which ${\boldsymbol{H}}{\boldsymbol{D}}$ (premultiplying ${\boldsymbol{V}}_k$, or ${\boldsymbol{D}}{\boldsymbol{H}}$ postmultiplying ${\boldsymbol{V}}_k^T$) approximately “uniformizes” information in the right singular subspace of the matrix ${\mathbf{A}}$, thus allowing us to apply our matrix multiplication results from Section \[chapter:MM\] in order to bound (\[eqn:ch4ppp5\]) and (\[eqn:ch4ppp6\]). This is completely analogous to our discussion in Section \[sxn:ls:thmproof\] regarding the <span style="font-variant:small-caps;">RandLeastSquares</span> algorithm. \[lem:HUnew\] Let ${\boldsymbol{V}}_k$ be an $n \times k$ matrix with orthonormal columns and let the product ${\boldsymbol{H}}{\boldsymbol{D}}$ be the $n \times n$ Randomized Hadamard Transform of Section \[sxn:RHT\]. Then, with probability at least $.95$, $$\begin{aligned} \label{eqn:lem:HU_eqn2new} {\mbox{}\|\left({\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{V}}_k \right)_{i*}\|_2^2} &\leq \frac{2k\ln(40nk)}{n},\qquad \text{ for all } i=1,\ldots,n.\end{aligned}$$ The proof of the above lemma is identical to the proof of Lemma \[lem:HU\], with ${\boldsymbol{V}}_k$ instead of ${\boldsymbol{U}}$ and $k$ instead of $d$. ### Bounding Expression (\[eqn:ch4ppp5\]). 
To bound the term in Expression (\[eqn:ch4ppp5\]), we first use the strong submultiplicativity of the Frobenius norm (see Section \[sxn:review:FNorm\]) to get $$\begin{aligned} \nonumber & \hspace{-20mm} {\mbox{}\|{\mathbf{A}}_{k,\perp}{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}(({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})^{\dagger}-({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})^T)\|_F^2} \\ \label{eqn:n1ppp5}&\leq {\mbox{}\|{\mathbf{A}}_{k,\perp}{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}\|_F^2}{\mbox{}\|({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})^{\dagger}-({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})^T\|_2^2}.\end{aligned}$$ Our first lemma bounds the term ${\mbox{}\|({\mathbf{A}}-{\mathbf{A}}_k){\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}\|_F^2} = {\mbox{}\|{\mathbf{A}}_{k,\perp}{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}\|_F^2}$. We actually prove the result for any matrix ${\boldsymbol{X}}$ and for our choice of the matrix ${\boldsymbol{S}}$ in the <span style="font-variant:small-caps;">RandLowRank</span> algorithm. \[lem:FNormExp\] Let the sampling matrix ${\boldsymbol{S}}\in \mathbb{R}^{n \times c}$ be constructed as in the <span style="font-variant:small-caps;">RandLowRank</span> algorithm. 
Then, for any matrix ${\boldsymbol{X}}\in \mathbb{R}^{m\times n}$, $${\mbox{}{\bf{E}}\left[{\mbox{}\|{\boldsymbol{X}}{\boldsymbol{S}}\|_F^2}\right]}={\mbox{}\|{\boldsymbol{X}}\|_F^2} ,$$ and, from Markov’s inequality (see Section \[sxn:review:Markov\]), with probability at least 0.95, $${\mbox{}\|{\boldsymbol{X}}{\boldsymbol{S}}\|_F^2} \leq 20{\mbox{}\|{\boldsymbol{X}}\|_F^2}.$$ The above lemma holds even if the sampling of the canonical vectors ${\boldsymbol{e}}_i$ to be included in ${\boldsymbol{S}}$ is not done uniformly at random, but with respect to any set of probabilities $\{p_1,\ldots,p_n\}$ summing up to one, as long as the selected canonical vector at the $t$-th trial (say the $i_t$-th canonical vector ${\boldsymbol{e}}_{i_t}$) is rescaled by $\sqrt{1/(cp_{i_t})}$. Thus, even for nonuniform sampling, ${\mbox{}\|{\boldsymbol{X}}{\boldsymbol{S}}\|_F^2}$ is an unbiased estimator of ${\mbox{}\|{\boldsymbol{X}}\|_F^2}$. (of Lemma \[lem:FNormExp\]) We compute the expectation of ${\mbox{}\|{\boldsymbol{X}}{\boldsymbol{S}}\|_F^2}$ from first principles as follows: $$\begin{aligned} {\mbox{}{\bf{E}}\left[{\mbox{}\|{\boldsymbol{X}}{\boldsymbol{S}}\|_F^2}\right]}&=\sum_{t=1}^c\sum_{j=1}^n \frac{1}{n}\cdot{\mbox{}\|\sqrt{\frac{n}{c}}{\boldsymbol{X}}_{*j}\|_F^2}= \frac{1}{c}\sum_{t=1}^c\sum_{j=1}^n {\mbox{}\|{\boldsymbol{X}}_{*j}\|_F^2}={\mbox{}\|{\boldsymbol{X}}\|_F^2}.\end{aligned}$$ The lemma now follows by applying Markov’s inequality. We can now prove the following lemma, which will be conditioned on Eqn. (\[eqn:lem:HU\_eqn2new\]) holding. \[lem:sample\_lem40pfn1\] Assume that Eqn. (\[eqn:lem:HU\_eqn2new\]) holds. 
If $c$ satisfies $$\label{eqn:ppdd2} c \geq \frac{192k\ln(40nk)}{\epsilon^2} \ln\left(\frac{192\sqrt{20}k\ln(40nk)}{\epsilon^2}\right),$$ then with probability at least .95, $${\mbox{}\|\left({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}\right)^{\dagger}-\left({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}\right)^T\|_2^2}\leq2\epsilon^2.$$ Let $\sigma_i$ denote the $i$-th singular value of the matrix ${\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}$. Conditioned on Eqn. (\[eqn:lem:HU\_eqn2new\]) holding, we can replicate the proof of Lemma \[lem:sample\_lem20pf\] to argue that if $c$ satisfies Eqn. (\[eqn:ppdd2\]), then, with probability at least .95, $$\label{eqn:ppdd1} \abs{1 - \sigma_i^2} \leq \epsilon$$ holds for all $i$. (Indeed, we can replicate the proof of Lemma \[lem:sample\_lem20pf\] using ${\boldsymbol{V}}_k$ instead of ${\boldsymbol{U}}_A$ and $k$ instead of $d$; we also evaluate the bound for arbitrary $\epsilon$ instead of fixing it.) We now observe that the matrices $$\left({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}\right)^{\dagger} \quad\mbox{and}\quad \left({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}\right)^T$$ have the same left and right singular vectors[^8]. Recall that in this lemma we used $\sigma_i$ to denote the singular values of the matrix ${\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}$. Then, the singular values of the matrix $\left({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}\right)^T$ are equal to the $\sigma_i$’s, while the singular values of the matrix $\left({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}\right)^{\dagger}$ are equal to $\sigma_i^{-1}$. 
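As a numerical aside (not part of the proof), the relationship just described between the pseudoinverse, the transpose, and the singular values $\sigma_i$ can be checked in a few lines of NumPy; the matrix `M` below is a generic stand-in for ${\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}$ with singular values close to one:

```python
import numpy as np

rng = np.random.default_rng(0)
k, c = 5, 20
# Build a k x c rank-k matrix with singular values near 1,
# mimicking V_k^T D H S after the subspace-embedding step.
U, _ = np.linalg.qr(rng.standard_normal((k, k)))
V, _ = np.linalg.qr(rng.standard_normal((c, k)))
sigma = 1.0 + 0.1 * rng.uniform(-1, 1, size=k)
M = U @ np.diag(sigma) @ V.T

# ||M^+ - M^T||_2^2 equals max_i (sigma_i^{-1} - sigma_i)^2,
# because both matrices share left/right singular vectors.
lhs = np.linalg.norm(np.linalg.pinv(M) - M.T, 2) ** 2
rhs = np.max((1.0 / sigma - sigma) ** 2)
assert np.isclose(lhs, rhs)
```

The assertion confirms the identity $\|{\boldsymbol{M}}^{\dagger}-{\boldsymbol{M}}^T\|_2^2=\max_i(\sigma_i^{-1}-\sigma_i)^2$ that the proof exploits.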
Thus, $${\mbox{}\|\left({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}\right)^{\dagger}-\left({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}\right)^T\|_2^2} = \max_i\abs{\sigma_i^{-1}-\sigma_i}^2= \max_i\abs{(1-\sigma_i^2)^2\sigma_i^{-2}}.$$ Combining with Eqn. (\[eqn:ppdd1\]) and using the fact that $\epsilon \leq 1/2$, $${\mbox{}\|\left({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}\right)^{\dagger}-\left({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}\right)^T\|_2^2}= \max_i\abs{(1-\sigma_i^2)^2\sigma_i^{-2}}\leq (1-\epsilon)^{-1}\epsilon^2\leq2\epsilon^2.$$ \[lem:sample\_lem40pfn2\] Assume that Eqn. (\[eqn:lem:HU\_eqn2new\]) holds. If $c$ satisfies Eqn. (\[eqn:ppdd2\]), then, with probability at least .9, $$2{\mbox{}\|{\mathbf{A}}_{k,\perp}{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}(({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})^{\dagger}-({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})^T)\|_F^2}\leq 80\epsilon^2{\mbox{}\|{\mathbf{A}}_{k,\perp}\|_F^2}.$$ We combine Eqn. (\[eqn:n1ppp5\]) with Lemma \[lem:FNormExp\] (applied to ${\boldsymbol{X}}={\mathbf{A}}_{k,\perp}{\boldsymbol{D}}{\boldsymbol{H}}$, noting that ${\mbox{}\|{\mathbf{A}}_{k,\perp}{\boldsymbol{D}}{\boldsymbol{H}}\|_F^2}={\mbox{}\|{\mathbf{A}}_{k,\perp}\|_F^2}$) and Lemma \[lem:sample\_lem40pfn1\] to get that, conditioned on Eqn. (\[eqn:lem:HU\_eqn2new\]) holding, with probability at least 1-0.05-0.05=0.9, $${\mbox{}\|{\mathbf{A}}_{k,\perp}{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}(({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})^{\dagger}-({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})^T)\|_F^2}\leq 40\epsilon^2{\mbox{}\|{\mathbf{A}}_{k,\perp}\|_F^2}.$$ The aforementioned failure probability follows from a simple union bound on the failure probabilities of Lemmas \[lem:FNormExp\] and \[lem:sample\_lem40pfn1\]. ### Bounding Expression (\[eqn:ch4ppp6\]). {#bounding-expressioneqnch4ppp6. .unnumbered} Our bound for Expression (\[eqn:ch4ppp6\]) will be conditioned on Eqn.
(\[eqn:lem:HU\_eqn2new\]) holding; then, we will use our matrix multiplication results from Section \[chapter:MM\] to derive our bounds. Our discussion is completely analogous to the proof of Lemma \[lem:sample\_lem40pf\]. We will prove the following lemma. \[lem:sample\_lem40pfn3\] Assume that Eqn. (\[eqn:lem:HU\_eqn2new\]) holds. If $c \geq 40k\ln(40nk)/\epsilon$, then, with probability at least .95, $${\mbox{}\|{\mathbf{A}}_{k,\perp}{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})^T\|_F^2} \leq \epsilon{\mbox{}\|{\mathbf{A}}_{k,\perp}\|_F^2}.$$ To prove the lemma, we first observe that $${\mbox{}\|{\mathbf{A}}_{k,\perp}{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}({\boldsymbol{V}}_k^T{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}})^T\|_F^2}={\mbox{}\|{\mathbf{A}}_{k,\perp}{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}{\boldsymbol{S}}^T{\boldsymbol{H}}^T{\boldsymbol{D}}{\boldsymbol{V}}_k-{\mathbf{A}}_{k,\perp}{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{H}}^T{\boldsymbol{D}}{\boldsymbol{V}}_k\|_F^2},$$ since ${\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{H}}^T{\boldsymbol{D}}={\boldsymbol{I}}_n$ and ${\mathbf{A}}_{k,\perp}{\boldsymbol{V}}_k={\boldsymbol{0}}$. Thus, we can view ${\mathbf{A}}_{k,\perp}{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}{\boldsymbol{S}}^T{\boldsymbol{H}}^T{\boldsymbol{D}}{\boldsymbol{V}}_k$ as approximating the product of two matrices, ${\mathbf{A}}_{k,\perp}{\boldsymbol{D}}{\boldsymbol{H}}$ and ${\boldsymbol{H}}^T{\boldsymbol{D}}{\boldsymbol{V}}_k$, by randomly sampling columns from the first matrix and the corresponding rows from the second matrix. Note that the sampling probabilities are uniform and do not depend on the two matrices involved in the product. We will apply the bounds of Eqn. (\[eqn:appopt3result\]), after arguing that the assumptions of Eqn. (\[eqn:appopt3\]) are satisfied. Indeed, since we condition on Eqn. 
(\[eqn:lem:HU\_eqn2new\]) holding, the rows of ${\boldsymbol{H}}^T{\boldsymbol{D}}{\boldsymbol{V}}_k={\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{V}}_k$ satisfy $$\frac{1}{n} \ge \beta \frac{{\mbox{}\|\left({\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{V}}_k\right)_{i*}\|_2^2}}{k}, \qquad \text{ for all } i =1,\ldots, n,$$ for $\beta = \left(2\ln(40nk)\right)^{-1}$. Thus, Eqn. (\[eqn:appopt3result\]) implies $$\begin{aligned} {\mbox{}{\bf{E}}\left[{\mbox{}\|{\mathbf{A}}_{k,\perp}{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}{\boldsymbol{S}}^T{\boldsymbol{H}}^T{\boldsymbol{D}}{\boldsymbol{V}}_k-{\mathbf{A}}_{k,\perp}{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{H}}^T{\boldsymbol{D}}{\boldsymbol{V}}_k\|_F^2}\right]} &\leq \frac{1}{\beta c}{\mbox{}\|{\mathbf{A}}_{k,\perp}{\boldsymbol{D}}{\boldsymbol{H}}\|_F^2}{\mbox{}\|{\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{V}}_k\|_F^2}\\ &= \frac{k}{\beta c}{\mbox{}\|{\mathbf{A}}_{k,\perp}\|_F^2}.\end{aligned}$$ In the above we used ${\mbox{}\|{\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{V}}_k\|_F^2} = k$. Markov’s inequality now implies that with probability at least .95, $${\mbox{}\|{\mathbf{A}}_{k,\perp}{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{S}}{\boldsymbol{S}}^T{\boldsymbol{H}}^T{\boldsymbol{D}}{\boldsymbol{V}}_k-{\mathbf{A}}_{k,\perp}{\boldsymbol{D}}{\boldsymbol{H}}{\boldsymbol{H}}^T{\boldsymbol{D}}{\boldsymbol{V}}_k\|_F^2} \leq \frac{20k}{\beta c}{\mbox{}\|{\mathbf{A}}_{k,\perp}\|_F^2}. $$ Setting $c \geq 20k/(\beta\epsilon)$ and using the value of $\beta$ specified above concludes the proof of the lemma. ### Concluding the proof of Theorem \[thm:relerrLowRank\]. {#concluding-the-proof-of-theoremthmrelerrlowrank. .unnumbered} We are now ready to conclude the proof, and therefore we revert back to using ${\mathbf{A}}_{k,\perp}={\mathbf{A}}-{\mathbf{A}}_k$. We first state the following lemma. \[lem:sample\_lem40pfn4\] Assume that Eqn. (\[eqn:lem:HU\_eqn2new\]) holds.
There exists a constant $c_0$ such that, if $$\label{eqn:cvalfinal} c \geq c_0\frac{k\ln n}{\epsilon^2} \ln\left(\frac{k\ln n}{\epsilon^2}\right),$$ then with probability at least .85, $$\FNorm{{\mathbf{A}}-\tilde{\boldsymbol{U}}_k\tilde{\boldsymbol{U}}_k^T{\mathbf{A}}} \leq (1+\epsilon)\FNorm{{\mathbf{A}}-{\mathbf{A}}_k}.$$ Combining Lemma \[l\_aux2\] with Expressions (\[eqn:ch4ppp5\]) and (\[eqn:ch4ppp6\]), and Lemmas \[lem:sample\_lem40pfn2\] and \[lem:sample\_lem40pfn3\], we get $${\mbox{}\|{\mathbf{A}}-\tilde{\boldsymbol{U}}_k\tilde{\boldsymbol{U}}_k^T{\mathbf{A}}\|_F^2} \leq (1+\epsilon+80\epsilon^2){\mbox{}\|{\mathbf{A}}-{\mathbf{A}}_k\|_F^2}\leq (1+41\epsilon){\mbox{}\|{\mathbf{A}}-{\mathbf{A}}_k\|_F^2}.$$ The last inequality follows by using $\epsilon\leq 1/2$. Taking square roots of both sides and using $\sqrt{1+41\epsilon}\leq 1+21\epsilon$, we get $$\FNorm{{\mathbf{A}}-\tilde{\boldsymbol{U}}_k\tilde{\boldsymbol{U}}_k^T{\mathbf{A}}} \leq \sqrt{1+41\epsilon}\,\FNorm{{\mathbf{A}}-{\mathbf{A}}_k}\leq (1+21\epsilon)\FNorm{{\mathbf{A}}-{\mathbf{A}}_k}.$$ Observe that $c$ has to be set to the maximum of the values used in Lemmas \[lem:sample\_lem40pfn2\] and \[lem:sample\_lem40pfn3\], which is the value of Eqn. (\[eqn:ppdd2\]). Adjusting $\epsilon$ to $\epsilon/21$ and appropriately adjusting the constants in the expression of $c$ concludes the proof of the lemma. (We made no particular effort to compute or optimize the constant $c_0$ in the expression of $c$.) The failure probability follows by a union bound on the failure probabilities of Lemmas \[lem:sample\_lem40pfn2\] and \[lem:sample\_lem40pfn3\] conditioned on Eqn. (\[eqn:lem:HU\_eqn2new\]). To conclude the proof of Theorem \[thm:relerrLowRank\], we simply need to remove the conditional probability from Lemma \[lem:sample\_lem40pfn4\].
Towards that end, we follow the same strategy as in Section \[sxn:ls:thmproof\], to conclude that the success probability of the overall approach is at least $0.85\cdot 0.95 \geq 0.8$. Running time. {#sxn:ch4:runningtime} ------------- The <span style="font-variant:small-caps;">RandLowRank</span> algorithm computes the product ${\boldsymbol{C}}={\mathbf{A}}{\boldsymbol{H}}{\boldsymbol{D}}{\boldsymbol{S}}$ using the ideas of Section \[sxn:lsruntime\], thus taking $2n(m+1)\log_2(c+1)$ time. Step 7 takes $O(mc^2)$ time; step 8 takes $O(mnc+nc^2)$ time; step 9 takes $O(mck)$ time. Overall, the running time is, asymptotically, dominated by the $O(mnc)$ term in step 8, with $c$ as in Eqn. (\[eqn:cvalfinal\]). References. ----------- Our presentation in this chapter follows the derivations in [@DrineasIKM16]. We also refer the interested reader to [@MM2015; @Wang2015] for related work. Acknowledgements. {#acknowledgements. .unnumbered} ----------------- The authors would like to thank Ilse Ipsen for allowing them to use her slides for the introductory linear algebra lecture delivered at the PCMI Summer School. The first section of this chapter is heavily based on those slides. The authors would also like to thank Aritra Bose, Eugenia-Maria Kontopoulou, and Fred Roosta for their help in proofreading early drafts of this manuscript. [^1]: Purdue University, Computer Science Department, 305 N University Street, West Lafayette, IN 47906. Email: `pdrineas@purdue.edu`. [^2]: University of California at Berkeley, ICSI and Department of Statistics, 367 Evans Hall, Berkeley, CA 94720. Email: `mmahoney@stat.berkeley.edu`. [^3]: This chapter is based on lectures from the 2016 Park City Mathematics Institute summer school on *The Mathematics of Data*, and it appears as a chapter [@pcmi-chapter-randnla] in the edited volume of lectures from that summer school. [^4]: Alternatively, we could *premultiply* ${\mathbf{A}}^T$ by ${\boldsymbol{H}}{\boldsymbol{D}}$.
The reader should become comfortable going back and forth with such manipulations. [^5]: We omit the $\ln \ln n$ term from this qualitative remark. Recall that $\ln \ln n$ goes to infinity with dignity and therefore, quoting Stan Eisenstat, $\ln \ln n$ is for all practical purposes essentially a constant; see <https://rjlipton.wordpress.com/2011/01/19/we-believe-a-lot-but-can-prove-little/>. [^6]: This is not true for other unitarily invariant norms, e.g., the two-norm; see [@BDM2014] for a detailed discussion. [^7]: We use the following easy-to-prove version of the triangle inequality for the Frobenius norm: for any two matrices ${\boldsymbol{X}}$ and ${\boldsymbol{Y}}$ that have the same dimensions, ${\mbox{}\|{\boldsymbol{X}}+{\boldsymbol{Y}}\|_F^2}\leq 2{\mbox{}\|{\boldsymbol{X}}\|_F^2}+2{\mbox{}\|{\boldsymbol{Y}}\|_F^2}$. [^8]: Given any matrix ${\boldsymbol{X}}$ with thin SVD ${\boldsymbol{X}}={\boldsymbol{U}}_X{\boldsymbol{\Sigma}}_X{\boldsymbol{V}}_X^T$, its transpose is ${\boldsymbol{X}}^T={\boldsymbol{V}}_X{\boldsymbol{\Sigma}}_X{\boldsymbol{U}}_X^T$ and its pseudoinverse is ${\boldsymbol{X}}^{\dagger}={\boldsymbol{V}}_X{\boldsymbol{\Sigma}}_X^{-1}{\boldsymbol{U}}_X^T$.
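To complement the running-time discussion, the following is a minimal NumPy sketch of the overall <span style="font-variant:small-caps;">RandLowRank</span> pipeline. It is our own illustrative rendering, not the chapter's exact algorithm: a random orthogonal matrix stands in for the normalized Hadamard transform (so the fast $2n(m+1)\log_2(c+1)$ product becomes a dense one), and uniform sampling with replacement plays the role of ${\boldsymbol{S}}$; all variable names are ours.

```python
import numpy as np

def rand_low_rank(A, k, c, rng):
    """Sketch of RandLowRank: mix the columns of A with random signs D
    and an orthogonal matrix H (stand-in for the Hadamard transform),
    sample c rescaled columns, then take the top-k left singular
    vectors of the small sketch."""
    m, n = A.shape
    D = rng.choice([-1.0, 1.0], size=n)               # random signs
    H, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random mixing matrix
    B = (A * D) @ H                                   # A * diag(D) * H
    idx = rng.integers(0, n, size=c)                  # uniform, with replacement
    C = B[:, idx] * np.sqrt(n / c)                    # rescaled sampled columns
    Uc, _, _ = np.linalg.svd(C, full_matrices=False)
    return Uc[:, :k]                                  # rank-k orthonormal basis

rng = np.random.default_rng(1)
m, n, k = 60, 50, 3
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))  # exactly rank k
U = rand_low_rank(A, k, c=30, rng=rng)
err = np.linalg.norm(A - U @ (U.T @ A), "fro")
# For an exactly rank-k input, ||A - A_k||_F = 0, so err should be tiny.
```

For a rank-$k$ input the sketch's column space coincides with the column space of ${\mathbf{A}}$ almost surely, so the residual is at the level of machine precision.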
--- abstract: 'We propose a robust a posteriori error estimator for the hybridizable discontinuous Galerkin (HDG) method for convection-diffusion equations with dominant convection. The reliability and efficiency of the estimator are established for the error measured in an energy norm. The energy norm is uniformly bounded even when the diffusion coefficient tends to zero. The estimators are robust in the sense that the upper and lower bounds of the error are uniformly bounded with respect to the diffusion coefficient. A weighted test function technique and the Oswald interpolation are key ingredients in the analysis. Numerical results verify the robustness of the proposed a posteriori error estimator.' address: - 'School of Mathematical Sciences and Fujian Provincial Key Laboratory on Mathematical Modeling & High Performance Scientific Computing, Xiamen University, Fujian, 361005, P.R. China' - 'Faculty of Science, South University of Science and Technology of China, Shenzhen, 518055, China' - 'Department of Mathematics, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong, China' author: - Huangxin Chen - Jingzhi Li - Weifeng Qiu title: 'Robust a Posteriori Error Estimates for HDG method for Convection-Diffusion Equations' --- [^1] Introduction ============ Given a bounded, polyhedral domain $\Omega \subset \mathbb{R}^d (d=2,3)$, we consider the convection-diffusion equations \[cd\_eqs\] $$\begin{aligned} \label{cd_eqs_1} -\epsilon\Delta u + \boldsymbol{\beta}\cdot \nabla u + cu = &\; f \quad \text{ in $\Omega$, } \\ \label{bd_eqs} u = &\; g \quad \text{ on $\partial \Omega$.}\end{aligned}$$ The data and the right-hand sides in (\[cd\_eqs\]) satisfy the following assumptions: 1. $0<\epsilon \leq 1$. 2. $ \boldsymbol{\beta} \in W^{1,\infty}(\Omega)^d$, $c \in L^{\infty}(\Omega)$, $f\in L^2(\Omega)$ and $g\in H^{\frac{1}{2}}(\partial \Omega)$. 3. $c-\frac{1}{2} {\rm div}\boldsymbol{\beta} \geq 0$. 4.
There is a function $\psi \in W^{1,\infty}(\Omega)$ and a positive constant $b_0$ such that $\boldsymbol{\beta} \cdot \nabla \psi \geq b_0$. Assumption (A1) includes the case of the convection-dominated regime. According to [@AyusoMarini:cdf], Assumption (A4) is satisfied if $\boldsymbol{\beta}$ has no closed curves and $\vert \boldsymbol{\beta}(x)\vert\neq 0 \text{ for all }x\in\Omega$. It is well known that solutions of (\[cd\_eqs\]) may develop layers (cf. [@Eckhaus72; @FLRT83]). In particular, the solutions may have a singular interior layer of width $O(\sqrt{\epsilon})$ or an outflow layer of width $O(\epsilon)$. Standard numerical methods, e.g., the standard finite element method or the central finite difference method, are not robust when the quantity $\epsilon/\|\boldsymbol{\beta} \|_{L^\infty(\Omega)}$ is small compared to the mesh size. In order to stabilize the numerical method, several remedies have been proposed to address this issue, for instance, the streamline diffusion method [@BrooksHughes], residual free bubble methods [@BrezziHughesMariniRussoSuli1999; @BrezziMariniSuli2000; @BurmanErn09], local projection schemes [@KT11], the subgrid scale method [@Codina02; @BC2006], continuous interior penalty (CIP) methods [@Burman04; @Burman07], discontinuous Galerkin methods [@AyusoMarini:cdf; @HoustonSchwabSuli2002; @HughesScovazziBochevBuffa2006], and more recently discontinuous Petrov-Galerkin (DPG) methods [@BroersenStevenson:mild_weak_cd; @ChanHeuerTanDemkowicz:DPG_CD; @DemkoHeuer:2013:DPG_cd], the HDG method [@FQZ13] and the first order least squares method [@CFLQ14]. One can refer to [@Roos08; @Stynes:acta_cd] for other stabilization techniques. But in order to capture the potential interior or outflow layers of the solutions to the problem (\[cd\_eqs\]), the local Péclet number $P_e = h \|\boldsymbol{\beta} \|_{L^\infty(\Omega)} / \epsilon $ near the layers should be small enough, where $h$ is the mesh size.
Hence it would be quite expensive for stabilized numerical methods on a quasi-uniform mesh to capture the layers when $\epsilon$ is small. If the mesh in the vicinity of the layers can be locally refined, the cost of numerical computations could be reduced. Therefore the adaptive finite element method is a natural choice for the efficient solution of convection-diffusion equations with dominant convection. The adaptive finite element method based on a posteriori error estimates has been well established for second-order elliptic problems (cf. [@Ainsworth; @Verfurth96]). In recent years, a posteriori error estimates have also been extended to convection-diffusion equations. An early attempt was proposed by Eriksson and Johnson in [@EJ93], using regularization and duality techniques. Verfürth [@Verfurth98] proposed semi-robust estimators in the energy norm for the standard Galerkin approximation and the streamline upwind Petrov-Galerkin (SUPG) discretization. In [@Verfurth05] Verfürth improved his results by giving estimates which are robust with dominant convection in a norm incorporating the standard energy norm and a dual norm of the convective derivative. Very recently, Tobiska and Verfürth [@TV2014] derived the same robust a posteriori error estimators for a wide range of stabilized finite element methods such as streamline diffusion methods, local projection schemes, the subgrid scale technique and the CIP method. However, the energy norm of the error used in Verfürth’s estimates is defined through a dual norm which is not easy to compute. Sangalli [@Sangalli08] proposed different norms for the a posteriori error estimates that allow for robust estimators, but the analysis is only valid in the one-dimensional case.
As to other approaches for robust error estimation, one can refer to [@Voh07] for mixed finite element methods, [@Voh08] for a cell-centered finite volume scheme, [@Alaoui07] for a nonconforming finite element method, and [@Ern08; @Ern10; @SZ09; @SZ11] for interior penalty discontinuous Galerkin methods. Discontinuous Galerkin (DG) methods have several attractive features compared with conforming finite element methods. For example, DG methods have elementwise conservation of mass, and they work well on arbitrary meshes. However, the dimension of the DG approximation space is much larger than the dimension of the corresponding conforming space. The HDG method [@Cockburn09; @Cockburn10; @KSC2012] was recently introduced to address this issue. HDG methods retain the advantages of standard DG methods and result in significantly reduced degrees of freedom. New variables on all interfaces of the mesh are introduced such that the numerical solution inside each element can be computed in terms of them, and the resulting algebraic system involves only the unknowns on the skeleton of the mesh. In [@FQZ13], the HDG method was proposed and analyzed for the problem (\[cd\_eqs\]) on shape-regular meshes. The stabilization parameter of the HDG method in [@FQZ13] can be determined explicitly, whereas the penalty parameters of the DG schemes [@AyusoMarini:cdf] need to be chosen empirically. Moreover, the condition number of the stiffness matrix of a new way of implementing the HDG method in [@FQZ13] was proven to be bounded by $O(h^{-2})$, independently of the diffusion coefficient $\epsilon$. These properties are important for the efficient solution of the problem (\[cd\_eqs\]) and encourage us to consider the corresponding HDG method on adaptive meshes. The a posteriori error analysis for the HDG method for second order elliptic problems has been presented in [@Cockburn12; @Cockburn13], where the error incorporates only the flux and a postprocessed solution used in the estimators.
To the best of our knowledge, no a posteriori error estimates for HDG discretizations of convection-diffusion problems have been studied in the literature so far. In this work, our objective is to show that the HDG scheme proposed in [@FQZ13] gives rise to robust a posteriori error estimates for the problem (\[cd\_eqs\]). In comparison with the postprocessing technique utilized in [@Voh07; @Cockburn12; @Cockburn13], we establish the estimators without any postprocessed solution, since the solution of (\[cd\_eqs\]) is in general not smooth and there is no superconvergence result for the HDG method when $\epsilon \ll 1$. We notice that the a posteriori error estimators for the nonconforming finite element method [@Alaoui07] and the interior penalty DG method [@Ern08; @Ern10] are only semi-robust in the sense that they yield lower and upper bounds of the error which differ by a factor of at most $\mathcal{O}(\epsilon^{-1/2})$. In [@SZ09; @SZ11], the a posteriori error estimator is robust in the sense that the ratio of the constants in the upper and lower bounds of the error is independent of the diffusion coefficient. However, the energy norm of the error in [@SZ09; @SZ11] contains the jump term $\left( h_{F}\epsilon^{-1}\| \llbracket u_h \rrbracket \|^2_{0,F} \right)^{1/2}$ on each interior interface of the mesh. In contrast, the a posteriori error estimator in this paper is robust with respect to the energy norm in (\[energy-error\]), which contains the jump term $\left( \gamma_F\| \llbracket u_h \rrbracket \|^2_{0,F} \right)^{1/2}$ instead. One can refer to (\[A\_gamma\]) for the definition of the parameter $\gamma_F$. Hence, unlike the estimator in [@SZ09; @SZ11], our a posteriori error estimator does not enlarge the error estimate too much when the mesh size is not small compared with the diffusion coefficient.
To derive the reliability and efficiency of the estimators for the error measured in an energy norm which incorporates a scaled flux and the scalar solution of the HDG discretization, two techniques are utilized. The first one is to use the Oswald interpolation operator to approximate a discontinuous polynomial by a continuous, piecewise polynomial function and to control the approximation error by the jumps (cf. [@OP03; @OP07]). For most of the a posteriori error estimates mentioned above (e.g. [@Verfurth05; @Voh07; @TV2014; @Ern08; @Ern10; @SZ09; @SZ11]), the analysis only gives estimates for the energy error without the $L^2$-error of the scalar solution when $c-\frac{1}{2} {\rm div}\boldsymbol{\beta} = 0$. The second technique addresses this issue: we employ a weighted test function to derive estimates for an error measure which contains the $L^2$-error of the scalar solution. This idea goes back to Nävert’s work [@Navert1982] for convection-diffusion problems; it was used to obtain the $L^2$-stability of the original DG method for the pure hyperbolic equation [@RH1973] and extended to convection-diffusion equations using the IP-DG method [@Guzman2006; @AyusoMarini:cdf], the HDG method [@FQZ13] and the first order least squares method [@CFLQ14]. In the numerical experiments, convection-diffusion problems with interior or outflow layers are tested based on the proposed a posteriori error estimator. The robustness of the a posteriori error estimator based on the HDG method is observed for problems with different diffusion coefficients. We also find that the convergence of the adaptive HDG method is almost optimal, i.e., the convergence rate is almost $O(N^{-s})$, where $N$ is the number of elements and $s$ depends on the polynomial order $p$. The outline of the paper is as follows: We introduce some notation, the HDG method, the a posteriori error estimator and the main results in the next section. In section 3, we collect some auxiliary results for the analysis.
Section 4 and section 5 are devoted to the proofs of reliability and efficiency, respectively. In the final section, we give some numerical results to confirm our theoretical analysis. Notation, HDG method, error estimator, and main results {#HDG_sec2} ======================================================= In this section, we begin with some basic notation and hypotheses on the meshes. Secondly, we introduce the HDG method of [@FQZ13] for (\[cd\_eqs\]). Then, we define the corresponding a posteriori error estimator. Finally, we give the main results on reliability and efficiency. Notation and the mesh --------------------- Let $\mathcal{T}_{h}$ be a conforming, shape-regular simplicial triangulation of $\Omega$. For any element $T\in \mathcal{T}_{h}$, $\partial{T}$ denotes the set of its edges in the two dimensional case and of its faces in the three dimensional case. Elements of $\partial{T}$ will be generally referred to as faces, [regardless of dimension]{}, and denoted by $F$. We define $\partial\mathcal{T}_{h} := \{\partial T: T\in\mathcal{T}_{h}\}$. We denote by $\mathcal{E}_{h}$ the set of all faces in the triangulation (the skeleton), while the set of all interior [(boundary)]{} faces of the triangulation will be denoted by $\mathcal{E}_{h}^{0}$ [($\mathcal{E}_{h}^{\partial}$)]{}. Correspondingly, we denote by ${{\mathcal N}}_h$ the set of vertices and by ${{\mathcal N}}_h^0$ the set of interior vertices. For any $T\in \mathcal T_h $, let $h_T$ be the diameter of the element $T$. Similarly, for any $F\in {{\mathcal E}}_h$, we define $h_{F} := {\rm diam}(F)$. Throughout this paper, we use the standard notations and definitions for Sobolev spaces (see, e.g., Adams [@Adams]). We also use the notation $\| \cdot \|_{0,D}$ and $\| \cdot \|_{0,\Gamma}$ to denote the $L^2$-norm on elements $D$ and faces $\Gamma$, respectively.
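As an aside, the mesh quantities $h_T$ and $h_F$ defined above are easy to compute from vertex coordinates and connectivity; a minimal two-dimensional sketch (the array layout and function name are our own illustrative choices, not part of the paper):

```python
import itertools
import numpy as np

def mesh_sizes(points, triangles):
    """Return h_T (element diameters) and a dict of h_F (edge diameters)
    for a 2-D simplicial mesh given vertex coordinates and connectivity."""
    h_T = []
    edges = {}
    for tri in triangles:
        verts = points[tri]  # 3 x 2 array of vertex coordinates
        # The diameter of a triangle is the length of its longest edge.
        pair_len = [np.linalg.norm(verts[i] - verts[j])
                    for i, j in itertools.combinations(range(3), 2)]
        h_T.append(max(pair_len))
        for i, j in itertools.combinations(range(3), 2):
            key = tuple(sorted((tri[i], tri[j])))
            edges[key] = np.linalg.norm(points[key[0]] - points[key[1]])
    return np.array(h_T), edges

# Two triangles forming the unit square; both have diameter sqrt(2).
points = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
triangles = np.array([[0, 1, 2], [0, 2, 3]])
h_T, h_F = mesh_sizes(points, triangles)
```

Shared edges appear once in the dictionary, matching the fact that $h_F$ is a quantity of the skeleton ${\mathcal E}_h$, not of individual elements.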
The HDG method -------------- The HDG method is based on a first order formulation of the convection-diffusion equation (\[cd\_eqs\]), which can be rewritten in a mixed form as finding $({{\boldsymbol{q}}}, u)$ such that \[HDG1\] $$\begin{aligned} \epsilon^{-1}{{{\boldsymbol{q}}}}+\nabla u&=0 \qquad {\rm in }\ \Omega, \\ {\rm div}\, {{{\boldsymbol{q}}}} + {{\boldsymbol{\beta}}}\cdot \nabla u + cu&=f \qquad {\rm in }\ \Omega, \label{mixed-eqs}\\ u&=g \qquad {\rm on }\ \partial\Omega. \label{HDG3}\end{aligned}$$ For any element $T\in \mathcal{T}_{h}$ and any face $F\in\mathcal{E}_{h}$, we define $${{{\boldsymbol{V}}}}(T):=(\mathcal P_p(T))^d,\qquad W(T):=\mathcal P_p(T),\qquad M(F):=\mathcal P_p(F),$$ where $\mathcal P_p(S)$ is the space of polynomials of total degree not larger than $p\geq 1$ on $S$. The finite element spaces are given by $$\begin{aligned} {{{\boldsymbol{V}}}}_h:&=\{{{{\boldsymbol{v}}}}\in {{{\boldsymbol{L}}}}^2(\Omega)\, : \, {{{\boldsymbol{v}}}}|_T\in {{{\boldsymbol{V}}}}(T)\ {\rm for\ all }\ T\in \mathcal T_h\},\\ W_h:&=\{w\in L^2(\Omega)\, : \, w|_T\in W(T)\ {\rm for\ all }\ T\in \mathcal T_h\},\\ M_h:&=\{ \mu\in L^2(\mathcal E_h)\, : \, \mu|_F\in M(F)\ {\rm for\ all }\ F\in \mathcal E_h\},\\ M_h(g):&=\{ \mu \in M_h \, : \, \int_{\partial\Omega} (\mu-g)\xi ds =0 {\rm \ for\ all }\ \xi \in M_h \},\end{aligned}$$ where ${{{\boldsymbol{L}}}}^2(\Omega):=( L^2(\Omega))^d$ and $L^2(\mathcal E_h):=\Pi_{F\in \mathcal E_h}L^2(F)$. 
The HDG method seeks finite element approximations $({{{\boldsymbol{q}}}}_h, u_h,\widehat u_h)\in {{{\boldsymbol{V}}}}_h\times W_h\times M_h$ satisfying \[PD\] $$\begin{aligned} \label{P1} & (\epsilon^{-1} {{{\boldsymbol{q}}}}_h, {{\boldsymbol{r}}})_{\mathcal T_h}-(u_h, {{\rm div \, {{{\boldsymbol{r}}}}}})_{\mathcal T_h}+\langle \widehat u_h, {{{{\boldsymbol{r}}}}\cdot {{{\boldsymbol{n}}}}}\rangle_{\partial \mathcal T_h}=0,\\ \label{P2} & -({{{\boldsymbol{q}}}}_h +{{\boldsymbol{\beta}}}u_h ,{\nabla w})_{\mathcal T_h}+ ((c-{\rm div}\, {{\boldsymbol{\beta}}}) u_h, w)_{\mathcal T_h}+\langle (\widehat {{{\boldsymbol{q}}}}_h + \widehat{{{\boldsymbol{\beta}}}u_h})\cdot {{{\boldsymbol{n}}}} ,w\rangle_{\partial \mathcal T_h}\\ \nonumber &\qquad \qquad \qquad =(f, w)_{\mathcal T_h} ,\\ \label{P3} & \langle \widehat u_h,\mu\rangle_{\partial \Omega}=\langle g, \mu\rangle_{\partial \Omega},\\ \label{P4} & \langle (\widehat {{{\boldsymbol{q}}}}_h + \widehat{{{\boldsymbol{\beta}}}u_h})\cdot {{{\boldsymbol{n}}}}, \mu\rangle_{\partial \mathcal T_h \backslash \partial \Omega}=0,\end{aligned}$$ for all $({{{\boldsymbol{r}}}},w,\mu)\in {{{\boldsymbol{V}}}}_h \times W_h \times M_h$, where the normal component of numerical flux $(\widehat {{{\boldsymbol{q}}}}_h + \widehat{{{\boldsymbol{\beta}}}u_h})\cdot {{\boldsymbol{n}}}$ is given by $$\begin{aligned} \label{numerical-flux} (\widehat {{{\boldsymbol{q}}}}_h + \widehat{{{\boldsymbol{\beta}}}u_h})\cdot {{\boldsymbol{n}}}={{{\boldsymbol{q}}}}_h\cdot {{\boldsymbol{n}}}+ ({{\boldsymbol{\beta}}}\cdot {{\boldsymbol{n}}}) \widehat{u}_h +\tau(u_h-\widehat u_h) \qquad {\rm on } \ \partial \mathcal T_h,\end{aligned}$$ and the stabilization function $\tau$ is a piecewise, nonnegative constant defined on $\partial \mathcal T_h$. 
Here, we define $\left(\eta,\zeta\right)_{\mathcal{T}_{h}} := \sum_{K \in \mathcal{T}_{h}} \int_K \eta\,\zeta\,\mathrm{dx},$ and $\langle \eta, \zeta \rangle_{\partial\mathcal{T}_{h}} := \sum_{K \in \mathcal{T}_{h}} \int_{\partial K} \eta\,\zeta\,\mathrm{ds}$. One of the advantages of the HDG method is the elimination of both ${{\boldsymbol{q}}}_h$ and $u_h$ from the system (\[PD\]) to obtain a formulation in terms of the numerical trace $\widehat{u}_h$ only; one can refer to [@Cockburn09JSC; @FQZ13; @KSC2012; @Nguyen09] for the implementation. The stabilization function $\tau$ in (\[numerical-flux\]) is chosen as $$\label{tau_hdg} \tau |_{F} = \max ( \sup_{{{\boldsymbol{x}}}\in F} {{\boldsymbol{\beta}}}({{\boldsymbol{x}}}) \cdot {{\boldsymbol{n}}}, 0 ) + \min( \rho_0 \frac{\epsilon}{h_T}, 1 ) \qquad \forall F\in \partial T, \ T \in {{\mathcal T}}_h.$$ Here, $0<\rho_0 \leq 1$. We emphasize that the choice of $\tau$ in (\[tau\_hdg\]) is the second type of stabilization function in [@FQZ13]. According to [@ChenCockburnHDGI], the HDG method (\[PD\]) with the stabilization function (\[tau\_hdg\]) has a unique solution. Compared with the recent work [@CFLQ14], the intrinsic idea of choosing $\tau$ in the above HDG method is similar to the strategy of setting the ultra-weakly imposed boundary condition in the first order least squares method for (\[cd\_eqs\]). A posteriori error estimator ---------------------------- We define the elementwise residual function $R_{h}$ as $$\label{elem_residual} R_{h}|_{T} = f - \text{div} \boldsymbol{q}_{h} - \boldsymbol{\beta}\cdot \nabla u_{h} - c u_{h},\quad \forall T\in\mathcal{T}_{h}.$$ We define $$\begin{aligned} \label{A_alpha} \alpha_S = \min\{h_S{\epsilon^{-\frac{1}{2}}} ,1\},\end{aligned}$$ where $S$ can be any element $T\in\mathcal{T}_{h}$ or any face $F\in\mathcal{E}_{h}$.
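For illustration, the stabilization function (\[tau\_hdg\]) and the weight $\alpha_S$ of (\[A\_alpha\]) translate into a few lines of code; a minimal sketch, where the argument `beta_dot_n_sup` stands for $\sup_{{\boldsymbol{x}}\in F}{{\boldsymbol{\beta}}}({{\boldsymbol{x}}})\cdot{{\boldsymbol{n}}}$ on the face and the sample values are our own:

```python
def tau_face(beta_dot_n_sup, eps, h_T, rho0=1.0):
    """Stabilization function on one face F of element T:
    tau|_F = max(sup_F beta . n, 0) + min(rho0 * eps / h_T, 1)."""
    return max(beta_dot_n_sup, 0.0) + min(rho0 * eps / h_T, 1.0)

def alpha(h_S, eps):
    """Weight alpha_S = min(h_S * eps^{-1/2}, 1)."""
    return min(h_S * eps ** -0.5, 1.0)

# Convection-dominated face: tau is essentially the upwind part,
# and alpha_S saturates at 1 since h_S >> sqrt(eps).
t = tau_face(beta_dot_n_sup=0.7, eps=1e-6, h_T=0.1)
a = alpha(h_S=0.1, eps=1e-6)
```

Note that in the diffusion-dominated limit the first term of `tau_face` vanishes on inflow faces, while the second term reduces to $\rho_0\epsilon/h_T$, consistent with the two regimes discussed above.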
In addition, for any $F\in\mathcal{E}_{h}$, we introduce $$\begin{aligned} \label{A_gamma} \gamma_F =\min\{ \frac{\epsilon}{h_F}+(\frac{h_F}{\epsilon}+\epsilon^{-\frac{1}{2}}\alpha_F) \|\boldsymbol{\beta}\|_{L^\infty(F)} +h_F ,\frac{ \epsilon + \|\boldsymbol{\beta}\|_{L^\infty(F)} }{h_F} + h_F\}.\end{aligned}$$ Now, we are ready to introduce the a posteriori error estimator in the following. \[def\_estimator\](A posteriori error estimator) The a posteriori error estimator is defined as \[estimators\] $$\begin{aligned} \label{estimator_eta} \eta & = \left( \Sigma_{T\in\mathcal{T}_{h}}\eta_{T}^{2} + \Sigma_{F\in \mathcal{E}_{h}^{0}}(\eta_{F}^{0})^{2} +\Sigma_{F\in \mathcal{E}_h^{\partial}}(\eta_{F}^{\partial})^{2}\right)^{\frac{1}{2}}\text{ where }\\ \label{estimator1} \eta_{T} & = \left( \alpha_{T}^{2}\Vert R_{h}\Vert_{0,T}^{2} + \epsilon^{-1} \Vert \boldsymbol{q}_{h}+\epsilon\nabla u_{h}\Vert_{0,T}^{2} \right)^{\frac{1}{2}}, \quad \forall T\in \mathcal{T}_{h},\\ \label{estimator2} \eta_{F}^{0} & = \left( \epsilon^{-\frac{1}{2}} \alpha_F \| \llbracket {{\boldsymbol{q}}}_h \cdot{{\boldsymbol{n}}}\rrbracket \|^2_{0,F} + \gamma_F\| \llbracket u_h \rrbracket \|^2_{0,F} \right)^{\frac{1}{2}}, \quad \forall F\in \mathcal{E}_h^{0},\\ \label{estimator3} \eta_{F}^{\partial} & =\gamma_F^{\frac{1}{2}}\| u_h - g \|_{0,F}, \quad \forall F\in \mathcal{E}_h^{\partial}.\end{aligned}$$ Here, for any interior face $F=\partial T^{+} \cap \partial T^{-}$ in $\mathcal{E}_{h}^{0}$, we define the jump of scalar function $\phi$ and the jump of normal component of vector field $\boldsymbol{\sigma}$ by $$\begin{aligned} \llbracket\phi\rrbracket = \phi^{+} - \phi^{-},\quad \llbracket \boldsymbol{\sigma}\cdot \boldsymbol{n}\rrbracket = \boldsymbol{\sigma}^{+}\cdot \boldsymbol{n}^{+} + \boldsymbol{\sigma}^{-}\cdot \boldsymbol{n}^{-}, \text{ respectively}.\end{aligned}$$ Reliability and efficiency of a posteriori error estimator ---------------------------------------------------------- From now 
on, we use $C_{0}, C_{1}, C_{2}$ to denote generic constants that are independent of the diffusion coefficient $\epsilon$ and the mesh size. For any $({{\boldsymbol{p}}},w) \in {{\boldsymbol{H}}}^1({{\mathcal T}}_h) \times H^1({{\mathcal T}}_h)$, we define the energy norm $$\begin{aligned} \label{energy-error} & \interleave ({{\boldsymbol{p}}},w)\interleave^2_h \\ {\nonumber}= &\sum_{T\in {{\mathcal T}}_h} \left( \epsilon^{-1} \| {{\boldsymbol{p}}}\|^2_{0,T} + \|w\|^2_{0,T} + \epsilon\|\nabla w\|^2_{0,T} + \alpha^2_T \| {\rm div}{{\boldsymbol{p}}}+{{\boldsymbol{\beta}}}\cdot \nabla w \|^2_{0,T} \right) \\ {\nonumber}&\quad + \sum_{F\in {{\mathcal E}}^0_h} \left( \epsilon^{-\frac{1}{2}} \alpha_F\| \llbracket {{\boldsymbol{p}}}\cdot {{\boldsymbol{n}}}\rrbracket \|^2_{0,F} + \gamma_F \| \llbracket w \rrbracket \|^2_{0,F} \right) + \sum_{F\in {{\mathcal E}}^{\partial}_h} \gamma_F \| w \|^2_{0,F}.\end{aligned}$$ We outline the main results by stating the reliability and the efficiency of the a posteriori error estimator in the following theorems. We emphasize that, by the techniques developed in this paper, all the a posteriori error estimates below also hold for the mixed hybrid method in [@Egger2010]. (Reliability) \[thm\_reliability\] If the Dirichlet data $g$ is contained in $C(\partial \Omega)\cap M_{h}|_{\partial \Omega}$, then $$\begin{aligned} \label{ineq_reliability} \interleave (\boldsymbol{q}-\boldsymbol{q}_{h},u-u_{h})\interleave_h \leq C_{0}\eta.\end{aligned}$$ In the diffusion-dominated case $\epsilon = O(1)$, $\boldsymbol{\beta} = 0$ and $c\geq 0$, the parameter $\gamma_F$ in (\[A\_gamma\]) is of order $\frac{1}{h_F}$, and the proposed a posteriori error estimators coincide with the ones in [@Cockburn13].
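To make the face weight concrete, the following short Python sketch evaluates $\gamma_F$ from (\[A\_gamma\]) for given face data. Since the definition (\[A\_alpha\]) of $\alpha_F$ is given elsewhere in the paper and not in this section, the default $\alpha_F=\min(h_F\epsilon^{-1/2},1)$ used below is only an illustrative assumption, motivated by the special choice $\alpha_S = h_S\epsilon^{-\frac{1}{2}}$ recalled after Lemma \[ax\_re\_1\].

```python
import math

def gamma_F(h_F, eps, beta_inf, alpha_F=None):
    """Evaluate the face weight gamma_F of (A_gamma).

    alpha_F is the stabilization weight of (A_alpha), which is defined
    outside this excerpt; as a stand-in we default to min(h_F/sqrt(eps), 1).
    (Assumption, for illustration only.)
    """
    if alpha_F is None:
        alpha_F = min(h_F / math.sqrt(eps), 1.0)
    branch_1 = eps / h_F + (h_F / eps + alpha_F / math.sqrt(eps)) * beta_inf + h_F
    branch_2 = (eps + beta_inf) / h_F + h_F
    return min(branch_1, branch_2)

# Diffusion-dominated data (eps = O(1), beta = 0): both branches equal
# eps/h_F + h_F, so gamma_F is of order 1/h_F.
print(gamma_F(h_F=0.1, eps=1.0, beta_inf=0.0))   # 10.1
# Convection-dominated data: the second branch caps gamma_F at O(1/h_F).
print(gamma_F(h_F=0.1, eps=1e-6, beta_inf=1.0))
```

Note how the second branch caps $\gamma_F$ at $O(1/h_F)$ even for convection-dominated data, while for $\epsilon = O(1)$ and $\boldsymbol{\beta}=0$ both branches reduce to $\epsilon/h_F + h_F$, recovering the $1/h_F$ scaling of the diffusion-dominated case.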
In this particular case, the total energy norm can be defined as $ \interleave ({{\boldsymbol{p}}},w) \interleave^2 = \sum_{T\in {{\mathcal T}}_h} \left( \| {{\boldsymbol{p}}}\|^2_{0,T} + \|w\|^2_{0,T} + \|\nabla w\|^2_{0,T} \right) $. (Efficiency) \[thm\_efficiency\] We have \[ineq\_efficiency\] $$\begin{aligned} \label{in_face_efficiency} \eta_{F}^{0} & \leq C_{1} \left( \epsilon^{-\frac{1}{2}} \alpha_F \| \llbracket ({{\boldsymbol{q}}}- \boldsymbol{q}_{h}) \cdot{{\boldsymbol{n}}}\rrbracket \|^2_{0,F} + \gamma_F\| \llbracket u - u_{h} \rrbracket \|^2_{0,F} \right)^{\frac{1}{2}},\quad \forall F \in \mathcal{E}_{h}^{0},\\ \label{boundary_face_efficiency} \eta_{F}^{\partial} & \leq C_{1} \left(\gamma_F\| g-u_h \|^2_{0,F} \right)^{\frac{1}{2}},\quad \forall F \in \mathcal{E}_{h}^{\partial},\\ \label{elem_efficiency} \eta_{T} & \leq C_{1} (\epsilon^{-1} \| \boldsymbol{q} -\boldsymbol{q}_{h} \|^2_{0,T} + \| u - u_{h}\|^2_{0,T} + \epsilon\|\nabla (u - u_{h})\|^2_{0,T} \\ \nonumber &\qquad + \alpha^2_T \| {\rm div}(\boldsymbol{q} - \boldsymbol{q}_{h}) +{{\boldsymbol{\beta}}}\cdot \nabla (u - u_{h}) \|^2_{0,T} +{osc}^2_h(R_h,T))^{\frac{1}{2}}, \quad \forall T \in \mathcal{T}_{h}.\end{aligned}$$ Here, the data oscillation term ${osc}^2_h(R_h,T) : = \alpha^2_T \| R_h -P_W R_h \|^2_{0,T}$, and $P_{W}$ is the $L^{2}$ orthogonal projection onto $W_{h}$. In addition, for any $F\in \mathcal{E}_{h}^{0}$ with $h_{F}\leq \mathcal{O}(\epsilon)$ or any $T\in \mathcal{T}_{h}$ with $h_{T}\leq \mathcal{O}(\epsilon)$, we have the following efficiency results with explicit dependence on $\epsilon$. 
(Efficiency on refined element) \[thm\_refined\_efficiency\] For any $F\in \mathcal{E}_{h}^{0}$, if $h_{F}\leq \mathcal{O}(\epsilon)$, then $$\begin{aligned} \label{ineq_refined_efficiency} (\eta_{F}^{0})^{2} & \leq C_{2} \sum_{T \in \omega_F} ( \epsilon^{-1}\| {{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h \|^2_{0,T} + \epsilon\| \nabla(u-u_h) \|^2_{0,T}\\ \nonumber & \qquad \qquad +\|u-u_h\|^2_{0,T} + osc^2_h(R_h,T) ).\end{aligned}$$ For any $T\in \mathcal{T}_{h}$, if $h_{T}\leq \mathcal{O}(\epsilon)$, then $$\begin{aligned} \label{ineq_refined_efficiency2} \eta_{T}^{2} & \leq C_{2} \big( \epsilon^{-1}\| {{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h \|^2_{0,T} + \epsilon\| \nabla(u-u_h) \|^2_{0,T}\\ \nonumber & \qquad \qquad +\|u-u_h\|^2_{0,T} + osc^2_h(R_h,T) \big).\end{aligned}$$ Here, $\omega_F$ is the union of elements sharing the common face $F$, and ${osc}^2_h(R_h,T)$ is the data oscillation term introduced in Theorem \[thm\_efficiency\]. For the diffusion-dominated case $\epsilon = O(1)$, $\boldsymbol{\beta} = 0$ and $c\geq 0$, the above efficiency estimates (\[ineq\_refined\_efficiency\]) and (\[ineq\_refined\_efficiency2\]) also coincide with the efficiency results in [@Cockburn13]. Auxiliary results ================= We collect some auxiliary results in this section for the proof of reliability and efficiency. For every element $T\in {{\mathcal T}}_h$, we denote by $\Omega_T$ the union of all elements that share at least one point with $T$. For any face $F\in {{\mathcal E}}_h$, the set $\Omega_F$ is defined analogously; meanwhile, $\omega_F$ is defined to be the union of elements sharing the common face $F$. The following Clément-type interpolation is crucial for the proof of reliability. \[define-clement\] (cf.
[@Verfurth96]) One can define a linear mapping $\pi_h: \ L^1(\Omega) \rightarrow W^c_{1,h} \cap H^1_0(\Omega) $ via $$\begin{aligned} \pi_h v := \sum_{z \in {{\mathcal N}}^0_h} \left( \frac{1}{|\Omega_z|} \int_{\Omega_z} v \, dx\right) \phi_z, {\nonumber}\end{aligned}$$ where $\phi_z$ is the $P_1$ nodal basis function for the vertex $z \in {{\mathcal N}}_h^0$, $\Omega_z$ is the support of the nodal basis function $\phi_z$, which consists of all elements that share the vertex $z$, and $W^c_{1,h}$ is the corresponding conforming $P_1$ finite element space defined by $ W^c_{1,h} := \{ w \in C(\Omega) \, : \, w |_{T} \in {{\mathcal P}}_1(T), T \in {{\mathcal T}}_h \} . $ The interpolation $\pi_{h}$ in Definition \[define-clement\] has the following approximation properties. \[ax\_re\_1\] [(cf. [@Verfurth98; @Verfurth05])]{} For any $T \in {{\mathcal T}}_h$ and $F\in {{\mathcal E}}^0_h$, the following estimates hold for any function $v \in H^1_0(\Omega)$: $$\begin{aligned} \| (I-\pi_h)v \|_{0,T} &\leq C_1 \alpha_T \left( \Vert v\Vert_{0, \Omega_{T}}^{2} + \epsilon\Vert \nabla v\Vert_{0,\Omega_{T}}^{2}\right)^{\frac{1}{2}},\\ \| (I-\pi_h)v \|_{0,F} &\leq C_2 \epsilon^{-\frac{1}{4}} \alpha_F^{\frac{1}{2}} \left( \Vert v\Vert_{0, \Omega_{T}}^{2} + \epsilon\Vert \nabla v\Vert_{0,\Omega_{T}}^{2}\right)^{\frac{1}{2}}.\end{aligned}$$ The proof of Lemma \[ax\_re\_1\] follows from Lemmas 3.1 and 3.2 in [@Verfurth98]. For the particular case $c-\frac{1}{2} {\rm div}\, {{\boldsymbol{\beta}}}=0$, the constant $\alpha_S $ is set to be $h_S{\epsilon^{-\frac{1}{2}}} $ for any $S = T\in {{\mathcal T}}_h$ or $F\in {{\mathcal E}}^0_h$ in [@Verfurth05], and the associated energy error excludes the $L^2$-error $\|u-u_h\|_{0,{{\mathcal T}}_h}$.
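To illustrate the patch averaging that defines $\pi_h$ in Definition \[define-clement\], here is a minimal one-dimensional sketch (an illustrative assumption: the paper works on simplicial meshes in $d$ dimensions, but the nodal averages take the same form on an interval mesh).

```python
import numpy as np

def clement_coeffs(v, nodes):
    """1D sketch of the Clement-type map pi_h: the coefficient at an interior
    vertex z is the mean of v over the patch Omega_z (the elements touching z);
    boundary coefficients are zero because pi_h maps into H^1_0(Omega)."""
    coeffs = np.zeros(len(nodes))
    for i in range(1, len(nodes) - 1):
        a, b = nodes[i - 1], nodes[i + 1]        # patch Omega_z = (a, b)
        xs = np.linspace(a, b, 401)
        ys = v(xs)
        # trapezoid rule for int_{Omega_z} v dx (exact for linear v)
        integral = np.sum((ys[:-1] + ys[1:]) * np.diff(xs)) / 2.0
        coeffs[i] = integral / (b - a)           # (1/|Omega_z|) * integral
    return coeffs

nodes = np.linspace(0.0, 1.0, 5)
print(clement_coeffs(lambda x: x, nodes))  # interior means 0.25, 0.5, 0.75
```

Each interior coefficient is exactly the mean of $v$ over its two-element patch, and the boundary coefficients vanish, reflecting that $\pi_h$ maps into $W^c_{1,h}\cap H^1_0(\Omega)$.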
In this paper, we employ a weighted-function technique from [@AyusoMarini:cdf], so that we also obtain estimates for the energy error, including the $L^2$-error $\|u-u_h\|_{0,{{\mathcal T}}_h}$, even when $c-\frac{1}{2} {\rm div}\, {{\boldsymbol{\beta}}}=0$. Hence, $\alpha_S$ is always set as (\[A\_alpha\]). To approximate functions in $W_{h}$ and the Dirichlet boundary data $g$ by a continuous finite element space, we need to introduce the Oswald interpolation. If $g$ is contained in $C(\partial \Omega)\cap M_{h}|_{\partial \Omega}$, then the continuous finite element space $W_{h,g}^c := \{ w \in C(\Omega) \, : \, w |_{T} \in {{\mathcal P}}_p(T), T \in {{\mathcal T}}_h ,\, w|_{\partial \Omega} = g\} $ is not empty. Hence, we can introduce the Oswald interpolation ${{\mathcal I}}^{os}_h:W_h \rightarrow W^c_{h,g}$. Given a function $v_h \in W_h$, the value of ${{\mathcal I}}^{os}_h v_h$ at each Lagrangian node in the interior of $\Omega$ is the average of the values of $v_h$ at that node, while at each Lagrangian node on $\partial \Omega$ it is the value of $g$ at that node. The following estimate has been analyzed for nonconforming and conforming meshes in [@OP03; @OP07] and was extended to variable polynomial degree in [@ZGHS11]. \[os\_est\_lemma\] [(cf.
[@OP03; @OP07; @ZGHS11; @Cockburn13])]{} If the Dirichlet data $g$ is contained in $C(\partial \Omega)\cap M_{h}|_{\partial \Omega}$, then for any $v_h \in W_h$ and any multi-index $\alpha$ with $|\alpha|=0,1$, the following estimate holds: $$\begin{aligned} & \sum_{T \in {{\mathcal T}}_h} \| D^\alpha ( v_h -{{\mathcal I}}^{os}_h v_h )\|^2_{0,T}\\ \nonumber \leq & C_3 \left(\sum_{F \in {{\mathcal E}}^0_h} h^{1-2|\alpha|}_F \| \llbracket v_h\rrbracket \|^2_{0,F} +\sum_{F \in {{\mathcal E}}^{\partial }_h} h^{1-2|\alpha|}_F \| g-v_h \|^2_{0,F}\right).\end{aligned}$$ In order to prove the local efficiency of the estimators, proper element and face bubble functions are useful. We set the element bubble function in the element $T$ as $B_T = \prod_{i=1}^{d+1} \lambda_i$, where $\lambda_i$ denotes the linear nodal basis function at the $i$th vertex of $T$. Besides the element bubble function, as mentioned in Lemma 3.3 of [@Verfurth98] and Lemma 3.6 of [@Verfurth05], there also exists a proper face bubble function $B_F$ for each $F \in {{\mathcal E}}^0_h$ such that the following lemma holds. \[bubble\_lemma\] [(cf.
[@Verfurth98; @Verfurth05])]{} For any element $T \in {{\mathcal T}}_h$, any polynomial $\phi \in {{\mathcal P}}_p(T)$, any face $F\in {{\mathcal E}}^0_h$, and any polynomial $\psi \in {{\mathcal P}}_p(F)$, the following estimates hold: $$\begin{aligned} \| \phi \|^2_{0,T} &\leq C_4 (\phi,B_T\phi)_{T},\\ \|B_T\phi\|_{0,T} &\leq C_5 \|\phi\|_{0,T} ,\\ \| \psi \|^2_{0,F} &\leq C_6 \langle \psi,B_F\psi\rangle_{F},\\ \|B_F \psi \|_{0,\omega_F} &\leq C_7 \epsilon^{\frac{1}{4}} \alpha^{\frac{1}{2}}_F \| \psi \|_{0,F},\\ \|B_F \psi \|_{0,\omega_F} + \epsilon^{\frac{1}{2}} \| \nabla B_F \psi \|_{0,\omega_F} &\leq C_8\epsilon^{\frac{1}{4}} \alpha^{-\frac{1}{2}}_F \| \psi \|_{0,F}.\end{aligned}$$ Proof of reliability ==================== In this section, we give the proof of Theorem \[thm\_reliability\], which shows the reliability of the a posteriori error estimator in Definition \[def\_estimator\]. In view of assumption (A4), we define a weighted function $$\begin{aligned} \varphi := e^{-\psi} +\chi, \label{varphi}\end{aligned}$$ where $\chi$ is a positive constant to be determined later. Let ${{\boldsymbol{e}}}_{{\boldsymbol{q}}}= {{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h$ and ${{\boldsymbol{e}}}_u = u-u_h$. We have the following Lemma \[pre-lemma\]. \[pre-lemma\] Let $\varphi$ be given in (\[varphi\]) with $\chi \geq 2b_0\| e^{-\psi} \|_{L^\infty(\Omega)} \| \nabla \psi\|^2_{L^\infty(\Omega)}$.
Then the following estimate holds: $$\begin{aligned} \label{pre-est} & C \left(\epsilon^{-1}\Vert {{\boldsymbol{e}}}_{{\boldsymbol{q}}}\Vert_{\mathcal{T}_{h}}^{2} + \Vert {{\boldsymbol{e}}}_u\Vert_{\mathcal{T}_{h}}^{2}\right)\\ \nonumber \leq & \epsilon^{-1}( {{\boldsymbol{e}}}_{{\boldsymbol{q}}}, \varphi {{\boldsymbol{e}}}_{{\boldsymbol{q}}})_{{{\mathcal T}}_h} -( {{\boldsymbol{e}}}_u, \nabla \varphi \cdot {{\boldsymbol{e}}}_{{{\boldsymbol{q}}}} )_{{{\mathcal T}}_h} \\ \nonumber & \quad - \frac{1}{2} ( {{\boldsymbol{\beta}}}\cdot \nabla \varphi {{\boldsymbol{e}}}_u,{{\boldsymbol{e}}}_u )_{{{\mathcal T}}_h}+ \big( (c-\frac{1}{2}{\rm div}{{\boldsymbol{\beta}}}) {{\boldsymbol{e}}}_u ,\varphi{{\boldsymbol{e}}}_u \big)_{{{\mathcal T}}_h}. \end{aligned}$$ According to the assumptions (A3)-(A4), we have $$\begin{aligned} & \epsilon^{-1}( {{\boldsymbol{e}}}_{{\boldsymbol{q}}}, \varphi {{\boldsymbol{e}}}_{{\boldsymbol{q}}})_{{{\mathcal T}}_h} +( {{\boldsymbol{e}}}_u,e^{-\psi} \nabla \psi \cdot {{\boldsymbol{e}}}_{{{\boldsymbol{q}}}} )_{{{\mathcal T}}_h} {\nonumber}\\ & \quad + \frac{1}{2} ( {{\boldsymbol{\beta}}}\cdot \nabla \psi e^{-\psi} {{\boldsymbol{e}}}_u,{{\boldsymbol{e}}}_u )_{{{\mathcal T}}_h}+ \big( (c-\frac{1}{2}{\rm div}{{\boldsymbol{\beta}}}) {{\boldsymbol{e}}}_u ,\varphi{{\boldsymbol{e}}}_u \big)_{{{\mathcal T}}_h} \\ \geq & \epsilon^{-1}\chi( {{\boldsymbol{e}}}_{{\boldsymbol{q}}}, {{\boldsymbol{e}}}_{{\boldsymbol{q}}})_{{{\mathcal T}}_h} +( {{\boldsymbol{e}}}_u,e^{-\psi} \nabla \psi \cdot {{\boldsymbol{e}}}_{{{\boldsymbol{q}}}} )_{{{\mathcal T}}_h}+ \frac{b_0}{2} ( e^{-\psi} {{\boldsymbol{e}}}_u,{{\boldsymbol{e}}}_u )_{{{\mathcal T}}_h}.\end{aligned}$$ By the Cauchy-Schwarz and Young’s inequalities, for any $\delta>0$, we have $$\begin{aligned} &( {{\boldsymbol{e}}}_u,e^{-\psi} \nabla \psi \cdot {{\boldsymbol{e}}}_{{{\boldsymbol{q}}}} )_{{{\mathcal T}}_h} \\ \leq & \frac{1}{2}\left( \delta^{-1} \| \nabla \psi \|^2_{L^\infty(\Omega)} \| e^{-\psi} 
\|_{L^\infty(\Omega)} ({{\boldsymbol{e}}}_{{{\boldsymbol{q}}}},{{\boldsymbol{e}}}_{{{\boldsymbol{q}}}})_{{{\mathcal T}}_h} + \delta\| e^{-\psi} \|_{L^\infty(\Omega)}({{\boldsymbol{e}}}_u,{{\boldsymbol{e}}}_u)_{{{\mathcal T}}_h} \right).\end{aligned}$$ Then, taking $\chi \geq 2b_0\| e^{-\psi} \|_{L^\infty(\Omega)} \| \nabla \psi\|^2_{L^\infty(\Omega)} $ and $\delta = \frac{b_0}{2}$ completes the proof. We are now ready to state a key upper bound estimate for $\left(\epsilon^{-1}\Vert {{\boldsymbol{e}}}_{{\boldsymbol{q}}}\Vert_{\mathcal{T}_{h}}^{2} + \Vert {{\boldsymbol{e}}}_u\Vert_{\mathcal{T}_{h}}^{2}\right)$. \[pre-reliability\] If the Dirichlet data $g$ is contained in $C(\partial \Omega)\cap M_{h}|_{\partial \Omega}$, then $$\begin{aligned} & \left(\epsilon^{-1}\Vert {{\boldsymbol{e}}}_{{\boldsymbol{q}}}\Vert_{\mathcal{T}_{h}}^{2} + \Vert {{\boldsymbol{e}}}_u\Vert_{\mathcal{T}_{h}}^{2}\right) \leq C \Big( \sum_{T\in {{\mathcal T}}_h}\eta^2_T + \sum_{F\in {{\mathcal E}}^0_h}(\eta^0_F)^{2} + \sum_{F\in {{\mathcal E}}^\partial_h} (\eta^\partial_F)^2 \Big).\end{aligned}$$ According to Lemma \[pre-lemma\], we have $$\begin{aligned} \label{est_1} & C \left(\epsilon^{-1}\Vert {{\boldsymbol{e}}}_{{\boldsymbol{q}}}\Vert_{\mathcal{T}_{h}}^{2} + \Vert {{\boldsymbol{e}}}_u\Vert_{\mathcal{T}_{h}}^{2}\right)\\ \nonumber \leq & \epsilon^{-1}( {{\boldsymbol{e}}}_{{\boldsymbol{q}}}, \varphi {{\boldsymbol{e}}}_{{\boldsymbol{q}}})_{{{\mathcal T}}_h} -( {{\boldsymbol{e}}}_u, \nabla \varphi \cdot {{\boldsymbol{e}}}_{{{\boldsymbol{q}}}} )_{{{\mathcal T}}_h} \\ \nonumber & \quad - \frac{1}{2} ( {{\boldsymbol{\beta}}}\cdot \nabla \varphi {{\boldsymbol{e}}}_u,{{\boldsymbol{e}}}_u )_{{{\mathcal T}}_h}+ \big( (c-\frac{1}{2}{\rm div}{{\boldsymbol{\beta}}}) {{\boldsymbol{e}}}_u ,\varphi{{\boldsymbol{e}}}_u \big)_{{{\mathcal T}}_h}. \end{aligned}$$ Let $u^{{{\mathcal I}}}_h:={{\mathcal I}}^{os}_hu_h$.
By the definition of ${{\mathcal I}}^{os}_h$, clearly, we have $u^{{{\mathcal I}}}_h|_{\partial \Omega} = g$. Adding and subtracting $\epsilon\nabla u^{{{\mathcal I}}}_h$ into ${{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h$, we have $$\begin{aligned} \label{est_2} & \epsilon^{-1}( {{\boldsymbol{e}}}_{{\boldsymbol{q}}}, \varphi {{\boldsymbol{e}}}_{{\boldsymbol{q}}})_{{{\mathcal T}}_h} \\ {\nonumber}= & -\left( {{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h,\varphi( \nabla u - \nabla u^{{{\mathcal I}}}_h) \right)_{{{\mathcal T}}_h} - \epsilon^{-1}\left( {{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h,\varphi\,( \epsilon \nabla u^{{{\mathcal I}}}_h+{{\boldsymbol{q}}}_h) \right)_{{{\mathcal T}}_h}{\nonumber}\\ = & \left( \nabla \varphi \cdot({{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h),u-u^{{{\mathcal I}}}_h\right)_{{{\mathcal T}}_h} + \left( \varphi\, {\rm div}({{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h),u-u^{{{\mathcal I}}}_h \right)_{{{\mathcal T}}_h} {\nonumber}\\ &\qquad - \langle ({{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h)\cdot {{\boldsymbol{n}}}, \varphi(u-u^{{{\mathcal I}}}_h)\rangle_{\partial {{\mathcal T}}_h}-\epsilon^{-1}\left( {{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h,\varphi\,( \epsilon \nabla u^{{{\mathcal I}}}_h+{{\boldsymbol{q}}}_h) \right)_{{{\mathcal T}}_h}{\nonumber}\\ = & \left( \nabla \varphi \cdot({{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h),u-u^{{{\mathcal I}}}_h\right)_{{{\mathcal T}}_h} + \left( R_h,\varphi (u-u^{{{\mathcal I}}}_h )\right)_{{{\mathcal T}}_h} {\nonumber}\\ &\qquad +\langle {{\boldsymbol{q}}}_h \cdot {{\boldsymbol{n}}}, \varphi( u-u^{{{\mathcal I}}}_h )\rangle_{\partial {{\mathcal T}}_h}- \left( {{\boldsymbol{\beta}}}\cdot \nabla(u-u_h) + c(u-u_h), \varphi( u-u^{{{\mathcal I}}}_h ) \right)_{{{\mathcal T}}_h}{\nonumber}\\ \nonumber & \qquad - \epsilon^{-1}\left( {{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h,\varphi\,( \epsilon \nabla u^{{{\mathcal I}}}_h+{{\boldsymbol{q}}}_h) \right)_{{{\mathcal T}}_h}.\end{aligned}$$ In the last step of (\[est\_2\]), we have used the 
equation (\[mixed-eqs\]) and the fact that $\langle {{\boldsymbol{q}}}\cdot {{\boldsymbol{n}}}, \varphi( u-u^{{{\mathcal I}}}_h )\rangle_{\partial {{\mathcal T}}_h}=0$ (since jumps of ${{\boldsymbol{q}}}\cdot{{\boldsymbol{n}}}, u, u^{{{\mathcal I}}}_h$ vanish on all interior faces and $ u-u^{{{\mathcal I}}}_h = 0$ on $\partial \Omega$). Inserting (\[est\_2\]) into (\[est\_1\]) and subtracting and adding $u_h$ into $u-u^{{{\mathcal I}}}_h$ in the first term of the right-hand side of (\[est\_2\]), we have $$\begin{aligned} \label{est_21} & C\left(\epsilon^{-1}\Vert {{\boldsymbol{e}}}_{{\boldsymbol{q}}}\Vert_{\mathcal{T}_{h}}^{2} + \Vert {{\boldsymbol{e}}}_u\Vert_{\mathcal{T}_{h}}^{2}\right)\\ \nonumber \leq & \left( \nabla \varphi \cdot({{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h),u_h-u^{{{\mathcal I}}}_h\right)_{{{\mathcal T}}_h} + \left( R_h,\varphi (u-u^{{{\mathcal I}}}_h )\right)_{{{\mathcal T}}_h} \\ {\nonumber}&\quad +\langle {{\boldsymbol{q}}}_h \cdot {{\boldsymbol{n}}}, \varphi( u-u^{{{\mathcal I}}}_h )\rangle_{\partial {{\mathcal T}}_h} - \left( {{\boldsymbol{\beta}}}\cdot \nabla(u-u_h) + c(u-u_h), \varphi( u-u^{{{\mathcal I}}}_h ) \right)_{{{\mathcal T}}_h}\\ {\nonumber}&\quad -\epsilon^{-1}\left( {{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h,\varphi\,( \epsilon \nabla u^{{{\mathcal I}}}_h+{{\boldsymbol{q}}}_h) \right)_{{{\mathcal T}}_h}\\ {\nonumber}&\quad -\frac{1}{2} ( {{\boldsymbol{\beta}}}\cdot \nabla \varphi {{\boldsymbol{e}}}_u,{{\boldsymbol{e}}}_u )_{{{\mathcal T}}_h}+ \big( (c-\frac{1}{2}{\rm div}{{\boldsymbol{\beta}}}) {{\boldsymbol{e}}}_u ,\varphi{{\boldsymbol{e}}}_u \big)_{{{\mathcal T}}_h} .\end{aligned}$$ For any $w\in W^c_{1,h} \cap H^1_0(\Omega) $, the equation (\[P2\]) in the HDG method (\[PD\]) can be rewritten as follows after integration by parts: $$\begin{aligned} (f,w)_{{{\mathcal T}}_h}& = -({{\boldsymbol{q}}}_h,\nabla w)_{{{\mathcal T}}_h} + \left({\rm div}({{\boldsymbol{\beta}}}u_h),w\right)_{{{\mathcal T}}_h} - \langle 
{{\boldsymbol{\beta}}}u_h \cdot {{\boldsymbol{n}}},w\rangle_{\partial {{\mathcal T}}_h} \\ &\quad + (cu_h - {\rm div}{{\boldsymbol{\beta}}}u_h,w)_{{{\mathcal T}}_h} + \langle ( \widehat{{{\boldsymbol{q}}}}_h+\widehat{{{\boldsymbol{\beta}}}u}_h )\cdot{{\boldsymbol{n}}},w\rangle_{\partial {{\mathcal T}}_h}\\ &=({\rm div}{{\boldsymbol{q}}}_h,w)_{{{\mathcal T}}_h} - \langle {{\boldsymbol{q}}}_h \cdot {{\boldsymbol{n}}},w \rangle_{\partial {{\mathcal T}}_h} + ( {{\boldsymbol{\beta}}}\cdot \nabla u_h+cu_h,w )_{{{\mathcal T}}_h}\\ &\quad - \langle {{\boldsymbol{\beta}}}\cdot {{\boldsymbol{n}}}u_h,w\rangle_{\partial {{\mathcal T}}_h}+ \langle ( \widehat{{{\boldsymbol{q}}}}_h+\widehat{{{\boldsymbol{\beta}}}u}_h )\cdot{{\boldsymbol{n}}},w\rangle_{\partial {{\mathcal T}}_h}.\end{aligned}$$ Note that the equation (\[P4\]) indicates $\langle ( \widehat{{{\boldsymbol{q}}}}_h+\widehat{{{\boldsymbol{\beta}}}u}_h )\cdot{{\boldsymbol{n}}},w\rangle_{\partial {{\mathcal T}}_h} = 0$. Hence, we have $$\begin{aligned} - \langle ({{{\boldsymbol{q}}}}_h+{{\boldsymbol{\beta}}}{u}_h )\cdot{{\boldsymbol{n}}},w\rangle_{\partial {{\mathcal T}}_h} = (R_h,w)_{{{\mathcal T}}_h}. 
\label{est_3}\end{aligned}$$ Combining (\[est\_21\]) and (\[est\_3\]), we obtain $$\begin{aligned} & C \left(\epsilon^{-1}\Vert {{\boldsymbol{e}}}_{{\boldsymbol{q}}}\Vert_{\mathcal{T}_{h}}^{2} +\Vert {{\boldsymbol{e}}}_u\Vert_{\mathcal{T}_{h}}^{2}\right) \leq \sum^4_{l=1}I_l \text{ where }\\ I_1& = -\left( \nabla \psi e^{-\psi}({{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h),u_h-u^{{{\mathcal I}}}_h\right)_{{{\mathcal T}}_h} -\epsilon^{-1}\left( {{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h,\varphi ({{\boldsymbol{q}}}_h+\epsilon \nabla u^{{{\mathcal I}}}_h)\right)_{{{\mathcal T}}_h},\\ I_2&= \left( R_h, (I-\pi_h)( \varphi u-\varphi u^{{{\mathcal I}}}_h )\right)_{{{\mathcal T}}_h} +\langle ({{\boldsymbol{q}}}_h+{{\boldsymbol{\beta}}}u_h)\cdot{{\boldsymbol{n}}}, (I-\pi_h)( \varphi u-\varphi u^{{{\mathcal I}}}_h )\rangle_{\partial {{\mathcal T}}_h} ,\\ I_3 & = -\langle {{\boldsymbol{\beta}}}\cdot {{\boldsymbol{n}}}u_h, \varphi( u-u^{{{\mathcal I}}}_h )\rangle_{\partial {{\mathcal T}}_h} -\left( {{\boldsymbol{\beta}}}\cdot \nabla(u^{{{\mathcal I}}}_h-u_h) + c(u^{{{\mathcal I}}}_h-u_h), \varphi( u-u^{{{\mathcal I}}}_h ) \right)_{{{\mathcal T}}_h},\\ I_4 & = -\left( {{\boldsymbol{\beta}}}\cdot \nabla(u-u^{{{\mathcal I}}}_h) + c(u-u^{{{\mathcal I}}}_h), \varphi( u-u^{{{\mathcal I}}}_h ) \right)_{{{\mathcal T}}_h} \\ &\qquad - \frac{1}{2} ( {{\boldsymbol{\beta}}}\cdot \nabla \varphi {{\boldsymbol{e}}}_u,{{\boldsymbol{e}}}_u )_{{{\mathcal T}}_h}+ \big( (c-\frac{1}{2}{\rm div}{{\boldsymbol{\beta}}}) {{\boldsymbol{e}}}_u ,\varphi{{\boldsymbol{e}}}_u \big)_{{{\mathcal T}}_h}.\end{aligned}$$ Here $\pi_h$ is the Clément-type interpolation into the space $W^c_{1,h}\cap H^1_0(\Omega)$ (see Definition \[define-clement\]). Now we estimate the summation $\sum^4_{l=1}I_l$. For the sake of simplicity, given any $v \in H^1(D)$, $D\subset \Omega$, we define an energy norm for $v$ by $\interleave v \interleave_D = \left(\|v\|^2_{0,D} + \epsilon \| \nabla v\|^2_{0,D}\right)^{\frac{1}{2}}$. 
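Throughout the estimates of the terms $I_1,\dots,I_4$ below, products of an estimator contribution and an error contribution are split by the weighted Young inequality with a free parameter $\delta>0$, which is eventually chosen small:

```latex
ab \;\leq\; \frac{1}{2\delta}\,a^{2} + \frac{\delta}{2}\,b^{2},
\qquad a,\,b \geq 0,\ \delta > 0.
```

This is the source of the factors $\frac{C}{2\delta}$ multiplying estimator terms and $\delta C$ multiplying error terms in the bounds that follow.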
We refer to Appendix A and Appendix B for the estimates of $I_{1}$ and $I_{4}$, respectively. In the following, we focus on the estimate of $I_2+I_3$ by two approaches. (**Approach A**) For the first approach, we consider the estimates of $I_2$ and $I_3$ separately. By the approximation properties of the Clément-type interpolation presented in Lemma \[ax\_re\_1\], we obtain $$\begin{aligned} I_2 &\leq C\sum_{T \in {{\mathcal T}}_h} \alpha_T \| R_h\|_{0,T} \interleave \varphi u-\varphi u^{{{\mathcal I}}}_h\interleave_{\Omega_T} \\ & \quad +C \sum_{F\in {{\mathcal E}}^0_h} \left(\epsilon^{-\frac{1}{4}} \alpha^{\frac{1}{2}}_F \| \llbracket {{\boldsymbol{q}}}_h\cdot{{\boldsymbol{n}}}\rrbracket \|_{0,F} + \epsilon^{-\frac{1}{4}} \alpha^{\frac{1}{2}}_F \|{{\boldsymbol{\beta}}}\|_{L^\infty(F)} \| \llbracket u_h \rrbracket \|_{0,F}\right) \interleave \varphi u-\varphi u^{{{\mathcal I}}}_h\interleave_{\Omega_F} .\end{aligned}$$ Using Young’s inequality and subtracting and adding $u_h$ into $u-u^{{{\mathcal I}}}_h$, we further have $$\begin{aligned} \label{est_I2} I_2 & \leq \frac{C}{2\delta} \sum_{T \in {{\mathcal T}}_h} \alpha^2_T \|R_h\|^2_{0,T}+ \frac{C}{2\delta} \sum_{F \in {{\mathcal E}}^0_h} \epsilon^{-\frac{1}{2}} \alpha_F \| \llbracket {{\boldsymbol{q}}}_h\cdot{{\boldsymbol{n}}}\rrbracket \|^2_{0,F}\\ {\nonumber}&\quad +\frac{C}{2\delta} \sum_{F \in {{\mathcal E}}^0_h} \epsilon^{-\frac{1}{2}} \alpha_F \|{{\boldsymbol{\beta}}}\|_{L^\infty(F)} \| \llbracket u_h \rrbracket \|^2_{0,F}+ \delta C \|{{\boldsymbol{\beta}}}\|_{L^\infty(\Omega)} \interleave \varphi(u-u^{{{\mathcal I}}}_h)\interleave^2_{{{\mathcal T}}_h}.\end{aligned}$$ Since $u-u^{{{\mathcal I}}}_h=0$ on $\partial \Omega$, we easily get $\langle {{\boldsymbol{\beta}}}\cdot {{\boldsymbol{n}}}u^{{{\mathcal I}}}_h, \varphi( u-u^{{{\mathcal I}}}_h )\rangle_{\partial {{\mathcal T}}_h}=0$.
Integrating by parts, we have $$\begin{aligned} I_3 &= \left( {{\boldsymbol{\beta}}}(u^{{{\mathcal I}}}_h-u_h), \varphi \nabla(u-u^{{{\mathcal I}}}_h) \right)_{{{\mathcal T}}_h} + \left( {{\boldsymbol{\beta}}}\cdot \nabla \varphi (u^{{{\mathcal I}}}_h-u_h), u-u^{{{\mathcal I}}}_h \right)_{{{\mathcal T}}_h} \\ &\quad +\left( ({\rm div}{{\boldsymbol{\beta}}}-c)(u^{{{\mathcal I}}}_h-u_h), \varphi(u-u^{{{\mathcal I}}}_h) \right)_{{{\mathcal T}}_h}.\end{aligned}$$ Noting that $\nabla(u-u^{{{\mathcal I}}}_h) = -\epsilon^{-1}\{ ({{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h) +({{\boldsymbol{q}}}_h+\epsilon \nabla u_h) -\epsilon \nabla(u_h - u^{{{\mathcal I}}}_h) \}$ and $u-u^{{{\mathcal I}}}_h = (u-u_h) + (u_h-u^{{{\mathcal I}}}_h)$, we utilize the Cauchy-Schwarz and Young’s inequalities to obtain $$\begin{aligned} \label{est_I3} I_3 &\leq \big(C^{d}_1 \frac{3\epsilon^{-1}}{2\delta}+ C^d_2(1+\frac{1}{2\delta}) \big) \| u_h - u^{{{\mathcal I}}}_h \|^2_{0,{{\mathcal T}}_h} +C^d_2 \frac{\delta}{2} \|u-u_h\|^2_{0,{{\mathcal T}}_h}\\ {\nonumber}&\quad +\frac{\delta}{2} \left( \epsilon^{-1}\|{{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h\|^2_{0,{{\mathcal T}}_h} + \epsilon^{-1}\| {{\boldsymbol{q}}}_h + \epsilon \nabla u_h \|^2_{0,{{\mathcal T}}_h} + \epsilon\|\nabla(u_h - u^{{{\mathcal I}}}_h)\|^2_{0,{{\mathcal T}}_h} \right),\end{aligned}$$ where $C^d_1 = \|{{\boldsymbol{\beta}}}\varphi\|^2_{L^\infty(\Omega)}$ and $C^d_2=\| {{\boldsymbol{\beta}}}\cdot\nabla \varphi+\varphi({\rm div}{{\boldsymbol{\beta}}}-c) \|_{L^\infty(\Omega)}$.
So, by the first approach, $$\begin{aligned} \label{I2_I3_1} & I_{2} + I_{3}\\ {\nonumber}\leq & \frac{C}{2\delta} \sum_{T \in {{\mathcal T}}_h} \alpha^2_T \|R_h\|^2_{0,T}+ \frac{C}{2\delta} \sum_{F \in {{\mathcal E}}^0_h} \epsilon^{-\frac{1}{2}} \alpha_F \| \llbracket {{\boldsymbol{q}}}_h\cdot{{\boldsymbol{n}}}\rrbracket \|^2_{0,F}\\ {\nonumber}&\quad +\frac{C}{2\delta} \sum_{F \in {{\mathcal E}}^0_h} \epsilon^{-\frac{1}{2}} \alpha_F \|{{\boldsymbol{\beta}}}\|_{L^\infty(F)} \| \llbracket u_h \rrbracket \|^2_{0,F}+ \delta C \|{{\boldsymbol{\beta}}}\|_{L^\infty(\Omega)} \interleave \varphi(u-u^{{{\mathcal I}}}_h)\interleave^2_{{{\mathcal T}}_h}\\ {\nonumber}&\quad +\big(C^{d}_1 \frac{3\epsilon^{-1}}{2\delta}+ C^d_2(1+\frac{1}{2\delta}) \big) \| u_h - u^{{{\mathcal I}}}_h \|^2_{0,{{\mathcal T}}_h} +C^d_2 \frac{\delta}{2} \|u-u_h\|^2_{0,{{\mathcal T}}_h}\\ {\nonumber}&\quad +\frac{\delta}{2} \left( \epsilon^{-1}\|{{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h\|^2_{0,{{\mathcal T}}_h} + \epsilon^{-1}\| {{\boldsymbol{q}}}_h + \epsilon \nabla u_h \|^2_{0,{{\mathcal T}}_h} + \epsilon\|\nabla(u_h - u^{{{\mathcal I}}}_h)\|^2_{0,{{\mathcal T}}_h} \right).\end{aligned}$$ Here, we recall that $C^d_1 = \|{{\boldsymbol{\beta}}}\varphi\|^2_{L^\infty(\Omega)}$ and $C^d_2=\| {{\boldsymbol{\beta}}}\cdot\nabla \varphi+\varphi({\rm div}{{\boldsymbol{\beta}}}-c) \|_{L^\infty(\Omega)}$. (**Approach B**) For the second approach, we estimate the summation of $I_2$ and $I_3$. 
It is clear that $$\begin{aligned} \label{I2_I3_eq1} & I_2+ I_3\\ {\nonumber}= &\left( R_h, (I-\pi_h)( \varphi u-\varphi u^{{{\mathcal I}}}_h )\right)_{{{\mathcal T}}_h} +\langle {{\boldsymbol{q}}}_h\cdot{{\boldsymbol{n}}}, (I-\pi_h)( \varphi u-\varphi u^{{{\mathcal I}}}_h )\rangle_{\partial {{\mathcal T}}_h}\\ {\nonumber}& \quad -\langle {{\boldsymbol{\beta}}}\cdot {{\boldsymbol{n}}}u_h,\pi_h( \varphi u- \varphi u^{{{\mathcal I}}}_h )\rangle_{\partial {{\mathcal T}}_h}\\ {\nonumber}&\quad -\left( {{\boldsymbol{\beta}}}\cdot \nabla(u^{{{\mathcal I}}}_h-u_h) + c(u^{{{\mathcal I}}}_h-u_h), \varphi( u-u^{{{\mathcal I}}}_h ) \right)_{{{\mathcal T}}_h}.\end{aligned}$$ For the first two terms of the right-hand side of (\[I2\_I3\_eq1\]), the estimates can be obtained similarly to those in (\[est\_I2\]). For the third term, by the trace inequality we have $$\begin{aligned} & -\langle {{\boldsymbol{\beta}}}\cdot {{\boldsymbol{n}}}u_h,\pi_h( \varphi u- \varphi u^{{{\mathcal I}}}_h )\rangle_{\partial {{\mathcal T}}_h} {\nonumber}\\ \leq & C\sum_{F \in {{\mathcal E}}^0_h} \|{{\boldsymbol{\beta}}}\|_{L^\infty(F)} \| \llbracket u_h\rrbracket \|_{0,F} h^{-\frac{1}{2}}_F \| \pi_h( \varphi u - \varphi u^{{{\mathcal I}}}_h ) \|_{0,T_F}{\nonumber}\\ \leq & C\sum_{F \in {{\mathcal E}}^0_h} \|{{\boldsymbol{\beta}}}\|_{L^\infty(F)} \| \llbracket u_h\rrbracket \|_{0,F} h^{-\frac{1}{2}}_F \| \varphi u - \varphi u^{{{\mathcal I}}}_h \|_{0,\Omega_{T_F}} {\nonumber}\\ \leq & \frac{C}{2 \delta} \sum_{F \in {{\mathcal E}}^0_h} \|{{\boldsymbol{\beta}}}\|_{L^\infty(F)} h^{-1}_F\| \llbracket u_h\rrbracket \|^2_{0,F} +C \delta \|{{\boldsymbol{\beta}}}\|_{L^\infty(\Omega)} \| \varphi \|^2_{L^\infty(\Omega)} ( \|u-u_h\|^2_{0,{{\mathcal T}}_h} + \|u_h-u^{{{\mathcal I}}}_h\|^2_{0,{{\mathcal T}}_h} ),{\nonumber}\end{aligned}$$ where $T_F$ is an element satisfying $F \subset \partial T_F$, and the second inequality of the above estimate is deduced from the $L^2$ stability property of Clément-type
interpolation $\pi_h$ (cf. [@Verfurth98]). For the fourth term of the right-hand side of (\[I2\_I3\_eq1\]), we can easily derive, for any $\delta > 0$, that $$\begin{aligned} &-\left( {{\boldsymbol{\beta}}}\cdot \nabla(u^{{{\mathcal I}}}_h-u_h) + c(u^{{{\mathcal I}}}_h-u_h), \varphi( u-u^{{{\mathcal I}}}_h ) \right)_{{{\mathcal T}}_h}{\nonumber}\\ \leq & \frac{1}{\delta} \| {{\boldsymbol{\beta}}}\varphi \|_{L^\infty(\Omega)} \|\nabla (u_h - u^{{{\mathcal I}}}_h)\|^2_{0,{{\mathcal T}}_h} + C^d_3 \|u_h-u^{{{\mathcal I}}}_h\|^2_{0,{{\mathcal T}}_h}+ \frac{ \delta}{2}C^d_4 \| u-u_h \|^2_{0,{{\mathcal T}}_h},{\nonumber}\end{aligned}$$ where $C^d_3 = \frac{\delta}{2} \|{{\boldsymbol{\beta}}}\varphi\|_{L^\infty(\Omega)} + (\frac{1}{2\delta}+1)\|c\varphi\|_{L^\infty(\Omega)} $, $C^d_4= \|{{\boldsymbol{\beta}}}\varphi\|_{L^\infty(\Omega)} +\|c\varphi\|_{L^\infty(\Omega)} $. Thus, by the second approach, $$\begin{aligned} \label{I2I3_r} & I_2 + I_3\\ {\nonumber}\leq &\frac{C}{2\delta} \sum_{T \in {{\mathcal T}}_h} \alpha^2_T \|R_h\|^2_{0,T}+ \frac{C}{2\delta} \sum_{F \in {{\mathcal E}}^0_h} \epsilon^{-\frac{1}{2}} \alpha_F \| \llbracket {{\boldsymbol{q}}}_h \cdot{{\boldsymbol{n}}}\rrbracket \|^2_{0,F} + \delta C\interleave \varphi(u-u^{{{\mathcal I}}}_h)\interleave^2_{{{\mathcal T}}_h}\\ {\nonumber}& + \frac{C}{2 \delta} \sum_{F \in {{\mathcal E}}^0_h} \|{{\boldsymbol{\beta}}}\|_{L^\infty(F)} h^{-1}_F\| \llbracket u_h\rrbracket \|^2_{0,F} +C \delta \|{{\boldsymbol{\beta}}}\|_{L^\infty(\Omega)} \| \varphi \|^2_{L^\infty(\Omega)} ( \|u-u_h\|^2_{0,{{\mathcal T}}_h} + \|u_h-u^{{{\mathcal I}}}_h\|^2_{0,{{\mathcal T}}_h} )\\ {\nonumber}& + \frac{1}{\delta} \| {{\boldsymbol{\beta}}}\varphi \|_{L^\infty(\Omega)} \|\nabla (u_h - u^{{{\mathcal I}}}_h)\|^2_{0,{{\mathcal T}}_h} + C^d_3 \|u_h-u^{{{\mathcal I}}}_h\|^2_{0,{{\mathcal T}}_h}+ \frac{ \delta}{2}C^d_4 \| u-u_h \|^2_{0,{{\mathcal T}}_h}.\end{aligned}$$ For the term $\delta C\interleave \varphi(u-u^{{{\mathcal I}}}_h)\interleave^2_{{{\mathcal
T}}_h} $ in the right-hand sides of (\[I2\_I3\_1\]) and (\[I2I3\_r\]), we derive that $$\begin{aligned} \label{I2I3_rr} & \delta C\interleave \varphi(u-u^{{{\mathcal I}}}_h)\interleave^2_{{{\mathcal T}}_h}\\ {\nonumber}\leq & \delta C \left( \epsilon\| \nabla \varphi\|^2_{L^\infty(\Omega)} +\|\varphi\|^2_{L^\infty(\Omega)} \right)\left( \|u-u_h\|^2_{0,{{\mathcal T}}_h} + \|u_h-u^{{{\mathcal I}}}_h\|^2_{0,{{\mathcal T}}_h} \right) \\ {\nonumber}& + \delta C \|\varphi\|^2_{L^\infty(\Omega)} \left( \epsilon^{-1}\|{{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h\|^2_{0,{{\mathcal T}}_h} + \epsilon^{-1}\| {{\boldsymbol{q}}}_h + \epsilon \nabla u_h \|^2_{0,{{\mathcal T}}_h} + \epsilon\|\nabla(u_h - u^{{{\mathcal I}}}_h)\|^2_{0,{{\mathcal T}}_h} \right).\end{aligned}$$ Now, we are ready to finish the proof. Combining (\[est\_I1\]), (\[I2\_I3\_1\]) (which is given by the first approach for the estimate of $I_2+I_3$), (\[I2I3\_rr\]), (\[est\_I4\]) and Lemma \[os\_est\_lemma\], and choosing $\delta$ small enough, we have $$\begin{aligned} \label{est-f1} \left(\epsilon^{-1}\Vert {{\boldsymbol{e}}}_{{\boldsymbol{q}}}\Vert_{\mathcal{T}_{h}}^{2} + \Vert {{\boldsymbol{e}}}_u\Vert_{\mathcal{T}_{h}}^{2}\right) \leq C \big( \sum_{T\in {{\mathcal T}}_h}\eta^2_T + \sum_{F\in {{\mathcal E}}^0_h}(\eta^{0}_{F,1})^{2} + \sum_{F\in {{\mathcal E}}^\partial_h} (\eta^\partial_{F,1})^2 \big),\end{aligned}$$ where $ \eta_{F,1}^{0} = \left( \epsilon^{-\frac{1}{2}} \alpha_F \| \llbracket {{\boldsymbol{q}}}_h\cdot{{\boldsymbol{n}}}\rrbracket \|^2_{0,F} + \gamma_{F,1}\| \llbracket u_h \rrbracket \|^2_{0,F} \right)^{\frac{1}{2}}$, $\eta^{\partial}_{F,1} = \gamma_{F,1}^{\frac{1}{2}} \|g-u_h \|_{0,F}$ and $\gamma_{F,1} =\frac{\epsilon}{h_F}+\frac{h_F}{\epsilon}+\epsilon^{-\frac{1}{2}}\alpha_F $. 
Similarly, we can obtain the following estimate by combining (\[est\_I1\]), (\[I2I3\_r\]) (which is given by the second approach for the estimate of $I_2+I_3$), (\[I2I3\_rr\]), (\[est\_I4\]) and Lemma \[os\_est\_lemma\], and again choosing $\delta$ small enough, $$\begin{aligned} \label{est-f2} \left(\epsilon^{-1}\Vert {{\boldsymbol{e}}}_{{\boldsymbol{q}}}\Vert_{\mathcal{T}_{h}}^{2} + \Vert {{\boldsymbol{e}}}_u\Vert_{\mathcal{T}_{h}}^{2}\right) \leq C \big( \sum_{T\in {{\mathcal T}}_h}\eta^2_T + \sum_{F\in {{\mathcal E}}^0_h}(\eta^{0}_{F,2})^{2} + \sum_{F\in {{\mathcal E}}^\partial_h} (\eta^\partial_{F,2})^2 \big),\end{aligned}$$ where $ \eta_{F,2}^{0} = \left( \epsilon^{-\frac{1}{2}} \alpha_F \| \llbracket {{\boldsymbol{q}}}_h\cdot{{\boldsymbol{n}}}\rrbracket \|^2_{0,F} + \gamma_{F,2}\| \llbracket u_h \rrbracket \|^2_{0,F} \right)^{\frac{1}{2}}$, $\eta^{\partial}_{F,2} = \gamma_{F,2}^{\frac{1}{2}} \|g-u_h \|_{0,F}$ and $\gamma_{F,2} =\frac{ \epsilon + \|\boldsymbol{\beta}\|_{L^\infty(F)} }{h_F} + h_F$. Thus, combining (\[est-f1\]) and (\[est-f2\]) completes the proof. Now we are in a position to prove the first main result. 
(Proof of Theorem \[thm\_reliability\]) Noting that $ {\rm div}({{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h) + {{\boldsymbol{\beta}}}\cdot \nabla(u-u_h) = R_h - c(u-u_h) $ and that $\alpha_T \leq 1$, one can easily obtain the following estimate by the triangle inequality: $$\begin{aligned} \label{pre-est-2} \alpha^2_T\| {\rm div}({{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h) + {{\boldsymbol{\beta}}}\cdot \nabla(u-u_h) \|^2_{0,T} \leq 2 \alpha^2_T\| R_h \|^2_{0,T} + 2 \|c( u-u_h )\|^2_{0,T}.\end{aligned}$$ Moreover, for the term $\epsilon \|\nabla (u-u_h)\|^2_{0,{{\mathcal T}}_h} $, we have $$\begin{aligned} \label{pre-est-3} \epsilon \|\nabla (u-u_h)\|^2_{0,{{\mathcal T}}_h} &= \epsilon^{-1} \| {{\boldsymbol{q}}}- {{\boldsymbol{q}}}_h + {{\boldsymbol{q}}}_h +\epsilon \nabla u_h \|^2_{0,{{\mathcal T}}_h}\\ {\nonumber}& \leq 2 \epsilon^{-1} \| {{\boldsymbol{q}}}- {{\boldsymbol{q}}}_h \|^2_{0,{{\mathcal T}}_h} + 2 \epsilon^{-1} \| {{\boldsymbol{q}}}_h +\epsilon \nabla u_h \|^2_{0,{{\mathcal T}}_h} .\end{aligned}$$ Combining (\[pre-est-2\]), (\[pre-est-3\]), Lemma \[pre-reliability\] and the fact that the jumps of ${{\boldsymbol{q}}}\cdot{{\boldsymbol{n}}}$ and $u$ vanish on all interior faces, we immediately have the following reliability estimate: $$\begin{aligned} \interleave ( {{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h,u-u_h ) \interleave^2_h \leq C \Big( \sum_{T\in {{\mathcal T}}_h}\eta^2_T + \sum_{F\in {{\mathcal E}}^0_h}(\eta^{0}_F)^{2} + \sum_{F\in {{\mathcal E}}^\partial_h} (\eta^\partial_F)^2 \Big).{\nonumber}\end{aligned}$$ So, the proof is complete. Proof of efficiency =================== In this section, we give the proofs of Efficiency (Theorem \[thm\_efficiency\]) and Efficiency on refined element (Theorem \[thm\_refined\_efficiency\]), which address the efficiency of the a posteriori error estimator in Definition \[def\_estimator\]. Efficiency ---------- By using the element bubble function $B_T$ (cf. 
Lemma \[bubble\_lemma\]), we have the following estimate, which is proven in Appendix C. \[lemma\_control\_eta2\] For any $T\in {{\mathcal T}}_h$, we have that $$\alpha^2_T \| R_h \|^2_{0,T} \leq C \left(\alpha^2_T \| {\rm div}({{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h) +{{\boldsymbol{\beta}}}\cdot \nabla (u-u_h) \|^2_{0,T} + \|u-u_h\|^2_{0,T} + {osc}^2_h(R_h,T) \right),$$ where the data oscillation term ${osc}^2_h(R_h,T)$ and the projection $P_{W}$ are introduced in Theorem \[thm\_efficiency\]. For any $w\in H^1_0(T)$, we also have $$(R_h,w)_T = -({{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h,\nabla w)_T + ( {{\boldsymbol{\beta}}}\cdot \nabla(u-u_h) +c(u-u_h) ,w)_T.$$ By a similar technique, we can deduce that $$\begin{aligned} \label{eff-1} &\alpha^2_T \| R_h \|^2_{0,T}\\ {\nonumber}\leq & C \left( \epsilon^{-1}\| {{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h \|^2_{0,T} + \alpha^2_T \| {{\boldsymbol{\beta}}}\cdot \nabla (u-u_h)+c(u-u_h) \|^2_{0,T} + osc^2_h(R_h,T) \right). \end{aligned}$$ With Lemma \[lemma\_control\_eta2\], we are in a position to prove the second main result. (Proof of Theorem \[thm\_efficiency\]) By (\[estimator1\]), (\[estimator2\]) and the fact that ${{\boldsymbol{q}}}\cdot {{\boldsymbol{n}}}$ and $u$ are continuous across all interior faces, we have (\[in\_face\_efficiency\]) and (\[boundary\_face\_efficiency\]) immediately. For any $T\in {{\mathcal T}}_h$, adding and subtracting ${{\boldsymbol{q}}}$ in ${{\boldsymbol{q}}}_h+\epsilon \nabla u_h$ yields $$\begin{aligned} \label{control_eta1} \epsilon^{-1}\| {{\boldsymbol{q}}}_h + \epsilon \nabla u_h \|^2_{0,T} \leq 2 \epsilon^{-1} \|{{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h\|^2_{0,T} + 2 \epsilon \| \nabla(u-u_h) \|^2_{0,T}.\end{aligned}$$ Combining (\[control\_eta1\]) and Lemma \[lemma\_control\_eta2\], we can conclude that (\[elem\_efficiency\]) is true. 
Efficiency on refined element ----------------------------- The proof of (\[ineq\_refined\_efficiency2\]) in Theorem \[thm\_refined\_efficiency\] can be obtained directly from (\[eff-1\]) and (\[control\_eta1\]), and the estimate (\[ineq\_refined\_efficiency\]) is an immediate consequence of the following Lemma \[lemma\_refined\_qn\] and Lemma \[lemma\_refined\_u\]. \[lemma\_refined\_qn\] For any $F\in {{\mathcal E}}^0_h$, if $h_F \leq O(\epsilon)$, we have $$\begin{aligned} & \epsilon^{-\frac{1}{2}} \alpha_F \| \llbracket {{\boldsymbol{q}}}_h\cdot {{\boldsymbol{n}}}\rrbracket \|^2_F\\ \leq & C \sum_{T \in \omega_F} \left( \epsilon^{-1}\| {{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h \|^2_{0,T} + \epsilon\| \nabla(u-u_h) \|^2_{0,T} +\|u-u_h\|^2_{0,T} + osc^2_h(R_h,T) \right).\end{aligned}$$ Note that for any $w \in H^1_0(\omega_F)$, we have $$\begin{aligned} \langle \llbracket {{\boldsymbol{q}}}_h\cdot{{\boldsymbol{n}}}\rrbracket, w \rangle_F& = \sum_{T \in \omega_F} \langle ({{\boldsymbol{q}}}- {{\boldsymbol{q}}}_h )\cdot {{\boldsymbol{n}}}, w \rangle_{\partial T} = \sum_{T \in \omega_F} \left( ({\rm div}({{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h),w)_T + ({{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h,\nabla w)_T \right) \\ &=\sum_{T \in \omega_F} \left( ( R_h - {{\boldsymbol{\beta}}}\cdot \nabla(u-u_h) - c(u-u_h),w )_T+ ({{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h,\nabla w)_T \right).\end{aligned}$$ Applying Lemma \[bubble\_lemma\] with $w = B_F \llbracket {{\boldsymbol{q}}}_h\cdot{{\boldsymbol{n}}}\rrbracket$, we have $$\begin{aligned} &\epsilon^{-\frac{1}{2}} \alpha_F \| \llbracket {{\boldsymbol{q}}}_h\cdot {{\boldsymbol{n}}}\rrbracket\|^2_F{\nonumber}\\ & \leq C \sum_{T \in \omega_F} \left( \epsilon^{-1}\| {{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h \|^2_{0,T} + \alpha^2_T \| R_h \|^2_{0,T} + \alpha^2_T \| {{\boldsymbol{\beta}}}\cdot \nabla (u-u_h)+c(u-u_h) \|^2_{0,T} \right).\end{aligned}$$ If $h_F \leq O(\epsilon)$, we have $\alpha^2_T \leq O(\epsilon)$ for $T\in \omega_F$. 
Then, by the above inequality and the estimate (\[eff-1\]), we can conclude that the proof is complete. By an approach similar to that of Lemma 3.4 in [@Cockburn14], we get the following estimate. \[lemma\_refined\_u\] For any $F\in {{\mathcal E}}^0_h$, if $h_F \leq O(\epsilon)$, we have $$\gamma_F \|\llbracket u_h \rrbracket \|^2_F \leq C \sum_{T \in \omega_F} \left( \epsilon^{-1}\| {{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h \|^2_{0,T} + \epsilon\| \nabla(u-u_h) \|^2_{0,T} \right).$$ We denote by $P_{M_0}$ the $L^2$ orthogonal projection operator onto the space $M_{0,h}$, where $M_{0,h}:=\{ \mu\in L^2(\mathcal E_h)\, : \, \mu|_F\in \mathcal P_0(F)\ {\rm for\ all }\ F\in \mathcal E_h\}$. By the equation (\[P1\]) in the HDG method, we have that, for any $T\in {{\mathcal T}}_h$ and any ${{\boldsymbol{r}}}$ in the lowest order Raviart-Thomas space $RT_0(T)$, $$(\epsilon^{-1} {{{\boldsymbol{q}}}}_h, {{\boldsymbol{r}}})_T-(u_h, {{\rm div \, {{{\boldsymbol{r}}}}}})_T+\langle \widehat u_h, {{{{\boldsymbol{r}}}}\cdot {{{\boldsymbol{n}}}}}\rangle_{\partial T}=0.$$ Since $\widehat u_h$ is single-valued on $F$, for any ${{\boldsymbol{r}}}\in H({\rm div}, \omega_F)$, we have $$\epsilon^{-1} ( {{\boldsymbol{q}}}_h + \epsilon \nabla u_h, {{\boldsymbol{r}}})_{\omega_F} = -\sum_{T \in \omega_F} \sum_{e \in \partial T \setminus F} \langle \widehat u_h-u_h, {{\boldsymbol{r}}}\cdot {{\boldsymbol{n}}}\rangle_F + \langle \llbracket u_h \rrbracket , {{\boldsymbol{r}}}\cdot {{\boldsymbol{n}}}\rangle_F.$$ We take ${{\boldsymbol{r}}}_{0}\in H({\rm div}, \omega_F)$ such that ${{\boldsymbol{r}}}_{0}|_{T} \in RT_0(T) \text{ for all } T \in \omega_F$, $ \int_F {{\boldsymbol{r}}}_{0} \cdot {{\boldsymbol{n}}}= \int_F P_{M_0}\llbracket u_h \rrbracket$ and $\int_e {{\boldsymbol{r}}}_{0} \cdot {{\boldsymbol{n}}}= 0$ for all $e \in \partial \omega_{F}$. 
Then we obtain $$\begin{aligned} \|P_{M_0}\llbracket u_h \rrbracket\|^2_{0,F} &= \langle \llbracket u_h \rrbracket , {{\boldsymbol{r}}}_{0} \cdot {{\boldsymbol{n}}}\rangle_F = \epsilon^{-1} ( {{\boldsymbol{q}}}_h + \epsilon \nabla u_h, {{\boldsymbol{r}}}_{0} )_{\omega_F} \\ & \leq C \sum_{T\in \omega_F} \epsilon^{-1} \| {{\boldsymbol{q}}}_h + \epsilon \nabla u_h \|_{0,T} \| {{\boldsymbol{r}}}_{0} \|_{0,T}.\end{aligned}$$ Obviously, $\| {{\boldsymbol{r}}}_{0} \|_{0,T} \leq C h^{\frac{1}{2}}_F \| {{\boldsymbol{r}}}_{0} \cdot {{\boldsymbol{n}}}\|_{0,F}$ (cf. Lemma A.1 in [@Cockburn14]). Therefore we have $$\begin{aligned} \label{eff-2} \|P_{M_0}\llbracket u_h \rrbracket\|^2_{0,F} \leq C \frac{h_F}{\epsilon^2} \| {{\boldsymbol{q}}}_h + \epsilon \nabla u_h \|^2_{0,\omega_F}.\end{aligned}$$ Note that $$\begin{aligned} \| (I-P_{M_0})\llbracket u_h \rrbracket\|^2_{0,F} = \| (I-P_{M_0})\llbracket u- u_h \rrbracket\|^2_{0,F} \leq 2 \sum_{T\in \omega_F} \| (I-P_{W_0})(u - u_h|_T)\|^2_{0,F},\end{aligned}$$ where $P_{W_0}$ is the $L^2$ orthogonal projection operator onto the space of piecewise constant functions on each element. Then the trace theorem and Poincaré’s inequality indicate that $$\begin{aligned} \| (I-P_{M_0})\llbracket u_h \rrbracket\|^2_{0,F} \leq C h_F \| \nabla(u-u_h) \|^2_{0,\omega_F}. \label{eff-3}\end{aligned}$$ Combining (\[eff-2\]) and (\[eff-3\]) yields that $$\| \llbracket u_h \rrbracket\|^2_{0,F} \leq C \left( h_F \| \nabla(u-u_h) \|^2_{0,\omega_F} + \frac{h_F}{\epsilon^2} \| {{\boldsymbol{q}}}_h + \epsilon \nabla u_h \|^2_{0,\omega_F} \right).$$ By subtracting and adding ${{\boldsymbol{q}}}$ in ${{\boldsymbol{q}}}_h + \epsilon \nabla u_h$ and using the fact that $\gamma_F \frac{h_F}{\epsilon} \leq O(1)$ if $h_F \leq O(\epsilon)$, we can conclude that the proof is complete. 
Numerical experiments ===================== In this section, we present numerical results of the adaptive HDG method for two-dimensional model problems to show how the meshes are generated adaptively and how the estimators and the errors behave under the influence of the quantity $\epsilon/\|\boldsymbol{\beta} \|_{L^\infty(\Omega)}$. The adaptive HDG procedure consists of adaptive loops of the cycle “SOLVE $\rightarrow $ ESTIMATE $\rightarrow $ MARK $\rightarrow $ REFINE”. In the step SOLVE, we use a direct solver, namely the multifrontal method, to solve the discrete system. In the step ESTIMATE, we adopt the reliable and efficient a posteriori error estimators suggested in the above sections. In the step REFINE, we apply the newest vertex bisection algorithm (see [@RS07] and the references therein for details). For the step MARK, we use the bulk algorithm, which defines a set $\mathcal {M}^F_h$ of marked edges such that $$\sum_{F \in \mathcal {M}^F_h} \left( (\eta^0_F)^2+(\eta^{\partial}_F)^2\right) \geq \theta_1 \sum_{F \in {{\mathcal E}}_h} \left( (\eta^0_F)^2+(\eta^{\partial}_F)^2\right)$$ and a set $\mathcal {M}^T_h$ of marked triangles such that $$\sum_{T \in \mathcal {M}^T_h} \eta_T^2\geq \theta_2 \sum_{T \in {{\mathcal T}}_h} \eta_T^2,$$ where $\theta_1$ and $\theta_2$ are user-chosen parameters; we use $\theta_1=\theta_2=0.5$ in the following experiments. For brevity, we denote $\eta^2_1 = \sum_{T \in {{\mathcal T}}_h} \eta_T^2$ and $\eta^2_2 = \sum_{F \in {{\mathcal E}}_h} \left( (\eta^0_F)^2+(\eta^{\partial}_F)^2\right)$. For any $({{\boldsymbol{p}}},w) \in {{\boldsymbol{H}}}^1({{\mathcal T}}_h) \times H^1({{\mathcal T}}_h)$, we define an error norm $ \| ({{\boldsymbol{p}}},w)\|^2_h = \sum_{T\in {{\mathcal T}}_h} ( \epsilon^{-1} \| {{\boldsymbol{p}}}\|^2_{0,T} + \|w\|^2_{0,T} ). $ In the following experiments, the adaptive HDG method is implemented for piecewise linear (HDG-P1), quadratic (HDG-P2), and cubic (HDG-P3) finite element spaces. 
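The bulk marking step above can be sketched as follows. This is a minimal Python illustration (not the code used in our experiments); the helper name `bulk_mark` is ours, and `eta_sq` holds the squared indicators, i.e. $(\eta^0_F)^2+(\eta^\partial_F)^2$ for edges or $\eta_T^2$ for triangles:

```python
import numpy as np

def bulk_mark(eta_sq, theta=0.5):
    """Greedy bulk (Doerfler) marking: return indices of a set M whose
    summed squared indicators reach theta times the total."""
    order = np.argsort(eta_sq)[::-1]          # largest indicators first
    cumsum = np.cumsum(eta_sq[order])
    k = int(np.searchsorted(cumsum, theta * eta_sq.sum())) + 1
    return order[:k]                          # marked edge/triangle indices
```

The same routine would be applied once with the face indicators and threshold $\theta_1$, and once with the element indicators and threshold $\theta_2$.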
The figures displaying the convergence history are all plotted in log-log coordinates. ![Adaptively refined mesh (left) and 3D plot of the corresponding approximate solution $u_h$ (right) by HDG-P2 for the case $\epsilon = 10^{-4}$.[]{data-label="ex1-1"}](fig/ex1/ex1_p2_1e4_grid.pdf "fig:"){width="6cm" height="5.5cm"} ![Adaptively refined mesh (left) and 3D plot of the corresponding approximate solution $u_h$ (right) by HDG-P2 for the case $\epsilon = 10^{-4}$.[]{data-label="ex1-1"}](fig/ex1/ex1_p2_1e4_solution.pdf "fig:"){width="6cm" height="5.5cm"} We consider a boundary layer problem in [@AyusoMarini:cdf]. The convection-diffusion equation (\[cd\_eqs\]) is solved in the domain $\Omega = [0,1]\times [0,1]$ with $\beta = [1,1]^T$, $c=0$. The source term $f$ and the Dirichlet boundary condition are chosen such that $$u(x,y) = x + y(1-x) + \frac{ e^{-1/\epsilon} - e^{-(1-x)(1-y)/\epsilon} }{1-e^{-1/\epsilon}}$$ is the exact solution. ![Convergence history of the adaptive HDG method. Left-Right: $\epsilon=10^{-5}, \epsilon=10^{-6}$. Top-Bottom: $P1$-$P3$.[]{data-label="ex1-2"}](fig/ex1/1e-5_p1.pdf "fig:"){width="6cm" height="5.5cm"} ![Convergence history of the adaptive HDG method. Left-Right: $\epsilon=10^{-5}, \epsilon=10^{-6}$. Top-Bottom: $P1$-$P3$.[]{data-label="ex1-2"}](fig/ex1/1e-6_p1.pdf "fig:"){width="6cm" height="5.5cm"} ![Convergence history of the adaptive HDG method. Left-Right: $\epsilon=10^{-5}, \epsilon=10^{-6}$. Top-Bottom: $P1$-$P3$.[]{data-label="ex1-2"}](fig/ex1/1e-5_p2.pdf "fig:"){width="6cm" height="5.5cm"} ![Convergence history of the adaptive HDG method. Left-Right: $\epsilon=10^{-5}, \epsilon=10^{-6}$. Top-Bottom: $P1$-$P3$.[]{data-label="ex1-2"}](fig/ex1/1e-6_p2.pdf "fig:"){width="6cm" height="5.5cm"} ![Convergence history of the adaptive HDG method. Left-Right: $\epsilon=10^{-5}, \epsilon=10^{-6}$. 
Top-Bottom: $P1$-$P3$.[]{data-label="ex1-2"}](fig/ex1/1e-5_p3.pdf "fig:"){width="6cm" height="5.5cm"} ![Convergence history of the adaptive HDG method. Left-Right: $\epsilon=10^{-5}, \epsilon=10^{-6}$. Top-Bottom: $P1$-$P3$.[]{data-label="ex1-2"}](fig/ex1/1e-6_p3.pdf "fig:"){width="6cm" height="5.5cm"} The solution develops boundary layers along the boundaries $x=1$ and $y=1$ for small $\epsilon$. The initial quasi-uniform mesh consists of 800 triangles and the initial mesh size is $h_0=0.05$. In fact, our algorithm is also robust on coarser initial meshes. Figure \[ex1-1\] displays the adaptively refined mesh and the corresponding approximate solution $u_h$ after 20 iterations of the adaptive HDG-P2 method for the two cases $\epsilon = 10^{-4}$ and $\epsilon=10^{-5}$. One can observe that the mesh is always locally refined at the singularities along the boundaries $x=1$ and $y=1$, and that the boundary layer solution can be captured on the adaptively refined mesh. In Figure \[ex1-2\], we show the error $\| ({{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h,u-u_h) \|_h$, the total energy error $\interleave ({{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h,u-u_h) \interleave_h$ (the total energy error is defined in (\[energy-error\])), the a posteriori error estimators $\eta_1$ and $\eta_2$, and the total a posteriori error estimator as functions of $N$, the number of degrees of freedom (DOFs) of $\widehat{u}_h$, for the two cases $\epsilon = 10^{-5}$ and $\epsilon=10^{-6}$ by HDG-P1, HDG-P2 and HDG-P3, respectively. In the following, we always let DOFs refer to the DOFs of $\widehat{u}_h$. For the case $\epsilon = 10^{-5}$, the convergence results indicate the robustness of the proposed a posteriori error estimator and the almost optimal convergence rate $ O(N^{-p/2})$ of the adaptive HDG method when the number of DOFs is sufficiently large. Here, $p$ is the polynomial order. The convergence of $\eta_1$ and $\eta_2$ is similar to that of the total a posteriori error estimator. 
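As a sanity check, the exact solution of this boundary layer problem can be evaluated directly; the short Python sketch below (the function name `u_exact` is ours) confirms the layer behavior: $u$ vanishes on the boundaries $x=1$ and $y=1$ and is close to $x+y(1-x)$ away from them for small $\epsilon$.

```python
import math

def u_exact(x, y, eps):
    """Exact solution of the first test problem; boundary layers at x=1 and y=1."""
    e = math.exp(-1.0 / eps)
    return x + y * (1.0 - x) + (e - math.exp(-(1.0 - x) * (1.0 - y) / eps)) / (1.0 - e)
```

For $\epsilon=10^{-4}$, `u_exact(1.0, 0.3, 1e-4)` and `u_exact(0.7, 1.0, 1e-4)` are zero, while `u_exact(0.3, 0.4, 1e-4)` agrees with $x+y(1-x)=0.58$ to machine precision.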
For the smaller value $\epsilon = 10^{-6}$ in this example, although the convergence of the error $\| ({{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h,u-u_h) \|_h$ slows down, the a posteriori error estimators and the total energy error still converge at almost $O(N^{-p/2})$ for $p=1,2$ and $O(N^{-1})$ for $p=3$ on the currently obtained meshes. We consider an internal layer problem in [@Voh07]. We set $\Omega = [0,1] \times [0,1]$, $\beta = [0,1]^T$, $c=1$. The source term $f$ and the Dirichlet boundary condition are chosen such that $$u(x,y) = 0.5\left( 1- {\rm tanh} \left( \frac{0.5-x}{\alpha} \right) \right)$$ is the exact solution, where $\alpha$ is the width of the internal layer. ![Adaptively refined mesh (Left) and 3D plot (Right) of the corresponding approximate solution $u_h$ by HDG-P3 for the case $\alpha= 10^{-4}$ and $\epsilon=10^{-6}$.[]{data-label="ex2-1"}](fig/ex2/1e4_6_grid.pdf "fig:"){width="6cm" height="5.5cm"} ![Adaptively refined mesh (Left) and 3D plot (Right) of the corresponding approximate solution $u_h$ by HDG-P3 for the case $\alpha= 10^{-4}$ and $\epsilon=10^{-6}$.[]{data-label="ex2-1"}](fig/ex2/1e4_6_solution.pdf "fig:"){width="6cm" height="6cm"} ![Convergence history of the adaptive HDG method. Left: $\alpha= 10^{-3}$ and $\epsilon =10^{-5}$. Right: $\alpha = 10^{-4}$ and $\epsilon=10^{-6}$. Top-Bottom: $P1$-$P3$.[]{data-label="ex2-2"}](fig/ex2/1e3-5_p1.pdf "fig:"){width="6cm" height="5.5cm"} ![Convergence history of the adaptive HDG method. Left: $\alpha= 10^{-3}$ and $\epsilon =10^{-5}$. Right: $\alpha = 10^{-4}$ and $\epsilon=10^{-6}$. Top-Bottom: $P1$-$P3$.[]{data-label="ex2-2"}](fig/ex2/1e4-6_p1.pdf "fig:"){width="6cm" height="5.5cm"} ![Convergence history of the adaptive HDG method. Left: $\alpha= 10^{-3}$ and $\epsilon =10^{-5}$. Right: $\alpha = 10^{-4}$ and $\epsilon=10^{-6}$. Top-Bottom: $P1$-$P3$.[]{data-label="ex2-2"}](fig/ex2/1e3-5_p2.pdf "fig:"){width="6cm" height="5.5cm"} ![Convergence history of the adaptive HDG method. 
Left: $\alpha= 10^{-3}$ and $\epsilon =10^{-5}$. Right: $\alpha = 10^{-4}$ and $\epsilon=10^{-6}$. Top-Bottom: $P1$-$P3$.[]{data-label="ex2-2"}](fig/ex2/1e4-6_p2.pdf "fig:"){width="6cm" height="5.5cm"} ![Convergence history of the adaptive HDG method. Left: $\alpha= 10^{-3}$ and $\epsilon =10^{-5}$. Right: $\alpha = 10^{-4}$ and $\epsilon=10^{-6}$. Top-Bottom: $P1$-$P3$.[]{data-label="ex2-2"}](fig/ex2/1e3-5_p3.pdf "fig:"){width="6cm" height="5.5cm"} ![Convergence history of the adaptive HDG method. Left: $\alpha= 10^{-3}$ and $\epsilon =10^{-5}$. Right: $\alpha = 10^{-4}$ and $\epsilon=10^{-6}$. Top-Bottom: $P1$-$P3$.[]{data-label="ex2-2"}](fig/ex2/1e4-6_p3.pdf "fig:"){width="6cm" height="5.5cm"} The solution of this problem possesses an internal layer along $x=0.5$. The initial quasi-uniform mesh consists of 128 triangles and the initial mesh size is $h_0=0.12$. The graphs of Figure \[ex2-1\] show the mesh and the solution after 32 iterations of the adaptive HDG-P3 method for the case $\alpha= 10^{-4}$ and $\epsilon =10^{-6}$. We can see that the singularities of the solution can also be captured near $x=0.5$ on the adaptively refined mesh. Figure \[ex2-2\] shows the convergence of the corresponding errors. The robustness of the proposed a posteriori error estimator and the almost optimal convergence rate of the adaptive HDG method can be observed for HDG-P1, HDG-P2 and HDG-P3 when the number of DOFs is large enough. Moreover, we can also see that $\eta_1$ and $\eta_2$ converge similarly to the total a posteriori error estimator. 
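The empirical rates reported here are the slopes of the log-log convergence curves; they can be extracted from (DOFs, error) pairs by a least-squares fit. A minimal sketch follows, with the helper name `fit_rate` being ours:

```python
import math

def fit_rate(dofs, errors):
    """Least-squares slope s of log(error) versus log(N), i.e. error ~ O(N^s)."""
    xs = [math.log(n) for n in dofs]
    ys = [math.log(e) for e in errors]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den
```

For HDG-P$p$ one expects a fitted slope close to $-p/2$ once the layer is resolved; e.g. data halving the error as $N$ quadruples gives a slope of $-1/2$.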
![Adaptively refined mesh (Left) and the 3D plot (Right) of the corresponding approximate solution $u_h$ by HDG-P3 for the case $\epsilon =10^{-4}$.[]{data-label="ex3-1"}](fig/ex3/1e4_p3_grid.pdf "fig:"){width="6cm" height="5.5cm"} ![Adaptively refined mesh (Left) and the 3D plot (Right) of the corresponding approximate solution $u_h$ by HDG-P3 for the case $\epsilon =10^{-4}$.[]{data-label="ex3-1"}](fig/ex3/1e4_p3_solution.pdf "fig:"){width="6cm" height="5.5cm"} This example is also taken from [@AyusoMarini:cdf]. We set $\beta = [1/2,\sqrt{3}/2]^T$, $c=0$, the source term $f=0$ and the Dirichlet boundary conditions as follows: $$u= \begin{cases} 1 & \text{on $ \{y=0, 0\leq x \leq 1\} $ }, \\ 1& \text{on $ \{x=0, 0\leq y \leq 1/5\} $ },\\ 0 & \text{elsewhere }. \end{cases}$$ ![Convergence history of $\eta_1$, $\eta_2$ and the total a posteriori error estimator $\eta$ by HDG-P1 (Left) and a comparison of the convergence histories of the estimator by HDG-P1, HDG-P2 and HDG-P3 (Right) for the case $\epsilon =10^{-4}$.[]{data-label="ex3-2-1"}](fig/ex3/1e_4_P1.pdf "fig:"){width="6cm" height="5.5cm"} ![Convergence history of $\eta_1$, $\eta_2$ and the total a posteriori error estimator $\eta$ by HDG-P1 (Left) and a comparison of the convergence histories of the estimator by HDG-P1, HDG-P2 and HDG-P3 (Right) for the case $\epsilon =10^{-4}$.[]{data-label="ex3-2-1"}](fig/ex3/1e_4.pdf "fig:"){width="6cm" height="5.5cm"} ![Convergence history of the estimator $\eta$ by HDG-P1, HDG-P2 and HDG-P3. Left: $\epsilon =10^{-5}$. Right: $\epsilon=10^{-6}$.[]{data-label="ex3-2"}](fig/ex3/1e_5.pdf "fig:"){width="6cm" height="5.5cm"} ![Convergence history of the estimator $\eta$ by HDG-P1, HDG-P2 and HDG-P3. Left: $\epsilon =10^{-5}$. Right: $\epsilon=10^{-6}$.[]{data-label="ex3-2"}](fig/ex3/1e_6.pdf "fig:"){width="6cm" height="5.5cm"} The solution of this problem possesses both an interior layer and an outflow layer. The initial quasi-uniform mesh consists of 800 triangles and the initial mesh size is $h_0=0.05$. 
The graphs of Figure \[ex3-1\] show the adaptively refined mesh and the 3D plot of the corresponding approximate solution $u_h$ after 23 iterations of the adaptive HDG-P3 method for the case $\epsilon =10^{-4}$. Figure \[ex3-1\] indicates that both the interior and outflow layers can be captured by the adaptive mesh-refinement strategy. In particular, when the mesh size is $O(\epsilon)$ near the outflow layer and $O(\sqrt{\epsilon})$ near the interior layer, both interior and outflow layers can be captured well. Since there is no exact solution available for this problem, we only show the convergence of the proposed a posteriori error estimator for the cases $\epsilon = 10^{-4},10^{-5}$ and $10^{-6}$ in Figure \[ex3-2-1\] and Figure \[ex3-2\]. For the cases $\epsilon = 10^{-4},10^{-5}$, the almost optimal convergence rate $O(N^{-p/2})$ of the estimator can be observed for HDG-P1, HDG-P2 and HDG-P3 when the number of DOFs is large enough, and the convergence rate is faster when $p$ is larger. From the left graph of Figure \[ex3-2-1\], we can also see that $\eta_1$ and $\eta_2$ converge similarly to the total a posteriori error estimator for the case $\epsilon=10^{-4}$ by HDG-P1. In fact, the convergence behavior of $\eta_1$ and $\eta_2$ is similar in the other cases as well. For the smaller value $\epsilon=10^{-6}$, the convergence of the total a posteriori error estimator is almost $O(N^{-1})$ for $p=1,2,3$ on the currently obtained meshes. Appendix A. 
Estimate of $I_{1}$ {#appendix-a.-estimate-of-i_1 .unnumbered} ================================ By the Cauchy-Schwarz and Young’s inequalities, we get, for any $\delta>0$, $$\begin{aligned} \label{est_I1} I_1 &\leq \frac{\delta}{2} \epsilon^{-1} \| {{\boldsymbol{q}}}- {{\boldsymbol{q}}}_h \|^2_{0,{{\mathcal T}}_h} + \frac{1}{2\delta} \| \nabla \psi e^{-\psi} \|^2_{L^\infty(\Omega)} \epsilon \| u_h -u^{{{\mathcal I}}}_h \|^2_{0,{{\mathcal T}}_h}\\ {\nonumber}&\quad + \frac{\delta}{2} \epsilon^{-1} \| {{\boldsymbol{q}}}- {{\boldsymbol{q}}}_h \|^2_{0,{{\mathcal T}}_h} +\frac{1}{2\delta} \|\varphi\|^2_{L^\infty(\Omega)} \epsilon^{-1} \| {{\boldsymbol{q}}}_h +\epsilon \nabla u^{{{\mathcal I}}}_h \|^2_{0,{{\mathcal T}}_h}\\ {\nonumber}&\leq \delta \epsilon^{-1} \| {{\boldsymbol{q}}}- {{\boldsymbol{q}}}_h \|^2_{0,{{\mathcal T}}_h} + \frac{1}{2\delta} \| \nabla \psi e^{-\psi} \|^2_{L^\infty(\Omega)} \epsilon \| u_h -u^{{{\mathcal I}}}_h \|^2_{0,{{\mathcal T}}_h}\\ {\nonumber}&\quad + \frac{1}{\delta}\|\varphi\|^2_{L^\infty(\Omega)} \left( \epsilon^{-1} \| {{\boldsymbol{q}}}_h+\epsilon \nabla u_h \|^2_{0,{{\mathcal T}}_h} + \epsilon \|\nabla u_h - \nabla u^{{{\mathcal I}}}_h \|^2_{0,{{\mathcal T}}_h} \right).\end{aligned}$$ Appendix B. Estimate of $I_{4}$ {#appendix-b.-estimate-of-i_4 .unnumbered} =============================== Now we consider the estimate of $I_4$. It is clear that $$-\left( {{\boldsymbol{\beta}}}\cdot \nabla(u-u^{{{\mathcal I}}}_h) , \varphi( u-u^{{{\mathcal I}}}_h ) \right)_{{{\mathcal T}}_h} = \frac{1}{2}\left( {{\boldsymbol{\beta}}}\cdot \nabla \varphi, (u-u^{{{\mathcal I}}}_h)^2\right)_{{{\mathcal T}}_h} + \frac{1}{2}\left( {\rm div}{{\boldsymbol{\beta}}}\, , \varphi(u-u^{{{\mathcal I}}}_h)^2\right)_{{{\mathcal T}}_h},$$ which can be obtained by integration by parts and $\langle {{\boldsymbol{\beta}}}\cdot {{\boldsymbol{n}}}, \varphi( u-u^{{{\mathcal I}}}_h )^2\rangle_{\partial {{\mathcal T}}_h}=0$. 
Then, we have $$\begin{aligned} I_4&=\left( {{\boldsymbol{\beta}}}\cdot \nabla \varphi(u-u_h),u_h - u^{{{\mathcal I}}}_h \right)_{{{\mathcal T}}_h} + \frac{1}{2}\left( {{\boldsymbol{\beta}}}\cdot \nabla \varphi(u^{{{\mathcal I}}}_h-u_h ) ,u^{{{\mathcal I}}}_h-u_h\right)_{{{\mathcal T}}_h}{\nonumber}\\ &\quad + 2\big( (c-\frac{1}{2}{\rm div}{{\boldsymbol{\beta}}})(u^{{{\mathcal I}}}_h-u_h ), \varphi(u-u_h) \big)_{{{\mathcal T}}_h} - \big( (c-\frac{1}{2}{\rm div}{{\boldsymbol{\beta}}})(u^{{{\mathcal I}}}_h-u_h ), \varphi(u^{{{\mathcal I}}}_h-u_h) \big)_{{{\mathcal T}}_h}. {\nonumber}\end{aligned}$$ Applying the Cauchy-Schwarz and Young’s inequalities, we get $$\begin{aligned} \label{est_I4} I_4 \leq \frac{1}{2}(C^d_5+2C^d_6)(\frac{1}{\delta}+1) \| u_h - u^{{{\mathcal I}}}_h \|^2_{0,{{\mathcal T}}_h} + (\frac{1}{2}C^d_5+C^d_6)\delta \| u- u_h \|^2_{0,{{\mathcal T}}_h},\end{aligned}$$ where $C^d_5 = \| {{\boldsymbol{\beta}}}\cdot \nabla \varphi \|_{L^\infty(\Omega)}, C^d_6=\|\varphi(c-\frac{1}{2}{\rm div}{{\boldsymbol{\beta}}})\|_{L^\infty(\Omega)}$. Appendix C. Proof of Lemma \[lemma\_control\_eta2\] {#appendix-c.-proof-of-lemma-lemma_control_eta2 .unnumbered} =================================================== We begin with the estimate $\alpha^2_T \| R_h \|^2_{0,T} \leq 2 \alpha^2_T \| P_WR_h \|^2_{0,T} + 2 osc^2_h(R_h,T)$. Note that for any $w\in H^1_0(T)$, we have $$(R_h,w)_T = ( {\rm div}({{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h)+{{\boldsymbol{\beta}}}\cdot \nabla (u-u_h),w )_T + (c(u-u_h),w)_T.$$ Moreover, for the element bubble function $B_T$, we have $$\| P_W R_h \|^2_{0,T} \thickapprox \int_T B_T (P_W R_h)^2 \thickapprox \| B_T P_W R_h \|^2_{0,T} .$$ Here, we write $A \thickapprox B$ if $C_1 B \leq A \leq C_2 B$ holds with positive constants $C_1$ and $C_2$. 
Taking $w = B_T P_W R_h$, we derive that $$\begin{aligned} \| P_W R_h \|^2_{0,T} &\thickapprox(P_W R_h ,w)_T = ( R_h ,w)_T - (R_h - P_W R_h ,w)_T\\ & = ( {\rm div}({{\boldsymbol{q}}}-{{\boldsymbol{q}}}_h)+{{\boldsymbol{\beta}}}\cdot \nabla (u-u_h),w )_T + (c(u-u_h),w)_T- (R_h - P_W R_h ,w)_T.\end{aligned}$$ Then, the proof is completed by the above estimates and Young’s inequality. [10]{} R. Adams, Sobolev Spaces, Academic Press, New York, 1975. M. Ainsworth and J. Oden, A Posteriori Error Estimation in Finite Element Analysis, Wiley-Interscience Series in Pure and Applied Mathematics, New York: Wiley, 2000. L.E. Alaoui, A. Ern, and E. Burman, A priori and a posteriori analysis of nonconforming finite elements with face penalty for advection-diffusion equations, IMA J. Numer. Anal., 27 (2007), pp. 151–171. , [Discontinuous Galerkin methods for advection-diffusion-reaction problems]{}, SIAM J. Numer. Anal., 47 (2009), pp. 1391–1420. S. Badia and R. Codina, Analysis of a stabilized finite element approximation of the transient convection-diffusion equation using an ALE framework, SIAM J. Numer. Anal., 44 (2006), pp. 2159–2197. , [Streamline upwind/[P]{}etrov-[G]{}alerkin formulations for convection dominated flows with particular emphasis on the incompressible [N]{}avier-[S]{}tokes equations]{}, Comput. Methods Appl. Mech. Engrg., 32 (1982), pp. 199–259. , [A priori error analysis of residual-free bubbles for advection-diffusion problems]{}, SIAM J. Numer. Anal., 36 (1999), pp. 1933–1948. , [Residual-free bubbles for advection-diffusion problems: the general error analysis]{}, Numer. Math., 85 (2000), pp. 31–47. , [A Petrov-Galerkin discretization with optimal test space of a mild-weak formulation of convection-diffusion equations in mixed form]{}, IMA J. Numer. Anal., 2014, accepted. , [Stabilized Galerkin approximation of convection-diffusion-reaction equations: discrete maximum principle and convergence]{}, Math. Comp., 74 (2005), pp. 1637–1652. 
E. Burman and P. Hansbo, Edge stabilization for Galerkin approximations of convection-diffusion-reaction problems, Comput. Methods Appl. Mech. Engrg., 193 (2004), pp. 1437–1453. E. Burman and A. Ern, Continuous interior penalty $hp$-finite element methods for advection and advection-diffusion equations, Math. Comp., 76 (2007), pp. 1119–1140. , [A robust DPG method for convection-dominated diffusion problems II: Adjoint boundary conditions and mesh-dependent test norms]{}, Computers & Mathematics with Applications, 67 (2014), pp. 771–795. , [*Analysis of variable-degree [HDG]{} methods for convection-diffusion equations. [P]{}art I: General nonconforming meshes*]{}, IMA J. Numer. Anal., **32(4)** (2012), pp. 1267–1293. H. Chen, G. Fu, J. Li, and W. Qiu, First order least square method with ultra-weakly imposed boundary condition for convection dominated diffusion problems, submitted, arXiv:1309.7108\[math.NA\] (2013). R. Codina and J. Blasco, Analysis of a stabilized finite element approximation of the transient convection-diffusion-reaction equation using orthogonal subscales, Comput. Vis. Sci., 4 (2002), pp. 167–174. B. Cockburn, B. Dong, J. Guzmán, M. Restelli, and R. Sacco, A hybridizable discontinuous Galerkin method for steady-state convection-diffusion-reaction problems, SIAM J. Sci. Comput., 31 (2009), pp. 3827–3846. B. Cockburn, J. Gopalakrishnan, and R. Lazarov, Unified hybridization of discontinuous Galerkin, mixed, and continuous Galerkin methods for second order elliptic problems, SIAM J. Numer. Anal., [**47**]{} (2009), pp. 1319–1365. B. Cockburn, J. Gopalakrishnan, and F.J. Sayas, A projection-based error analysis of HDG methods, Math. Comp., 79 (2010), pp. 1351–1367. B. Cockburn and W. Zhang, A posteriori error estimates for HDG methods, J. Sci. Comput., 51 (2012), pp. 582–607. B. Cockburn and W. Zhang, A posteriori error analysis for hybridizable discontinuous Galerkin methods for second order elliptic problems, SIAM J. Numer. Anal., 51 (2013), pp. 676–693. B. 
Cockburn and W. Zhang, An a posteriori error estimate for the variable-degree Raviart-Thomas method, Math. Comp., 83 (2014), pp. 1063–1082. , [Robust DPG method for convection-dominated diffusion problems]{}, SIAM J. Numer. Anal., 51 (2013), pp. 2514–2537. W. Eckhaus, Boundary layers in linear elliptic singular perturbation problems, SIAM Rev., 14 (1972), pp. 225–270. , [*A hybrid mixed discontinuous Galerkin finite-element method for convection–diffusion problems*]{}, IMA J. Numer. Anal., **30** (2010), pp. 1206–1234. K. Eriksson and C. Johnson, Adaptive streamline diffusion finite element methods for stationary convection-diffusion problems, Math. Comp., 60 (1993), pp. 167–188. A. Ern and A. Stephansen, A posteriori energy-norm error estimates for advection-diffusion equations approximated by weighted interior penalty methods, J. Comput. Math., 26 (2008), pp. 488–510. A. Ern, A. Stephansen, and M. Vohralík, Guaranteed and robust discontinuous Galerkin a posteriori error estimates for convection-diffusion-reaction problems, J. Comput. Appl. Math., 234 (2010), pp. 114–130. H. Goering, A. Felgenhauer, G. Lube, H.-G. Roos, and L. Tobiska, Singularly Perturbed Differential Equations, Akademie-Verlag, Berlin, 1983. J. Guzmán, Local analysis of discontinuous Galerkin methods applied to singularly perturbed problems, J. Numer. Math., 14 (2006), pp. 41–56. G. Fu, W. Qiu and W. Zhang, An analysis of HDG methods for convection-dominated diffusion problems, submitted, arXiv:1310.0887\[math.NA\] (2013). , [Discontinuous hp-finite element methods for advection-diffusion-reaction problems]{}, SIAM J. Numer. Anal., 39 (2002), pp. 2133–2163. , [A multiscale discontinuous Galerkin method with the computational structure of a continuous Galerkin method]{}, Comput. Methods Appl. Mech. Engrg., 195 (2006), pp. 2761–2787. O. A. Karakashian and F. Pascal, A posteriori error estimates for a discontinuous Galerkin approximation of a second order elliptic problems, SIAM J. Numer. 
Anal., 41 (2003), pp. 2374–2399. O. A. Karakashian and F. Pascal, Convergence of adaptive discontinuous Galerkin approximations of second-order elliptic problems, SIAM J. Numer. Anal., 45 (2007), pp. 641–665. R.M. Kirby, S.J. Sherwin, and B. Cockburn, To CG or to HDG: A comparative study, J. Sci. Comput., [**51**]{} (2012), pp. 183–212. P. Knobloch and L. Tobiska, On the stability of finite element discretizations of convection diffusion reaction equations, IMA J. Numer. Anal., 31 (2011), pp. 147–164. N.C. Nguyen, J. Peraire, and B. Cockburn, An implicit high-order hybridizable discontinuous Galerkin method for linear convection-diffusion equations, J. Comput. Phys., 288 (2009), pp. 3232–3254. W. Reed and T. Hill, Triangular mesh methods for the neutron transport equation, Technical Report LA- UR-73-479, Los Alamos Scientific Laboratory, 1973. , [Robust Numerical Methods for Singularly Perturbed Differential Equations]{}, volume 24 of Springer Series in Computational Mathematics, Springer-Verlag, Berlin, 2008, Second edition. G. Sangalli, Robust a posteriori estimator for advection-diffusion-reaction problems, Math. Comp., 77 (2008), pp. 41–70. D. Schötzau and L. Zhu, A robust a posteriori error estimator for discontinuous Galerkin methods for convection diffusion equations, Appl. Numer. Math., 59 (2009), pp. 2236–2255. R. Stevenson, Optimality of a standard adaptive finite element method, Foundations of Computional Mathematics, 2 (2007), pp. 245–269. , [Steady state convection diffusion problems]{}, Acta Numer., 14 (2005), pp. 445–508. L. Tobiska and R. Verfürth, Robust a posteriori error estimates for stabilized finite element methods, submitted, arXiv:1402.5892\[math.NA\], (2014). U. Nävert, A finite element method for convection-diffusion problems. PhD thesis, Department of Computer Science, Chalmers University of Technology, Göteborg, 1982. 
R.Verfürth, A Review of Posteriori Error Estimation and Adaptive Mesh-refinement Techniques, Wiley-Teubner, Chichester,1996. R.Verfürth, A posteriori error estimators for convection-diffusion equations, Numer. Math., 80 (1998), pp. 641–663. R.Verfürth, Robust a posteriori error estimates for stationary convection-diffusion equations, SIAM J.Numer. Anal., 43 (2005), pp. 1766–1782. M.Vohralík, A posteriori error estimates for lowest-order mixed finite element discretizations of convection-diffusion-reaction equations, SIAM J. Numer. Anal., 45 (2007), pp.1570–1599. M.Vohralík, Residual flux-based a posteriori error estimates for finite volume and related locally conservative methods, Numer. Math., 111 (2008), pp. 121–158. L. Zhu and D. Schötzau, A robust a posteriori error estimate for hp-adaptive DG methods for convection-diffusion equations, IMA J. Numer. Anal., 31 (2011), pp. 971–1005. L. Zhu, S. Giani, P. Houston, and D. Schötzau, Energy norm a posteriori error estimation for hp-adaptive discontinuous Galerkin methods for elliptic problems in three dimensions, Math. Models Methods Appl. Sci., 21 (2011), pp. 267–306. [^1]: The authors would also like to thank the associate editor and all the referees for constructive criticism leading to a better presentation of the material in this paper. The first author would like to thank the support from the City University of Hong Kong where this work was carried out during his visit, and he also thanks the supports from the NSF of China (Grant No. 11201394) and the NSF of Fujian Province (Grant No. 2013J05016). The work of the second author was supported by the NSF of China (Grant No. 11201453). The work of the third author was supported by the GRF of Hong Kong (Grant No. 9041980).
--- abstract: '[*Computability logic*]{} (CoL) is a recently introduced semantical platform and ambitious program for redeveloping logic as a formal theory of computability, as opposed to the formal theory of truth which logic has more traditionally been. Its expressions represent interactive computational tasks seen as games played by a machine against the environment, and “truth” is understood as existence of an algorithmic winning strategy. With logical operators standing for operations on games, the formalism of CoL is open-ended, and has already undergone a series of extensions. This article extends the expressive power of CoL in a qualitatively new way, generalizing [*formulas*]{} (to which the earlier languages of CoL were limited) to circuit-style structures termed [*cirquents*]{}. The latter, unlike formulas, are able to account for subgame/subtask sharing between different parts of the overall game/task. Among the many advantages offered by this ability is that it allows us to capture, refine and generalize the well-known [*independence-friendly logic*]{} which, after the present leap forward, naturally becomes a conservative fragment of CoL, just as classical logic had been known to be a conservative fragment of the formula-based version of CoL. Technically, this paper is self-contained, and can be read without any prior familiarity with CoL.'
author: - 'Giorgi Japaridze[^1]' title: From formulas to cirquents in computability logic --- [*MSC*]{}: primary: 03B47; secondary: 03F50; 03B70; 68Q10; 68T27; 68T30; 91A05 [*Keywords*]{}: Computability logic; Abstract resource semantics; Independence-friendly logic; Game semantics; Interactive computation Introduction {#sintr} ============ [*Computability logic*]{} (CoL), introduced in [@Jap03; @Japic; @Japfin], is a semantical platform and ambitious program for redeveloping logic as a formal theory of computability, as opposed to the formal theory of truth which logic has more traditionally been. Its expressions stand for interactive computational tasks seen as games played by a machine against the environment, and “truth” is understood as existence of an effective solution, i.e., of an algorithmic winning strategy. With this semantics, CoL provides a systematic answer to the fundamental question “[*what can be computed?*]{}”, just as classical logic is a systematic tool for telling what is true. Furthermore, as it turns out, in positive cases “[*what*]{} can be computed” always allows itself to be replaced by “[*how*]{} it can be computed”, which makes CoL of potential interest not only in theoretical computer science, but also in many other applied areas, including interactive knowledge base systems, resource-oriented systems for planning and action, and declarative programming languages. On the logical side, CoL promises to be an appealing, constructive and computationally meaningful alternative to classical logic as a basis for applied theories. The first concrete steps towards realizing this potential have been made very recently in [@Japtowards; @cla5], where CoL-based versions of Peano arithmetic were elaborated.
The system constructed in [@Japtowards] is an axiomatic theory of [*effectively solvable*]{} number-theoretic [*problems*]{} (just as the ordinary Peano arithmetic is an axiomatic theory of [*true*]{} number-theoretic [*facts*]{}); the system constructed in [@cla4] is an axiomatic theory of [*efficiently solvable*]{} (namely, solvable in polynomial time) number-theoretic [*problems*]{}; in the same style, [@cla5] constructs systems for polynomial space, elementary recursive, and primitive recursive computabilities. In all cases, a solution for a problem can be effectively — in fact, efficiently — extracted from a proof of it in the system, which reduces problem-solving to theorem-proving. The formalism of CoL is open-ended. It has already undergone a series of extensions ([@Japseq]-[@Japtogl]) through introducing logical operators for new, actually or potentially interesting, operations on games, and this process will probably still continue in the future. The present work is also devoted to expanding the expressive power of CoL, but in a very different way. Namely, while the earlier languages of CoL were limited to [*formulas*]{}, this paper makes a leap forward by generalizing formulas to circuit-style structures termed [*cirquents*]{}. These structures, in a very limited form (compared with their present form), were introduced earlier [@Cirq; @Japdeep] in the context of the new proof-theoretic approach called [*cirquent calculus*]{}. Cirquent-based formalisms have significant advantages over formula-based ones, including exponentially higher efficiency and substantially greater expressive power. Both [@Cirq] and [@Japdeep] pointed out the possibility and expediency of bringing cirquents into CoL. But a CoL-semantics for cirquents had never really been set up until now.
Unlike most of its predecessors, from the technical (albeit perhaps not philosophical or motivational) point of view, the present paper is written without assuming that the reader is already familiar with the basic concepts and techniques of computability logic. It is organized as follows. Section \[s2\] reintroduces the concept of games and interactive computability on which the semantics of CoL is based. A reader familiar with the basics of CoL may safely skip this section. Section \[s3\] introduces the simplest kind of cirquents, containing only the traditional ${\vee}$ and ${\wedge}$ sorts of gates (negation, applied directly to inputs, is also present). These are nothing but (possibly infinite) Boolean circuits in the usual sense, and the semantics of CoL for them coincides with the traditional, classical semantics. While such cirquents — when finite — do not offer higher expressiveness than formulas do, they [*do*]{} offer dramatically higher efficiency. This fact alone, in our days of logic being increasingly CS-oriented, provides sufficient motivation for considering a switch from formulas to cirquents in logic, even if we are only concerned with classical logic. The first steps in this direction have already been made in [@Japdeep], where a cirquent-based sound and complete deductive system was set up for classical logic. That system was shown to provide an exponential speedup of proofs over its formula-based counterparts. Each of the subsequent Sections \[s4\]-\[s8\] conservatively generalizes the cirquents and the semantics of the preceding sections. 
Section \[s4\] strengthens the expressiveness of cirquents by allowing new, so-called [*selectional*]{}, sorts of gates, with the latter coming in three — [*choice*]{} ${\hspace{0pt}\sqcup},{\hspace{0pt}\sqcap}$, [*sequential*]{} ${\mbox{\hspace{2pt}\small \raisebox{0.06cm}{$\bigtriangledown$}\hspace{2pt}}}\hspace{-2pt},\hspace{-2pt}{\mbox{\hspace{2pt}\small \raisebox{0.0cm}{$\bigtriangleup$}\hspace{2pt}}}$ and [*toggling*]{} ${\mbox{\hspace{2pt}$\vee$\hspace{-1.29mm}\raisebox{0.1mm}{\rule{0.13mm}{2mm}}\hspace{5pt}}}\hspace{-2pt},\hspace{-2pt}{\mbox{\hspace{2pt}$\wedge$\hspace{-1.29mm}\raisebox{0.02mm}{\rule{0.13mm}{2mm}}\hspace{5pt}}}$ — flavors of disjunction and conjunction. Unlike ${\vee}$ and ${\wedge}$ which stand for parallel combinations of (sub)tasks, selectional gates model decision steps in the process of interaction of the machine with its environment. Cirquents with such gates, allowing us to account for the possibility of [*sharing*]{} nontrivial subgames/subtasks, have never been considered in the past. They, even when finite, are more expressive (let alone efficient) than formulas with selectional connectives. Section \[s5\] introduces the idea of [*clustering*]{} selectional gates. Clusters are, in fact, generalized gates — switch-style devices that allow one to select one of several $n$-tuples of inputs and connect them, in a parallel fashion, with the $n$-tuple of outputs, with ordinary gates being nothing but the special cases of clusters where $n=1$. Clustering makes it possible to express a new kind of sharing, which can be characterized as “sharing children without sharing grandchildren” — that is, sharing decisions (associated with child gates) without sharing the results of such decisions (the children of those gates). The ability to account for this sort of sharing yields a further qualitative increase in the expressiveness of the associated formalism.
Sections \[s6\] and \[s7\] extend clustering from selectional gates to the traditional sorts ${\vee},{\wedge}$ of gates. It turns out that the resulting cirquents — even without selectional gates — are expressive enough to fully capture the well-known and extensively studied [*independence-friendly (IF) logic*]{} introduced by Hintikka and Sandu [@HS97]. At the same time, cirquents with clustered ${\vee},{\wedge}$-gates yield substantially higher expressiveness than IF logic does. Due to this fact, they overcome a series of problems emerging within the earlier known approaches to IF logic. One such problem is the inability of the traditional formalisms of IF logic to account for independence from conjunctions and disjunctions in the same systematic way as they account for independence from quantifiers. Correspondingly, attempts to develop IF logic at the purely propositional level have not been able to go beyond certain very limited and special syntactic fragments of the language. In contrast, our approach preserves classical logic’s pleasant phenomenon of quantifiers being nothing but “long” conjunctions and disjunctions, so that one can do with the latter everything that can be done with the former, and vice versa. As a result, we can now (at last) meaningfully talk about [*propositional IF logic*]{} in the proper sense without any unsettling syntactic restrictions. Another problem arising with IF logic is the “unplayability” (cf. [@Stev09]) of the incomplete-information games traditionally associated with its formulas, as such games violate certain natural game-theoretic principles such as [*perfect recall*]{}. Attempts to associate reasonable game-theoretic intuitions with imperfect-information games typically have to resort to viewing the two parties not as individual players but rather as teams of cooperating but non-communicating players. This approach, however, may often get messy, and tends to give rise to a series of new problems.
Väänänen [@Vaa02] managed to construct a semantics for IF logic based on perfect-information games. This, however, made games unplayable for a new reason: moves in Väänänen’s games are second-order objects and hence are “unmakable”. Our approach avoids the need to deal with imperfect-information, second-order-move, or multiple-player games altogether. Section \[s7\] also introduces a further generalization of cirquents through what is termed [*ranking*]{}. Cirquents with ranking (and with only ${\vee},{\wedge}$ gates) allow us to further capture the so-called [*extended IF logic*]{} (cf. [@Tul09]) but, again, are substantially more expressive than the latter. They also overcome one notable problem arising in extended IF logic, which is the (weak) negation’s not being able to act as an ordinary connective that can be meaningfully applied anywhere within a formula. Section \[s8\] extends the formalism of cirquents by allowing additional sorts of inputs termed [*general*]{}, as opposed to the [*elementary*]{} inputs to which the cirquents of the earlier sections are limited. Unlike elementary inputs that are interpreted just as ${\top}$ (“true”) or ${\bot}$ (“false”), general inputs stand for any, possibly nontrivial, games. This way, cirquents become and can be viewed as (complex) [*operations*]{} on games, only very few of which are expressible through formulas. Section \[s9\] sets up an alternative semantics for cirquents termed [*abstract resource semantics*]{} (ARS). It is a companion and faithful technical “assistant” of CoL. ARS also has good claims to be a materialization and generalization of the resource intuitions traditionally — but somewhat wrongly — associated with linear logic and its variations. Unlike CoL, ARS has already met cirquents in the past, namely, in [@Cirq; @Japdeep].
The latter, however, unlike the present paper, elaborated ARS only for a very limited class of cirquents — cirquents with just ${\vee},{\wedge}$ gates and without clustering or ranking. Section \[s10\] proves that CoL and ARS validate exactly the same cirquents. Among the expected applications of this theorem is facilitating soundness/completeness proofs for various deductive systems for (fragments of) CoL, as technically it is by an order of magnitude easier to deal with (the simple and “naive”) ARS than to deal with (the complex and “serious”) CoL directly. Games {#s2} ===== As noted, computability logic understands interactive computational problems as games played between two players: [*machine*]{} and [*environment*]{}. The symbolic names for these two players are ${\top}$ and ${\bot}$, respectively. ${\top}$ is a deterministic mechanical device (thus) only capable of following algorithmic strategies, whereas there are no restrictions on the behavior of ${\bot}$, which represents a capricious user, the blind forces of nature, or the devil himself. Our sympathies are with ${\top}$, and by just saying “won” or “lost” without specifying a player, we always mean won or lost by ${\top}$. ${\wp}$ is always a variable ranging over $\{{\top},{\bot}\}$, with ${\neg}{\wp}$ meaning ${\wp}$’s adversary, i.e. the player which is not ${\wp}$. Before getting to a formal definition of games, we agree that a [**move**]{} is always a finite string over the standard keyboard alphabet. A [**labeled move**]{} ([**labmove**]{}) is a move prefixed with ${\top}$ or ${\bot}$, with its prefix ([**label**]{}) indicating which player has made the move. A [**run**]{} is a (finite or infinite) sequence of labmoves, and a [**position**]{} is a finite run. Runs will often be delimited by “$\langle$” and “$\rangle$”, with ${\langle\rangle}$ thus denoting the [**empty run**]{}.
The following is a formal definition of the concept of a game, combined with some less formal conventions regarding the usage of certain terminology. It should be noted that the concept of a game considered in CoL is more general than the one defined below, with games in our present sense called [*constant games*]{}. Since we (for simplicity) only consider constant games in this paper, we omit the word “constant” and just say “game”. \[game\] A [**game**]{} is a pair $A=({\mbox{\bf Lr}^{A}_{}},{\mbox{\bf Wn}^{A}_{}})$, where: 1\. ${\mbox{\bf Lr}^{A}_{}}$ is a set of runs satisfying the condition that a finite or infinite run is in ${\mbox{\bf Lr}^{A}_{}}$ iff all of its nonempty finite — not necessarily proper — initial segments are in ${\mbox{\bf Lr}^{A}_{}}$ (notice that this implies ${\langle\rangle}\in{\mbox{\bf Lr}^{A}_{}}$). The elements of ${\mbox{\bf Lr}^{A}_{}}$ are said to be [**legal runs**]{} of $A$, and all other runs are said to be [**illegal runs**]{} of $A$. We say that $\alpha$ is a [**legal move**]{} for ${\wp}$ in a position $\Phi$ of $A$ iff ${\langle \Phi,{\wp}\alpha \rangle}\in{\mbox{\bf Lr}^{A}_{}}$; otherwise $\alpha$ is an [**illegal move**]{}. When the last move of the shortest illegal initial segment of $\Gamma$ is ${\wp}$-labeled, we say that $\Gamma$ is a [**${\wp}$-illegal run**]{} of $A$. 2\. ${\mbox{\bf Wn}^{A}_{}}$ is a function that sends every run $\Gamma$ to one of the players ${\top}$ or ${\bot}$, satisfying the condition that if $\Gamma$ is a ${\wp}$-illegal run of $A$, then ${\mbox{\bf Wn}^{A}_{}}{\langle \Gamma \rangle}={\neg}{\wp}$.[^2] When ${\mbox{\bf Wn}^{A}_{}}{\langle \Gamma \rangle}={\wp}$, we say that $\Gamma$ is a [**${\wp}$-won**]{} (or [**won by ${\wp}$**]{}) run of $A$; otherwise $\Gamma$ is [**lost by ${\wp}$**]{}. Thus, an illegal run is always lost by the player who has made the first illegal move in it. 
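For a reader who prefers executable notation, the definition above (restricted to finite runs) can be prototyped directly. The following Python sketch is purely illustrative and not part of the paper's formalism; the class and attribute names, and the ASCII labels `"T"`/`"B"` standing for ${\top}$ and ${\bot}$, are our own assumptions. A game is given by a legality predicate on positions and a winner function on legal runs; clause 1 (prefix-closure) and clause 2 (an illegal run is lost by the first offender) are built in.

```python
T, B = "T", "B"  # ASCII stand-ins for the players/labels ⊤ (machine) and ⊥ (environment)

class Game:
    """A constant game as a pair (Lr, Wn), following the definition above."""

    def __init__(self, legal_position, winner):
        # legal_position: predicate on positions (finite runs, tuples of labmoves);
        # winner: maps a fully legal run to T or B.
        self._legal_position = legal_position
        self._winner = winner

    def is_legal(self, run):
        # Clause 1: a finite run is legal iff all of its nonempty
        # initial segments are legal positions (prefix-closure).
        return all(self._legal_position(run[:i]) for i in range(1, len(run) + 1))

    def _offender(self, run):
        # The label of the player who made the first illegal move, if any.
        for i in range(1, len(run) + 1):
            if not self._legal_position(run[:i]):
                return run[i - 1][0]
        return None

    def wins(self, run):
        # Clause 2: a p-illegal run is automatically won by p's adversary.
        p = self._offender(run)
        if p is not None:
            return B if p == T else T
        return self._winner(run)

# Toy game (ours): the only legal runs are <> and <⊤a>; ⊤ wins iff it makes move "a".
toy = Game(
    legal_position=lambda pos: len(pos) <= 1 and all(m == (T, "a") for m in pos),
    winner=lambda run: T if run == ((T, "a"),) else B,
)
```

Note how the toy game realizes the convention stated at the end of the definition: if ${\bot}$ makes any (necessarily illegal) move, the run is ${\bot}$-illegal and hence won by ${\top}$, regardless of anything else.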
It is clear from the above definition that, when defining a particular game $A$, it would be sufficient to specify what [*positions*]{} (finite runs) are legal, and what [*legal runs*]{} are won by ${\top}$. Such a definition will then uniquely extend to all — including infinite and illegal — runs. We will implicitly rely on this observation in the sequel. A game is said to be [**elementary**]{} iff it has no legal runs other than (the always legal) empty run ${\langle\rangle}$. That is, an elementary game is a “game” without any (legal) moves, automatically won or lost. There are exactly two such games, for which we use the same symbols ${\top}$ and ${\bot}$ as for the two players: the game ${\top}$ automatically won by player ${\top}$, and the game ${\bot}$ automatically won by player ${\bot}$.[^3] Computability logic is a conservative extension of classical logic, understanding classical propositions as elementary games. And, just as classical logic, it sees no difference between any two true propositions such as “$0= 0$” and “[*Snow is white*]{}”, and identifies them with the elementary game ${\top}$; similarly, it treats false propositions such as “$0= 1$” or “[*Snow is black*]{}” as the elementary game ${\bot}$. The [*negation*]{} ${\neg}A$ of a game $A$ is understood as the game obtained from $A$ by interchanging the roles of the two players, i.e., making ${\top}$’s (legal) moves and wins ${\bot}$’s moves and wins, and vice versa. Precisely, let us agree that, for a run $\Gamma$, ${\neg}\Gamma$ means the result of changing in $\Gamma$ each label ${\top}$ to ${\bot}$ and vice versa. Then: \[negdef\] The [**negation**]{} ${\neg}A$ of a game $A$ is defined by stipulating that, for any run $\Gamma$, - $\Gamma\in{\mbox{\bf Lr}^{{\neg}A}_{}}$ iff ${\neg}\Gamma\in {\mbox{\bf Lr}^{A}_{}}$; - ${\mbox{\bf Wn}^{{\neg}A}_{}}{\langle \Gamma \rangle}={\top}$ iff ${\mbox{\bf Wn}^{A}_{}}{\langle {\neg}\Gamma \rangle}={\bot}$.
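The definition of negation is easy to see in executable form. The sketch below is our own illustration (names and the `"T"`/`"B"` label encoding are assumptions, not the paper's notation): ${\neg}\Gamma$ flips every label in a run, and ${\neg}A$ is obtained by pre-composing both components ${\mbox{\bf Lr}^{A}_{}}$ and ${\mbox{\bf Wn}^{A}_{}}$ with this flip and interchanging the winner.

```python
T, B = "T", "B"  # ASCII stand-ins for the labels ⊤ and ⊥

def neg_run(run):
    # ¬Γ: the result of changing in Γ each label ⊤ to ⊥ and vice versa.
    return tuple((B if label == T else T, move) for (label, move) in run)

def neg_game(lr, wn):
    # Given a game as a pair (lr, wn) — a legality predicate on runs and a
    # winner function — build ¬A per the definition above:
    #   Γ is legal for ¬A  iff  ¬Γ is legal for A;
    #   ⊤ wins Γ in ¬A     iff  ⊥ wins ¬Γ in A.
    neg_lr = lambda run: lr(neg_run(run))
    neg_wn = lambda run: T if wn(neg_run(run)) == B else B
    return neg_lr, neg_wn
```

As expected, `neg_run` is an involution (`neg_run(neg_run(g)) == g`), which mirrors the fact that ${\neg}{\neg}A = A$.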
Obviously the negation of an elementary game is also elementary. Generally, when applied to elementary games, the meaning of ${\neg}$ fully coincides with its classical meaning. So, ${\neg}$ is a conservative generalization of classical negation from elementary games to all games. Note the relaxed nature of our games. They do not impose any regulations on when either player can or should move. This is entirely up to the players. Even if we assume that illegal moves physically cannot be made, it is still possible that in certain (or all) positions both of the players have legal moves, and then the next move will be made (if made at all) by the player who wants or can act sooner. This brings us to the next question to clarify: how are our games really played, and what does a [*strategy*]{} mean here? In traditional game-semantical approaches, including those of Lorenzen [@Lor61], Hintikka [@Hintikka73] or Blass [@Bla92], player’s strategies are understood as [*functions*]{} — typically as functions from interaction histories (positions) to moves, or sometimes (Abramsky and Jagadeesan [@Abr94]) as functions that only look at the latest move of the history. This [*strategies-as-functions*]{} approach, however, is inapplicable in the context of computability logic, whose relaxed semantics, in striving to get rid of any “bureaucratic pollutants” and only deal with the remaining true essence of games, has no structural rules and thus does not regulate the order of moves. As noted, here often either player may have (legal) moves, and then it is unclear whether the next move should be the one prescribed by ${\top}$’s strategy function or the one prescribed by the strategy function of ${\bot}$. In fact, for a game semantics whose ambition is to provide a comprehensive, natural and direct tool for modeling interactive computations, the strategies-as-functions approach would be less than adequate, even if technically possible. 
This is so for the simple reason that the strategies that real computers follow are not functions. If the strategy of your personal computer was a function from the history of interaction with you, then its performance would keep noticeably worsening due to the need to read the continuously lengthening — and, in fact, practically infinite — interaction history every time before responding. Fully ignoring that history and looking only at your latest keystroke in the spirit of [@Abr94] is also certainly not what your computer does, either. The inadequacy of the strategies-as-functions approach becomes especially evident when one tries to bring computational complexity issues into interactive computation, the next natural target towards which CoL has already started making its first steps ([@lbcs; @cla4; @cla5]). In computability logic, (${\top}$’s effective) strategies are defined in terms of interactive machines, where computation is one continuous process interspersed with — and influenced by — multiple “input” (environment’s moves) and “output” (machine’s moves) events. Of several, seemingly rather different yet equivalent, machine models of interactive computation studied in CoL, here we will consider the most basic, [**HPM**]{} (“Hard-Play Machine”) model. An HPM is nothing but a Turing machine with the additional capability of making moves. The adversary can also move at any time, with such moves being the only nondeterministic events from the machine’s perspective. Along with the ordinary read/write [*work tape*]{}, the machine also has an additional tape[^4] called the [*run tape*]{}. The latter, at any time, spells the “current position” of the play. The role of this tape is to make the interaction history fully visible to the machine. It is read-only, and its content is automatically updated every time either player makes a move. 
In these terms, an [**algorithmic solution**]{} (${\top}$’s [**winning strategy**]{}) for a given game $A$ is understood as an HPM $\cal M$ such that, no matter how the environment acts during its interaction with $\cal M$ (what moves it makes and when), the run incrementally spelled on the run tape is a ${\top}$-won run of $A$. When this is the case, we say that ${\cal M}$ [**wins**]{}, or [**solves**]{}, $A$, and that $A$ is a [**computable**]{}, or [**algorithmically solvable**]{}, game. As for ${\bot}$’s strategies, there is no need to define them: all possible behaviors by ${\bot}$ are accounted for by the different possible nondeterministic updates of the run tape of an HPM. In the above outline, we described HPMs in a relaxed fashion, without being specific about technical details such as, say, how, exactly, moves are made by the machine, how many moves either player can make at once, what happens if both players attempt to move “simultaneously”, etc. As it turns out, all reasonable design choices yield the same class of winnable games as long as we consider a certain natural subclass of games called [*static*]{}. Intuitively, static games are interactive tasks where the relative speeds of the players are irrelevant, as it never hurts a player to postpone making moves. In other words, they are games that are contests of intellect rather than contests of speed. And one of the theses that computability logic philosophically relies on is that static games present an adequate formal counterpart of our intuitive concept of “pure”, speed-independent interactive computational problems. Correspondingly, computability logic restricts its attention (more specifically, possible interpretations of the atoms of its formal language) to static games. Below comes a formal definition of this concept. 
For either player ${\wp}$, we say that a run $\Upsilon$ is a [**${\wp}$-delay**]{} of a run $\Gamma$ iff: - for both players ${\wp}'\in\{{\top},{\bot}\}$, the subsequence of ${\wp}'$-labeled moves of $\Upsilon$ is the same as that of $\Gamma$, and - for any $n,k\geq 1$, if the $n$th ${\wp}$-labeled move is made later than (is to the right of) the $k$th ${\neg}{\wp}$-labeled move in $\Gamma$, then so is it in $\Upsilon$. The above conditions mean that in $\Upsilon$ each player has made the same sequence of moves as in $\Gamma$, only, in $\Upsilon$, ${\wp}$ might have been acting with some delay. Let us say that a run is [**${\wp}$-legal**]{} iff it is not ${\wp}$-illegal. That is, a ${\wp}$-legal run is either simply legal, or the player responsible for (first) making it illegal is ${\neg}{\wp}$ rather than ${\wp}$. Now, we say that a game $A$ is [**static**]{} iff, whenever a run $\Upsilon$ is a ${\wp}$-delay of a run $\Gamma$, we have: - if $\Gamma$ is a ${\wp}$-legal run of $A$, then so is $\Upsilon$;[^5] - if $\Gamma$ is a ${\wp}$-won run of $A$, then so is $\Upsilon$. The class of static games is closed under all game operations studied in CoL, and all games that we shall see in this paper are static. Throughout this paper, we use the term “[**computational problem**]{}”, or simply “[**problem**]{}”, as a synonym of “static game”. The simplest example of a non-static game would be the game where all moves are legal, and which is won by the player who moves first. That is, a player’s chances of success depend only on its relative speed. Such a game hardly represents any meaningful computational problem. The simplest kind of cirquents {#s3} ============================== We fix some infinite collection of finite alphanumeric expressions called [**atoms**]{}, and use $p$, $q$, $r$, $s$, $p_1$, $p_2$, $p(3,6)$, $q_7(1,1,8)$, … as metavariables for them. If $p$ is an atom, then the expressions $p$ and ${\neg}p$ are said to be [**literals**]{}.
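Returning briefly to the ${\wp}$-delay relation defined before the section break: for finite runs it is fully algorithmic, and a direct transcription may help fix the intuition. The Python sketch below is our own illustration (all names are assumptions). Condition (b) reduces to comparing, for each $n$, how many ${\neg}{\wp}$-moves precede the $n$th ${\wp}$-move in each run: in the delayed run this count may only grow.

```python
T, B = "T", "B"  # ASCII stand-ins for the labels ⊤ and ⊥

def subseq(run, label):
    # The sequence of moves carried by label-labeled labmoves of the run.
    return [m for (l, m) in run if l == label]

def preceding_counts(run, p):
    # For each p-labeled move (in order of occurrence), the number of
    # ¬p-labeled moves occurring before it in the run.
    counts, others = [], 0
    for (l, _) in run:
        if l == p:
            counts.append(others)
        else:
            others += 1
    return counts

def is_delay(upsilon, gamma, p):
    # True iff upsilon is a p-delay of gamma:
    #  (1) both players make the same sequences of moves in both runs;
    #  (2) p's moves in upsilon come no earlier, relative to ¬p's moves,
    #      than they do in gamma.
    if subseq(upsilon, T) != subseq(gamma, T) or subseq(upsilon, B) != subseq(gamma, B):
        return False
    return all(cu >= cg for cu, cg in
               zip(preceding_counts(upsilon, p), preceding_counts(gamma, p)))
```

For instance, $\langle {\bot}b, {\top}a\rangle$ is a ${\top}$-delay of $\langle {\top}a, {\bot}b\rangle$ (here ${\top}$ postponed its move past ${\bot}$'s), but not vice versa.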
Let us agree that, in this section, by a [**graph**]{} we mean a directed acyclic multigraph with countably many nodes and edges, where the outgoing edges of each node are arranged in a fixed left-to-right order (edges $\#1$, $\#2$, etc.), and where each node is labeled with either a literal or ${\vee}$ or ${\wedge}$. Since the sets of nodes and edges are countable, we assume that they are always subsets of $\{1,2,3,\ldots\}$. For a node $n$ of the graph, the string representing $n$ in the standard decimal notation is said to be the [**ID number**]{}, or just the [**ID**]{}, of the node. Similarly for edges. The nodes labeled with ${\wedge}$ or ${\vee}$ we call [**gates**]{}, and the nodes labeled with literals we call [**ports**]{}. Specifically, a node labeled with a literal $L$ is said to be an [**$L$-port**]{}; a ${\wedge}$-labeled node is said to be a [**${\wedge}$-gate**]{}; and a ${\vee}$-labeled node is said to be a [**${\vee}$-gate**]{}. When there is an edge from a node $a$ to a node $b$, we say that $b$ is a [**child**]{} of $a$ and $a$ is a [**parent**]{} of $b$. The relations “[**descendant**]{}” and “[**ancestor**]{}” are the transitive closures of the relations “child” and “parent”, respectively. The meanings of some other standard relations such as “grandchild”, “grandparent”, etc. should also be clear. The [**outdegree**]{} of a node of a graph is the quantity of outgoing edges of that node, which can be finite or infinite. Since there are only countably many edges, any two infinite outdegrees are equal. Similarly, the [**indegree**]{} of a node is the quantity of the incoming edges of that node. We say that a graph is [**well-founded**]{} iff there is no infinite sequence $a_1,a_2,a_3,\ldots$ of nodes where each $a_i$ is a parent of $a_{i+1}$. Of course, any (directed acyclic) graph with finitely many nodes is well-founded.
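For finite graphs, well-foundedness is exactly acyclicity, and hence mechanically checkable. The following is a minimal illustrative sketch of ours (names and the adjacency-list representation are assumptions), using depth-first search to detect a cycle, i.e., an infinite parent-to-child chain:

```python
def is_well_founded(children):
    # children: node -> ordered list of child nodes (a finite multigraph;
    # parallel edges simply repeat a child in the list).
    nodes = set(children)
    for cs in children.values():
        nodes.update(cs)
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current DFS path / done
    color = {n: WHITE for n in nodes}

    def dfs(n):
        color[n] = GRAY
        for c in children.get(n, []):
            if color[c] == GRAY:
                return False  # back edge: a cycle, hence an infinite chain
            if color[c] == WHITE and not dfs(c):
                return False
        color[n] = BLACK
        return True

    return all(color[n] != WHITE or dfs(n) for n in nodes)
```

(For infinite graphs the property is of course not decidable in general; this check only illustrates the finite case mentioned in the last sentence above.)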
We say that a graph is [**effective**]{} iff so are the following basic predicates and partial functions characterizing it: “[*$x$ is a node*]{}”, “[*$x$ is an edge*]{}”, “[*the label of node $x$*]{}”, “[*the outdegree of node $x$*]{}”, “[*the $y$’th outgoing edge of node $x$*]{}”, “[*the origin of edge $x$*]{}”, “[*the destination of edge $x$*]{}”. \[may5a\] A [**cirquent**]{} is an effective, well-founded graph satisfying the following two conditions: 1. Ports have no children. 2. There is a node, called the [**root**]{}, which is an ancestor of all other nodes in the graph. We say that a cirquent is [**finite**]{} iff it has only finitely many edges (and hence nodes); otherwise it is [**infinite**]{}. We say that a cirquent is [**tree-like**]{} iff the indegree of each of its non-root nodes is $1$. Graphically, we represent ports through the corresponding literals, ${\vee}$-gates through ${\vee}$-inscribed circles, and ${\wedge}$-gates through ${\wedge}$-inscribed circles. We agree that the direction of an edge is always upward, which allows us to draw lines rather than arrows for edges. It is understood that the official order of the outgoing edges of a gate coincides with the (left to right) order in which the edges are drawn. Also, typically we do not indicate the IDs of nodes unless necessary. In most cases, what particular IDs are assigned to the nodes of a cirquent is irrelevant and such an assignment can be chosen arbitrarily. Similarly, edge IDs are usually irrelevant, and they will never be indicated. Below are a few examples of cirquents. Note that cirquents may contain parallel edges, as we agreed that “graph” means “multigraph”. Note also that not only ports but also gates can be childless.
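Before turning to the examples, the finite case of Definition \[may5a\] can be sketched as a small data structure. The representation below is our own illustration (class and field names are assumptions, and well-foundedness/acyclicity of the input is taken for granted): node labels plus ordered child lists, with a check for the two defining conditions.

```python
class Cirquent:
    """A finite cirquent: a labeled DAG with ordered (possibly parallel) edges."""

    def __init__(self, labels, children):
        # labels: node id -> "or" | "and" | a literal string (e.g. "p", "~p")
        # children: node id -> ordered list of child ids (repeats = parallel edges)
        self.labels = labels
        self.children = children

    def is_port(self, n):
        # Ports carry literals; gates carry "or"/"and".
        return self.labels[n] not in ("or", "and")

    def validate(self):
        # Condition 1: ports have no children.
        for n in self.labels:
            if self.is_port(n) and self.children.get(n):
                return False
        # Condition 2: some node (the root) is an ancestor of all other nodes.
        for root in self.labels:
            seen, stack = {root}, [root]
            while stack:
                for c in self.children.get(stack.pop(), []):
                    if c not in seen:
                        seen.add(c)
                        stack.append(c)
            if seen == set(self.labels):
                return True
        return False

# A small example: a ∧-gate (the root) over two ∨-gates and two directly
# attached ports, with the port nodes 4 and 7 shared between parents.
c = Cirquent(
    labels={1: "and", 2: "or", 3: "or", 4: "~p", 5: "p", 6: "~p", 7: "p"},
    children={1: [4, 2, 3, 7], 2: [4, 5], 3: [6, 7]},
)
```

Sharing of ports between different parents, as in this example, is precisely what distinguishes (non-tree-like) cirquents from formulas.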
[**Figure 1:**]{} Finite cirquents. (Diagrams omitted in this rendering. The left cirquent is the tree whose root ${\wedge}$-gate has four children, in left-to-right order: a ${\neg}p$-port, a ${\vee}$-gate with a ${\neg}p$-port and a $p$-port as children, another such ${\vee}$-gate, and a $p$-port. The right cirquent compresses the left one: its root ${\wedge}$-gate has a ${\neg}p$-port and a $p$-port as children and, via two parallel edges, a single shared ${\vee}$-gate whose children are a ${\neg}p$-port and a $p$-port.)

[**Figure 2:**]{} Infinite cirquents. (Diagrams omitted in this rendering. The left cirquent is the tree whose root ${\wedge}$-gate has infinitely many ${\vee}$-gate children, the $i$th of which has the ports $p(i,1),p(i,2),\ldots$ as children. The right cirquent is another infinite cirquent, built from ${\vee}$-gates over the ports $p_1,p_2,p_3,\ldots$, with some of its nodes shared between different parents.)

By an [**interpretation**]{}[^6] in this and the following few sections we mean a function $^*$ that sends each atom $p$
to one of the values (elementary games) $p^*\in\{{\top},{\bot}\}$. It immediately extends to a mapping from all literals to $\{{\top},{\bot}\}$ by stipulating that $({\neg}p)^*={\neg}(p^*)$; that is, $^*$ sends ${\neg}p$ to ${\top}$ iff it sends $p$ to ${\bot}$. Each interpretation $^*$ induces the predicate of [*truth*]{} with respect to (w.r.t.) $^*$ for cirquents and their nodes, as defined below. This definition, as well as similar definitions given later, silently but essentially relies on the fact that the graphs that we consider are well-founded. \[may14b\] Let $C$ be a cirquent and $^*$ an interpretation. With “port” and “gate” below meaning those of $C$, and “true” to be read as “[**true w.r.t. $^*$**]{}”, we say that: - An $L$-port is true iff $L^*={\top}$ (any literal $L$). - A ${\vee}$-gate is true iff so is at least one of its children (thus, a childless ${\vee}$-gate is never true). - A ${\wedge}$-gate is true iff so are all of its children (thus, a childless ${\wedge}$-gate is always true). Finally, we say that the cirquent $C$ is true iff so is its root. \[may14bb\] Let $C$ be a cirquent and $^*$ an interpretation. We define $C^*$ to be the elementary game the (only) legal run ${\langle\rangle}$ of which is won by ${\top}$ iff $C$ is true w.r.t. $^*$. Thus, every interpretation $^*$ “interprets” a cirquent $C$ as one of the elementary games ${\top}$ or ${\bot}$. This will not necessarily be the case for the more general sorts of cirquents introduced in later sections though. We say that two cirquents $C_1$ and $C_2$ are [**extensionally identical**]{} iff, for every interpretation $^*$, $C_{1}^{*}=C_{2}^{*}$. For instance, the two cirquents of Figure 1 are extensionally identical. Occasionally we may say “[*equivalent*]{}” instead of “extensionally identical”, even though one should remember that equivalence often (in other pieces of literature) may mean something weaker than extensional identity. 
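To see the truth definition in action, here is a minimal Python sketch. The encoding of finite cirquents as dictionaries mapping a node to its label and children is our own assumption, made purely for illustration; labels are `'and'`, `'or'`, or literal strings such as `'p'` and `'~p'`.

```python
# A minimal sketch of the truth definition above (the dict encoding of
# finite cirquents, node -> (label, children), is our own assumption).

def true_node(cirquent, node, interp):
    label, children = cirquent[node]
    if label == 'or':                 # a childless 'or'-gate is never true
        return any(true_node(cirquent, c, interp) for c in children)
    if label == 'and':                # a childless 'and'-gate is always true
        return all(true_node(cirquent, c, interp) for c in children)
    if label.startswith('~'):
        return not interp[label[1:]]  # negative literal
    return interp[label]              # positive literal

def true_cirquent(cirquent, root, interp):
    return true_node(cirquent, root, interp)

# The two cirquents of Figure 1; node 3 of `right` is shared, being a child
# of the root through two parallel edges.
left = {
    1: ('and', [2, 3, 4, 5]),
    2: ('~p', []), 3: ('or', [6, 7]), 4: ('or', [8, 9]), 5: ('p', []),
    6: ('~p', []), 7: ('p', []), 8: ('~p', []), 9: ('p', []),
}
right = {
    1: ('and', [2, 3, 3, 4]),
    2: ('~p', []), 3: ('or', [5, 6]), 4: ('p', []),
    5: ('~p', []), 6: ('p', []),
}
# Extensional identity: equal truth values under every interpretation.
assert all(true_cirquent(left, 1, {'p': v}) == true_cirquent(right, 1, {'p': v})
           for v in (False, True))
```

Well-foundedness is what makes the recursion terminate; for infinite cirquents the definition still makes sense, but a direct evaluation of this sort is of course no longer available.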
It should be pointed out that the above definition of extensional identity applies not only to cirquents in the sense of the present section. It extends, without any changes in phrasing, to cirquents in the more general sense of any of the subsequent sections of this paper as well. Finally, we say that a cirquent $C$ is [**valid**]{} iff, for any interpretation $^*$, $C^*={\top}$. Obviously the semantics that we have defined in this section is nothing but the good old semantics of classical logic. Computability logic fully agrees with and adopts the latter. This is exactly what makes computability logic a conservative extension of classical logic. Let us agree that, whenever we speak about formulas of classical logic, they are assumed to be written in [*negation normal form*]{}, that is, in the form where the negation symbol ${\neg}$ is only applied to atoms. If an expression violates this condition, it is to be understood just as a standard abbreviation. Similarly, if we write $E{\rightarrow}F$, it is to be understood as an abbreviation of ${\neg}E{\vee}F$. Also, slightly deviating from the tradition, we allow any finite number of arguments for conjunctions and disjunctions in classical formulas. The symbol ${\top}$ will be understood as an abbreviation of the empty conjunction, ${\bot}$ as an abbreviation of the empty disjunction, the expression ${\wedge}\{E\}$ will be used for the conjunction whose single conjunct is $E$, and similarly for ${\vee}\{E\}$. More generally, for any $n\geq 0$, ${\wedge}\{E_1,\ldots,E_n\}$ can (but not necessarily will) be written instead of $E_1{\wedge}\ldots{\wedge}E_n$, and similarly for ${\vee}\{E_1,\ldots,E_n\}$. Every formula of classical propositional logic then can and will be seen as a finite tree-like cirquent, namely, the cirquent which is nothing but the parse tree for that formula.
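The negation normal form convention is easy to mechanize. Below is a small Python sketch of the standard rewriting; the tuple encoding of formulas (atoms as strings, `('not', F)`, and variadic `('and', ...)` / `('or', ...)`) is our own assumption for illustration.

```python
# nnf() pushes 'not' down to atoms via double negation and De Morgan.
# Negating the empty conjunction yields the empty disjunction and vice
# versa, matching the reading of Top and Bot as the empty 'and' and 'or'.

def nnf(f):
    if isinstance(f, str):
        return f                                   # an atom: already in NNF
    op = f[0]
    if op in ('and', 'or'):
        return (op,) + tuple(nnf(g) for g in f[1:])
    g = f[1]                                       # here op == 'not'
    if isinstance(g, str):
        return ('not', g)                          # negated atom stays put
    if g[0] == 'not':
        return nnf(g[1])                           # double negation
    dual = 'or' if g[0] == 'and' else 'and'        # De Morgan
    return (dual,) + tuple(nnf(('not', h)) for h in g[1:])

assert nnf(('not', ('and', 'p', ('not', 'q')))) == ('or', ('not', 'p'), 'q')
assert nnf(('not', ('and',))) == ('or',)           # negated Top is Bot
```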
For instance, we shall understand the formula ${\neg}p{\wedge}({\neg}p{\vee}p){\wedge}({\neg}p{\vee}p){\wedge}p$ as the left cirquent of Figure 1. Every finite — not necessarily tree-like — cirquent can also be translated into an equivalent formula of classical propositional logic. This can be done by first turning the cirquent into an extensionally identical tree-like cirquent by duplicating and separating shared nodes, and then writing the formula whose parse tree such a cirquent is. For instance, applying this procedure to the right cirquent of Figure 1 turns it into the left cirquent of the same figure and thus the formula ${\neg}p{\wedge}({\neg}p{\vee}p){\wedge}({\neg}p{\vee}p){\wedge}p$. Without loss of generality, we assume that, unless otherwise specified, the universe of discourse in all cases that we consider — i.e., the set over which the variables of classical first order logic range — is $\{1,2,3,\ldots\}$. Then, from the perspective of classical first order logic, the universal quantification of $E(x)$ is nothing but the “long conjunction” $E(1){\wedge}E(2){\wedge}E(3){\wedge}\ldots$, and the existential quantification of $E(x)$ is nothing but the “long disjunction” $E(1){\vee}E(2){\vee}E(3){\vee}\ldots$. To emphasize this connection, let us agree to use the expression ${\mbox{\hspace{1pt}\Large $\wedge$}\hspace{1pt}}\hspace{-1pt} xE(x)$ for the former and ${\mbox{\hspace{1pt}\Large $\vee$}\hspace{1pt}}\hspace{-2pt} xE(x)$ for the latter, instead of the more usual ${\mbox{\large $\forall$}\hspace{1pt}}xE(x)$ and ${\mbox{\large $\exists$}\hspace{1pt}}x E(x)$ (computability logic reserves ${\mbox{\large $\forall$}\hspace{1pt}}x$ and ${\mbox{\large $\exists$}\hspace{1pt}}x$ for another, so called [*blind*]{}, sort of quantifiers; semantically they, just like ${\mbox{\hspace{1pt}\Large $\wedge$}\hspace{1pt}}\hspace{-1pt} x$ and ${\mbox{\hspace{1pt}\Large $\vee$}\hspace{1pt}}\hspace{-2pt}x$, are conservative generalizations of the classical quantifiers). 
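The cirquent-to-formula translation just described can be sketched in a few lines of Python, again under our own (assumed, illustrative) dictionary encoding of cirquents. Visiting a shared child once per incoming edge is exactly the “duplicating and separating” step:

```python
# Unfolding the DAG into a tree and reading off the formula happen at once:
# a shared child is rendered once per incoming edge, which duplicates it.

def to_formula(cirquent, node):
    label, children = cirquent[node]
    if label in ('and', 'or'):
        glue = ' /\\ ' if label == 'and' else ' \\/ '
        return '(' + glue.join(to_formula(cirquent, c) for c in children) + ')'
    return label                                   # a port: just its literal

# The right cirquent of Figure 1, with one \/-gate shared via parallel edges:
right = {
    1: ('and', [2, 3, 3, 4]),
    2: ('~p', []), 3: ('or', [5, 6]), 4: ('p', []),
    5: ('~p', []), 6: ('p', []),
}
assert to_formula(right, 1) == '(~p /\\ (~p \\/ p) /\\ (~p \\/ p) /\\ p)'
```

For cirquents with heavily shared gates this unfolding can grow exponentially; sharing is precisely what formulas cannot express directly.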
Since quantifiers are conjunctions or disjunctions, it is obvious that all formulas of classical first order logic can also be seen as tree-like (albeit no longer finite) cirquents. For instance, the formula ${\mbox{\hspace{1pt}\Large $\wedge$}\hspace{1pt}}\hspace{-1pt} x\hspace{-1pt}{\mbox{\hspace{1pt}\Large $\vee$}\hspace{1pt}}\hspace{-2pt} y \hspace{3pt} p(x,y)$ is nothing but the left cirquent of Figure 2. On the other hand, unlike the case with finite cirquents, obviously not all infinite cirquents can be directly and adequately translated into formulas of classical logic. Of course, the great expressive power achieved by infinite cirquents, by itself, does not mean much, because such cirquents are generally “unwritable” or “undrawable”. The cirquents of Figure 2 are among the few lucky exceptions, but even there, the usage of the ellipsis is very informal, relying on a human reader’s ability to see patterns and generalize. In order to take advantage of the expressiveness of cirquents, one needs to introduce notational conventions allowing one to represent certain infinite patterns and (sub)cirquents through finite means. The quantifiers ${\mbox{\hspace{1pt}\Large $\wedge$}\hspace{1pt}}\hspace{-1pt} x$ and ${\mbox{\hspace{1pt}\Large $\vee$}\hspace{1pt}}\hspace{-2pt} x$ are among such means. Defining new, ever more expressive means (not reducible to ${\mbox{\hspace{1pt}\Large $\wedge$}\hspace{1pt}}\hspace{-1pt}x,{\mbox{\hspace{1pt}\Large $\vee$}\hspace{1pt}}\hspace{-2pt}x$) is certainly possible. Among the advantages of considering all — rather than only finitely represented — cirquents as we do in this paper is that a semantics for them has to be set up only once. If and when various abbreviations and finite means of expression are introduced, one will only need to explain what kinds of cirquents or subcirquents they stand for, without otherwise redefining or extending the already agreed-upon general semantics for cirquents.
But higher expressiveness is not the only advantage of cirquents over formulas. Another very serious advantage is the higher efficiency of cirquents. Let us for now talk only about finite cirquents. As we know, all such cirquents can be written as formulas of classical propositional logic. So, they do not offer any additional expressive power. But they [*do*]{} offer dramatically improved efficiency in representing Boolean functions. In Section 8 of [@Japdeep] one can find examples of naturally emerging sets of Boolean functions which can be represented through polynomial size cirquents but require exponential sizes if represented through formulas. The higher efficiency of cirquents is achieved due to the possibility of [*sharing*]{} children between different parents — a mechanism absent in formulas, because of which an exponential explosion may easily occur when translating a (non-tree-like) cirquent into an equivalent formula. Imagine where the computer industry would be at present if, for some strange reason, the computer engineers had insisted on tree-like (rather than graph-like) circuitry! Cirquents offer not only improved efficiency of expression, but also improved efficiency of proofs in deductive systems. In [@Japdeep], a cirquent-based analytic deductive system [**CL8**]{} for classical propositional logic is set up which is shown to yield an exponential improvement (over formula-based analytic deductive systems) in proof efficiency. For instance, the notorious propositional Pigeonhole principle, which is known to have no polynomial size analytic proofs in traditional systems, has been shown to have such proofs in [**CL8**]{}. Selectional gates {#s4} ================= We now start a series of generalizations for cirquents and their semantics. We agree that, throughout the rest of this paper, unless otherwise specified, “cirquent” and all related terms are always to be understood in their latest, most general, sense.
In this section we extend the concept of [**cirquents**]{} defined in the previous section by allowing, along with ${\vee}$ and ${\wedge}$, the following six additional possible labels for gates: ${\mbox{\hspace{2pt}$\vee$\hspace{-1.29mm}\raisebox{0.1mm}{\rule{0.13mm}{2mm}}\hspace{5pt}}}\hspace{-2pt},\hspace{-2pt}{\mbox{\hspace{2pt}$\wedge$\hspace{-1.29mm}\raisebox{0.02mm}{\rule{0.13mm}{2mm}}\hspace{5pt}}}\hspace{-2pt},\hspace{-2pt}{\mbox{\hspace{2pt}\small \raisebox{0.06cm}{$\bigtriangledown$}\hspace{2pt}}}\hspace{-2pt},\hspace{-2pt}{\mbox{\hspace{2pt}\small \raisebox{0.0cm}{$\bigtriangleup$}\hspace{2pt}}}\hspace{-2pt},{\hspace{0pt}\sqcup},{\hspace{0pt}\sqcap}$. Gates labeled with any of these new symbols we call [**selectional**]{} gates. And gates labeled with the old ${\vee}$ or ${\wedge}$ we call [**parallel**]{} gates. Selectional gates, in turn, are subdivided into three groups: - $\{\hspace{-2pt}{\mbox{\hspace{2pt}$\vee$\hspace{-1.29mm}\raisebox{0.1mm}{\rule{0.13mm}{2mm}}\hspace{5pt}}}\hspace{-2pt},\hspace{-2pt}{\mbox{\hspace{2pt}$\wedge$\hspace{-1.29mm}\raisebox{0.02mm}{\rule{0.13mm}{2mm}}\hspace{5pt}}}\hspace{-2pt}\}$, referred to as [**toggling**]{} gates; - $\{\hspace{-2pt}{\mbox{\hspace{2pt}\small \raisebox{0.06cm}{$\bigtriangledown$}\hspace{2pt}}}\hspace{-2pt},\hspace{-2pt}{\mbox{\hspace{2pt}\small \raisebox{0.0cm}{$\bigtriangleup$}\hspace{2pt}}}\hspace{-2pt}\}$, referred to as [**sequential**]{} gates; - $\{{\hspace{0pt}\sqcup},{\hspace{0pt}\sqcap}\}$, referred to as [**choice**]{} gates. 
The eight kinds of gates are also divided into the following two groups: - $\{{\vee},\hspace{-2pt}{\mbox{\hspace{2pt}$\vee$\hspace{-1.29mm}\raisebox{0.1mm}{\rule{0.13mm}{2mm}}\hspace{5pt}}}\hspace{-2pt},\hspace{-2pt}{\mbox{\hspace{2pt}\small \raisebox{0.06cm}{$\bigtriangledown$}\hspace{2pt}}}\hspace{-2pt},{\hspace{0pt}\sqcup}\}$, termed [**disjunctive**]{} gates (or simply [**disjunctions**]{}); - $\{{\wedge},\hspace{-2pt}{\mbox{\hspace{2pt}$\wedge$\hspace{-1.29mm}\raisebox{0.02mm}{\rule{0.13mm}{2mm}}\hspace{5pt}}}\hspace{-2pt},\hspace{-2pt}{\mbox{\hspace{2pt}\small \raisebox{0.0cm}{$\bigtriangleup$}\hspace{2pt}}}\hspace{-2pt},{\hspace{0pt}\sqcap}\}$, termed [**conjunctive**]{} gates (or simply [**conjunctions**]{}). Thus, ${\wedge}$ is to be referred to as “parallel conjunction”, ${\hspace{0pt}\sqcup}$ as “choice disjunction”, etc. The same eight symbols can be used to construct formulas in the standard way. Of course, in the context of formulas, these symbols will be referred to as [**operators**]{} or [**connectives**]{} rather than as gates. Formulas of computability logic accordingly may also use eight sorts of [**quantifiers**]{}. They are written as symbols of the same shape as the corresponding connectives, but in a larger size. Two of such quantifiers — ${\mbox{\hspace{1pt}\Large $\wedge$}\hspace{1pt}}\hspace{-1pt}x$ and ${\mbox{\hspace{1pt}\Large $\vee$}\hspace{1pt}}\hspace{-2pt}x$ — have already been explained in the previous section. 
The remaining quantifiers are understood in the same way: ${\mbox{\Large $\sqcap$}\hspace{1pt}}xE(x)$ is the infinite choice conjunction $E(1){\hspace{0pt}\sqcap}E(2){\hspace{0pt}\sqcap}\ldots$,  ${\mbox{\Large $\sqcup$}\hspace{1pt}}xE(x)$ is the infinite choice disjunction $E(1){\hspace{0pt}\sqcup}E(2){\hspace{0pt}\sqcup}\ldots$,  and similarly for ${\mbox{\hspace{1pt}\Large $\wedge$\hspace{-1.84mm}\raisebox{0.02mm}{\rule{0.13mm}{3.0mm}}\hspace{6pt}}}\hspace{-1pt},\hspace{-1pt}{\hspace{1pt}\mbox{\Large $\vee$\hspace{-1.84mm}\raisebox{0.1mm}{\rule{0.13mm}{3.0mm}}\hspace{6pt}}}\hspace{-1pt},\hspace{-1pt}{\mbox{\large \raisebox{0.0cm}{$\bigtriangleup$}}},{\mbox{\large \raisebox{0.07cm}{$\bigtriangledown$}}}$. Since quantifiers are again nothing but “long” conjunctions and disjunctions, as pointed out in the previous section, there is no necessity to have special gates for them, as our approach that permits gates with infinite outdegrees covers them all. For similar reasons, there is no necessity to have special gates for what computability logic calls [*(co)recurrence operators*]{}. 
The [*parallel recurrence*]{} ${\mbox{\raisebox{-0.01cm}{\scriptsize $\wedge$}\hspace{-4pt}\raisebox{0.16cm}{\tiny $\mid$}\hspace{2pt}}}E$ of $E$ is defined as the infinite parallel conjunction $E{\wedge}E{\wedge}\ldots$, the [*parallel corecurrence*]{} ${\mbox{\raisebox{0.12cm}{\scriptsize $\vee$}\hspace{-4pt}\raisebox{0.02cm}{\tiny $\mid$}\hspace{2pt}}}E$ of $E$ is defined as the infinite parallel disjunction $E{\vee}E{\vee}\ldots$, and similarly for [*toggling recurrence*]{} ${\mbox{\raisebox{-0.01cm}{\scriptsize $\wedge$}\hspace{-4pt}\raisebox{0.06cm}{\small $\mid$}\hspace{2pt}}}$, [*toggling corecurrence*]{} ${\mbox{\raisebox{0.12cm}{\scriptsize $\vee$}\hspace{-3.8pt}\raisebox{0.04cm}{\small $\mid$}\hspace{2pt}}}$, [*sequential recurrence*]{} ${\mbox{\raisebox{-0.07cm}{\scriptsize $-$}\hspace{-0.2cm}${\mbox{\raisebox{-0.01cm}{\scriptsize $\wedge$}\hspace{-4pt}\raisebox{0.16cm}{\tiny $\mid$}\hspace{2pt}}}$}}$ and [*sequential corecurrence*]{} ${\mbox{\raisebox{0.20cm}{\scriptsize $-$}\hspace{-0.2cm}${\mbox{\raisebox{0.12cm}{\scriptsize $\vee$}\hspace{-4pt}\raisebox{0.02cm}{\tiny $\mid$}\hspace{2pt}}}$}}$ (choice recurrence and corecurrence are not considered because, semantically, both $E{\hspace{0pt}\sqcap}E{\hspace{0pt}\sqcap}\ldots$ and $E{\hspace{0pt}\sqcup}E{\hspace{0pt}\sqcup}\ldots$ are simply equivalent to $E$). All of the operators ${\vee},{\wedge},\hspace{-2pt}{\mbox{\hspace{2pt}$\vee$\hspace{-1.29mm}\raisebox{0.1mm}{\rule{0.13mm}{2mm}}\hspace{5pt}}}\hspace{-2pt},\hspace{-2pt}{\mbox{\hspace{2pt}$\wedge$\hspace{-1.29mm}\raisebox{0.02mm}{\rule{0.13mm}{2mm}}\hspace{5pt}}}\hspace{-2pt},\hspace{-2pt}{\mbox{\hspace{2pt}\small \raisebox{0.06cm}{$\bigtriangledown$}\hspace{2pt}}}\hspace{-2pt},\hspace{-2pt}{\mbox{\hspace{2pt}\small \raisebox{0.0cm}{$\bigtriangleup$}\hspace{2pt}}}\hspace{-2pt},{\hspace{0pt}\sqcup},{\hspace{0pt}\sqcap}$, including their quantifier and recurrence versions, have been already motivated, defined and studied in computability logic. 
This, however, has been done only in the context of formulas. In this paper we extend the earlier approach and concepts of computability logic from formulas to cirquents as generalized formulas. As before, an interpretation is an assignment $^*$ of ${\top}$ or ${\bot}$ to each atom, extended to all literals by commuting with ${\neg}$. And, as before, such an assignment induces a mapping that sends each cirquent $C$ to a game $C^*$. When $C$ is a cirquent in the sense of the previous section, $C^*$ is always an elementary game (${\top}$ or ${\bot}$). When, however, $C$ contains selectional gates, the game $C^*$ is no longer elementary. To define such games $C^*$, let us agree that, throughout this paper, positive integers are identified with their decimal representations, so that, when we say “number $n$”, we may mean either the [*number*]{} $n$ as an abstract object, or the [*string*]{} that represents $n$ in the decimal notation. Among the benefits of this convention is that it allows us to identify nodes of a cirquent with their IDs. \[may14a\] Let $C$ be a cirquent, $^*$ an interpretation, and $\Phi$ a position. $\Phi$ is a [**legal position**]{} of the game $C^*$ iff, with a “gate” below meaning a gate of $C$, the following conditions are satisfied: 1. Each labmove of $\Phi$ has one of the following forms: 1. ${\top}g.i$, where $g$ is a ${\mbox{\hspace{2pt}$\vee$\hspace{-1.29mm}\raisebox{0.1mm}{\rule{0.13mm}{2mm}}\hspace{5pt}}}$-, ${\mbox{\hspace{2pt}\small \raisebox{0.06cm}{$\bigtriangledown$}\hspace{2pt}}}$- or ${\hspace{0pt}\sqcup}$-gate and $i$ is a positive integer not exceeding the outdegree of $g$; 2. ${\bot}g.i$, where $g$ is a ${\mbox{\hspace{2pt}$\wedge$\hspace{-1.29mm}\raisebox{0.02mm}{\rule{0.13mm}{2mm}}\hspace{5pt}}}$-, ${\mbox{\hspace{2pt}\small \raisebox{0.0cm}{$\bigtriangleup$}\hspace{2pt}}}$- or ${\hspace{0pt}\sqcap}$-gate and $i$ is a positive integer not exceeding the outdegree of $g$. 2.
Whenever $g$ is a choice gate, $\Phi$ contains at most one occurrence of a labmove of the form ${\wp}g.i$. 3. Whenever $g$ is a sequential gate and $\Phi$ is of the form ${\langle \ldots , {\wp}g.i ,\ldots, {\wp}g.j ,\ldots \rangle}$, we have $i<j$. Note that the set of legal runs of $C^*$ does not depend on $^*$. Hence, in the sequel, we can unambiguously omit “$^*$” and simply say “legal run of $C$”. The intuitive meaning of a move of the form $g.i$ is [**selecting**]{} the $i$th outgoing edge of the selectional gate $g$, together with the child at which that edge points. In a disjunctive selectional gate, a selection is always made by player ${\top}$; and in a conjunctive selectional gate, a selection is always made by player ${\bot}$. The difference between the three types of selectional gates lies only in how many selections may be made in the same gate, and in what order. In a choice gate, a selection can be made only once. In a sequential gate, selections can be reconsidered any number of times, but only in the left-to-right order (once an edge $\#i$ is selected, no edge $\# j$ with $j\leq i$ can be (re)selected afterwards). In a toggling gate, selections can be reconsidered any number of times and in any order. This includes the possibility to select the same edge over and over again. \[mm20\] In the context of a given cirquent $C$ and a legal run $\Gamma$ of $C$, we will say that a selectional gate $g$ is [**unresolved**]{} iff either no moves of the form $g.j$ have been made in $\Gamma$, or infinitely many such moves have been made.[^7] Otherwise $g$ is [**resolved**]{} and, where $g.i$ is the last move of the form $g.j$ made in $\Gamma$, the child pointed at by the $i$th outgoing edge of $g$ is said to be the [**resolvent**]{} of $g$. Intuitively, the resolvent is the “final”, or “ultimate” selection of a child made in gate $g$ by the corresponding player. There is no such “ultimate” selection in unresolved gates.
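For finite runs, the legality conditions and the notion of resolvent can be checked mechanically. The Python sketch below uses encodings that are entirely our own: selectional labels `'cor'`/`'cand'` for choice, `'sor'`/`'sand'` for sequential, and `'tor'`/`'tand'` for toggling disjunctions and conjunctions, with a move written as `(player, gate, i)` where `'+'` stands for ${\top}$ and `'-'` for ${\bot}$.

```python
DISJUNCTIVE = {'cor', 'sor', 'tor'}
SELECTIONAL = DISJUNCTIVE | {'cand', 'sand', 'tand'}

def legal(cirquent, run):
    seen = {}                              # gate -> indices selected so far
    for player, g, i in run:
        label, children = cirquent[g]
        if label not in SELECTIONAL or not (1 <= i <= len(children)):
            return False                   # only selectional gates take moves
        if player != ('+' if label in DISJUNCTIVE else '-'):
            return False                   # wrong player for this gate
        picks = seen.setdefault(g, [])
        if label in ('cor', 'cand') and picks:
            return False                   # choice: at most one selection
        if label in ('sor', 'sand') and picks and i <= picks[-1]:
            return False                   # sequential: strictly left to right
        picks.append(i)
    return True

def resolvents(cirquent, run):
    """Map each gate with at least one move to its last selected child."""
    last = {}
    for _, g, i in run:
        last[g] = cirquent[g][1][i - 1]
    return last

C = {1: ('cand', [2, 3]), 2: ('p', []), 3: ('~p', [])}
assert legal(C, [('-', 1, 2)])
assert not legal(C, [('-', 1, 1), ('-', 1, 2)])  # choice gate moved twice
assert resolvents(C, [('-', 1, 2)]) == {1: 3}
```

Infinite runs, under which a gate receiving infinitely many selections counts as unresolved, are of course beyond the reach of such a finite simulation.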
The following definition conservatively generalizes Definition \[may14b\] of truth. Now we have a legal run $\Gamma$ of the cirquent as an additional parameter, which was trivial (namely, $\Gamma={\langle\rangle}$) in Definition \[may14b\] and hence not mentioned there. \[may14c\] Let $C$ be a cirquent, $^*$ an interpretation, and $\Gamma$ a legal run of $C$. In this context, with “true” to be read as “[**true w.r.t. $(^*,\Gamma)$**]{}”, we say that: - An $L$-port is true iff $L^*={\top}$. - A ${\vee}$-gate is true iff so is at least one of its children. - A ${\wedge}$-gate is true iff so are all of its children. - A resolved selectional gate is true iff so is its resolvent. - No unresolved disjunctive selectional gate is true. - Each unresolved conjunctive selectional gate is true. Finally, we say that $C$ is true iff so is its root. As we just did in the above definition, when $^*$ and $\Gamma$ are fixed in a context or are otherwise irrelevant, we may omit “w.r.t. $(^*,\Gamma)$” and just say “true”. \[may14aa\] Let $C$ be a cirquent, $^*$ an interpretation, and $\Gamma$ a legal run of $C$. Then $\Gamma$ is a ${\top}$-won run of the game $C^*$ iff $C$ is true w.r.t. $(^*,\Gamma)$. Definitions \[may14a\] and \[may14aa\], together, provide a definition of the game $C^*$, for any cirquent $C$ and interpretation $^*$. To such a game $C^*$ we may refer as “the game represented by $C$ under interpretation $^*$”, or as “$C$ under interpretation $^*$”, or — when $^*$ is fixed or irrelevant — as “the game represented by $C$”. We may also say that “$^*$ interprets $C$ as $C^*$”. Similarly for atoms and literals instead of cirquents. Also, in informal contexts we may identify a cirquent $C$ or a literal $L$ with a (the) game represented by it, and write $C$ or $L$ where, strictly speaking, $C^*$ or $L^*$ is meant. These and similar terminological conventions apply not only to the present section, but the rest of the paper as well. 
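The generalized truth definition, too, is directly executable for finite cirquents and finite runs. The sketch below continues our assumed illustrative encoding (nodes as `(label, children)` pairs, selectional labels `'cor'`/`'cand'`, `'sor'`/`'sand'`, `'tor'`/`'tand'`, moves as `(player, gate, i)` triples):

```python
DISJUNCTIVE = {'cor', 'sor', 'tor'}
CONJUNCTIVE = {'cand', 'sand', 'tand'}

def truth(cirquent, node, interp, run):
    label, children = cirquent[node]
    if label == 'or':
        return any(truth(cirquent, c, interp, run) for c in children)
    if label == 'and':
        return all(truth(cirquent, c, interp, run) for c in children)
    if label in DISJUNCTIVE or label in CONJUNCTIVE:
        picks = [i for _, g, i in run if g == node]
        if not picks:                      # unresolved selectional gate
            return label in CONJUNCTIVE
        resolvent = children[picks[-1] - 1]
        return truth(cirquent, resolvent, interp, run)
    if label.startswith('~'):
        return not interp[label[1:]]
    return interp[label]

# ~p |_| p : Top wins iff it has resolved the root with a true child.
C = {1: ('cor', [2, 3]), 2: ('~p', []), 3: ('p', [])}
assert not truth(C, 1, {'p': True}, [])             # unresolved: false
assert truth(C, 1, {'p': True}, [('+', 1, 2)])      # selected p, p true
assert not truth(C, 1, {'p': True}, [('+', 1, 1)])  # selected ~p, ~p false
```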
We now need to generalize the definition of validity given in the previous section to cirquents in the sense of the present section. As it turns out, such a generalization can be made in two, equally natural, ways: \[may20\] Let $C$ be a cirquent in the sense of the present or any of the subsequent sections. We say that: 1\. $C$ is [**weakly valid**]{} iff, for any interpretation $^*$, there is an HPM ${\cal M}$ such that $\cal M$ wins the game $C^*$. 2\. $C$ is [**strongly valid**]{} iff there is an HPM ${\cal M}$ such that, for any interpretation $^*$, ${\cal M}$ wins the game $C^*$. When $\cal M$ and $C$ satisfy the second clause of the above definition, we say that $\cal M$ is a [**uniform solution**]{} for $C$. Intuitively, a uniform solution $\cal M$ for a cirquent $C$ is an interpretation-independent winning strategy: since the “intended” or “actual” interpretation $^*$ is not visible to the machine, $\cal M$ has to play in some standard, uniform way that would be successful for any possible interpretation of $C$. To put it in other words, a uniform solution is a [*purely logical solution*]{}, which is based only on the [*form*]{} of a cirquent rather than any extra-logical [*meaning*]{} (interpretation) we have or could have associated with it. It is obvious that for cirquents of the previous section, the weak and strong forms of validity coincide with what we simply called “validity” there. In general, however, not every weakly valid cirquent would also be strongly valid. The simplest example of such a cirquent is ${\neg}p{\hspace{0pt}\sqcup}p$.[^8] Under any interpretation $^*$, the game represented by this cirquent is won by one of the two HPMs ${\cal M}_1$ or ${\cal M}_2$, where ${\cal M}_1$ is the machine that selects the left disjunct and rests, while ${\cal M}_2$ is the machine that selects the right disjunct and rests. However, which of these two machines wins the game depends on whether $^*$ interprets $p$ as ${\bot}$ or ${\top}$.
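The point can be verified mechanically. In the toy Python sketch below (our own modeling, not part of the formalism), a “machine” for ${\neg}p{\hspace{0pt}\sqcup}p$ is reduced to its one selection: $1$ for the left disjunct ${\neg}p$, or $2$ for the right disjunct $p$.

```python
# A machine wins under an interpretation iff its selected disjunct is true.
def wins(selection, p_value):
    return (not p_value) if selection == 1 else p_value

M1, M2 = 1, 2
# Under each interpretation, one of the two machines wins (weak validity):
assert all(wins(M1, p) or wins(M2, p) for p in (False, True))
# But no single machine wins under both (hence no uniform solution):
assert not any(all(wins(M, p) for p in (False, True)) for M in (M1, M2))
```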
In general, there is no [*one*]{} machine that wins the game for [*any*]{} possible interpretation. That is, the cirquent ${\neg}p{\hspace{0pt}\sqcup}p$ has no uniform solution, and thus it is not strongly valid. Which of the two versions of validity is more interesting depends on the motivational standpoint. Weak validity tells us what can be computed in principle. So, a computability-theoretician would focus on this concept. On the other hand, it is strong rather than weak validity that would be of interest in all application areas of CoL. There we want a logic on which a universal problem-solving machine can be based. Such a machine would or should be able to solve problems represented by logical expressions without any specific knowledge of the meanings of their atoms, i.e. without knowledge of the actual interpretation, which may vary from situation to situation or from application to application. Strong validity is exactly what fits the bill in this case. Throughout this paper, our primary focus will be on strong rather than weak validity. To appreciate the difference between the parallel and choice groups of gates or connectives, let us compare the two cirquents of Figure 3. 
[**Figure 3:**]{} Parallel versus choice gates. (Diagrams omitted in this rendering. The left cirquent is the tree-like cirquent of ${\mbox{\hspace{1pt}\Large $\wedge$}\hspace{1pt}}\hspace{-1pt} x\bigl({\neg}p(x){\vee}p(x)\bigr)$: its root ${\wedge}$-gate has infinitely many ${\vee}$-gate children, the $i$th of which has a ${\neg}p(i)$-port and a $p(i)$-port as children. The right cirquent, corresponding to ${\mbox{\Large $\sqcap$}\hspace{1pt}}x\bigl({\neg}p(x){\hspace{0pt}\sqcup}p(x)\bigr)$, is the same but with a ${\hspace{0pt}\sqcap}$-gate as the root and ${\hspace{0pt}\sqcup}$-gates in place of the ${\vee}$-gates.)

The game represented by the left cirquent of Figure 3 is elementary, where no moves can or should be made by either player. It is also easy to see that this game (the only legal run ${\langle\rangle}$ of it, that is) is automatically won by ${\top}$, no matter what interpretation $^*$ is applied, so, it is both weakly and strongly valid. On the other hand, the game represented by the right cirquent of the same figure is not elementary. And it is neither strongly valid nor weakly valid.
A legal move by ${\bot}$ in this game consists in selecting one of the infinitely many outgoing edges (and hence children) of the root, intuitively corresponding to choosing a value for $x$ in the formula ${\mbox{\Large $\sqcap$}\hspace{1pt}}x\bigl({\neg}p(x){\hspace{0pt}\sqcup}p(x)\bigr)$. And a(ny) legal move by ${\top}$ consists in selecting one of the two outgoing edges (and hence children) of one of the ${\hspace{0pt}\sqcup}$-gates. Making more than one selection in the same choice (unlike toggling or sequential) gate is not allowed, so that a selection automatically becomes the resolvent of the gate. The overall game is won by ${\top}$ iff either ${\bot}$ failed to make a selection in the root, or else, where $i$ is the outgoing edge of the root selected by ${\bot}$ and $a_i$ is the corresponding ($i$th, that is) child of the root, either (1) ${\top}$ has selected the left outgoing edge of $a_i$ and ${\neg}p(i)$ is true, or (2) ${\top}$ has selected the right outgoing edge of $a_i$ and $p(i)$ is true. There are no conditions on when the available moves should be made, and generally they can be made by either player at any time and in any order. So, in the present example, ${\top}$ can legally make selections in several or even all ${\hspace{0pt}\sqcup}$-gates. But, of course, a reasonable strategy for ${\top}$ is to first wait till ${\bot}$ resolves the root (otherwise ${\top}$ wins), and then focus only on the resolvent of the root (what happens in the other ${\hspace{0pt}\sqcup}$-gates no longer matters), trying to select its true child. From the above explanation it should be clear that the right cirquent of Figure 3 expresses the problem of deciding (in the traditional sense) the predicate $p(x)$. That is, under any given interpretation $^*$, the game represented by that cirquent has an algorithmic winning strategy by ${\top}$ if and only if the predicate $\bigl(p(x)\bigr)^*$ — which we simply write as $p(x)$ — is decidable.
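When $p(x)$ is decidable, the waiting strategy just described admits a very direct sketch. In the illustrative Python below (our own modeling, not the paper's formalism), ${\bot}$'s moves are pairs `('root', i)`, and ${\top}$ answers by resolving the $i$th ${\hspace{0pt}\sqcup}$-gate, selecting edge $2$ (the $p(i)$ side) if $p(i)$ holds and edge $1$ (the ${\neg}p(i)$ side) otherwise:

```python
def top_strategy(decide_p, bot_moves):
    """Top's selections in reply to Bot's moves, for a finite play."""
    replies = []
    for gate, i in bot_moves:
        if gate == 'root':                  # Bot chose a value for x
            replies.append((('disj', i), 2 if decide_p(i) else 1))
    return replies

is_even = lambda n: n % 2 == 0              # a sample decidable predicate
assert top_strategy(is_even, [('root', 7)]) == [(('disj', 7), 1)]
assert top_strategy(is_even, [('root', 4)]) == [(('disj', 4), 2)]
assert top_strategy(is_even, []) == []      # Bot idles: Top wins by waiting
```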
As not all predicates are decidable, the cirquent is not weakly valid, let alone strongly valid.

To get a feel for sequential and toggling gates, let us look at Figure 4.

[**Figure 4:**]{} Sequential versus toggling gates — on the left, the cirquent for ${\mbox{\Large $\sqcap$}\hspace{1pt}}x\bigl({\neg}p(x){\mbox{\hspace{2pt}\small \raisebox{0.06cm}{$\bigtriangledown$}\hspace{2pt}}}p(x)\bigr)$; on the right, the cirquent for ${\mbox{\Large $\sqcap$}\hspace{1pt}}x\bigl({\neg}p(x){\mbox{\hspace{2pt}$\vee$\hspace{-1.29mm}\raisebox{0.1mm}{\rule{0.13mm}{2mm}}\hspace{5pt}}}p(x)\bigr)$

The cirquents of Figure 4 look similar to the right cirquent of Figure 3. And the latter, as we know, represents the problem of [*deciding*]{} $p(x)$. Then what are the problems represented by the cirquents of Figure 4? The left cirquent of Figure 4 represents the problem of [*semideciding*]{} $p(x)$. That is, under any given interpretation, the game represented by this cirquent is computable if and only if the predicate $p(x)$ is semidecidable (recursively enumerable). Indeed, suppose $p(x)$ is semidecidable. Then an algorithmic winning strategy for the game represented by the cirquent goes like this. Wait till the environment selects the $i$th child $a_i$ of the root for some $i$. Then select the left child of $a_i$, after which start looking for a certificate of the truth of $p(i)$. If and when such a certificate is found, select the right child of $a_i$, and rest your case. It is obvious that this strategy indeed wins. For the opposite direction, suppose $\cal M$ is an HPM that wins the game represented by the cirquent. Then a semidecision procedure for the predicate $p(x)$ goes like this. After receiving an input $i$, simulate the work of $\cal M$ in the scenario where, at the beginning of the play, the environment selected the $i$th child $a_i$ of the root. If and when you see in this simulation that $\cal M$ selected the right child of $a_i$, accept the input. As for the right cirquent of Figure 4, it also represents a decision-style problem, one weaker still than the problem of semideciding $p(x)$. This problem is known in the literature as [*recursive approximation*]{} (cf. [@Hinman], Definition 8.3.9).
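The semidecision strategy described above can be sketched as follows. This is a hedged Python model under our own (hypothetical) encoding: after the environment picks $i$, the strategy commits to the left child of $a_i$ and later upgrades its selection if a certificate turns up. The search is bounded by `max_steps` only so that the sketch terminates; an actual strategy would search forever when $p(i)$ is false.

```python
# Sketch (hypothetical encoding) of Top's strategy for the left cirquent of
# Figure 4: first select the left child ~p(i), then, if a certificate of
# p(i) is found, switch the sequential gate to the right child p(i).

def semidecide_strategy(i, candidates, verify, max_steps):
    """Return the list of Top's moves in the i-th sequential gate."""
    moves = [(i, 1)]              # commit to the left child ~p(i) first
    for step, c in enumerate(candidates):
        if step >= max_steps:     # bounded search: p(i) may well be false
            break
        if verify(i, c):
            moves.append((i, 2))  # certificate found: switch to p(i)
            break
    return moves

# Example: p(x) = "x is a perfect square", certified by a square root.
verify_square = lambda x, c: c * c == x
```

Note that the switch from the left child to the right one is exactly what a sequential gate, unlike a choice gate, permits.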
Recursively approximating $p(x)$ means telling whether $p(x)$ is true or not, but doing so in the same style as semideciding does in negative cases: by correctly saying “Yes” or “No” at some point (after perhaps taking back previous answers several times) and never reconsidering this answer afterwards. Observe that semideciding $p(x)$ can be seen as always saying “No” at the beginning and then, if this answer is incorrect, changing it to “Yes” at some later time; so, when the answer is negative, this will be expressed by saying “No” and never taking back this answer, yet without ever indicating that the answer is final and will not change.[^9] Thus, the difference between semideciding and recursively approximating is that, unlike a semidecision procedure, a recursive approximation procedure can reconsider [*both*]{} negative and positive answers, and do so several times rather than only once. In perhaps more familiar terms, according to Shoenfield’s Limit Lemma (cf. [@Hinman], Lemma 8.3.12), a predicate $p(x)$ is recursively approximable (i.e., the problem of its recursive approximation has an algorithmic solution) iff $p(x)$ is of arithmetical complexity $\Delta_2$, i.e., both $p(x)$ and its negation can be written in the form ${\mbox{\large $\exists$}\hspace{1pt}}z{\mbox{\large $\forall$}\hspace{1pt}}y\hspace{1pt}s(z,y,x)$, where $s(z,y,x)$ is a decidable predicate. We could go on and on illustrating how our formalism — even at the formula level — makes it possible to express various known and unknown natural and interesting properties, relations and operations on predicates or games as generalized predicates, but this would take us too far afield. Plenty of examples and discussions in that style can be found in, say, [@Jap03; @Japseq; @Japfin; @Japtogl].
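A recursive approximation can be modeled as a computable sequence of guesses that flips only finitely often and stabilizes on the correct answer. The sketch below, with hypothetical names of our own choosing, shows this limit behavior; by Shoenfield's Limit Lemma such a guess sequence exists exactly for the $\Delta_2$ predicates.

```python
# Hedged model of a recursive approximation: guess(x, t) is a computable
# sequence of "Yes"/"No" answers for p(x); the answer at stage t may take
# back earlier answers, but only finitely often, and the limit is correct.

def approximate(guess, x, steps):
    """Return the answer current after `steps` stages, together with the
    number of times the answer was taken back (changed)."""
    answer, changes = guess(x, 0), 0
    for t in range(1, steps):
        new = guess(x, t)
        if new != answer:
            answer, changes = new, changes + 1
    return answer, changes

# Toy guess sequence: starts with "No" and flips to the true value of
# p(x) = "x is even" at stage x, after which it never changes again.
toy_guess = lambda x, t: (x % 2 == 0) if t >= x else False
```

Here the semidecision procedures of the previous paragraph are the special case where the only change ever allowed is a single flip from “No” to “Yes”.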
Here we only want to point out the difference between our treatment of ${\vee},{\wedge}$ (including quantifiers as infinite versions of ${\vee},{\wedge}$) and the more traditional game-semantical approaches, most notably that of Hintikka’s [@Hintikka73] game-theoretic semantics. The latter essentially treats ${\vee},{\wedge}$ as we treat ${\hspace{0pt}\sqcup},{\hspace{0pt}\sqcap}$ — namely, associates ${\top}$’s moves/choices with ${\vee}$ and ${\bot}$’s moves/choices with ${\wedge}$. Computability logic, on the other hand, in the style used by Blass [@Bla92] for the multiplicatives of linear logic, treats $A{\vee}B$ and $A{\wedge}B$ as parallel combinations of games: these are simultaneous plays on “two boards” (within the two components). In order to win $A{\wedge}B$, ${\top}$ needs to win in both components, while in order to win $A{\vee}B$, it is sufficient for ${\top}$ to win in just one component. No choice between $A$ and $B$ is expected to be made at any time by either player. Note that otherwise strong validity would not at all be an interesting concept: there would be no strongly valid cirquents except for some pathological cases such as the cirquent whose root is a childless conjunctive gate. Another crucial difference between our approach and that of Hintikka, as well as the approach of Blass, is that we insist on the effectiveness of strategies while the latter allow any strategies. It is not hard to see that, if we allowed any (rather than only algorithmic) strategies, the system of our gates would semantically collapse to merely the parallel group. That is, a cirquent $C$ (under whatever interpretation) would have a winning strategy by ${\top}$ if and only if $C'$ does, where $C'$ is the result of replacing in $C$ every disjunctive selectional gate by ${\vee}$ and every conjunctive selectional gate by ${\wedge}$. Anyway, an important issue for the present paper is that of the advantages of cirquents over formulas. 
As we remember, finite cirquents without selectional gates are more efficient tools of expression than formulas of classical logic are, but otherwise their expressive power is the same as that of formulas. How about finite cirquents in the more general sense of this section — ones containing selectional gates? In this case, finite (let alone infinite) cirquents are not only more efficient but also more expressive than formulas. To get some insights, let us look at Figure 5.

[**Figure 5:**]{} Unshared versus shared gates — on the left, a tree-like cirquent labeled $\bigl({\neg}p{\vee}(p{\hspace{0pt}\sqcup}q)\bigr){\wedge}\bigl((p{\hspace{0pt}\sqcup}q){\vee}{\neg}q\bigr)$; on the right, a cirquent in which a single ${\hspace{0pt}\sqcup}$-gate over $p{\hspace{0pt}\sqcup}q$ is shared between the two ${\vee}$-gates, labeled *No formula*

On the left of Figure 5 we see a tree-like cirquent obtained from the non-tree-like cirquent on the right by duplicating and separating shared descendants in the same style as we obtained the left cirquent of Figure 1 from the right cirquent.
The two cirquents of Figure 5, however, are not extensionally identical — after all, every legal run of the right cirquent contains at most one move while legal runs of the left cirquent may contain two moves. The two cirquents are different in a much stronger sense than extensional non-identity though. A machine that selects the left outgoing edge of the left ${\hspace{0pt}\sqcup}$-gate and the right outgoing edge of the right ${\hspace{0pt}\sqcup}$-gate wins the game represented by the left cirquent of Figure 5 under any interpretation, so that the cirquent is strongly valid. On the other hand, the right cirquent of Figure 5 is obviously not strongly valid. Morally, the difference between the two cirquents is that, in the right cirquent, unlike the left cirquent, the subcirquent (“resource”) $p{\hspace{0pt}\sqcup}q$ is [*shared*]{} between the two ${\wedge}$-conjuncted parents. Sharing means that only one resolution can be made in the ${\hspace{0pt}\sqcup}$-gate — a resolution that “works well” for both parents. There is no such resolution though, because the two parents “want” two different choices of a ${\hspace{0pt}\sqcup}$-disjunct. In contrast, in the left cirquent, both of these conflicting “desires” can be satisfied as each parent has its own ${\hspace{0pt}\sqcup}$-child, so that there is no need to coordinate resolutions. In layman’s terms, a shared resource (subcirquent) can be explained using the following metaphor. Imagine Victor and his wife Peggy own a joint bank account, with a balance of \$20,000, reserved for family needs. This resource can be seen as a shared choice combination of [*truck*]{}, [*furniture*]{}, and anything of family use that \$20,000 can buy.[^10] However, if Victor prefers a truck while Peggy prefers furniture (and both cost \$20,000), then they will have to sit down and decide together whether furniture is more important for the family or a truck, as their budget does not allow them to get both.
The situation would be very different if both Victor and Peggy had their own accounts, each worth \$20,000. After all, the total balance in this case would be \$40,000 rather than \$20,000. As we saw, the right cirquent of Figure 5 cannot be adequately translated into a formula using the standard way of turning non-trees into trees. However, using some non-standard and creative ways, a formula which is extensionally identical to that cirquent can still be found. For instance, one can easily see that $(p{\hspace{0pt}\sqcup}q){\vee}({\neg}p{\wedge}{\neg}q)$ is such a formula (as long as its ${\hspace{0pt}\sqcup}$-gate is given the same ID as the ID of the original cirquent’s ${\hspace{0pt}\sqcup}$-gate, of course). Well, we just got lucky in this particular case. “Luck” would not have been on our side if the cirquent in question were slightly more evolved, such as the one of Figure 6.

[**Figure 6:**]{} A more evolved example of sharing — a cirquent in which each of the three ${\hspace{0pt}\sqcup}$-gates over $p_1{\hspace{0pt}\sqcup}p_2$, $p_3{\hspace{0pt}\sqcup}p_4$ and $p_5{\hspace{0pt}\sqcup}p_6$ is shared between two of three ${\wedge}$-gates, which are in turn ${\vee}$-disjuncted

Clustering selectional gates {#s5}
============================

We now further generalize [**cirquents**]{} by adding an extra parameter to them, called [**clustering**]{} (for selectional gates).
The latter is nothing but a partition of the set of all selectional gates into subsets, called [**clusters**]{}, satisfying the condition that all gates within any given cluster have the same label (all are ${\hspace{0pt}\sqcup}$-gates, or all are ${\mbox{\hspace{2pt}$\wedge$\hspace{-1.29mm}\raisebox{0.02mm}{\rule{0.13mm}{2mm}}\hspace{5pt}}}$-gates, or …) and the same outdegree. Due to this condition, we can talk about the [**outdegree**]{} of a cluster, meaning the common outdegree of its elements, or the [**type**]{} of a cluster, meaning the common type (label) of its elements. An additional condition that we require to be satisfied by all cirquents is that the question of whether any two given selectional gates are in the same cluster be decidable. Just like nodes do, each cluster also has its own [**ID**]{}. For clarity, we assume that the ID of a cluster is the same as the smallest of the IDs of the elements of the cluster. The [*extended ID*]{} of a selectional gate is the expression $n_k$, where $n$ is the ID of the gate and the subscript $k$ is the ID of the cluster to which the gate belongs. When representing cirquents graphically, one could require showing the extended ID of each selectional gate and (just) the ID of any other node. More often, however, we draw cirquents in a lazy yet unambiguous way, where only cluster IDs are indicated; furthermore, such IDs can be (though not always will be) omitted for singleton clusters. Figure 7 shows the same cirquent, on the left represented fully and on the right represented in a lazy way, with the identical cluster ID “$7$” attached to two ${\hspace{0pt}\sqcup}$-gates indicating that they are in the same cluster, and the absence of a cluster ID for the middle ${\hspace{0pt}\sqcup}$-gate indicating that it is in a different, singleton cluster (and so would be any other selectional gate if drawn without a cluster ID).
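The conditions just stated — that a clustering partitions the selectional gates, that all gates in a cluster share a label and an outdegree, and that a cluster's ID is the smallest of its members' IDs — can be sketched as a small validity check. The encoding below is our own hypothetical one, not part of the formal definition:

```python
# Hedged sketch of the clustering condition.  A gate is a triple
# (gate_id, label, outdegree); a clustering is a list of clusters, each a
# list of gate ids.

def valid_clustering(gates, clusters):
    info = {g: (label, deg) for g, label, deg in gates}
    members = [g for c in clusters for g in c]
    if sorted(members) != sorted(info):       # must partition all gates
        return False
    # every cluster homogeneous in label and outdegree
    return all(len({info[g] for g in c}) == 1 for c in clusters)

def cluster_ids(clusters):
    """A cluster's ID is the smallest gate ID occurring in it."""
    return {min(c): sorted(c) for c in clusters}

# The clustering of Figure 7: gates 7 and 9 form one cluster, gate 8 another.
gates7 = [(7, "sqcup", 2), (8, "sqcup", 2), (9, "sqcup", 2)]
clusters7 = [[7, 9], [8]]
```

So cluster $7$ here is $\{7,9\}$ and cluster $8$ is $\{8\}$, matching the drawing of Figure 7.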
Thus, altogether there are two clusters here: one — cluster $8$ — containing gate $8$, and the other — cluster $7$ — containing gates $7$ and $9$.

[**Figure 7:**]{} A cirquent with clustered selectional gates — three ${\hspace{0pt}\sqcup}$-gates over $p_1{\hspace{0pt}\sqcup}p_2$, $p_3{\hspace{0pt}\sqcup}p_4$ and $p_5{\hspace{0pt}\sqcup}p_6$, with the outer gates $7$ and $9$ in one cluster and the middle gate $8$ in a singleton cluster; the cirquent is shown fully on the left and lazily on the right

It may be helpful for one’s intuition to think of each cluster as a single gate-style physical device rather than a collection of individual gates.
Namely, a cluster consisting of $n$ gates of outdegree $m$, as a single device, would have $n$ outputs and $m$ $n$-tuples of inputs. Figure 8 depicts this new kind of a “gate” for the case when $n=3$, $m=2$ and the type of the cluster is ${\hspace{0pt}\sqcup}$.

[**Figure 8:**]{} Clusters as generalized gates — a ${\hspace{0pt}\sqcup}$-device with two triples of inputs, $(a_1,b_1,c_1)$ and $(a_2,b_2,c_2)$, and one triple of outputs, $(a_0,b_0,c_0)$

The device shown in Figure 8 should be thought of as a switch that can be set to one of the positions $1$ or $2$ (otherwise no signals will pass through it). Setting it to $1$ simultaneously connects the three input lines $a_1$, $b_1$ and $c_1$ to the output lines $a_0$, $b_0$ and $c_0$, respectively (the three lines are parallel, isolated from each other, so that no signal can jump from one line to another). Similarly, setting the switch to $2$ connects $a_2$, $b_2$ and $c_2$ to $a_0$, $b_0$ and $c_0$, respectively. This is thus an “[*either $(a_1,b_1,c_1)$ or $(a_2,b_2,c_2)$*]{}” kind of a switch; combinations such as, say, $(a_1,b_2,c_1)$, are not available. Figure 9 shows the cirquent of Figure 7 with its clusters re-drawn in the style of Figure 8.
[**Figure 9:**]{} An alternative representation of the cirquent of Figure 7, with clusters drawn as generalized gates in the style of Figure 8

Representing clusters in the way we have just done illustrates that they are nothing but generalized gates. In fact, this is a very natural generalization. Namely, a gate in the ordinary sense is the special case of this general sort of a gate where the number of output lines (as well as input lines in each group of inputs) equals $1$. Technically, however, we prefer to continue seeing clusters as sets of ordinary gates, as they were officially defined at the beginning of this section.
So, drawings in the style of Figures 8 or 9 will never reemerge in this paper, and whatever was said about clusters as individual “gates” of a new type can be safely forgotten. Cirquents of the previous section should be viewed as special cases of cirquents in the new sense of this section. Namely, they are the cases where each selectional gate forms its own, single-element cluster. With this view, the semantical concepts we reintroduce in this section conservatively generalize those of the previous section. The definition of a legal run of the game represented by a cirquent $C$ is the same as before (Definition \[may14a\]), with the difference that now moves are made within clusters rather than within individual gates. That is, each move of a legal run of $C$ looks like $c.i$, where $c$ is the ID of a cluster (rather than of a gate), and $i$ is a positive integer not exceeding the outdegree of that cluster. The intuitive meaning of such a move $c.i$ is selecting the $i$th outgoing edge (together with the corresponding child) simultaneously in [*each*]{} gate belonging to cluster $c$. All other conditions on the legality of runs remain literally the same as before. Anyway, let us take the trouble to fully (re)produce such a definition:

\[maya\] Let $C$ be a cirquent, $^*$ an interpretation, and $\Phi$ a position. $\Phi$ is a [**legal position**]{} of the game $C^*$ iff, with a “cluster” below meaning a cluster of $C$, the following conditions are satisfied:

1.  Each labmove of $\Phi$ has one of the following forms:

    1.  ${\top}c.i$, where $c$ is a ${\mbox{\hspace{2pt}$\vee$\hspace{-1.29mm}\raisebox{0.1mm}{\rule{0.13mm}{2mm}}\hspace{5pt}}}$-, ${\mbox{\hspace{2pt}\small \raisebox{0.06cm}{$\bigtriangledown$}\hspace{2pt}}}$- or ${\hspace{0pt}\sqcup}$-cluster and $i$ is a positive integer not exceeding the outdegree of $c$;

    2.  ${\bot}c.i$, where $c$ is a ${\mbox{\hspace{2pt}$\wedge$\hspace{-1.29mm}\raisebox{0.02mm}{\rule{0.13mm}{2mm}}\hspace{5pt}}}$-, ${\mbox{\hspace{2pt}\small \raisebox{0.0cm}{$\bigtriangleup$}\hspace{2pt}}}$- or ${\hspace{0pt}\sqcap}$-cluster and $i$ is a positive integer not exceeding the outdegree of $c$.

2.  Whenever $c$ is a choice cluster, $\Phi$ contains at most one occurrence of a labmove of the form ${\wp}c.i$.

3.  Whenever $c$ is a sequential cluster and $\Phi$ is of the form ${\langle \ldots , {\wp}c.i ,\ldots, {\wp}c.j ,\ldots \rangle}$, we have $i<j$.

So, for instance, ${\langle {\top}8.1,{\top}7.2 \rangle}$ is a legal run of the cirquent of Figure 7, but ${\langle {\top}8.1,{\top}9.2 \rangle}$ is not, because $9$ is (a gate ID but) not a cluster ID. To summarize once again, selections (moves) in individual [*gates*]{} are no longer available. Rather, they should be made in [*clusters*]{}.

\[m20\] In the context of a given cirquent $C$ and a legal run $\Gamma$ of $C$, we will say that a selectional gate $g$ of a cluster $c$ is [**unresolved**]{} iff either no moves of the form $c.j$ have been made in $\Gamma$, or infinitely many such moves have been made. Otherwise $g$ is [**resolved**]{} and, where $c.i$ is the last move of the form $c.j$ made in $\Gamma$, the child pointed at by the $i$th outgoing edge of $g$ is said to be the [**resolvent**]{} of $g$.

With the terms “unresolved”, “resolved” and “resolvent” conservatively redefined this way, the definition of [**truth**]{} for a cirquent and its nodes is [*literally*]{} the same[^11] in our present case as in the case of the cirquents of the previous section (Definition \[may14c\]), so we do not reproduce it here. The same applies to the definition of the [**Wn**]{} components of the games represented by cirquents (Definition \[may14aa\]). Let us look at the game represented by the cirquent of Figure 7 once again.
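The legality conditions of Definition \[maya\] can be sketched as a checker. The encoding below is hypothetical (a labmove is a `(player, cluster, i)` triple, and each cluster record carries its group, its entitled player, and its outdegree); it is only an illustration of the three conditions, not a full implementation of the semantics:

```python
# Hedged sketch of Definition [maya].  clusters maps a cluster ID to a
# (group, player, outdegree) record, where group is "choice", "sequential"
# or "toggling", and player is the one entitled to move in that cluster.

def legal_position(moves, clusters):
    for player, c, i in moves:
        if c not in clusters:
            return False                       # e.g. a gate ID that is no cluster ID
        group, owner, outdeg = clusters[c]
        if player != owner or not 1 <= i <= outdeg:
            return False                       # condition 1
    for c, (group, owner, outdeg) in clusters.items():
        picks = [i for p, cc, i in moves if cc == c]
        if group == "choice" and len(picks) > 1:
            return False                       # condition 2
        if group == "sequential" and picks != sorted(set(picks)):
            return False                       # condition 3: i < j throughout
    return True

# The clusters of Figure 7: two sqcup (choice) clusters of outdegree 2,
# both resolvable by Top.
fig7 = {7: ("choice", "T", 2), 8: ("choice", "T", 2)}
```

On the Figure 7 example this reproduces the verdicts given above: $\langle{\top}8.1,{\top}7.2\rangle$ passes, while $\langle{\top}8.1,{\top}9.2\rangle$ fails because $9$ is not a cluster ID.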
The meaning of the move “$7.2$” in this game is selecting outgoing edge $\#2$ in [*both*]{} gates of cluster $7$. Intuitively, the effect of such a move is connecting gate $10$ directly to node $2$ and gate $11$ directly to node $6$. Thus, the move “$7.2$” can be seen as a choice (choice $\#2$) [*shared*]{} between gates $10$ and $11$; that shared choice, however, yields different, unshared results for the two gates: result $2$ for gate $10$ while result $6$ for gate $11$. This sort of sharing is very different from the sort of sharing represented by gate $8$: the effect of the move “$8.1$” is sharing both the choice $\#1$ as well as the result $3$ of that choice. Back to the world of Victor and Peggy, imagine they are in their family car on a road between two cities $A$ and $B$. Victor likes sports but never goes to theaters. Peggy likes theaters but never attends games. There is a basketball game and a ballet show tonight in city $A$. And there is a football game and an opera show in city $B$. The shared choice/move in this situation is a choice between “[*drive to $A$*]{}” and “[*drive to $B$*]{}” (they only have one car!). The outcomes of either choice, however, are not shared. For instance, the outcome of the choice “[*drive to $A$*]{}” is “[*see the basketball match*]{}” for Victor while “[*see the ballet*]{}” for Peggy. Victor and Peggy can negotiate and decide between the two pairs $(\mbox{\em Basketball},\mbox{\em Ballet})$ or $(\mbox{\em Football},\mbox{\em Opera})$. But the pairs $(\mbox{\em Basketball},\mbox{\em Opera})$ and $(\mbox{\em Football},\mbox{\em Ballet})$ are not available. As for the stronger type of sharing corresponding to the middle ${\hspace{0pt}\sqcup}$-gate of Figure 7, it can be explained by the following modification of the situation: Both Victor and Peggy are fond of Impressionism. There is a Monet exhibition in city $A$, and a Pissarro exhibition in city $B$. The action/choice — driving to $A$ or driving to $B$ — is again shared. 
But now so is the corresponding outcome: “[*see Monet’s paintings*]{}” or “[*see Pissarro’s paintings*]{}”. While we could continue elaborating on independent philosophical and mathematical/technical motivations for introducing clustering, here we just want to point out the very direct connections of our approach to the already well-motivated [*IF*]{} ([*independence-friendly*]{}) [*logic*]{}. We assume that the reader is familiar with the basics of the latter, or else he or she may take a quick look at the concise yet complete (for our purposes) overview of the subject given in [@Tul09]. It should be noted that we use the term “IF logic” in a generic sense, making no terminological distinction between Hintikka and Sandu’s [@HS97] original version of IF logic and Hodges’s [@Hodges97] generalization of it, termed in [@Hodges07] [*slash logic*]{}. In fact, both the syntax and the semantics for IF formulas that we use are those of slash logic. It is assumed that all formulas of IF logic are written in negation normal form, and that different occurrences of quantifiers in them always bind different variables. We also assume that only existential quantifiers and disjunctions are slashed in the formulas of IF logic — it is known that, in IF logic (as opposed to what is called [*extended IF logic*]{}), slashing universal quantifiers or conjunctions is redundant, having no effect on the semantical meanings of formulas. Finally, without much (if any) loss of generality, we assume that the semantics of IF logic exclusively deals with models with countable domains; namely, such a domain is always $\{1,2,3,\ldots\}$ or some nonempty finite initial portion of it. Computability logic insists on algorithmicity of ${\top}$’s strategies, while IF logic, in its game semantics, allows any strategies.
Let us, for now, consider the version of IF logic which differs from its canonical version only in that it, like CoL, requires the strategies of ${\top}$ (there usually called the ${\mbox{\large $\exists$}\hspace{1pt}}$-Player, Verifier, or Eloise) to be effective, while imposing no restrictions on the strategies of ${\bot}$ (the ${\mbox{\large $\forall$}\hspace{1pt}}$-Player, Falsifier, or Abelard). Call this version of IF logic [**effective IF logic**]{}. Remember that the outgoing edges of each node of a cirquent come in a fixed left-to-right order: edge $\#1$, edge $\#2$, edge $\#3$, etc. Let us call these numbers $1$, $2$, $3$, etc. the [**order numbers**]{} of the corresponding edges. We now claim that the fragment of our cirquent logic where cirquents are allowed to have only two sorts of gates — ${\wedge}$ and ${\hspace{0pt}\sqcup}$ — is sufficient to cover effective IF logic, and far beyond. A verification of this claim — perhaps at the philosophical rather than technical level — is left to the reader. Namely, each formula $E$ of effective IF logic can be understood as the cirquent obtained from it through performing the following operations:

\[dsc1\]

1.  Ignoring slashes, write $E$ in the form of a tree-like cirquent, understanding ${\mbox{\large $\forall$}\hspace{1pt}}$ as a “long” ${\wedge}$-conjunction and ${\mbox{\large $\exists$}\hspace{1pt}}$ as a “long” ${\vee}$-disjunction. This way, each occurrence $O$ of a quantifier, conjunction or disjunction gives rise to one (if $O$ is not in the scope of a quantifier) or many (if $O$ is in the scope of a quantifier) gates. We say that each such gate, as well as each outgoing edge of it, [**originates**]{} from $O$. Also, since we assume that different occurrences of quantifiers in IF formulas always bind different variables, instead of saying “…originates from the occurrence of ${\mbox{\large $\forall$}\hspace{1pt}}y$ (or ${\mbox{\large $\exists$}\hspace{1pt}}y$)” we can unambiguously simply say “…originates from $y$”.

2.  Change the label of each ${\vee}$-gate to ${\hspace{0pt}\sqcup}$.

3.  Cluster the disjunctive gates so that any two such gates $a$ and $b$ belong to the same cluster if and only if they originate from the same occurrence of ${\mbox{\large $\exists$}\hspace{1pt}}x/y_1,\ldots,y_n$ or ${\vee}/y_1,\ldots,y_n$ in $E$, and the following condition is satisfied:

    - Let $e_{1}^{a},\ldots,e_{k}^{a}$ and $e_{1}^{b},\ldots,e_{k}^{b}$ be the paths (sequences of edges) from the root of the tree to $a$ and $b$, respectively. Then, for any $i\in\{1,\ldots,k\}$, the order numbers of the edges $e_{i}^{a}$ and $e_{i}^{b}$ are identical unless these edges originate from one of the variables $y_1,\ldots,y_n$.

Let us see an example to visualize how our construction works.
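Before turning to the example, note that the path-comparison condition in step 3 admits a direct sketch. In the hypothetical encoding below, a path is a list of pairs (variable the edge originates from, order number of the edge), and `slashed` is the set $\{y_1,\ldots,y_n\}$:

```python
# Hedged sketch of the clustering condition of step 3: two disjunctive
# gates originating from the same slashed occurrence cluster together iff
# their root-to-gate paths agree in order numbers everywhere except
# possibly on edges originating from slashed variables.

def same_cluster(path_a, path_b, slashed):
    """Each path is a list of (variable, order_number) pairs."""
    if len(path_a) != len(path_b):
        return False
    return all(va == vb and (na == nb or va in slashed)
               for (va, na), (vb, nb) in zip(path_a, path_b))
```

For instance, for gates originating from $t$ in ${\mbox{\large $\forall$}\hspace{1pt}}x{\mbox{\large $\exists$}\hspace{1pt}}y{\mbox{\large $\forall$}\hspace{1pt}}z{\mbox{\large $\exists$}\hspace{1pt}}t/x,\hspace{-2pt}y\ p(x,y,z,t)$, paths that differ only in the $x$- and $y$-coordinates fall into the same cluster, while a difference in the $z$-coordinate separates them.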
A traditional starting point of introductory or survey papers on IF logic is the formula $$\label{may21} {\mbox{\large $\forall$}\hspace{1pt}}x{\mbox{\large $\exists$}\hspace{1pt}}y{\mbox{\large $\forall$}\hspace{1pt}}z{\mbox{\large $\exists$}\hspace{1pt}}t/x,\hspace{-2pt}y \ p(x,y,z,t).$$ Technically, its meaning can be expressed by the second-order formula ${\mbox{\large $\exists$}\hspace{1pt}}f{\mbox{\large $\exists$}\hspace{1pt}}g{\mbox{\large $\forall$}\hspace{1pt}}x{\mbox{\large $\forall$}\hspace{1pt}}z\ p\bigl(x,f(x),z,g(z)\bigr)$[^12] or the following formula with Henkin’s branching quantifiers: $$\begin{array}{r} {\mbox{\large $\forall$}\hspace{1pt}}x{\mbox{\large $\exists$}\hspace{1pt}}y\\ {\mbox{\large $\forall$}\hspace{1pt}}z{\mbox{\large $\exists$}\hspace{1pt}}t \end{array} \raisebox{2pt}{$p(x,y,z,t)$},$$ with the shape of the quantifier array indicating that the two quantifier blocks ${\mbox{\large $\forall$}\hspace{1pt}}x{\mbox{\large $\exists$}\hspace{1pt}}y$ and ${\mbox{\large $\forall$}\hspace{1pt}}z{\mbox{\large $\exists$}\hspace{1pt}}t$ are independent of each other even though, in (\[may21\]), one occurs in the scope of the other. We agreed earlier to write ${\mbox{\hspace{1pt}\Large $\wedge$}\hspace{1pt}}\hspace{-1pt},{\mbox{\hspace{1pt}\Large $\vee$}\hspace{1pt}}$ instead of ${\mbox{\large $\forall$}\hspace{1pt}},{\mbox{\large $\exists$}\hspace{1pt}}$. So, we rewrite (\[may21\]) as ${\mbox{\hspace{1pt}\Large $\wedge$}\hspace{1pt}}\hspace{-1pt}x\hspace{-1pt}{\mbox{\hspace{1pt}\Large $\vee$}\hspace{1pt}}\hspace{-2pt} y\hspace{-1pt}{\mbox{\hspace{1pt}\Large $\wedge$}\hspace{1pt}}\hspace{-1pt} z\hspace{-1pt}{\mbox{\hspace{1pt}\Large $\vee$}\hspace{1pt}}\hspace{-1pt} t/x,\hspace{-2pt}y \hspace{3pt} p(x,y,z,t)$. This formula, however, is not yet adequate. 
The philosophy of IF logic associates the intuitions of [*finding*]{} (rather than just [*existence*]{}) with existential quantifiers or disjunctions; and, as we know, it is the operator/gate ${\hspace{0pt}\sqcup}$ whose precise meaning is actually [*finding*]{} things (rather than ${\vee}$, which is merely about [*existence*]{} of things). So, ${\mbox{\hspace{1pt}\Large $\vee$}\hspace{1pt}}$ should be replaced with ${\mbox{\Large $\sqcup$}\hspace{1pt}}$, and we get the formula ${\mbox{\hspace{1pt}\Large $\wedge$}\hspace{1pt}}\hspace{-1pt}x{\mbox{\Large $\sqcup$}\hspace{1pt}}y{\mbox{\hspace{1pt}\Large $\wedge$}\hspace{1pt}}\hspace{-1pt} z{\mbox{\Large $\sqcup$}\hspace{1pt}}t/x,\hspace{-2pt}y \hspace{2pt} p(x,y,z,t)$ which, ignoring the slash for now, can be seen as a tree-like cirquent in the sense of the previous section. It now remains to account for the slash by adequately clustering the cirquent. Namely, the clustering should make sure that, whenever $a$ and $b$ are two upper-level (those originating from $t$) ${\hspace{0pt}\sqcup}$-gates such that the paths from the root to them — seen not as sequences of edges but as sequences of the corresponding order numbers — only differ in their first two elements (the ones originating from $x$ and $y$, of which $t$ should be independent), $a$ and $b$ are in the same cluster. Figure 10 illustrates this arrangement. For compactness considerations, in this figure we have assumed that the universe of discourse (the set over which $x,y,z,t$ range) is just $\{1,2\}$; also, we have written $p_{1111}$, $p_{1112}$, etc. instead of $p(1,1,1,1)$, $p(1,1,1,2)$, etc. 
**Figure 10:** Mimicking ${\mbox{\large $\forall$}\hspace{1pt}}x{\mbox{\large $\exists$}\hspace{1pt}}y{\mbox{\large $\forall$}\hspace{1pt}}z{\mbox{\large $\exists$}\hspace{1pt}}t/x,\hspace{-2pt}y \ p(x,y,z,t)$ (when the universe is $\{1,2\}$). *(Picture omitted: the tree-like cirquent with leaves $p_{1111},\ldots,p_{2222}$; the two $y$-level ${\hspace{0pt}\sqcup}$-gates form the singleton clusters $1$ and $2$, while the eight $t$-level ${\hspace{0pt}\sqcup}$-gates form two clusters, $3$ for $z=1$ and $4$ for $z=2$.)*

To feel the difference created by clustering, let us consider the interpretation that sends the four atoms $p_{1111}$, $p_{1122}$, $p_{2212}$, $p_{2221}$ to ${\top}$ and sends all other atoms to ${\bot}$. Then the game represented by the cirquent of Figure 10 cannot be won (by ${\top}$). On the other hand, it would be winnable if there were no clustering. Further, it is also winnable if the clustering is made finer (yet not trivial), as in the cirquent of Figure 11. The latter expresses the “slightly” modified form ${\mbox{\large $\forall$}\hspace{1pt}}x{\mbox{\large $\exists$}\hspace{1pt}}y{\mbox{\large $\forall$}\hspace{1pt}}z{\mbox{\large $\exists$}\hspace{1pt}}t/x \hspace{3pt} p(x,y,z,t)$ of (\[may21\]), and is won by the HPM that generates the run $${\langle {\top}1.1,\ {\top}3.1,\ {\top}4.2,\ {\top}2.2,\ {\top}5.2,\ {\top}6.1 \rangle}$$ (or any permutation of the above). Note that the same run is simply not a legal run of the cirquent of Figure 10.
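These winnability claims can be checked by brute force. Assuming the clusterings just described (in Figure 10, one $t$-cluster per value of $z$; in Figure 11, one per pair of values of $y$ and $z$), a ${\top}$-strategy amounts to one selection per cluster, and the sketch below searches all of them:

```python
# Brute-force winnability check for the games of Figures 10 and 11, under the
# interpretation sending exactly the four listed atoms to Top. An atom
# p(a,b,c,d) is encoded as the tuple (a,b,c,d); a win requires a true atom
# at every leaf reached by the cluster selections.
from itertools import product

U = (1, 2)  # the universe of discourse
TRUE_ATOMS = {(1,1,1,1), (1,1,2,2), (2,2,1,2), (2,2,2,1)}

def winnable_fig10():
    # one y-choice per value of x; one t-choice per value of z (t blind to x, y)
    for y1, y2, t1, t2 in product(U, repeat=4):
        y, t = {1: y1, 2: y2}, {1: t1, 2: t2}
        if all((x, y[x], z, t[z]) in TRUE_ATOMS for x in U for z in U):
            return True
    return False

def winnable_fig11():
    # finer clustering: one t-choice per pair (y, z) (t blind to x only)
    for y1, y2, t11, t12, t21, t22 in product(U, repeat=6):
        y = {1: y1, 2: y2}
        t = {(1,1): t11, (1,2): t12, (2,1): t21, (2,2): t22}
        if all((x, y[x], z, t[(y[x], z)]) in TRUE_ATOMS for x in U for z in U):
            return True
    return False

print(winnable_fig10())  # False: the clustering of Figure 10 defeats every strategy
print(winnable_fig11())  # True: witnessed by the run displayed above
```

The witness found for Figure 11 is exactly the one encoded by the run $\langle\top 1.1, \top 3.1, \top 4.2, \top 2.2, \top 5.2, \top 6.1\rangle$.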
**Figure 11:** Mimicking ${\mbox{\large $\forall$}\hspace{1pt}}x{\mbox{\large $\exists$}\hspace{1pt}}y{\mbox{\large $\forall$}\hspace{1pt}}z{\mbox{\large $\exists$}\hspace{1pt}}t/x \hspace{3pt} p(x,y,z,t)$ (when the universe is $\{1,2\}$). *(Picture omitted: the same tree as in Figure 10, but with the eight $t$-level ${\hspace{0pt}\sqcup}$-gates now split into the four clusters $3,4,5,6$, one per pair of values of $y$ and $z$.)*

While $({\wedge},{\hspace{0pt}\sqcup})$-cirquents allow us to fully capture effective IF logic, they are at the same time significantly more expressive than the latter, even if — or, especially if — we limit ourselves to finite cirquents, and correspondingly limit IF formulas to propositional ones. As mentioned earlier, IF logic has no smooth approach at the purely propositional level, and is forced to severely limit the forms of (meaningful) propositional-level formulas as, for instance, done in [@SandPiet01]. In particular, it encounters serious difficulties in allowing independence from conjunctions or disjunctions (rather than quantifiers). These problems are automatically neutralized under our approach. We introduce no restrictions whatsoever on the forms of cirquents or on the allowable clusterings in them; yet, all such expressions are semantically meaningful. Of course, a penalty for the higher expressive power of cirquents is the awkwardness associated with the necessity to draw cirquents instead of writing formulas. But, again, various syntactic shortcuts can be introduced to make life easier, with recurrence operators, quantifiers and the slash notation being among such shortcuts. It should also be remembered that drawing cirquents may be an annoyance for humans (when writing papers) but not for computers (when using logic in their work); the latter, in fact, would [*much*]{} prefer cirquents, as they are an exponentially more efficient means of expression than formulas. In any case, a more significant achievement than expressiveness is probably avoiding the necessity to deal with the unplayable and troublemaking imperfect-information games on which the traditional semantics for IF logic are based.
We owe this effect to the fact that clustering enforces at the [*game*]{} level what IF logic calls [*uniformity*]{} and tries to enforce at the [*strategy*]{} level. Rather than relying on the players’ integrity — expecting that they will, to their own disadvantage, conscientiously forget moves they have already seen but of which their actions should be independent — clustering simply makes “cheating” physically impossible. But it should be remembered that “effective IF logic” is just our present invention which, while both mathematically and philosophically reasonable, is not at all the same as IF logic in its canonical form. So, a true merger between CoL and IF logic would seemingly require a compromise from one of them: either IF logic should adopt the requirement of effectiveness of strategies, or computability logic should drop this requirement (which is central to its philosophy). Probably neither camp would be willing to make a compromise. Fortunately, there is no real need for any compromises. The following section further generalizes the concept of cirquents in a conservative way. The new cirquents, unlike the cirquents of the present section, are powerful enough to express anything that the traditional (“non-effective”) IF logic can. This is achieved through extending the idea of clustering from selectional gates to ${\vee}$-gates, yet without associating any moves with such gates or clusters.

Clustering ${\vee}$-gates {#s6}
=========================

A [**cirquent**]{} in the sense of the present section is the same as one in the sense of the previous section, with the difference that now not only the set of selectional gates is partitioned into clusters, but also the set of ${\vee}$-gates (but not the set of ${\wedge}$-gates — not yet, at least). The condition on clustering is the same as before: all gates within a given cluster are required to have the same type (label) and the same outdegree.
Cirquents in the sense of the previous section are seen as special cases of cirquents in the present sense, namely, the cases where all ${\vee}$-clusters are singletons. The [**legal positions**]{} of the game represented by a cirquent in this new sense are defined in literally the same way as before (Definition \[maya\]). So, clustering ${\vee}$-gates in a cirquent does not affect the set of its legal runs. By a [**metaselection**]{} for a cirquent $C$ we will mean a (not necessarily effective) partial function $f$ from ${\vee}$-clusters of $C$ to the set of positive integers, such that, for any ${\vee}$-cluster $c$, whenever defined, $f(c)$ does not exceed the outdegree of $c$.[^13] In the context of a given cirquent $C$, a legal run $\Gamma$ of $C$ and a metaselection $f$ for $C$, we will say that a ${\vee}$-gate $g$ of a cluster $c$ is [**unresolved**]{} iff $f$ is undefined at $c$ (note that a childless ${\vee}$-gate will always be unresolved). Otherwise $g$ is [**resolved**]{}, and the child of it pointed at by the $f(c)$th outgoing edge is said to be the [**resolvent**]{} of $g$. As for the same-name concepts “unresolved”, “resolved” and “resolvent” for selectional gates, they are defined literally as before (Definition \[m20\]). Note that these three concepts depend on $f$ but not $\Gamma$ when $g$ is a ${\vee}$-gate, while they depend on $\Gamma$ but not $f$ when $g$ is a selectional gate. The function $f$ thus acts as a “metaextension” of $\Gamma$. Intuitively, it can be thought of as selections in ${\vee}$-clusters made by the guardian angel of ${\top}$ in favor of ${\top}$ [*after*]{} the actual play took place (rather than [*during*]{} it), even if the latter lasted infinitely long; unlike ${\top}$, its guardian angel has magic — nonalgorithmic — intellectual powers to make best possible selections. Technically, however, selections by the “angel”, unlike selections made by either player, are not moves of the game. 
The following definition refines the earlier definitions of truth by relativizing this concept — renamed into [*metatruth*]{} — with respect to a metaselection as an additional parameter. \[may14cc\] Let $C$ be a cirquent, $^*$ an interpretation, $\Gamma$ a legal run of $C$, and $f$ a metaselection for $C$. In this context, with “metatrue” to be read as “[**metatrue w.r.t. $(^*,\Gamma,f)$**]{}”, we say that: - An $L$-port is metatrue iff $L^*={\top}$. - A resolved selectional gate is metatrue iff so is its resolvent. - No unresolved disjunctive selectional gate is metatrue. - Each unresolved conjunctive selectional gate is metatrue. - A ${\vee}$-gate is metatrue iff it is resolved and its resolvent is metatrue. - A ${\wedge}$-gate is metatrue iff so are all of its children. Finally, we say that $C$ is metatrue iff so is its root. The following definition brings us from metatruth back to truth. \[ma14cc\] Let $C$ be a cirquent, $^*$ an interpretation, and $\Gamma$ a legal run of $C$. We say that $C$ is [**true**]{} w.r.t. $(^*,\Gamma)$ iff there is a metaselection $f$ for $C$ such that $C$ is metatrue w.r.t. $(^*,\Gamma,f)$. It is left to the reader to see why the new concept of truth is a conservative generalization of its earlier counterparts. The same applies to the following definition, which completes our definition of the game $C^*$ represented by any given cirquent $C$ under any given interpretation $^*$. \[may14aaa\] Let $C$ be a cirquent, $^*$ an interpretation, and $\Gamma$ a legal run of $C$. Then $\Gamma$ is a ${\top}$-won run of the game $C^*$ iff $C$ is true w.r.t. $(^*,\Gamma)$. We now claim that any formula $E$ of (this time ordinary, “non-effective”) IF logic can be adequately written as a tree-like cirquent $C$ with only ${\vee}$ and ${\wedge}$ gates; namely, such a cirquent $C$ is obtained from $E$ through applying the steps 1 and 3 of Description \[dsc1\], with step 2 omitted. 
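For finite cirquents, Definitions \[may14cc\] and \[ma14cc\] describe a finite search and are directly executable. Below is a minimal sketch under our own ad hoc encoding (nodes as nested tuples, a metaselection as a dictionary from cluster names to child indices — none of this notation is from the text); it also illustrates, on a two-gate ${\vee}$-cluster, how clustering changes truth values:

```python
# Metatruth and truth for moveless (and, or)-cirquents with clustered or-gates,
# following the clauses of the definitions above.
from itertools import product

def metatrue(node, interp, f):
    kind = node[0]
    if kind == 'atom':
        return interp[node[1]]
    if kind == 'and':                    # metatrue iff all children are
        return all(metatrue(c, interp, f) for c in node[1])
    _, cluster, children = node          # a clustered or-gate
    if cluster not in f:                 # unresolved disjunctive gate
        return False
    return metatrue(children[f[cluster]], interp, f)  # look at its resolvent

def true_(node, interp, clusters, sizes):
    # Truth = existence of a metaselection making the cirquent metatrue.
    # Resolving an or-gate can only help here, so total metaselections suffice.
    return any(metatrue(node, interp, dict(zip(clusters, pick)))
               for pick in product(*(range(s) for s in sizes)))

# (p or q) and (r or s) with BOTH or-gates in one cluster c:
# true iff (p and r) or (q and s) -- the two disjunctions must agree.
C1 = ('and', [('or', 'c', [('atom', 'p'), ('atom', 'q')]),
              ('or', 'c', [('atom', 'r'), ('atom', 's')])])
I1 = {'p': True, 'q': False, 'r': True, 's': False}
I2 = {'p': True, 'q': False, 'r': False, 's': True}
print(true_(C1, I1, ['c'], [2]))  # True:  choosing the left child everywhere works
print(true_(C1, I2, ['c'], [2]))  # False: no single choice satisfies both gates

# The same gates in distinct singleton clusters: ordinary (p or q) and (r or s).
C2 = ('and', [('or', 'c1', [('atom', 'p'), ('atom', 'q')]),
              ('or', 'c2', [('atom', 'r'), ('atom', 's')])])
print(true_(C2, I2, ['c1', 'c2'], [2, 2]))  # True
```

The last two lines show the point of clustering: the very same interpretation makes the clustered cirquent false and its unclustered counterpart true.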
For any interpretation $^*$, we will then have $C^*={\top}$ iff $E$ is true (under the same interpretation of atoms) in IF logic. Figure 12 shows an example. As an easy exercise, the reader may want to verify that the cirquent of that figure is ${\bot}$ under the interpretation which sends $p_{1111}$, $p_{1122}$, $p_{2212}$, $p_{2221}$ to ${\top}$ and sends all other atoms to ${\bot}$. Note that this cirquent represents an elementary game, unlike its counterpart from Figure 10.

**Figure 12:** Representing ${\mbox{\large $\forall$}\hspace{1pt}}x{\mbox{\large $\exists$}\hspace{1pt}}y{\mbox{\large $\forall$}\hspace{1pt}}z{\mbox{\large $\exists$}\hspace{1pt}}t/x,\hspace{-2pt}y \hspace{3pt} p(x,y,z,t)$ (when the universe is $\{1,2\}$). *(Picture omitted: the same tree and clustering as in Figure 10, but with ${\vee}$-gates in place of the ${\hspace{0pt}\sqcup}$-gates.)*

Just as with any other claims made in this paper regarding connections to IF logic, we do not attempt to provide a proof of our present claim. Such a proof would require reproducing and analyzing one of the semantics accepted/recognized in the IF logic community, which would take us too far afield: the present paper is, after all, on computability logic rather than IF logic. And even if a known semantics of IF logic and the corresponding fragment of the semantics of CoL turned out not to be exactly equivalent, a question would arise as to which of the two is a more adequate materialization of the original philosophy of IF logic.

Clustering all gates; ranking {#s7}
=============================

Other than the claimed fact that the cirquents of the previous section achieve the full expressive power of IF logic, there are no good reasons to stop at those cirquents. Indeed, if we have clustered selectional and ${\vee}$-gates, why not do the same with the remaining ${\wedge}$ type of gates? Naturally, the semantics of clustered ${\wedge}$-gates would have to be symmetric to that of clustered ${\vee}$-gates. Namely, a universally quantified metaselection $h$ should be associated with them, just as we associated an existentially quantified metaselection $f$ with ${\vee}$-clusters in the previous section.
Such an $h$ can be thought of as a guardian angel of ${\bot}$ that makes selections in ${\wedge}$-clusters in favor of ${\bot}$ after the game has been played by the two players. One can show that then, no matter how the ${\wedge}$-gates are clustered, truth in the sense of the previous section is equivalent to an assertion that sounds like the right side of Definition \[ma14cc\] but starts with the words “[*there is an $f$ such that, for all $h$,…*]{}” instead of just “[*there is an $f$ such that…*]{}”. But then the question arises: why this quantification order and not “[*for all $h$ there is an $f$ such that…*]{}”, which would obviously yield a different yet meaningful concept of truth?[^14] Again, there is no good answer, and here we see the need for a yet more general approach, flexible enough to handle the semantical concepts induced by either quantification order. This brings us to the idea of introducing one more parameter into cirquents, which we call [*ranking*]{}. The latter indicates the order in which selections by the “guardian angels” are to be made. Furthermore, we allow not just one but several “guardian angels” for either player, with each “angel” responsible for a certain subset of clusters rather than all clusters of a given type. Then, again, ranking fixes the order in which these multiple “angels” should make their selections. Let us get down to formal definitions to make these intuitions precise and clear. A [**cirquent**]{} in the sense of the present section is the same as one in the sense of the previous section, with the following two changes. Firstly, now the set of [*all gates*]{} (including ${\wedge}$-gates) is partitioned into clusters, with each cluster, as before, satisfying the condition that all gates in it have the same type and the same outdegree. Secondly, there is an additional parameter called [**ranking**]{}.
The latter is a partition of the set of all parallel (${\vee}$ and ${\wedge}$) clusters into a finite number of subsets, called [**ranks**]{}, arranged in a linear order, with each rank satisfying the condition that all clusters in it have the same type (but not necessarily the same outdegree). A rank containing ${\wedge}$-clusters is said to be [**conjunctive**]{}, and a rank containing ${\vee}$-clusters [**disjunctive**]{}. Since the ranks are linearly ordered, we can refer to them as the $1$st rank, the $2$nd rank, etc., or rank $1$, rank $2$, etc. Also, instead of “cluster $c$ is in the $i$th rank”, we may say “$c$ is of (or has) rank $i$”. Cirquents in the sense of the previous section are understood as special cases of cirquents in the present sense, namely, as cirquents where all ${\wedge}$-clusters are singletons of the highest rank, and all ${\vee}$-clusters (which are not necessarily singletons) are of the lowest rank. Here we say “highest rank” and “lowest rank” instead of “rank $2$” and “rank $1$” just for safety, because, if one of the two types of parallel gates is absent, then rank $1$ would be both highest and lowest; and, if there are no parallel gates at all, the cirquent would not even have rank $1$. Let $C$ be a cirquent with $k$ ranks, and let $1\leq i\leq k$. An [**$i$-metaselection**]{} for $C$ is a (not necessarily effective) partial function $f_i$ from the $i$th rank of $C$ to the set of positive integers, satisfying the condition that, for any given cluster $c$ of the $i$th rank, whenever $f_i(c)$ is defined, its value does not exceed the outdegree of $c$. And a (simply) [**metaselection**]{} for $C$ is a $k$-tuple $(f_1,\ldots,f_k)$ where, for each $1\leq i\leq k$, $f_i$ is an $i$-metaselection for $C$.
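The question raised above about quantification order is not a pedantic one: the two orders genuinely differ. Abstracting metatruth into a function $M(f,h)$ of the selections of the two “angels” (our own toy abstraction, not a construction from the text), a minimal illustration:

```python
# With M(f, h) = (f == h) over two possible selections, "there is an f such
# that for all h ..." fails, while "for all h there is an f such that ..." holds.
M = lambda f, h: f == h
exists_forall = any(all(M(f, h) for h in range(2)) for f in range(2))
forall_exists = all(any(M(f, h) for f in range(2)) for h in range(2))
print(exists_forall, forall_exists)  # False True
```

Ranking is precisely the device that records which of such quantification orders is meant.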
Clustering parallel gates and ranking has no effect on the set of [**legal runs**]{} of the game represented by a cirquent, so the definition of legal positions for cirquents of Section \[s5\] (Definition \[maya\]) transfers to the present case without any changes. \[jun4\] In the context of a given cirquent $C$ with $k$ ranks, a legal run $\Gamma$ of $C$ and a metaselection $\vec{f}=(f_1,\ldots,f_k)$ for $C$, we will say that a ${\vee}$-gate $g$ of a cluster $c$ is [**unresolved**]{} iff, where $i$ is the rank of $c$, the function $f_i$ is undefined at $c$. Otherwise $g$ is [**resolved**]{}, and the child of it pointed at by the $f_i(c)$th outgoing edge is said to be the [**resolvent**]{} of $g$. As for the same-name concepts “unresolved”, “resolved” and “resolvent” for selectional gates, they are defined literally as before (Definition \[m20\]). The following definition of metatruth can be seen to conservatively generalize its predecessor, Definition \[may14cc\]: \[may14ccc\] Let $C$ be a cirquent, $^*$ an interpretation, $\Gamma$ a legal run of $C$, and $\vec{f}$ a metaselection for $C$. In this context, with “metatrue” to be read as “[**metatrue w.r.t. $(^*,\Gamma,\vec{f})$**]{}”, we say that: - An $L$-port is metatrue iff $L^*={\top}$. - A resolved gate (of any of the eight types) is metatrue iff so is its resolvent. - No unresolved disjunctive gate (of any of the four types) is metatrue. - Every unresolved conjunctive gate (of any of the four types) is metatrue. Finally, we say that $C$ is metatrue iff so is its root. The following definition brings us from metatruth back to truth. Again, it can be seen to conservatively generalize its predecessor, Definition \[ma14cc\]: \[may14k\] Let $C$ be a cirquent with $k$ ranks, $^*$ an interpretation, and $\Gamma$ a legal run of $C$. We say that $C$ is [**true**]{} w.r.t. $(^*,\Gamma)$ iff $$\mbox{\em ${\cal Q}_1 f_1\ldots {\cal Q}_k f_k$ such that $C$ is metatrue w.r.t. 
$(^*,\Gamma,(f_1,\ldots,f_k))$.}$$ Here each ${\cal Q}_i f_i$ abbreviates the phrase “for every $i$-metaselection $f_i$ for $C$” if the $i$th rank is conjunctive, and “there is an $i$-metaselection $f_i$ for $C$” if the $i$th rank is disjunctive. With truth redefined this way, the (remaining) [**Wn**]{} component of the game $C^*$ represented by a cirquent $C$ under an interpretation $^*$ is defined as before. Namely, a legal run $\Gamma$ of $C^*$ is considered ${\top}$-won iff $C$ is true w.r.t. $(^*,\Gamma)$. To understand what we have achieved by introducing ranking and why such a generalization of cirquents was naturally called for, let us, for simplicity, limit our attention to selectional-gate-free cirquents. This fragment of our logic can be seen to be sufficient to capture and naturally generalize the conservative extension of IF logic known as [*extended IF logic*]{} (cf. [@Tul09]). The latter, in addition to what IF logic calls [*strong negation*]{} $\sim$, also considers [*weak negation*]{} ${\neg}$. While $\sim \hspace{-3pt}E$ simply means the result of changing in $E$ each operator and atom to its dual ($p$ to ${\neg}p$ and vice versa, ${\mbox{\large $\forall$}\hspace{1pt}}$ to ${\mbox{\large $\exists$}\hspace{1pt}}$ and vice versa, ${\wedge}$ to ${\vee}$ and vice versa) and hence there is no real need to have $\sim$ as a primitive, weak negation ${\neg}$ in (extended) IF logic is quite problematic. Namely, the latter does not act like an ordinary operator that can be applied anywhere in a formula; rather, extended IF logic (essentially) only allows ${\neg}$ to be applied to entire IF formulas, thus deeming meaningless and illegal expressions such as, say, ${\mbox{\large $\exists$}\hspace{1pt}}u{\neg}{\mbox{\large $\forall$}\hspace{1pt}}x{\mbox{\large $\exists$}\hspace{1pt}}y{\mbox{\large $\forall$}\hspace{1pt}}z{\mbox{\large $\exists$}\hspace{1pt}}t/x \hspace{3pt} p(x,y,z,t,u)$. 
This is so because the traditional approaches to IF logic are not general enough to directly provide a semantics for ${\neg}$. This odd situation makes it evident that more general approaches are necessary. Our approach can claim to be one that fits the bill. As noted earlier, the reader is expected to be familiar with the basic concepts and ideas of IF logic and, specifically, the two concepts of negation that we are discussing. So, we only explain the present point through particular examples. Our earlier assumption was that only existential quantifiers and disjunctions were slashed in formulas of non-extended IF logic, as slashing universal quantifiers or conjunctions is generally redundant there. This is no longer the case once negations are involved, however. Accordingly, from now on, we depart from the above assumption and allow slashing any operators in what we consider to be legitimate formulas of IF logic. Earlier we saw how to translate an IF formula $E$ into an equivalent cirquent. Now we conservatively refine that translation method in a way that accounts for the possibility that $E$ contains slashed conjunctions and/or universal quantifiers. Namely, $E$ should be understood as the cirquent $E^\circ$ defined below: \[dsc2\] Let $E$ be any formula of (non-extended) IF logic. We define $E^\circ$ as the cirquent obtained from $E$ through performing the following steps: 1. Ignoring slashes, write $E$ in the form of a tree-like cirquent, understanding ${\mbox{\large $\forall$}\hspace{1pt}}$ as a “long” ${\wedge}$-conjunction and ${\mbox{\large $\exists$}\hspace{1pt}}$ as a “long” ${\vee}$-disjunction. This way, each occurrence $O$ of a quantifier, conjunction or disjunction gives rise to one (if $O$ is not in the scope of a quantifier) or many (if $O$ is in the scope of a quantifier) gates. We say that each such gate, as well as each outgoing edge of it, [**originates**]{} from $O$.
Also, since we assume that different occurrences of quantifiers in IF formulas always bind different variables, instead of saying “…originates from the occurrence of ${\mbox{\large $\forall$}\hspace{1pt}}y$ (or ${\mbox{\large $\exists$}\hspace{1pt}}y$)” we can unambiguously simply say “…originates from $y$”. 2. Cluster the gates so that any two gates $a$ and $b$ belong to the same cluster if and only if they originate from the same occurrence of ${\mbox{\large $\exists$}\hspace{1pt}}x/y_1,\ldots,y_n$,  ${\vee}/y_1,\ldots,y_n$,  ${\mbox{\large $\forall$}\hspace{1pt}}x/y_1,\ldots,y_n$  or  ${\wedge}/y_1,\ldots,y_n$  in $E$, and the following condition is satisfied: - Let $e_{1}^{a},\ldots,e_{k}^{a}$ and $e_{1}^{b},\ldots,e_{k}^{b}$ be the paths (sequences of edges) from the root of the tree to $a$ and $b$, respectively. Then, for any $i\in\{1,\ldots,k\}$, the order numbers of the edges $e_{i}^{a}$ and $e_{i}^{b}$ are identical unless these edges originate from one of the variables $y_1,\ldots,y_n$. 3. Impose ranking on the resulting cirquent, putting all ${\vee}$-clusters into the lowest rank and all ${\wedge}$-clusters into the highest rank. We claim that such a translation of IF formulas $E$ to cirquents $E^\circ$ is adequate. We further claim that any formula ${\neg}E$ of extended IF logic adequately translates into the cirquent $({\neg}E)^\circ$ defined below: \[dsc3\] Let $E$ be any formula of (non-extended) IF logic. We define $({\neg}E)^\circ$ as the cirquent obtained from $E$ through performing the following steps: 1. Turn $E$ into $E^\circ$ according to Description \[dsc2\]. 2. Change in $E^\circ$ every port label (literal) $p$ to ${\neg}p$ and vice versa, and also change every gate label ${\vee}$ to ${\wedge}$ and vice versa. Finally, we claim that any formula $\sim\hspace{-3pt}E$ of IF logic adequately translates into the cirquent $(\sim\hspace{-3pt} E)^\circ$ defined below: \[dsc4\] Let $E$ be any formula of (non-extended) IF logic.
We define $(\sim\hspace{-3pt} E)^\circ$ as the cirquent obtained from $E$ through performing the following steps: 1. Turn $E$ into $({\neg}E)^\circ$ according to Description \[dsc3\]. 2. Swap the ranks in the resulting cirquent. That is, make all elements of the formerly lowest rank now belong to the highest rank, and vice versa. Figures 13, 14 and 15 illustrate applications of our translations to the IF-logic formula $${\mbox{\large $\forall$}\hspace{1pt}}x{\mbox{\large $\exists$}\hspace{1pt}}y{\mbox{\large $\forall$}\hspace{1pt}}z/x,\hspace{-2pt}y{\mbox{\large $\exists$}\hspace{1pt}}t/x,\hspace{-2pt}y\hspace{3pt} p(x,y,z,t)$$ and two forms of its negation. As before, the universe of discourse here is assumed to be $\{1,2\}$. For compactness, we have written $\overline{p_{1111}}$, $\overline{p_{1112}}$, etc. instead of ${\neg}{p_{1111}}$, ${\neg}{p_{1112}}$, etc. To each gate in those figures we have attached an expression of the form $n^m$. It should be understood as an indication that the gate belongs to cluster $n$, and that this cluster is of rank $m$.
[Cirquent diagram omitted.]

**Figure 13:** ${\mbox{\large $\forall$}\hspace{1pt}}x{\mbox{\large $\exists$}\hspace{1pt}}y{\mbox{\large $\forall$}\hspace{1pt}}z/x,\hspace{-2pt}y {\mbox{\large $\exists$}\hspace{1pt}}t/x,\hspace{-2pt}y \hspace{3pt} p(x,y,z,t)$ (when the universe is $\{1,2\}$)

[Cirquent diagram omitted.]

**Figure 14:** ${\neg}{\mbox{\large $\forall$}\hspace{1pt}}x{\mbox{\large $\exists$}\hspace{1pt}}y{\mbox{\large $\forall$}\hspace{1pt}}z/x,\hspace{-2pt}y {\mbox{\large $\exists$}\hspace{1pt}}t/x,\hspace{-2pt}y \hspace{3pt} p(x,y,z,t)$ (when the universe is $\{1,2\}$)

[Cirquent diagram omitted.]

**Figure 15:** $\sim\hspace{-3pt} {\mbox{\large $\forall$}\hspace{1pt}}x{\mbox{\large $\exists$}\hspace{1pt}}y{\mbox{\large $\forall$}\hspace{1pt}}z/x,\hspace{-2pt}y {\mbox{\large $\exists$}\hspace{1pt}}t/x,\hspace{-2pt}y \hspace{3pt} p(x,y,z,t)$ (when the universe is $\{1,2\}$)

The above cirquents are pairwise extensionally non-identical. An interpretation separating the cirquent of Figure 13 from those of Figures 14 and 15 is one that sends all atoms to ${\top}$. And an interpretation separating the cirquent of Figure 14 from the other two cirquents (by making the former ${\top}$ while the latter ${\bot}$) is the one that sends the four atoms $p_{1111}$, $p_{1221}$, $p_{2112}$, $p_{2222}$ to ${\top}$ and sends all other atoms to ${\bot}$. The cirquent of Figure 13 can be seen to be extensionally identical to the cirquent of Figure 12. In general, the same would be the case for any pair of cirquents that syntactically relate to each other in the same way as the cirquents of Figures 13 and 12 do, namely, where one cirquent has all ${\vee}$-clusters in the lowest rank and all ${\wedge}$-clusters in the highest rank, and the other cirquent is the result of ignoring in the first one all ${\wedge}$-clusters and ignoring ranking, after which it can be understood as a cirquent in the limited sense of Section \[s6\].
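The separating interpretations just described can be verified by brute force. Under the classical, elementary-atom reading, the cirquent of Figure 13 is true iff there are Skolem functions respecting the independence constraints: $y$ may depend only on $x$, and $t$ only on $z$. The following sketch (function and variable names are ours; it checks only this classical reading, not the full metatruth machinery) confirms the claims over the universe $\{1,2\}$:

```python
from itertools import product

U = [1, 2]

def fig13_true(p):
    # Truth of  Ax Ey Az/x,y Et/x,y p(x,y,z,t)  over U, read as:
    # there exist f, g such that p(x, f(x), z, g(z)) holds for all x, z
    # (y depends only on x, t only on z, per the slash sets).
    funcs = [dict(zip(U, v)) for v in product(U, repeat=len(U))]
    return any(all(p[(x, f[x], z, g[z])] for x in U for z in U)
               for f in funcs for g in funcs)

atoms = list(product(U, repeat=4))

# Interpretation sending every atom to T: Figure 13 comes out true,
# so its negation (Figure 14) comes out false.
all_top = {a: True for a in atoms}
print(fig13_true(all_top))   # True

# Interpretation sending p_1111, p_1221, p_2112, p_2222 to T and the rest
# to F: Figure 13 comes out false, hence Figure 14 comes out true.
dagger = {a: a in {(1, 1, 1, 1), (1, 2, 2, 1), (2, 1, 1, 2), (2, 2, 2, 2)}
          for a in atoms}
print(fig13_true(dagger))    # False
```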
The cirquent of Figure 14 is the exact opposite of the cirquent of Figure 13, in the sense that, under any interpretation $^*$, one is ${\top}$ iff the other is ${\bot}$. In general, if two cirquents $C_1$ and $C_2$ syntactically relate to each other as the cirquents of Figures 13 and 14 do, then, for any interpretation $^*$, we will have $C_{2}^{*}={\neg}C_{1}^{*}$, where ${\neg}$ is computability logic’s ordinary negation operation of Definition \[negdef\]. As for the cirquent of Figure 15, it appears to be a less natural modification of the cirquent of Figure 13 than the cirquent of Figure 14 is. In particular, it is not clear why we, in the process of transforming the cirquent of Figure 13 into the cirquent of Figure 15, not only changed the label of each node to its dual, but also swapped the ranks. Furthermore, it would not be clear how to “swap” ranks if we had more than two of them. So, in spite of IF logic’s tradition of seeing $\sim$ as the primary sort of negation and treating the “ill-behaved” ${\neg}$ as a second-class citizen, we arrive at the view that it is ${\neg}$ rather than $\sim$ that is truly natural and deserves the first-class status. Descriptions \[dsc2\] and \[dsc3\] only generate (sequential-gate-free) cirquents with two (in normal cases) or fewer (in pathological cases) ranks, and hence these sorts of cirquents are sufficient for capturing extended IF logic. Was there then a reasonable call for also considering cirquents with greater numbers of ranks? After all, any approach in any area of mathematics admits an infinite series of generalizations, and one should simply stop somewhere. This is true but, in the process of generalizing, one should stop only at a natural point where we have a more or less closed (in whatever sense) system. And stopping at cirquents with $\leq 2$ ranks (which is essentially what extended IF logic did) would not be such a natural place.
For, as noted earlier, the formalism of extended IF logic is not closed under its logical operators, and we would be forced to deal with a similar sort of artificial restriction had we limited our considerations only to cirquents with $\leq 2$ ranks. To make the above point clearer, let us extend the syntax of IF logic by requiring that all occurrences of quantifiers, conjunctions and disjunctions be superscripted with positive integers, satisfying the following two conditions:

- Whenever $1\leq i<j$ and $j$ is the superscript of some occurrence, so is $i$.
- Whenever $i$ is the superscript of an occurrence of ${\mbox{\large $\exists$}\hspace{1pt}}$ or ${\vee}$, the same $i$ is not the superscript of an occurrence of ${\mbox{\large $\forall$}\hspace{1pt}}$ or ${\wedge}$.

Such superscripts will be understood as indications of the ranks of the clusters originating from the corresponding occurrences of operators when turning formulas into cirquents in the style of Description \[dsc2\]. That is, clause 3 of Description \[dsc2\] should now (for this new syntax of IF logic) read as follows:

> Impose ranking on the resulting cirquent, putting all clusters originating from occurrences of $i$-superscripted operators[^15] into the $i$th rank, for any superscript $i$ occurring in $E$.

We baptize this newly extended version of IF logic as [**ranked IF logic**]{}. Let us call the highest superscript appearing in a formula of ranked IF logic the [**ranking depth**]{} of that formula. Of course, extended IF logic is the fragment of ranked IF logic limited to formulas of ranking depth $\leq 2$.
Namely, each non-negated formula $E$ of IF logic translates into ranked IF logic as the result $F$ of adding the superscript $1$ to each occurrence of ${\mbox{\large $\exists$}\hspace{1pt}}$ and ${\vee}$, and adding the superscript $2$ to each occurrence of ${\mbox{\large $\forall$}\hspace{1pt}}$ and ${\wedge}$ — well, unless $E$ contains neither ${\vee}$ nor ${\mbox{\large $\exists$}\hspace{1pt}}$, in which case the superscript $1$ rather than $2$ should be added to the occurrences of ${\mbox{\large $\forall$}\hspace{1pt}}$ and ${\wedge}$. Next, for [*any*]{} formula $F$ of ranked IF logic (including the cases when $F$ is obtained from $E$ as above), ${\neg}F$ can be understood as an abbreviation for the result of changing in $F$ each occurrence of each literal $p$ to ${\neg}p$ and vice versa, each occurrence of ${\wedge}$ to ${\vee}$ and vice versa, and each occurrence of ${\mbox{\large $\forall$}\hspace{1pt}}$ to ${\mbox{\large $\exists$}\hspace{1pt}}$ and vice versa, [*without*]{} changing any superscripts in this process. With ${\neg}$ treated as just explained, in contrast with the situation in extended IF logic, ${\neg}$ can meaningfully occur anywhere in an expression of ranked IF logic.
For instance, we can write $${\mbox{\large $\exists$}\hspace{1pt}}u^1{\neg}{\mbox{\large $\forall$}\hspace{1pt}}^1 x{\mbox{\large $\exists$}\hspace{1pt}}^2 y{\mbox{\large $\forall$}\hspace{1pt}}^1 z{\mbox{\large $\exists$}\hspace{1pt}}^2 t/x \hspace{3pt} p(x,y,z,t,u),$$ which will be simply understood as an abbreviation of $${\mbox{\large $\exists$}\hspace{1pt}}u^1{\mbox{\large $\exists$}\hspace{1pt}}^1 x{\mbox{\large $\forall$}\hspace{1pt}}^2 y{\mbox{\large $\exists$}\hspace{1pt}}^1 z{\mbox{\large $\forall$}\hspace{1pt}}^2 t/x \hspace{3pt} {\neg}p(x,y,z,t,u).$$ To get a further feel for the advantages of ranked IF logic over extended IF logic, consider the formula ${\neg}{\mbox{\large $\exists$}\hspace{1pt}}x{\mbox{\large $\forall$}\hspace{1pt}}y/x\hspace{-2pt}\sim\hspace{-2pt} p(x,y,z)$ of the latter which, in ranked IF logic, will be written as ${\mbox{\large $\forall$}\hspace{1pt}}^1 x{\mbox{\large $\exists$}\hspace{1pt}}^2 y/x \hspace{3pt}p(x,y,z)$. As we are dealing with a legal and hence semantically meaningful expression of extended IF logic, we naturally want to be able to quantify it — say, existentially — over $z$, and also be able to arbitrarily extend the original independences — say, by making both quantifiers independent of ${\mbox{\large $\exists$}\hspace{1pt}}z$ and vice versa. Alas, extended IF logic does not permit applying quantification to a ${\neg}$-negated compound formula. But ranked IF logic does. Namely, with a little analysis, the formula $$\label{eee1} {\mbox{\large $\exists$}\hspace{1pt}}^1 z{\mbox{\large $\forall$}\hspace{1pt}}^2 x/z{\mbox{\large $\exists$}\hspace{1pt}}^3 y/z,\hspace{-2pt}x \hspace{3pt}p(x,y,z)$$ can be seen to account for the intuitions that we wanted to capture by ${\mbox{\large $\exists$}\hspace{1pt}}z$-quantifying the formula and then making the new and old quantifiers independent of each other.
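The superscript-preserving dualization that defines ${\neg}F$ is purely mechanical, which the following toy sketch makes explicit. The encoding (a quantifier prefix plus one signed atom, with names of our own choosing) is not part of the paper's formalism; it merely shows that ranks and slash sets survive negation untouched:

```python
DUAL = {'A': 'E', 'E': 'A'}   # forall <-> exists; 'and'/'or' would swap likewise

def neg(formula):
    # formula = (prefix, literal); prefix is a list of
    # (quantifier, rank, variable, slashed_variables) tuples.
    # Negation dualizes each quantifier and flips the literal's sign,
    # leaving ranks and slash sets unchanged.
    prefix, (sign, atom) = formula
    return ([(DUAL[q], rank, var, slashes) for (q, rank, var, slashes) in prefix],
            (not sign, atom))

# neg of  A^1 x E^2 y A^1 z E^2 t/x  p  is  E^1 x A^2 y E^1 z A^2 t/x  not-p:
f = ([('A', 1, 'x', []), ('E', 2, 'y', []), ('A', 1, 'z', []), ('E', 2, 't', ['x'])],
     (True, 'p'))
print(neg(f))
```

As expected of an involution, applying `neg` twice returns the original formula.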
Figure 16 shows formula (\[eee1\]) as a cirquent, and Figure 17 shows two cirquents obtained from it by putting both existential quantifiers into the same rank in an attempt to mechanically turn (\[eee1\]) into an equivalent formula of ranking depth $2$ (a formula of extended IF logic). Either attempt fails. Namely, let $^\dagger$ be the interpretation that sends the two atoms $p_{111}$, $p_{122}$ to ${\top}$ and sends all other atoms to ${\bot}$. Next, let $^\ddagger$ be the interpretation that sends the two atoms $p_{111}$, $p_{222}$ to ${\top}$ and sends all other atoms to ${\bot}$. It can be seen that $$\mbox{$\bigl({\mbox{\large $\exists$}\hspace{1pt}}^1 z{\mbox{\large $\forall$}\hspace{1pt}}^2 x/z{\mbox{\large $\exists$}\hspace{1pt}}^3 y/z,\hspace{-2pt}x \hspace{3pt}p(x,y,z)\bigr)^\dagger={\top}$ whereas $\bigl({\mbox{\large $\exists$}\hspace{1pt}}^1 z{\mbox{\large $\forall$}\hspace{1pt}}^2 x/z{\mbox{\large $\exists$}\hspace{1pt}}^1 y/z,\hspace{-2pt}x \hspace{3pt}p(x,y,z)\bigr)^\dagger={\bot}$,}$$ and $$\mbox{$\bigl({\mbox{\large $\exists$}\hspace{1pt}}^1 z{\mbox{\large $\forall$}\hspace{1pt}}^2 x/z{\mbox{\large $\exists$}\hspace{1pt}}^3 y/z,\hspace{-2pt}x\hspace{3pt} p(x,y,z)\bigr)^\ddagger={\bot}$ whereas $\bigl({\mbox{\large $\exists$}\hspace{1pt}}^2 z{\mbox{\large $\forall$}\hspace{1pt}}^1 x/z{\mbox{\large $\exists$}\hspace{1pt}}^2 y/z,\hspace{-2pt}x \hspace{3pt} p(x,y,z)\bigr)^\ddagger={\top}$.}$$

[Cirquent diagram omitted.]

**Figure 16:** ${\mbox{\large $\exists$}\hspace{1pt}}^1 z{\mbox{\large $\forall$}\hspace{1pt}}^2 x/z{\mbox{\large $\exists$}\hspace{1pt}}^3 y/z,\hspace{-2pt}x \hspace{3pt} p(x,y,z)$ (when the universe is $\{1,2\}$)

[Cirquent diagrams omitted.]

**Figure 17:** ${\mbox{\large $\exists$}\hspace{1pt}}^1 z{\mbox{\large $\forall$}\hspace{1pt}}^2 x/z{\mbox{\large $\exists$}\hspace{1pt}}^1 y/z,\hspace{-2pt}x \hspace{3pt} p(x,y,z)$ and ${\mbox{\large $\exists$}\hspace{1pt}}^2 z{\mbox{\large $\forall$}\hspace{1pt}}^1 x/z{\mbox{\large $\exists$}\hspace{1pt}}^2 y/z,\hspace{-2pt}x \hspace{3pt} p(x,y,z)$

As we just had a chance to observe, ranked IF logic provides greater syntactic flexibility and convenience than extended IF logic does. This gives us reasons to expect that the former is (not only a more direct and flexible means of expression but also) properly more expressive than the latter, in the same sense as the latter is known to be more expressive than ordinary, non-extended IF logic. One way to prove this conjecture would be to express, through a formula $T$ of ranked IF logic, a definition of truth for formulas of ranked IF logic of ranking depth $\leq 2$ (such formulas fully cover extended IF logic). Now, if $T$ itself were expressible as a formula of ranking depth $\leq 2$, then we would be able to produce a paradox by writing a formula of ranking depth $\leq 2$ that asserts its own not being true. Like any approach, ours allows further generalizations. For instance, our present linear orders on ranks can be relaxed to partial orders, which may give rise to an IF-logic-style approach to independences between different ranks. But enough is enough. Things have already gone quite far, and further generalizations would be reasonable to make only if and when a clear call for them comes.
As pointed out, a call for the generalizations (of both CoL and IF logic) we have made so far in this paper was the necessity to make the formalisms reasonably complete, and to neutralize certain unsettling, incompleteness-caused phenomena such as the odd status of weak negation ${\neg}$ in IF logic, or the impossibility of properly developing IF logic at the purely propositional level. However, in our step-by-step generalization process, stopping at cirquents of the present section would not be right. There is one last necessary step remaining, which will be taken in the following section.

General ports {#s8}
=============

Imagine a finite cirquent $C$ in the sense of any of the previous sections. Let $p_1,\ldots,p_n$ be the atoms, listed in their lexicographic order, used (positively or negatively) in the labels of the ports of $C$. Then $C$ is, in fact, an operation that takes an $n$-tuple of elementary games (an interpretation, that is) and produces a new, not-necessarily elementary (unless $C$ only has ${\vee}$ and ${\wedge}$ gates) game. The same is the case when $C$ is infinite, only here, as an operation, $C$ may take an infinite sequence (rather than just an $n$-tuple) of elementary games. Cirquents thus are systematic ways to generate and express an infinite variety of operations on games. However, as just noted, as long as we only consider cirquents in the sense of the previous section(s), all such operations are limited to ones whose arguments are elementary games ${\top}$ and ${\bot}$. These are two very special and simple cases of games. So, a natural call comes for generalizing our approach in a way that allows cirquents to express operations not only with elementary arguments, but also with arguments that can be arbitrary interactive computational problems (static games, that is) considered in CoL.
One way to achieve the above would be to change the concept of an interpretation $^*$ so that now it is allowed to send the atoms of a cirquent to any, not-necessarily-elementary, static games. By doing so we would certainly gain a lot, but just as much would be lost: the class of valid cirquents would shrink, victimizing many innocent ones such as, say, $p{\rightarrow}p{\wedge}p$ or $p{\vee}p{\rightarrow}p{\hspace{0pt}\sqcup}p$. The point is that elementary problems (games) are meaningful and interesting in their own right, and losing the ability to differentiate them from problems in general would be too much of a sacrifice. For instance, classical logic, IF logic, or the systems of computability-logic-based arithmetic constructed in [@Japtowards; @cla4; @cla5], are exclusively concerned with cases where atoms are interpreted as elementary problems. We have a better solution. It is simply allowing two sorts of atoms in the language, one for elementary problems and the other for all problems. This way, we can express not only combinations of problems of either sort within the same formal language, but also combinations that intermix elementary problems with not-necessarily-elementary ones. Let us rename the objects to which we earlier referred as (simply) “atoms” into [**elementary atoms**]{}. In addition to elementary atoms, we fix another infinite set of alphanumeric strings, disjoint from the set of elementary atoms, and call its elements [**general atoms**]{}. We shall continue using the lowercase $p$, $q$, $r$, $s$, $p_1$, $p(3,4)$, …as metavariables for elementary atoms, and we will be using the uppercase $P$, $Q$, $R$, $S$, $P_1$, $P(3,4)$, …as metavariables for general atoms. As before, a [**literal**]{} is $L$ or ${\neg}L$, where $L$ is an atom. In the former case the literal is said to be [**positive**]{}, and in the latter case [**negative**]{}.
Such a literal will be said to be elementary or general depending on whether the atom $L$ is elementary or general. The two literals $L$ and ${\neg}L$ are said to be [**opposite**]{}. It is assumed that the question of whether a literal is elementary or general is decidable. A [**cirquent**]{} in the sense of the present section means the same as one in the sense of the previous section, with the only difference that now not only elementary, but also general literals are allowed as labels of ports. A port is said to be [*elementary*]{} or [*general*]{}, [*positive*]{} or [*negative*]{} depending on whether its label is so. Similarly, two ports are said to be [*opposite*]{} iff their labels are so. An [**interpretation**]{} now is a function $^*$ that (as before) sends each elementary atom $p$ to an elementary game $p^*$, and sends each general atom $P$ to any, not-necessarily-elementary, static game $P^*$. This function immediately extends to all literals by stipulating that, for any (elementary or general) atom $W$, $({\neg}W)^*={\neg}(W^*)$, where ${\neg}$ in ${\neg}(W^*)$ is the ordinary game negation operation of Definition \[negdef\]. For a run $\Gamma$ and a string $\alpha$, we will be using $\Gamma^\alpha$ to denote the result of deleting in $\Gamma$ all labmoves except those that look like ${\wp}\alpha\beta$ (either player ${\wp}$ and whatever string $\beta$), and then further deleting the prefix $\alpha$ in each remaining move — that is, replacing each ${\wp}\alpha\beta$ by ${\wp}\beta$. Our present definition of the legal runs of the game $C^*$ represented by a cirquent $C$ under an interpretation $^*$ is similar to the earlier definition(s), with the difference that now additional moves of the form $a.\alpha$ can be made, where $a$ is (the ID of) a general port. The intuitive meaning of such a move is making the move $\alpha$ in the copy of the game $L^*$ associated with $a$, where $L$ is the label of $a$.
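The $\Gamma^\alpha$ operation just defined is a plain filter-and-strip on labmoves. A minimal sketch (the pair-based representation of labmoves is our own choice):

```python
def project(run, alpha):
    # Gamma^alpha: keep only labmoves whose move starts with the prefix alpha,
    # then delete that prefix, turning each (player, alpha+beta) into (player, beta).
    return [(player, move[len(alpha):])
            for (player, move) in run if move.startswith(alpha)]

# Moves prefixed "a." are the ones made in the copy of the game at port a:
run = [('T', 'a.1'), ('B', 'b.0'), ('T', 'a.2.2')]
print(project(run, 'a.'))   # [('T', '1'), ('T', '2.2')]
```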
Accordingly, the additional condition that needs to be satisfied for a legal run $\Gamma$ is that, whenever $a,L$ are as above, $\Gamma^{a.}$ (intuitively the run of $L^*$ played in port $a$) should be a legal run of $L^*$. Below is a full definition:

\[mayanew\] Let $C$ be a cirquent, $^*$ an interpretation, and $\Phi$ a position. $\Phi$ is a [**legal position**]{} of the game $C^*$ iff, with “cluster” and “port” below meaning those of $C$, the following conditions are satisfied:

1.  Every labmove of $\Phi$ has one of the following forms:

    1.  ${\top}c.i$, where $c$ is a ${\mbox{\hspace{2pt}$\vee$\hspace{-1.29mm}\raisebox{0.1mm}{\rule{0.13mm}{2mm}}\hspace{5pt}}}$-, ${\mbox{\hspace{2pt}\small \raisebox{0.06cm}{$\bigtriangledown$}\hspace{2pt}}}$- or ${\hspace{0pt}\sqcup}$-cluster and $i$ is a positive integer not exceeding the outdegree of $c$.
    2.  ${\bot}c.i$, where $c$ is a ${\mbox{\hspace{2pt}$\wedge$\hspace{-1.29mm}\raisebox{0.02mm}{\rule{0.13mm}{2mm}}\hspace{5pt}}}$-, ${\mbox{\hspace{2pt}\small \raisebox{0.0cm}{$\bigtriangleup$}\hspace{2pt}}}$- or ${\hspace{0pt}\sqcap}$-cluster and $i$ is a positive integer not exceeding the outdegree of $c$.
    3.  ${\wp}a.\beta$, where $a$ is a general port, ${\wp}$ is either player, and $\beta$ is some string.

2.  Whenever $c$ is a choice cluster, $\Phi$ contains at most one occurrence of a labmove of the form ${\wp}c.i$.

3.  Whenever $c$ is a sequential cluster and $\Phi$ is of the form ${\langle \ldots, {\wp}c.i ,\ldots ,{\wp}c.j ,\ldots \rangle}$, we have $i<j$.

4.  Whenever $a$ is a general port and $L$ is its label, $\Phi^{a.}$ is a legal position of the game $L^*$.

The concept of a [**metaselection**]{}, as well as the concepts [**unresolved**]{}, [**resolved**]{} and [**resolvent**]{}, for any of the eight sorts of gates, transfer from Section \[s7\] to the present section without any changes.
And metatruth (Definition \[may14ccc\]) is now redefined as follows:

\[may14ff\] Let $C$ be a cirquent, $^*$ an interpretation, $\Gamma$ a legal run of $C$, and $\vec{f}$ a metaselection for $C$. In this context, with “metatrue” to be read as “[**metatrue w.r.t. $(^*,\Gamma,\vec{f})$**]{}”, we say that:

- An (elementary or general) $L$-port $a$ is metatrue iff $\Gamma^{a.}$ is a ${\top}$-won run of $L^*$.[^16]
- A resolved gate (of any of the eight types) is metatrue iff so is its resolvent.
- No unresolved disjunctive gate (of any of the four types) is metatrue.
- Every unresolved conjunctive gate (of any of the four types) is metatrue.

Finally, we say that $C$ is metatrue iff so is its root.

With metatruth (conservatively) redefined this way, the definition of (simply) [**truth**]{} is literally the same as before (Definition \[may14k\]). So is the [**Wn**]{} component of the game $C^*$ represented by a cirquent $C$ under an interpretation $^*$. Namely, a legal run $\Gamma$ of $C^*$ is considered to be ${\top}$-won iff $C$ is true w.r.t. $(^*,\Gamma)$. In certain cases, the elementary versus general status of atoms has no effect on validity. For instance, the cirquent ${\neg}p{\vee}p$ is valid, and so can be shown to be (whether in the weak or in the strong sense) the cirquent ${\neg}P{\vee}P$. The same does not hold for all cirquents though. An example would be ${\neg}p{\vee}(p{\wedge}p)$, which is valid but can be shown to be not so (whether in the strong or in the weak sense) with $P$ instead of $p$. At this point we can observe the resource-consciousness of computability logic even when only parallel gates/connectives are considered: while $p$ and $p{\wedge}p$ are “the same”, $P$ and $P{\wedge}P$ (or $P{\vee}P$) are not so at all: $P$ stands for a [*single*]{} play of game $P^*$ (whatever interpretation $^*$ we have in mind), whereas $P{\wedge}P$ stands for two parallel plays of $P^*$ — that is, plays on two boards.
While the game played on either board in the latter case is the same $P^*$, the actual runs on the two boards will not necessarily be the same (unless both players make exactly the same moves on the two boards), so that it may well happen that the play on one board is won while the play on the other board is lost. Generally, winning $P{\wedge}P$ for ${\top}$ is harder than winning $P$, and winning $P$ is harder than winning $P{\vee}P$. The situation is very different from this one with elementary atoms instead of general atoms: $p$ is indeed “the same” as $p{\wedge}p$ (otherwise computability logic would not be a conservative extension of classical logic). That is because $p^*$ is an elementary game with no moves, and hence it makes no difference whether it is “played” on one board or two boards: all of its “plays” will be identical, and hence either all of them will be won or all of them will be lost. Earlier we pointed out the increase in expressive power of the formalism of computability logic achieved by switching from formulas to cirquents. Such a switch has an even more dramatic impact on expressive power when, along with elementary atoms, general atoms are also allowed. As we remember, finite cirquents with only ${\wedge}$ and ${\vee}$ gates and only elementary ports are not any more expressive than formulas are. The same is not the case for cirquents with general ports though. To get a feel for this, let us compare the two cirquents of Figure 18, the only difference between which is that one (on the right) has general ports where the other has elementary ports. Here and later, following our earlier practice, node IDs are omitted in these figures. Also as agreed earlier, omitted cluster IDs indicate that clustering is trivial, i.e., all clusters are singletons.
Finally, omitted rank indicators should be understood as indicating that all ${\vee}$-clusters are in the lowest rank and all ${\wedge}$-clusters are in the highest rank, even though this is, in fact, irrelevant: in the absence of nonsingleton clusters, ranking can be seen to be redundant, and how it is chosen has no effect on the semantics of the cirquent.

[Cirquent diagrams omitted; the left cirquent is labeled $(p{\wedge}q){\vee}(p{\wedge}r){\vee}(q{\wedge}r)$, the right one “No formula”.]

**Figure 18:** Elementary versus general ports

The left cirquent of Figure 18 can be turned into an equivalent tree in the standard way (duplicate and separate shared nodes), yielding the formula $(p{\wedge}q){\vee}(p{\wedge}r){\vee}(q{\wedge}r)$. Everything is classical here, that is. The same trick fails for the right cirquent though. It represents a game played on three boards where ${\top}$, in order to win, should win on (at least) two out of the three boards. So, the cirquent that we see there is by no means equivalent to $(P{\wedge}Q){\vee}(P{\wedge}R){\vee}(Q{\wedge}R)$: the latter represents a game on six (rather than three) boards grouped into three pairs where, in order to win, ${\top}$ needs to win on both boards of at least one pair.
This is a “two out of six” combination, which is generally easier to win than the “two out of three” combination represented by the cirquent in question. There is simply no tree-like cirquent (formula) extensionally identical, or equivalent in any reasonable weaker sense, to the right cirquent of Figure 18, meaning that expressing the game operation represented by it essentially requires the ability to account for [*sharing*]{} — an ability absent in formula-based languages. In our cirquent, each of the ports is shared between two conjunctive parents. What makes the formula $(P{\wedge}Q){\vee}(P{\wedge}R){\vee}(Q{\wedge}R)$ inadequate is that, for instance, it fails to indicate that the two occurrences of $P$ stand for [*the same copy*]{} of game $P$ rather than [*two copies of the same game*]{} $P$. And, as pointed out earlier, two copies of $P$ are semantically not the same as just one copy. Now we are done with defining ever more general concepts of cirquents and setting up a CoL semantics for them. The next natural step in this line of research would be elaborating deductive systems that adequately axiomatize the sets of valid cirquents. Of course, this is only possible for various subclasses of cirquents rather than all cirquents. Modest progress in this direction has already been made in [@Japdeep], where a cirquent-based sound and complete system[^17] was constructed. The cirquents of the language of that system have only general ports, only ${\wedge}$ and ${\vee}$ gates and, of course, no clustering and ranking. At the formula level, very considerable advances have already been made in the direction of axiomatizing the sets of valid principles ([@Japtocl1]-[@Japtcs], [@Japjsl]-[@Propint], [@Japseq]-[@Japtowards], [@Japtogl], [@Ver]).
For instance, [@Japtogl] contains a sound and complete axiomatization for the propositional fragment of CoL with both elementary and general atoms, negation and all four — parallel, choice, sequential and toggling — sorts of conjunctions and disjunctions. Certain first-order fragments of CoL have also found sound and complete axiomatizations. The quantifiers ${\mbox{\Large $\sqcap$}\hspace{1pt}},{\mbox{\Large $\sqcup$}\hspace{1pt}}$ turn out to be much better behaved than their classical counterparts and, in a striking difference from the latter, yield decidable first-order logics. See [@Japtcs], [@Japtcs2]. The present paper makes no axiomatization attempts, leaving this ambitious task for the future. Instead, we are going to show extensional equivalence between the semantics of CoL and its companion termed [*abstract resource semantics*]{}. This result can make the future job of axiomatizing various fragments of CoL significantly easier, as abstract resource semantics is technically much simpler and more convenient to deal with than the semantics of computability logic.

Abstract resource semantics {#s9}
===========================

[**Abstract resource semantics**]{} ([**ARS**]{}) aims to formally capture our intuitions of [*resources*]{} and resource management. Resources are symmetric to [*tasks*]{}: what is a resource for the consumer is a task for the provider. So, an equally adequate name for ARS would be “abstract task semantics”. The concept of an [*atomic resource*]{} in ARS is taken as a basic one, without any definition of its nature. This makes it open to various interpretations which ARS itself, as a general-purpose tool, does not provide. The semantics of computability logic, treating atoms as variables over static games, can be seen to be one of many such possible interpretations. As for compound resources, technically their explication in ARS is given in terms of games.
The formal language that ARS deals with is the same as that of computability logic. Precisely, in this paper, we let this (otherwise open-ended) language consist of all cirquents in the (most general) sense of Section \[s8\]. There are two sorts of resources: [*elementary*]{} and [*general*]{}. Intuitively, elementary resources are “reusable” or “inexhaustible” ones, while general resources may or may not be so. Let us say that, in achieving a certain goal $G$, you used the fact $2+2=4$ as an (intellectual) resource. After this usage, $2+2$ will still be $4$, so that the resource will remain equally available for future usage if needed again. On the other hand, if you also used $\$20,000$ for achieving $G$, you may not be able to use the same resource again later. $2+2=4$ is an elementary resource, while $\$20,000$ is not, which makes the latter a (properly) general resource. As we may guess, elementary resources will be represented in our cirquent formalism through elementary ports, and general resources through general ports. To get some basic intuitive feel of ARS, and to see why the latter, just like CoL, naturally calls for switching from formulas to cirquents, let us borrow a discussion from [@Japdeep]. We are talking about a vending machine that has slots for 25-cent ($25c$) coins, with each slot taking a single coin. Coins can be authentic or counterfeit. Let us instead use the more generic terms [*true*]{} and [*false*]{} here, as there are various particular situations naturally and inevitably emerging in the world of resources corresponding to those two opposite values. Below are a few examples of real-world resources/tasks and the possible meanings of the two semantical values for them:

- A financial debt, which may (true) or may not (false) be eventually paid.

- An electrical outlet or a battery, which may (true) or not (false) actually have sufficient power in it. 
- A standard task performed by a company’s employee or an AI agent, which, eventually, may (true) or not (false) be successfully completed. - A specified amount of computer memory required by a process, which may (true) or not (false) be available at a given time. - A promise, which may be kept (true) or broken (false). See Section 8 of [@Cirq] for detailed elaborations of these intuitions. Continuing the description of our vending machine, inserting a false coin into a slot fills the slot up (so that no other coins can be inserted into it until the operation is complete), but otherwise does not fool the machine into thinking that it has received 25 cents. A candy costs 50 cents, and the machine will dispense a candy if at least two of its slots receive true coins. Pressing the “dispense” button while having inserted anything less than 50 cents, such as a single coin, or one true and two false coins, results in a non-recoverable loss. Victor has three $25c$-coins, and he knows that two of them are true while one is perhaps false (but he has no way to tell which one is false). Could he get a candy? The answer depends on how many slots the machine has. Consider two cases: machine $M2$ with two slots, and machine $M3$ with three slots. Victor would have no problem with $M3$: he can insert his three coins into the three slots, and the machine, having received $\geq 50c$, will dispense a candy. With $M2$, however, Victor is in trouble. He can try inserting arbitrary two of his three coins into the two slots of the machine, but there is no guarantee that one of those two coins is not false, in which case Victor will end up with no candy and only $25$ cents remaining in his pocket. Both $M2$ and $M3$ can be understood as resources — resources turning coins into a candy. 
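Victor's predicament with $M2$ versus $M3$ can be checked mechanically. The following sketch is our own illustration, not part of the paper's formal apparatus; the names `dispenses` and `guaranteed_candy` are ours. The machine dispenses iff at least two inserted coins are true, and Victor must commit to his choice of coins before learning which of his three coins is the (possibly) false one.

```python
from itertools import combinations

def dispenses(inserted_coins):
    # The machine hands out a candy iff it has received at least 50 cents,
    # i.e., iff at least two of the inserted coins are true.
    return sum(inserted_coins) >= 2

def guaranteed_candy(num_slots):
    # Victor holds coins 0, 1, 2; in the worst case exactly one of them,
    # unknown to him, is false.  A strategy is simply a choice of which
    # coins to insert (at most one coin per slot, before pressing the button).
    for chosen in combinations(range(3), min(num_slots, 3)):
        # Does this fixed choice win no matter which coin is the false one?
        if all(dispenses([coin != fake for coin in chosen])
               for fake in range(3)):
            return True
    return False

assert guaranteed_candy(3)       # M3: inserting all three coins always works
assert not guaranteed_candy(2)   # M2: any two coins might include the false one
```

The brute force confirms the discussion above: with three slots Victor has a winning strategy (insert everything), while with two slots every choice of two coins loses against some placement of the false coin.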
And note that these two resources are not the same: $M3$ is obviously stronger (“better”), as it allows Victor to get a candy whereas $M2$ does not, while, at the same time, anyone rich enough to be able to make $M2$ dispense a candy would be able to do the same with $M3$ as well. Yet, formulas fail to capture this important difference. $M2$ and $M3$ can be written as $$\mbox{$R2{\rightarrow}\mbox{\em Candy}$ \ and \ $R3{\rightarrow}\mbox{\em Candy}$,}$$ respectively (with $E{\rightarrow}F$, as always, abbreviating ${\neg}E{\vee}F$): they consume a certain resource $R2$ or $R3$ and produce [*Candy*]{}. What makes $M3$ stronger than $M2$ is that the subresource $R3$ that it consumes is weaker (easier to supply) than the subresource $R2$ consumed by $M2$. Specifically, with one false and two true coins, Victor is able to satisfy $R3$ but not $R2$. The resource $R2$ can be represented as the following cirquent — a single ${\wedge}$-gate whose two children are $25c$ ports — which, due to being tree-like, can also be adequately written as the formula $$25c{\wedge}25c.$$ As for the resource $R3$, either one of the following two cirquents is an adequate representation of it, with one of them probably showing the relevant part of the actual physical circuitry used in $M3$:

[**Figure 19:**]{} Two equivalent cirquents for the resource $R3$ — on the left, a ${\vee}$-root whose three ${\wedge}$-children pairwise share three $25c$ ports; on the right, a ${\wedge}$-root whose three ${\vee}$-children pairwise share three $25c$ ports.

Unlike $R2$, however, $R3$ cannot be written as a formula, for reasons similar to those that we saw when discussing Figure 18. $25c{\wedge}25c$ does not fit the bill, for it represents $R2$ which, as we already agreed, is not the same as $R3$. Rewriting one of the above two cirquents — let it be the one on the right — into an “equivalent” formula in the standard way, by duplicating and separating shared nodes, results in $$\label{ffff1} (25c{\vee}25c){\wedge}(25c{\vee}25c){\wedge}(25c{\vee}25c),$$ which is not any more adequate than $25c{\wedge}25c$. It expresses not $R3$ but the resource consumed by a machine with six coin slots grouped into three pairs, where (at least) one slot in each of the three pairs needs to receive a true coin. Such a machine thus dispenses a candy for $\geq 75$ rather than $\geq 50$ cents, which makes Victor’s resources insufficient. The trouble here, as in the case of the right cirquent of Figure 18, is related to the inability of formulas to explicitly account for resource sharing or the absence thereof. The right cirquent of Figure 19 stands for a conjunction of three resources, each conjunct, in turn, being a disjunction of two subresources of type $25c$. However, altogether there are three rather than six $25c$-type subresources, each one being shared between two different conjuncts of the main resource. Formula (\[ffff1\]) is inadequate because, for example, it fails to indicate that the first and the third occurrences of “$25c$” stand for the same resource while the second and the fifth (as well as the fourth and the sixth) occurrences stand for another resource, albeit a resource of the same $25c$-type. 
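The semantic gap between the shared three-port cirquent and its six-port “unfolding” can be confirmed by an exhaustive truth-table check. The sketch below is our own illustration (the function names are ours): any two true coins satisfy the shared “two out of three” cirquent, while no placement of at most two true coins among six independent slots satisfies the unshared formula.

```python
from itertools import product

def shared(a, b, c):
    # The right cirquent of Figure 19: three 25c-ports, each shared
    # between two of the three disjunctive parents of the root AND-gate.
    return (a or b) and (b or c) and (a or c)

def unshared(v):
    # The unfolded formula: six independent 25c-ports,
    # (25c v 25c) ^ (25c v 25c) ^ (25c v 25c).
    return (v[0] or v[1]) and (v[2] or v[3]) and (v[4] or v[5])

# Two true coins (out of three) always satisfy the shared cirquent ...
assert all(shared(*vals)
           for vals in product([True, False], repeat=3) if sum(vals) >= 2)

# ... but at most two true coins never satisfy the unshared formula:
# each of the three disjoint pairs needs its own true coin, i.e. >= 75 cents.
assert not any(unshared(v)
               for v in product([True, False], repeat=6) if sum(v) <= 2)
```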
In yet another attempt to save formulas, one could try to agree that atoms always stand for individual resources rather than resource types, then give to the three ports of the right cirquent of Figure 19 three different names $P$, $Q$, $R$, and represent the cirquent as the formula $(P{\vee}Q){\wedge}(P{\vee}R){\wedge}(Q{\vee}R)$. But then a crucial piece of information would be lost, specifically the information about all inputs being of the same type $25c$, as opposed to, say, the three different types $25c$, $10c$, $5c$. This would make it impossible to match Victor’s resources with those inputs. Thus, any systematic attempt to develop a logic of resources would face the necessity to go beyond formulas and use a formalism that permits one to account for resource sharing, as our cirquents do. And it is an absolute shame that linear logic, commonly perceived as “the” logic of resources, does not allow one to express such simple and naturally emerging combinations of resources as the “two out of three” combination expressed by the cirquents of Figure 19. The main purpose of a good semantics should be serving as a bridge between the real world and the otherwise meaningless formal expressions of logic. And, correspondingly, the value of a semantics should be judged by how successfully it achieves this purpose, which, in turn, depends on how naturally and adequately it formalizes certain basic intuitions connecting logic with the outside world. Such intuitions behind abstract resource semantics have been amply explained and illustrated in Section 8 of [@Cirq]. The reader is strongly recommended to get familiar with that piece of literature in order to appreciate the claim of abstract resource semantics that it is a “real” semantics of resources, formalizing the resource philosophy traditionally (and, as argued in [@Cirq], somewhat wrongly) associated with linear logic and its variations. 
In this paper we are mainly focused on just providing formal definitions, only occasionally making brief intuitive comments, and otherwise fully relying on [@Cirq] for extended explanations of the intuitions, motivations and philosophy underlying the semantics. Even though [@Cirq] dealt with only a very modest subclass of cirquents in our present sense, the basic intuitions relevant to our treatment — at the philosophical if not technical level — were already given there. From the technical point of view, ARS is a game semantics and, as such, only differs from the semantics of CoL in the way it treats atoms. Namely, if in CoL atoms are just placeholders which (together with the entire cirquent) become games only after an interpretation $^*$ is applied to them, ARS treats each atom as an atomic abstract resource in its own right, and correspondingly sees the whole cirquent as a resource/task/game in its own right, without requiring an interpretation to be applied to it. To repeat, while in CoL a cirquent $C$, by itself (without an interpretation), is just a formal expression but not a game, in ARS every cirquent $C$ is directly seen as a game, which we agree to denote by $\hat{C}$. A related difference between CoL and ARS is that, in the game $C^*$ represented by a cirquent $C$ under a given interpretation $^*$, CoL allows moves to be made within the games associated by $^*$ with the general literals of $C$. In ARS, as noted, such literals stand for atomic entities, and no moves within them can or should be made. On the other hand, ARS has an additional sort of moves by ${\top}$, called (resource) [*allocation*]{}. Such a move looks like $(a,b)$, where $a$ is the ID of a $P$-port for some general atom $P$, and $b$ is the ID of a ${\neg}P$-port. A condition here, called the [*monogamicity condition*]{}, is that neither $a$ nor $b$ could have been already used earlier in any allocation moves. 
As explained and illustrated in [@Cirq], the intuition behind an allocation move is that of (indeed) allocating one resource to another: a coin ($25c$) to a coin-receiving slot (${\neg}25c$), a memory ($100MB$) to a memory-requesting process (${\neg}100MB$), a power source ($100w$) to a power-consuming utensil (${\neg}100w$), a USB-interface external device ($USB$) to a USB port of a computer (${\neg}USB$), etc. And a justification of the monogamicity condition is that if a nonelementary resource $a$ is used by (allocated to) $b$, then it cannot be also used by (allocated to) another $c\not=b$. Formally, an [**allocation**]{} for a given cirquent $C$ is a pair $(a,b)$ — identified with the expression “$(a,b)$” — where $a$ and $b$ are (the IDs of) general ports of $C$ such that the label of $a$ is $P$ (some general atom $P$) and the label of $b$ is ${\neg}P$. The set of legal runs of the game associated with a cirquent in ARS is defined as follows: \[tamuna\] Let $C$ be a cirquent and $\Phi$ a position. $\Phi$ is a [**legal position**]{} of the game $\hat{C}$ (associated with $C$ in ARS) iff, with “cluster” and “port” below meaning those of $C$, the following conditions are satisfied: 1. Every labmove of $\Phi$ has one of the following forms: 1. ${\top}c.i$, where $c$ is a ${\mbox{\hspace{2pt}$\vee$\hspace{-1.29mm}\raisebox{0.1mm}{\rule{0.13mm}{2mm}}\hspace{5pt}}}$-, ${\mbox{\hspace{2pt}\small \raisebox{0.06cm}{$\bigtriangledown$}\hspace{2pt}}}$- or ${\hspace{0pt}\sqcup}$-cluster and $i$ is a positive integer not exceeding the outdegree of $c$. 2. ${\bot}c.i$, where $c$ is a ${\mbox{\hspace{2pt}$\wedge$\hspace{-1.29mm}\raisebox{0.02mm}{\rule{0.13mm}{2mm}}\hspace{5pt}}}$-, ${\mbox{\hspace{2pt}\small \raisebox{0.0cm}{$\bigtriangleup$}\hspace{2pt}}}$- or ${\hspace{0pt}\sqcap}$-cluster and $i$ is a positive integer not exceeding the outdegree of $c$. 3. ${\top}(a,b)$, where $(a,b)$ is an allocation for $C$. This kind of move is said to be an [**allocation move**]{}. 
2. Whenever $c$ is a choice cluster, $\Phi$ contains at most one occurrence of a labmove of the form ${\wp}c.i$. 3. Whenever $c$ is a sequential cluster and $\Phi$ is of the form ${\langle \ldots ,{\wp}c.i ,\ldots, {\wp}c.j ,\ldots \rangle}$, we have $i<j$. 4. $\Phi$ does not contain any two occurrences ${\top}(a,b)$ and ${\top}(a',b')$ such that $a=a'$ or $b=b'$. By an [**arrangement**]{} for a cirquent $C$ we mean a set $\cal A$ of allocations for $C$ such that, whenever $(a,b),(a',b')\in {\cal A}$, we have either $(a,b)=(a',b')$, or both $a\not=a'$ and $b\not=b'$. We call this condition (on arrangements) the [**monogamicity condition**]{}. Whenever $C$ is a cirquent and $\Gamma$ is a legal run of $\hat{C}$, by the [**arrangement induced by $\Gamma$**]{} we mean the set of all allocations $(a,b)$ such that $\Gamma$ contains the labmove ${\top}(a,b)$. By a [**situation**]{} $^*$ for a cirquent $C$ we mean an assignment of one of the values $\mathbb{T}$ (“[*true*]{}”) or $\mathbb{F}$ (“[*false*]{}”) to each port of $C$, satisfying the condition that, for any two elementary (but not necessarily general) ports $a$ and $b$, whenever $a$ and $b$ have the same label, they are assigned the same value, and whenever $a$ and $b$ have opposite labels, they are assigned different values. Any such function $^*$ is a legitimate situation, including the cases when $^*$ assigns different values to general ports that have identical labels. Intuitively this is perfectly meaningful in the world of resources because, say, one $25c$-port (slot of the vending machine) may receive a true coin while the other $25c$-port may receive a false coin or no coin at all. Note the difference between situations and interpretations. One difference is that an interpretation associates with each port a [*game*]{}, while a situation simply sends each port to one of the values $\mathbb{T}$ or $\mathbb{F}$. 
Of course, in the case of elementary (but by no means general) ports, this difference is not essential, as $\mathbb{T}$ can be identified with the game ${\top}$ and $\mathbb{F}$ with the game ${\bot}$. Another difference is that the domain of an interpretation is the set of [*atoms*]{} of a cirquent while the domain of a situation is the set of [*ports*]{}. This is a substantial difference, as the same atom, with or without a negation, may be the label of many different ports. But, again, this difference is not relevant if we only consider elementary ports. Let $C$ be a cirquent, $\cal A$ an arrangement for $C$, and $^*$ a situation for $C$. We say that $^*$ is [**consistent with $\cal A$**]{} iff, whenever $(a,b)\in{\cal A}$, the situation $^*$ assigns different values to $a$ and $b$.[^18] The concept of a [**metaselection**]{}, as well as the concepts [**unresolved**]{}, [**resolved**]{} and [**resolvent**]{}, for any of the eight sorts of gates, transfer from Section \[s8\] and hence Section \[s7\] to the present section without any changes. And metatruth (Definition \[may14ff\]) is now redefined as follows: \[ma14ff\] Let $C$ be a cirquent, $^*$ a situation for $C$, $\Gamma$ a legal run of $\hat{C}$, and $\vec{f}$ a metaselection for $C$. In this context, with “metatrue” to be read as “[**metatrue w.r.t. $(^*,\Gamma,\vec{f})$**]{}”, we say that: - An (elementary or general) $L$-port $a$ is metatrue iff $L^*=\mathbb{T}$. - A resolved gate (of any of the eight types) is metatrue iff so is its resolvent. - No unresolved disjunctive gate (of any of the four types) is metatrue. - Every unresolved conjunctive gate (of any of the four types) is metatrue. Finally, we say that $C$ is metatrue iff so is its root. With metatruth redefined this way, the definition of (simply) [**truth**]{} is literally the same as before (Definition \[may14k\]). Next, where $C$ is a cirquent and $\Gamma$ is a legal run of $\hat{C}$, we say that $C$ is [**accomplished**]{} w.r.t. 
$\Gamma$ iff, for every situation $^*$ consistent with the arrangement induced by $\Gamma$, $C$ is true w.r.t. $(^*,\Gamma)$. Now, where $C$ is a cirquent and $\Gamma$ is a legal run of $\hat{C}$, $\Gamma$ is considered to be a [**${\top}$-won**]{} run of $\hat{C}$ iff $C$ is accomplished w.r.t. $\Gamma$. This, together with Definition \[tamuna\], completes our definition of the game $\hat{C}$ associated with a cirquent $C$ in ARS. We say that an HPM $\cal M$ [**accomplishes**]{} a cirquent $C$ iff $\cal M$ wins the game $\hat{C}$. When such an $\cal M$ exists, the cirquent $C$ is said to be [**accomplishable**]{}. Accomplishability is the main semantical concept of ARS. In its philosophical spirit, it is an ARS-counterpart of the classical concept of truth. Technically, however, it is more a validity- (rather than truth-) style concept. Namely, it is similar to the concept of strong validity of computability logic. To see an example, consider the cirquent of Figure 20.

[**Figure 20:**]{} An accomplishable cirquent. Its ports, referred to by their IDs $1$ through $8$, are labeled as follows: ports $1$–$4$ are $P$-ports and ports $5$–$8$ are ${\neg}P$-ports. Writing each port as its ID, the gate structure is $\bigl((1{\vee}2){\wedge}(3{\vee}4)\bigr){\vee}\bigl((5{\vee}6){\wedge}(7{\vee}8)\bigr)$.

And consider two HPMs ${\cal M}_1$ and ${\cal M}_2$ such that: $$\mbox{${\cal M}_1$ generates the run $\Gamma_1={\langle {\top}(1,5), {\top}(2,6), {\top}(3,7),{\top}(4,8) \rangle}$};$$ $$\mbox{${\cal M}_2$ generates the run $\Gamma_2={\langle {\top}(1,5), {\top}(2,7), {\top}(3,6),{\top}(4,8) \rangle}$}.$$ Then ${\cal M}_1$ does not accomplish the cirquent while ${\cal M}_2$ does. Indeed, the arrangement induced by $\Gamma_1$ is $${\cal A}_1 = \{(1,5), (2,6), (3,7),(4,8)\}.$$ Let $^\dagger$ be the situation with $$\mbox{$1^\dagger =2^\dagger =7^\dagger= 8^\dagger =\mathbb{T}$; \ \ $3^\dagger =4^\dagger =5^\dagger =6^\dagger =\mathbb{F}$.}$$ Then $^\dagger$ is consistent with ${\cal A}_1$. But the cirquent is false w.r.t. $(^\dagger,\Gamma_1)$. Hence it is not accomplished w.r.t. $\Gamma_1$. Hence ${\cal M}_1$ does not accomplish it. The above situation $^\dagger$, on the other hand, is not consistent with the arrangement $${\cal A}_2=\{(1,5), (2,7), (3,6), (4,8)\}$$ induced by $\Gamma_2$. Moreover, with some thought, one can see that no situation that makes the cirquent of Figure 20 false can be consistent with ${\cal A}_2$. This means that ${\cal M}_2$, unlike ${\cal M}_1$, [*does*]{} accomplish that cirquent. As an aside, the cirquent of Figure 20 is tree-like and hence can be written as the formula $\bigl((P{\vee}P){\wedge}(P{\vee}P)\bigr){\vee}\bigl(({\neg}P{\vee}{\neg}P){\wedge}({\neg}P{\vee}{\neg}P)\bigr)$. This formula, first brought forward by Blass [@Bla92] in a related context, is not provable in multiplicative linear logic or even in the extension of the latter known as [*affine logic*]{}. The same applies to the more general principle $\bigl((P{\vee}Q){\wedge}(R{\vee}S)\bigr){\vee}\bigl(({\neg}P{\vee}{\neg}R){\wedge}({\neg}Q{\vee}{\neg}S)\bigr)$. Thus, unlike the situation with classical logic or IF logic, the logic induced by ARS or by (the extensionally equivalent) computability logic is [*not*]{} a conservative extension of linear logic or its standard variations such as affine logic. 
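The Figure 20 example lends itself to a mechanical check. The sketch below is our own illustration, not part of the formal development (the function names are ours): a situation assigns independent truth values to the eight general ports, consistency with an arrangement means that allocated ports carry opposite values, and an arrangement accomplishes the cirquent iff every consistent situation makes it true.

```python
from itertools import product

def cirquent(v):
    # Figure 20: ports 1-4 labeled P, ports 5-8 labeled ~P; the gate
    # structure is ((1 v 2) ^ (3 v 4)) v ((5 v 6) ^ (7 v 8)).
    return ((v[1] or v[2]) and (v[3] or v[4])) or \
           ((v[5] or v[6]) and (v[7] or v[8]))

def consistent(v, arrangement):
    # A situation is consistent with an arrangement iff every pair of
    # allocated ports is assigned opposite truth values.
    return all(v[a] != v[b] for (a, b) in arrangement)

def accomplished(arrangement):
    # True iff no situation consistent with the arrangement
    # falsifies the cirquent.
    for vals in product([True, False], repeat=8):
        v = dict(zip(range(1, 9), vals))
        if consistent(v, arrangement) and not cirquent(v):
            return False
    return True

A1 = {(1, 5), (2, 6), (3, 7), (4, 8)}   # induced by Gamma_1
A2 = {(1, 5), (2, 7), (3, 6), (4, 8)}   # induced by Gamma_2

assert not accomplished(A1)   # e.g. the situation 'dagger' falsifies it
assert accomplished(A2)
```

The brute force confirms the claim in the text: the cross-allocations $(2,7)$ and $(3,6)$ of ${\cal A}_2$ force a true disjunct in every consistent situation, while ${\cal A}_1$ admits the falsifying situation $^\dagger$.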
It is this fact that makes CoL and ARS believe that linear logic is incomplete as a logic of resources. The cirquent of Figure 21 looks very “similar” to the cirquent of Figure 20. Yet, unlike the latter, it is not accomplishable. As an exercise, the reader may want to try to understand why this is so.

[**Figure 21:**]{} An unaccomplishable cirquent — like the cirquent of Figure 20, but with three (rather than two) ${\vee}$-gates on each side: $\bigl((P{\vee}P){\wedge}(P{\vee}P){\wedge}(P{\vee}P)\bigr){\vee}\bigl(({\neg}P{\vee}{\neg}P){\wedge}({\neg}P{\vee}{\neg}P){\wedge}({\neg}P{\vee}{\neg}P)\bigr)$.

To see the difference that sharing general ports may create, compare the two cirquents of Figure 22. The left cirquent can be seen to be unaccomplishable. The right cirquent only differs from the left cirquent in that ports $6$ and $7$ are combined together into one port $8$ and shared between the two parents. This “minor” change makes it accomplishable. Namely, it is accomplished by an HPM that makes the three moves $(8,1)$, $(4,2)$ and $(5,3)$ in whatever order, i.e., sets up the arrangement $\{(8,1),(4,2),(5,3)\}$. 
[**Figure 22:**]{} Unshared versus shared general ports (left cirquent: *unaccomplishable*; right cirquent: *accomplishable*).

Again, very briefly, about the intuitions behind ARS, elaborated through several papers ([@Jap02; @Cirq; @Japdeep; @Japseq]). Situations are full descriptions of the world in terms of what is true and what is false. What the “actual” situation is, is typically unknown to an agent (HPM) trying to accomplish a certain task (cirquent). Furthermore, the truth values of atoms may simply be indetermined before or during the process of playing the game associated with the cirquent, and can be influenced by what moves have been made. 
For instance, if $p$ stands for “Victor has (or will) become a millionaire in his lifetime”, the truth value of $p$ may eventually depend on how Victor acts towards achieving $p$ as a goal. This value is initially indetermined, for otherwise all Victor’s activities would be meaningless. It will become determined only by the time when Victor dies or when the world ends. Playing a game represented by a cirquent in ARS can be seen as resource or task management. The goal is to make sure that the eventual (“actual”) situation, which will determine success and over which the agent only has partial control, is favorable for the agent (guarantees a win). Resource management includes allocation decisions. The effect of each such decision/move is narrowing down the set of all possible (and otherwise unknown) situations. For instance, allocating a coin to a slot of a vending machine rules out the situation (by making it inconsistent with the arrangement that is being set up) in which the coin is true yet the slot did not receive $25c$. Resource management also includes decision-style actions associated with selectional gates or clusters. For instance, a choice between a candy and an apple that a vending machine offers to whoever inserts $50c$ into it is a ${\hspace{0pt}\sqcap}$-combination of [*Candy*]{} and [*Apple*]{} (this is so from the machine’s perspective; it becomes a ${\hspace{0pt}\sqcup}$-combination from the user’s perspective). And nature’s “decision” about whether Victor lives or dies is a ${\mbox{\hspace{2pt}\small \raisebox{0.06cm}{$\bigtriangledown$}\hspace{2pt}}}$-style combination of “[*Victor is alive*]{}” and “[*Victor is dead*]{}”, where nature can switch from the former to the latter but never back, as required by the game rules associated with sequential combinations. In Section \[s5\] we further saw resource/task intuitions associated with clustering. Remember the “Victor and Peggy in the middle of the road” example. [**Historical remarks**]{}. 
A year before computability logic was officially born, paper [@Jap02] introduced an approach termed the [*logic of tasks*]{} which, ignoring certain unessential and flexible technical details, eventually became a conservative fragment and an ideological predecessor of both CoL and ARS. In our present terms, the language of the logic of tasks was limited to only elementary atoms (an important detail!) and formulas (tree-like cirquents) built from them using ${\wedge},{\vee},{\hspace{0pt}\sqcap},{\hspace{0pt}\sqcup}$ in both propositional and quantifier forms. For these sorts of cirquents, the concept of validity defined in [@Jap02] coincides with our present ARS concept of accomplishability, as well as with our present CoL concept of strong validity. The above-sketched philosophical vision of the world as a set of potential situations, with the game-playing agent trying to reduce that set to favorable ones, also was originally developed within the framework of the logic of tasks. While the logic of tasks is being further studied by a number of researchers ([@Ch1], [@Ch2], [@Ch3]), the author himself abandoned it as an approach superseded by the more general computability logic. The idea of abstract resource semantics in the proper sense (the sense that differentiates it from the more limited logic of tasks), as well as the idea of cirquents, was born in [@Cirq]. Central to this idea was considering allocations as moves in their own right and, correspondingly, considering in the language general atoms instead of — or, rather, along with — the elementary atoms of the logic of tasks. Cirquents that [@Cirq] officially dealt with were very limited, with the root always being a ${\wedge}$-gate, the children of the root always being ${\vee}$-gates, and all grandchildren of the root being general ports. 
The same paper, however, outlined the possibility and expediency of considering more general sorts of cirquents and correspondingly extending the associated abstract resource semantics. The paper [@Japdeep] generalized the cirquents of [@Cirq] and the corresponding abstract resource semantics to all finite ${\wedge},{\vee}$-cirquents with general ports. And the paper [@Japseq] outlined ARS in more or less full generality, even though for formulas only. Within that outline, Mezhirov and Vereshchagin [@Ver] undertook a detailed study of propositional formulas in the logical signature $\{{\neg},{\wedge},{\vee},{\hspace{0pt}\sqcap},{\hspace{0pt}\sqcup},{\mbox{\raisebox{-0.01cm}{\scriptsize $\wedge$}\hspace{-4pt}\raisebox{0.16cm}{\tiny $\mid$}\hspace{2pt}}},{\mbox{\raisebox{0.12cm}{\scriptsize $\vee$}\hspace{-4pt}\raisebox{0.02cm}{\tiny $\mid$}\hspace{2pt}}}\}$, and proved a theorem similar to our forthcoming Theorem \[mth\]. The main novelty in the present extension of ARS is the idea of clustering, never considered before in either CoL or ARS. In fact, as noted earlier, cirquents, in whatever form, had never been used in CoL (as opposed to ARS) as official means of expression. And ARS only had been defined for selectional-gate-free cirquents without clustering and ranking. Before closing this section, we want to summarize what we have already said or what the reader has probably already observed. From the technical (as opposed to, perhaps, philosophical) point of view, when cirquents with [*only*]{} elementary ports are considered, there is no difference between ARS and the semantics of CoL, as long as, in the latter, we limit our attention only to the concept of strong validity. And the above-mentioned logic of tasks is a fragment of this common part of CoL and ARS. So are classical logic, IF logic and extended IF logic. ARS and the semantics of CoL start to differ only when we consider cirquents with general ports. 
This difference is substantial, yet only in the [*intensional*]{} sense. The following section shows the nontrivial fact that [*extensionally*]{} there is no difference — that is, the two semantics, despite differences, still yield the same classes of valid cirquents. Accomplishability and strong validity are equivalent {#s10} ==================================================== By a [**nice game**]{} we shall mean a game $G$ such that, with ${\wp}$ (as always) standing for either player and ${\neg}{\wp}$ for the other player, we have: - Every legal run of $G$ is either ${\langle \rangle}$ or ${\langle {\wp}m \rangle}$ or ${\langle {\wp}m,{\neg}{\wp}n \rangle}$, where $m,n$ are (the decimal representations of) some positive integers. - The empty run ${\langle \rangle}$ is won by ${\top}$, and a legal run ${\langle {\wp}m \rangle}$ of $G$ is won by ${\wp}$. - A legal run ${\langle {\top}m,{\bot}n \rangle}$ of $G$ is ${\wp}$-won iff so is ${\langle {\bot}n,{\top}m \rangle}$. This allows us to see legal runs of nice games as [*sets*]{} rather than [*sequences*]{} of labmoves, and write $\{{\top}m,{\bot}n\}$ instead of ${\langle {\top}m,{\bot}n \rangle}$. Thus, different nice games differ from each other only in which runs of the form $\{{\top}m,{\bot}n\}$ are won. By a [**nice interpretation**]{} we shall mean an interpretation that interprets each general atom as a nice game. This section is entirely devoted to a proof of the following theorem. In it, as always, a “cirquent” means a cirquent in the most general sense defined so far, i.e., in the sense of Section \[s8\]. \[mth\] $C$ is strongly valid iff $C$ is accomplishable (any cirquent $C$). Furthermore, both the “if” and “only if” parts of this theorem come in the following strong forms, respectively: 1\. There is an effective procedure that takes an arbitrary HPM  $\cal M$ and constructs an HPM  $\cal U$ such that, if $\cal M$ accomplishes $C$, then $\cal U$ is a uniform solution for $C$. 2\. 
If $C$ is not accomplishable, then, for any HPM  $\cal U$, there is a nice interpretation $^*$ such that $\cal U$ does not win $C^*$. Before getting down to a proof of this theorem, we need to agree on certain additional details about the (otherwise loosely defined in Section \[s2\]) HPM model of computation. Namely, we assume that either tape of an HPM has a beginning but no end, with cells arranged in the left-to-right order. We also assume that, at any computation step, an HPM can make at most one move, whereas its environment can make any finite number of moves. When both players move, the order in which their moves are appended to the content of the run tape is that ${\top}$’s move goes before ${\bot}$’s moves. By a [**run generated**]{} by a given HPM  $\cal H$ we mean any run that might be (depending on the environment’s behavior) incrementally spelled on the run tape of $\cal H$ during its work. Thus, $\cal H$ wins a game $A$ iff every run generated by $\cal H$ is a ${\top}$-won run of $A$. [**Proof of clause 1.**]{} Consider an arbitrary cirquent $C$, and an arbitrary HPM  $\cal M$. Our goal is to construct an HPM  $\cal U$ such that, as long as $\cal M$ accomplishes $C$,  $\cal U$ wins $C^*$ for any interpretation $^*$. Understanding the idea behind our proof is not hard. We let $\cal U$ be a machine which, as far as moves associated with selectional clusters are concerned, acts the same way in $C^*$ — i.e., makes the same selections — as $\cal M$ does in the game $\hat{C}$ associated with $C$ in ARS. The machines $\cal U$ and $\cal M$ only differ from each other in their actions related to general ports. The strategy of $\cal U$ in general ports is that, as long as $\cal M$ has not allocated a given port to another one (or vice versa), $\cal U$ makes no moves in it. 
However, as soon as two general ports $a$ and $b$ are allocated to each other by $\cal M$,  $\cal U$ starts applying copycat between the games (one being the negation of the other) associated with those ports, i.e., mimicking in either game the environment’s moves made in the other game. As a result, the plays (runs) of the games associated with $a$ and $b$ are guaranteed to be symmetric. More precisely, one is a ${\top}$-delay of the other. This makes it impossible that both of those plays are lost by ${\top}$. We may assume that exactly one of them is won by ${\top}$ (if both are won, “even better”). Then, translating “lost” into the value $\mathbb{F}$ and “won” into the value $\mathbb{T}$, we get a situation $^\ddagger$ for $C$ consistent with the arrangement induced by the run $\Theta$ generated by $\cal M$. If $\cal M$ accomplishes $C$, the latter is true with respect to $(^\ddagger,\Theta)$. Then the game $C^*$ can be easily seen to be won by $\cal U$. In more technical detail, $\cal U$ works by simulating $\cal M$. For this simulation, at any step, $\cal U$ maintains a record [*Configuration*]{} for the “current” configuration of $\cal M$. The latter contains the “current” state of $\cal M$, the locations of the two scanning heads of $\cal M$, and full contents of the (imaginary) work and run tapes of $\cal M$. Initially, the state is the start state of $\cal M$, the two scanning heads are looking at the leftmost cells of their tapes, and the contents of the two tapes are empty. $\cal U$ also maintains a variable $i$ which is initially set to $1$. After the above initialization step, $\cal U$ just acts according to the following procedure: [**Procedure**]{} LOOP: 1. Using the transition function of $\cal M$ and based on the current value of [*Configuration*]{}, $\cal U$ computes the next state, next locations of the scanning heads, the next content of the work tape of $\cal M$, and correspondingly updates [*Configuration*]{}. 2. 
If during the above transition $\cal M$ made a move $\omega$,  $\cal U$ further updates [*Configuration*]{} by appending the labmove ${\top}\omega$ to the content of the imaginary run tape of $\cal M$. In addition: 1. If $\omega$ is not an allocation move, $\cal U$ makes the same move $\omega$ in the real play. 2. Suppose now $\omega$ is an allocation move $(a,b)$. Let $\Upsilon$ be the longest initial segment of the position spelled on the run tape of $\cal U$ such that the last labmove of $\Upsilon$, as a string spelled on the tape, ends at location $j$ (i.e., the last symbol of the labmove is written in the $j$th cell) for some $j<i$. Then $\cal U$ looks up, within (but not beyond) $\Upsilon$, all the labmoves ${\bot}a.\beta_1,\ldots,{\bot}a.\beta_n$ of the form ${\bot}a.\beta$, and makes the moves $b.\beta_1,\ldots,b.\beta_n$ in the real play. Similarly, it looks up within $\Upsilon$ all the labmoves ${\bot}b.\alpha_1,\ldots,{\bot}b.\alpha_m$ of the form ${\bot}b.\alpha$, and makes the moves $a.\alpha_1,\ldots,a.\alpha_m$. 3. Now $\cal U$ checks its run tape to see whether it contains a labmove ${\bot}\omega$ which, as a string spelled on the tape, ends exactly at location $i$. If such an $\omega$ is found, then: 1. If $\omega$ looks like $c.j$ where $c$ is a selectional cluster, $\cal U$ further updates [*Configuration*]{} by adding the labmove ${\bot}\omega$ to the content of the imaginary run tape of $\cal M$. 2. If $\omega$ looks like $a.\delta$ where $a$ is a general port already allocated by $\cal M$ to a certain port $b$ (or vice versa) — i.e., the imaginary run tape of $\cal M$ contains the labmove ${\top}(a,b)$ or ${\top}(b,a)$ — then $\cal U$ makes the move $b.\delta$ in the real play. 4. Finally, $\cal U$ increments $i$ to $i+1$, and repeats LOOP. Assume $\cal M$ accomplishes $C$. Consider an arbitrary run $\Gamma$ generated by $\cal U$. 
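The copycat bookkeeping of step 2 of LOOP admits a compact illustration. The following sketch is ours and not part of the paper (the names `history` and `copycat_moves` are invented): labmoves are modeled as (player, port, move) triples, with `"B"` standing for ${\bot}$, and the function computes the catch-up moves that $\cal U$ makes in the real play once $\cal M$ allocates general ports $a$ and $b$ to each other.

```python
# Illustrative sketch only (not the paper's notation): after M allocates
# ports a and b to each other, U replays every environment move already
# made in a into b, and every environment move already made in b into a.

def copycat_moves(history, a, b):
    """Return the moves U must make to synchronize ports a and b."""
    out = []
    for (player, port, move) in history:
        if player == "B" and port == a:
            out.append((b, move))   # mirror environment's a-moves into b
        elif player == "B" and port == b:
            out.append((a, move))   # mirror environment's b-moves into a
    return out

hist = [("B", "a", "3"), ("B", "b", "7"), ("B", "c", "9")]
assert copycat_moves(hist, "a", "b") == [("b", "3"), ("a", "7")]
```

From this point on, the mirroring continues move by move, which is what makes one of the two runs a ${\top}$-delay of the other.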
To this run corresponds a run $\Theta$ generated by $\cal M$ — namely, $\Theta$ is the run incrementally spelled on the imaginary run tape of $\cal M$ when the latter is simulated by $\cal U$ during playing according to the scenario of $\Gamma$. Fix these $\Gamma$ and $\Theta$. Let us also fix $\cal A$ as the arrangement induced by $\Theta$. We further pick an arbitrary interpretation $^*$ and fix it, too. Our assumption that $\cal M$ accomplishes $C$ implies that $\cal M$ never makes an illegal move of $\hat{C}$ (unless its adversary does so first). We may also safely assume that $\cal U$’s environment never makes illegal moves of $C^*$ either, for otherwise $\cal U$ automatically wins and we are done. Then a little analysis of the situation reveals that $\Gamma$ is a legal run of $C^*$ and $\Theta$ is a legal run of $\hat{C}$. We will implicitly rely on this observation in our further argument. Consider an arbitrary pair $(a,b)$ with $(a,b)\in{\cal A}$. Let $P$ be the label of $a$, and thus ${\neg}P$ the label of $b$. Remember that $\Gamma^{a.}$ is the run that took place (according to the scenario of $\Gamma$) in the copy of the game $P^*$ associated with $a$ and, similarly, $\Gamma^{b.}$ is the run that took place in the copy of the game ${\neg}P^*$ associated with $b$. Remember also that, for a run $\Omega$, ${\neg}\Omega$ means the result of negating all labels in $\Omega$. It is clear from our description of LOOP that $\Gamma^{a.}$ is a ${\top}$-delay of ${\neg}\Gamma^{b.}$. Assume $\Gamma^{b.}$ is a ${\top}$-lost run of ${\neg}P^*$. Then, by the definition of the game negation operation, ${\neg}\Gamma^{b.}$ is a ${\top}$-won run of $P^*$. But then, because $P^*$ is a static game and $\Gamma^{a.}$ is a ${\top}$-delay of ${\neg}\Gamma^{b.}$, $\Gamma^{a.}$ is a ${\top}$-won run of $P^*$. To summarize, we have: $$\label{jun1a} \begin{array}{l} \mbox{\em Suppose $(a,b)\in{\cal A}$, and $P$ and ${\neg}P$ are the labels of $a$ and $b$. 
Then}\\ \mbox{\em either $\Gamma^{a.}$ is a ${\top}$-won run of $P^*$, or $\Gamma^{b.}$ is a ${\top}$-won run of ${\neg}P^*$, or both.} \end{array}$$ We define a situation $^\dagger$ for $C$ by stipulating that, for any elementary or general $L$-port $c$ of $C$, $c^\dagger=\mathbb{T}$ iff $\Gamma^{c.}$ is a ${\top}$-won run of $L^*$. From our description of the work of $\cal U$ it is clear that $\cal M$ and $\cal U$ act in exactly the same ways in the selectional clusters of $C$. That is, $\Gamma$ and $\Theta$ contain exactly the same labmoves of the form ${\wp}c.j$ where $c$ is a selectional cluster. From this observation and our choice of $^\dagger$, with a little thought, we can see that: $$\label{m31a} \mbox{\em $C$ is true w.r.t. $(^\dagger,\Theta)$ iff $C$ is true w.r.t. $(^*,\Gamma)$.}$$ Let $^\ddagger$ be the situation for $C$ that agrees with $^\dagger$ in all cases except that, whenever $(a,b)\in{\cal A}$ and $a^\dagger=b^\dagger=\mathbb{T}$, we have $a^\ddagger=\mathbb{T}$ and $b^\ddagger=\mathbb{F}$. In view of the monotonicity of the truth conditions associated with all types of gates, it is clear that \[m31b\] $$\label{m31b} \mbox{\em If $C$ is true w.r.t. $(^\ddagger,\Theta)$, then $C$ is also true w.r.t. $(^\dagger,\Theta)$.}$$ Consider any $(a,b)\in{\cal A}$. By our choice of $^\ddagger$, it is impossible that $a^\ddagger=b^\ddagger=\mathbb{T}$. And, with (\[jun1a\]) in mind, it is also impossible that $a^\ddagger=b^\ddagger=\mathbb{F}$. Thus, $^\ddagger$ assigns different values to $a$ and $b$. This means that \[jun1b\] $$\label{jun1b} \mbox{\em $^\ddagger$ is consistent with $\cal A$.}$$ Since $\cal M$ accomplishes $C$, in view of (\[jun1b\]), $C$ is true w.r.t. $(^\ddagger,\Theta)$. But then, by (\[m31b\]) and (\[m31a\]), $C$ is true w.r.t. $(^*,\Gamma)$, meaning that $\Gamma$ is a ${\top}$-won run of $C^*$. But remember that $\Gamma$ was an arbitrary run generated by $\cal U$ and $^*$ was an arbitrary interpretation. 
Thus, $\cal U$ is a uniform solution for $C$. It remains to make the straightforward observation that, as promised in the theorem, our construction of $\cal U$ from $\cal M$ is effective. [**Proof of clause 2.**]{} Our proof here is in many respects symmetric to the proof of clause 1. Consider an arbitrary cirquent $C$, and an arbitrary HPM  $\cal U$ such that $\cal U$ wins $C^*$ for any nice interpretation $^*$. Our goal is to construct an HPM  $\cal M$ such that $\cal M$ accomplishes $C$. To understand the idea behind our proof, imagine a play of $\cal U$ over $C^*$ for whatever nice interpretation $^*$, on which, note, the behavior of $\cal U$ does not depend. Every (legal) move in this play signifies either a move made in a selectional cluster, or a move made in a general port. Since $^*$ is nice, for each general port, either player has exactly one move that can be made there. Let us call $\cal U$’s environment [*smart*]{} if it always makes different moves in different general ports. This is the case when, say, the environment always makes the move “$a$” in port $a$. Let us assume that, in the play that we are considering, the environment is smart. The best that $\cal U$ can then do is to [*match*]{} each $P$-labeled port with one (but not more!) ${\neg}P$-labeled port — match in the sense of mimicking the adversary’s moves so that the two games evolve in a symmetric fashion, to guarantee that at least one of them will be eventually won. Each time such a “matching” between ports $a$ and $b$ is detected, we let $\cal M$ make a move that allocates $a$ to $b$. Other than this, $\cal M$ plays in the same way as $\cal U$ does, by making the same moves (selections) in selectional clusters as $\cal U$ makes. We can then show that, if $\cal M$ does not accomplish $C$ with this strategy, any falsifying situation $^\dagger$ for $C$ translates into certain conditions on $^*$ under which $\cal U$ has lost $C^*$. 
This, however, is impossible because, by our assumption, $\cal U$ wins $C^*$ for [*any*]{} nice interpretation $^*$. In more detailed terms, $\cal M$ works by simulating $\cal U$. For this simulation, at any step, $\cal M$ maintains a record [*Configuration*]{} for the “current” configuration of $\cal U$. The latter contains the “current” state of $\cal U$, the locations of the two scanning heads of $\cal U$, and full contents of the (imaginary) work and run tapes of $\cal U$. Initially, the state is the start state of $\cal U$, the two scanning heads are looking at the leftmost cells of their tapes, and the contents of the two tapes are empty. $\cal M$ also maintains a variable $i$ which is initially set to $1$. After the above initialization step, $\cal M$ just acts according to the following procedure: [**Procedure**]{} LOOP: 1. Using the transition function of $\cal U$ and based on the current value of [*Configuration*]{}, $\cal M$ computes the next state, next locations of the scanning heads, the next content of the work tape of $\cal U$, and correspondingly updates [*Configuration*]{}. 2. If during the above transition $\cal U$ made a move $\omega$,  $\cal M$ further updates [*Configuration*]{} by appending the labmove ${\top}\omega$ to the content of the imaginary run tape of $\cal U$. In addition, if $\omega$ looks like $c.j$ for some selectional cluster $c$, $\cal M$ makes the same move $\omega$ in the real play. 3. If $i$ is (the ID of) a general port, $\cal M$ further updates [*Configuration*]{} by appending the labmove ${\bot}i.i$ (a “smart environment’s” move) to the content of the imaginary run tape of $\cal U$. 4. Next, $\cal M$ checks if there is a pair $(a,b)$ of opposite general ports with $a$ positive and $b$ negative, such that the imaginary run tape of $\cal U$ contains the four labmoves ${\bot}a.a$, ${\bot}b.b$, ${\top}b.a$, ${\top}a.b$, and $\cal M$ has not yet made the allocation move $(a,b)$ in the real play. 
This is to what we earlier referred as “detecting a match between $a$ and $b$”. If such a pair $(a,b)$ is found, $\cal M$ makes the move $(a,b)$ in the real play. 5. Now $\cal M$ checks its run tape to see whether it contains a labmove ${\bot}\omega$ which, as a string spelled on the tape, ends at location $i$. If such an $\omega$ is found and it looks like $c.j$ where $c$ is a selectional cluster, $\cal M$ further updates [*Configuration*]{} by appending ${\bot}\omega$ to the content of the imaginary run tape of $\cal U$. 6. Finally, $\cal M$ increments $i$ to $i+1$, and repeats LOOP. Consider an arbitrary run $\Theta$ generated by $\cal M$. To this run corresponds a run $\Gamma$ generated by $\cal U$ — namely, $\Gamma$ is the run incrementally spelled on the imaginary run tape of $\cal U$ when the latter is simulated by $\cal M$ during playing according to the scenario of $\Theta$. Fix these $\Theta$ and $\Gamma$. Let us also fix $\cal A$ as the arrangement induced by $\Theta$. We further pick an arbitrary situation $^\dagger$ for $C$ consistent with $\cal A$ and fix it, too. Our assumption that $\cal U$ wins $C^*$ for any nice interpretation $^*$ implies that $\cal U$ never makes an illegal move of $C^*$ (unless its environment does so first). We may also safely assume that $\cal M$’s adversary never makes illegal moves of $\hat{C}$ either, for otherwise $\cal M$ automatically wins and we are done. Then, a little analysis of the situation reveals that $\Theta$ is a legal run of $\hat{C}$ and $\Gamma$ is a legal run of $C^*$ (for whatever nice interpretation $^*$). We will implicitly rely on this observation in our further argument. We shall say that a general port $a$ of $C$ is [**unmatched**]{} iff for no $b$ does $\cal A$ contain $(a,b)$ or $(b,a)$. Otherwise $a$ is [**matched**]{}, and the port $b$ for which $\cal A$ contains $(a,b)$ or $(b,a)$ is said to be the [**match**]{} of $a$. 
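The match-detection test of step 4 of LOOP reduces to checking that four particular labmoves all occur on the simulated run tape. A minimal sketch (ours, with invented names; labmoves again modeled as (player, port, move) triples):

```python
# Illustrative sketch only: ports a (positive) and b (negative) count as
# matched once the imaginary run tape of U contains the four labmoves
# Ba.a, Bb.b, Tb.a and Ta.b -- the smart environment's moves "a" and "b",
# each answered by U in the opposite port.

def is_match(tape, a, b):
    required = {("B", a, a), ("B", b, b), ("T", b, a), ("T", a, b)}
    return required <= set(tape)

tape = [("B", "1", "1"), ("B", "2", "2"), ("T", "2", "1"), ("T", "1", "2")]
assert is_match(tape, "1", "2")
assert not is_match(tape, "1", "3")
```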
Let $^\ddagger$ be the situation for $C$ which agrees with $^\dagger$ on all elementary and matched general ports, and sends each unmatched general port to $\mathbb{F}$. Obviously $^\ddagger$, just like $^\dagger$, is consistent with $\cal A$. And, as $^\dagger$ sends to $\mathbb{T}$ any port that $^\ddagger$ does, in view of the monotonicity of all truth conditions associated with gates, it is clear that $$\label{j2b} \mbox{\em If $C$ is true w.r.t. $(^\ddagger,\Theta)$, then so is it w.r.t. $(^\dagger,\Theta)$.}$$ We now choose a nice interpretation $^*$ such that: - For any elementary atom $p$,  $p^*={\top}$ iff there is a $p$-labeled (resp. ${\neg}p$-labeled) port $a$ such that $a^\ddagger=\mathbb{T}$ (resp. $a^\ddagger=\mathbb{F}$). - For any general atom $P$,  $P^*$ is the nice game such that any legal run $\{{\bot}a,{\top}b\}$ of it is ${\top}$-won iff we have one of the following: 1. $a$ is a $P$-port with $a^\ddagger=\mathbb{T}$, and $\Gamma^{a.}=\{{\bot}a, {\top}b\}$. 2. $b$ is a ${\neg}P$-port with $b^\ddagger=\mathbb{F}$, and $\Gamma^{b.}=\{{\top}a, {\bot}b\}$. We claim the following: $$\label{j2c} \mbox{\em For any general $L$-port $c$ of $C$, $c^\ddagger=\mathbb{T}$ iff $\Gamma^{c.}$ is a ${\top}$-won run of $L^*$.}$$ To verify the above claim, let us first consider the case when $L=P$ (some general atom $P$) and $c^\ddagger=\mathbb{T}$. Since $^\ddagger$ assigns $\mathbb{T}$ only to matched general ports, $c$ is matched. Let $d$ be the match of $c$. From our description of the work of $\cal M$ it is obvious that $\Gamma^{c.}=\{{\bot}c, {\top}d\}$. And, by clause 1 of our definition of $P^*$, such a run is a ${\top}$-won run of $P^*$, as desired. Next, consider the case when $L={\neg}P$ and $c^\ddagger=\mathbb{T}$. Again, since $^\ddagger$ assigns $\mathbb{T}$ only to matched general ports, $c$ is matched. Let $d$ be its match. So, $d^\ddagger=\mathbb{F}$. Note that $\Gamma^{c.}=\{{\top}d,{\bot}c\}$ and hence ${\neg}\Gamma^{c.}=\{{\bot}d,{\top}c\}$. 
By our definition of $P^*$, $\{{\bot}d,{\top}c\}$ can be a ${\top}$-won run of $P^*$ only if either $d^\ddagger=\mathbb{T}$ (clause 1) or $c^\ddagger=\mathbb{F}$ (clause 2). But neither of these two is true in our case. So, ${\neg}\Gamma^{c.}$ is a ${\bot}$-won run of $P^*$. But then, by the definition of game negation, $\Gamma^{c.}$ is a ${\top}$-won run of ${\neg}P^*$, as desired. Next, consider the case when $L=P$ and $c^\ddagger=\mathbb{F}$. Since the smart adversary of $\cal U$ (as simulated by $\cal M$) makes the move $c.c$ for each general port $c$, $\Gamma^{c.}$ contains the labmove ${\bot}c$. If it does not contain any other labmoves, $\Gamma^{c.}$ is a ${\bot}$-won run of $P^*$ because the latter is a nice game. Suppose now $\Gamma^{c.}=\{{\bot}c,{\top}d\}$ for some $d$. So, either $c$ is unmatched, or $d$ is its match. If $c$ is unmatched, then $\Gamma^{d.}$ cannot be $\{{\bot}d,{\top}c\}$ and hence, by our definition of $P^*$, $\Gamma^{c.}$ is not a ${\top}$-won run of $P^*$. Now suppose $d$ is the match of $c$. Then $d^\ddagger=\mathbb{T}$ which, again, makes it impossible that $\Gamma^{c.}$ is a ${\top}$-won run of $P^*$. Thus, in all cases, $\Gamma^{c.}$ is a ${\bot}$-won run of $P^*$, as desired. Finally, consider the case when $L={\neg}P$ and $c^\ddagger=\mathbb{F}$. As in the previous case, $\Gamma^{c.}$ contains the labmove ${\bot}c$. Hence, ${\neg}\Gamma^{c.}$ contains ${\top}c$. If ${\neg}\Gamma^{c.}$ does not contain any other labmoves, ${\neg}\Gamma^{c.}$ is a ${\top}$-won run of $P^*$ because $P^*$ is a nice game; then, $\Gamma^{c.}$ is a ${\bot}$-won run of ${\neg}P^*$, and we are done. Suppose now ${\neg}\Gamma^{c.}= \{{\bot}d,{\top}c\}$ for some $d$, and hence $ \Gamma^{c.}= \{{\top}d,{\bot}c\}$. Then, by clause 2 of our definition of $P^*$, ${\neg}\Gamma^{c.}$ is a ${\top}$-won run of $P^*$, meaning that $\Gamma^{c.}$ is a ${\bot}$-won run of ${\neg}P^*$, as desired. Claim (\[j2c\]) is now proven. 
By our choice of $^*$, claim (\[j2c\]) is automatically true for elementary ports in place of general ports. So, we have: $$\label{jj2c} \mbox{\em For any (elementary or general) $L$-port $c$ of $C$, $c^\ddagger=\mathbb{T}$ iff $\Gamma^{c.}$ is a ${\top}$-won run of $L^*$.}$$ From our description of the work of $\cal M$ one can see that $\cal M$ and $\cal U$ act in exactly the same ways in selectional clusters. More precisely, $\Theta$ and $\Gamma$ contain exactly the same labmoves of the form ${\wp}c.j$ where $c$ is a selectional cluster of $C$. From this observation, in conjunction with (\[jj2c\]), it is obvious that $C$ is true w.r.t. $(^\ddagger,\Theta)$ iff it is true w.r.t. $(^*,\Gamma)$. But, by our assumption, $\cal U$ wins $C^*$ for any nice interpretation $^*$. Thus, $C$ is true w.r.t. $(^\ddagger,\Theta)$. Then, by (\[j2b\]), $C$ is also true w.r.t. $(^\dagger,\Theta)$. Now, remembering that $^\dagger$ was an arbitrary situation consistent with the arrangement induced by $\Theta$, we find that $C$ is accomplished w.r.t. $\Theta$. In other words, $\Theta$ is a ${\top}$-won run of $\hat{C}$. Finally, remembering that $\Theta$ was an arbitrary run generated by $\cal M$, we conclude that $\cal M$ wins $\hat{C}$. In other words, $\cal M$ accomplishes $C$. Conclusion ========== We have elaborated circuit-style constructs called [*cirquents*]{}, and set up two sorts of game semantics for them: the semantics of [*computability logic*]{}, and [*abstract resource semantics*]{}. The two, while substantially different, have been proven to validate the same classes of cirquents. Cirquents, allowing us to account for sharing substructures between different parent structures, are properly more expressive and efficient means of representing various objects of study than formulas are. 
This fact had already been observed in the past, but only very limited classes of cirquents had been studied, and only abstract resource semantics (not the semantics of computability logic) had been defined for them. The present paper extended the earlier studied cirquents by adding three non-traditional pairs of conjunctive and disjunctive gates to them. An even more important innovation was generalizing gates to what we called [*clusters*]{}. This is a generalization naturally called for by advanced approaches to game logics and resource logics. Clustering further increases the expressiveness and flexibility of cirquents. Among its merits is achieving the full expressive power of [*independence friendly logic*]{} and far beyond. This paper has been exclusively focused on semantics. Among the subsequent natural steps within the present line of research would be attempts to construct deductive systems for various fragments of the set of valid cirquents. [99]{} S. Abramsky and R. Jagadeesan. [*Games and full completeness for multiplicative linear logic*]{}. [**Journal of Symbolic Logic**]{} 59 (1994), pp. 543–574. A. Blass. [*A game semantics for linear logic*]{}. [**Annals of Pure and Applied Logic**]{} 56 (1992), pp. 183-220. P. Hinman. [**Fundamentals of Mathematical Logic**]{}. A. K. Peters, 2005. J. Hintikka. [**Logic, Language-Games and Information: Kantian Themes in the Philosophy of Logic**]{}. Clarendon Press 1973. J. Hintikka and G. Sandu. [*Game-theoretical semantics*]{}. In: [**Handbook of Logic and Language**]{}. J. van Benthem and A. ter Meulen, eds. North-Holland 1997, pp. 361-410. W. Hodges. [*Compositional semantics for a language of imperfect information*]{}. [**Logic Journal of the IGPL**]{} 5 (1997), pp. 539-563. W. Hodges. [*Logics of imperfect information: why sets of assignments?*]{} In: [**Interactive Logic**]{}. J. van Benthem, D.M. Gabbay and B. Löwe, eds. Amsterdam University Press 2007, pp. 117-133. G. Japaridze. 
[*The logic of tasks*]{}. [**Annals of Pure and Applied Logic**]{} 117 (2002), pp. 263-295. G. Japaridze. [*Introduction to computability logic*]{}. [**Annals of Pure and Applied Logic**]{} 123 (2003), pp. 1-99. G. Japaridze. [*Propositional computability logic I*]{}. [**ACM Transactions on Computational Logic**]{} 7 (2006), No.2, pp. 302-330. G. Japaridze. [*Propositional computability logic II*]{}. [**ACM Transactions on Computational Logic**]{} 7 (2006), No.2, pp. 331-362. G. Japaridze. [*From truth to computability I*]{}. [**Theoretical Computer Science**]{} 357 (2006), pp. 100-135. G. Japaridze. [*Introduction to cirquent calculus and abstract resource semantics*]{}. [**Journal of Logic and Computation**]{} 16 (2006), pp. 489-532. G. Japaridze. [*Computability logic: a formal theory of interaction*]{}. In: [**Interactive Computation: The New Paradigm**]{}. D. Goldin, S. Smolka and P. Wegner, eds. Springer 2006, pp. 183-223. G. Japaridze. [*The logic of interactive Turing reduction*]{}. [**Journal of Symbolic Logic**]{} 72 (2007), No.1, pp. 243-276. G. Japaridze. [*From truth to computability II*]{}. [**Theoretical Computer Science**]{} 379 (2007), pp. 20-52. G. Japaridze. [*Intuitionistic computability logic*]{}. [**Acta Cybernetica**]{} 18 (2007), No.1, pp. 77–113. G. Japaridze. [*The intuitionistic fragment of computability logic at the propositional level*]{}. [**Annals of Pure and Applied Logic**]{} 147 (2007), No.3, pp.187-227. G. Japaridze. [*Cirquent calculus deepened*]{}. [**Journal of Logic and Computation**]{} 18 (2008), No.6, pp. 983-1028. G. Japaridze. [*Sequential operators in computability logic*]{}. [**Information and Computation**]{} 206 (2008), No.12, pp. 1443-1475. G. Japaridze. [*In the beginning was game semantics*]{}. In: [**Games: Unifying Logic, Language and Philosophy**]{}. O. Majer, A.-V. Pietarinen and T. Tulenheimo, eds. Springer 2009, pp. 249-350. G. Japaridze. [*Many concepts and two logics of algorithmic reduction*]{}. 
[**Studia Logica**]{} 91 (2009), No.1, pp. 1-24. G. Japaridze. [*Towards applied theories based on computability logic*]{}. [**Journal of Symbolic Logic**]{} 75 (2010), pp. 565-601. G. Japaridze. [*Toggling operators in computability logic*]{}. [**Theoretical Computer Science**]{} (to appear). Preprint is available at http://arxiv.org/abs/0904.3469 G. Japaridze. [*A logical basis for constructive systems*]{}. Manuscript (2010) http://arxiv.org/abs/1003.0425 G. Japaridze. [*Introduction to clarithmetic I*]{}. Manuscript (2010) http://arxiv.org/abs/1003.4719 G. Japaridze. [*Introduction to clarithmetic II*]{}. Manuscript (2010) http://arxiv.org/abs/1004.3236 P. Lorenzen. [*Ein dialogisches Konstruktivitätskriterium*]{}. In: [**Infinitistic Methods**]{}. PWN, Proc. Symp. Foundations of Mathematics, Warsaw, 1961, pp. 193-200. I. Mezhirov and N. Vereshchagin. [*On abstract resource semantics and computability logic*]{}. [**Journal of Computer and System Sciences**]{} 76 (2010), pp. 356-372. G. Sandu and A. Pietarinen. [*Partiality and games: propositional logic*]{}. [**Logic Journal of the IGPL**]{} 9 (2001), No.1, pp. 107-127. M. Sevenster. [*A strategic perspective on IF games*]{}. In: [**Games: Unifying Logic, Language and Philosophy**]{}. O. Majer, A.-V. Pietarinen and T. Tulenheimo, eds. Springer 2009, pp. 101-116. T. Tulenheimo. [*Independence friendly logic*]{}. In: [**Stanford Encyclopedia of Philosophy**]{}, 2009. http://plato.stanford.edu/entries/logic-if/ G. Wang and W. Xu. [*From the logic of facts to the logic of tasks.*]{} [**Fuzzy Systems and Mathematics**]{} 18 (2004), No.1, pp. 1-8. J. Väänänen. [*On the semantics of informational independence*]{}. [**Logic Journal of the IGPL**]{} 10 (2002), pp. 337-350. W. Xu and Y. Jing. [*Theorems in the logic of tasks*]{}. [**Fuzzy Systems and Mathematics**]{} 20 (2006), No.6, pp. 15-20. H. Zhang and S. Li. 
[*The description logic of tasks: from theory to practice.*]{} [**Chinese Journal of Computers**]{} 29 (2006), No.3, pp. 488-494. [^1]: This material is based upon work supported by the National Science Foundation under Grant No. 0208816 [^2]: We write ${\mbox{\bf Wn}^{A}_{}}{\langle \Gamma \rangle}$ for ${\mbox{\bf Wn}^{A}_{}}(\Gamma)$. [^3]: Precisely, we have ${\mbox{\bf Wn}^{{\top}}_{}}{\langle\rangle}={\top}$ and ${\mbox{\bf Wn}^{{\bot}}_{}}{\langle\rangle}={\bot}$. [^4]: An HPM often also has a third tape called the [*valuation tape*]{}. Its function is to provide values for the variables on which a game may depend. However, as we remember, in this paper we only consider constant games — games that do not depend on any variables. This makes it possible to safely remove the valuation tape from the HPM model (or leave it there but fully ignore it), as this tape is no longer relevant. [^5]: In most papers on CoL, the concept of static games is defined without this (first) condition. In such cases, however, the existence of an always-illegal move $\spadesuit$ is stipulated in the definition of games. The first condition of our present definition of static games turns out to be simply derivable from that stipulation. This and a couple of other minor technical differences between our present formulations and those given in other pieces of literature on CoL only signify presentational and by no means conceptual variations. [^6]: This\[ftn1\] concept is termed “perfect interpretation” in the other pieces of literature on CoL, where the word “interpretation” is reserved for a slightly more general concept. Since we only deal with perfect interpretations in this paper, we omit the word “perfect” and just say “interpretation”. [^7]: The latter, of course, may not be the case if $g$ is a choice gate, or a sequential gate with a finite outdegree. 
[^8]: If, however, we do not limit our considerations to perfect interpretations (see the footnote on page ) and allow all interpretations in the definition of weak validity, this cirquent will no longer be weakly valid. In fact, for all well studied fragments of CoL, when interpretations are not required to be perfect, the weak and the strong versions of validity have been shown to yield the same classes of formulas. Strong validity in the CoL literature is usually referred to as [*uniform validity*]{}, and weak validity as (simply) [*validity*]{}. [^9]: Unless, of course, the procedure halts by good luck. Halting without saying “Yes” can then be seen as an explicit indication that the original answer “No” was final. [^10]: If Victor and Peggy may change their mind several times, and the sellers’ return policies are flexible enough, then this is a toggling combination rather than a choice one. [^11]: Unlike Definition \[maya\] which, at least, changed the word “gate” to the word “cluster” when reproducing the corresponding Definition \[may14a\]. [^12]: As long as we deal with effective IF logic, one should require here that $f$ and $g$ range over recursive functions. [^13]: An equivalent approach would be letting $f$ be a [*total*]{} function from the set of ${\vee}$-clusters to the set $\{0,1,2,3,\ldots\}$ of [*natural numbers*]{} (rather than positive integers); then, instead of saying that $f$ is undefined at $c$, we could simply say that $f(c)=0$. [^14]: This predicate of truth, in contrast to the previous one, would depend on how ${\wedge}$-gates are clustered but not on how ${\vee}$-gates are clustered. [^15]: That is, clusters whose gates originate from occurrences of $i$-superscripted operators. [^16]: Note that, if $a$ is an elementary port, then $\Gamma^{a}$ is empty, and saying that such a run is a ${\top}$-won run of $L^*$ is the same as to say that $L^*={\top}$. 
[^17]: Soundness and completeness in [@Japdeep] were proven with respect to abstract resource semantics rather than the semantics of computability logic; as we are going to see later, however, these two semantics are equivalent. [^18]: In [@Cirq], a weaker condition was adopted, according to which at least one (but possibly both) of the ports $a,b$ should be assigned $\mathbb{T}$. It is easy to see that either condition yields the same concept of validity, so that this difference is unimportant.
--- abstract: 'Young’s inequality is extended to the context of absolutely continuous measures. Several applications are included.' address: - 'University of Craiova, Department of Mathematics, Street A. I. Cuza 13, Craiova, RO-200585, Romania' - 'University of Craiova, Department of Mathematics, Street A. I. Cuza 13, Craiova, RO-200585, Romania' author: - 'Flavia-Corina Mitroi' - 'Constantin P. Niculescu' date: June 2011 title: 'An Extension of Young’s Inequality' --- [^1] Introduction ============ Young’s inequality [@Y1912] asserts that every strictly increasing continuous function $f:\left[ 0,\infty\right) \longrightarrow\left[ 0,\infty\right) $ with $f\left( 0\right) =0$ and $\underset{x\rightarrow \infty}{\lim}f\left( x\right) =\infty$ verifies an inequality of the following form,$$ab\leq\int_{0}^{a}f\left( x\right) dx+\int_{0}^{b}f^{-1}\left( y\right) dy, \label{youngineq}$$ whenever $a$ and $b\ $are nonnegative real numbers. The equality occurs if and only if $f\left( a\right) =b$. See [@HLP], [@Mit], [@NP2006] and [@RV] for details and significant applications. Several questions arise naturally in connection with this classical result. 1. Is the restriction on strict monotonicity (or on continuity) really necessary? 2. Is there any weighted analogue of Young’s inequality? 3. Can Young’s inequality be improved? F. Cunningham Jr. and N. Grossman [@CG1971] noticed that the question (Q1) has a positive answer (correcting the prevalent belief that Young’s inequality is the business of strictly increasing continuous functions). The aim of the present paper is to extend the entire discussion to the framework of locally absolutely continuous measures and to prove several improvements. As well known, Young’s inequality is an illustration of the Legendre duality. 
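Before turning to the Legendre duality, the inequality (\[youngineq\]) can be checked numerically. The sketch below is ours and not part of the paper: for $f(x)=x^{p-1}$ with $p>1$, whose inverse is $f^{-1}(y)=y^{1/(p-1)}$, the two integrals evaluate to $a^{p}/p$ and $b^{q}/q$ with $1/p+1/q=1$, so the gap $a^{p}/p+b^{q}/q-ab$ is nonnegative and vanishes exactly when $b=f(a)$.

```python
# Numeric sanity check (ours, not from the paper) of Young's inequality
# for f(x) = x**(p-1): here F(a) = a**p/p and G(b) = b**q/q, where
# 1/p + 1/q = 1, and the gap F(a) + G(b) - a*b is >= 0, with equality
# exactly when b = f(a) = a**(p-1).

def young_gap(a, b, p):
    q = p / (p - 1)
    return a**p / p + b**q / q - a * b

assert young_gap(2.0, 3.0, 2.0) > 0                      # strict when b != f(a)
assert abs(young_gap(2.0, 2.0**(3 - 1), 3.0)) < 1e-12    # equality at b = f(a)
```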
Precisely, the functions$$F(a)=\int_{0}^{a}f\left( x\right) dx\text{ and }G(b)=\int_{0}^{b}f^{-1}\left( x\right) dx,$$ are both continuous and convex on $\left[ 0,\infty\right) $ and (\[youngineq\]) can be restated as$$ab\leq F(a)+G(b)\text{\quad for all }a,b\in\left[ 0,\infty\right) , \label{youngineq2}$$ with equality if and only if $f\left( a\right) =b.$ Because of the equality case, the formula (\[youngineq2\]) leads to the following connection between the functions $F$ and $G:$$$F(a)=\sup\left\{ ab-G(b):b\geq0\right\} \label{defF}$$ and $$G(b)=\sup\left\{ ab-F(a):a\geq0\right\} .$$ It turns out that each of these formulas produces a convex function (possibly on a different interval). Some details are in order. By definition, the *conjugate* of a convex function $F$ defined on a nondegenerate interval $I$ is the function$$F^{\ast}:I^{\ast}\rightarrow\mathbb{R},\text{\quad}F^{\ast}(y)=\sup\left\{ xy-F(x):x\in I\right\} ,$$ with domain $I^{\ast}=\left\{ y\in\mathbb{R}:F^{\ast}(y)<\infty\right\} $. Necessarily $I^{\ast}$ is a non-empty interval and $F^{\ast}$ is a convex function whose level sets $\left\{ y:F^{\ast}(y)\leq\lambda\right\} $ are closed subsets of $\mathbb{R}$ for each $\lambda\in\mathbb{R}$ (usually such functions are called *closed* convex functions). A convex function may not be differentiable, but it admits a good substitute for differentiability. The *subdifferential* of a real function $F$ defined on an interval $I$ is a multivalued function $\partial F:I\rightarrow\mathcal{P}(\mathbb{R})$ defined by$$\partial F(x)=\left\{ \lambda\in\mathbb{R}:F(y)\geq F(x)+\lambda(y-x)\text{, for every}\,\,y\in I\right\} .$$ Geometrically, the subdifferential gives us the slopes of the supporting lines for the graph of $F$. The subdifferential at a point is always a convex set, possibly empty, but the convex functions $F:I\rightarrow \mathbb{R}$ have the remarkable property that $\partial F(x)\neq\emptyset$ at all interior points. 
It is worth noticing that $\partial F(x)=\left\{ F^{\prime}(x)\right\} $ at each point where $F$ is differentiable (so this formula works for all points of $I$ except for a countable subset). See [@NP2006], page 30. \[Lem1\]*(*Legendre duality, *[@NP2006]*, page *41)*. Let $F:I\rightarrow\mathbb{R}$ be a closed convex function. Then its conjugate $F^{\ast}:I^{\ast}\rightarrow\mathbb{R}$ is also convex and closed and: $i)$ $xy\leq F(x)+F^{\ast}(y)$ for all $x\in I,$ $y\in I^{\ast};$ $ii)$ $xy=F(x)+F^{\ast}(y)$ if, and only if, $y\in\partial F(x);$ $iii)$ $\partial F^{\ast}=\,\left( \partial F\right) ^{-1}$ *(*as graphs*)*$;$ $iv)$ $F^{\ast\ast}=F.$ Recall that the inverse of a graph $\Gamma$ is the set $\Gamma^{-1}=\left\{ \left( y,x\right) :(x,y)\in\Gamma\right\} .$ How far is Young’s inequality from the Legendre duality? Surprisingly, they are quite close, in the sense that in most cases the Legendre duality can be converted into a Young-like inequality. Indeed, every continuous convex function admits an integral representation. \[Lem2\]*(*See *[@NP2006]*, page *37)*. Let $F$ be a continuous convex function defined on an interval $I$ and let $\varphi:I\rightarrow\mathbb{R}$ be a function such that $\varphi (x)\in\partial F(x)$ for every $x\in\,I.$ Then for every $a<b$ in $I$ we have $$F(b)-F(a)=\int_{a}^{b}\,\varphi(t)\,dt.$$ As a consequence, the heuristic meaning of the formula $i)$ in Lemma \[Lem1\] is the following Young-like inequality, $$ab\leq\int_{a_{0}}^{a}\varphi\left( x\right) dx+\int_{b_{0}}^{b}\psi\left( y\right) dy\text{\quad for all }a\in I,\ b\in I^{\ast},$$ where $\varphi$ and $\psi$ are selection functions for $\partial F$ and $\left( \partial F\right) ^{-1}$, respectively. Now it becomes clear that Young’s inequality should work outside strict monotonicity (as well as outside continuity). The details are presented in Section 2. 
Our approach (based on the geometric meaning of integrals as areas) allows us to extend the framework of integrability to all positive measures $\rho$ which are locally absolutely continuous with respect to the planar Lebesgue measure $dxdy$. See Theorem \[ThmYoungNondecr\] below. A special case of Young’s inequality is$$xy\leq\frac{x^{p}}{p}+\frac{y^{q}}{q},$$ which works for all $x,y\geq0$, and $p,q>1$ with $1/p+1/q=1$. Theorem \[ThmYoungNondecr\] yields the following companion to this inequality in the case of the Gaussian measure $\frac{4}{\pi}e^{-x^{2}-y^{2}}dxdy$ on $[0,\infty)\times\lbrack0,\infty):$ $$\operatorname{erf}(x)\operatorname{erf}(y)\leq\frac{2}{\sqrt{\pi}}\int_{0}^{x}\operatorname{erf}\left( s^{p-1}\right) e^{-s^{2}}ds+\frac{2}{\sqrt{\pi }}\int_{0}^{y}\operatorname{erf}\left( t^{q-1}\right) e^{-t^{2}}dt,$$ where$$\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-s^{2}}ds \label{erf}$$ is the *Gauss error function* (or the erf function). The precision of our generalization of Young’s inequality is the subject of Section 3. In Section 4 we discuss yet another extension of Young’s inequality, based on recent work done by J. Jakšetić and J. E. Pečarić [@P]. The paper ends by noticing the connection of our result to the theory of $c$-convexity (that is, of convexity associated to a cost density function). Last but not least, all results in this paper can be extended verbatim to the framework of nondecreasing functions $f:[a_{0},a_{1})\rightarrow\lbrack A_{0},A_{1})$ such that $a_{0}<a_{1}\leq\infty$ and $A_{0}<A_{1}\leq\infty,$ $f(a_{0})=A_{0}$ and $\lim_{x\rightarrow a_{1}}f(x)=A_{1}.$ In other words, the interval $[0,\infty)$ plays no special role in Young’s inequality. Besides, there is a straightforward companion of Young’s inequality for nonincreasing functions, but this is outside the scope of the present paper. 
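The Gaussian companion above lends itself to a direct numerical sanity check using only `math.erf` from the Python standard library; the exponent $p=3$ and the test points below are arbitrary choices of ours.

```python
import math

def midpoint(g, lo, hi, n=4000):
    """Midpoint-rule approximation of the integral of g over [lo, hi]."""
    h = (hi - lo) / n
    return h * sum(g(lo + (i + 0.5) * h) for i in range(n))

p = 3.0
q = p / (p - 1)
C = 2.0 / math.sqrt(math.pi)

def gauss_rhs(x, y):
    # right-hand side of the erf companion inequality
    t1 = midpoint(lambda s: math.erf(s ** (p - 1)) * math.exp(-s * s), 0.0, x)
    t2 = midpoint(lambda t: math.erf(t ** (q - 1)) * math.exp(-t * t), 0.0, y)
    return C * (t1 + t2)

for x, y in [(0.5, 1.5), (2.0, 0.3), (1.2, 0.8)]:
    assert math.erf(x) * math.erf(y) <= gauss_rhs(x, y) + 1e-6
# equality case of the underlying theorem: y = x^{p-1}, e.g. x = y = 1
assert abs(math.erf(1.0) ** 2 - gauss_rhs(1.0, 1.0)) < 1e-4
```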
Young’s inequality for weighted measures ======================================== In what follows $f:\left[ 0,\infty\right) \longrightarrow\left[ 0,\infty\right) $ will denote a nondecreasing function such that $f\left( 0\right) =0$ and $\underset{x\rightarrow\infty}{\lim}f\left( x\right) =\infty.$ Since $f$ is not necessarily injective we will attach to $f$ a *pseudo-inverse* by the following formula:$$f_{\sup}^{-1}:\left[ 0,\infty\right) \longrightarrow\left[ 0,\infty\right) ,\quad f_{\sup}^{-1}\left( y\right) =\inf\{x\geq0:f(x)>y\}.$$ Clearly, $f_{\sup}^{-1}$ is nondecreasing and $f_{\sup}^{-1}\left( f\left( x\right) \right) \geq x$ for all $x.$ Moreover, with the convention $f(0-)=0,$ $$f_{\sup}^{-1}\left( y\right) =\sup\left\{ x:y\in\left[ f\left( x-\right) ,f\left( x+\right) \right] \right\} ;$$ here $f\left( x-\right) $ and $f\left( x+\right) $ represent the lateral limits at $x$. When $f$ is also continuous,$$f_{\sup}^{-1}(y)=\max\left\{ x\geq0:y=f(x)\right\} .$$ $($F. Cunningham Jr. and N. Grossman [@CG1971]$)$. *Since pseudo-inverses will be used as integrands, it is convenient to enlarge the concept of pseudo-inverse by referring to any function* $g$ *such that*$$f_{\inf}^{-1}\leq g\leq f_{\sup}^{-1},$$ *where* $f_{\inf}^{-1}(y)=\sup\{x\geq0:f(x)<y\}$. *Necessarily,* $g$ *is nondecreasing and any two pseudo-inverses agree except on a countable set (so their integrals will be the same).* 
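Computationally, $f_{\sup}^{-1}(y)=\inf\{x\geq0:f(x)>y\}$ can be evaluated by bisection, because the predicate $f(x)>y$ is monotone in $x$ whenever $f$ is nondecreasing. A minimal sketch (the flat test function below is our own choice, not taken from the paper):

```python
def pseudo_inverse_sup(f, y, hi, tol=1e-9):
    """f_sup^{-1}(y) = inf{x >= 0 : f(x) > y}, assuming f(hi) > y."""
    if f(0.0) > y:
        return 0.0
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > y:   # monotone predicate: f nondecreasing
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# nondecreasing but not injective: constant (equal to 1) on [1, 2]
def f(x):
    return x if x <= 1 else (1.0 if x <= 2 else x - 1.0)

assert abs(pseudo_inverse_sup(f, 0.5, hi=10.0) - 0.5) < 1e-6
assert abs(pseudo_inverse_sup(f, 1.0, hi=10.0) - 2.0) < 1e-6  # sup over the flat part
assert pseudo_inverse_sup(f, f(1.5), hi=10.0) >= 1.5          # f_sup^{-1}(f(x)) >= x
```

On the flat part the bisection returns the right endpoint, exactly as the formula $f_{\sup}^{-1}(y)=\sup\{x:y\in[f(x-),f(x+)]\}$ predicts.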
Given $0\leq a<b,$ we define the *epigraph* and the *hypograph* of $f|_{[a,b]}$ respectively by$$\operatorname{epi}f|_{[a,b]}=\left\{ \left( x,y\right) \in\left[ a,b\right] \times\left[ f\left( a\right) ,f\left( b\right) \right] :y\geq f\left( x\right) \right\} ,$$ and$$\operatorname{hyp}f|_{[a,b]}=\left\{ \left( x,y\right) \in\left[ a,b\right] \times\left[ f\left( a\right) ,f\left( b\right) \right] :y\leq f\left( x\right) \right\} .$$ Their intersection is the *graph* of $f|_{[a,b]},$ $$\operatorname*{graph}f|_{[a,b]}=\left\{ \left( x,y\right) \in\left[ a,b\right] \times\left[ f\left( a\right) ,f\left( b\right) \right] :y=f\left( x\right) \right\} .$$ Notice that our definitions of epigraph and hypograph are not the standard ones, but agree with them in the context of monotone functions. We will next consider a measure $\rho$ on $\left[ 0,\infty\right) \times\left[ 0,\infty\right) ,$ which is locally absolutely continuous with respect to the Lebesgue measure $dxdy,$ that is, $\rho$ is of the form $$\rho\left( A\right) =\int_{A}K\left( x,y\right) dxdy,$$ where $K:\left[ 0,\infty\right) \times\left[ 0,\infty\right) \longrightarrow\lbrack0,\infty)\ $is a Lebesgue locally integrable function, and $A$ is any compact subset of $\left[ 0,\infty\right) \times\left[ 0,\infty\right) $. 
Clearly,$$\begin{aligned} \rho\left( \operatorname{hyp}f|_{[a,b]}\right) +\rho\left( \operatorname{epi}f|_{[a,b]}\right) & =\rho\left( \left[ a,b\right] \times\left[ f\left( a\right) ,f\left( b\right) \right] \right) \\ & =\int_{a}^{b}\int_{f\left( a\right) }^{f\left( b\right) }K\left( x,y\right) dydx.\end{aligned}$$ Moreover,$$\rho\left( \operatorname{hyp}f|_{[a,b]}\right) =\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx$$ and$$\rho\left( \operatorname{epi}f|_{[a,b]}\right) =\int_{f\left( a\right) }^{f\left( b\right) }\left( \int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy.\nonumber$$ The discussion above can be summarized as follows: \[Lem3\]Let $f:\left[ 0,\infty\right) \longrightarrow\left[ 0,\infty\right) $ be a nondecreasing function such that $f\left( 0\right) =0$ and $\underset{x\rightarrow\infty}{\lim}f\left( x\right) =\infty$. Then for every Lebesgue locally integrable function $K:\left[ 0,\infty\right) \times\left[ 0,\infty\right) \longrightarrow\lbrack0,\infty)$ and every pair of nonnegative numbers $a<b,$$$\begin{gathered} \int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{f\left( b\right) }\left( \int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\ =\int_{a}^{b}\int_{f\left( a\right) }^{f\left( b\right) }K\left( x,y\right) dydx.\end{gathered}$$ We can now state the main result of this section: \[ThmYoungNondecr\]*(*Young’s inequality for nondecreasing functions***)*.** Under the assumptions of Lemma \[Lem3\], for every pair of nonnegative numbers $a<b,$ and every number $c\geq f(a)$ we have $$\begin{gathered} \int_{a}^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx\\ \leq\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy.\end{gathered}$$ If 
in addition $K$ is strictly positive almost everywhere, then the equality occurs if and only if $c\in\left[ f\left( b-\right) ,f\left( b+\right) \right] .$ We start with the case where $f\left( a\right) \leq c\leq f\left( b-\right) $. See Figure \[fig1\]. \[h\] [fig1.jpg]{} In this case,$$\begin{gathered} \int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int _{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\ =\int_{a}^{f_{\sup}^{-1}\left( c\right) }\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\ +\int_{f_{\sup}^{-1}\left( c\right) }^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx\\ =\int_{a}^{f_{\sup}^{-1}\left( c\right) }\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx+\int_{f_{\sup}^{-1}\left( c\right) }^{b}\left( \int_{c}^{f\left( x\right) }K\left( x,y\right) dy\right) dx\\ +\int_{f_{\sup}^{-1}\left( c\right) }^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx\\ \geq\int_{a}^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx,\end{gathered}$$ with equality if and only if $\int_{f_{\sup}^{-1}\left( c\right) }^{b}\left( \int_{c}^{f\left( x\right) }K\left( x,y\right) dy\right) dx=0.$ When $K$ is strictly positive almost everywhere, this means that $c=f\left( b-\right) $. 
If $c\geq f\left( b+\right) ,$ then$$\begin{gathered} \int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int _{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\ =\int_{a}^{f_{\sup}^{-1}\left( c\right) }\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\ -\int_{b}^{f_{\sup}^{-1}\left( c\right) }\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx\\ =\int_{a}^{f_{\sup}^{-1}\left( c\right) }\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx\\ -\left( \int_{b}^{f_{\sup}^{-1}\left( c\right) }\left( \int_{f\left( a\right) }^{f\left( c\right) }K\left( x,y\right) dy\right) dx-\int_{f\left( b+\right) }^{c}\left( \int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\right) \\ \geq\int_{a}^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx.\end{gathered}$$ Equality holds if and only if $\int_{f\left( b+\right) }^{c}\left( \int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy=0$, that is, when $c=f\left( b+\right) $ (provided that $K$ is strictly positive almost everywhere). See Figure 2. \[ptb\] [fig2.jpg]{} If $c\in\left( f\left( b-\right) ,f\left( b+\right) \right) ,$ then $f_{\sup}^{-1}\left( c\right) =b$ and the inequality in the statement of Theorem \[ThmYoungNondecr\] is actually an equality. See Figure \[fig3\]. 
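Theorem \[ThmYoungNondecr\] can also be probed numerically. The sketch below uses a hypothetical weight $K(x,y)=1+xy$ and a nondecreasing function that is flat on $[1,2]$ (both our own choices); the inner integrals are computed in closed form, the outer ones by midpoint sums, and the run checks strict inequality away from $c=f(b)$ and equality at $c=f(b)$.

```python
def midpoint(g, lo, hi, n=2000):
    h = (hi - lo) / n
    return h * sum(g(lo + (i + 0.5) * h) for i in range(n))

# weight K(x, y) = 1 + x*y, with its inner integrals done in closed form
def K_int_y(x, y0, y1):  # integral of K(x, .) over [y0, y1]
    return (y1 - y0) + x * (y1 * y1 - y0 * y0) / 2.0

def K_int_x(y, x0, x1):  # integral of K(., y) over [x0, x1]
    return (x1 - x0) + y * (x1 * x1 - x0 * x0) / 2.0

def f(x):  # nondecreasing, flat on [1, 2], f(0) = 0
    return x if x <= 1 else (1.0 if x <= 2 else x - 1.0)

def f_inv_sup(y):  # its pseudo-inverse, written out by hand
    return y if y < 1 else (2.0 if y == 1 else y + 1.0)

def sides(b, c):  # both sides of the theorem, with a = 0, f(a) = 0
    lhs = midpoint(lambda x: K_int_y(x, 0.0, c), 0.0, b)
    rhs = (midpoint(lambda x: K_int_y(x, 0.0, f(x)), 0.0, b)
           + midpoint(lambda y: K_int_x(y, 0.0, f_inv_sup(y)), 0.0, c))
    return lhs, rhs

b = 3.0                      # so f(b) = 2
for c in (1.5, 2.5):         # strict inequality away from f(b)
    lhs, rhs = sides(b, c)
    assert rhs - lhs > 0.05
lhs, rhs = sides(b, 2.0)     # equality at c = f(b), by Lemma [Lem3]
assert abs(rhs - lhs) < 1e-3
```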
\[h\] [fig3.jpg]{} \[CorContIncr\]*(*Young’s inequality for continuous increasing functions*)***.** If $f:\left[ 0,\infty\right) \longrightarrow \left[ 0,\infty\right) $ is also continuous and increasing, then$$\begin{gathered} \int_{a}^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx\\ \leq\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int_{a}^{f^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\end{gathered}$$ for every real number $c\geq f(a)$. Assuming $K$ strictly positive almost everywhere, the equality occurs if and only if $c=f\left( b\right) .$ If $K\left( x,y\right) =1$ for every $x,y\in\left[ 0,\infty\right) $, then Corollary \[CorContIncr\] asserts that$$bc-af\left( a\right) \leq\int_{a}^{b}f\left( x\right) dx+\int_{f\left( a\right) }^{c}f^{-1}\left( y\right) dy\text{\quad for all }0<a<b\text{ and }c>f(a);$$ equality occurs if and only if $c=f\left( b\right) $. In the special case where $a=f\left( a\right) =0$, this reduces to the classical inequality of Young. $($The probabilistic companion of Theorem \[ThmYoungNondecr\]$)$. *Suppose there is given a nonnegative random variable* $X:[0,\infty )\rightarrow\lbrack0,\infty)$ *whose cumulative distribution function* $F_{X}(x)=P\left( X\leq x\right) $ *admits a density, that is, a nonnegative Lebesgue-integrable function* $\rho_{X}$ *such* *that* $$P\left( x\leq X\leq y\right) =\int_{x}^{y}\rho_{X}(u)du\text{\quad for all }x\leq y.$$ *The* quantile function *of the distribution function* $F_{X}$ $($*also known as the* increasing rearrangement *of the random variable* $X)$ *is defined by*$$Q_{X}(x)=\inf\left\{ y:F_{X}(y)\geq x\right\} .$$ *Thus, a quantile function is nothing but a pseudo-inverse of* $F_{X}$. *Motivated by Statistics, a number of fast algorithms were developed for computing the quantile functions with high accuracy. See* [@A]. 
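Steinbrecher's recursion, quoted in the next remark, is straightforward to implement. As a sketch, the following truncates the power series (the truncation length is our own arbitrary choice, adequate only well inside the convergence region $|z|<1$) and checks it against the standard library's `math.erf`.

```python
import math

def erfinv_series(z, n_terms=60):
    """Truncated Steinbrecher series for erf^{-1}(z), |z| < 1."""
    # c_0 = 1, c_k = sum_{m=0}^{k-1} c_m c_{k-1-m} / ((m+1)(2m+1))
    c = [1.0]
    for k in range(1, n_terms):
        c.append(sum(c[m] * c[k - 1 - m] / ((m + 1) * (2 * m + 1))
                     for m in range(k)))
    w = math.sqrt(math.pi) * z / 2.0
    return sum(c[k] * w ** (2 * k + 1) / (2 * k + 1) for k in range(n_terms))

for z in (0.1, 0.5, 0.7):
    assert abs(math.erf(erfinv_series(z)) - z) < 1e-8
```

Near $z=\pm1$ the series converges too slowly to be practical, which is why production algorithms such as [@A] use rational approximations instead.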
*Without entering the details, we recall here the remarkable formula* *(due to* *G.* *Steinbrecher)* *for the quantile function of the normal distribution:*$$\operatorname{erf}^{-1}(z)=\sum_{k=0}^{\infty}\frac{c_{k}\left( \frac {\sqrt{\pi}}{2}z\right) ^{2k+1}}{2k+1},$$ *where* $c_{0}=1$ *and* $$c_{k}=\sum_{m=0}^{k-1}\frac{c_{m}c_{k-m-1}}{\left( m+1\right) \left( 2m+1\right) }\text{\quad\emph{for all} }k\geq1.$$ *According to Theorem* \[ThmYoungNondecr\], *for every pair of continuous random variables* $Y,Z:[0,\infty)\rightarrow\lbrack0,\infty)$ *with joint density* $\rho_{Y,Z},$ *and all positive numbers* $b$ *and* $c,$ *the following inequality holds:*$$P\left( Y\leq b;Z\leq c\right) \leq\int_{0}^{b}\left( \int_{0}^{F_{X}(x)}\rho_{Y,Z}\left( x,y\right) dy\right) dx+\int_{0}^{c}\left( \int _{0}^{Q_{X}(y)}\rho_{Y,Z}\left( x,y\right) dx\right) dy.$$ *This can be seen as a principle of uncertainty, since it shows that the functions* $$x\rightarrow\int_{0}^{F_{X}(x)}\rho_{Y,Z}\left( x,y\right) dy\text{ and }y\rightarrow\int_{0}^{Q_{X}(y)}\rho_{Y,Z}\left( x,y\right) dx$$ *cannot be made simultaneously small.* $($The higher dimensional analogue of Theorem $1).$ *Consider a locally absolutely continuous* *kernel* $K:\left[ 0,\infty\right) \times...\times\left[ 0,\infty\right) \longrightarrow\lbrack0,\infty ),\ K=K\left( s_{1},s_{2},...,s_{n}\right) ,$ *and a family* $\phi _{1},...,\phi_{n}:[a_{i},b_{i}]\rightarrow\mathbb{R}\ $*of nondecreasing functions defined on subintervals of* $\left[ 0,\infty\right) .$ *Then* $$\begin{gathered} \int_{\phi_{1}\left( a_{1}\right) }^{\phi_{1}\left( b_{1}\right) }\int_{\phi_{2}\left( a_{2}\right) }^{\phi_{2}\left( b_{2}\right) }\cdots\int_{\phi_{n}\left( a_{n}\right) }^{\phi_{n}\left( b_{n}\right) }K\left( s_{1},s_{2},...,s_{n}\right) ds_{n}...ds_{2}ds_{1}\\ \leq{\displaystyle\sum\limits_{i=1}^{n}} \int_{\phi_{i}\left( a_{i}\right) }^{\phi_{i}\left( b_{i}\right) }\left( \int_{\phi_{1}\left( a_{1}\right) }^{\phi_{1}\left( s\right) }\cdots 
\int_{\phi_{n}\left( a_{n}\right) }^{\phi_{n}\left( s\right) }K\left( s_{1},...,s_{n}\right) ds_{n}...ds_{i+1}ds_{i-1}...ds_{1}\right) ds.\end{gathered}$$ *The proof is based on mathematical induction (which is left to the reader). The above inequality covers the n-variable generalization of Young’s inequality as obtained by Oppenheim [@O1927] (as well as the main result in [@Pa1992]).* The following stronger version of Corollary \[CorContIncr\] incorporates the Legendre duality. \[extYoung\]Let $f:\left[ 0,\infty\right) \longrightarrow\left[ 0,\infty\right) $ be a continuous nondecreasing function and $\Phi :[0,\infty)\rightarrow\mathbb{R}$ a convex function whose conjugate is also defined on $[0,\infty)$. Then for all $b>a\geq0,$ $c\geq f(a),$ and $\varepsilon>0$ we have $$\begin{gathered} \int_{a}^{b}\Phi\left( \varepsilon\int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f(a)}^{c}\Phi^{\ast }\left( \frac{1}{\varepsilon}\int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\ \geq\int_{a}^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx-(c-f(a))\Phi\left( \varepsilon\right) -(b-a)\Phi^{\ast}\left( 1/\varepsilon\right) .\end{gathered}$$ According to the Legendre duality,$$\Phi(\varepsilon u)+\Phi^{\ast}(v/\varepsilon)\geq uv\text{\quad for all }u,v\geq0,\ \varepsilon>0. 
\label{fi}$$ For $u=\int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy$ and $v=1$ we get$$\Phi\left( \varepsilon\int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) +\Phi^{\ast}\left( 1/\varepsilon\right) \geq\int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy,$$ and by integrating both sides from $a$ to $b$ we obtain the inequality$$\int_{a}^{b}\Phi\left( \varepsilon\int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+(b-a)\Phi^{\ast}\left( 1/\varepsilon\right) \geq\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx.$$ In a similar manner, starting with $u=1$ and $v=\int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx,$ we arrive first at the inequality $$\Phi\left( \varepsilon\right) +\Phi^{\ast}\left( \frac{1}{\varepsilon}\int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) \geq\int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx,$$ and then to$$\begin{gathered} (c-f(a))\Phi\left( \varepsilon\right) +\int_{f(a)}^{c}\Phi^{\ast}\left( \frac{1}{\varepsilon}\int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\ \geq\int_{f\left( a\right) }^{c}\left( \int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy.\end{gathered}$$ Therefore,$$\begin{gathered} \int_{a}^{b}\Phi\left( \varepsilon\int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f(a)}^{c}\Phi^{\ast }\left( \frac{1}{\varepsilon}\int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\ \geq\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\ -(b-a)\Phi^{\ast}\left( 1/\varepsilon\right) -(c-f(a))\Phi\left( \varepsilon\right) .\end{gathered}$$ According to Theorem \[ThmYoungNondecr\],$$\begin{gathered} \int_{a}^{b}\left( \int_{f\left( 
a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int _{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\ \geq\int_{a}^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx,\end{gathered}$$ and the inequality in the statement of Theorem \[extYoung\] is now clear. In the special case where $K\left( x,y\right) =1,$ $a=f\left( a\right) =0$ and $\Phi(x)=x^{p}/p$ (for some $p>1$), Theorem \[extYoung\] yields the following inequality:$$\int_{0}^{b}f^{p}\left( x\right) dx+\int_{0}^{c}\left( f_{\sup}^{-1}\left( y\right) \right) ^{p}dy\geq pbc-\left( p-1\right) \left( b+c\right) ,\ \text{for every }b,c\geq0.$$ This remark extends a result due to W. T. Sulaiman [@S]. We end this section by noticing the following result that complements Theorem \[ThmYoungNondecr\]. \[PropMerkle\]Under the assumptions of Lemma \[Lem3\],$$\begin{gathered} \int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int _{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\ \leq\max\left\{ \int_{a}^{b}\int_{f\left( a\right) }^{f\left( b\right) }K\left( x,y\right) dydx,\int_{a}^{f_{\sup}^{-1}\left( c\right) }\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx\right\} .\end{gathered}$$ Assuming $K$ strictly positive almost everywhere, the equality occurs if and only if $c=f\left( b\right) .$ If $c<f\left( b\right) $, then from Lemma \[Lem3\] we infer that$$\begin{gathered} \int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int _{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\ =\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{f\left( b\right) }\left( \int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\ -\int_{c}^{f\left( b\right) }\left( 
\int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\ \leq\int_{a}^{b}\int_{f\left( a\right) }^{f\left( b\right) }K\left( x,y\right) dydx.\end{gathered}$$ The other case, $c\geq f\left( b\right) $, is handled similarly. Proposition \[PropMerkle\] extends a result due to M. J. Merkle [@Me]. The precision in Young’s inequality =================================== The main result of this section is as follows: \[ThmPrec\]Under the assumptions of Lemma \[Lem3\], for all $b\geq a\geq0$ and $c\geq f(a),$$$\begin{gathered} \int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int _{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\ -\int_{a}^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx\leq \left\vert \int_{f_{\sup}^{-1}\left( c\right) }^{b}\int_{c}^{f\left( b\right) }K\left( x,y\right) dydx\right\vert \text{.}\end{gathered}$$ Assuming $K$ strictly positive almost everywhere, the equality occurs if and only if $c=f\left( b\right) $. The case where $f\left( a\right) \leq c\leq f\left( b-\right) $ is illustrated in Figure \[fig4\]. The left-hand side of the inequality in the statement of Theorem \[ThmPrec\] represents the measure of the cross-hatched curvilinear trapezium, while the right-hand side is the measure of the $ABCD$ rectangle. 
\[h\] [fig4.jpg]{} Therefore,$$\begin{gathered} \int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int _{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\ -\int_{a}^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx=\int _{f_{\sup}^{-1}\left( c\right) }^{b}\left( \int_{c}^{f\left( x\right) }K\left( x,y\right) dy\right) dx\\ \leq\int_{f_{\sup}^{-1}\left( c\right) }^{b}\int_{c}^{f\left( b\right) }K\left( x,y\right) dydx.\end{gathered}$$ The equality holds if and only if $\int_{f_{\sup}^{-1}\left( c\right) }^{b}\left( \int_{c}^{f\left( x\right) }K\left( x,y\right) dy\right) dx=0,$ that is, when $f\left( b-\right) =c.$ The case where $c\geq f\left( b+\right) $ is similar to the preceding one. This time,$$\begin{gathered} \int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int _{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\ -\int_{a}^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx=\int _{b}^{f_{\sup}^{-1}\left( c\right) }\left( \int_{f\left( b\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx\\ \leq\int_{b}^{f_{\sup}^{-1}\left( c\right) }\int_{f\left( b\right) }^{c}K\left( x,y\right) dydx.\end{gathered}$$ Equality holds if and only if $\int_{b}^{f_{\sup}^{-1}\left( c\right) }\int_{f\left( b\right) }^{c}K\left( x,y\right) dydx=0,\ $so we must have $f\left( b+\right) =c$. The case where $c\in\left[ f\left( b-\right) ,f\left( b+\right) \right] $ is trivial, both sides of our inequality being equal to zero. \[CorMing\]*(*E. Minguzzi [@M]*)*. 
If moreover $K\left( x,y\right) =1$ on $\left[ 0,\infty\right) \times\left[ 0,\infty\right) $, and $f$ is continuous and increasing, then $$\int_{a}^{b}f\left( x\right) dx+\int_{f\left( a\right) }^{c}f^{-1}\left( y\right) dy\ -bc+af\left( a\right) \leq\left( f^{-1}\left( c\right) -b\right) \cdot\left( c-f\left( b\right) \right) .$$ The equality occurs if and only if $c=f\left( b\right) $. More accurate bounds can be indicated under the presence of convexity. \[CorJP\]Let $f$ be a nondecreasing continuous function, which is convex on the interval $\left[ \min\left\{ f_{\sup}^{-1}\left( c\right) ,b\right\} ,\max\left\{ f_{\sup}^{-1}\left( c\right) ,b\right\} \right] $. Then:$$\begin{gathered} i)~\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\ -\int_{a}^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx\\ \leq\int_{f_{\sup}^{-1}\left( c\right) }^{b}\int_{c}^{c+\frac{f(b)-c}{b-f_{\sup}^{-1}\left( c\right) }(x-f_{\sup}^{-1}\left( c\right) )}K\left( x,y\right) dydx\text{,\quad for every }c\leq f\left( b\right) ;\end{gathered}$$$$\begin{gathered} ii)~\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\ -\int_{a}^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx\\ \geq\int_{b}^{f_{\sup}^{-1}\left( c\right) }\int_{f\left( b\right) }^{f(b)+\frac{c-f(b)}{f_{\sup}^{-1}\left( c\right) -b}(x-b)}K\left( x,y\right) dydx\text{,\quad for every }c\geq f\left( b\right) .\end{gathered}$$ If $\ f$ is concave on the aforementioned interval, then the inequalities above work in the reverse way. Assuming $K$ strictly positive almost everywhere, the equality occurs if and only if$\ f$ is an affine function or $f\left( b\right) =c$. 
We restrict ourselves here to the case of convex functions, the argument for the concave functions being similar. The left-hand side term of each of the inequalities in our statement represents the measure of the cross-hatched surface. See Figure 5 and Figure 6. As the points of the graph of the convex function $f$ (restricted to the interval of endpoints $b$ and $f_{\sup}^{-1}\left( c\right) )$ are under the chord joining $\left( b,f\left( b\right) \right) $ and $\left( f_{\sup }^{-1}\left( c\right) ,c\right) ,$ it follows that this measure is less than the measure of the enveloping triangle $MNQ$ when $c\leq f(b).$ This yields $i)$. The assertion $ii)$ follows in a similar way. Corollary \[CorJP\] extends a result due to J. Jakšetić and J. E. Pečarić [@P]. They considered the special case where $K\left( x,y\right) =1$ on $\left[ 0,\infty\right) \times\left[ 0,\infty\right) $ and $f:\left[ 0,\infty\right) \rightarrow\left[ 0,\infty\right) $ is increasing and differentiable, with an increasing derivative on the interval $\left[ \min\left\{ f^{-1}\left( c\right) ,b\right\} ,\max\left\{ f^{-1}\left( c\right) ,b\right\} \right] $ and $f(0)=0.$ In this case the conclusion of Corollary \[CorJP\] reads as follows:$$\begin{aligned} i)\text{ }\int_{0}^{b}f\left( x\right) dx+\int_{0}^{c}f^{-1}\left( y\right) dy\ -bc & \leq\frac{1}{2}\left( f^{-1}\left( c\right) -b\right) \left( c-f\left( b\right) \right) \ \text{for }c<f\left( b\right) ;\\ ii)\text{ }\int_{0}^{b}f\left( x\right) dx+\int_{0}^{c}f^{-1}\left( y\right) dy\ -bc & \geq\frac{1}{2}\left( f^{-1}\left( c\right) -b\right) \left( c-f\left( b\right) \right) \ \text{for }c>f\left( b\right) .\end{aligned}$$ The equality holds if $f\left( b\right) =c$ or $f$ is an affine function. 
The inequality sign should be reversed if $f$ has a decreasing derivative on the interval $$\left[ \min\left\{ f^{-1}\left( c\right) ,b\right\} ,\max\left\{ f^{-1}\left( c\right) ,b\right\} \right] .$$ The connection with $c$-convexity ================================= Motivated by the mass transportation theory, several people [@D1988], [@EN1974] drew a parallel to the classical theory of convex functions by extending the Legendre duality. Technically, given two compact metric spaces $X$ and $Y$ and a *cost density* function $c:X\times Y\rightarrow \mathbb{R}$ (which is supposed to be continuous), we may consider the following generalization of the notion of convex function: \[cConv\]A function $F:X\rightarrow\mathbb{R}$ is $c$-convex if there exists a function $G:Y\rightarrow\mathbb{R}$ such that $$F(x)=\sup_{y\in Y}\left\{ c(x,y)-G(y)\right\} ,\;\text{for all }x\in X. \label{c-conv}$$ We abbreviate (\[c-conv\]) by writing $F=G^{c}$. A useful remark is the equality$$F^{cc}=F,$$ that is,$$F(x)=\sup_{y\in Y}\left\{ c(x,y)-F^{c}(y)\right\} ,\;\text{for all }x\in X. \label{cdual}$$ The classical notion of convex function corresponds to the case where $X$ is a compact interval and $c(x,y)=xy$. The details can be found in [@NP2006], pp. 40-42. 
Theorem \[ThmYoungNondecr\] illustrates the theory of $c$-convex functions for the spaces $X=[a,\infty]$, $Y=[f(a),\infty]$ (the Alexandrov one point compactification of $[a,\infty)$ and respectively $[f(a),\infty)$), and the cost function $$c(x,y)=\int_{a}^{x}\int_{f(a)}^{y}K\left( s,t\right) dtds\text{.} \label{cKrelation}$$ In fact, under the hypotheses of this theorem, the functions$$F(x)=\int_{a}^{x}\left( \int_{f\left( a\right) }^{f\left( s\right) }K\left( s,t\right) dt\right) ds,\quad x\geq a,$$ and$$G(y)=\int_{f\left( a\right) }^{y}\left( \int_{a}^{f_{\sup}^{-1}\left( t\right) }K\left( s,t\right) ds\right) dt,\quad y\geq f(a),$$ verify the relations $F^{c}=G$ and $G^{c}=F$ (due to the equality case as specified in the statement of Theorem \[ThmYoungNondecr\]$)$, so they are both $c$-convex. On the other hand, a simple argument shows that $F$ and $G$ are also convex in the usual sense. Let us call the functions $c$ that admit a representation of the form (\[cKrelation\]) with $K\in L^{1}(\mathbb{R\times R}),$ *absolutely continuous in the hyperbolic sense*. With this terminology, Theorem \[ThmYoungNondecr\] can be rephrased as follows: \[ThmcConv\]Suppose that $c:[a,b]\times\lbrack A,B]\rightarrow\mathbb{R}$ is an absolutely continuous function in the hyperbolic sense with mixed derivative $\frac{\partial^{2}c}{\partial x\partial y}\geq0,$ and $f:[a,b]\rightarrow\lbrack A,B]$ is a nondecreasing function such that $f(a)=A.$ Then$$c(x,y)-c(a,f(a))\leq\int_{a}^{x}\frac{\partial c}{\partial t}(t,f(t))dt+\int _{f(a)}^{y}\frac{\partial c}{\partial s}(f_{\sup}^{-1}(s),s)ds \label{cyineq}$$ for all $(x,y)\in\lbrack a,b]\times\lbrack A,B]$. If $\frac{\partial^{2}c}{\partial x\partial y}>0$ almost everywhere, then (\[cyineq\]) becomes an equality if and only if $y\in\left[ f(x-),f(x+)\right] ;$ here we made the convention $f(a-)=f(a)$ and $f(b+)=f(b).$ Necessarily, an absolutely continuous function $c$ in the hyperbolic sense is continuous. 
It admits partial derivatives of the first order and a mixed derivative $\frac{\partial^{2}c}{\partial x\partial y}$ almost everywhere. Moreover, the functions $y\rightarrow\frac{\partial c}{\partial x}(x,y)$ and $x\rightarrow\frac{\partial c}{\partial y}(x,y)$ are defined everywhere in their interval of definition and are absolutely continuous; they are also nondecreasing provided that $\frac{\partial^{2}c}{\partial x\partial y}\geq0$ almost everywhere. A special case of Theorem \[ThmcConv\] was proved by Zs. Páles [@Pa1990], [@Pa1992] (assuming $c:[a,b]\times\lbrack A,B]\rightarrow \mathbb{R}$ a continuously differentiable function with nondecreasing derivatives $y\rightarrow\frac{\partial c}{\partial x}(x,y)$ and $x\rightarrow\frac{\partial c}{\partial y}(x,y),$ and $f:[a,b]\rightarrow \lbrack A,B]$ an increasing homeomorphism). An example which escapes his result but is covered by Theorem \[ThmcConv\] is offered by the function$$c(x,y)=\int_{0}^{x}\left\{ \frac{1}{s}\right\} ds\int_{0}^{y}\left\{ \frac{1}{t}\right\} dt,\,\quad x,y\geq0,$$ where $\left\{ \frac{1}{s}\right\} $ denotes the fractional part of $\frac{1}{s}$ if $s>0,$ and $\left\{ \frac{1}{s}\right\} =0$ if $s=0$. According to Theorem \[ThmcConv\],$$\begin{gathered} \int_{0}^{x}\left\{ \frac{1}{s}\right\} ds\int_{0}^{y}\left\{ \frac{1}{t}\right\} dt\\ \leq\int_{0}^{x}\left( \left\{ \frac{1}{s}\right\} \int_{0}^{f(s)}\left\{ \frac{1}{t}\right\} dt\right) ds+\int_{0}^{y}\left( \left\{ \frac{1}{t}\right\} \int_{0}^{f_{\sup}^{-1}(t)}\left\{ \frac{1}{s}\right\} ds\right) dt,\end{gathered}$$ for every nondecreasing function $f:[0,\infty)\rightarrow\lbrack0,\infty)$ such that $f(0)=0.$ **Acknowledgement.** The authors were supported by CNCSIS Grant PN2 ID\_$420.$ [99]{} P. J. Acklam, An algorithm for computing the inverse normal cumulative distribution function, http://home.online.no/pjacklam/notes/invnorm/ F. Cunningham Jr. and N.
Grossman, On Young’s inequality, *The American Mathematical Monthly*, **78** (1971), No. 7, 781-783. H. Dietrich, Zur c-Konvexität und c-Subdifferenzierbarkeit von Funktionalen, *Optimization*, **19** (1988), 355-371. G. H. Hardy, J. E. Littlewood and G. Pólya, *Inequalities*, Cambridge Mathematical Library, 2nd Ed., 1952, reprinted 1988. K.-H. Elster and R. Nehse, Zur Theorie der Polarfunktionale, *Math. Operationsforschung Statist.*, **5** (1974), 3-21. M. J. Merkle, A contribution to Young’s inequality, *Publ. Elektrotehn. Fak. Univ. Beograd*, Ser. Mat.-Fiz., No. 461-497 (1974). E. Minguzzi, An equivalent form of Young’s inequality with upper bound, *Applicable Analysis and Discrete Mathematics*, **2** (2008), issue 2, 213-216. D. S. Mitrinović, *Analytic Inequalities*, Springer-Verlag, Berlin and New York, 1970. C. P. Niculescu and L.-E. Persson, *Convex Functions and their Applications. A Contemporary Approach*, CMS Books in Mathematics, vol. **23**, Springer-Verlag, New York, 2006. A. Oppenheim, Note on Mr. Cooper’s generalization of Young’s inequality, *J. London Math. Soc.*, **2** (1927), 21-23. Zs. Páles, On Young-type inequalities, *Acta Sci. Math.* (Szeged), **54** (1990), 327-338. Zs. Páles, A general version of Young’s inequality, *Archiv der Mathematik*, **58** (1992), No. 4, 360-365. J. E. Pečarić and J. Jakšetić, A note on Young inequality, *Math. Inequal. Appl.*, **12** (2009), to appear. A. W. Roberts and D. E. Varberg, *Convex Functions*, Pure and Applied Mathematics, vol. 57, Academic Press, New York, 1973. W. T. Sulaiman, Notes on Young’s inequality, *International Mathematical Forum*, **4** (2009), No. 24, 1173-1180. C. Villani, *Optimal Transport. Old and New*, Springer-Verlag, 2009. A. Witkowski, On Young’s inequality, *Journal of Inequalities in Pure and Applied Mathematics*, **7** (2006), Issue 5, article 164. W. H. Young, On classes of summable functions and their Fourier series, *Proc. Roy. Soc. London*, Ser. A, **87** (1912), 225-229.
[^1]: Corresponding author: Constantin P. Niculescu
--- abstract: | The standard Wojtkowski–Markarian–Donnay–Bunimovich technique for the hyperbolicity of focusing or mixed billiards in the plane requires the diameter of a billiard table to be of the same order as the largest radius of curvature along the focusing boundary. This is due to the physical principle that is used in the proofs, the so-called defocusing mechanism of geometrical optics. In this paper we construct examples of hyperbolic billiards with a focusing boundary component of arbitrarily small curvature whose diameter is bounded by a constant independent of that curvature. Our proof employs a nonstandard cone bundle that does not solely use the familiar dispersing and defocusing mechanisms. Mathematics Subject Classification: 37D50, 37D25, 37A25. author: - '<span style="font-variant:small-caps;">Luca Bussolari</span> [^1] $^\ddagger$ <span style="font-variant:small-caps;">Marco Lenci</span> $^*$[^2] [^3]' date: December 2007 title: | **Hyperbolic billiards with nearly flat\ focusing boundaries. I** --- Introduction {#sec-intro} ============ Much has been written, in the scientific literature, about the [hyperbolic]{}ity of [billiard]{}s in two dimensions. So much that general principles have even been devised for the ‘design of [billiard]{}s with nonvanishing Lyapunov exponents’. The expression is taken from the title of the 1986 seminal paper by Wojtkowski [@w2], in which he beautifully links the question of exponential instability (i.e., positivity of a Lyapunov exponent) to a few simple observations from geometrical optics. By means of the powerful *invariant cone technique* [@w1; @k; @cm], Wojtkowski gives sufficient conditions for a planar [billiard]{} to have nonzero Lyapunov exponents, thereby implying a fuller range of [hyperbolic]{} properties via the general results of Katok and Strelcyn on Pesin’s theory for [dynamical system]{}s with singularities [@ks].
Wojtkowski’s conditions are rather undemanding for *dispersing* and *semidispersing* [billiard]{}s (i.e., [billiard]{}s in a domain $\Omega \subset {\mathbb{R}}^2$, a.k.a. *table*, whose boundary is the finite union of smooth convex pieces, when seen from inside $\Omega$), and much more restrictive for *focusing*, *semifocusing* and *mixed* [billiard]{}s (that is, cases when $\partial \Omega$ is made up—completely or partially, respectively—by concave pieces). (Both in the dispersing and in the focusing case, the prefix semi- means that $\partial \Omega$ has some flat parts as well.) For the latter type of [billiard]{}s, further work has been done by Markarian [@m1; @m2], Donnay [@d] and Bunimovich [@b3] (see [@cm Chap. 9] for an overview of the subject and [@de] for an interesting variation). If we call *boundary component* each smooth piece of $\partial \Omega$, one of the conditions in [@w2] is that the inner semiosculating disc at any given point of a focusing boundary component must not intersect other components, or the semiosculating discs relative to other focusing components ([@m1] has a similar condition). This is required in order to implement the so-called *defocusing mechanism*, which can be loosely described like this: One wants diverging beams of [trajector]{}ies to keep diverging after every collision with the boundary. But at a focusing portion of the boundary a diverging beam may be bounced back as a converging beam. A way around this problem is to let the converging beam travel untouched for a sufficiently long time until the [trajector]{}ies focus among themselves and then start to diverge again. The defocusing mechanism is the closest extension of Sinai’s original idea of extracting [hyperbolic]{}ity from the expanding features of dispersing boundaries [@s].
At least to our knowledge, it has remained unsurpassed since Bunimovich introduced it in 1974 [@b1], to become very popular a few years later, when it was used to work out the famous stadium [billiard]{} [@b2]. Sticking too much to the standard principles, however, creates a problem and, in some sense, a paradox. The condition on the semiosculating discs, and each of its later analogues, requires a table with focusing components to have a diameter of the order of the largest radius of curvature among the focusing points of the boundary. To illustrate how this may seem a paradox, consider the following example: Take a unit square and replace three of its sides with circular arcs of curvature $k_d \in (-\sqrt{2},0)$ having their endpoints in the vertices of the square. In this paper we use the convention that the curvature is positive at focusing points of the boundary and negative at dispersing points, so the arcs are convex relative to the interior of the square; the condition $|k_d| < \sqrt{2}$ ensures that each pair of adjacent arcs intersects only at the common endpoint. The resulting [billiard]{} is semidispersing, thus belongs to the standard class and is well-known to be uniformly [hyperbolic]{}, Bernoulli, and so on [@cm]. Now perturb the fourth side into a focusing circular arc of curvature $k_f \ll 1$. No matter how small the perturbation, this new [billiard]{} will never satisfy Wojtkowski’s principle and is not currently known to be [hyperbolic]{}, although it presumably is. This may not sound too strange. After all, certain perturbations of dispersing [billiard]{}s are known to possess elliptic islands [@rt; @tr]. But the paradox is that the smaller the perturbation, the less adequate the standard technique; that is, the closer the [billiard]{} comes to being dispersing, the worse the method that is supposed to exploit the dispersing nature of the boundaries applies.
Up until $k_f=0$, at which point everything abruptly works again to the full power of the theory of [hyperbolic]{} [billiard]{}s. Here we address this problem and, although we cannot yet prove that the perturbed square [billiard]{} is [hyperbolic]{}, we devise a couple of models that make clear what the difficulties are in extending the current methodology. These [billiard]{}s, which are modifications of the example just discussed, are depicted in Figs. \[fig-t1intro\] and \[fig-t2intro\]. They are indeed two families of [billiard]{}s, as we are interested in the case when the curvature of the focusing boundary goes to zero. We define an invariant cone bundle that exploits the fact that the focusing component is nearly flat, and thus almost always acts as a semidispersing boundary. ![The main [billiard]{} table](fig-t1intro.eps){width="12cm"} \[fig-t1intro\] In any event, we are able to answer the following questions in the affirmative: 1. Can one design a [billiard]{} whose [hyperbolic]{}ity is proved via a set of invariant cones that does not use exclusively the dispersing/defocusing mechanism for beams of [trajector]{}ies? 2. \[pt2\] Can one construct a family of [hyperbolic]{} [billiard]{} tables with a (nonvanishing) focusing component whose maximum curvature approaches zero, and such that the area of the table is bounded above? 3. \[pt3\] Can one require the diameter to be bounded above as well? 4. Are these [billiard]{}s [ergodic]{}? (This will be proved in [@bl].) ![A modification of the main [billiard]{} table](fig-t2intro.eps){width="6.2cm"} \[fig-t2intro\] Points \[pt2\] and \[pt3\] show, independently of the method utilized, that one can go beyond the apparent implication ‘almost flat focusing boundaries imply very large tables’. This is the plan of the paper: In Section \[sec-prel\] we review the basic definitions of [billiard]{} dynamics.
In Section \[sec-cones\] we present and adapt Wojtkowski’s theory of invariant cones derived from geometrical optics. In Section \[sec-hyp\] we define the first of our models and choose suitable cones to prove its [hyperbolic]{}ity. In Section \[sec-conf\] we show that the [billiard]{} introduced before can be chosen with a bounded area, and finally we present a second model which has a bounded diameter as well. **Acknowledgments.**  We would like to thank Gianluigi Del Magno for an instructive discussion on the subject. M.L. acknowledges partial support from NSF Grant DMS-0405439. Preliminaries {#sec-prel} ============= A planar [billiard]{} is the [dynamical system]{} generated by the flow of a point particle that moves inertially inside a closed region ${\Omega}\subset {\mathbb{R}}^2$ and collides elastically at the boundary; the latter is assumed to have an infinite mass. This implies that the [trajector]{}y of the particle, near the collision point, verifies the well-known *Fresnel law*: the angle of incidence equals the angle of reflection. The region ${\Omega}$ is called the *[billiard]{} table*. We denote ${\Gamma}= \partial {\Omega}$ and assume that ${\Gamma}$ is piecewise smooth (at least $C^3$). Let $(q(t),u(t))$ represent the position and the velocity of the particle at time $t$. It is an easy consequence of the conservation of energy that $\|u(t)\| =$ constant. Therefore, by a rescaling of time, one can always reduce to the situation where $\|u\| = 1$, which we assume throughout the paper. The product ${\Omega}\times S^1$ is the natural phase space of the [billiard]{} flow, with a couple of extra specifications: First, if $q \in {\Gamma}$ and $u$ points outwardly, then $(q,u)$ is identified with $(q,u')$, where $u'$ is the outgoing (i.e., inward) velocity of a collision at $q$ with incoming velocity $u$. Second, if $q$ is in a corner, the flow is not defined.
The [billiard]{} flow preserves the Lebesgue [measure]{} on ${\Omega}\times S^1$, as can be verified directly or by applying the Liouville Theorem to this nonsmooth Hamiltonian [system]{}. Now let ${\mathcal{M}}\subset {\Omega}\times S^1$ be the set of all pairs $(q,u)$ with $q \in {\Gamma}$ and $u$ pointing inside the table. These pairs are sometimes called *line elements* [@s] and ${\mathcal{M}}$ is evidently a global cross section for the flow. The corresponding Poincaré map ${\mathcal{F}}: {\mathcal{M}}\longrightarrow {\mathcal{M}}$ is called the *[billiard]{} map* and acts as follows: if $q' \in {\Gamma}$ is the first collision point of the flow-[trajector]{}y with initial conditions $(q,u)$, and $u'$ is the postcollisional velocity there, then ${\mathcal{F}}(q,u) = (q',u')$. ${\mathcal{F}}(q,u)$ is undefined when $q'$ is a vertex of ${\Gamma}$, and is discontinuous at tangential collisions, i.e., when $u'$ is tangent to ${\Gamma}$ in $q'$. For the sake of simplicity, those latter line elements are removed as well from the domain of ${\mathcal{F}}$. The set of all removed $(q,u)$ is denoted ${\mathcal{S}}_1$, or ${\mathcal{S}}_1^+$. We identify ${\mathcal{M}}$ with the rectangle $[0,L] \times [-\pi/2,\pi/2]$, where $L$ is the perimeter of ${\Omega}$: each $(q,u)$ is identified with the pair $(s,{\alpha})$, where $s$ is the arclength coordinate of $q$ (relative to a fixed choice of the origin $s=0$ and oriented counterclockwise) and $\alpha$ is the angle (oriented clockwise) between $u$ and the inner normal to ${\Gamma}$ in $q$. The Lebesgue [measure]{} on ${\Omega}\times S^1$ induces an ${\mathcal{F}}$-invariant [measure]{} $\mu$ on ${\mathcal{M}}$ which, in the above coordinates, is described by $d\mu(s,{\alpha}) = c \, \cos{\alpha}\, ds d{\alpha}$. The constant $c$ is customarily chosen so that $\mu$ is a probability [measure]{}.
Let us indicate with ${\mathcal{S}}_0$ the set of all pairs $(s,{\alpha}) \in {\mathcal{M}}$ where $s$ corresponds to a vertex of ${\Gamma}$ or ${\alpha}= \pm \pi/2$. The set ${\mathcal{S}}_1 = {\mathcal{S}}_1^+$ introduced earlier is morally given by “${\mathcal{S}}_1^+ := {\mathcal{F}}^{-1} {\mathcal{S}}_0$”. For historical reasons, this is usually called the *singularity set* of ${\mathcal{F}}$, even though the differential of ${\mathcal{F}}$ is singular only at line elements resulting in tangential hits. Analogously, for $n>1$, ${\mathcal{S}}_n^+ := {\mathcal{S}}_1^+ \cup {\mathcal{F}}^{-1} {\mathcal{S}}_1^+ \cup \cdots \cup {\mathcal{F}}^{-n+1} {\mathcal{S}}_1^+$ is the set where ${\mathcal{F}}^n$ is not defined, which is called the singularity set of ${\mathcal{F}}^n$. We also introduce “${\mathcal{S}}_1^- := {\mathcal{F}}{\mathcal{S}}_0$” and, for $n>1$, ${\mathcal{S}}_n^- := {\mathcal{S}}_1^- \cup {\mathcal{F}}{\mathcal{S}}_1^- \cup \cdots \cup {\mathcal{F}}^{n-1} {\mathcal{S}}_1^-$. These are the singularity sets for the powers of the inverse map ${\mathcal{F}}^{-1}$. Lastly, ${\mathcal{S}}_\infty^+ := \bigcup_{n=1}^{\infty} {\mathcal{S}}_n^+$, ${\mathcal{S}}_\infty^- := \bigcup_{n=1}^{\infty} {\mathcal{S}}_n^-$, and ${\mathcal{S}}:= {\mathcal{S}}_\infty^+ \cup {\mathcal{S}}_\infty^-$. Each ${\mathcal{S}}_n^\pm$ is the union of smooth curves whose endpoints lie either on another such curve or on the *generalized boundary* of ${\mathcal{M}}= [0,L] \times [-\pi/2,\pi/2]$, which is defined as the boundary of ${\mathcal{M}}$ plus all the vertical segments $s=s_i$, where $s_i$ is the boundary coordinate of a vertex of ${\Gamma}$. If $L < \infty$, the number of vertices is finite, and the curvature of ${\Omega}$ is bounded, then ${\mathcal{S}}_n^\pm$ comprises only a finite number of smooth curves. Under the above assumptions, ${\mathcal{F}}$ is a piecewise differentiable map with singularities, of the type studied by Katok and Strelcyn in [@ks]. 
Among their results is a suitable version of the Oseledec Theorem which guarantees, for a.e. $(s,{\alpha}) =: x \in {\mathcal{M}}$: 1. A decomposition of the tangent space $T_x {\mathcal{M}}$ into $E_x^+ \oplus E_x^-$. These one-dimensional spaces are dynamics-invariant in the sense that $(D {\mathcal{F}})_x E_x^\pm = E_{{\mathcal{F}}x}^\pm$, where $(D {\mathcal{F}})_x$ denotes the differential of ${\mathcal{F}}$ at $x$. 2. The existence of the Lyapunov exponents $\lambda_\pm(x)$, defined as $$\lambda_\pm(x) := \lim_{n \to +\infty} \frac1n \log \| (D {\mathcal{F}}^n)_x v_\pm \|,$$ with $v_\pm \in E_x^\pm$. Since $\mu$ is absolutely continuous w.r.t. the Lebesgue [measure]{} on ${\mathcal{M}}$, we have $\lambda_+ (x) = -\lambda_- (x)$. We adopt the convention that $\lambda_+ (x) \ge 0$. The [dynamical system]{} is [hyperbolic]{}, by definition, if $\lambda_+ (x) > 0$ almost everywhere. If the [system]{} is [ergodic]{} too, then $\lambda_+ (x) =$ constant $=: \lambda_+$. Geometrical optics and cone bundles {#sec-cones} =================================== In this section, which liberally draws from [@w2], we recall the basic tenets of the invariant cone technique for the [hyperbolic]{}ity of planar [billiard]{}s (cf. also [@lw]), and prove a couple of results that are specifically designed for our [system]{}s. Given $x\in {\mathcal{M}}$ and two linearly independent vectors $v_1, v_2 \in T_x {\mathcal{M}}$, we define the *cone with boundaries $v_1, v_2$* as the set $$\label{cone} C(x) := {\left\{ av_1 + bv_2 \: \left| \: a,b \in {\mathbb{R}},\ ab \ge 0 \right. \! \right\} }.$$ If $C(x)$ is defined at every, or almost every, $x \in {\mathcal{M}}$ and the dependence on $x$ is measurable, we speak of $C \subset T{\mathcal{M}}$ as a measurable cone bundle. A measurable cone bundle $C$ is said to be: - *invariant*, if $(D{\mathcal{F}})_x C(x) \subseteq C({\mathcal{F}}x)$ for $\mu$-a.e.
$x$; - *strictly invariant*, if $(D{\mathcal{F}})_x C(x) \subset C({\mathcal{F}}x)$ for $\mu$-a.e. $x$; - *eventually strictly invariant*, if it is invariant and, for $\mu$-a.e. $x$, there exists $n(x) \in {\mathbb{Z}}^+$ such that $(D {\mathcal{F}}^{n(x)})_x C(x) \subset C({\mathcal{F}}^{n(x)} x)$. The next theorem was proved in [@w1]. \[thm-conhyp\] Given a [billiard]{} map ${\mathcal{F}}$ as described above, if there exists an eventually strictly invariant measurable cone bundle, then the Lyapunov exponent $\lambda_+(x)$ is positive for $\mu$-a.e. $x \in {\mathcal{M}}$. In [@w2] Wojtkowski reduces the invariance of a cone bundle to a problem of geometrical optics concerning the behavior of a family (a *beam*) of nearby [trajector]{}ies. We present the main ideas. To a tangent vector $v \in T_x {\mathcal{M}}$ in phase space is naturally associated a differentiable curve $\varphi: (-{\varepsilon}, {\varepsilon}) \longrightarrow {\mathcal{M}}$ such that $\varphi(0) = x$ and $\varphi'(0) = v$. By construction, $\sigma \mapsto \varphi(\sigma)$ is uniquely determined in linear approximation around $0$. Using the representation of ${\mathcal{M}}$ as a subset of ${\Omega}\times S^1$, and the notation $\varphi(\sigma) = (q(\sigma), u(\sigma)) \in {\Omega}\times S^1$, we construct the family of lines, or *rays*, $l^+ (\sigma) := {\left\{ \left. \! q(\sigma) + r u(\sigma) \: \right| \: r \in {\mathbb{R}}\right\} }$. Also, denoting by $u^-(\sigma)$ the outward-pointing, precollisional vector of $u(\sigma)$ at $q(\sigma) \in {\Gamma}$, we define $l^- (\sigma) := {\left\{ \left. \! q(\sigma) + r u^- (\sigma) \: \right| \: r \in {\mathbb{R}}\right\} }$. In first approximation, that is, when ${\varepsilon}\to 0^+$, the now infinitesimal beam of rays *focuses* in a point, which means that all rays, up to adjustments of order ${\varepsilon}$ in $(q(\sigma), u(\sigma))$, have a common intersection. We also consider the case where the common intersection is at infinity.
This *focal point* is clearly a [function]{} of $v$ only: it is denoted $F^+(v)$ for the family $\{ l^+ (\sigma) \}$ and $F^-(v)$ for the family $\{ l^- (\sigma) \}$. Let us call $f^\pm (v)$ the signed distances, along $l^\pm(0)$, between $F^\pm(v)$ and $q_0 = q(0)$ ($l^\pm(\sigma)$ has the orientation induced by the parameter $r \in {\mathbb{R}}$, that is, outward for $l^- (\sigma)$ and inward for $l^+ (\sigma)$, relative to ${\Omega}$). In the remainder, we will omit the dependence on $v$ from the notation whenever there is no ambiguity. Denoting by $(ds, d{\alpha})$ the components of $0 \ne v \in T_{(s_0, {\alpha}_0)} {\mathcal{M}}$ in the natural basis $\{ \partial / \partial s,\partial / \partial{\alpha}\}$, one has $$f^{\pm} = \left\{ \begin{array}{lll} {\displaystyle}\frac{\cos {\alpha}_0} {\pm k(s_0) - \frac{d{\alpha}}{ds}}, && \mbox{if } ds \ne 0; \vspace{6pt} \\ 0, && \mbox{if } ds = 0. \end{array} \right. \label{fpm}$$ Here $k(s)$ denotes the curvature of ${\Gamma}$ at the point of coordinate $s$ (as specified in the introduction, the curvature is taken positive at focusing points of the boundary, and negative at dispersing points). The formula (\[fpm\]) is derived, e.g., in [@w2]. It is easy to see that $f^\pm$ are projective coordinates of $T_x {\mathcal{M}}$. Hence any cone of the type (\[cone\]) can be described by a closed interval in the coordinate $f^+ \in \overline{{\mathbb{R}}}$, where $\overline{{\mathbb{R}}} := {\mathbb{R}}\cup \{ \infty \}$ is the compactification of ${\mathbb{R}}$. Henceforth, for simplicity, we will drop the subscripts from the coordinates $(s_0, {\alpha}_0)$ of the collision pair. Also, we will use the imprecise terminology ‘the point $s \in {\Gamma}$’ to mean ‘the point in ${\Gamma}$ of coordinate $s$’. The next lemma is known in optics as the mirror equation [@w2; @cm].
For an infinitesimal beam of [trajector]{}ies colliding around the point $s \in {\Gamma}$ with reflection angles around ${\alpha}$, $$-\frac1{f^{-}} + \frac1{f^{+}} = \frac{2k(s)} {\cos {\alpha}}.$$ \[lem-mirror-eq\] We now present a visual description of the cone $C(x) = C(s,{\alpha})$ on the configuration plane containing ${\Omega}$. For $s \in {\Gamma}$ and $\beta>0$, denote by $D_\beta (s)$ the closed disc of radius $1 / |\beta k(s)|$ tangent to ${\Gamma}$ in $s$ on the internal side of ${\Omega}$. Analogously, for $\beta<0$, let $D_\beta (s)$ be the closed disc of radius $1 / |\beta k(s)|$ tangent to ${\Gamma}$ in $s$ on the external side of ${\Omega}$. Consider also the two closed halfplanes delimited by $t(s)$, the tangent line to ${\Gamma}$ in $s$: let $D_{0+}(s)$ denote the internal halfplane, relative to ${\Omega}$, and $D_{0-}(s)$ the external one. See Fig. \[fig-dbetas\]. The interior of $D_\beta (s)$ is indicated with $D_\beta^\circ (s)$. ![The tangent line $t(s)$ and some discs $D_\beta(s)$. The yellow part of the [trajector]{}y is the locus of the focal points $F^+$ corresponding to a certain cone.](fig-dbetas.eps){width="8cm"} \[fig-dbetas\] Given a cone $C(s,{\alpha})$ of the type *(\[cone\])*, $v \in C(s,{\alpha})$ corresponds to $F^+(v) \in l^+(0) \cap D$, where $D \subset {\mathbb{R}}^2$ is one of the following sets: - $D = D_{\beta_1} (s)$; - $D = D_{\beta_1} (s) \setminus D_{\beta_2}^\circ (s)$, with $|\beta_1| < |\beta_2|$; - $D = D_{\beta_1} (s) \cup D_{\beta_2} (s)$, with $\beta_1 \ge 0$ and $\beta_2 \le 0$; - $D = {\mathbb{R}}^2 \setminus (D_{\beta_1}^\circ (s) \cup D_{\beta_2}^\circ (s) \cup \{s\} )$, with $\beta_1 \ge 0$ and $\beta_2 \le 0$. Moreover, $$F^+(v) \in \partial D_\beta (s) \setminus \{ s \} \quad \Longleftrightarrow \quad f^+(v) = \frac{2 \cos {\alpha}} {\beta |k(s)|}.$$ \[lem-fplus\] [<span style="font-variant:small-caps;">Proof.</span> ]{}By construction $F^+ = F^+(v) \in l^+(0)$. 
Since $f^+$ is a coordinate on $l^+(0)$, a closed interval in the projectivized $f^+ \in \overline{{\mathbb{R}}}$ corresponds, on $l^+(0)$, to either a closed segment or a closed halfline or the union of two disjoint closed halflines. Cases *(a)*-*(d)* cover all possibilities. The second statement, for $\beta>0$, comes from elementary trigonometry (see Fig. \[fig-dbetas\]), and it trivially extends to the case $\beta<0$ as well. The reason why, in Lemma \[lem-fplus\], we chose such peculiar sets $D$ to cut a (projective) closed segment on $l^+(0)$, upon intersection, will be made clear by the next lemma. In particular, we will see that describing the cones in terms of the discs $D_\beta(s)$ will eliminate the dependence on ${\alpha}$ in the mirror equation of Lemma \[lem-mirror-eq\]. For an infinitesimal beam of trajectories colliding around $s \in {\Gamma}$, $F^{-} \in \partial D_\beta (s)$ [if and only if ]{}$F^{+} \in \partial D_{\beta'} (s)$, where $$\beta' = 4 \, \mathrm{sgn}(k(s)) - \beta$$ (with the understanding that $F^\pm \in \partial D_{0\pm}$ means $F^\pm \in \{ s, \infty \}$). \[lem-betap\] [<span style="font-variant:small-caps;">Proof.</span> ]{}Let ${\alpha}$ be the angle of reflection (and thus of incidence) of the [trajector]{}y we are perturbing. Disregarding the case $F^+ = F^- = s$, we know from Lemma \[lem-fplus\] that $F^+ \in \partial D_{\beta'} (s)$ corresponds to $f^+ = 2 \cos{\alpha}/ (\beta' |k(s)|)$. Also, $F^- \in \partial D_\beta (s)$ is equivalent to $f^- = -2 \cos{\alpha}/ (\beta |k(s)|)$ (the minus sign is needed because a focal point $F^-$ lying on the internal halfplane $D_{0+} (s)$ corresponds to a negative $f^-$ along $l^-(0)$, and vice versa). Direct substitution into Lemma \[lem-mirror-eq\] yields $$\frac{\beta |k(s)|} {2\cos {\alpha}} + \frac{\beta' |k(s)|} {2 \cos{\alpha}} = \frac{2k(s)} {\cos{\alpha}},$$ whence the assertion.
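Both lemmas are algebraic identities once the focal distances are written in the coordinates of (\[fpm\]), so they can be verified mechanically. The sketch below (a numerical sanity check of ours, not from the paper) substitutes $f^{+}=\cos{\alpha}/(k-d{\alpha}/ds)$ and $f^{-}=\cos{\alpha}/(-k-d{\alpha}/ds)$ into the mirror equation of Lemma \[lem-mirror-eq\], and likewise checks that the pair $f^{+}=2\cos{\alpha}/(\beta'|k|)$, $f^{-}=-2\cos{\alpha}/(\beta|k|)$ with $\beta'=4\,\mathrm{sgn}(k)-\beta$ satisfies it.

```python
import math
import random

random.seed(0)

def mirror_lhs(f_minus, f_plus):
    # left-hand side of the mirror equation: -1/f^- + 1/f^+
    return -1.0 / f_minus + 1.0 / f_plus

ok = True
for _ in range(1000):
    k = random.uniform(0.1, 2.0) * random.choice([-1.0, 1.0])  # curvature, k != 0
    alpha = random.uniform(-1.4, 1.4)                          # reflection angle
    m = random.uniform(-3.0, 3.0)                              # d(alpha)/ds of the tangent vector
    ca = math.cos(alpha)

    # focal distances from formula (fpm)
    f_plus = ca / (k - m)
    f_minus = ca / (-k - m)
    ok &= abs(mirror_lhs(f_minus, f_plus) - 2.0 * k / ca) < 1e-7

    # Lemma [lem-betap]: beta' = 4 sgn(k) - beta
    beta = random.uniform(0.5, 3.5)
    beta_p = 4.0 * math.copysign(1.0, k) - beta
    f_plus2 = 2.0 * ca / (beta_p * abs(k))
    f_minus2 = -2.0 * ca / (beta * abs(k))
    ok &= abs(mirror_lhs(f_minus2, f_plus2) - 2.0 * k / ca) < 1e-7
```

In exact arithmetic both residuals vanish identically; the tolerance only absorbs floating-point rounding.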
With the tools of Section \[sec-cones\], the problem of the cone invariance along a given [trajector]{}y can be reduced to the study of the focal points of one-parameter perturbations of that [trajector]{}y. We single out the information that we need for our forthcoming proofs. For an infinitesimal beam of [trajector]{}ies colliding around $s$ we have the following: If $s$ belongs to a focusing component of ${\Gamma}$, i.e., $k(s)>0$, then: $$\begin{aligned} F^\mp \in D_4(s) & \Longleftrightarrow & F^\pm \in D_{0-}(s); \\ F^\mp \in D_2(s) \setminus D_4^\circ (s) & \Longleftrightarrow & F^\pm \in D_{0+}(s) \setminus D_2^\circ (s). \end{aligned}$$ If $s$ belongs to a dispersing component of ${\Gamma}$, i.e., $k(s)<0$, then $$\begin{aligned} F^\mp \in D_{-4}(s) & \Longleftrightarrow & F^\pm \in D_{0+}(s); \\ F^\mp \in D_{-2}(s) \setminus D_{-4}^\circ (s) & \Longleftrightarrow & F^\pm \in D_{0-}(s) \setminus D_{-2}^\circ (s). \end{aligned}$$ Analogous equivalences hold for the interior of such cones. The situation is illustrated in Fig. \[fig-p-dbeta\]. \[prop-d-beta\] [<span style="font-variant:small-caps;">Proof.</span> ]{}We only prove the first statement, the others being completely analogous. Once again, we disregard the easy case $F^+ = F^- = s$. We have $F^- \in D_4(s)$ $\Leftrightarrow$ $F^- \in \partial D_\beta(s)$, for some $\beta \in [4, +\infty)$ $\Leftrightarrow$ (by Lemma \[lem-betap\]) $F^+ \in \partial D_{\beta'} (s)$, for some $\beta' \in (-\infty, 0]$ $\Leftrightarrow$ $F^+ \in D_{0-} (s)$. Clearly, nothing changes if we swap $F^-$ and $F^+$. ![A geometric representation of Proposition \[prop-d-beta\]. The left picture represents the first two statements (focusing border); the right picture represents the last two statements (dispersing border). Yellow/blue sets of focal points $F^-$ are mapped into yellow/blue sets of focal points $F^+$.
The dependence on $s$ in the notation has been omitted.](fig-p-dbeta.eps){width="13.5cm"} \[fig-p-dbeta\] Hyperbolicity {#sec-hyp} ============= Fig. \[fig-t1\] shows the [billiard]{} table we are mainly interested in for the rest of the paper. We refer to it for the definition of the quantities $l, h > 0$. The three dispersing components of the boundary ${\Gamma}$ are circular arcs of curvature $k_d \in (-\sqrt{2},0)$. Their union is denoted ${\Gamma}_d$. The focusing component is a circular arc of curvature $k_f > 0$ and is denoted ${\Gamma}_f$. The remaining, flat, part of the boundary is denoted ${\Gamma}_s$. The two rectangular portions of ${\Omega}$ which ${\Gamma}_s$ almost delimits will be referred to as *the strips*, or *the corridors*, or whatever one’s fancy suggests each time. ![The definition of the table ${\Omega}$.](fig-t1.eps){width="12.4cm"} \[fig-t1\] The geometric constants $l, h, k_f, k_d$ are chosen via the following procedure. Keep in mind that we are interested in small values of $k_f$ (see the Introduction) and $h$ (see Section \[sec-conf\]). One starts by fixing arbitrary values of $k_d$ and $h$. Then $k_f$ is determined by a geometric condition that we presently describe, with the help of Fig. \[fig-iss\]. For $s' \in {\Gamma}_d$ and $s'' \in {\Gamma}_f$, consider the straight line passing through $s'$ and $s''$, and let $I(s', s'')$ be its intersection with the disc $D_{-2}(s')$. The curvature $k_f$ must be so small that $$\label{C1} \forall s' \in {\Gamma}_d, \ \forall s'' \in {\Gamma}_f, \quad I(s', s'') \subset D_4(s'').$$ Finally, $l$ is chosen such that $$\label{C2} l \ge \frac1 {k_f} .$$ ![Condition (\[C1\]) for two different choices of $s'$.](fig-iss.eps){width="13cm"} \[fig-iss\] Condition (\[C1\]) rules out the separation between the boundary components required by the standard theory of Wojtkowski, Markarian, Donnay and Bunimovich, which is summed up, e.g., in [@cm Thm. 9.19].
The hypotheses of that theorem are evidently violated as (\[C1\]) implies in particular that $D_4(s'')$ contains large portions of ${\Gamma}_d$, for all $s'' \in {\Gamma}_f$. \[rk-sep\] We are now going to prove the [hyperbolic]{}ity of this [billiard]{} [system]{} via Theorem \[thm-conhyp\]. However, we will not use exactly the Poincaré section that we have introduced in Sections \[sec-prel\] and \[sec-cones\], but a similar section that neglects the hits on the flat boundary component ${\Gamma}_s$. This is standard procedure in the theory of [hyperbolic]{} [billiard]{}s as it is a basic fact that the collisions against a flat boundary do not change the [hyperbolic]{} features of a beam of [trajector]{}ies. (One easy way to see this is to *unfold* the [billiard]{} along a given [trajector]{}y: every time the material point hits a flat side we pretend that it continues its precollisional rectilinear motion, but we reflect the table around that flat side; apart from this rigid motion of the [billiard]{} table, nothing changes for the [trajector]{}y or any of its infinitesimal perturbations.) Let us denote $\bar{{\Gamma}} := {\Gamma}_f \cup {\Gamma}_d$. With the usual abuse of notation, whereby a point $q \in {\Gamma}$ is identified with its arclength coordinate $s$, we define ${\mathcal{M}}:= \bar{{\Gamma}} \times [-\pi/2, \pi/2]$, whose elements we call $(s,{\alpha})$ or $x$. Clearly ${\mathcal{M}}$ is a global cross section for the flow. Let ${\mathcal{F}}: {\mathcal{M}}\longrightarrow {\mathcal{M}}$ be its first-return map. For any $x = (s, {\alpha}) \in {\mathcal{M}}$ and $n \in {\mathbb{Z}}$, denote $x_n := (s_n, {\alpha}_n) := {\mathcal{F}}^n x$ and let $\tau_n$ be the length of the portion of the [trajector]{}y (equivalently, the time) between the collisions at $s_n$ and $s_{n+1}$ (notice that there can be an arbitrary number of collisions against ${\Gamma}_s$ between $s_n$ and $s_{n+1}$). Also, let $k_n := k(s_n)$ indicate the curvature of ${\Gamma}$ in $s_n$.
Analogously, given $v \in T_x {\mathcal{M}}$, denote $v_n := (D{\mathcal{F}}^n)_x v$. The infinitesimal beam of [trajector]{}ies determined by $v_n$ (and thus by $v$) around $(s_n, {\alpha}_n)$ will have pre- and postcollisional foci denoted, respectively, $F_n^- := F^-(v_n)$ and $F_n^+ := F^+(v_n)$. The corresponding signed distances along the pre- and postcollisional lines are indicated with $f_n^-$ and $f_n^+$. The following facts are obvious: $$\begin{aligned} && F_n^- = F_{n-1}^+, \\ && f_n^- = -( \tau_{n-1} - f_{n-1}^+).\end{aligned}$$ For the sake of the notation, let us drop all subscripts 0 and write $k := k_0$, $F^+ := F_0^+$, and so on. For any $x \in {\mathcal{M}}$, we introduce the following three cones in $T_x {\mathcal{M}}$: - $C_0 (x)$ is the set of all tangent vectors whose corresponding family of rays focuses in linear approximation inside $D_{-2}(s)$. Using the focal distance $f^+$, $$C_0 (x) := {\left\{ v \in T_x {\mathcal{M}}\: \left| \: -\frac{\cos \alpha} {|k|} \le f^+(v) \le 0 \right. \! \right\} }.$$ - $C_1 (x)$ is the set of all tangent vectors whose corresponding family of rays focuses in linear approximation inside $D_{0-}(s)$, i.e., all the divergent families of rays. In projective terms, $$C_1 (x) := {\left\{ v \in T_x {\mathcal{M}}\: \left| \: -\infty < f^+(v) \le 0 \right. \! \right\} }.$$ - $C_2 (x)$ is the set of all tangent vectors whose corresponding family of rays focuses in linear approximation inside $D_2 (s) \setminus D_4^\circ (s)$, i.e., $$C_2 (x) := {\left\{ v \in T_x {\mathcal{M}}\: \left| \: \frac{\cos \alpha} {2|k|} \le f^+(v) \le \frac{\cos \alpha} {|k|} \right. \! \right\} }.$$ We use the above cones to define piecewise an invariant cone bundle $C := \{ C(x) \}_x$. For each $x = (s, {\alpha})$, the choice $C(x) := C_i (x)$ will depend on $s$, $s_{-1}$, and what happens to the [trajector]{}y between the collisions at $s_{-1}$ and $s$. - (A) If $s \in {\Gamma}_d$, set $C(x) := C_0 (x)$. 
- (B) If $s \in {\Gamma}_f$, there are two subcases: - (B.1) If $s_{-1} \in {\Gamma}_f$, set $C(x) := C_2 (x)$. - (B.2) If $s_{-1} \in {\Gamma}_d$, there are two further subcases, depending on whether the piece of [trajector]{}y between $s_{-1}$ and $s$ has collisions with ${\Gamma}_s$: - (B.2.1) If there are no collisions with ${\Gamma}_s$ between $s_{-1}$ and $s$: Set $C(x) := C_1 (x)$. - (B.2.2) If there are collisions with ${\Gamma}_s$ between $s_{-1}$ and $s$: Set $C(x) := C_2 (x)$. Clearly $C(x)$ is a measurable [function]{} of $x$. \[thm-hyp\] The cone bundle $C$ just defined is eventually strictly invariant relative to the map ${\mathcal{F}}$. [<span style="font-variant:small-caps;">Proof.</span> ]{}We check that $v \in C(x)$ implies $v_1 \in C(x_1)$ for all the possible cases $C(x) = C_i(x)$, $C(x_1) = C_j(x_1)$ $(i,j \in \{ 0,1,2 \})$. - (I) $s \in {\Gamma}_d$ and $s_1 \in {\Gamma}_d$. In this case $C(x) = C_0(x)$, $C(x_1) = C_0(x_1)$. $v \in C_0(x)$ implies $F^+ \in D_{-2}(s)$, hence $F_1^- = F^+ \in D_{0+}^\circ (s_1)$. By Proposition \[prop-d-beta\], $F_1^+ \in D_{-4}^\circ (s_1) \subset D_{-2}^\circ (s_1)$. This is equivalent to $v_1 \in C_0^\circ (x_1)$—where $C^\circ(x)$ represents the interior of $C(x)$ in $T_x {\mathcal{M}}$. We have thus proved strict invariance for this type of collision. - (II) $s \in {\Gamma}_d$ and $s_1 \in {\Gamma}_f$. Here $C(x) = C_0(x)$ but the cone $C(x_1)$ may take two different forms. We separately check both cases. - (II.1) There are no collisions with ${\Gamma}_s$ between $s$ and $s_1$. Then $C(x_1) = C_1(x_1)$. For $v \in C_0(x)$ we have, by condition (\[C1\]), $F_1^- = F^+ \in D_4 (s_1)$. Proposition \[prop-d-beta\] implies that $F_1^+ \in D_{0-} (s_1)$, that is, $v_1 \in C_1 (x_1)$. In this case the invariance is not necessarily strict. - (II.2) There are collisions with ${\Gamma}_s$ between $s$ and $s_1$, that is, the material point enters a strip before colliding at $s_1$. In this case $C(x_1) = C_2(x_1)$. Since the material point has to travel all the way to the end of the strip and bounce back, $\tau > 2l > 2/k_f$, having used condition (\[C2\]). For $v \in C_0(x)$, $f^+ \le 0$, hence $f_1^- = -\tau + f^+ < -1/k_f$. Equivalently, $F_1^- \in D_{0+}(s_1) \setminus D_2(s_1)$. 
By Proposition \[prop-d-beta\], $F_1^+ \in D_2^\circ (s_1) \setminus D_4 (s_1)$, i.e., $v_1 \in C_2^\circ (x_1)$. - (III) $s \in {\Gamma}_f$ and $s_1 \in {\Gamma}_d$. Here $C(x_1) = C_0(x_1)$ and we have two subcases on $C(x)$. - (III.1) $C(x) = C_1(x)$. In this case $v \in C(x)$ is equivalent to $f^+ \le 0$. Hence $f_1^- < 0$ and $F_1^- \in D_{0+}^\circ (s_1)$. Therefore (Proposition \[prop-d-beta\]) $F_1^+ \in D_{-4}^\circ (s_1) \subset D_{-2}^\circ (s_1)$. Namely $v_1 \in C_0^\circ (x_1)$. - (III.2) $C(x) = C_2(x)$. So $v \in C(x)$ means that $F^+ = F_1^- \in D_2 (s) \setminus D_4^\circ (s)$. We consider two possible types of [trajector]{}ies: - (III.2.1) There are no collisions with ${\Gamma}_s$ between $s$ and $s_1$. By (\[C1\]), $F_1^- \in D_{0-}^\circ (s_1) \setminus D_{-2}^\circ (s_1)$. Hence $F_1^+ \in D_{-2} (s_1)$. - (III.2.2) There are collisions with ${\Gamma}_s$ between $s$ and $s_1$. As in case (II.2), $\tau > 2/k_f$ and $f^+ \le (\cos{\alpha}) / k_f$, whence $f_1^- = -\tau + f^+ < -1/k_f < 0$. That is, $F_1^- \in D_{0+}^\circ (s_1)$. Finally, $F_1^+ \in D_{-4}^\circ (s_1) \subset D_{-2}^\circ (s_1)$. - (IV) $s \in {\Gamma}_f$ and $s_1 \in {\Gamma}_f$. Definition (B.1) ensures that $C(x_1) = C_2(x_1)$. Let us branch out in two subcases depending on $C(x)$. - (IV.1) $C(x) = C_1(x)$. As in case (III.1), $v \in C(x)$ implies that $f^+ \le 0$. Since, by construction of our cross section, there can be no collisions with ${\Gamma}_d$ in the piece of [trajector]{}y between $s$ and $s_1$, there are only two possibilities: either the particle enters and exits a strip, and thus $\tau > 2/k_f$; or that piece of [trajector]{}y is a chord of the arc ${\Gamma}_f$, and thus $\tau = 2 (\cos{\alpha}) /k_f$. In either case, $\tau > (\cos{\alpha}) /k_f$ and $f_1^- < -(\cos{\alpha}) /k_f$, which means that $F_1^- \in D_{0+}^\circ (s_1) \setminus D_2 (s_1)$. By Proposition \[prop-d-beta\], $F_1^+ \in D_2^\circ (s_1) \setminus D_4 (s_1)$, that is, $v_1 \in C_2^\circ (x_1)$. - (IV.2) $C(x) = C_2(x)$. The hypothesis $v \in C(x)$ reads $(\cos{\alpha}) / 2k_f \le f^+ \le (\cos{\alpha}) / k_f$. 
Once again, there are two further subcases: - (IV.2.1) There are no collisions with ${\Gamma}_s$ between $s$ and $s_1$. In this case, cf. (IV.1), the [trajector]{}y between $s$ and $s_1$ is a chord of ${\Gamma}_f$ and $\tau = 2 (\cos{\alpha}) /k_f$. Therefore $f_1^- = -\tau + f^+ \le -(\cos{\alpha}) /k_f$, which implies $F_1^- \in D_{0+} (s_1) \setminus D_2^\circ (s_1)$. This yields $F_1^+ \in D_2 (s_1) \setminus D_4^\circ (s_1)$, namely $v_1 \in C_2(x_1)$. - (IV.2.2) There are collisions with ${\Gamma}_s$ between $s$ and $s_1$. $f^+$ and $\tau$ are exactly as in case (III.2.2). Refining the estimate that is written there, $f_1^- < -1/k_f < -(\cos{\alpha}) / k_f$, that is, $F_1^- \in D_{0+}^\circ (s_1) \setminus D_2 (s_1)$. This gives $F_1^+ \in D_2^\circ (s_1) \setminus D_4 (s_1)$. In order to show that $C$ is eventually strictly invariant *almost everywhere*, we notice that there are only three cases above in which the cone invariance is not strict, namely (II.1), (III.2.1), and (IV.2.1). In both (II.1) and (III.2.1), nonstrictness can only occur when the external endpoint of $I(s',s'')$ lies on $D_4 (s'')$ and $s = s'$, $s_1 = s''$, or vice versa—cf. (\[C1\]) and Fig. \[fig-iss\]. It is not hard to realize that this situation can only occur for finitely many pairs $(s',s'')$ (at least when the table is optimized, see (\[C3\]) and Fig. \[fig-ho\], there are only two such pairs). As concerns (IV.2.1), we realize that there can only be a finite number of consecutive collisions of that type, because each such piece of [trajector]{}y is a chord of ${\Gamma}_f$ of constant length ($\tau = \tau_1$), but ${\Gamma}_f$ is smaller than a semicircle. Confining the table to a bounded region {#sec-conf} ======================================= In the previous section the table ${\Omega}$ was constructed starting with two values for $h$ and $k_d$, which determined an upper bound on the choice of $k_f$, via (\[C1\]), which in turn determined a lower bound on the choice of $l$, via (\[C2\]). 
The latter condition, in particular, forced the area of ${\Omega}$ to diverge, as smaller and smaller values are chosen for $k_f$. Now we want to optimize, that is, minimize, the area of the table and to do so we change the order in which its geometric parameters are chosen. Given $k_d < 0$ and $k_f$ sufficiently small, we define the *optimal height* and the *optimal length* of the strips, respectively, as: $$\begin{aligned} \label{C3} && h_o := h_o (k_d, k_f) := \min {\left\{ h \: \left| \: \forall s' \in {\Gamma}_d, \forall s'' \in {\Gamma}_f, \ I(s', s'') \subset D_4(s'') \right. \! \right\} } ; \\ \label{C4} && l_o := l_o (k_f) = k_f^{-1} .\end{aligned}$$ These definitions are well posed, in the sense that a table can be constructed with $h = h_o$ and $l = l_o$. We call it the *optimal table* and we think of it as a [function]{} of $k_f$ ($k_d$ is considered fixed once and for all). The optimal table is [hyperbolic]{} by Theorem \[thm-hyp\]. The next proposition shows that, as $k_f \to 0$, the area of the optimal table is bounded above. (In what follows, the notation $a \sim b$ means that $a = a(k_f)$, $b = b(k_f)$ and, as $k_f \to 0$, $|a/b|$ is bounded away from $0$ and $\infty$.) \[prop-ho\] As $k_f \to 0$, $h_o(k_f) \sim k_f$. [<span style="font-variant:small-caps;">Proof.</span> ]{}Since $k_f \to 0$ and $k_d$ is fixed, we may assume that, given any $s'' \in {\Gamma}_f$, $D_4(s'')$ easily contains $D_{-2}(s')$, for all $s'$ in the upper component of ${\Gamma}_d$ (left picture in Fig. \[fig-iss\]). For $s'$ belonging to the lateral components of ${\Gamma}_d$, it is not hard to realize that the worst-case scenario is the one depicted in Fig. \[fig-ho\] (or the specular situation w.r.t. the axis of symmetry of ${\Omega}$): First of all, if $s''$ moves to the left and/or $s'$ moves upward, $I(s',s'')$ will move towards the interior of $D_4(s'')$, so that (\[C1\]) is always verified. 
Secondly, setting $h_o$ to be the $h$ displayed there, one clearly sees that for $h \ge h_o$ (\[C1\]) is verified, while for $h < h_o$ it is not. ![Finding $h_o$, cf. Proposition \[prop-ho\].](fig-ho.eps){width="10cm"} \[fig-ho\] Referring to the notation of Fig. \[fig-ho\], we see that $h_o = \tan \beta$ where $\beta$ is the angle between the two chords $s''P$ and $s''Q$ of $\partial D_4(s'')$. Recalling that, in a circle of radius $r$, the relation between the length $\ell$ of a chord and the angle $\theta$ it makes with the tangent to the circle at each of its endpoints is $\ell = 2r \sin \theta$, we have $$\beta = \arcsin \left( \frac{k_f \, c}2 \right) - \arcsin \left( \frac{k_f}2 \right) \sim k_f, \quad \mbox{as } k_f \to 0.$$ In the above $c$ is the length of $s''P$, for which it holds $1 < c < 2 + 2k_d^{-1}$. This ends the proof since $h_o \sim \beta$. From a technical point of view, Proposition \[prop-ho\] is a consequence of the fact that ${\Gamma}_f$ fails to act as a perturbation of a semidispersing component only for a few [trajector]{}ies, whose corresponding beams need to be defocused by visiting the long strips. As $k_f \to 0$, this phenomenon concerns fewer and fewer [trajector]{}ies, but its fix requires more and more space. Proposition \[prop-ho\] tells us that the trade-off between the two effects balances out. If a [hyperbolic]{} [billiard]{} table with a flatter and flatter focusing component need not become bigger and bigger in terms of area, one might hope that it need not in terms of diameter, either. In our particular table, one would like to redesign the strips so that their area is better placed in the plane and can be included in a fixed compact region. In the remainder of the section we show that this is possible, for example by bending the strips around the bulk of the [billiard]{} (see Fig. \[fig-t2intro\]). 
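The asymptotics in the proof of Proposition \[prop-ho\] are elementary and easy to check numerically. The sketch below (not part of the argument) treats the chord length $c$ as a fixed, hypothetical value in the admissible range, since its exact value depends on the table; it shows that $\beta/k_f$ stabilizes at the constant $(c-1)/2$, so that $h_o = \tan\beta \sim \beta \sim k_f$.

```python
import math

# Check of the estimate in the proof of Proposition [prop-ho]:
#   beta = arcsin(k_f*c/2) - arcsin(k_f/2) ~ k_f  as  k_f -> 0,
# with beta/k_f -> (c - 1)/2.  The chord length c of s''P is a fixed
# constant of the table; the value below is a hypothetical choice.
c = 1.5

for k_f in [1e-1, 1e-2, 1e-3, 1e-4]:
    beta = math.asin(k_f * c / 2) - math.asin(k_f / 2)
    print(f"k_f = {k_f:.0e},  beta/k_f = {beta / k_f:.8f}")
# beta/k_f tends to (c - 1)/2 = 0.25
```

Together with $l_o = k_f^{-1}$, this is the numerical counterpart of the observation that the area of the two strips, of order $l_o h_o$, remains bounded as $k_f \to 0$.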
![Construction of a spiral as a union of trapezoids.](fig-folding.eps){width="5cm"} \[fig-folding\] Let us describe this construction with the help of Fig. \[fig-folding\]. Substitute each strip of ${\Omega}$ with a polygonal modification given by the union of $N$ adjacent right trapezoids $T_1, T_2, \ldots, T_N$, where $N$ will be specified later depending on $k_f$. $T_1$ is placed so that its shorter leg coincides with the opening towards the bulk of ${\Omega}$: its height is then $h_1 := h \ge h_o$. The length of the shorter base is denoted $l_1$ and the two nonright angles are denoted $\pi/2 + \gamma_1$ and $\pi/2 - \gamma_1$, with $0 < \gamma_1 < \pi/2$. This causes the longer leg to measure $h_2 := h_1 / \cos \gamma_1$. The longer leg of $T_1$ is then used as the shorter leg of the next trapezoid, $T_2$, in the way depicted in Fig. \[fig-folding\]. The construction continues recursively, as values for $l_i$, $\gamma_i$ (and therefore $h_{i+1} := h_i / \cos \gamma_i$) are generated with each new trapezoid $T_i$. We call the resulting region a *polygonal spiral*, or simply *spiral*. There are two of them, and they need not be equal, so we denote $N^R, h_i^R, l_i^R, \gamma_i^R$, and $N^L, h_i^L, l_i^L, \gamma_i^L$, the parameters of the right and the left spiral, respectively. These will be determined later depending on $h_o$ and $l_o$, thus ultimately on $k_f$. We will see to it that the following conditions hold: - The spirals turn counterclockwise at each corner. - They have no self-intersections, or intersections between them or with the bulk of ${\Omega}$. - For $\epsilon \in \{ R,L \}$, all angles $\gamma_i^\epsilon$ are rational multiples of $\pi$. 
- There exists an absolute constant $K_1$ (i.e., $K_1$ does not depend on anything, including $k_f$) such that, for $\epsilon \in \{ R,L \}$, $$\label{cond-s1} h_{N^\epsilon}^\epsilon \le K_1 h_o.$$ - There exists an absolute constant $K_2 > 1$ such that $$\label{cond-s2} l_o \le \sum_{i=1}^{N^\epsilon} l_i^\epsilon \le K_2 \, l_o.$$ - There exists an absolute constant $K_3$ such that, $\forall i=1, 2, \ldots, N^\epsilon$, $$\label{cond-s3} \frac{\tan \gamma_i^\epsilon} {l_i^\epsilon} \le \frac{K_3} {h_i^\epsilon}.$$ (The l.h.s. above is a measure of the “curvature” of the spiral at the $i$-th corner.) Under the above conditions the area of each spiral is bounded, as $k_f \to 0$, because, dropping the superscript $\epsilon$, $$\begin{aligned} \frac12 \sum_{i=1}^N ( 2l_i + h_i \tan \gamma_i ) h_i &\le& \frac{2 + K_3}2 \sum_{i=1}^N l_i h_i \nonumber \\ &\le& \mbox{const } l_o h_N \\ &\le& \mbox{const } l_o h_o \sim 1; \nonumber\end{aligned}$$ having used, in this order, (\[cond-s3\]), (\[cond-s2\]), and (\[cond-s1\]). Also, defining $({\mathcal{M}}, {\mathcal{F}}, \mu)$ as in Section \[sec-hyp\], namely, as the [dynamical system]{} corresponding to the cross section ${\mathcal{M}}$ of all line elements based in $\bar{{\Gamma}} = {\Gamma}_f \cup {\Gamma}_d$, we have: \[prop-hyp-s\] ${\mathcal{M}}$ is a global cross section for the [billiard]{} flow and $({\mathcal{M}}, {\mathcal{F}}, \mu)$ is [hyperbolic]{}. [<span style="font-variant:small-caps;">Proof.</span> ]{}First of all, ${\mathcal{F}}$, as the first-return map onto ${\mathcal{M}}$, is well-defined almost everywhere (e.g., by the Poincaré Recurrence Theorem). To prove that ${\mathcal{M}}$ is a global cross section, we need to show that a.a. [billiard]{} [trajector]{}ies have collisions against $\bar{{\Gamma}} = {\Gamma}_f \cup {\Gamma}_d$. 
This is easy if we use a well-known result from the theory of polygonal [billiard]{}s [@zk; @bkm]: Let $P$ be the union of the two spirals plus $R$, which is the rectangle (of base 1 and height $h$) joining the open ends of the spirals. $P$ is a *rational polygon*, meaning that all its angles are rational multiples of $\pi$. In a rational polygonal [billiard]{}, all but countably many values of the velocity $u \in S^1$ are *minimal*, in the sense that any nonsingular flow-[trajector]{}y in configuration space (i.e., the set $\{ q(t) \}_{t \in {\mathbb{R}}}$, provided that it contains no corner of $P$), with initial velocity $u$, is dense in $P$ [@zk; @bkm]. This implies that for a.a. initial conditions $(q,u)$, with $q \in P$, the [billiard]{} [trajector]{}y in $P$ hits the boundary of $R$, which means that the true [billiard]{} [trajector]{}y, relative to the table ${\Omega}$, hits $\bar{{\Gamma}}$. As for the second assertion of Proposition \[prop-hyp-s\], we need the following lemma, which will be proved later. \[lem-hyp-s\] A material point that enters a spiral will travel all the way to the end of the spiral. In particular, if $\tau$ is the travel time between the last collision before entering the spiral and the first collision after exiting it (a.a. [trajector]{}ies eventually exit the spiral), then $\tau > 2l_o = 2 / k_f$. Lemma \[lem-hyp-s\] shows that Theorem \[thm-hyp\] (and thus Theorem \[thm-conhyp\]) applies to the present case as well, since its proof only requires of [trajector]{}ies visiting a strip—or a spiral—that the travel time $\tau$ be larger than $2 / k_f$. (Note that, since the spirals are two polygons, they will have no effect on the [hyperbolic]{} features of an infinitesimal beam of [trajector]{}ies, just like the two strips. The only, inconsequential, difference is that the spirals have more corners than the strips, resulting in more discontinuity lines in ${\mathcal{M}}$.) 
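The travel-time bound of Lemma \[lem-hyp-s\] can be illustrated with a small event-driven simulation of the simplest situation, a straight strip of length $l$ and height $h$ closed at the far end (a simplified sketch: the actual strips open onto the bulk of ${\Omega}$, and the spirals are unions of trapezoids). By the unfolding argument of Section \[sec-hyp\], the path length between entering and exiting the strip is exactly $2l/\cos\theta \ge 2l$, where $\theta$ is the angle between the entering velocity and the strip axis.

```python
import math

def strip_path_length(l, h, y0, theta):
    """Trace a unit-speed particle through the strip [0,l] x [0,h],
    entering at (0, y0) at angle theta to the strip axis; reflect off
    the top, bottom and far walls, and return the total path length
    until the particle crosses x = 0 again."""
    x, y = 0.0, y0
    vx, vy = math.cos(theta), math.sin(theta)
    length = 0.0
    while True:
        tx = (l - x) / vx if vx > 0 else -x / vx   # time to x = l or x = 0
        if vy > 0:
            ty = (h - y) / vy
        elif vy < 0:
            ty = -y / vy
        else:
            ty = math.inf
        if ty < tx:                 # flat wall: reflect vertical component
            x, y, length = x + vx * ty, y + vy * ty, length + ty
            vy = -vy
        else:
            x, y, length = x + vx * tx, y + vy * tx, length + tx
            if vx > 0:              # closed far end: reflect and head back
                vx = -vx
            else:                   # back at the opening: done
                return length

l, h, theta = 3.0, 0.2, 0.3
L = strip_path_length(l, h, 0.07, theta)
print(L, 2 * l / math.cos(theta))   # the two values agree
assert L > 2 * l                    # the bound used for tau in the proofs
```

The simulated length matches the unfolded value $2l/\cos\theta$ regardless of how many collisions with the flat walls occur, consistent with the fact that such collisions do not affect $\tau$ beyond the rigid unfolding.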
[<span style="font-variant:small-caps;">Proof of [Lemma \[lem-hyp-s\]]{}.</span> ]{} The first assertion is an easy consequence of our design, since a point that enters $T_i$ through the shorter leg will necessarily exit it through the longer leg, thus entering $T_{i+1}$ through the shorter leg, and so on. As for the second assertion, clearly $\tau$ will be larger than twice the sum of the lengths of the shorter bases of the trapezoids. By (\[cond-s2\]), this sum is bounded below by $l_o$. ![The double spiral (right picture) “wrapping” around the bulk of ${\Omega}$ (left picture). The double spiral starts when the two spirals coming out of the bulk of ${\Omega}$ join. Its initial ray is $r_0$, its initial (total) width is $w_0$, each turn amounts to an angle $\bar{\gamma} = 2\pi / \bar{N}$, and the number of rounds is $M$. The point $A$ is the center of the double spiral.](fig-reel.eps){width="13cm"} \[fig-reel\] Let us finally give the exact construction of the two spirals. First of all, we design the spirals to become adjacent after a finite number of turns, say $m^R$ turns for the right spiral and $m^L$ turns for the left spiral (left picture of Fig. \[fig-reel\]); $m^R$ and $m^L$ are absolute constants. We say that the two spirals have now joined in a *regular double spiral*, since they will keep adjacent as they spiral outwards in the regular way shown in the right picture of Fig. \[fig-reel\]. More precisely, all trapezoids $T_i^R$, with $i \ge m^R$, and $T_i^L$, with $i \ge m^L$, are similar, and are defined by $\gamma_i^\epsilon = \bar{\gamma} := 2\pi / \bar{N}$, where $\bar{N}$ is an integer (depending on $h_o$) to be determined momentarily. The double spiral is also defined so that its initial ray (meaning the distance from the border of the spiral to its center $A$, see Fig. \[fig-reel\]) is $r_0$, an absolute constant so large that intersection with the bulk of ${\Omega}$ is avoided. 
At each next corner, the ray (that is, the distance between that corner and $A$) increases by a factor $1 / \cos \bar{\gamma}$. Therefore, after the first round, the ray has become $r_{\bar{N}} := r_0 ( \cos \bar{\gamma} )^{-\bar{N}}$. Since the spiral wraps around itself tightly (i.e., leaving no area uncovered), its initial width is $$\label{ds-10} w_0 := r_0 \left( \left(\cos \frac{2\pi} {\bar{N}} \right)^{-\bar{N}} \!\!\! - 1 \right).$$ On the other hand, in the place where the left and right spirals join to start the double spiral, one sees that $$\begin{aligned} \label{ds-20} w_0 &=& h_{m^R}^R + h_{m^L}^L \nonumber \\ &=& \left( \prod_{i=1}^{m^R} \frac1 { \cos \gamma_i^R } + \prod_{i=1}^{m^L} \frac1 { \cos \gamma_i^L } \right) h \\ &=:& K_4 \, h . \nonumber\end{aligned}$$ $K_4$ is an absolute constant if we prescribe that, for $i = 1, \ldots, m^\epsilon$, the angles $\gamma_i^\epsilon$ are rational multiples of $\pi$ and stay fixed while $k_f \to 0$ (this is geometrically possible, cf. Fig. \[fig-reel\], left picture). The last two equations imply that $$\label{ds-30} h = h_1 = \frac{r_0} {K_4} \left( \left(\cos \frac{2\pi} {\bar{N}} \right)^{-\bar{N}} \!\!\! - 1 \right).$$ Given $k_f$ sufficiently small, we use (\[ds-30\]) to define both $h$ and $\bar{N}$, keeping in mind that we want $h_o \le h \le K_1 h_o$, cf. (\[cond-s1\]). We need this estimate from elementary calculus: $$\label{ds-40} \lim_{n \to +\infty} \, \frac{n} {2\pi^2} \left( \left( \cos \frac{2\pi} {n} \right)^{-n} \!\!\! - 1 \right) = 1.$$ So the r.h.s. of (\[ds-30\]) decreases like $1/ \bar{N}$, as $\bar{N} \to \infty$. This ensures that, given any sufficiently small $h_o$, there exists an $\bar{N} = \bar{N}(h_o)$ such that the corresponding $h = h(h_o)$, as in (\[ds-30\]), verifies $h_o \le h \le 2 h_o$. Since $h_o = h_o(k_f)$, we rename these two values, respectively, $\bar{N}(k_f)$ and $h(k_f)$ (abbreviated in $\bar{N}$ and $h$ when there is no risk of confusion). 
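The limit (\[ds-40\]) is easy to verify numerically; the short check below (a sketch, not part of the argument) also illustrates that the r.h.s. of (\[ds-30\]) decreases like $1/\bar{N}$, which is what guarantees that a suitable $\bar{N}(h_o)$ exists.

```python
import math

# Check of the limit (ds-40):
#   (n / (2*pi^2)) * ( cos(2*pi/n)^(-n) - 1 )  ->  1   as n -> infinity,
# so the r.h.s. of (ds-30) behaves like 2*pi^2 * r_0 / (K_4 * n).
def ds40(n):
    return n / (2 * math.pi ** 2) * (math.cos(2 * math.pi / n) ** (-n) - 1)

for n in [10, 100, 1000, 10000]:
    print(n, ds40(n))
# the printed values decrease towards 1
```

The same expansion also indicates why $(\cos\bar{\gamma})^{-M\bar{N}}$ stays bounded when $M \sim \bar{N}$, since $-\bar{N}\log\cos(2\pi/\bar{N}) \to 0$ like $1/\bar{N}$.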
Clearly, as $k_f \to 0$, $$\begin{aligned} \label{ds-50} && h(k_f) \sim h_o \sim k_f ; \\ \label{ds-52} && \bar{N}(k_f) \sim h^{-1} \sim k_f^{-1} . \end{aligned}$$ Together with $r_0$, $h_{m^R}^R$ (equivalently $h_{m^L}^L$) and $\bar{\gamma}$ (equivalently $\bar{N}$), the fourth and last parameter that completely determines the double spiral is $M$, which is defined as the number of complete rounds the spiral makes. (Once $M$ is determined, the total number of trapezoids in the right and left spirals is given by $$\label{ds-60} N^\epsilon = m^\epsilon + M \bar{N},$$ for $\epsilon = R$ and $\epsilon = L$, respectively.) Choosing $$\label{ds-70} M = M(k_f) := \left[ \frac{l_o} {2\pi r_0} \right] + 1 = \left[ \frac1 {2\pi r_0 k_f} \right] + 1$$ (where $[ \,\cdot\, ]$ is the integer part of a positive number) ensures that the first inequality of (\[cond-s2\]) is verified, since $\sum_i l_i^\epsilon > M 2\pi r_0 > l_o$. Also, for $\epsilon \in \{ R,L \}$, $$\label{ds-80} h_{N^\epsilon} = h_{m^\epsilon}^\epsilon (\cos \bar{\gamma})^{-M \bar{N}} \sim h_{m^\epsilon}^\epsilon \sim h_o,$$ as $k_f \to 0$, because of (\[ds-40\]) and the fact that $M \sim k_f^{-1}$ (whence $M \bar{N} \sim \bar{N}^2$). The above verifies (\[cond-s1\]). As for the second inequality of (\[cond-s2\]), we know that the trapezoids $T_i^\epsilon$, for $i \ge m^\epsilon$, are similar. Therefore, in the limit $k_f \to 0$, we obtain $$\begin{aligned} \sum_{i=1}^{N^\epsilon} l_i^\epsilon &\sim& \sum_{i=m^\epsilon}^{N^\epsilon} l_i^\epsilon \: = \ l_{m^\epsilon}^\epsilon \sum_{j=0}^{M \bar{N} - 1} (\cos \bar{\gamma})^{-j} \nonumber \\ &\sim& \tan \bar{\gamma} \, \frac{ (\cos \bar{\gamma})^{-M \bar{N}} -1 } { (\cos \bar{\gamma})^{-1} -1 } \nonumber \\ &\sim& \bar{N}^{-1} \frac1 { \bar{N}^{-2} } \sim \bar{N} \sim k_f^{-1} \\ &\sim& l_o, \nonumber \end{aligned}$$ which proves (\[cond-s2\]). 
In the above we have used (\[ds-52\]) and the evident geometric equalities $l_{m^R}^R = r_0 \tan \bar{\gamma}$ and $l_{m^L}^L = ( r_0 + h_{m^R}^R ) \tan \bar{\gamma}$ (Fig. \[fig-reel\]). Finally, (\[cond-s3\]) holds because, for all $i \ge m^\epsilon$, $l_i^\epsilon / h_i^\epsilon$ is constant, while $\gamma_i^\epsilon = \bar{\gamma} \to 0$, as $k_f \to 0$. The next and last result, whose proof is apparent, emphasizes the motivation behind the constructions of Section \[sec-conf\]. The table ${\Omega}= {\Omega}(k_f)$ defined before is contained in a bounded region of the plane independent of $k_f$. [BKM]{} [<span style="font-variant:small-caps;">[C. Boldrighini, M. Keane and F. Marchetti]{}</span>, [*[Billiards in polygons]{}*]{}, [[Ann. Probab. [6]{} (1978), no. 4, 532–540]{}]{}.]{} [<span style="font-variant:small-caps;">[L. A. Bunimovich]{}</span>, [*[On billiards close to dispersing]{}*]{}, [[Math. USSR Sb. [23]{} (1974), 45–67]{}]{}.]{} [<span style="font-variant:small-caps;">[L. A. Bunimovich]{}</span>, [*[On the ergodic properties of nowhere dispersing billiards]{}*]{}, [[Comm. Math. Phys. [65]{} (1979), no. 3, 295–312]{}]{}.]{} [<span style="font-variant:small-caps;">[L. A. Bunimovich]{}</span>, [*[On absolutely focusing mirrors]{}*]{}, [[in: Ergodic Theory and related topics, III (Gustrow, 1990), edited by U. Krengel et al., Lecture Notes in Math. [1524]{}, Springer, Berlin (1992), 62–82]{}]{}.]{} [<span style="font-variant:small-caps;">[L. Bussolari and M. Lenci]{}</span>, [*[Hyperbolic billiards with nearly flat focusing boundaries. II]{}*]{}, [[in preparation]{}]{}.]{} [<span style="font-variant:small-caps;">[N. Chernov and R. Markarian]{}</span>, [*[Chaotic billiards]{}*]{}, [[Mathematical Surveys and Monographs, [127]{}. American Mathematical Society, Providence, RI, 2006]{}]{}.]{} [<span style="font-variant:small-caps;">[G. Del Magno]{}</span>, [*[Ergodicity of a class of truncated elliptical billiards]{}*]{}, [[Nonlinearity [14]{} (2001), no. 
6, 1761–1786]{}]{}.]{} [<span style="font-variant:small-caps;">[V. J. Donnay]{}</span>, [*[Using integrability to produce chaos: billiards with positive entropy]{}*]{}, [[Comm. Math. Phys. [141]{} (1991), no. 2, 225–257]{}]{}.]{} [<span style="font-variant:small-caps;">[A. Katok (with the collaboration of K. Burns)]{}</span>, [*[Infinitesimal Lyapunov functions, invariant cone families and stochastic properties of smooth dynamical systems]{}*]{}, [[Ergodic Theory Dynam. Systems [14]{} (1994), no. 4, 757–785]{}]{}.]{} [<span style="font-variant:small-caps;">[A. Katok and J.-M. Strelcyn (in collaboration with F. Ledrappier and F. Przytycki)]{}</span>, [*[Invariant manifolds, entropy and billiards; smooth maps with singularities]{}*]{}, [[Lecture Notes in Math. [1222]{}, Springer-Verlag, Berlin-New York, 1986]{}]{}.]{} [<span style="font-variant:small-caps;">[C. Liverani and M. Wojtkowski]{}</span>, [*[Ergodicity in Hamiltonian systems]{}*]{}, [[in: Dynamics Reported: Expositions in Dynamical Systems (N.S.), [4]{}, Springer-Verlag, Berlin, 1995]{}]{}.]{} [<span style="font-variant:small-caps;">[R. Markarian]{}</span>, [*[Billiards with Pesin region of measure one]{}*]{}, [[Comm. Math. Phys. [118]{} (1988), no. 1, 87–97]{}]{}.]{} [<span style="font-variant:small-caps;">[R. Markarian]{}</span>, [*[Non-uniformly hyperbolic billiards]{}*]{}, [[Ann. Fac. Sci. Toulouse Math. (6) [3]{} (1994), no. 2, 223–257]{}]{}.]{} [<span style="font-variant:small-caps;">[V. Rom Kedar and D. Turaev]{}</span>, [*[Big islands in dispersing billiard-like potentials]{}*]{}, [[Phys. D [130]{} (1999), no. 3-4, 187–210]{}]{}.]{} [<span style="font-variant:small-caps;">[Ya. G. Sinai]{}</span>, [*[Dynamical systems with elastic reflections]{}*]{}, [[Russ. Math. Surveys [25]{} (1970), no. 2, 137–189]{}]{}.]{} [<span style="font-variant:small-caps;">[D. Turaev and V. Rom Kedar]{}</span>, [*[Soft billiards with corners]{}*]{}, [[J. Statist. Phys. [112]{} (2003), no. 
3-4, 765–813]{}]{}.]{} [<span style="font-variant:small-caps;">[M. Wojtkowski]{}</span>, [*[Invariant families of cones and Lyapunov exponents]{}*]{}, [[Ergodic Theory Dynam. Systems [5]{} (1985), no. 1, 145–161]{}]{}.]{} [<span style="font-variant:small-caps;">[M. Wojtkowski]{}</span>, [*[Principles for the design of billiards with nonvanishing Lyapunov exponents]{}*]{}, [[Comm. Math. Phys. [105]{} (1986), no. 3, 391–414]{}]{}.]{} [<span style="font-variant:small-caps;">[A. N. Zemlyakov and A. B. Katok]{}</span>, [*[Topological transitivity of billiards in polygons]{}*]{}, [[Math. Notes [18]{} (1975), no. 1–2, 760–764 (1976)]{}]{}.]{} [^1]: Department of Mathematical Sciences, Stevens Institute of Technology, Hoboken, NJ 07030, U.S.A. [^2]: Dipartimento di Matematica, Università di Bologna, P.zza di Porta S. Donato 5, 40126 Bologna, ITALY [^3]: E-mails: `lbussola@math.stevens.edu`, `lenci@dm.unibo.it`
--- author: - 'Carl Yang, Lingrui Gan, Zongyi Wang, Jiaming Shen, Jinfeng Xiao, Jiawei Han' bibliography: - 'setevolve.bib' title: 'Query-Specific Knowledge Summarization with Entity Evolutionary Networks' ---
--- abstract: 'The dynamical behavior of a higher-order cubic Ginzburg-Landau equation is found to include a wide range of scenarios due to the interplay of higher-order physically relevant terms. We find that the competition between the third-order dispersion and stimulated Raman scattering effects gives rise to rich dynamics: this extends from Poincaré-Bendixson–type scenarios, in the sense that bounded solutions may converge either to distinct equilibria via orbital connections or to space-time periodic solutions, to the emergence of almost periodic and chaotic behavior. One of our main results is that the third-order dispersion has a dominant role in the development of such complex dynamics, since it can be chiefly responsible (i.e., even in the absence of the other higher-order effects) for the existence of the periodic, quasi-periodic and chaotic spatiotemporal structures. Suitable low-dimensional phase space diagnostics are devised and used to illustrate the different possibilities and identify their respective parametric intervals over multiple parameters of the model.' author: - 'V. Achilleos' - 'A. R. Bishop' - 'S. Diamantidis' - 'D. J. Frantzeskakis' - 'T. P. Horikis' - 'N. I. Karachalios' - 'P. G. Kevrekidis' title: ' The dynamical playground of a higher-order cubic Ginzburg-Landau equation: from orbital connections and limit cycles to invariant tori and the onset of chaos ' --- Introduction ============ Nonlinear evolution equations are often associated with the theory of solitons and integrable systems [@Ablo]. A prime example is the nonlinear Schrödinger (NLS) equation which constitutes one of the universal nonlinear evolution equations, with applications ranging from deep water waves to optics [@ablo2]. 
Remarkable phenomena are also exhibited by its higher-order variants, emerging in a diverse spectrum of applications, such as nonlinear optics [@KodHas87], nonlinear metamaterials [@p31], and water waves in finite depth [@johnson; @sedletsky; @slun]. On the other hand, dissipative variants of NLS models incorporating gain and loss have also been used in optics [@akbook], e.g., in the physics of mode-locked lasers [@laser1; @laser2] (see also the relevant works [@tsoy; @tsoy2]) and polariton superfluids [@daniele] – see, e.g., Ref. [@akbook2] for various applications. Note that such dissipative NLS models can be viewed as variants of the complex Ginzburg-Landau (CGL) equation, which has been extensively studied, especially in the context of pattern formation in far-from-equilibrium systems [@ar]. Dissipative nonlinear evolution equations (incorporating gain, loss, external driving, or combinations thereof) may exhibit (and potentially be attracted to) low-dimensional dynamical features, such as: (a) one or more equilibria (and orbits connecting them), (b) periodic orbits, (c) quasi-periodic orbits or (d) low-dimensional chaotic dynamics [@smale]. The availability of the dynamical scenarios (a)-(d) depends on the effective dimensionality of the low-dimensional behavior: one-dimensionality only allows fixed points; planar systems governed by the Poincar[é]{}-Bendixson (PB) theorem [@smale] can also feature periodic orbits; and higher dimensions allow for quasi-periodic or chaotic dynamics. Various prototypical partial differential equation models have demonstrated a PB-type behavior as an intermediate bifurcation stage in the route to spatiotemporal chaos. Examples include the Kuramoto-Sivashinsky [@Cv1] and complex Ginzburg-Landau (CGL) equations; regarding the CGL model, which is of primary interest in this work, we refer to the seminal works [@NBGL] for the spatiotemporal transition to chaos. 
In addition to the above autonomous systems, spatiotemporal chaos was also found in non-autonomous ones, due to the interplay between loss and external forces, such as the damped-driven NLS [@nobe; @kai; @eli] (where the hyperbolic structure of the underlying integrable NLS is a prerequisite [@Li]) and the sine-Gordon [@Sin1] system. In this work, we focus on the role of higher-order effects, and investigate the possibility of bifurcation phenomena leading to the existence of the above prototypical examples of low-dimensional dynamics in an autonomous, physically important higher-order CGL-type model. This model is motivated by the higher-order NLS equation that is commonly used, e.g., in studies of ultrashort pulses in optical fibers [@KodHas87], but also incorporates (linear or nonlinear) gain and loss; it is, thus, a physically relevant variant of a higher-order cubic CGL equation – without the diffusion term. Note that higher-order versions of the CGL equation have only recently started attracting attention [@LatasA], while extended second-order CGL models have been extensively studied in various contexts previously [@akbook; @ar; @akbook2]. In particular, we refer to the pioneering work [@laser2], followed by the important contributions [@tsoy; @tsoy2], which revealed the existence of the aforementioned low-dimensional dynamical scenarios for second-order quintic CGL models. The results of [@laser2; @tsoy; @tsoy2] were established by numerical and even analytical reductions to suitable finite dimensional dynamical systems, capturing the long-time dynamics of the original infinite dimensional one. Notably, the revealed dynamical scenarios were associated with a variety of novel localized structures (known as pulsating solitons). However, a crucial feature of these models, as acknowledged, e.g., by the authors of [@laser2], was the presence of a higher order (quintic) nonlinearity. 
This is a key trait distinguishing that set of works from the present one, where only a cubic nonlinearity is employed, yet the presence of higher-order effects, most notably third-order dispersion as we will see below, plays a catalytic role in the emergence of the relevant phenomenology. More specifically, we should point out that the [*third-order cubic*]{} CGL model that we consider herein is essentially different from the [*second-order cubic-quintic*]{} model discussed in [@laser2; @tsoy; @tsoy2], not only from a mathematical but also from a physical point of view: indeed, in the context of optics, the model considered in the latter works refers to the propagation of short pulses, in the picosecond regime, in media featuring saturation of the nonlinear refractive index, while the model we consider here is relevant to the propagation of [*ultrashort*]{} pulses in the sub-picosecond or femtosecond regimes [@KodHas87]. For this reason, our model includes third-order dispersion and higher-order nonlinear effects, which appear naturally as higher-order corrections to the usual NLS model in the framework of the reductive perturbation method. In that regard, if gain and loss are also incorporated, it is important to ask if, and how, the physically important (on the femtosecond time scale) higher-order effects may be responsible for tracing a path to complex dynamics, as a result of the potential breaking of the homoclinic structure of the unperturbed NLS counterpart. The main findings of our investigations are the following. First, we show that the incorporation of the gain and loss terms gives rise to the existence of an attractor; a rigorous proof is provided, based on the interpretation of the energy balance equation and properties of the functional (phase) space on which the problem defines an infinite-dimensional flow. The structure of the attractor is then investigated numerically.
Given that our model is characterized by six free parameters (which renders a systematic investigation of their role a nontrivial task), we opt to keep four parameters fixed, with values suggested by the physics of ultrashort optical pulses [@KodHas87], and vary the remaining two. In particular, we vary the coefficients of the third-order dispersion and of the higher-order nonlinear dissipation accounting for the stimulated Raman scattering (SRS) effect (further reasons for this choice will become apparent below). We find that, for a sufficiently small SRS coefficient, variations of the third-order dispersion strength give rise to a transition path from dynamics reminiscent of the PB scenarios, including orbital connections between steady states of high multiplicity and convergence to limit cycles, to invariant tori or even chaotic attractors. However, when the SRS effect becomes stronger, the above scenarios are screened by convergence to steady states. It is highlighted that the third-order dispersion is found to be chiefly responsible for the dynamical transition from periodic, to quasi-periodic, and eventually to chaotic structures. Therefore, our results show that higher (third)-order dispersion and dissipative (SRS) effects are important mechanisms for the emergence of complex spatiotemporal transitions in CGL models. Our presentation is organized as follows. In Section II, we present the model, and discuss the existence of a limit set (attractor); details of the proof are given in Appendix \[apLS\]. The structure of the attractor is then investigated numerically in Section III, where we reveal the emergence of all dynamical scenarios and the corresponding regimes of complex asymptotic behavior. Finally, Section IV summarizes our findings.
Motivation and presentation of the model ======================================== Our model is motivated by the following higher-order NLS equation: $$\begin{aligned} \label{eq0} {\rm i}\partial_t u - \frac{s}{2} \partial_x^2 u + |u|^2u &={\rm i}\beta \partial_x^3 u+{\rm i}\mu \partial_x (|u|^2u) \nonumber\\ &+ \left(\sigma_R+{\rm i}\nu\right)u \partial_x(|u|^2), \end{aligned}$$ where $u(x,t)$ is a complex field, $\beta$, $\mu$, $\nu$ and $\sigma_R$ are positive constants, while $s=+1$ ($s=-1$) corresponds to normal (anomalous) group-velocity dispersion. Note that Eq. (\[eq0\]) can be viewed as a perturbed NLS equation, with the perturbation (in the case of small values of the relevant coefficients) appearing on the right-hand side (see, e.g., Ref. [@KodHas87] and the discussion below). Variants of Eq. (\[eq0\]) appear in a variety of physical contexts, where they are derived at higher orders of perturbation theory \[the lowest-order nonlinear model is simply the NLS equation on the left-hand side of Eq. (\[eq0\])\]. The most prominent example is probably that of nonlinear optics [@KodHas87]. In this case, $t$ and $x$ denote propagation distance and retarded time, respectively, while $u(x,t)$ is the electric field envelope. While the unperturbed NLS equation is sufficient to describe the propagation of relatively long optical pulses, for ultrashort pulses the third-order dispersion (with coefficient $\beta$) and the self-steepening effects (with coefficients $\mu$ and $\nu$) become important and have to be incorporated in the model. Similar situations also occur in other contexts and, thus, corresponding versions of Eq. (\[eq0\]) have been derived and used, e.g., in nonlinear metamaterials [@p31], but also for water waves in finite depth [@johnson; @sedletsky; @slun].
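As an aside that will be useful below, we note that third-order dispersion is the only higher-order term of Eq. (\[eq0\]) that survives linearization around $u=0$: substituting a linear wave $u\sim\exp\left[{\rm i}(kx-\omega t)\right]$ into the linear part of Eq. (\[eq0\]) yields the modified dispersion relation $$\omega(k)=-\frac{s}{2}k^{2}+\beta k^{3},$$ so that $\beta$ alters the linear propagation properties and breaks the symmetry of $\omega(k)$ under $k\rightarrow -k$; this is one reason why $\beta$ is singled out as a bifurcation parameter in the numerical study below.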
Moreover, in the context of optics, and for relatively long propagation distances, higher-order nonlinear dissipative effects, such as the SRS effect, of strength $\sigma_R>0$, are also important [@KodHas87]. In addition to the above mentioned effects, our aim is to investigate the dynamics in the framework of Eq. (\[eq0\]), but also incorporating linear or nonlinear gain and loss. Thus, in what follows, we analyze the following model: $$\begin{aligned} \label{eq1} {\rm i}\partial_t u - \frac{s}{2} \partial_x^2 u + |u|^2u &={\rm i}\gamma u+{\rm i}\delta|u|^2u + {\rm i}\mu \partial_x (|u|^2u) \nonumber\\ &+{\rm i}\beta \partial_x^3 u+\left(\sigma_R+{\rm i}\nu \right)u \partial_x(|u|^2), \end{aligned}$$ which includes linear loss ($\gamma<0$) \[or gain ($\gamma>0$)\], as well as nonlinear gain ($\delta>0$) \[or loss ($\delta<0$)\]. These effects are physically relevant in nonlinear optics [@KodHas87; @akbook; @akbook2]: indeed, nonlinear gain or loss may be used to counterbalance the effects of the linear loss/gain mechanisms and can potentially stabilize optical solitons – see, e.g., Refs. [@p35; @DJFTheo]. As is also explained below, here we focus on the case of linear gain, $\gamma>0$, and nonlinear loss, $\delta<0$, corresponding to a constant gain distribution and intensity-dependent two-photon absorption, respectively (see, e.g., Ref. [@Chen]). Obviously, the presence of gain/loss renders Eq. (\[eq1\]) a higher-order cubic CGL equation (cf. the recent studies [@LatasA] of such models). Note that, in Eq. (\[eq1\]), diffusion is absent: such a linear term would be of the form $\mathrm{i} D\partial_x^2 u$ ($D=$const.), and would appear on the right-hand side of Eq. (\[eq1\]) to account for the presence of spectral filtering or linear parabolic gain ($D>0$) or loss ($D<0$) [@laser2; @tsoy; @tsoy2]. Instead, the equation features linear dispersion only through the term proportional to $s$ on the left-hand side of Eq. (\[eq1\]).
The gain/loss effects are pivotal for the dissipative nature of the infinite-dimensional flow that will be defined below. This dissipative nature is reflected in the existence of an attractor, capturing its long-time dynamics; nevertheless, as we will show below, the structure of the attractor is determined by the remaining higher-order effects. Here, we focus on the case $s=1$, and supplement Eq. (\[eq1\]) with periodic boundary conditions for $u$ and its spatial derivatives up to second order, namely: $$\label{eq2} \begin{array}{c} u(x+2L,t)=u(x,t),\;\;\mbox{and}\\ \partial^j_x u(x+2L,t)=\partial^j_x u(x,t),\;\; j=1,2, \end{array}$$ $\forall\;\;(x,t)\in{\mathbb R}\times [0,T]$, for some $T>0$, where $L>0$ is given. The initial condition $$\label{eq3} u (x,0)=u_0(x),\quad \forall\,x\in{\mathbb R},$$ also satisfies the periodicity conditions (\[eq2\]). We should mention that the periodic boundary conditions considered here are also motivated by the context of optics. Recalling that the roles of space and time are interchanged in that context, we note that, in optical cavities (e.g., laser cavities), the temporal period would account for the (retarded) time it takes light to traverse the laser cavity once and, thus, the boundaries represent the same point in real space-time (see, e.g., Refs. [@Salin; @Haus2; @VanWig; @AbDT]). In this context, the dynamics analyzed below are relevant to the dynamical transitions and the observation of chaotic optical waveforms in fiber ring lasers [@VanWig]. As shown in Ref. [@PartI], all possible regimes except $\gamma>0$, $\delta<0$, are associated with finite-time collapse or decay. Furthermore, a critical value $\gamma^*$ can be identified in the regime $\gamma<0$, $\delta>0$, which separates finite-time collapse from the decay of solutions.
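The role of the signs of $\gamma$ and $\delta$ can be anticipated from the formal power balance satisfied by solutions of Eqs. (\[eq1\])-(\[eq2\]). Multiplying Eq. (\[eq1\]) by $\bar{u}$, taking imaginary parts, and integrating over the periodic cell, the dispersive terms, as well as the higher-order terms with coefficients $\beta$, $\mu$, $\nu$ and $\sigma_R$, give no net contribution over the period (their contributions reduce to integrals of exact $x$-derivatives), leaving $$\frac{d}{dt}\int_{-L}^{L}|u|^{2}\,dx=2\gamma\int_{-L}^{L}|u|^{2}\,dx+2\delta\int_{-L}^{L}|u|^{4}\,dx.$$ For $\gamma>0$ and $\delta<0$, the Cauchy-Schwarz inequality $\left(\int_{-L}^{L}|u|^{2}dx\right)^{2}\leq 2L\int_{-L}^{L}|u|^{4}dx$ then implies that the $L^2$-norm eventually enters, and remains in, the ball $\int_{-L}^{L}|u|^{2}dx\leq 2L\gamma/|\delta|$; this is the balance underlying the rigorous considerations of Appendix \[apLS\].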
On the other hand, for $\gamma>0$, $\delta<0$, we prove in Appendix \[apLS\] the existence of a limit set (attractor) $\omega(u_0)$, attracting all bounded orbits initiating from arbitrary, appropriately smooth initial data $u_0$ (considered as elements of a suitable Sobolev space). In the next Section, we show numerically that the attractor $\omega(u_0)$ captures the full route from PB-type dynamics to quasi-periodic or chaotic dynamics. Numerical results ================= The structure of the limit set $\omega(u_0)$ is investigated by numerical integration via a high-accuracy pseudo-spectral method. In our simulations, we fix the half-length of the spatial domain $\Omega=[-L,L]$ to $L=50$, and require the ratio $-\gamma/\delta$ to be of order unity; we thus fix $\gamma=1.5$ and $\delta=-1$. This choice, which stems from the fact that this ratio determines the constant-density steady state (see below), is particularly convenient for illustration purposes. Furthermore, motivated by the fact that, in the context of optics, the parameters describing the higher-order effects typically take small values [@KodHas87], we fix $\mu=\nu=0.01$, while the third-order dispersion and SRS strengths, $\beta>0$ and $\sigma_R>0$, are varied in the intervals $[0,~1]$ and $[0,~0.3]$, respectively. Obviously, the above choice is merely a low-dimensional projection of the full 6-dimensional parameter space. Nevertheless, since our aim here is to illustrate the role of the higher-order effects in the emergence of complex dynamics in Eq. (\[eq1\]), we will show below that the variations of $\beta$ and $\sigma_R$ alone do offer a clear physical picture in that regard. To be more specific, the choice of these particular parameters stems from the following facts. First, third-order dispersion is the sole linear higher-order effect, and it is important also in the linear regime (as it modifies the linear dispersion relation).
Second, the stimulated Raman scattering effect is the first higher-order dissipative effect and, as such, is expected to play a dominant role in the long-time nonlinear dynamics of the system. Naturally, the nontrivial task (as also highlighted above) of investigating the full parameter space is interesting and relevant in its own right, yet it is beyond the scope of this work. In our simulations, the limit set $\omega(u_0)$ will be visualized by projections of the flow onto suitable 2D or 3D spaces, defined by $\mathcal{P}_2=\left\{(X,Y)\in\mathbb{R}^2\right\}$ and $\mathcal{P}_3=\left\{(X,Y,Z)\in\mathbb{R}^3\right\}$. Here, $X(t)=|u(x_1,t)|^2$, $Y(t)=|u(x_2,t)|^2$, $Z(t)=|u(x_3,t)|^2$, for arbitrary spatial coordinates $x_1, x_2, x_3\in \Omega$. ![(Color Online) The scenario $\omega(u_0)=\{\phi_b\}$. Left panel: convergence to the fixed point $\mathbf{A}$. Right panel: the fixed point $\mathbf{A}$ as a limit circle of radius $\sqrt{-\gamma/\delta}$. []{data-label="fig0"}](PRFig1.JPG) Steady-state and orbital connections regime ------------------------------------------- First, we use cw initial data, $$u_0(x)=\epsilon\, \exp\left(-i\frac{K\pi x}{L}\right) \equiv \epsilon \phi_K$$ of amplitude $\epsilon>0$ and wave number $K>0$, which is an element of the one-dimensional linear subspace $$\mathcal{V}_K=\left\{u\in L^2(\Omega)\;:\;u=\epsilon\phi_K(x),\;\;\epsilon>0\right\}$$ of $L^2(\Omega)$. Here, we should note that there exists a cw state which is an exact solution of Eq. (\[eq1\]); this solution is generically subject to modulational instability (MI) [@ostro] (the so-called Benjamin-Feir instability in the context of deep water waves [@BF]). The exact cw solution, as well as the relevant MI analysis, are presented in Appendix B. However, such an analysis is not capable of providing insight into the long-time dynamics of the solutions.
Indeed, although it can be used as a means to understand the destabilization of the cw steady state, it does not offer any information regarding the long-time behavior and the states the system passes through. As we show below, the intricate dynamics that emerge cannot be fully understood in the framework of the MI picture. Using the above cw initial data, and varying $\sigma_R>0$, we find that $\omega(u_0)$ is an equilibrium state. Specifically, there exists a critical wave number $K_{\mathrm{max}}$ such that: for $K <K_\mathrm{max}$, $\omega(u_0)=\{\phi_b\}$, i.e., a steady state of constant density $|\phi_b|^2=-\frac{\gamma}{\delta}$; for $K\geq K_\mathrm{max}$, $\omega(u_0)=\{\Phi_p\}$, i.e., a steady state of spatially periodic density. We find that $K_{\mathrm{max}}$ decreases as $\sigma_R$ increases: if $\sigma_R=0,0.1,0.2,0.3$ and $\beta=0.02$, then $K_{\mathrm{max}}=16,13,10,5$, respectively. The dynamical scenario $\omega(u_0)=\{\phi_b\}$ for $\beta=0.02$, $\sigma_R=0.3$, and $K=4$ is illustrated in Fig. \[fig0\]. The projection of the cw equilibrium $\phi_b$ onto the 2D space $\mathcal{P}_2$ is the fixed point $\mathbf{A}=(|\phi_b|^2,|\phi_b|^2)=\left(-\frac{\gamma}{\delta},-\frac{\gamma}{\delta}\right)=(1.5,1.5)$. The left panel of Fig. \[fig0\] illustrates the convergence of the projected orbits to $\mathbf{A}$, associated with the choice of spatial coordinates $x_1=5$, $x_2=10$. The dashed blue (continuous red) line is the projection of the flow for the cw with $\epsilon=3$ ($\epsilon=0.01$); the arrows indicate the direction of the 2D-projection of the flow. The cw steady state $\phi_b$ is an element of $\mathcal{V}_K$, and only differs in amplitude from the initial condition. Hence, $\mathcal{V}_K$ defines a stable linear subspace for $\mathbf{A}$. The right panel of Fig.
\[fig0\] visualizes the steady state $\phi_b$ as a limit circle $\mathbf{A}$ of radius $\sqrt{-\frac{\gamma}{\delta}}=\sqrt{1.5}$, in the 2D space $(\mathrm{Re}(u(0,t)),\mathrm{Im}(u(0,t)))$. The limit circle corresponds to the rotating linear oscillations of the real and imaginary parts of the solution $u$. Effectively, in this case, the solution preserves its plane-wave form but its amplitude, say $h(t)$, satisfies a Bernoulli equation. This can be seen as follows: we substitute $u(x,t)=W(t) e^{\mathrm{i} K x}$ in Eq. (\[eq1\]) and obtain the following equation for the time-dependent amplitude $W(t)$ (as usual, overhead dots denote time derivatives): $$\begin{aligned} \mathrm{i} \dot{W} + \frac{1}{2} K^2 W + |W|^2 W &= \mathrm{i}\gamma W +\mathrm{i}\delta |W|^2 W \nonumber \\ &+ \beta K^3 W -\mu K |W|^2 W. \label{extra1}\end{aligned}$$ Then, writing $W(t)= h(t) \exp[\mathrm{i}\theta(t)]$, with the phase obeying $\dot{\theta}=\frac{1}{2}K^2-\beta K^3+\left(1+\mu K\right)h^2$, we obtain from Eq. (\[extra1\]) the Bernoulli equation for the (real) amplitude: $$\dot{h}=\gamma h+\delta h^3.$$ Thus, for $h(0)=\epsilon$, $$\lim_{t\rightarrow\infty}h^2(t)=-\frac{\gamma}{\delta} \equiv |\phi_b|^2.$$ ![(Color Online) The scenario $\omega(u_0)=\{\Phi_p\}$. Upper panels: density snapshots at times (a) $t\approx500$, (b) $t\approx 683$, (c) $t\approx700$. Bottom panels: orbital connections $\mathbf{O}\rightarrow \mathbf{A}\rightarrow\mathbf{B}$ in 2D (left) and 3D (right) spaces. []{data-label="fig1"}](PRFig2.JPG) Next, consider the scenario $\omega(u_0)=\{\Phi_p\}$, for $\beta=0.02$, $\sigma_R=0.3$, and $K=5$, illustrated in Fig. \[fig1\]. The upper panels show density snapshots for a cw initial condition with $\epsilon=0.01$. The solution reaches the cw steady state $\phi_b$ exponentially fast, but at $t\approx 500$ the instability of the state $\phi_b$ emerges. Although transient oscillations of increasing amplitude occur (cf. the snapshot at $t=683$) due to the linear gain $\gamma>0$, the nonlinear loss $\delta<0$ prevents collapse of the solution.
After $t\approx 685$, we observe convergence to the new steady state $\Phi_p$ (reached at $t\approx 700$), whose profile remains unchanged until the end of the integration ($t=3000$). The orbital connection, via the transient dynamics, between the steady states $\phi_b$ and $\Phi_p$ is illustrated in the projections of the flow on the spaces $\mathcal{P}_2$ and $\mathcal{P}_3$ – cf. the bottom left and right panels of Fig. \[fig1\], respectively, for $x_1=0$ and $x_2=4.5$. In 2D, $\mathbf{B}\approx (1.5,0.15)$ is the new fixed point, while in 3D, $\mathbf{A}=(1.5,1.5,1.5)$ and $\mathbf{B}\approx (1.5,0.15, 1.16)$. The infinite-dimensional orbital connection: $$\{0\}\;\;\mbox{(unstable)}\xrightarrow{\mathcal{O}_1}\{\phi_b\}\;\;\mbox{(unstable)}\xrightarrow{\mathcal{O}_2}\{\Phi_p\}=\omega(u_0),$$ where $\mathcal{O}_1$ and $\mathcal{O}_2$ denote the orbits connecting the steady states, is projected to the 2D- and 3D-connections: $$\mathbf{O}\;\;\mbox{(unstable)}\xrightarrow{\mathcal{O}'_1}\{\mathbf{A}\}\;\;\mbox{(unstable)}\xrightarrow{\mathcal{O}'_2}\{\mathbf{B}\}.$$ The projected orbits highlight the spiraling of the stable manifold of the limit point $\mathbf{B}$ around the unstable linear subspace of $\mathbf{O}=(0,0,0)$ connecting $\mathbf{O}$ and $\mathbf{A}$. The connection was found to be stable with respect to variations of $\epsilon$ – cf. the dashed blue (continuous red) converging orbit in the bottom left panel, corresponding to a cw initial condition of amplitude $\epsilon=2$ ($\epsilon=0.01$). ![(Color Online) The dynamics scenario $\omega(u_0)=\mathbf{L}$, i.e., a space-time periodic traveling wave. Upper panels: density snapshots at times (a) $t\approx135$, (b) $t\approx 150$, (c) $t\approx 180$.
Bottom panels: convergence $\mathbf{O}\rightarrow \mathbf{A}\rightarrow\mathbf{L}$, i.e., the limit cycle, in 2D (left) and 3D (right) spaces.[]{data-label="fig2"}](PRFig3.JPG) Space-time periodic (limit-cycle) regime ---------------------------------------- Increasing $\beta$, for $\sigma_R=0.01$, we observe the birth of yet another feature, namely traveling space-time oscillations. The upper panels of Fig. \[fig2\] show density snapshots, for a cw initial condition with $K=5$ and $\epsilon=0.01$, and for $\beta=0.55$. Now, the instability of the steady state $\phi_b$ leads to the birth of a stable, traveling space-time periodic solution, whose profile is shown at $t=180$ (the arrow indicates the propagation direction). The projections, for $x_1=0$, $x_2=5$ and $x_3=10$, on $\mathcal{P}_2$ (bottom left panel) and $\mathcal{P}_3$ (bottom right panel), visualize the periodic solution as a limit cycle $\mathbf{L}$, i.e., a periodic orbit. The continuous blue (dashed red) orbit shown in the bottom left panel corresponds to the cw initial condition with $K=5$ and $\epsilon=3$ ($\epsilon=0.01$), highlighting the stability (i.e., the attracting nature) of the limit cycle with respect to $\epsilon$. ![(Color Online) The dynamics scenario $\omega(u_0)=\mathbf{L}$, in the presence of third-order dispersion only, namely, for $\beta=0.02$ and $\mu=\nu=\sigma_R=0$.
Upper panels: density snapshots at times (a) $t\approx 400$, (b) $t \approx 420$ and (c) $t \approx 450$, for a cw initial condition with $K=15$, $\epsilon=0.01$, and $\beta=0.02$. Bottom left and right panels: convergence $\mathbf{O}\rightarrow \mathbf{A}\rightarrow\mathbf{L}$, i.e., the stable limit cycle, in the 2D and 3D spaces.[]{data-label="fig2b"}](PRFig4.JPG "fig:") Specifically, for fixed $\sigma_R=0.01$ and $K>4$, there exists an interval $I_{\beta,K}=[\beta_{\mathrm{min}}(K), \beta_{\mathrm{max}}(K)]$, such that for $\beta\in I_{\beta,K}$, the initial condition may converge to a space-time periodic solution; e.g., for $K=5$, $I_{\beta,5}\approx[0.5,0.57]$, while for $K=20$, $I_{\beta,20}\approx[0.7, 1.2]$. On the other hand, when $\beta\notin I_{\beta, K}$, the initial condition converges to a steady state. Evidently, the structure of the limit set $\omega(u_0)$ of Eq. (\[eq1\]), consisting either of distinct equilibria and orbits connecting them, or of a limit cycle, is reminiscent of the scenarios associated with PB dynamics. It is important to remark that third-order dispersion plays a critical role in the scenario $\omega(u_0)=\mathbf{L}$, as it can be [*solely*]{} responsible for the emergence of a limit cycle. Indeed, Fig. \[fig2b\] shows the dynamics for a cw initial condition with $K=15$ and amplitude as in Fig. \[fig2\], but for $\beta=0.02$ and $\sigma_R=\mu=\nu=0$.
Furthermore, third-order dispersion alone can also give rise to even more complex behavior (see below). ![(Color Online) Birth of a chaotic attractor $\omega(u_0)=\mathbf{S}$. Transition from the instability of the cw steady state $\mathbf{A}$, to quasi-periodic, and then to chaotic behavior for $t\in [0,330]$. []{data-label="fig3"}](PRfig5.JPG) Quasi-periodic and chaotic regime --------------------------------- The interval $I_{\beta,K}$ may be partitioned into sub-intervals where quasi-periodic, or even chaotic, behavior emerges. Figure \[fig3\] shows the 3D-projection of the flow on $\mathcal{P}_3$, for $x_1=5$, $x_2=10$, $x_3=15$, $t\in [0,350]$, $\beta=0.52$, $\sigma_R=\mu=\nu=0.01$, for a cw initial condition with $\epsilon=0.01$ and $K=5$. We observe the birth of quasi-periodic orbits from the instability of the steady state $\phi_b$, and the transition to chaotic behavior manifested by their trapping into a chaotic attractor $\mathbf{S}$. ![(Color Online) Top left panel: a chaotic path in $\mathbf{S}$ for $t\in [180,200]$. Top right panel: projection in the 3D-space $\mathcal{P}_3$ of the invariant torus-like set $\mathbf{Q}$ for $t\in [1800,2000]$. Bottom panels: chaotic waveforms, corresponding to the points $\mathbf{P1}$ at time $t\approx 150$ (left) and $\mathbf{P2}$ at time $t \approx 165$ (middle), and a quasi-periodic solution in $\mathbf{Q}$ at time $t\approx 1900$ (right). []{data-label="fig4"}](PRFig6.JPG) The upper left panel of Fig. \[fig4\] shows part of a chaotic orbit in $\mathbf{S}$, for $t\in [180,200]$ and $\beta =0.5\approx \beta_{\mathrm{min}}(5)$. The first two snapshots of the bottom panels show profiles of the solution corresponding to the points $\mathbf{P1}$ and $\mathbf{P2}$, at $t=150$ and $t=165$, respectively. The “windings” of the chaotic orbits are evident in the upper left panel of Fig. \[fig4\], similarly to the bottom right panel of Fig. \[fig3\].
The chaotic behavior manifests itself in the time-fluctuating amplitude, the changes in the waveform’s spatial periodicity, and the changes in the propagation direction of the chaotic traveling wave. The interval $I_{\beta,K}=[\beta_{\mathrm{min}}(K), \beta_{\mathrm{max}}(K)]$ can be partitioned into the following sub-intervals: a chaotic one, $I_{\beta,K,c}=[\beta_{\mathrm{min}}(K), \beta_{\mathrm{ch}}(K)]$, a quasi-periodic one, $I_{\beta,K,q}=(\beta_{\mathrm{ch}}(K), \beta_{\mathrm{lc}}(K))$, and a limit-cycle one, $I_{\beta,K,lc}=[\beta_{\mathrm{lc}}(K), \beta_{\mathrm{max}}(K)]$. Here, $\beta_{\mathrm{min}}(K)$ is the critical value for the onset of the quasi-periodic behavior and the transition to the chaotic regime. As $\beta\rightarrow \beta_{\mathrm{ch}}(K)$, the chaotic features become less evident and emerge at later times; chaotic orbits still exist for $\beta=\beta_{\mathrm{ch}}(K)$. For $\beta>\beta_{\mathrm{ch}}(K)$, solutions remain quasi-periodic, and the orbit is trapped within an invariant torus-like set $\mathbf{Q}$. For $K=5$, we find that $\beta_{\mathrm{ch}}(5)\approx 0.53$. The projection on $\mathcal{P}_3$ of $\mathbf{Q}$ for $\beta=0.54>\beta_{\mathrm{ch}}(5)$ is shown in the upper right panel of Fig. \[fig4\]. The orbit is plotted for $t\in [1800,2000]$, and the profile of a quasi-periodic solution within $\mathbf{Q}$ at $t=1900$ is shown in the third snapshot of the bottom panels. The set $\mathbf{Q}$ persists as long as $\beta<\beta_{\mathrm{lc}}(K)$. When $\beta_{\mathrm{lc}}(K)\leq \beta\leq \beta_{\mathrm{max}}(K)$, the set $\mathbf{Q}$ is replaced by a limit cycle. For $K=5$, we find the following sub-intervals of $I_{\beta,5}\approx[0.5,0.57]$: the chaotic $I_{\beta,5,c}\approx [0.5, 0.53]$, the quasi-periodic $I_{\beta,5,q}\approx (0.53, 0.55)$, and the limit-cycle $I_{\beta,5,lc}\approx [0.55, 0.57]$.
For $K=5$, the above sub-intervals were detected with an accuracy of $10^{-3}$: for $\beta=0.549$, the set $\mathbf{Q}$ persists, while for $\beta=0.55$, the initial state is trapped on the limit cycle. Numerical bifurcation diagrams ------------------------------ The richness of the dynamics can be summarized in a bifurcation diagnostic (“Diagnostic I”), namely the $\beta- ||u||_{\infty}$ bifurcation diagram, shown in the top panel of Fig. \[fig6\]. The bifurcation curve \[continuous (blue) line\] illustrates the variations of the $||u||_{\infty}$-norm of the solutions, defined as $$||u||_{\infty}={\rm max}_{(x,t)\in D}|u(x,t)|,~~D=[-L,L]\times [0, T_{\mathrm{max}}],$$ where $T_{\mathrm{max}}$ denotes the end of the interval of numerical integration $[0,T_{\mathrm{max}}]$; the third-order dispersion coefficient is varied in the interval $\beta\in [0,1]$, while the rest of the parameters are fixed to the values $\sigma_R=\mu=\nu=0.01$, and for the cw initial condition we use $\epsilon=0.01$ and $K=5$. The system was integrated until $T_{\mathrm{max}}=3000$. The branches AB and FG correspond to the intervals $\beta\in[0,0.18)$ and $\beta\in (0.57,1]$, respectively, and are associated with the dynamical scenario $\omega(u_0)=\{\phi_b\}$, i.e., the convergence to the steady state of constant density $|\phi_b|^2=-\frac{\gamma}{\delta}$. The intersection of the bifurcation curve with the auxiliary “separatrix” B, at $\beta\approx 0.18$, designates the transition to the equilibrium metastability region BC \[light grey (pale yellow) shaded area\], in the interval $\beta\in [0.18, 0.5)$. The fluctuations of the bifurcation curve are associated with metastable dynamical scenarios between distinct states. One such scenario may refer to the orbital connections between steady states mentioned above; another may correspond to a transition from unstable periodic orbits to chaotic oscillations, and an eventual convergence to a steady state.
These scenarios are characterized by drastically different transient dynamics. As a first example, we note the metastable transition – at $\beta=0.3$ \[vertical dashed (red) line\] – between three distinct steady states, $\mathbf{E1}\rightarrow \mathbf{E2}\rightarrow\mathbf{E3}$ (with $\mathbf{E1}$ marking the steady state of constant density $|\phi_b|^2=-\frac{\gamma}{\delta}$). The third-row panels of Fig. \[fig6\] show density profiles of these steady states. A second example refers to the transition from an unstable periodic orbit $\mathbf{PO}$ (which emerges from the instability of the steady state $\phi_b$) to chaotic oscillations $\mathbf{CH}$, and the eventual convergence to the final steady state $\mathbf{E3}$; this transition occurs for $\beta=0.47$ \[horizontal dashed (black) line\]. Density profiles during this transition are shown in the fourth-row panels of Fig. \[fig6\]. For the first example, the ultimate state $\mathbf{E3}$ is reached at $t\approx 103$, and the solution remains unchanged until the end of the integration, while for the second example, the ultimate state $\mathbf{E3}$ is reached at $t\approx 217$. ![(Color Online) Top panel: the $\beta- ||u||_{\infty}$ bifurcation diagram (Diagnostic I), for fixed $\sigma_R=\mu=\nu=0.01$, and the cw initial condition with $\epsilon=0.01$ and $K=5$. Second-row panel: magnification of the quasi-periodic region DE shown in the top panel. Third-row panels: profiles of the distinct steady states involved in the orbital connection $\mathbf{E1}\rightarrow \mathbf{E2}\rightarrow\mathbf{E3}$ occurring at $\beta=0.3$ in the metastable region BC. The system is at rest in the steady state $\mathbf{E1}$ for $5\lesssim t\lesssim 35$, in $\mathbf{E2}$ for $60\lesssim t\lesssim 68$, and in $\mathbf{E3}$ for $100\lesssim t\lesssim 3000$ (the end of integration).
Fourth-row panels: transition from an unstable periodic orbit $\mathbf{PO}$ to chaotic oscillations $\mathbf{CH}$, which are eventually damped to the steady state $\mathbf{E3}$. The unstable periodic orbit $\mathbf{PO}$ survives for $62\lesssim t\lesssim 105$, and the chaotic orbit for $120\lesssim t\lesssim 203$. The system is at rest in the steady state $\mathbf{E3}$ for $217\lesssim t\lesssim 3000$ (the end of integration). []{data-label="fig6"}](PRFig7.JPG) The intersection of the bifurcation curve with the second auxiliary separatrix C (which has an almost vertical slope) is associated with the transition to the chaotic region CD (grey-shaded area), corresponding to the interval $I_{\beta,5,c}\approx [0.5, 0.53]$. The sudden, steep jump of the bifurcation curve at the intersection with the separatrix D designates the entrance into the quasi-periodic regime DE \[dark (pale red) shaded area\], associated with the interval $I_{\beta,5,q}\approx (0.53, 0.55)$. This region is magnified in the second-row panel of Fig. \[fig6\]. On the other hand, the next steep jump, at the intersection with the separatrix E (also magnified in the second-row panel of Fig. \[fig6\]), depicts the entrance to the space-time periodic regime EF \[grey (pale green) shaded area\], associated with the limit-cycle interval $I_{\beta,5,lc}\approx [0.55, 0.57]$. The limit-cycle branch bifurcates from the intersection with the separatrix F, beyond which the branch FG of the constant-density steady state is traced.
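In practice, Diagnostic I amounts to a maximum over the stored space-time record of the simulation. The following minimal sketch (in Python with NumPy; the array layout and names are our illustrative assumptions, not the code used to produce Fig. \[fig6\]) shows how the $||u||_{\infty}$ diagnostic is extracted, and why it is sensitive to transients: a brief overshoot dominates the maximum even if the long-time state has a smaller amplitude.

```python
import numpy as np

def diagnostic_I(u_xt):
    """Diagnostic I: sup-norm of |u| over the whole space-time record.

    u_xt : complex array of shape (n_times, n_points), u_xt[n, j] = u(x_j, t_n),
    sampled on [0, T_max] x [-L, L].  The maximum may be attained at a transient
    instant t_0, which is what makes this diagnostic sensitive to
    far-from-equilibrium dynamics.
    """
    return np.abs(u_xt).max()

# toy record: a plane wave whose amplitude saturates at sqrt(1.5) after
# a 20% transient overshoot at t = 0
t = np.linspace(0.0, 10.0, 201)[:, None]
x = np.linspace(-50.0, 50.0, 256)[None, :]
amp = np.sqrt(1.5) * (1.0 + 0.2 * np.exp(-t) * np.cos(3.0 * t))
u = amp * np.exp(1j * 0.1 * np.pi * x)
print(diagnostic_I(u))   # about 1.47 > sqrt(1.5): the transient dominates
```

The same record evaluated only at the final time would miss the overshoot entirely, which is precisely the contrast with Diagnostic II discussed next.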
![(Color Online) Top panel: the $\beta- ||u(T_{\mathrm{max}})||^2_{\alpha}$ bifurcation diagram (Diagnostic II), for fixed $\sigma_R=\mu=\nu=0.01$, and the cw initial condition with $\epsilon=0.01$ and $K=5$.[]{data-label="fig7.eps"}](PRFig8.JPG) Another bifurcation diagnostic (“Diagnostic II”) that we use herein is associated with the variation of the quantity $$||u(T_{\mathrm{max}})||_{\alpha}^2=\frac{1}{2L}\int_{-L}^{L}|u(x,T_{\mathrm{max}})|^2dx$$ with respect to $\beta$. For sufficiently large $T_{\mathrm{max}}$, $||u(T_{\mathrm{max}})||_{\alpha}^2$ can be thought of as an approximation of the limit superior of the quantity in Eq. (\[gees\]). A drawback of this diagnostic is that the transient dynamics are hidden (for sufficiently large $T_{\mathrm{max}}$); more generally, the result hinges strongly on the selection of $T_{\mathrm{max}}$, and does not necessarily reflect the evolution at earlier times, or mirror that at later ones. Nevertheless, for sufficiently large $T_{\mathrm{max}}$, it can be particularly useful in detecting convergence to different steady states, e.g., $\omega(u_0)=\{\phi_b\}$ or $\omega(u_0)=\{\Phi_p\}$, via metastability. Furthermore, it is also able to detect regimes of more complex behavior, similarly to the $||u||_{\infty}$-diagnostic. Figure \[fig7.eps\] shows the $\beta-||u(T_{\mathrm{max}})||_{\alpha}^2$ bifurcation curve \[continuous (red) line\], for $T_{\mathrm{max}}=3000$; the rest of the parameters are as in Fig. \[fig6\]. The four shaded regions correspond to the same distinct dynamical regimes that were detected in the $\beta- ||u||_{\infty}$ bifurcation diagram of Fig. \[fig6\]. The horizontal straight lines $$||u(T_{\mathrm{max}})||_{\alpha}^2=1.5=-\frac{\gamma}{\delta}$$ in the regions AB and FG show that, in these regimes of $\beta$, solutions converge to the steady state $\phi_b$. The intersection of the bifurcation curve with the auxiliary “separatrix” B, at $\beta\approx 0.18$, still designates the transition to the equilibrium metastability region BC.
However, the new horizontal straight line $||u(T_{\mathrm{max}})||_{\alpha}^2=0.68$ clearly shows that, after the transient metastability dynamics, the solution favors a particular steady-state of convergence, namely $\mathbf{E3}$ for these parameters. It is now useful to compare Diagnostics I and II. First, we note that comparing the two in the metastability regime BC reveals that far-from-equilibrium transient dynamics are only identified by the fluctuations in the $\beta-||u||_{\infty}$ curve (Diagnostic I) – and [*not*]{} in the $\beta-||u(T_{\mathrm{max}})||_{\alpha}^2$ curve (Diagnostic II). These fluctuations can be understood by the fact that $||u||_{\infty}$ may be attained at any instant $t_0\in[0,T_{\mathrm{max}}]$, and also by noting that, in general, $||u||_{\infty}\neq {\rm max}_{-L\le x\le L}|\Phi(x)|$ \[i.e., the $||u||_{\infty}$-norm of a steady-state $\Phi(x)$\]. Diagnostic II, on the other hand, reveals that in the metastability regime BC, the dynamics favors a distinct steady-state (as mentioned above) – a fact that cannot be captured by Diagnostic I. ![(Color Online) Top panel: $\sigma_R- ||u(T_{\mathrm{max}})||^2_{\alpha}$ bifurcation diagram (Diagnostic II), for fixed $\beta=0.52$, $\mu=\nu=0.01$, and the cw-initial condition of $\epsilon=0.01$ and $K=5$. Bottom left panel: A chaotic path for $t\in[600, 650]$, when $\beta=0.53$, $\sigma_R=\mu=\nu=0$ and the initial condition is as in the top panel. Bottom right panel: chaotic waveforms corresponding to points $\mathbf{P1}$ at $t \approx 600$ (top), $\mathbf{P2}$ at $t\approx 625$ (middle), and $\mathbf{P3}$ at $t\approx 650$ (bottom). []{data-label="fig8"}](PRFig9.JPG) As far as the other regimes are concerned, Diagnostic II can also capture the transition to the chaotic regime CD, indicated by the intersection of the bifurcation curve with the auxiliary separatrix C, as well as by its large rapid fluctuations within region CD.
The sudden jump of the bifurcation curve at the intersection with the separatrix D designates the entrance into the quasi-periodic regime, portrayed by the small, almost horizontal branch of quasi-periodic solutions within region DE. Note that the transition to the quasi-periodic regime is much more apparent in Diagnostic II than in Diagnostic I. The intersection of the bifurcation curve with the separatrix E (at a point where the curve has a local minimum in the region DF) is again associated with the entrance to the space-time periodic regime EF (corresponding to the branch of space-time periodic solutions). This branch bifurcates from the straight line FG (pertinent to constant density steady-states) at its intersection with the separatrix F. At this point, it is important to make some additional remarks. First, the interval $I_{\beta,K}$, corresponding to the region CF in the bifurcation diagrams, was found to be unstable under variations of $\sigma_R>0$. Corresponding (in)stability regimes are illustrated in the top panel of Fig. \[fig8\], where a Diagnostic II-type diagram is shown, namely the bifurcation curve $\sigma_R-||u(T_{\mathrm{max}})||_{\alpha}^2$ \[continuous (red) line\]. This diagram is plotted for fixed $T_{\mathrm{max}}=3000$ and $\beta=0.52$ (recall that, in the previous case, for fixed $\sigma_R=0.01$, it was found that $\beta=0.52\in I_{\beta,5,c}\approx [0.5, 0.53]$, i.e., in the chaotic regime); the rest of the parameters are as in Fig. \[fig6\]. It is observed that for relatively small values of the SRS coefficient, namely for $\sigma_R<0.03$ (cf. grey-shaded area, labeled by SR), chaotic behavior persists. On the other hand, above this threshold, i.e., for $\sigma_R> 0.03$, chaotic structures are destroyed, and the system enters the metastability regime (labeled by RW in the diagram). The ultimate steady-state is $\mathbf{E3}$ for these parameters.
Note that the instability of the quasi-periodic and space-time periodic regimes under the influence of small increments of $\sigma_R$ occurs in a very similar manner, and can be plotted in similar bifurcation diagrams (results not shown here). Second, the interval $I_{\beta,K}$ persists even in the absence of the rest of the higher-order effects, i.e., for $\sigma_R=\mu=\nu=0$. This highlights the fact that the third-order dispersion plays a dominant role in the emergence of complex dynamics. An example of the chaotic behavior, for $\beta=0.53$ and $\mu=\nu=\sigma_R=0$, is shown in the bottom panels of Fig. \[fig8\]. In particular, the bottom left panel shows a part of a chaotic orbit for $t\in [600,650]$, of the 3D-projection of the flow on $\mathcal{P}_3$, for $x_1=5$, $x_2=10$, and $x_3=15$. Furthermore, the three snapshots in the bottom right panel show profiles of the solution corresponding to points $\mathbf{P1}$, $\mathbf{P2}$, and $\mathbf{P3}$ of the chaotic path shown on the left, for $t=600$, $t=625$, and $t=650$, respectively. Conclusions =========== In conclusion, we have studied a physically important and broadly relevant higher-order Ginzburg-Landau equation with zero diffusion. The model considered is motivated by a higher-order nonlinear Schr[ö]{}dinger equation, which finds applications in a variety of contexts, ranging from nonlinear fiber optics to deep water waves; the model also incorporates linear loss and nonlinear gain, while it is supplemented with periodic boundary conditions, which are relevant to optical cavity settings, such as those employed, e.g., in ring lasers. Our analysis revealed that the infinite-dimensional dynamics of this model can be reduced to a sequence of low-dimensional dynamical scenarios (fixed points, periodic and quasi-periodic, as well as chaotic orbits) that can be suitably revealed in reduced (two- and three-dimensional) phase space representations.
Such a dynamical picture is shared by various non-integrable perturbations of Hamiltonian partial differential equations (such as the NLS, sine-Gordon, and others), as these perturbations may break the homoclinic structure of their integrable counterparts. However, the path to all the above dynamical scenarios can be traced in drastically different ways and to essentially distinct roots, even if the systems have similar origins for their dissipative nature, manifested by the existence of an attractor, e.g., due to the presence of gain/loss, as in the case of CGL models. In particular, in our higher-order CGL model, keeping gain/loss – as well as the other coefficients of the higher-order effects – fixed, we have shown that the competition between third-order dispersion and the SRS effect (in the presence of nonlinearity, dispersion, and gain/loss) can trace a path from Poincaré-Bendixson–type behavior to quasi-periodic or chaotic dynamics. These dynamical transitions are also reminiscent of ones observed in fiber ring lasers, or in the path towards optical turbulence phenomena [@VanWig; @Ikeda]. A conspicuous finding was that third-order dispersion appears to play a critical role in controlling the transition from periodic to quasi-periodic, and eventually to chaotic behavior, even in the absence of the rest of the higher-order effects. Our results highlight that higher-order effects may play a primary role in the birth of spatiotemporal transitions in mixed gain/loss systems, suggesting further investigations. First of all, in the framework of the model we considered herein, it would be particularly interesting to investigate more broadly the full six-parameter space, rather than the low-dimensional projection considered herein.
Furthermore, another interesting direction would be the identification of a low-dimensional attractor, its dimension and dependence on the spatial length [@eli], as well as the construction of the appropriate finite-dimensional reduced systems able to capture the effective low-dimensional dynamics [@Cv2]. Lastly, it would also be interesting to investigate the role of higher-order effects in other autonomous systems with gain and loss. Existence of a limit set (attractor) {#apLS} ==================================== In this Appendix, we define an extended dynamical system associated with the initial-boundary value problem (\[eq1\])-(\[eq3\]). In particular, we briefly sketch the proof of the existence of a limit set (attractor) capturing all bounded orbits of this dynamical system, which initiate from sufficiently smooth initial data (\[eq3\]). The starting point of our proof is the power balance equation [@remark]: $$\begin{aligned} \label{OL2ge} \frac{d}{dt}\int_{-L}^{L}|u|^2dx=2\gamma\int_{-L}^{L}|u|^2dx +2\delta\int_{-L}^{L}|u|^4dx,\end{aligned}$$ satisfied by any local solution $u\in C([0,T], H^k_{per}(\Omega))$, which initiates from sufficiently smooth initial data $u_0\in H^k_{per}(\Omega)$, for fixed $k\geq 3$. Here, $H^k_{per}(\Omega)$ denotes the Sobolev space of periodic functions $H^k_{per}$ [@RTem88], in the fundamental interval $\Omega=[-L,L]$: $$\begin{aligned} H^k_{per}(\Omega)=\{u:\Omega\rightarrow \mathbb{C},\,\,u\,\mbox{and}\,\, \partial^j_xu\in L^2(\Omega),\, j=1,...,k;\nonumber\\ u\;\mbox{and}\;\partial^j_xu\; \mbox{for $j=1,...,k-1$, are $2L$-periodic} \}.\end{aligned}$$ Analysis of (\[OL2ge\]) results in the asymptotic estimate: $$\begin{aligned} \label{gees} \limsup_{t\rightarrow\infty} \frac{1}{2L}\int_{-L}^{L}|u(x,t)|^2dx\leq-\frac{\gamma}{\delta},\end{aligned}$$ hence local-in-time solutions $u\in C([0,T], H^k_{per}(\Omega))$ are uniformly bounded in $L^2(\Omega)$.
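The mechanism behind the estimate (\[gees\]) can be illustrated with a toy computation: for a spatially uniform field $u(x,t)=A(t)$, the power balance (\[OL2ge\]) reduces exactly to the scalar ODE $dN/dt = 2\gamma N + 2\delta N^2$ for $N = ||u||^2_{\alpha}$, whose stable equilibrium is $N=-\gamma/\delta$. The sketch below integrates this reduced ODE; the particular coefficient values are illustrative assumptions, chosen only so that $-\gamma/\delta=1.5$, the value appearing in the bifurcation diagrams.

```python
# Toy check of the power-balance estimate: for a spatially uniform field
# u(x,t) = A(t), the balance equation reduces to
#     dN/dt = 2*gamma*N + 2*delta*N**2,   N = ||u||_alpha^2,
# with stable equilibrium N* = -gamma/delta.  The values gamma = 0.003,
# delta = -0.002 are illustrative only (-gamma/delta = 1.5).

def evolve_power(N0, gamma=0.003, delta=-0.002, dt=0.1, T=5000.0):
    """Forward-Euler integration of dN/dt = 2*gamma*N + 2*delta*N**2."""
    N = N0
    for _ in range(int(T / dt)):
        N += dt * (2.0 * gamma * N + 2.0 * delta * N**2)
    return N

# Starting either below or above the equilibrium, N converges to 1.5,
# consistent with the limsup bound -gamma/delta.
N_from_below = evolve_power(0.1)
N_from_above = evolve_power(3.0)
```

Either initial value relaxes to the same density $-\gamma/\delta$, mirroring the horizontal lines $||u(T_{\mathrm{max}})||^2_{\alpha}=1.5$ seen in the diagnostics.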
This allows for the definition of the extended dynamical system $$\varphi(t, u_0): H^k_{per}(\Omega)\rightarrow L^2(\Omega),~~\varphi(t, u_0)=u,$$ whose orbits are bounded $\forall t\geq 0$. Moreover, from the above asymptotic estimate, we derive that, if $L^2(\Omega)$ is endowed with the equivalent averaged norm $$||u||^2_{\alpha}=\frac{1}{2L}\int_{-L}^{L}|u|^2dx$$ then its ball $$\mathcal{B}_{\alpha}(0,\rho)=\left\{u\in L^2(\Omega)\;:\;||u||^2_{\alpha}\leq \rho^2,\;\;\rho^2>-\frac{\gamma}{\delta}\right\}$$ attracts all bounded sets $\mathcal{B}\subset H^k_{per}(\Omega)$. That is, there exists $T^*>0$, such that $\varphi(t, \mathcal{B})\subset \mathcal{B}_{\alpha}$, for all $t\geq T^*$. Thus, we may define for any bounded set $\mathcal{B}\subset H^k_{per}(\Omega)$, $k\geq 3$, its $\omega$-limit set in $L^2(\Omega)$, $$\omega(\mathcal{B})=\bigcap_{s\geq 0}\overline{\bigcup_{t\geq s}\varphi(t,\mathcal{B})}.$$ The closures are taken with respect to the weak topology of $L^2(\Omega)$. Then, the standard (embedding) properties of Sobolev spaces imply that the attractor $\omega(\mathcal{B})$ is at least weakly compact in $L^2(\Omega)$, or relatively compact in the dual space $H^{-1}_{per}(\Omega)$. For any initial condition (\[eq3\]), $u_0\in\mathcal{B}$, we denote its limit set by $\omega(u_0)\subset \omega(\mathcal{B})$. \[apMI\] Modulational instability ======================== In this Appendix we provide the modulational instability analysis of the cw state: $$\label{cw} u=u(x,t)=A e^{{\rm i}\theta(x,t)},\quad \theta(x,t)= k_0 x-\omega_0 t,$$ (where $A$ is a real constant), which is an exact analytical solution of Eq. (\[eq1\]) (for an MI analysis of the cw solution of Eq. (\[eq0\]) cf. Ref. [@potasek]). This solution exists when the following dispersion relation holds: $$\omega_0 = \beta k_0^3 - \frac{k_0^2}{2} - \mu A^2 k_0 + {\rm i}\left( \gamma + \delta A^2 \right) - A^2,$$ while $A^2=-\gamma/\delta$, to suppress any exponential growth.
This amplitude value is consistent with the equilibria (steady states) of the system. Now consider a small perturbation to this cw solution, $$u(x,t)=[A + u_1(x,t)]e^{{\rm i}\theta(x,t)},$$ inserted into Eq. (\[eq1\]). Linearizing the system with respect to $u_1$ we obtain $$\begin{aligned} {\rm i}({u_{1t}}-k_0 u_{1x}) &- \frac{1}{2}{u_{1xx}}+ A^2(u_1 + u_1^*)= {\rm i}\delta A^2 (u_1+u_1^*)\\ &+ {\rm i}\beta ( 3k_0^2{u_{1x}} - 3{\rm i}{k_0}{u_{1xx}} - {u_{1xxx}}) \\ &- {\rm i}\mu A^2({\rm i}{k_0}{u_1} + {\rm i}{k_0}u_1^* + 2{u_{1x}} + u_{1x}^*) \\ &- {\rm i}(\nu-{\rm i}\sigma_R) A^2({u_{1x}}+u_{1x}^*),\end{aligned}$$ where the star denotes complex conjugation. Solutions of the above equation are sought in the form: $$u_1(x,t)=c_1 e^{i (k x-\omega t)} + c_2 e^{-i (k x-\omega t)},$$ where $c_{1,2}$ are real constants, while $k$ and $\omega$ are the wavenumber and frequency of the perturbations. This way, we obtain the dispersion relation: $$\label{disp} \delta^2\omega^2+p_1(k)\omega+p_2(k)=0$$ where $$\begin{aligned} p_1(k) &= - 2\beta {k^3} + 2[ - 3\beta k_0^2 + {k_0} + {A^2}(2\mu + \nu - {\rm i}\sigma_R )]k, \\ p_2(k) &= {\beta ^2}{k^6} \\&+ [ - 3{\beta ^2}k_0^2 + \beta {k_0} - 2\beta {A^2}(2\mu + \nu - {\rm i}\sigma_R ) - 1/4]{k^4} \\&+ [9{\beta ^2}k_0^4 - 6\beta k_0^3 + k_0^2( {1 - 6\beta {A^2}(\mu + \nu - {\rm i}\sigma_R )} ) \\&+ {k_0}{A^2}(\beta (6 - 6{\rm i}\delta ) + 3\mu + 2\nu - 2{\rm i}\sigma_R ) \\&+ {A^2} ( {{\rm i}\delta + \mu {A^2}(3\mu + 2\nu - 2{\rm i}\sigma_R ) - 1} )]{k^2},\end{aligned}$$ and it should be recalled that $A^2=-\gamma/\delta$. It is clear that the system will always be modulationally unstable, since the solutions of Eq. (\[disp\]) are in general complex. [99]{} M. J. Ablowitz and H. Segur, [*Solitons and the Inverse Scattering Transform*]{} (SIAM, 1981). M. J. Ablowitz, [*Nonlinear Dispersive Waves: Asymptotic Analysis and Solitons*]{} (Cambridge University Press, 2011). A. Hasegawa and Y.
Kodama, [*Solitons in Optical Communications*]{} (Oxford University Press, 1996); G. P. Agrawal, [*Nonlinear Fiber Optics*]{} (Academic Press, 2012); Yu. S. Kivshar and G. P. Agrawal, [*Optical Solitons: From Fibers to Photonic Crystals*]{} (Academic Press, 2003). M. Scalora, M. S. Syrchin, N. Akozbek, E. Y. Poliakov, G. D’Aguanno, N. Mattiucci, M. J. Bloemer, and A. M. Zheltikov, Phys. Rev. Lett. **95**, 013902 (2005); S. Wen, Y. Xiang, X. Dai, Z. Tang, W. Su, and D. Fan, Phys. Rev. A **75**, 033815 (2007); N. L. Tsitsas, N. Rompotis, I. Kourakis, P. G. Kevrekidis, and D. J. Frantzeskakis, Phys. Rev. E **79**, 037601 (2009). R. S. Johnson, Proc. R. Soc. Lond. A [**357**]{}, 131 (1977). Yu. V. Sedletsky, J. Exp. Theor. Phys. [**97**]{}, 180 (2003). A. V. Slunyaev, J. Exp. Theor. Phys. [**101**]{}, 926 (2005). N. N. Akhmediev and A. Ankiewicz, [*Solitons. Nonlinear Pulses and Beams*]{} (Chapman and Hall, 1997). H. Haus, IEEE J. Sel. Topics Quantum Electron. [**6**]{}, 1173 (2000). N. Akhmediev, J. M. Soto-Crespo, and G. Town, Phys. Rev. E [**63**]{}, 056602 (2001). E. N. Tsoy and N. Akhmediev, Phys. Lett. A **343**, 417 (2005). E. N. Tsoy, A. Ankiewicz, and N. Akhmediev, Phys. Rev. E **73**, 036621 (2006). D. Sanvitto and V. Timofeev, [*Exciton Polaritons in Microcavities*]{} (Springer-Verlag, Berlin, 2012). N. Akhmediev and A. Ankiewicz (eds.), [*Dissipative Solitons*]{} (Springer, Berlin, 2005). M. C. Cross and P. C. Hohenberg, Rev. Mod. Phys. [**65**]{}, 851 (1993); I. S. Aranson and L. Kramer, Rev. Mod. Phys. [**74**]{}, 99 (2002). M. W. Hirsch, S. Smale, and R. L. Devaney, [*Differential Equations, Dynamical Systems, and an Introduction to Chaos*]{} (Elsevier, 2004). I. G. Kevrekidis, B. Nicolaenko, and J. C. Scovel, SIAM J. Appl. Math. [**50**]{}, 760 (1990); F. Christiansen, P. Cvitanović, and V. Putkaradze, Nonlinearity [**10**]{}, 55 (1997). H. T. Moon, P. Huerre, and L. G. Redekopp, Phys. Rev. Lett. [**49**]{}, 458 (1982); K. Nozaki and N. Bekki, Phys.
Rev. Lett. [**51**]{}, 2171 (1983); R. J. Deissler and H. R. Brand, Phys. Rev. Lett. **72**, 478 (1994). K. Nozaki and N. Bekki, Phys. Rev. Lett. [**50**]{}, 1226 (1983); Physica D [**21**]{}, 381 (1986); Phys. Lett. A [**102**]{}, 383 (1984); D. Cai, D. W. McLaughlin, and J. Shatah, Phys. Lett. A [**253**]{}, 280 (1999). E. Shlizerman and V. Rom-Kedar, Chaos [**15**]{}, 013107 (2005); see also: Phys. Rev. Lett. [**96**]{}, 024104 (2006) and Phys. Rev. Lett. [**102**]{}, 033901 (2009). Y. Li and D. W. McLaughlin, Commun. Math. Phys. [**162**]{}, 175 (1994); G. Haller and S. Wiggins, Physica D [**85**]{}, 311 (1995); A. R. Bishop, K. Fesser, P. S. Lomdahl, W. C. Kerr, M. B. Williams, and S. E. Trullinger, Phys. Rev. Lett. **50**, 1095 (1983); A. R. Bishop, M. G. Forest, D. W. McLaughlin, and E. A. Overman II, Phys. Lett. A **144**, 17 (1990); G. Kovačič and S. Wiggins, Physica D **57**, 185 (1992); N. Ercolani, M. G. Forest, and D. W. McLaughlin, Physica D [**43**]{}, 349 (1990). H. Ikeda, M. Matsumoto, and A. Hasegawa, Opt. Lett. **20**, 1113 (1995). T. P. Horikis and D. J. Frantzeskakis, Opt. Lett. **38**, 5098 (2013). Y. Chen and J. Atai, Opt. Lett. **16**, 1933 (1991); Y. Chen, Phys. Rev. A **45**, 6922 (1992); Yu. S. Kivshar and X. Yang, Phys. Rev. E **49**, 1657 (1994). S. C. V. Latas, M. F. S. Ferreira, and M. V. Facão, Appl. Phys. B **104**, 131 (2011); S. C. V. Latas and M. F. S. Ferreira, Opt. Lett. **37**, 3897 (2012); I. M. Uzunov, T. N. Arabadzhiev, and Z. D. Georgiev, Opt. Fiber Tech. [**24**]{}, 15 (2015). F. Salin, P. Grangier, G. Roger, and A. Brun, Phys. Rev. Lett. **56**, 1132 (1986). H. A. Haus, K. Tamura, L. E. Nelson, and E. P. Ippen, IEEE J. Q. Elec. [**31**]{}, 591 (1995). G. D. VanWiggeren and R. Roy, Science **279**, 1198 (1998). M. J. Ablowitz, S. D. Nixon, T. P. Horikis, and D. J. Frantzeskakis, J. Phys. A **246**, 095201 (2013). V. Achilleos, S. Diamantidis, D. J. Frantzeskakis, T. P. Horikis, N. I. Karachalios, and P. G.
Kevrekidis, Physica D **316**, 57 (2016). To obtain the equation, multiply Eq. (\[eq1\]) by $u^*$ and its conjugate by $u$, add the resulting equations, and integrate over $[-L,~L]$ using the relevant boundary conditions. R. Temam, [*Infinite-Dimensional Dynamical Systems in Mechanics and Physics*]{} (Springer-Verlag, 1997). V. E. Zakharov and L. A. Ostrovsky, Physica D [**238**]{}, 540 (2009). T. B. Benjamin and J. E. Feir, J. Fluid Mech. [**27**]{}, 417 (1967). K. Ikeda, H. Daido, and O. Akimoto, Phys. Rev. Lett. [**45**]{}, 709 (1980). P. Cvitanović, R. L. Davidchack, and E. Siminos, SIAM J. Appl. Dyn. Syst. [**9**]{}, 1 (2010). M. J. Potasek, Opt. Lett. **12**, 921 (1998).
--- abstract: '[In this paper, we give a parameterization of the $\SU$ representation space of the Brieskorn homology spheres using the trace coordinates. As applications, we give an example which shows that the orbifold Toledo invariant in [@krebs] does not distinguish the connected components of the $\PU$ representation space.]{}' address: 'Institute of Mathematics, Vietnam Academy of Sciences and Technology, 18 Hoang Quoc Viet Road, 10307, Hanoi, Vietnam' author: - Vu The Khoi title: 'On the $\SU$ representation space of the Brieskorn homology spheres' --- [^1] Introduction ============ Let $M$ be a manifold with the fundamental group $\fg(M)$ and $G$ be a Lie group. The *representation space* of $M,$ denoted by $\RR_G(M),$ is the space of representations from $\fg(M)$ into the Lie group $G,$ modulo conjugation: $$\RR_G(M):= \hom(\fg(M),G)/G.$$ We denote by $\RR^*_G(M)$ the subset of the representation space which consists of irreducible representations. The representation space of $3$-manifolds has been studied extensively in the case where $G=\SUn, {\rm SU}(3)$ or $\SLC$ in connection with the Casson invariants and hyperbolic geometry (see [@boden; @bhc; @cs; @fs; @furuta; @kk]). Let us recall that $\SU$ is the special unitary group corresponding to the indefinite inner product $\langle Z,W\rangle_{2,1}=Z_1\overline{W}_1 + Z_2\overline{W}_2 - Z_3\overline{W}_3$ on $\C^3.$ The group $\PU$ is the quotient of $\SU$ by its center. In this paper we study the $\SU$ representation space $\RR_{\SU}(M).$ The motivation for this study comes from complex hyperbolic geometry, where $\RR_{\PU}(M)$ serves as the local model for the deformation space of spherical CR structures on $M.$ For convenience, we will work with the group $\SU$ and then deduce results for the $\PU$ case.
Let $p,q,r$ be pairwise coprime positive integers; the *Brieskorn homology sphere* $\bhs$ is defined to be the link of the singularity $x^p+y^q+z^r=0$ in $\C^3,$ that is: $$\bhs:=\{(x,y,z)|\ x^p+y^q+z^r=0 \}\ \cap \S^5_{\epsilon}.$$ It is well known that the fundamental group of $\bhs$ may be presented as $$\fg(\bhs)= \langle x,y,z,h| \ h\ \text{central},\ x^ph^a=y^qh^b=z^rh^c=xyz=1 \rangle,$$ where $a,b,c$ are integers satisfying $$\frac{a}{p}+ \frac{b}{q}+\frac{c}{r}=\frac{1}{pqr}.$$ In this paper, for simplicity, we will denote by $t_A$ the trace of a matrix $A$ and by $[A,B]$ the commutator $ABA^{-1}B^{-1}.$ The notations $\Re$ and $\Im$ stand for the real and imaginary parts of a complex number, respectively. Our main theorem shows that $\RR^*_{\SU}(M)$ can be parameterized by certain trace coordinates. [**Theorem 3.1**]{} [*Two irreducible representations $\rho, \rho':\fg(\bhs)\longrightarrow \SU$ are conjugate if and only if the image under $\rho$ and $\rho'$ of each $x,y,h$ are conjugate and satisfy the relations $$t_{\rho(xy)}=t_{\rho'(xy)}, \quad t_{\rho(x^{-1}y)}=t_{\rho'(x^{-1}y)}, \quad \Im(t_{\rho([x,y])})=\Im(t_{\rho'([x,y])}).$$* ]{} The rest of this paper is organized as follows. In Section 2 we study the trace identities for the free group of rank two. Using algebraic results about the invariant ring of matrices, we are able to deduce the coordinates and relations for the $\SU$ representation space of the free group of rank two. Section 3 is devoted to the proof of the main result. In this section, we also show how to find the constraints on the parameters of the representation spaces in practice. Finally, in Section 4, we apply our results to give explicit descriptions of the representation spaces of the Brieskorn homology spheres $\Sigma(2,3,11)$ and $\Sigma(2,3,13).$ Trace calculus for free group of rank two ========================================= We first recall some known results about matrices in $\SU$. The reader should consult [@chen; @goldman] for details.
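Before proceeding, note that multiplying the constraint $a/p+b/q+c/r=1/(pqr)$ by $pqr$ shows that $(a,b,c)$ is any integer solution of $a\,qr + b\,rp + c\,pq = 1$. Such a triple is easy to produce by a short search; the sketch below (a purely illustrative helper, not part of the paper) does this for $\Sigma(2,3,11)$, one of the examples treated in Section 4.

```python
# Find integers (a, b, c) with a/p + b/q + c/r = 1/(p*q*r), i.e.
#     a*q*r + b*r*p + c*p*q = 1,
# as needed for the Seifert presentation of pi_1 of the Brieskorn
# sphere Sigma(p, q, r).  Brute force suffices for small p, q, r.

def seifert_data(p, q, r, bound=20):
    """Return one integer solution (a, b, c); solutions are not unique."""
    for a in range(-bound, bound + 1):
        for b in range(-bound, bound + 1):
            rest = 1 - a * q * r - b * r * p
            if rest % (p * q) == 0:          # c must be an integer
                return a, b, rest // (p * q)
    return None

a, b, c = seifert_data(2, 3, 11)
# (a, b, c) satisfies a*33 + b*22 + c*6 = 1 for Sigma(2, 3, 11).
```

Any solution works here, since different choices of $(a,b,c)$ correspond to rewriting the central element $h$ in the group presentation.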
Let $V_{\_}$ and $V_0$ be the two subsets of $\C^3$ defined by $V_{\_}:=\{Z=(z_1, z_2, z_3)\in \C^3|\ \langle Z,Z \rangle_{2,1}\ <0 \}$ and $V_0:=\{Z=(z_1, z_2, z_3)\in \C^3|\ \langle Z,Z \rangle_{2,1}\ =0 \}$. We denote by $P:\C^3\,\setminus\{0\}\rightarrow \C P^2$ the canonical projection onto the complex projective space. Then $P(V_{\_})$ equipped with the Bergman metric is the model of the complex hyperbolic space $H^2_{\C}$. The boundary $\partial H^2_{\C}$ in $\C P^2$ is $P(V_0\setminus \{0\}).$ The elements of $\SU$ can be classified according to their action on the complex hyperbolic space $H^2_{\C}$ [@chen]. Namely, a matrix is called [*elliptic*]{} if it has a fixed point in $H^2_{\C}.$ We call it [*parabolic*]{} if it has a unique fixed point in $\overline {H^2_{\C}}$ which lies on $\partial H^2_{\C}.$ Finally, an element is called [*loxodromic*]{} if it has exactly two fixed points in $\overline {H^2_{\C}}$ which lie on $\partial H^2_{\C}.$ A classification of conjugacy classes of elements of $\SU$ can be found in [@chen]. In particular, it says that two elliptic elements are conjugate if and only if they have the same positive and negative classes of eigenvalues (counted with multiplicity). An explanation of terminology should be added here: we say that an eigenvalue $\lambda$ of an elliptic element is of [*positive type*]{} (respectively [*negative type*]{}) if it has a $\lambda$-eigenvector $v$ such that $ \langle v,v \rangle_{2,1}$ is positive (respectively negative). It has been shown that every eigenvalue of an elliptic element is of either positive or negative type. The next proposition gives several trace identities for a pair of matrices in $\SU.$ These identities will be crucial in getting a coordinate system on the representation space. \[trid\] Let $A$ and $B$ be a pair of matrices in $\SU $. Then the following equations hold: [i)]{} $t_{A^{-1}} = \overline{t_A}$. [ii)]{} $t_{A^2} = t_A^2-2\overline{t_A}$.
[iii)]{} $t_{A^3} = t_A^3-3|t_A|^2 + 3$. [iv)]{} $t_{A^2B} = t_At_{AB} - \overline{t_A}t_B + t_{A^{-1}B}$. [v)]{} $t_{A^2B^2}= t_At_Bt_{AB}-t_A^2\overline{t_B}+t_A\overline{t_{A^{-1}B}}-\overline{t_A}t_B^2+\overline{t_At_B}+t_Bt_{A^{-1}B}$. [vi)]{} $t_{ABAB^{-1}}= t_{AB}\overline{t_{A^{-1}B}}+\overline{t_{AB}}t_B+\overline{t_B}t_{A^{-1}B}+\overline{t_A}(1-|t_B|^2)$. [vii)]{} $t_{ABA^2B^2}= t_{[A,B]} + t_{AB}t_{A^2B^2} - t_{AB}\overline{t_{AB}}$. The first identity follows from the definition of $\SU.$ The next two identities follow from the fact that the characteristic polynomial of $A$ has the form $\lambda^3-t_A\lambda^2+\overline{t_A}\lambda-1$ (see the proof of Theorem 6.2.4 in [@goldman]). Notice that by the Cayley-Hamilton theorem we have $A^3-t_AA^2+\overline{t_A}A-I=0.$ Now, by multiplying this equality from the right by $A^{-1}B$ and then taking the trace, we obtain iv). By multiplying the Cayley-Hamilton identity for $A$ by $A^{-1}B^2$ from the right and using the previous identities we get v). To prove vi) we will combine two equalities. The first one is obtained by multiplying the Cayley-Hamilton identity for $AB$ from the right by $(AB)^{-1}B^{-2}$: $$ABAB^{-1}- t_{AB}AB^{-1}+\overline{t_{AB}}B^{-2}-B^{-1}A^{-1}B^{-2}=0.$$ The second one is obtained by multiplying the Cayley-Hamilton identity for $B$ from the left by $(AB)^{-1}B^{-2}$ : $$B^{-1}A^{-1}B-t_BB^{-1}A^{-1}+\overline{t_{B}}B^{-1}A^{-1}B^{-1}- B^{-1}A^{-1}B^{-2}=0.$$ Combining these two equalities and simplifying by means of the previously proved identities, we get the result. The last identity can be obtained by multiplying the Cayley-Hamilton identity for $AB$ by $(AB)^{-1}B^{-1}AB^2.$ We now state some algebraic results on the algebra of invariants of matrices.
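Before turning to those algebraic results, we note that the identities of Proposition \[trid\] are easy to check numerically. In the sketch below (an illustration, not part of the proofs), elements of $\SU$ are generated as products of a diagonal unitary, a rotation in the $(1,2)$-plane, and a hyperbolic boost in the $(1,3)$-plane, all of which preserve the form $J=\diag(1,1,-1)$ and have determinant one; the particular sampling is an arbitrary assumption, made only to obtain generic test matrices.

```python
import numpy as np

J = np.diag([1.0, 1.0, -1.0])

def random_su21(rng):
    """A (non-uniform) random element of SU(2,1), built as a product of
    one-parameter subgroups preserving J = diag(1, 1, -1)."""
    a, b, th, t = rng.uniform(-2, 2, size=4)
    # Diagonal unitary with determinant 1.
    D = np.diag([np.exp(1j * a), np.exp(1j * b), np.exp(-1j * (a + b))])
    # Rotation in the (1,2)-plane.
    R = np.array([[np.cos(th), -np.sin(th), 0],
                  [np.sin(th),  np.cos(th), 0],
                  [0, 0, 1]], dtype=complex)
    # Hyperbolic boost in the (1,3)-plane.
    B = np.array([[np.cosh(t), 0, np.sinh(t)],
                  [0, 1, 0],
                  [np.sinh(t), 0, np.cosh(t)]], dtype=complex)
    return D @ R @ B

rng = np.random.default_rng(0)
A, Bm = random_su21(rng), random_su21(rng)
Ai = np.linalg.inv(A)
tr = np.trace

# A indeed lies in SU(2,1): it preserves J and has determinant 1.
assert np.allclose(A.conj().T @ J @ A, J) and np.isclose(np.linalg.det(A), 1)
# i)  t_{A^{-1}} = conj(t_A)
assert np.isclose(tr(Ai), np.conj(tr(A)))
# ii) t_{A^2} = t_A^2 - 2 conj(t_A)
assert np.isclose(tr(A @ A), tr(A)**2 - 2 * np.conj(tr(A)))
# iv) t_{A^2 B} = t_A t_{AB} - conj(t_A) t_B + t_{A^{-1}B}
assert np.isclose(tr(A @ A @ Bm),
                  tr(A) * tr(A @ Bm) - np.conj(tr(A)) * tr(Bm) + tr(Ai @ Bm))
```

The same scheme verifies the remaining identities; since each follows from the characteristic polynomial, any $J$-preserving, determinant-one test matrices suffice.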
Let $\C[M_n^{\oplus m}]$ be the coordinate ring for the space of $m$-tuples of $n\times n$ matrices $(A_k=(a_{ij}^k))_{k=1,...,m},$ i.e., $\C[M_n^{\oplus m}]:=\C[a_{ij}^k| 1\le i,j\le n,1\le k \le m].$ Consider the action of $GL_n:={\rm GL}(n,\C)$ by simultaneous conjugation of $m$ matrices. Algebraists are interested in the algebra of invariants $C_{n,m}:=\C[M_n^{\oplus m}]^{GL_n}.$ The following result of Teranishi [@teranishi] will be useful for us: The algebra $C_{3,2}$ of invariants of two matrices $X, Y$ in ${\rm GL}(3,\C)$ is generated by: $$t_{X}, \, t_{Y },\, t_{X^2}, \, t_{XY }, \, t_{Y^2},\, t_{X^3}, \, t_{X^2Y }, \, t_{XY^2}, \, t_{Y^3}, \, t_{X^2Y^2}, \, t_{X^2Y^2XY}.$$ This result means that the trace of any word in $X,Y$ can be expressed as a polynomial in the eleven traces above. When working with the group $\SU$ we can reduce the number of generators greatly by using Proposition \[trid\]. We get the following: \[5\]Let $A$ and $B$ be a pair of matrices in $\SU$; then the trace of any word in $A, B$ can be written as a polynomial of the following variables and their complex conjugates: $$t_A, \, t_B, \, t_{AB}, \, t_{A^{-1}B}, \, t_{[A,B]}.$$ Since the real dimension of $\SU$ is $8$, the real dimension of the representation space of the free group of rank 2 should also be $8.$ Therefore there should be a relation among these $5$ traces. Fortunately, this relation has been computed in [@ads; @nakamoto] as the defining relation for the algebra of invariants. In particular, it has been shown that the algebra of invariants of two matrices in ${\rm GL}(3,\C)$ is defined by a single relation which expresses $t_{X^2Y^2XY}$ as a solution of a quadratic equation whose coefficients are polynomials in the other ten traces. After plugging our variables into the formula in Theorem 1.2 of [@ads] and simplifying by MAPLE, we get the following result.
Let $A$ and $B$ be two matrices in $\SU.$ If we denote $t_A, t_B, t_{AB}, t_{A^{-1}B}$ by $a,b,c,d$ respectively, then the following identities hold: $$\begin{aligned} \Re (t_{[A,B]})&=& \frac{1}{2}(|ab|^2 + |a|^2 + |b|^2 + |c|^2 +|d|^2-ab\overline { c } -\overline {ab}c-a\overline { b } d- \overline { a } b\overline { d } -3). \\ \Im(t_{[A,B]})^2 &=&-\frac{1}{4}(|ab|^2 - |a|^2 - |b|^2 + |c|^2 +|d|^2-ab\overline { c } -\overline {ab}c-a\overline { b } d- \overline { a } b\overline { d } )^2 \\ && +2\Re( -a^3|b|^2+a^2\overline{b}^2\overline{d}+a^2b^2c-a|b|^2\overline{d}c-|a|^2b^3-|a|^2bcd+a^2\overline{c}d \\ && +a^2\overline{b}c+a^2\overline{d}b+ab^2d-2abc^2+acd^2 +\overline{b}\overline{d}c^2+b^2c\overline{a}-2bd^2\overline{a}+c^2d\overline{a} \\ && +a^3+\frac{3}{2}ab\overline{c}+\frac{3}{2}a\overline{b}d-3ac\overline{d}+b^3+b^2\overline{cd}-3bcd+c^3+d^3+d^2\overline{bc} ) \\ &&+\frac{5}{2}|ab|^2+|cd|^2-\frac{9}{2}(|a|^2 + |b|^2 + |c|^2 +|d|^2)+\frac{27}{4}. \end{aligned}$$ Parameterization of the representation space ============================================= In this section we will show that the traces of certain elements give a coordinate system for the irreducible part of the representation space of the Brieskorn homology sphere. Furthermore, we also show how to determine the constraint region for the coordinates. Two irreducible representations $\rho, \rho':\fg(\bhs)\longrightarrow \SU$ are conjugate if and only if the image under $\rho$ and $\rho'$ of each $x,y,h$ are conjugate and satisfy the relations $$t_{\rho(xy)}=t_{\rho'(xy)}, \quad t_{\rho(x^{-1}y)}=t_{\rho'(x^{-1}y)}, \quad \Im(t_{\rho([x,y])})=\Im(t_{\rho'([x,y])}).$$ If $\rho$ and $\rho'$ are conjugate, then the required relations are obviously satisfied. Conversely, suppose that the relations hold.
Since $\rho$ and $\rho'$ are irreducible, $\rho(h)$ and $\rho'(h)$ must lie in the center $Z(\SU)$ of $\SU.$ Notice that the images of $x,y,z$ under a representation are elliptic elements, and they are diagonalizable. Moreover, it also follows from the irreducibility that either $\rho(x)$ or $\rho(y)$ has three distinct eigenvalues, since otherwise $\rho$ would have a non-trivial invariant subspace for dimensional reasons. The same holds for $\rho'$. So, after conjugation, we may assume that $\rho(x)=\rho'(x)= \diag(e^{i\theta_1}, e^{i\theta_2}, e^{i\theta_3}),$ where $e^{i\theta_1}, e^{i\theta_2}, e^{i\theta_3}$ are distinct numbers and $\diag(a,b,c)$ denotes the diagonal matrix whose diagonal elements are $a, b, c.$ To prove the theorem, it is enough to show that we can conjugate $\rho(y)$ to $\rho'(y)$ by a diagonal matrix. To show this, we first prepare a small lemma. Let $A=\diag(e^{i\theta_1}, e^{i\theta_2}, e^{i\theta_3})$, where $e^{i\theta_1}, e^{i\theta_2}, e^{i\theta_3}$ are three distinct numbers. Suppose that $B=(b_{ij})$ and $B'=(b'_{ij})$ are two $3\times 3$ matrices satisfying $t_B=t_{B'}, t_{AB}=t_{AB'}, t_{A^{-1}B}=t_{A^{-1}B'}$. Then the diagonal elements of $B$ and $B'$ are equal.
It follows from our assumption that the following equations hold: $$\left\{\begin{array}{llll} (b_{11}-b'_{11}) & + \ (b_{22}-b'_{22}) & +\ (b_{33}-b'_{33})& =0\\ (b_{11}-b'_{11})e^{i\theta_1} & +\ (b_{22}-b'_{22})e^{i\theta_2} & +\ (b_{33}-b'_{33})e^{i\theta_3}& =0\\ (b_{11}-b'_{11})e^{-i\theta_1} & +\ (b_{22}-b'_{22})e^{-i\theta_2} & +\ (b_{33}-b'_{33})e^{-i\theta_3}& =0 \end{array}\right.$$ Consider this as a system of linear equations in $(b_{ii}-b'_{ii}).$ Since the determinant $$\det\left(\begin{array}{ccc} 1&1&1\\ e^{i\theta_1}&e^{i\theta_2}&e^{i\theta_3}\\ e^{-i\theta_1}&e^{-i\theta_2}&e^{-i\theta_3} \end{array}\right)= \frac{(e^{i\theta_1}-e^{i\theta_2})(e^{i\theta_2}-e^{i\theta_3})(e^{i\theta_3}-e^{i\theta_1})} {e^{i(\theta_1+\theta_2+\theta_3)}}$$ is not zero, we get the conclusion of the lemma. Now, coming back to the proof of our theorem, suppose that $\rho(y)=(y_{ij})$ and $\rho'(y)=(y'_{ij}).$ From our assumption and Proposition 2.3, we have $t_{\rho(w)}=t_{\rho'(w)}$ for every word $w(x,y).$ Applying Lemma 3.2 for $A=\rho(x),$ $B=\rho(y)$ and $B'=\rho'(y),$ we obtain $$(*) \qquad \qquad \qquad \qquad y_{i,i}=y'_{i,i} \qquad (i=1,2,3).$$ Applying Lemma 3.2 again for $A=\rho(x),$ $B=\rho([x,y])$ and $B'=\rho'([x,y]),$ we conclude that the corresponding diagonal elements of $\rho([x,y])$ and $\rho'([x,y])$ are equal. For the first diagonal element, we have: $$|y_{11}|^2 + |y_{12}|^2e^{i(\theta_1-\theta_2)} -|y_{13}|^2e^{i(\theta_1-\theta_3)} = |y'_{11}|^2 + |y'_{12}|^2e^{i(\theta_1-\theta_2)} - |y'_{13}|^2e^{i(\theta_1-\theta_3)}.$$ Using the fact that $\rho(x), \rho(y)$ belong to $\SU$ and $y_{11}=y'_{11},$ we get the following equations: $$\left\{\begin{array}{lll} (|y_{12}|^2 - |y'_{12}|^2) &- (|y_{13}|^2 - |y'_{13}|^2)& =0\\ (|y_{12}|^2 - |y'_{12}|^2)e^{i(\theta_1-\theta_2)} &- (|y_{13}|^2 - |y'_{13}|^2)e^{i(\theta_1-\theta_3)}&=0. \end{array}\right.$$ From these equations, it follows that $|y_{12}|= |y'_{12}|$ and $|y_{13}|=|y'_{13}|$.
Arguing similarly for the other diagonal elements of $\rho([x,y])$ and $\rho'([x,y]),$ we obtain that $$(**) \qquad \qquad\qquad \qquad |y_{ij}|= |y'_{ij}| \qquad ( i\ne j).$$ Applying Lemma 3.2 one more time for $A=\rho(x),$ $B=\rho(y^2)$ and $B'=\rho'(y^2),$ it follows that the corresponding diagonal elements of $\rho(y^2)$ and $\rho'(y^2)$ are equal. Combining with $(*)$, we obtain the following equalities: $$(***) \qquad \qquad \qquad \qquad y_{ij}y_{ji}=y'_{ij}y'_{ji} \qquad ( i\ne j).$$ Now consider the three pairs $(y_{ij},y_{ji})$ for $i < j.$ By the irreducibility of $\rho,$ at least two of these pairs are not equal to $(0,0).$ Without loss of generality, we may assume that $y_{12}\ne 0$ and $y_{31}\ne 0.$ Conjugating $\rho$ by $\diag(e^{i\phi_1}, e^{i\phi_2}, e^{i\phi_3})$ for appropriate values of $\phi_i$ and using $(**),$ we may assume that $y_{12}=y'_{12}$ and $y_{31}=y'_{31}.$ Furthermore, using $(***),$ we get that $y_{21}=y'_{21}$ and $y_{13}=y'_{13}.$ Moreover, since $\rho(y)$ and $\rho'(y)$ are in $\SU,$ we obtain $$y_{11}\overline{y_{12}}+y_{21}\overline{y_{22}}-y_{31}\overline{y_{32}}=0,\qquad y'_{11}\overline{y'_{12}}+y'_{21}\overline{y'_{22}}-y'_{31}\overline{y'_{32}} =0.$$ It follows that $y_{32}=y'_{32}.$ By a similar argument, we also get $y_{23}=y'_{23},$ and thus our theorem is proved. To describe the representation space, for each $h\in Z(\SU), x=\diag(\lambda_1, \lambda_2, \lambda_3), y= P \diag(\mu_1, \mu_2, \mu_3) P^{-1}, P\in \SU$ such that $x^ph^a=y^qh^b=I,$ we need to answer the following two questions: - Does there exist $P$ such that $z=(xy)^{-1}$ satisfies $z^rh^c=I$? - What are the possible values of $t_{x^{-1}y}$?
In other words, we need to find the image of the following map in terms of $\lambda=(\lambda_1, \lambda_2, \lambda_3)$ and $\mu= (\mu_1, \mu_2, \mu_3)$: $$\begin{array}{lll} \Phi_{\lambda,\mu} &:& \SU \longrightarrow \C^2 \\ && P \qquad \mapsto (t_{xy},t_{x^{-1}y}), \end{array}$$ where $x=\diag(\lambda_1, \lambda_2, \lambda_3)$ and $y= P \diag(\mu_1, \mu_2, \mu_3) P^{-1}.$ If we write $P=(p_{ij}),$ then we have $$P^{-1}= \left(\begin{array}{ccc} \overline{p_{11}}&\overline{p_{21}}&-\overline{p_{31}}\\ \overline{p_{12}}&\overline{p_{22}}&-\overline{p_{32}}\\ -\overline{p_{13}}&-\overline{p_{23}}&\overline{p_{33}} \end{array}\right).$$ Let us denote by $\hat P$ the matrix $$\hat P=\left(\begin{array}{ccc} \ \ |{p_{11}}|^2&\ |{p_{12}}|^2&-|{p_{13}}|^2\\ \ \ |{p_{21}}|^2&\ |{p_{22}}|^2&-|{p_{23}}|^2\\ -|{p_{31}}|^2&-|{p_{32}}|^2&\ \ |{p_{33}}|^2 \end{array}\right).$$ Then we have $$t_{xy}=\left(\lambda_1, \lambda_2, \lambda_3\right)\hat P \left(\mu_1, \mu_2, \mu_3\right)^T, \qquad t_{x^{-1}y}= \left(\frac{1}{\lambda_1},\frac{1}{\lambda_2},\frac{1}{ \lambda_3} \right)\hat P \left(\mu_1, \mu_2, \mu_3\right)^T.$$ Let $\mathcal D$ be the set of $3\times 3$ matrices $M$ such that there exists $P=(p_{ij})\in \SU$ satisfying $M=\hat P.$ An explicit description of $\mathcal D$ in the following lemma will help us to find the image of $\Phi_{\lambda,\mu}$ in practice.
Let $M$ be the matrix $$M=\left(\begin{array}{ccc} \ \ {m_{11}}&\ {m_{12}} &-{m_{13}}\\ \ \ {m_{21}}&\ {m_{22}}&-{m_{23}}\\ -{m_{31}}&-{m_{32}}&\ \ {m_{33}} \end{array}\right)$$ such that $m_{ij}\ge 0$ and the sum of every row or column is $1.$ Then $M$ is an element of $\mathcal D$ if and only if the following triangle inequalities hold: $$\sqrt{m_{1k}m_{2k}}\le \sum_{i\ne k} \sqrt{m_{1i}m_{2i}} \qquad (k=1,2,3).$$ We first show the “only if” part: if $M\in \mathcal D$, then there exist $\theta_{ij}$ such that the matrix $(\sqrt{m_{ij}}e^{i\theta_{ij}})$ belongs to $\SU,$ and hence we have $$\sqrt{m_{11}m_{21}}e^{i(\theta_{11}-\theta_{21})} + \sqrt{m_{12}m_{22}}e^{i(\theta_{12}-\theta_{22})} - \sqrt{m_{13}m_{23}}e^{i(\theta_{13}-\theta_{23})}=0.$$ This implies that the three numbers $\sqrt{m_{1i}m_{2i}}$ $(i=1,2,3)$ must satisfy the triangle inequalities. We next show the “if” part: suppose that the three numbers $\sqrt{m_{1i}m_{2i}}$ $(i=1,2,3)$ satisfy the triangle inequalities. Then there exist angles $\theta_{ij}$ satisfying $$\sqrt{m_{11}m_{21}}e^{i(\theta_{11}-\theta_{21})} + \sqrt{m_{12}m_{22}}e^{i(\theta_{12}-\theta_{22})} - \sqrt{m_{13}m_{23}}e^{i(\theta_{13}-\theta_{23})}=0.$$ Put $p_{ij}=\sqrt{m_{ij}}e^{i\theta_{ij}}$ $(i=1,2, j=1,2,3)$; then we get the first two rows of the matrix $P.$ Let $v=(p_{31},p_{32},p_{33})$ be the vector which is orthogonal to these two rows with respect to the indefinite inner product $\langle , \rangle_{2,1}$, and also satisfies $\langle v, v\rangle_{2,1}=-1.$ Then it is not hard to check that $P=(p_{ij})\in \SU$ and $M=\hat P.$ Examples ======== [**Example 1.**]{} The first example is the manifold $\Sigma(2,3,11).$ Its fundamental group has the following presentation: $$\fg(\Sigma(2,3,11))= \langle x,y,z,h| \ h\ \text{central},\ x^2h^{-1}=y^3h=z^{11}h^2=xyz=1 \rangle.$$ In this example the irreducible representation space consists of isolated points.
For each $h=\diag(\epsilon,\epsilon,\epsilon)$ with $\epsilon^3=1,$ we look for $\lambda$ and $\mu$ satisfying $$\lambda_i^2=\epsilon, \ \ \mu_i^3=\epsilon^{-1} \quad (i=1,2,3), \qquad \lambda_1\lambda_2\lambda_3=1, \qquad \mu_1\mu_2\mu_3=1$$ such that the image of $\Phi_{\lambda,\mu}$ contains a point whose first coordinate is of the form $e^{i\theta_1}+ e^{i\theta_2}+ e^{i\theta_3}$ with $\theta_1+\theta_2+\theta_3=2k\pi$ and $ e^{11i\theta_i}=\epsilon^2$ for all $i.$ A small computer search tells us that there are five irreducible representations into $\SU$, all corresponding to the case $\rho(h)=I.$ The parameters of these representations are given below. Here we use $\sim$ to denote the conjugacy relation. 1\) $\rho(x) \sim \diag(1,-1,-1), \quad \rho(y) \sim \diag(1,e^{4\pi i/3},e^{2\pi i/3}),$ $t_{\rho(xy)}=t_{\rho(x^{-1}y)}= e^{10\pi i/11}+e^{16\pi i/11}+e^{18\pi i/11},\ \Im(t_{\rho([x,y])})=0.$ 2\) $\rho(x) \sim \diag(1,-1,-1), \quad \rho(y) \sim \diag(e^{2\pi i/3},1,e^{4\pi i/3}),$ $t_{\rho(xy)}=t_{\rho(x^{-1}y)}= e^{4\pi i/11}+e^{6\pi i/11}+e^{12\pi i/11},\ \Im(t_{\rho([x,y])})=0.$ 3\) $\rho(x) \sim \diag(-1,-1,1), \quad \rho(y) \sim \diag(1,e^{4\pi i/3},e^{2\pi i/3}),$ $t_{\rho(xy)}=t_{\rho(x^{-1}y)}= e^{4\pi i/11}+e^{8\pi i/11}+e^{10\pi i/11},\ \Im(t_{\rho([x,y])})=0.$ 4\) $\rho(x) \sim \diag(-1,-1,1), \quad \rho(y) \sim \diag(e^{2\pi i/3},1,e^{4\pi i/3}),$ $t_{\rho(xy)}=t_{\rho(x^{-1}y)}= e^{12\pi i/11}+e^{14\pi i/11}+e^{18\pi i/11},\ \Im(t_{\rho([x,y])})=0.$ 5\) $\rho(x) \sim \diag(-1,-1,1), \quad \rho(y) \sim \diag(e^{2\pi i/3},e^{4\pi i/3},1),$ $t_{\rho(xy)}=t_{\rho(x^{-1}y)}= 1+2\cos(2\pi/11),\ \Im(t_{\rho([x,y])})=0.$ It is no surprise that $t_{\rho(xy)}=t_{\rho(x^{-1}y)}$ and $\Im(t_{\rho([x,y])})=0$ in all the cases since $\rho(x)^2=I.$ We can easily check that these representations give 5 distinct irreducible representations when considered as elements of $\RR^*_{\PU}(\Sigma(2,3,11)).$ The Toledo invariant for representations of the fundamental
group of an oriented surface into ${{\rm PU}(p,1)}$ is defined in [@toledo]. In [@krebs; @krebs2], M. Krebs defines the Toledo invariant for orbifold fundamental groups and uses it to obtain a lower bound for the number of connected components of the $\PU$ representation space. In particular, it is shown in [@krebs] that $\RR^*_{\PU}(\Sigma(2,3,11))$ has at least 5 connected components. So in this case the bound obtained by using the Toledo invariant is sharp. [**Example 2.**]{} Our next example is the manifold $\Sigma(2,3,13)$ which has the fundamental group: $$\fg(\Sigma(2,3,13))= \langle x,y,z,h| \ h\ \text{central},\ x^2h=y^3h^{-1}=z^{13}h^{-2}=xyz=1\rangle.$$ A computer search similar to the one in the previous example shows that the irreducible representation space $\RR^*_{\SU}(\Sigma(2,3,13))$ consists of $8$ isolated points. In the following, we list the parameters of these representations. Note that $\rho(h)=I$ in all the cases. 1\) $\rho(x) \sim \diag(-1,1,-1), \quad \rho(y) \sim \diag(e^{2\pi i/3},e^{4\pi i/3},1),$ $t_{\rho(xy)}=t_{\rho(x^{-1}y)}= e^{4\pi i/13}+e^{10\pi i/13}+e^{12\pi i/13},\ \Im(t_{\rho([x,y])})=0.$ 2\) $\rho(x) \sim \diag(-1,1,-1), \quad \rho(y) \sim \diag(e^{2\pi i/3},e^{4\pi i/3},1),$ $t_{\rho(xy)}=t_{\rho(x^{-1}y)}= e^{14\pi i/13}+e^{16\pi i/13}+e^{22\pi i/13},\ \Im(t_{\rho([x,y])})=0.$ 3\) $\rho(x) \sim \diag(-1,1,-1), \quad \rho(y) \sim \diag(1,e^{4\pi i/3},e^{2\pi i/3}),$ $t_{\rho(xy)}=t_{\rho(x^{-1}y)}= e^{6\pi i/13}+e^{22\pi i/13}+e^{24\pi i/13},\ \Im(t_{\rho([x,y])})=0.$ 4\) $\rho(x) \sim \diag(-1,1,-1), \quad \rho(y) \sim \diag(1,e^{2\pi i/3},e^{4\pi i/3}),$ $t_{\rho(xy)}=t_{\rho(x^{-1}y)}= e^{2\pi i/13}+e^{4\pi i/13}+e^{20\pi i/13},\ \Im(t_{\rho([x,y])})=0.$ 5\) $\rho(x) \sim \diag(-1,-1,1), \quad \rho(y) \sim \diag(1,e^{4\pi i/3},e^{2\pi i/3}),$ $t_{\rho(xy)}=t_{\rho(x^{-1}y)}= e^{6\pi i/13}+e^{8\pi i/13}+e^{12\pi i/13},\ \Im(t_{\rho([x,y])})=0.$ 6\) $\rho(x) \sim \diag(-1,-1,1), \quad \rho(y) \sim \diag(e^{2\pi i/3},e^{4\pi
i/3},1),$ $t_{\rho(xy)}=t_{\rho(x^{-1}y)}= 1+2\cos(2\pi/13),\ \Im(t_{\rho([x,y])})=0.$ 7\) $\rho(x) \sim \diag(-1,-1,1), \quad \rho(y) \sim \diag(e^{2\pi i/3},e^{4\pi i/3},1),$ $t_{\rho(xy)}=t_{\rho(x^{-1}y)}= 1+2\cos(4\pi/13),\ \Im(t_{\rho([x,y])})=0.$ 8\) $\rho(x) \sim \diag(-1,-1,1), \quad \rho(y) \sim \diag(1,e^{2\pi i/3},e^{4\pi i/3}),$ $t_{\rho(xy)}=t_{\rho(x^{-1}y)}= e^{14\pi i/13}+e^{18\pi i/13}+e^{20\pi i/13},\ \Im(t_{\rho([x,y])})=0.$ These representations give us $8$ distinct points of $\RR^*_{\PU}(\Sigma(2,3,13)).$ According to the Appendix of [@krebs2], in this case there are $7$ distinct values of the orbifold Toledo invariant. So the orbifold Toledo invariant does not distinguish the connected components of $\RR^*_{\PU}(\Sigma(2,3,13)).$ [**Acknowledgment.**]{} The author would like to thank Professor Takashi Tsuboi for advice and hospitality during the time in Tokyo. The author is grateful to his former thesis advisor, Daniel Ruberman, for informing him of the work of M. Krebs and for continuous support. We express our sincere thanks to the anonymous referee for pointing out several inaccuracies in the earlier version of this paper. Aslaksen, H., Drensky, V. and Sadikova, L., Defining relations of invariants of two $3\times 3$ matrices. C. R. Acad. Bulg. Sci. 58, No.6, 617-622 (2005). Boden, H. U., Unitary representations of Brieskorn spheres. Duke Math. J. 75, No.1, 193-220 (1994). Boden, H. U. and Herald, C. M., The $\text{SU}(3)$ Casson invariant for integral homology $3$-spheres. J. Differ. Geom. 50, No.1, 147-206 (1998). Chen, S.S. and Greenberg, L., Hyperbolic spaces. [*Contributions to analysis (a collection of papers dedicated to Lipman Bers)*]{}, 49-87. Academic Press, New York, 1974. Culler, M. and Shalen, P. B., Varieties of group representations and splittings of 3-manifolds. Ann. Math. (2) 117, 109-146 (1983). Fintushel, R. and Stern, R. J., Instanton homology of Seifert fibred homology three spheres. Proc. Lond. Math.
Soc., III. Ser. 61, No.1, 109-137 (1990). Furuta, M. and Steer, B., Seifert fibred homology 3-spheres and the Yang-Mills equations on Riemann surfaces with marked points. Adv. Math. 96, No.1, 38-102 (1992). Goldman, W. M., Complex hyperbolic geometry. Oxford Mathematical Monographs (1999). Kirk, P. A. and Klassen, E. P., Representation spaces of Seifert fibered homology spheres. Topology 30, No.1, 77-95 (1991). Krebs, M., Higgs bundles, orbifold Toledo invariants, and Seifert fibered homology 3-spheres, preprint, math.GT/0503509. Krebs, M., Toledo invariants on 2-orbifolds. PhD Thesis, Johns Hopkins University 2005, www.calstatela.edu/faculty/mkrebs/research/thesis\_3-8.pdf. Nakamoto, K., The structure of the invariant ring of two matrices of degree 3. J. Pure Appl. Algebra 166, No.1-2, 125-148 (2002). Teranishi, Y., The ring of invariants of matrices. Nagoya Math. J. 104, 149-161 (1986). Toledo, D., Harmonic maps from surfaces to certain Kaehler manifolds. Math. Scand. 45, No.1, 13-26 (1979). [^1]: The author was supported by a COE Postdoctoral Fellowship of the University of Tokyo, the JSPS Kakenhi Grant and the National Basic Research Program of Vietnam
[**Gheorghe IVAN**]{} [We propose a new method for determining the elementary paths and elementary circuits in a directed graph. Also, the Hamiltonian paths and Hamiltonian circuits are enumerated.]{} [[^1]]{} Introduction ============ Idempotent mathematics is based on replacing the usual arithmetic operations with a new set of basic operations, that is, on replacing numerical fields by idempotent semirings. Exotic semirings such as the max-plus algebra ${\bf R}_{max}$ or the concatenation semiring ${\cal P}(\Sigma^{\ast})$ have been introduced in connection with various fields: graph theory, Markov decision processes, language theory, discrete event systems theory; see [@kusa], [@fink], [@mohr], [@bagu]. In this paper we will have to consider various semirings, and will universally use the notation $\oplus, \otimes, \varepsilon, e$ with a context-dependent meaning (e.g. $\oplus:= max $ in ${\bf R}_{max}$ but $\oplus:= \cup$ in ${\cal P}(\Sigma^{\ast})$, $\varepsilon:= - \infty $ in ${\bf R}_{max}$ but $\varepsilon:= \emptyset$ in ${\cal P}(\Sigma^{\ast})$). In many fields of application, graphs are widely used for modelling practical problems. This paper will focus on two algebraic path problems, namely: the [*elementary path problem*]{} ([**EPP**]{}) and the [*elementary circuit problem*]{} ([**ECP**]{}). For a directed graph $G=(V,E)$ ($|V|=n$) and $u,v\in V$, ([**EPP**]{}) and ([**ECP**]{}) are formulated as follows:\ $\bullet~~({\bf EPP})$ [*enumerate the elementary paths from $u$ to $v$ of length $k$ ($1\leq k\leq n-1$)*]{};\ $\bullet~~({\bf ECP})$ [*enumerate the elementary circuits starting in $u$ of length $k$ ($1\leq k\leq n$)*]{}. To solve the above problems we give a method based on an $n\times n$ matrix with entries in the semiring of distinguished languages. These algebraic path problems have applications in many domains: combinatorial optimization, traffic control, Internet routing, etc. The paper is organized as follows.
In Section 2 we construct a special idempotent semiring denoted by ${\cal P}^{\ast}(\Sigma_{dw}^{\ast})$ and named the semiring of distinguished languages. The semiring of matrices with entries in ${\cal P}^{\ast}(\Sigma_{dw}^{\ast})$ is presented in Section 3. This algebraic tool is used to establish a one-to-one correspondence between the distinguished words and the elementary paths in a directed graph. In Section 4 we give an algorithm to determine all the elementary paths and elementary circuits in a directed graph. This new practical algorithm is based on the latin composition of distinguished languages. Another method for determining the elementary paths is the well-known latin multiplication technique of Kaufmann (see [@kauf]). Semiring of distinguished formal languages ========================================== We start this section by recalling some necessary background on semirings (see [@mohr], [@bagu], [@litv] and references therein for more details). [**Semirings**]{}. Let $S$ be a nonempty set endowed with two binary operations, [*addition*]{} (denoted by $\oplus$) and [*multiplication*]{} (denoted by $\otimes$). The algebraic structure $(S,\oplus, \otimes, \varepsilon, e )$ is a *semiring*, if it fulfills the following conditions: $(1)~~(S,\oplus, \varepsilon)~$ is a commutative monoid with $\varepsilon$ as the neutral element for $\oplus;$ $(2)~(S, \otimes, e )~$ is a monoid with $e$ as the identity element for $\otimes;$ $(3)~~\otimes $ distributes over $\oplus;$ $(4)~~\varepsilon~$ is an absorbing element for $\otimes$, that is $~a\otimes \varepsilon=\varepsilon \otimes a= \varepsilon,~\forall a\in S.$ A semiring where addition is idempotent (that is, $~a\oplus a=a,~\forall a \in S$) is called an [*idempotent semiring*]{}. If $\otimes$ is commutative, we say that $S$ is a [*commutative semiring*]{}. In the following we introduce the [*monoid of distinguished words over an alphabet*]{}.
An [*alphabet*]{} is a finite set $\Sigma$ of symbols. A [*word*]{} over $\Sigma$ is a finite sequence of symbols of the alphabet $\Sigma$. The number of symbols of a word $x$ is called the [*length of $x$*]{} and is denoted by $|x|$. The [*empty word*]{}, denoted by $\lambda$, is the word of length zero (that is, it has no symbols). A [*simple word of length $k$*]{} with $1\leq k\leq n$ over the alphabet $\Sigma$ ($|\Sigma|=n$) is a word of the form $a = a_{1} a_{2}...a_{k-1}a_{k}$ such that $a_{i}\neq a_{j}$ for $i\neq j $ and $i,j=\overline{1,k}$. A [*simple cyclic word of length $k+1$*]{} with $1\leq k\leq n$ over $\Sigma$ is a word of the form $b = a_{1} a_{2}\ldots a_{k-1}a_{k}a_{1}$ such that $a = a_{1} a_{2}\ldots a_{k-1}a_{k}$ is a simple word. The simple cyclic words of length $2$ over $\Sigma$ are the words of the form $u = a a$ for $a \in \Sigma$. By a [*distinguished word*]{} over $\Sigma$ we mean a word $w$ over $\Sigma$ which is a simple word of length $k$ or a simple cyclic word of length $k+1$ with $1\leq k\leq n$. Denote the [*set of distinguished words over $\Sigma$ completed with the word*]{} $\lambda$ by $\Sigma_{dw}^{\ast}$. For example, if $\Sigma = \{a,b\}$, then $~\Sigma_{dw}^{\ast}=\{\lambda, a, b, aa, bb, ab, ba, aba, bab\}.$ It is easy to prove that, [*if $~\Sigma$ is an alphabet with $|\Sigma|=n$ symbols, then $\Sigma_{dw}^{\ast}$ is a finite set with $\sigma_{n}$ elements, where*]{}\ $$\sigma_{n} = 1 + 2 n!
+ 2 \sum\limits_{k=1}^{n-1} \frac{n!}{(n-k)!}.\label{(2.1)}$$ On the set $\Sigma_{dw}^{\ast}$ we introduce the binary operation $\circ_{\ell}$ given as follows:\ $(1)~$ for every distinguished word $ x \in \Sigma_{dw}^{\ast}$ and every simple cyclic word $c\in \Sigma_{dw}^{\ast}$, we have $$\lambda \circ_{\ell} x = x\circ_{\ell}\lambda = \lambda~~~\hbox{and}~~~ c \circ_{\ell} x = x\circ_{\ell} c = \lambda;\label{(2.2)}$$ $(2)~~$ Let $x = a_{1} a_{2}\ldots a_{k-1} a_{k}$ and $y = b_{1} b_{2}\ldots b_{r-1} b_{r}$ be two simple words of lengths $k$ and $r$ with $1\leq k, r\leq n$. The word $x \circ_{\ell} y \in \Sigma_{dw}^{\ast} $ is defined by:\ $$x \circ_{\ell} y = \left \{ \begin{array}{ll} a_{1}\ldots a_{k}b_{2}\ldots b_{r-1} b_{r},& \hbox{if}~~a_{k}= b_{1}, \{a_{1},\ldots, a_{k}\}\cap\{b_{2},\ldots, b_{r}\}= \emptyset\\ a_{1}\ldots a_{k}b_{2}\ldots b_{r-1} a_{1}, & \hbox{if}~~ a_{k}= b_{1}, b_{r}=a_{1}, \{a_{1},\ldots, a_{k}\}\cap\{b_{2},\ldots, b_{r-1}\}= \emptyset\\ \lambda,& \hbox{otherwise}.\label{(2.3)} \end{array}\right.$$ The operation $\circ_{\ell}$ is called the [*latin composition of distinguished words*]{}. [The set $\Sigma_{dw}^{\ast}$ of distinguished words over $\Sigma=\{1, 2, 3, 4\}$ has $\sigma_{4}=129$ elements. If $ x = 123, y=31, z=1, c_{1}= 22$ and $c_{2}= 343$, then\ $c_{1}\circ_{l} x = 22 \circ_{l} 123 =\lambda,~~~x \circ_{l} c_{2} = 123 \circ_{l} 343 =\lambda,~~~ x \circ_{l} y = 123 \circ_{l} 31 = 1231,$\ $y \circ_{l} x = 31 \circ_{l} 123 =3123,~~~ z \circ_{l} x = 1 \circ_{l} 123 = 123.~$ Note that $ x \circ_{l} y \neq y \circ_{l} x$]{}.$\Box$ For all $a, b, c\in \Sigma_{dw}^{\ast}$ we have $~( a \circ_{\ell} b ) \circ_{\ell} c = a \circ_{\ell} ( b \circ_{\ell} c )$. Then $( \Sigma_{dw}^{\ast}, \circ_{\ell}, \lambda )$ is a monoid, called the [*monoid of distinguished words generated by alphabet $\Sigma$*]{}.
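As a concrete illustration, the case analysis in $(2.2)$--$(2.3)$ can be sketched in Python. Words are represented as strings of symbols and $\lambda$ as the empty string; the helper names (`latin_word`, `is_simple`, `is_simple_cyclic`) are ours, not the paper's.

```python
LAMBDA = ""  # the empty word lambda, absorbing for the latin composition

def is_simple(w):
    # a simple word has pairwise distinct symbols
    return len(set(w)) == len(w)

def is_simple_cyclic(w):
    # a simple cyclic word a1...ak a1 (k >= 1) returns to its first symbol
    return len(w) >= 2 and w[0] == w[-1] and is_simple(w[:-1])

def latin_word(x, y):
    """Latin composition of two distinguished words, rules (2.2)-(2.3)."""
    # rule (2.2): lambda and simple cyclic words always compose to lambda
    if LAMBDA in (x, y) or is_simple_cyclic(x) or is_simple_cyclic(y):
        return LAMBDA
    if x[-1] != y[0]:            # the last symbol of x must be the first of y
        return LAMBDA
    # first case of (2.3): the splice a1...ak b2...br is again a simple word
    if not set(x) & set(y[1:]):
        return x + y[1:]
    # second case of (2.3): b_r = a_1 closes a simple cyclic word
    if y[-1] == x[0] and not set(x) & set(y[1:-1]):
        return x + y[1:]
    return LAMBDA
```

On the worked example above, `latin_word("123", "31")` returns `"1231"` while `latin_word("31", "123")` returns `"3123"`, reproducing the non-commutativity noted in the text.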
A [*distinguished language*]{} $L$ over the alphabet $\Sigma$ is a subset of distinguished words over $\Sigma$ completed with the empty word $\lambda$, that is, $L\subseteq \Sigma_{dw}^{\ast}$ and $\lambda \in L$. [**Convention**]{}. $(i)~$ If $L=\{\lambda\}$ or $L=\{a\}$ where $a$ is a distinguished word, then we shall use the notations $\lambda$ and $a$, respectively. $(ii)~$ If $L\neq \{\lambda \}$, then we enumerate only its distinguished words of length $k\geq 1. \hfill\Box$ The monoid $\Sigma_{dw}^{\ast}$ contains two special distinguished languages, namely: the language $\{\lambda\}$ (it contains only the empty word $\lambda$) and the language $\Sigma$ (it contains all symbols of the alphabet and the empty word $\lambda$). The [*set of distinguished languages over an alphabet $\Sigma$*]{} is ${\cal P}^{\ast}(\Sigma_{dw}^{\ast}).$ Since distinguished languages are sets, all the set operations can be applied to distinguished languages. Then the union and intersection of two distinguished languages are distinguished languages. The [*latin composition*]{} of the distinguished languages $L_{1}$ and $L_{2}$ is the distinguished language $L_{1} \circ_{\ell} L_{2}$ defined by $$L_{1} \circ_{\ell} L_{2} =\{ x\circ_{\ell} y~|~ x \in L_{1} ~\hbox{and}~ y\in L_{2} \},\label{(2.4)}$$ that is, $L_{1} \circ_{\ell} L_{2}$ is the set of distinguished words obtained by the latin composition of words of $L_{1}$ with those of $L_{2}$. For example, if $L_{1},L_{2}\subseteq \Sigma_{dw}^{\ast}$ are distinguished languages over $\Sigma=\{1, 2, 3, 4\}$, where $L_{1}= \{2, 412\}$ and $L_{2}= \{11, 23\}$, then\ $L_{1}\circ_{\ell} L_{2}= \{2, 412\}\circ_{\ell}\{11, 23\} = \{ 2\circ_{\ell} 11, 412 \circ_{\ell} 11, 2 \circ_{\ell} 23, 412 \circ_{\ell} 23\}= \{ 23, 4123\}$.
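Definition $(2.4)$ lifts the word-level composition to languages pointwise. A minimal Python sketch follows (the word-level helper is repeated so the snippet runs standalone; the names and the string encoding are our choices, not the paper's); in line with Convention $(ii)$, only words of length at least one are listed, so $\lambda$-results are dropped.

```python
def latin_word(x, y):
    # latin composition of distinguished words; "" stands for lambda
    simple = lambda w: len(set(w)) == len(w)
    cyclic = lambda w: len(w) >= 2 and w[0] == w[-1] and simple(w[:-1])
    if "" in (x, y) or cyclic(x) or cyclic(y) or x[-1] != y[0]:
        return ""
    if not set(x) & set(y[1:]):                       # simple-word case
        return x + y[1:]
    if y[-1] == x[0] and not set(x) & set(y[1:-1]):   # simple-cyclic case
        return x + y[1:]
    return ""

def latin_language(L1, L2):
    """L1 o_l L2 as in (2.4); per Convention (ii), lambda is left implicit
    and only words of length >= 1 are kept."""
    return {w for a in L1 for b in L2 if (w := latin_word(a, b))}
```

On the example above, `latin_language({"2", "412"}, {"11", "23"})` returns `{"23", "4123"}`.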
$(i)~$ Let $ L, L_{1}, L_{2}, L_{3}\in {\cal P}^{\ast}(\Sigma_{dw}^{\ast}).$ Then: $$L \circ_{l}\lambda = \lambda \circ_{l} L=\lambda,~~~ L \circ_{l}\Sigma = \Sigma \circ_{l} L= L,~~~ ( L_{1} \circ_{l} L_{2}) \circ_{l} L_{3}= L_{1} \circ_{l} ( L_{2} \circ_{l} L_{3}).\label{(2.5)}$$ $(ii)~$ The set ${\cal P}^{\ast}(\Sigma_{dw}^{\ast}) $ endowed with the multiplication $\circ_{l}$ has the structure of a monoid. $(i)~$ Using the definitions, one easily verifies that $(2.5)$ holds. $(ii)$ From $(i)$ it follows that $({\cal P}^{\ast}(\Sigma_{dw}^{\ast}), \circ_{l}, \Sigma ) $ is a monoid. $\Box$\ On the set ${\cal P}^{\ast}(\Sigma_{dw}^{\ast}) $ of distinguished languages we define the binary operations: $$L_{1} \oplus L_{2}:= L_{1} \cup L_{2}~~~\hbox{and}~~~L_{1} \otimes L_{2}:= L_{1} \circ_{\ell} L_{2},~~~\forall L_{1}, L_{2} \in {\cal P}^{\ast}(\Sigma_{dw}^{\ast}). \label{(2.6)}$$ $({\cal P}^{\ast}(\Sigma_{dw}^{\ast}), \cup, \circ_{\ell}, \lambda, \Sigma )$ is an idempotent semiring. [*Proof.*]{} To verify the conditions from the definition of an idempotent semiring, we apply the properties of the union of sets and Proposition 2.1. $\Box$ We call $({\cal P}^{\ast}(\Sigma_{dw}^{\ast}),\cup, \circ_{\ell}, \lambda, \Sigma ) $ the [*semiring of distinguished languages over $\Sigma$*]{}. Matrices over semirings and directed graphs =========================================== Let $(S, \oplus, \otimes, \varepsilon, e )$ be an (idempotent) semiring. For each positive integer $n$, let $ M_{n}(S)$ denote the set of $n\times n$ matrices with entries in $S$. The operations $\oplus$ and $\otimes$ on $S$ induce corresponding operations on $ M_{n}(S)$ in the obvious way.
Indeed, if $A=(A_{ij}), B=(B_{ij}) \in M_{n}(S)$ then we have:\ $$A\oplus B= ((A\oplus B)_{ij}) ~~~\hbox{and}~~~ A\otimes B= ((A\otimes B)_{ij}),~~~i,j =\overline{1,n}~~~\hbox{where}$$\ $$(A\oplus B)_{ij}:= A_{ij} \oplus B_{ij}~~~\hbox{and}~~~ (A\otimes B)_{ij}:= \bigoplus\limits_{k=1}^{n} A_{ik}\otimes B_{kj}.\label{(3.1)}$$ The set $M_{n}(S)$ contains two special matrices with entries in $S$, namely the zero matrix $O_{\oplus n}$, which has all its entries equal to $\varepsilon$, and the identity matrix $I_{\otimes n}$, which has the diagonal entries equal to $e$ and the other entries equal to $\varepsilon$. It is easy to check that the following proposition holds. $( M_{n}(S), \oplus, \otimes, O_{\oplus n}, I_{\otimes n}) $ is an idempotent semiring, where the operations $\oplus$ and $\otimes$ are given in $(3.1)$.$\Box$ We call $( M_{n}(S), \oplus, \otimes, O_{\oplus n}, I_{\otimes n}) $ the [*semiring of $n\times n$ matrices with entries in $S$*]{}. In particular, if $S:= ({\cal P}^{\ast}(\Sigma_{dw}^{\ast}),\cup, \circ_{\ell}, \lambda, \Sigma )$, then $( M_{n}({\cal P}^{\ast}(\Sigma_{dw}^{\ast})), \cup, \circ_{\ell}, O_{\oplus n}, I_{\otimes n}) $ is called the [*semiring of $n\times n$ matrices over $ {\cal P}^{\ast}(\Sigma_{dw}^{\ast})$*]{}. The operation $\otimes:=\circ_{\ell} $ is called the [*multiplication of matrices based on latin composition of words*]{}.
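Over $S={\cal P}^{\ast}(\Sigma_{dw}^{\ast})$, formula $(3.1)$ instantiates $\oplus$ as set union and $\otimes$ as latin composition. A standalone sketch (the word-level helper is repeated; matrix entries are Python sets of words, with the empty set playing the role of $\varepsilon$; the names are ours, not the paper's):

```python
def latin_word(x, y):
    # latin composition of distinguished words; "" stands for lambda
    simple = lambda w: len(set(w)) == len(w)
    cyclic = lambda w: len(w) >= 2 and w[0] == w[-1] and simple(w[:-1])
    if "" in (x, y) or cyclic(x) or cyclic(y) or x[-1] != y[0]:
        return ""
    if (not set(x) & set(y[1:])                         # simple-word case
            or (y[-1] == x[0] and not set(x) & set(y[1:-1]))):  # cyclic case
        return x + y[1:]
    return ""

def mat_latin(A, B):
    """Entrywise formula (3.1) over the language semiring:
    (A o_l B)_ij is the union over k of A_ik o_l B_kj."""
    n = len(A)
    C = [[set() for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] |= {w for a in A[i][k] for b in B[k][j]
                            if (w := latin_word(a, b))}
    return C

# the 2x2 matrices over Sigma = {a, b, c} from the worked example
A = [[{"ab"}, set()], [{"bca"}, {"bc"}]]
B = [[{"b"}, {"ab"}], [{"c"}, set()]]
```

On these matrices, `mat_latin(A, B)` returns `[[{"ab"}, set()], [{"bc"}, {"bcab"}]]`, matching the hand computation worked out in the example below.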
[Consider the semiring $( M_{2}({\cal P}^{\ast}(\Sigma_{dw}^{\ast})), \cup, \circ_{\ell}, O_{\oplus 2}, I_{\otimes 2}) $ with $\Sigma = \{a, b, c\}.$ The product $A\circ_{\ell} B$ of the matrices $A, B $ with entries in the semiring ${\cal P}^{\ast}(\Sigma_{dw}^{\ast})$ is]{} $$A\circ_{\ell} B= \left(\begin{array}{cc} ab & \varepsilon\\ bca & bc\\ \end{array}\right)\circ_{\ell}\left(\begin{array}{cc} b &ab\\ c & \varepsilon\\ \end{array}\right) =\left(\begin{array}{cc} (ab \circ_{\ell} b)\oplus (\varepsilon \circ_{\ell} c) & (ab \circ_{\ell} ab)\oplus (\varepsilon \circ_{\ell} \varepsilon) \\ (bca \circ_{\ell} b)\oplus (bc \circ_{\ell} c) & (bca \circ_{\ell} ab)\oplus (bc \circ_{\ell} \varepsilon) \\ \end{array}\right)=$$ $$=\left(\begin{array}{cc} ab \oplus \varepsilon & \varepsilon \oplus \varepsilon \\ \varepsilon\oplus bc & bcab \oplus \varepsilon \\ \end{array}\right )= \left(\begin{array}{cc} ab \cup\varepsilon & \varepsilon \cup \varepsilon \\ \varepsilon \cup bc & bcab \cup \varepsilon \\ \end{array}\right )= \left(\begin{array}{cc} ab & \varepsilon \\ bc & bcab \\ \end{array}\right) .~~~\hfill\Box$$ A [*directed graph*]{} is a pair $ G=(V,E)$ where $V$ is a finite [*set of vertices*]{} of the graph $G$ and $E\subseteq V\times V$ is a [*set of arcs*]{} of $G$. A typical arc $(u,v)\in E$ is thought of as an arrow directed from $u$ to $v$. Let $G=(V,E)$ be a directed graph with $|V|=n$. A [*path from $u$ to $v$ of length $k$*]{} ($k\geq 1$) in $G$ is a sequence of vertices $p =( v_{1}, v_{2},\ldots, v_{k}, v_{k+1})$ with $ v_{1}=u, v_{k+1}=v$ such that $(v_{i}, v_{i+1})\in E$ for all $i=\overline{1,k}$; $v_{1}$ is called the [*starting vertex*]{} and $v_{k+1}$ the [*end-vertex*]{} of $p$, respectively. The length of the path $p$ will be denoted by $\ell(p)$. A path $p =( v_{1}, v_{2},\ldots, v_{k}, v_{k+1})$ is called a [*circuit*]{} if $ v_{k+1}= v_{1}$ and $k\geq 1$. In particular, for $k=1$ we obtain the circuit $(v_{1}, v_{1})$ of length $1$.
We denote by $P(v_{i}, v_{j}, k)$ ($k\geq 1$) the set of all paths of length $k$ from the starting vertex $v_{i}\in V$ to the end-vertex $v_{j}\in V$. In particular, when $v_{i}=v_{j}$, $C(v_{i}, k)=P(v_{i}, v_{i}, k)$ ($k\geq 1$) is the set of all circuits of length $k$ starting at vertex $v_{i}$. A path $p =( v_{1}, v_{2},\ldots, v_{k}, v_{k+1})$ is called an [*elementary path*]{} from $v_{1}$ to $v_{k+1}$, if $k\geq 1$ and $ v_{i}\neq v_{j}$ for $i\neq j$ and $i,j=\overline{1,k+1}.$ A circuit $c =( u_{1}, u_{2},\ldots, u_{k}, u_{1})$ with $\ell(c)=k$ is called an [*elementary circuit*]{}, if $( u_{1}, u_{2},\ldots, u_{k})$ is an elementary path. We denote by $P_{elem}(v_{i}, v_{j}, k)$ ($k\geq 1$) the set of all elementary paths of length $k$ from $v_{i}\in V$ to $v_{j}\in V$. In particular, when $v_{i}=v_{j}$, then $C_{elem}(v_{i}, k)=P_{elem}(v_{i}, v_{i}, k)$ ($k\geq 1$) is the set of all elementary circuits of length $k$ starting at $v_{i}$. A [*Hamiltonian path*]{} (resp., [*circuit*]{}) is a path (resp., circuit) that contains each vertex exactly once. Hence, a Hamiltonian path (resp., circuit) is an elementary path $p_{H}$ with $\ell(p_{H})=n-1$ (resp., an elementary circuit $c_{H}$ with $\ell(c_{H})=n$). A [*weighted directed graph*]{} is a graph $G=(V,E)$ with a mapping $w: E\rightarrow S$ that assigns to each arc $(u,v)\in E$ a weight $w(u,v)$ from the semiring $(S, \oplus, \otimes, \varepsilon, e)$. A weighted directed graph with the cost function $w$ is denoted by $G=(V,E,w)$. The [*weight*]{} or [*cost*]{} of a path $p =( v_{1},\ldots, v_{k}, v_{k+1})$ is the element $w(p)\in S$ where $$w(p)=\bigotimes\limits_{i=1}^{k} w(v_{i}, v_{i+1}).$$ To each given weighted directed graph $G=(V, E, w)$ with $V=\{v_{1}, v_{2}, \ldots, v_{n}\}$ we can associate an $n\times n $ matrix $M_{w}(G)$ with entries in a semiring $(S, \oplus, \otimes , \varepsilon , e)$ as follows.
For this, we define the matrix $M_{w}(G)= (M_{ij})\in M(n, S)$ where $$M_{ij}=\left \{\begin{array}{lll} w(v_{i}, v_{j}) & \hbox{if} & (v_{i}, v_{j})\in E\\ \varepsilon & \hbox{if} & (v_{i}, v_{j})\notin E\\ \end{array}\right. \label{(3.2)}$$ [*To each directed graph $G=(V, E)$ with $V=\{v_{1}, v_{2}, \ldots, v_{n}\}$ we can associate two weight functions*]{} in the following way. $\bullet ~~$ Consider the [*numbering semiring*]{} $({\bf N}, +, \cdot, 0, 1)$ of natural numbers, endowed with the usual addition and multiplication. Consider the weight function $w_{a}: E\to {\bf N}$ defined by $ w_{a}(v_{i}, v_{j})=1 $ for all $(v_{i}, v_{j})\in E.$ The matrix $M_{w_{a}}(G)\in M(n, {\bf N})$, denoted by $A$, is called the [*adjacency matrix*]{} of the graph $G$. $\bullet ~~$ Consider the idempotent semiring $({\cal P}^{\ast}(\Sigma_{dw}^{\ast}), \cup, \circ_{\ell}, \lambda, \Sigma )$ of distinguished languages over the alphabet $\Sigma =V$. Define the weight function $w_{\ell}: E\to {\cal P}^{\ast}(\Sigma_{dw}^{\ast})$ given by $ w_{\ell}(v_{i}, v_{j})= v_{i}v_{j} $ for all $(v_{i}, v_{j})\in E$ (that is, $w_{\ell}(v_{i}, v_{j})$ is the distinguished language which contains only the distinguished word $v_{i}v_{j}$ of length $1$). The matrix $M_{w_{\ell}}(G)\in M(n, {\cal P}^{\ast}(\Sigma_{dw}^{\ast}))$ is denoted by $L$ and is called the [*latin matrix*]{} of $G$. More precisely, the adjacency matrix $A=(A_{ij})\in M(n, {\bf N})$ and the latin matrix $L=(L_{ij})\in M(n, {\cal P}^{\ast}(\Sigma_{dw}^{\ast}))$ associated to the graph $G$ are given by $$A_{ij}=\left \{\begin{array}{lll} 1 & \hbox{if} & (v_{i}, v_{j})\in E\\ 0 & \hbox{if} & (v_{i}, v_{j})\notin E\\ \end{array}\right.~~~\hbox{and}~~~L_{ij}=\left \{\begin{array}{lll} v_{i}v_{j} & \hbox{if} & (v_{i}, v_{j})\in E\\ \varepsilon & \hbox{if} & (v_{i}, v_{j})\notin E\\ \end{array}\right.
\label{(3.3)}$$ Matrix algorithm for enumerating elementary paths and elementary circuits in a directed graph ================================================================================================ Consider the latin matrix $L=(L_{ij})\in M(n, {\cal P}^{\ast}(\Sigma_{dw}^{\ast}))$ associated to the graph $G=(V,E)$ ($|V|=n$), defined by $(3.3)$. Using the multiplication of matrices based on latin composition, we define by recurrence the power $L^{[k]}$ of the matrix $L$ in the following way:\ $$L^{[1]}=L,~~ L^{[2]}= L\circ_{\ell}L,~~\ldots,~~L^{[k]}=L\circ_{\ell}L^{[k-1]}~~\hbox{for}~~k\geq 2 .\label{(4.1)}$$ Applying $(3.1)$ and replacing $\oplus$ and $\otimes$ with the corresponding operations of the semiring $ M(n, {\cal P}^{\ast}(\Sigma_{dw}^{\ast}))$, we have $L^{[k]}=(L_{ij}^{[k]})$ for $i,j=\overline{1,n}$, where\ $$L_{ij}^{[k]}=\bigcup\limits_{m=1}^{n}(L_{im}\circ_{\ell} L_{mj}^{[k-1]}),~~k\geq 2.\label{(4.2)}$$ It is easy to prove that:\ $$L^{[n]} ~~\hbox{ is a diagonal matrix}~~\hbox{and}~~ L^{[n+q]}=O_{\oplus n}~~\hbox{for all}~~q\geq 1.\label{(4.3)}$$ Let $G=(V,E)$ be a directed graph with $V=\{v_{1}, v_{2}, \ldots, v_{n}\}$ and the latin matrix $L=(L_{ij})\in M(n, {\cal P}^{\ast}(\Sigma_{dw}^{\ast}))$. If $L^{[k]}=(L_{ij}^{[k]})$, then:\ $$L_{ij}^{[k]} = P_{elem}(v_{i}, v_{j}, k),~~~~ i,j=\overline{1,n},~i \neq j,~1\leq k\leq n-1; \label{(4.4)}$$ $$L_{ii}^{[k]} = C_{elem}(v_{i}, k),~~~~~~~~~~~~~~~~~~~~~~ i=\overline{1,n},~1\leq k\leq n. \label{(4.5)}$$ [*Proof.*]{} We proceed by induction on $k$. Choose $v_{i}$ and $v_{j}$ arbitrarily. [**(1) Case $i\neq j$**]{}. For $k=1$, the relation $(4.4)$ holds. Indeed, since $L_{ij}^{[1]}= v_{i}v_{j},$ the only elementary path in $P_{elem}(v_{i}, v_{j}, 1)$ is the arc $(v_{i}, v_{j})$. Therefore, $P_{elem}(v_{i}, v_{j}, 1)= \{ (v_{i}, v_{j})\}$. Note that if $L_{ij}=\varepsilon,$ then $P_{elem}(v_{i}, v_{j}, 1)=\emptyset.$ Now assume the relation $(4.4)$ holds true for $k$.
Then $ L_{ij}^{[k]} = P_{elem}(v_{i}, v_{j}, k),$ that is, $ L_{ij}^{[k]}$ represents the set of all elementary paths of length $k$ from $v_{i}$ to $v_{j}$. Using $(4.2)$, the $(i,j)$ entry of the matrix $L^{[k+1]} = L\circ_{\ell} L^{[k]}$ is given explicitly by $$L_{ij}^{[k+1]}=(L_{i1}\circ_{\ell} L_{1j}^{[k]})\cup \ldots \cup (L_{im}\circ_{\ell} L_{mj}^{[k]})\cup\ldots \cup (L_{in}\circ_{\ell} L_{nj}^{[k]}), ~~~k\geq 1. \label{(4.6)}$$ We first evaluate the term $L_{im}\circ_{\ell} L_{mj}^{[k]}$ from the equality $(4.6)$, for a fixed integer $m$ with $1\leq m\leq n$. We have the following situations. $\bullet~~~$ If $L_{im}=\varepsilon$ or $ L_{mj}^{[k]}=\varepsilon$, then $L_{im}\circ_{\ell} L_{mj}^{[k]}=\varepsilon$, that is, the set of elementary paths of length $k+1$ from $v_{i}$ to $v_{j}$ passing through the vertex $v_{m}$ is the empty set. $\bullet~~~L_{im}\neq\varepsilon$ and $ L_{mj}^{[k]}\neq\varepsilon$. By the induction hypothesis we have\ $L_{mj}^{[k]}= P_{elem}(v_{m}, v_{j}, k)= \{ p_{mj}^{1}, p_{mj}^{2},\ldots, p_{mj}^{r-1},p_{mj}^{r} \},$ where $p_{mj}^{s}$ is an elementary path of length $k$ from $v_{m}$ to $ v_{j}$ for $1\leq s\leq r$. Since $L_{im}=v_{i}v_{m}$ it follows that\ $$L_{im}\circ_{\ell} L_{mj}^{[k]}=v_{i}v_{m}\circ_{\ell}\{ p_{mj}^{1}, p_{mj}^{2},\ldots, p_{mj}^{r-1},p_{mj}^{r} \}$$ where $p_{mj}^{s}$ is regarded as a distinguished word, having the set of vertices\ $X_{mj}^{s}= \{v_{m}= v_{m_{0},j}, v_{m_{0}+1,j},\ldots, v_{m_{0}+k-1,j},v_{m_{0}+k,j}=v_{j} \}.$ The element $ v_{i}v_{m}\circ_{\ell} p_{mj}^{s}~$ can take the following values: $-~~~v_{i}v_{m}\circ_{\ell} p_{mj}^{s}=\{v_{i}v_{m}v_{m_{0}+1,j}\ldots v_{m_{0}+k-1,j}v_{j} \},$ if $\{v_{i}\}\cap X_{mj}^{s}=\emptyset$,\ that is, $v_{i}v_{m}\circ_{\ell} p_{mj}^{s}$ is an elementary path of length $k+1$ from $v_{i}$ to $v_{j}$; $-~~~v_{i}v_{m}\circ_{\ell} p_{mj}^{s}=\{\lambda\}, $ if $\{v_{i}\}\cap X_{mj}^{s}\neq\emptyset$.
Then $L_{im}\circ_{\ell} L_{mj}^{[k]}=\{\lambda\} $ or $~L_{im}\circ_{\ell} L_{mj}^{[k]} $ is a set which contains $t~(1\leq t\leq r)$ elementary paths of length $k+1$ from $v_{i}$ to $v_{j}$ passing through the vertex $v_{m}$. Therefore, $ L_{ij}^{[k+1]} = \bigcup\limits_{m=1}^{n} (L_{im}\circ_{\ell} L_{mj}^{[k]}) = P_{elem}(v_{i}, v_{j}, k+1) $. Hence, $(4.4)$ holds for $k+1.$ This completes the inductive step and proves the assertion in the case $i\neq j$. [**(2) Case $i=j$.**]{} If $L_{ii}^{[1]}= v_{i}v_{i},$ then $ C_{elem}(v_{i}, 1)= \{ (v_{i}, v_{i})\}$. Also, if $L_{ii}=\varepsilon,$ then $C_{elem}(v_{i}, 1)=\emptyset.$ Hence, $(4.5)$ holds for $k=1.$ Assume that $(4.5)$ holds true for $k$. Then $ L_{ii}^{[k]} = C_{elem}(v_{i}, k),$ that is, $ L_{ii}^{[k]}$ represents the set of all elementary circuits of length $k$ starting at $v_{i}$. The $(i,i)$ entry of the matrix $ L^{[k+1]}$ is $~L_{ii}^{[k+1]}=\bigcup\limits_{m=1}^{n}(L_{im}\circ_{\ell} L_{mi}^{[k]}),~k\geq 1.$ We evaluate the term $L_{im}\circ_{\ell} L_{mi}^{[k]}$ for a fixed integer $m$ with $1\leq m\leq n$. We have the following situations. $\bullet~~$ If $L_{im}=\varepsilon$ or $ L_{mi}^{[k]}=\varepsilon$, then $L_{im}\circ_{\ell} L_{mi}^{[k]}=\varepsilon$. $\bullet~~ L_{im}\neq\varepsilon$ and $ L_{mi}^{[k]}\neq\varepsilon$. Applying $(4.4)$ we have $~ L_{mi}^{[k]}= P_{elem}(v_{m}, v_{i}, k)= \{ q_{mi}^{1}, q_{mi}^{2},\ldots, q_{mi}^{r-1},q_{mi}^{r} \},$ where $q_{mi}^{s}$ is an elementary path of length $k$ from $v_{m}$ to $ v_{i}$ for $1\leq s\leq r$. Since $L_{im}=v_{i}v_{m}$ it follows that $$L_{im}\circ_{\ell} L_{mi}^{[k]}=v_{i}v_{m}\circ_{\ell}\{ q_{mi}^{1}, q_{mi}^{2},\ldots, q_{mi}^{r-1},q_{mi}^{r} \}$$ where $q_{mi}^{s}$ is regarded as a distinguished word, having the set of vertices\ $Y_{mi}^{s}= \{v_{m}= v_{m_{1},i}, v_{m_{1}+1,i},\ldots, v_{m_{1}+k-1,i},v_{m_{1}+k,i}=v_{i}\}$.
The element $ v_{i}v_{m}\circ_{\ell} q_{mi}^{s} $ can take the following values: $-~~~v_{i}v_{m}\circ_{\ell} q_{mi}^{s}=\{v_{i}v_{m}v_{m_{1}+1,i}\ldots v_{m_{1}+k-1,i}v_{i} \},$ if $\{v_{i}\}\cap (Y_{mi}^{s}\setminus \{v_{i}\})=\emptyset$,\ that is, $v_{i}v_{m}\circ_{\ell} q_{mi}^{s}$ is an elementary circuit from $v_{i}$ to $v_{i}$ of length $k+1$; $-~~~v_{i}v_{m}\circ_{\ell} q_{mi}^{s}=\{\lambda\}, $ if $\{v_{i}\}\cap (Y_{mi}^{s}\setminus \{v_{i}\})\neq\emptyset$. Then $L_{im}\circ_{\ell} L_{mi}^{[k]}=\{\lambda\} $ or $~L_{im}\circ_{\ell} L_{mi}^{[k]} $ is a set which contains $t_{1}~(1\leq t_{1}\leq r)$ elementary circuits of length $k+1$ from $v_{i}$ to $v_{i}$ passing through the vertex $v_{m}$. Therefore, $ L_{ii}^{[k+1]} = C_{elem}(v_{i}, k+1) $. Hence, $(4.5)$ holds for $k+1.$ This completes the inductive step and proves the assertion in the case $i=j$. $\Box$ Applying Theorem 4.1 we can give an answer to $({\bf EPP})$ and $({\bf ECP})$ for a directed graph. For this purpose we give a new method based on the latin composition of distinguished languages, which we call the [*algorithm of latin composition of distinguished languages*]{} (shortly, the [**LCDL**]{}-algorithm). *For a directed graph $G=(V, E)$ with $V=\{v_{1}, v_{2}, \ldots, v_{n}\}$, the [**LCDL**]{}-algorithm consists of the following steps:* [**Step 1.**]{} Associate the latin matrix $L\in M(n, {\cal P}^{\ast}(\Sigma_{dw}^{\ast}))$ to the graph $G$; [**Step 2.**]{} For each $k~(1\leq k\leq n)$ compute the matrix $L^{[k]}=(L_{ij}^{[k]})$; [**Step 3.**]{}$~(i)$ For each pair $(i,j)$ and each $k~(1\leq k\leq n-1)$ enumerate the elementary paths of length $k$ from $v_{i}$ to $v_{j}$ in $G$ [(we apply the relation $(4.4)$)]{}; $(ii)~$ For each $i$ and each $k~(1\leq k\leq n)$ enumerate the elementary circuits of length $k$ starting at $v_{i}$ in $G$ [(we apply the relation $(4.5)$).]{} Let $G=(V,E)$ be a directed graph with $V=\{v_{1}, v_{2}, \ldots, v_{n}\}$.
$(i)$ [**(1) **]{} For $1\leq k\leq n-2$ and $i,j=\overline{1,n}$ with $i\neq j$, the elements $L_{ij}^{[k]}$ indicate the elementary paths which are formed by $k$ arcs, so that $L_{ij}^{[n-1]}$ determines all Hamiltonian paths in $G$ between $v_{i}$ and $v_{j}$. [**(2) **]{} For $1\leq k\leq n-1$ and $i=\overline{1,n}$, the elements $L_{ii}^{[k]}$ indicate the elementary circuits of length $k$, so that $L_{ii}^{[n]}$ determines all Hamiltonian circuits in $G$ starting at $v_{i}$. $(ii)~$ In applications, the necessity often arises to determine the elementary paths and elementary circuits of maximum length in a directed graph. The [**LCDL**]{}-algorithm determines the Hamiltonian paths (resp., Hamiltonian circuits) when they exist, or determines the elementary paths (resp., circuits) of maximum length when there are no Hamiltonian paths (resp., circuits). $\Box$ ([@bagu]) [The powers of the adjacency matrix $A\in M(n, {\bf N})$ associated to the graph $G=(V,E)$ are used to find the number of distinct paths and distinct circuits between two vertices in $V$ (two paths or circuits of length $k$ are distinct if they visit a different sequence of vertices). More precisely:]{} Let $A^{k}=(A_{ij}^{k})$ be the $k$-th power of the adjacency matrix $A$. Then:\ $$A_{ij}^{k}= |P(v_{i}, v_{j},k)| ~~~\hbox{and}~~~ A_{ii}^{k}= |C(v_{i}, k)|~~~\hbox{for all}~~k\geq 1.~~~ \hfill\Box$$ Let $G=(V, E)$ be a directed graph with vertex set $V=\{v_{1}, v_{2}, v_{3}, v_{4}\}$ and the adjacency matrix $A\in M_{4}({\bf N})$ of $G$ where\ $$A = \left( \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{array} \right).$$\ The geometric representation of the graph $G$ is given in Figure 4.1.\ ![image](fig-1.eps){width="7cm"} Fig.4.1.\ $(i)~$ Computing the power $A^{3}$ of the adjacency matrix $A$ in terms of the numbering semiring $({\bf N}, +, \cdot, 0, 1)$ we have $A_{14}^{3}=5~$ and $A_{22}^{3}=1$.
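These counts can be checked directly with ordinary integer matrix multiplication (a short sketch of ours, not part of the paper):

```python
# Verify A^3_{14} = 5 and A^3_{22} = 1 for the example graph, i.e. the
# path-counting property A^k_{ij} = |P(v_i, v_j, k)| of the adjacency matrix.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][m] * Y[m][j] for m in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 1, 1, 1],
     [0, 1, 1, 1],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]
A3 = matmul(A, matmul(A, A))   # A3[0][3] counts paths of length 3 from v1 to v4
```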
Then: ${\bullet~~}$ Between $v_{1}$ and $v_{4}$ there exist $5$ paths of length $3$, namely: $p_{1}^{3}=( v_{1}, v_{2}, v_{3},v_{4}),$\ $p_{2}^{3}=( v_{1}, v_{1}, v_{1}, v_{4}),~ p_{3}^{3}=(v_{1}, v_{1}, v_{2}, v_{4}),~p_{4}^{3}= (v_{1}, v_{2}, v_{2}, v_{4}),~ p_{5}^{3}=( v_{1}, v_{1}, v_{3}, v_{4})$; ${\bullet~~} G$ has one circuit of length $3$ starting at $v_{2}$, namely $ c_{1}^{3}=( v_{2}, v_{2}, v_{2}, v_{2})$. $(ii)~$ We consider the alphabet $\Sigma = \{ v_{1}, v_{2}, v_{3}, v_{4} \} $. To find the elementary paths and elementary circuits in the graph $G$ we apply the [**LCDL**]{}-algorithm. The latin matrix $L\in M_{4}({\cal P}^{\ast}(\Sigma_{dw}^{\ast}))$ associated to the graph $G$ is\ $$L = \left( \begin{array}{cccc} v_1v_1 & v_1v_2 & v_1v_3& v_1v_4 \\ \varepsilon & v_2v_2 & v_2v_3 & v_2v_4 \\ \varepsilon & \varepsilon & \varepsilon & v_3v_4 \\ \varepsilon & \varepsilon & \varepsilon & \varepsilon \end{array} \right).$$\ We compute the powers of the latin matrix $L$ in terms of the semiring of distinguished languages.
We first compute the matrix $L^{[2]}=(L_{ij}^{[2]}).$ We have $$L^{[2]}=L\circ_{\ell} L=\left( \begin{array}{cccc} \varepsilon & \varepsilon & v_1v_2v_3& \{v_{1} v_{2}v_{4}, v_{1} v_{3}v_{4} \} \\ \varepsilon & \varepsilon & \varepsilon & v_{2}v_{3}v_{4} \\ \varepsilon & \varepsilon & \varepsilon & \varepsilon \\ \varepsilon & \varepsilon & \varepsilon & \varepsilon \end{array} \right).$$ For example, $L_{14}^{[2]}$ is computed as follows:\ $L_{14}^{[2]}= (L_{11}\circ_{\ell}L_{14})\cup (L_{12}\circ_{\ell}L_{24})\cup(L_{13}\circ_{\ell}L_{34})\cup(L_{14}\circ_{\ell}L_{44})=(v_{1}v_{1}\circ_{\ell}v_{1}v_{4})\cup$\ $\cup(v_{1}v_{2}\circ_{\ell}v_{2}v_{4})\cup(v_{1}v_{3}\circ_{\ell}v_{3}v_{4})\cup (v_{1}v_{4}\circ_{\ell}\varepsilon) = \varepsilon\cup \{v_{1}v_{2}v_{4}\}\cup \{v_{1}v_{3}v_{4}\}\cup \varepsilon=\{v_{1}v_{2}v_{4}, v_{1}v_{3}v_{4}\}.$\ Since $L_{14}^{[2]}= \{v_{1}v_{2}v_{4}, v_{1}v_{3}v_{4}\}$ it follows that there exist two elementary paths of length $2$ from $v_{1}$ to $v_{4}$, and we have $P_{elem}(v_{1}, v_{4}, 2)=\{ (v_{1}, v_{2}, v_{4}), ~(v_{1}, v_{3}, v_{4})\}.$ The matrices $ L^{[3]}=L\circ_{\ell} L^{[2]}$ and $ L^{[4]}=L\circ_{\ell} L^{[3]}$ are given by $$L^{[3]}=\left( \begin{array}{cccc} \varepsilon & \varepsilon & \varepsilon & v_{1} v_{2}v_{3}v_{4} \\ \varepsilon & \varepsilon & \varepsilon & \varepsilon \\ \varepsilon & \varepsilon & \varepsilon & \varepsilon \\ \varepsilon & \varepsilon & \varepsilon & \varepsilon \end{array} \right)~~~\hbox{and}~~~L^{[4]}=\left( \begin{array}{cccc} \varepsilon & \varepsilon & \varepsilon & \varepsilon \\ \varepsilon & \varepsilon & \varepsilon & \varepsilon \\ \varepsilon & \varepsilon & \varepsilon & \varepsilon \\ \varepsilon & \varepsilon & \varepsilon & \varepsilon \end{array} \right).$$ Using the matrices $L^{[3]}$ and $L^{[4]}$ one obtains the following results: $\bullet~~ L_{14}^{[3]}=\{v_{1}v_{2}v_{3}v_{4}\}$.
Then $P_{elem}(v_{1}, v_{4}, 3)=\{(v_{1}, v_{2}, v_{3}, v_{4})\}$ and $G$ has only one Hamiltonian path. The set of elementary paths of maximum length from $v_{2}$ to $v_{4}$ is $P_{elem}(v_{2}, v_{4}, 2)=\{ ( v_{2}, v_{3}, v_{4})\}$, since $L_{24}^{[2]}=\{v_{2}v_{3}v_{4}\}$ and $L_{24}^{[3]}=\varepsilon.$ $\bullet~~ L_{ii}^{[k]}=\varepsilon~$ for $2\leq k\leq 4$ and $i=\overline{1,4}.$ Then $C_{elem}(v_{i}, k)=\emptyset $. The elementary circuits of maximum length are those of length $1$. $\Box$ [**Application. The finding of Hamiltonian paths and Hamiltonian circuits of minimal cost in a weighted directed graph**]{}. *The [**LCDL**]{}-algorithm can be used to solve the following two problems in a weighted directed graph $G=(V,E)$:* $(i)~$ find a Hamiltonian path of minimal cost between two vertices in $G$; $(ii)~$ for each $v\in V$, find a Hamiltonian circuit of minimal cost starting at $v$. One way to solve the above problems consists of searching all possible Hamiltonian paths and Hamiltonian circuits (we apply the [**LCDL**]{}-algorithm) and computing their costs (we apply the relation (3.1)).
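The three steps of the [**LCDL**]{}-algorithm, together with the cost evaluation, can be rendered as a short program (our own Python sketch; the word-as-tuple representation and the helper names are ours, and the weights passed to `path_cost` below are purely illustrative):

```python
# Latin composition of two distinguished languages U, W (sets of words,
# each word a tuple of vertices).  A composed word is kept only if it is
# elementary: all vertices distinct, except that the first and the last
# may coincide (which gives an elementary circuit).
def latin_compose(U, W):
    out = set()
    for u in U:
        for w in W:
            if u[-1] != w[0]:
                continue
            cand = u[:-1] + w
            body = cand[:-1] if cand[0] == cand[-1] else cand
            if len(body) == len(set(body)):
                out.add(cand)
    return out

def latin_matmul(L1, L2):
    # matrix product over the semiring (union, latin composition)
    n = len(L1)
    return [[set().union(*(latin_compose(L1[i][m], L2[m][j]) for m in range(n)))
             for j in range(n)] for i in range(n)]

def path_cost(word, w):
    # relation (3.1): the cost of a path is the sum of its arc weights
    return sum(w[a] for a in zip(word, word[1:]))

# Step 1: the latin matrix of the 4-vertex example graph above.
V = [1, 2, 3, 4]
E = {(1, 1), (1, 2), (1, 3), (1, 4), (2, 2), (2, 3), (2, 4), (3, 4)}
L = [[{(u, v)} if (u, v) in E else set() for v in V] for u in V]
# Steps 2-3: the powers L^[k] = L o L^[k-1] list the elementary paths.
L2 = latin_matmul(L, L)
L3 = latin_matmul(L, L2)
```

Here `L2[0][3]` reproduces $P_{elem}(v_{1}, v_{4}, 2)$ and `L3[0][3]` the single Hamiltonian path; taking the word of smallest `path_cost` among the Hamiltonian entries then solves problems $(i)$ and $(ii)$.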
Let $G=(V, E, w)$ be a weighted directed graph where $V=\{1, 2, 3, 4, 5\},$\ $E=\{ (1, 2), (1, 3), (1, 5), (2, 1), (2, 5), (3, 2), (4, 3), (4, 5), (5, 1), (5, 2), (5, 3), (5, 4) \}$ and the cost function $ w_{cost}: E\to {\bf R},~(i,j)\mapsto w_{cost}(i,j)=w_{ij}$ given by\ $\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|} \hline (i,j)& (1, 2)& (1,3)&(1, 5)&(2, 1)&(2, 5)&(3, 2)&(4, 3)&(4, 5)&(5, 1)&(5, 2)\\ \hline w_{ij}& 4 & 2 & 6 & 3 & 3 & 1 & 5 & 4 & 6 & 1 \cr \hline \end{array}$\ $\begin{array}{|c|c|c|} \hline (i,j)& (5, 3)&(5, 4)\\ \hline w_{ij}& 2 &1 \cr \hline \end{array}$\ The latin matrix $L\in M_{5}({\cal P}^{\ast}(\Sigma_{dw}^{\ast}))$ ($\Sigma = \{ 1, 2, 3, 4, 5 \} $) associated to the graph $G$ is $$L = \left( \begin{array}{ccccc} \varepsilon & 12 & 13 & \varepsilon & 15\\ 21 & \varepsilon & \varepsilon &\varepsilon & 25 \\ \varepsilon & 32 & \varepsilon & \varepsilon & \varepsilon \\ \varepsilon& \varepsilon & 43 & \varepsilon & 45\\ 51 & 52 & 53 & 54 & \varepsilon\\ \end{array} \right).$$ Let us compute the powers of the latin matrix $L$.
We have $$L^{[2]}=L\circ_{\ell} L =\left( \begin{array}{ccccc} \{121, 151\}& \{132,152\}& 153& 154 & 125 \\ 251 & \{ 212, 252\}& \{213, 253\} & 254 & 215 \\ 321 & \varepsilon & \varepsilon & \varepsilon & 325 \\ 451 & \{ 432, 452\}& 453 & 454 & \varepsilon\\ 521 & \{ 512, 532\}& \{513, 543\} & \varepsilon &\{515, 525, 545\} \\ \end{array} \right).$$ The matrix $ L^{[3]} = L\circ_{\ell} L^{[2]}$ has the following lines: $L_{1}^{[3]}:~~(\{1321,1521,1251\},~1532,~ \{1543,1253\},~1254,~1325);$\ $L_{2}^{[3]}:~~(\varepsilon,~ \{2512,2132,2532,2152\},~ \{ 2513,2543,2153\},~ 2154, ~~ \varepsilon );$\ $L_{3}^{[3]}:~~(3251,~~ \varepsilon,~ \{3213,3253\},~3254,~ 3215 );$\ $L_{4}^{[3]}:~~(\{4321,4521\},~\{4512, 4532\},~ 4513,~~ \varepsilon,~~ 4325 );$\ $L_{5}^{[3]}:~~(5321,~ \{5132,5432\},~ 5213,~ \varepsilon,~ \{5215,5125,5325\}).$\ The matrix $ L^{[4]} = L\circ_{\ell} L^{[3]}$ has the following lines: $L_{1}^{[4]}:~~(\{15321,13251\},~ 15432, ~12543,~ 13254,~~ \varepsilon );$\ $L_{2}^{[4]}:~~(\varepsilon,~ \{25132,25432,21532\},~ 21543,~~ \varepsilon,~~ \varepsilon);$\ $L_{3}^{[4]}:~~(\varepsilon,~~ \varepsilon,~ \{32513,32543,32153\},~ 32154,~~ \varepsilon);$\ $L_{4}^{[4]}:~~(\{45321,43251\},~ 45132,~ 45213,~ 43254,~ 43215);$\ $L_{5}^{[4]}:~~(54321,~~ \varepsilon,~~ \varepsilon,~~ \varepsilon,~ \{53215,51325,54325\}).$\ Finally, we have $$L^{[5]} = L\circ_{\ell} L^{[4]}= \left( \begin{array}{ccccc} 154321 & \varepsilon & \varepsilon & \varepsilon & \varepsilon\\ \varepsilon & 215432 & \varepsilon & \varepsilon & \varepsilon\\ \varepsilon & \varepsilon & 321543 & \varepsilon & \varepsilon\\ \varepsilon & \varepsilon & \varepsilon & 432154 & \varepsilon\\ \varepsilon & \varepsilon & \varepsilon & \varepsilon & 543215 \end{array} \right).$$ $\bullet ~~$ From $L^{[4]}$ it follows that $G$ has $11$ Hamiltonian paths. For example, $~P_{elem}(4,1,4)=\{p_{1,H}^{4}=(4,5,3,2,1), p_{2,H}^{4}=(4,3, 2, 5,1)\}$ since $L_{41}^{[4]}=\{45321, 43251\}$. We have $~w(p_{1,H}^{4})= 10$ and $~w(p_{2,H}^{4})=15.~$ It follows that $p_{1,H}^{4}$ is the Hamiltonian path of minimal cost between the vertices $4$ and $1$, with cost equal to $10$. $\bullet ~~$ From $L^{[5]}$ it follows that $G$ has $5$ Hamiltonian circuits, all of them rotations of the same Hamiltonian cycle. For example, $C_{elem}(1,5)=L_{11}^{[5]}=\{c_{1,H}^{5}=(1, 5, 4, 3, 2, 1)\}$. We have $~w(c_{1,H}^{5}) = 16$. Since it is the only Hamiltonian circuit starting at vertex $1$, $c_{1,H}^{5}$ is the one of minimal cost, equal to $16$. $\Box$ [**Conclusions**]{}. The [**LCDL**]{}-algorithm can be easily programmed and gives an efficient solution of [**EPP**]{} and [**ECP**]{} for a finite directed graph. This can be seen as an improved version of Kaufmann’s algorithm. Let us list some classes of algebraic path problems which can be solved by applying the [**LCDL**]{}-algorithm: $(i)~$ enumeration of the elementary paths (resp., circuits); $(ii)$ determination of the elementary paths (resp., circuits) of maximum length; $(iii)$ testing a graph $G$ for having Hamiltonian paths or Hamiltonian circuits; $(iv)$ optimization (Hamiltonian path or Hamiltonian circuit of minimal cost). $\Box$ [99]{} J. Bang-Jensen and G. Gutin, [*Digraphs: Theory, Algorithms and Applications*]{}, Springer-Verlag, 2007. E. Fink, [*A survey of sequential and systolic algorithms for the algebraic path problem*]{}, Technical Report CS-92-37, Department of Computer Science, University of Waterloo, 1992. A. Kaufmann, *Graphs, Dynamic Programming and Finite Games,* Academic Press, New York, 1967. W. Kuich and A. Salomaa, [*Semirings, Automata, Languages*]{}, EATCS Monographs on Theoretical Computer Science, [**5**]{}, Springer-Verlag, 1986. G. L. Litvinov, *The Maslov dequantization, idempotent and tropical mathematics: a brief introduction,* Journal of Mathematical Sciences, [**140**]{} (2007), no. 3, 426–444. M.
Mohri, *Semirings, frameworks and algorithms for shortest-distance problems,* Journal of Automata, Languages and Combinatorics, [**7**]{} (2002), no. 3, 321–350. Author’s address\ West University of Timişoara,\ Department of Mathematics, Bd. V. P[â]{}rvan, no. 4, 300223, Timişoara, Romania\ E-mail: ivan@math.uvt.ro\ [^1]: [*AMS classification:*]{} 16Y60, 15A09, 05C20.\ [*Key words and phrases:*]{} idempotent semiring, semiring of distinguished languages, elementary path.
--- abstract: 'Like early-type galaxies, also nearby galaxy clusters define a Fundamental Plane, a luminosity-radius relation, and a luminosity-velocity dispersion relation, whose physical origin is still unclear. By means of high resolution N–body simulations of massive dark matter halos in a $\Lambda$CDM cosmology, we find that scaling relations similar to those observed for galaxy clusters are already defined by their dark matter hosts. The slopes however are not the same, and among the various possibilities in principle able to bring the simulated and the observed scaling relations into mutual agreement, we show that the preferred solution is a luminosity-dependent mass-to-light ratio ($M/L\propto L^{\sim 0.3}$), which corresponds well to what is inferred observationally. We then show that at galactic scales there is a conflict between the cosmological predictions of structure formation, the observed trend of the mass-to-light ratio in ellipticals, and the slope of their luminosity-velocity dispersion relation (which differs significantly from the analogous one followed by clusters). The conclusion is that the scaling laws of elliptical galaxies might be the combined result of the cosmological collapse of density fluctuations at the epoch when galactic scales became non-linear, plus important modifications afterward due to early-time dissipative merging. Finally, we briefly discuss the possible evolution of the cluster scaling relations with redshift.' author: - 'B. Lanzoni$^1$' - 'L. Ciotti$^{2,3}$' - 'A. Cappi$^1$' - 'G. Tormen$^4$' - 'G.
Zamorani$^1$' date: 'September 19, 2003; accepted' title: THE SCALING RELATIONS OF GALAXY CLUSTERS AND THEIR DARK MATTER HALOS --- Introduction {#sec:intro} ============ Early-type galaxies are known to follow well defined scaling relations involving their main observational properties, i.e., the luminosity $L$, effective radius $\re$, and velocity dispersion $\sigma$: in particular we recall here the Faber-Jackson (hereafter FJ; Faber & Jackson 1976), the Kormendy (Kormendy 1977), and the Fundamental Plane (hereafter FP; Djorgovski & Davis 1987; Dressler et al. 1987) relations. Within the limitations of poorer statistics, analogous relations have been found to hold also for galaxy clusters (Schaeffer et al. 1993, hereafter S93; Adami et al. 1998a, hereafter A98; for 3-parameter scaling relations involving X–ray observables, see Annis 1994; Fujita & Takahara 1999a; Fritsch & Burchert 1999; Miller, Melott & Gorman 1999). Besides their potential importance as distance indicators, these scaling relations are also useful to get insights into the structure and, possibly, the formation and evolutionary processes of galaxies and galaxy clusters. Well defined scaling relations, which recall the observed ones, are indeed expected on the basis of the simplest model for the formation of structures in an expanding Universe, namely the gravitational collapse of density fluctuations in an otherwise homogeneous distribution of collisionless dark matter (DM). In fact, the spherical top-hat model (Gunn & Gott 1972) predicts that, at any given epoch, all the existing DM halos have just collapsed and virialized, i.e., $M = \rvir\,\sigvir^2/G$ (where $\rvir\equiv -G\,M^2/U$, $\sigvir^2\equiv 2T/M$, $U$ and $T$ being the potential and kinetic energies of a halo of mass $M$, respectively).
In addition, all the halos are characterized by a constant mean density $\rhovir$, given by the critical density of the Universe at that redshift times a factor $\Delta$ depending on $z$ and on the given cosmology (see, e.g., Peebles 1980; Eke, Cole & Frenk 1996). For simplicity, we call $\r200$ the radius of the sphere containing such a mean density, so that $M\propto\r200^3$. In general, $\rvir\ne\r200$, but if, for a family of density distributions, $\rvir/\r200\simeq const$, then the virial theorem can be rewritten as $M \propto\r200\,\sigvir^2$, and, together with $M\propto \r200^3$, it leads to $M\propto\sigvir^3$, thus providing three relations that closely resemble the observed ones. Note that these expectations involve the [*global three-dimensional*]{} properties of DM halos, while the quantities entering the observed scaling relations are [*projected*]{} on the plane of the sky. However, if DM halos are structurally homologous[^1] systems, as found in cosmological simulations (Navarro, Frenk & White 1997, hereafter NFW), and are characterized by similar velocity dispersion profiles (e.g., Cole & Lacey 1996), their projected properties are also expected to follow well defined scaling relations (with some scatter due to departures from perfect homology and sphericity). Of course, the simple considerations above are not sufficient to account for the [*observed*]{} scaling relations of galaxy clusters, at least for two reasons. The first is that a given potential well (as the one associated with the cluster DM distribution) can be filled, in principle, by very different distributions of “tracers” (such as the galaxies in the clusters, from which the scaling relations are derived). This means that the very existence of the cluster FP implies a remarkable regularity in their formation processes: galaxies must have formed or “fallen” in all clusters in a similar way.
The second reason is that any trend of the cluster mass-to-light ratio (necessary to transform masses, involved in the theoretical relations, into luminosities[^2], entering the observed ones) must be taken into account for a proper interpretation of the observed scaling relations. A distinct but strongly related question about the origin and the meaning of the scaling laws naturally arises when applying the predictions of cosmology also at galactic scales: in fact, while scale-invariant relations are predicted, different slopes of the FJ relation are observed for galaxies and for galaxy clusters (see Section \[sec:obs\], and Girardi et al. 2000). This suggests that different processes have been at work in setting or modifying the correlations at the two mass scales. The theoretical implications of the scaling laws for elliptical galaxies (Es) have been intensively explored (see, e.g., Bertin et al. 2002 and references therein), and several works have been devoted to their study within the framework of the dissipationless merging scenario in Newtonian dynamics (e.g., Capelato, de Carvalho & Carlberg 1995; Nipoti, Londrillo & Ciotti 2003a, 2003b, hereafter NLC03ab; Evstigneeva, Reshetnikov & Sotnikova 2002; Dantas et al. 2003; González-García & van Albada 2003). In particular, if the initial total energy of the system is non-negative, the merger products are found to follow the observed edge-on FP (with the important exception of merging dominated by accretion of small galaxies), but they badly fail at reproducing the FJ and the Kormendy relations, in accordance with elementary predictions based on energy conservation and on the virial theorem (NLC03ab). It has also been shown (Ciotti & van Albada 2001) that gas-free mergers cannot account at the same time for the FP and for the $M_{\rm BH}$-$\sigma$ relation (which links the black hole mass to the stellar velocity dispersion of the host spheroid; Ferrarese & Merritt 2000; Gebhardt et al.
2000); the importance of gas dissipation in the formation of elliptical galaxies is also apparent from the observed color–magnitude and ${\rm Mg}_2$-$\sigma$ relations (e.g., Saglia et al. 2000; Bernardi et al. 2003a). Much less effort has been devoted to the theoretical study of the FP of galaxy clusters: besides a work on the effects of dissipationless merging (Pentericci, Ciotti & Renzini 1996), the other few theoretical studies mainly focus on the possible relation between the FP properties and the cluster age, the number of substructures, and the underlying cosmology (Fujita & Takahara 1999b; Beisbart, Valdarnini & Buchert 2001). From the comparison between the FP of galaxies and of galaxy clusters in the $k$-space, Burstein et al. (1997) derived support for the idea that, at variance with groups and clusters, gas dissipation must have had an important role in the formation and evolution of galaxies. In order to get a more complete view of the problems depicted above, we use high-resolution N-body simulations to study the scaling relations of very massive DM halos, which are thought to host the present-day galaxy clusters. The aim is to verify whether or not these relations are similar to the observed ones, and to determine the assumptions required to make them in mutual agreement. The results thus obtained and the empirical evidence of different slopes of the scaling relations at galactic and cluster scales are then discussed. The paper is organized as follows: in Section \[sec:obs\] we present the scaling relations observed for nearby galaxy clusters and those followed by early-type galaxies. In Section \[sec:DM\] we describe the high-resolution resimulation technique employed to build the sample of massive DM halos used for our analysis, and we derive their scaling relations.
In Sections \[sec:simu\_obs\] and \[sec:cl\_gal\] we discuss under which hypotheses these scaling relations can be translated into the observed ones, focusing in particular on the astrophysical implications of the differences at galactic and cluster scales. The main results are summarized and discussed in Section \[sec:disc\]. Scaling relations of clusters and early-type galaxies {#sec:obs} ===================================================== From the observational point of view, only two works (namely, S93 and A98) report on the FP of galaxy clusters, i.e., a relation among their optical luminosity, scale radius, and velocity dispersion. The two groups agree about the existence of a tight and well defined FP, even if quantitative differences in the numerical values of its coefficients are found, which can be traced back to the different choice of the radial variable. In fact, A98 use 4 density profiles (King, Hubble, NFW, and de Vaucouleurs) to describe the distribution of cluster galaxies, concluding that the best fit is provided by the first two models: consistently, they adopt the cluster core radii to obtain the FP. S93 instead use in their analysis the cluster half-light projected radii (the standard choice in the vast majority of the FP studies). For this reason we choose to compare our results to those presented by S93: we note however that the FP coefficients derived by A98 when using the projected half-light radii derived from the de Vaucouleurs model agree within the errors with those reported by S93. For their sample of 16 galaxy clusters at $z\le 0.2$, S93 used the photometric parameters $L$ and $\re$ in the V band (quoted by West, Oemler & Dekel, 1989), and the velocity dispersion $\sigma$ (given by Struble & Rood, 1991), and they derived not only the FP, but also FJ-like and Kormendy-like relations.
However, instead of reporting the S93 scaling laws, which have been obtained by means of least-squares fits to the data, here we re-derive them by minimizing the residuals perpendicular to a straight line (for the FJ and the Kormendy relations) or to a plane (for the FP). Results are anyway consistent with those of S93. The FJ and Kormendy relations we obtain are: $$L\propto \sigma^{2.18\pm0.52}, \label{eq:fj}$$ in good agreement also with the results of Girardi et al. (2000), and $$L\propto \re^{1.55\pm0.19}, \label{eq:kor}$$ where $L$ is given in $10^{12}L_\odot/h^2$, $\sigma$ in 1000 km/s, $\re$ in Mpc$/h$ ($H_0 = 100\,h$ km s$^{-1}$ Mpc$^{-1}$, and $h=1$), and the errors on the exponents take into account also the observational uncertainties. As can be seen from Fig.\[fig:fjk\], where equations (\[eq:fj\]) and (\[eq:kor\]) are plotted together with the S93 data, the two relations above describe real scalings among the cluster properties, even if their scatter is quite large: without taking into account the observational errors, the $rms$ dispersion of the data around the best-fit lines is 0.19 in both cases. Similarly to what happens for galaxies, a considerable improvement is achieved when combining all the three observables together in a FP relation, which we have derived by performing a Principal Component Analysis (PCA; see e.g. Murtagh & Heck 1987) on the data sample, thus obtaining the new orthogonal variables $p_i$ defined by: $$p_i\equiv\alpha_i \log\re + \beta_i\log L + \gamma_i \log\sigma, ~~ i=1,2,3. \label{eq:pi}$$ The numerical values of the coefficients $\alpha_i$, $\beta_i$, and $\gamma_i$ are listed in Table \[tab:pca\], while the resulting distribution of the observed clusters in the ($p_1,p_3$) and ($p_1, p_2$) spaces are shown in Fig.\[fig:fp\_pi\], the former providing an exact edge-on view of the FP and making apparent its small thickness.
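The PCA step can be sketched as follows (our own illustration in Python with synthetic data drawn around an assumed FP-like plane; the S93 catalog values are not reproduced here):

```python
# PCA on (log Re, log L, log sigma): the eigenvector of the covariance
# matrix with the smallest eigenvalue is the normal of the best-fitting
# plane, i.e. the p_3 direction defining the edge-on FP.
import numpy as np

rng = np.random.default_rng(0)
log_re = rng.uniform(-0.5, 0.5, 200)
log_sigma = rng.uniform(-0.3, 0.3, 200)
# impose an FP-like relation with a small intrinsic scatter
log_L = 0.9 * log_re + 1.31 * log_sigma + rng.normal(0.0, 0.02, 200)

X = np.column_stack([log_re, log_L, log_sigma])
eigval, eigvec = np.linalg.eigh(np.cov(X, rowvar=False))
p3 = eigvec[:, 0]              # smallest-variance direction
slope_re = -p3[0] / p3[1]      # exponent of Re in L ~ Re^a sigma^b
slope_sigma = -p3[2] / p3[1]   # exponent of sigma
```

On this synthetic sample the normalized $p_3$ eigenvector closely recovers the input exponents; with real catalog data the same step yields the coefficients of the $p_3$ row of the table below and, solved for $\log L$, the edge-on FP relation.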
A more intuitive representation of a (nearly) edge-on view of the FP can be obtained by solving equation (\[eq:pi\]) for $i=3$ and using $p_3\simeq const$: $$L\propto\re^{0.9\pm 0.15} \,\sigma^{1.31\pm 0.22}. \label{eq:fp}$$ This relation is shown in Fig.\[fig:fpl\], and is characterized by an $rms$ dispersion of $\sim 0.07$. How do cluster scaling relations compare with the analogous relations followed by early-type galaxies? For what concerns the FP, the agreement with that of galaxies is remarkable. For example, the FP of elliptical galaxies in the B band is given by $L \propto \re^{\sim 0.8}\, \sigma^{\sim 1.3}$ (e.g., Dressler et al. 1987; J[ø]{}rgensen, Franx & Kj[æ]{}rgaard 1996; Scodeggio et al. 1998; Bernardi et al. 2003c). As is well known, the exponents of the galaxy FP depend on the adopted photometric band (see, e.g., Scodeggio et al. 1998; Treu 2001), but the differences from the values reported above become significant only when using the K-band (Pahre, Djorgovski & de Carvalho 1998). Thus, the agreement between the cluster and the galaxy FP can be regarded as robust. The situation is similar for the Kormendy relation: in fact, $L\propto \re^{\sim 1.7\pm0.07}$ has been reported for ellipticals in the B band (Davies et al. 1983; see also Schade, Barrientos & Lopez-Cruz 1997; Bernardi et al. 2003b). However, this agreement should be regarded as less robust than that of the FP, because largely different values of the Kormendy slope for various selections of the data sample have been reported in the case of galaxies (Ziegler et al. 1999). In any case, the FJ relation of galaxies is different from that of clusters: in fact, $L\propto\sigma^4$ for (luminous) Es (Faber & Jackson 1976; Forbes & Ponman 1999; Bernardi et al. 2003b), while for clusters the reported slope is around 2 (a value of $\sim 1.58$ is also found by A98).
The additional facts that the slope of the galaxy FJ seems to drop below 3 for ellipticals with $\sigma<170$ km/s (see, e.g., Davies et al. 1983), and that the exponent of the $M_{\rm BH}$-$\sigma$ relation (Ferrarese & Merritt 2000; Gebhardt et al. 2000) is remarkably similar to that of the FJ will be discussed in Section \[sec:cl\_gal\]. $$\begin{array}{r|rrr} & \alpha_i & \beta_i & \gamma_i \\ \hline p_1 & -0.67 & -0.51 & -0.94 \\ p_2 & 0.88 & -0.005 & -1.22 \\ p_3 & 0.55 & -0.61 & 0.80 \\ \end{array}$$ \[tab:pca\] DM halos scaling relations {#sec:DM} ========================== The simulations {#sec:simu} --------------- To investigate whether the dark matter hosts of galaxy clusters, as obtained by numerical simulations, do define scaling relations similar to the observed ones, a large enough sample of very massive DM halos is needed. Therefore, we employed a dissipationless simulation with $512^3$ particles of $6.86\times 10^{10}\msol/h$ mass each, in which the volume of the Universe is sufficiently large for this purpose: the box side is $479\,h^{-1}\mpc$ comoving, with $h=0.7$ (see Yoshida, Sheth & Diaferio 2001). The adopted cosmological model is a $\Lambda$CDM Universe with $\Omega_{\rm m}=0.3$, $\Omega_\Lambda=0.7$ (e.g., Ostriker & Steinhardt 1995), spectral shape $\Gamma =0.21$, and normalization to the local cluster abundance, $\sigma_8=0.9$. From this simulation, we have randomly selected a sub-sample of 13 halos at $z=0$, with masses between $10^{14}\msol/h$ and $2.3\times 10^{15}\msol/h$. They span a variety of shapes, from nearly round to more elongated. The richness of their environment also changes from case to case, with the less isolated halos usually surrounded by pronounced filamentary structures, containing massive neighbors (up to $20\%$ of the selected halo in mass). Given the mass resolution of the simulation, less than 1500 particles compose a halo of $10^{14}\msol/h$ and, due to discreteness effects, its properties defining the FP relation cannot be accurately determined.
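The quoted resolution numbers can be checked with a line of arithmetic (ours, not from the paper):

```python
# Particle counts implied by the quoted mass resolutions.
m_parent = 6.86e10              # parent-run particle mass [Msun/h]
m_resim = 2.07e9                # resimulated particle mass [Msun/h]
n_halo = 1.0e14 / m_parent      # particles in a 10^14 Msun/h halo: ~1458
gain = m_parent / m_resim       # resolution improvement: ~33
```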
We have therefore [*resimulated*]{} at higher resolution the halos in our sample by means of the technique introduced in Tormen, Bouchet & White (1997): here we recall only its relevant aspects (for more details see Lanzoni, Cappi & Ciotti 2003). The first step is to select, in a given cosmological simulation, the halo one wants to “zoom in” on. Then, the region defined by all the particles composing the selected halo and its immediate surroundings is detected in the initial conditions of the parent simulation, and the number of particles within it is increased by the factor needed to attain the desired mass resolution. Such a region is therefore called “the high resolution region” (HRR). Since the mean inter-particle separation within the HRR is smaller than in the parent simulation, the corresponding high-frequency modes of the primordial fluctuation spectrum are added to those on larger scales originally used in the parent simulation, and the overall displacement field is also modified accordingly. At the same time, the distribution of surrounding particles is smoothed by means of a spherical grid, whose spacing increases with the distance from the center: in such a way, the original surrounding particles are replaced by a smaller number of *macroparticles*, whose mass grows with the distance from the HRR. Thanks to this method, even if the number of particles in the HRR is increased, the total number of particles to be evolved in the simulation remains small enough to keep the computational cost reasonable, while the tidal field that the overall particle distribution exerts on the HRR remains very close to the original one. For the new initial configuration thus produced, vacuum boundary conditions are adopted, i.e., we assume a vanishing density fluctuation field outside the spherical distribution of particles with diameter equal to the original box size $L$. 
A new N-body simulation is then run starting from these new initial conditions, and allows one to recover the selected halo at the required resolution. We have applied this technique to the selected 13 massive DM halos. For 8 of them the resolution has been increased by a factor $\sim 33$, by means of high-resolution particles of mass $\sim 2.07\times 10^9\msol/h$ each, while a further increase by a factor of 2 has been adopted for the 5 intermediate mass halos (the particle mass is $10^9\msol/h$ in this case). The gravitational softening used for the high-resolution region is $\epsilon = 5\,\kpc/h$ (roughly Plummer equivalent), corresponding to about $0.2\%$ and $0.5\%$ of the virial radius of the most and least massive halos, respectively. This scale length represents the spatial resolution of the resimulations, to be compared with that of $30\,\kpc/h$ of the original one. To run the resimulations, the parallel dissipationless tree-code GADGET (Springel, Yoshida & White 2001) has been used. At $z=0$ the DM halos have been selected by means of a spherical overdensity criterion (Lacey & Cole 1994; Tormen et al. 1997), i.e., they are defined as spheres centered on maximum density peaks in the particle distribution, and with mean density equal to the virial density $\rhovir$ predicted by the spherical top-hat model for the adopted $\Lambda$CDM cosmology ($\rhovir \simeq 97 \,\rho_{\rm crit}$ at $z=0$). The corresponding masses, linear scales and velocity dispersions are listed in Table \[tab:simu\]. 
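The overdensity definition above fixes the virial radius once the mass is given. The following sketch makes the arithmetic explicit, assuming the standard critical density $\rho_{\rm crit}\simeq 2.775\times 10^{11}\,h^2\,\msol\,\mpc^{-3}$ (not quoted in the text); with masses in $\msol/h$ and lengths in $\mpc/h$ the factors of $h$ cancel.

```python
import math

# Spherical-overdensity definition: a halo of mass M is a sphere whose
# mean density is Delta * rho_crit, with Delta ~ 97 for this LambdaCDM
# model at z=0.  rho_crit = 2.775e11 h^2 Msun/Mpc^3 (standard value,
# assumed here); with M in Msun/h and r in Mpc/h, h drops out.
RHO_CRIT = 2.775e11   # (Msun/h) / (Mpc/h)^3
DELTA_VIR = 97.0

def r_vir(mass):
    """Virial radius in Mpc/h for a halo mass in Msun/h."""
    return (3.0 * mass / (4.0 * math.pi * DELTA_VIR * RHO_CRIT)) ** (1.0 / 3.0)

print(round(r_vir(1.0e15), 2))     # a 1e15 Msun/h halo: r_vir ~ 2.1 Mpc/h
print(round(r_vir(10.78e14), 2))   # compare with Table [tab:simu]
```

For the most massive tabulated halos ($10.78$ and $23.42\times 10^{14}\,\msol/h$) this gives $\simeq 2.12$ and $\simeq 2.75\,\mpc/h$, apparently consistent with the radii listed in Table \[tab:simu\].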
[rrccrcccrrr]{} 1542 & 0.82 & 0.99 & 0.77 & 760 & 0.22 & 0.24 & 0.21 & 471 & 470 & 511\ 3344 & 1.09 & 0.99 & 0.83 & 815 & 0.30 & 0.24 & 0.29 & 459 & 554 & 480\ 914 & 1.45 & 1.09 & 0.75 & 967 & 0.27 & 0.28 & 0.28 & 616 & 605 & 541\ 4478 & 2.92 & 1.37 & 1.02 & 1075 & 0.54 & 0.52 & 0.45 & 556 & 656 & 689\ 1777 & 3.83 & 1.50 & 0.94 & 1177 & 0.40 & 0.54 & 0.46 & 783 & 638 & 742\ 564 & 4.91 & 1.63 & 1.01 & 1271 & 0.65 & 0.59 & 0.47 & 696 & 878 & 766\ 689 & 6.08 & 1.75 & 1.01 & 1354 & 0.56 & 0.65 & 0.69 & 837 & 749 & 761\ 245 & 6.50 & 1.79 & 1.03 & 1332 & 0.51 & 0.70 & 0.76 & 855 & 799 & 764\ 51 & 10.78 & 2.12 & 0.84 & 1794 & 0.62 & 0.70 & 0.52 & 1114 & 1018 & 1241\ 696 & 11.37 & 2.16 & 0.98 & 1669 & 0.74 & 0.72 & 0.74 & 1062 & 1048 & 959\ 72 & 11.77 & 2.18 & 0.90 & 1777 & 0.70 & 0.71 & 0.55 & 1105 & 993 & 1207\ 1 & 13.99 & 2.31 & 0.88 & 1870 & 0.66 & 0.72 & 0.69 & 1187 & 1136 & 1189\ 8 & 23.42 & 2.75 & 0.92 & 2262 & 0.79 & 0.93 & 0.99 & 1459 & 1395 & 1319\ \[tab:simu\] Scaling relations for the DM halos {#sec:DMsr} ---------------------------------- According to the selection method described in the previous section, the DM halos are all characterized by the same $\rhovir$ and should also be nearly virialized systems. The first condition (and thus the relation $M\propto \r200^3$) is satisfied by construction, while the second can be easily verified. In fact, we find that all the selected halos follow the relations $M\propto \r200\,\sigvir^2$ with an $rms$ scatter of only 0.03, and $M\propto\sigvir^{3.1}$, with $rms\simeq 0.05$. In Table \[tab:simu\] we list the ratio $\rvir/\r200$ for each halo. However, [*projected*]{} quantities are involved in the observations, and the first step of our analysis is to determine which (if any) scaling relations are satisfied by the DM halos when projected. 
We have therefore constructed the projected radial profiles of the selected halos by counting the DM particles within concentric shells around the center of mass for three arbitrary orthogonal directions ($x$, $y$, and $z$), and we defined $\rh$ as the projected radius of the circle containing half of the total number of particles. Then, the velocity dispersion $\sigmah$ has been computed from the line-of-sight (barycentric) velocity of all the particles within $\rh$. Since the DM halos, as well as real clusters, are not spherical, such a procedure gives different values of $\rh$ and $\sigmah$ for the three lines of sight (the maximum variations however never exceed 33% and 21% for the two quantities, respectively; see Table \[tab:simu\]) and we decided to build our data sample by considering all three projections for each halo. With the projected properties $\rh$ and $\sigmah$ now available, we have determined the best fit relations between $M$ and $\rh$, and between $M$ and $\sigmah$ by minimizing the perpendicular distances of the points from a straight line, and thus obtaining the DM analogues of the observed FJ and Kormendy relations: $$M\propto\sigmah^{3.02\pm 0.15}, \label{eq:DM_fj}$$ and $$M\propto \rh^{2.36\pm 0.14}, \label{eq:DM_kor}$$ with $rms \simeq 0.12$ and 0.15, respectively. With a PCA of these data we determined the relation analogous to equation (\[eq:fp\]): $$M\propto \rh^{1.1\pm 0.05}\sigmah^{1.73\pm 0.04}, \label{eq:DM_fp}$$ with $rms=0.04$. Compared to those among the virial properties, these relations have larger scatters, as expected. The FJ and FP have slopes similar to those obtained for the virial quantities, while the $M$-$\rh$ relation appears to be significantly flatter, as a consequence of the different density concentration of low and high mass halos. 
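As an illustration of the fitting procedure, the following sketch recovers the exponents of an FP-like relation via a PCA in log space: after centering, the normal of the best-fitting plane is the direction of minimum variance. The numbers here are synthetic, not the actual halo catalogue.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sample in log space (illustrative values only):
# log M = a*log r_h + b*log s_h + const, plus a small Gaussian scatter.
a_true, b_true = 1.10, 1.73
log_r = rng.uniform(-0.2, 0.5, size=300)    # log10 of r_h
log_s = rng.uniform(2.6, 3.2, size=300)     # log10 of sigma_h
log_m = a_true * log_r + b_true * log_s + 0.5 + rng.normal(0.0, 0.02, size=300)

# PCA plane fit: the last right-singular vector of the centered data
# matrix is the direction of least variance, i.e. the plane's normal.
X = np.column_stack([log_m, log_r, log_s])
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
n = Vt[-1]

# n[0]*log m + n[1]*log r + n[2]*log s = const  <=>  M ~ r_h^a * s_h^b
a_fit = -n[1] / n[0]
b_fit = -n[2] / n[0]
print(a_fit, b_fit)
```

The sign ambiguity of the singular vector cancels in the ratios, so `a_fit` and `b_fit` are well defined; with small scatter they recover the generating exponents.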
Note that the NFW concentration parameter varies only weakly in the mass range covered by the halos in our sample (e.g., Eke, Navarro & Steinmetz 2001; Bullock et al. 2001). This is consistent with our results: in fact, from equation (\[eq:DM\_kor\]) and $M\propto \r200^3$, we find that $\rh/\r200 \propto M^{\sim 0.09}$. We stress that while scaling relations between $M$, $\r200$ (or $\rvir$) and $\sigvir$ were expected, a tight correlation between projected properties is a less trivial result: in fact, structural and dynamical non-homology can, in principle, produce significantly different effective radii and projected velocity dispersion profiles for systems characterized by identical $M$, $\rvir$, and $\sigvir$. It is also known that weak homology, coupled with the virial theorem, does indeed produce well defined scaling laws (see, e.g., Bertin et al. 2002). Therefore, the scaling relations presented in equations (\[eq:DM\_fj\])–(\[eq:DM\_fp\]) are a first interesting result of our study. The difference between the values of the exponents appearing in equations (\[eq:DM\_fj\])–(\[eq:DM\_fp\]) and those in the virial relations is direct evidence of weak homology of the halos: in fact, strong homology would result in exactly the same exponents, while a strong non-homology would disrupt any correlation. We checked this further, and we found that the DM halos in our sample still show well defined scaling relations (although with different exponents) when considering the projected radius encircling any fixed fraction of the total mass, and the corresponding projected velocity dispersion within it (see Table \[tab:shells\]). 
Note that this finding is in agreement with the results already pointed out by several groups, namely the fact that DM halos obtained with numerical N-body simulations in standard cosmologies are characterized by significant structural and dynamical weak homology (e.g., Cole & Lacey 1996; NFW; Subramanian, Cen & Ostriker 2000, and references therein). [rrrrrrrrrrrr]{} 0.8 & 3.01 & 0.12 & 2.67 & 0.10 & 1.47 & 1.43 & 0.04\ 0.5 & 3.02 & 0.12 & 2.36 & 0.15 & 1.10 & 1.73 & 0.04\ 0.3 & 3.10 & 0.13 & 2.29 & 0.17 & 0.97 & 1.95 & 0.04\ 0.1 & 3.12 & 0.13 & 2.15 & 0.19 & 0.88 & 2.02 & 0.03\ \[tab:shells\] From the simulated to the observed scaling relations {#sec:simu_obs} ==================================================== Comparison of equations (\[eq:DM\_fj\]), (\[eq:DM\_kor\]), (\[eq:DM\_fp\]) and (\[eq:fj\]), (\[eq:kor\]), (\[eq:fp\]) reveals that the FJ, Kormendy and FP relations of simulated DM halos are characterized by different slopes with respect to those derived observationally. What kind of regular, systematic trends of the galaxy properties and distribution with cluster mass are implied by these differences? In order to answer this question, we define the dimensionless quantities $\Upsilon\equiv M/L$, ${\cal R}\equiv\rh/\re$, and ${\cal S}\equiv\sigmah/\sigma$. Focusing first on the edge-on FP, from equations (\[eq:fp\]) and (\[eq:DM\_fp\]) we obtain: $$\frac{\Upsilon}{{\cal{R}}^{1.1}\,{\cal{S}}^{1.73}} \propto \re^{0.2}\sigma^{0.42}. \label{eq:cfr_fp}$$ Thus, in order to transform the DM halos FP into the observed FP, the product $\Upsilon {\cal{R}}^{-1.1} {\cal{S}}^{-1.73}$ must systematically increase as $\re^{0.2}\sigma^{0.42}$, which in turn, again from equation (\[eq:fp\]), is approximately proportional to $L^{0.3}$. In principle, $\Upsilon$, ${\cal{R}}$, and ${\cal{S}}$ could all vary in a [*combined*]{} and [*regular*]{} way from cluster to cluster, so that equation (\[eq:cfr\_fp\]) is satisfied. 
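For reference, equation (\[eq:cfr\_fp\]) follows directly from the definitions (exponent uncertainties suppressed): writing $M=\Upsilon L$, $\rh={\cal R}\,\re$, and $\sigmah={\cal S}\,\sigma$ in equation (\[eq:DM\_fp\]),

```latex
M \propto \rh^{1.1}\,\sigmah^{1.73}
  = ({\cal R}\,\re)^{1.1}\,({\cal S}\,\sigma)^{1.73}
\quad\Longrightarrow\quad
\Upsilon\,L \propto {\cal R}^{1.1}\,{\cal S}^{1.73}\,\re^{1.1}\,\sigma^{1.73},
```

and dividing both sides by $L\propto\re^{0.9}\,\sigma^{1.31}$ (equation \[eq:fp\]) gives

```latex
\frac{\Upsilon}{{\cal R}^{1.1}\,{\cal S}^{1.73}}
  \propto \re^{1.1-0.9}\,\sigma^{1.73-1.31}
  = \re^{0.2}\,\sigma^{0.42}.
```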
Of course, given the small scatter around the best fit relation (\[eq:fp\]), this kind of solution requires a remarkable fine tuning of the variations of the three parameters. Alternatively, it is possible that only one of the three parameters varies significantly, while the other two are approximately constant. This situation is analogous to that faced in the studies of the physical origin of the FP tilt of elliptical galaxies, where the so called “orthogonal exploration of the parameter space” is often adopted (see, e.g., Renzini & Ciotti 1993; Ciotti 1997). In this approach all but one of the available model parameters are fixed to constant values, and the goal is to determine what kind of variation of the “free” parameter is necessary to reproduce the observed FP tilt. Several quantitative results have been derived in this framework (see, e.g., Ciotti, Lanzoni & Renzini 1996; Ciotti & Lanzoni 1997, and references therein), even though, by construction, it cannot provide the most general solution to the problem, and the choice of the specific parameter responsible for the tilt is somewhat arbitrary (see Bertin et al. 2002; Lanzoni & Ciotti 2003). In the present context some of this arbitrariness can be removed: in fact, here 1) we assume that the DM distribution in real clusters is described by the simulated DM halos, and that galaxies are merely dynamical [*tracers*]{} of the total potential well, and 2) in addition to the edge-on FP, we also consider the constraints imposed by the FJ and the Kormendy relations. These two points will [*allow us to use the orthogonal exploration approach to determine the most plausible origin of the tilt between the simulated and the observed cluster FP*]{}. 
In order to make the DM halos FP reproduce the observed one within the framework of the orthogonal exploration approach, we have three different possibilities, each corresponding to the choice of $\Upsilon$, ${\cal{R}}$, or ${\cal{S}}$ as the key parameter, while keeping constant the remaining two in the l.h.s. of equation (\[eq:cfr\_fp\]). Note that the two choices based on variations of ${\cal{R}}$ or ${\cal{S}}$ should be interpreted from an astrophysical point of view as systematic differences in the way galaxies populate the cluster DM potential well as a function of the cluster mass. However, the orthogonal analysis of the FJ and the Kormendy relations strongly argues against these two solutions, since from equations (\[eq:fj\]), (\[eq:kor\]), (\[eq:DM\_fj\]), and (\[eq:DM\_kor\]) one obtains $$\frac{\Upsilon}{{\cal{S}}^{3.02}} \propto \sigma^{0.84}, \label{eq:cfr_fj}$$ and $$\frac{\Upsilon}{{\cal{R}}^{2.36}} \propto\re^{0.81}. \label{eq:cfr_kor}$$ Thus, it is apparent that any attempt to reproduce equation (\[eq:cfr\_fp\]) by a variation of ${\cal R}$ (or ${\cal S}$) alone will fail to reproduce the FJ (or the Kormendy) relation: in fact, the only parameter appearing in all the equations (\[eq:cfr\_fp\]), (\[eq:cfr\_fj\]), and (\[eq:cfr\_kor\]) is the mass-to-light ratio[^3] $\Upsilon$. Therefore, while a [*purely structural*]{} (${\cal R}$) and a [*purely dynamical*]{} (${\cal S}$) origin of the tilt between the DM halos FP and the clusters FP both seem to be ruled out for the reasons above, a systematically varying mass-to-light ratio, for ${\cal R}$ and ${\cal S}$ constant, could in principle account for all the three considered scaling relations. In particular, from equation (\[eq:cfr\_fp\]), $\Upsilon\propto\,L^{\alpha}$ with $\alpha\sim 0.3$. 
Guided by this indication, we tried to superimpose the points corresponding to the simulated DM halos onto the sample of observed clusters by using $\Upsilon\propto M^\beta$, and we found that if $$\Upsilon = 280\, h\,\left(\frac{M}{10^{14}\,\msol/h}\right)^{0.23} \,\frac{\msol}{\lsol}, \label{eq:ml}$$ [*the edge-on FP of DM halos is practically indistinguishable from that of real clusters*]{} (see Fig.\[fig:fp\_pi\]a and Fig.\[fig:fpl\]): the value of $\alpha$ derived from this assumption is $\alpha=\beta/(1-\beta)\simeq 0.3$. Note that such a trend of $\Upsilon$ is in agreement not only with the expectations of equation (\[eq:cfr\_fp\]), but also with what is inferred observationally in the B-band (S93; Girardi et al. 2002; Bahcall & Comerford 2002; but see Bahcall, Lubin & Dorman 1995; Bahcall et al. 2000; and Kochanek et al. 2003 for claims of a constant mass-to-light ratio at large scales), and with what is inferred from the comparison between the observed B-band luminosity function of virialized systems and the halo mass function predicted in CDM cosmogonies (Marinoni & Hudson 2002). A remarkable agreement is also found with the results of van den Bosch, Yang & Mo (2003), who, using a completely different approach, obtain $\Upsilon\propto M^{0.26}$ for their model A, which allows for a non-constant $\Upsilon$ at the cluster scales (see their equation 15 and Table 2). In addition, $\Upsilon\sim 280 \,h\,\msol/\lsol$ for $M\simeq 10^{14}\,\msol/h$ clusters, and $\Upsilon\sim 475 \,h\,\msol/\lsol$ for $M\simeq 10^{15}\,\msol/h$ clusters are values well within the range of the various estimates for galaxy clusters (e.g., Adami et al. 1998b; Mellier 1999; Wilson, Kaiser & Luppino 2001; Girardi et al. 2000; Girardi et al. 2002). 
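The numbers quoted above can be checked with a few lines of arithmetic; the normalization and slope are those of equation (\[eq:ml\]), and the factor $h$ scales out of these comparisons.

```python
# Arithmetic check of eq. (ml) and of alpha = beta / (1 - beta).
beta = 0.23
alpha = beta / (1.0 - beta)   # slope of Upsilon vs. L implied by Upsilon ~ M^beta

def upsilon(m14):
    """Eq. (ml): mass-to-light ratio in h Msun/Lsun; m14 = M / (1e14 Msun/h)."""
    return 280.0 * m14 ** beta

print(alpha)           # ~0.30
print(upsilon(1.0))    # 280 for a 1e14 Msun/h cluster
print(upsilon(10.0))   # ~475 for a 1e15 Msun/h cluster
```

The identity $\alpha=\beta/(1-\beta)$ comes from $M=\Upsilon L$: if $\Upsilon\propto M^\beta$, then $M\propto L^{1/(1-\beta)}$ and hence $\Upsilon\propto L^{\beta/(1-\beta)}$.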
It is also remarkable (since it is not a necessary consequence) that by adopting equation (\[eq:ml\]) [*also the face-on FP, and the FJ and Kormendy relations are very well reproduced*]{}, as apparent from Fig.\[fig:fp\_pi\]b and Figs.\[fig:fjk\]ab, respectively. What could be the physical interpretation of the required trend of the mass-to-light ratio with cluster mass? We note that equation (\[eq:ml\]) can be formally rewritten as $\mlgal\times(M/M_{\rm gal}) \propto M^\beta$, where $M$ is the DM mass of the clusters ($\sim$ their total mass), $M_{\rm gal}$ is their total stellar content in galaxies, $$\mlgal\equiv\frac{\int{N_{\rm gal}(L)\, \Upsilon_*(L)\, L\, dL}}{\int{N_{\rm gal}(L)\, L\,dL}},$$ where $N_{\rm gal}(L)$ is the cluster luminosity function, and finally $\Upsilon_*(L)$ is the mean stellar mass-to-light ratio of a galaxy of total luminosity $L$. Thus, the trend of $\Upsilon$ with $M$ could be ascribed either to $\mlgal\propto M^\beta$ for $M/M_{\rm gal} = const$, or to $M_{\rm gal}\propto M^{1-\beta}$ for $\mlgal = const$ (or, more generally, to a combined effect of these two quantities). Both possible solutions have interesting astrophysical implications. For example, a constant $\mlgal$ from cluster to cluster is obtained only if all clusters have the same population of galaxies, i.e., if they are characterized by a universal luminosity function (LF) and by a similar morphological mix, so that the distribution of the stellar mass-to-light ratios of their galaxies is also the same. In such a case, the required trend $M_{\rm gal}\propto M^{0.77}$ should be entirely explained by a systematic increase with $M$ of the total number of galaxies, in remarkable agreement with several studies of the halo occupation numbers, which find $N_{\rm gal}\propto M^a$, with $a$ in the range 0.7–0.9 (Peacock & Smith 2000; Scranton 2002; Berlind & Weinberg 2002; Marinoni & Hudson 2002; van den Bosch et al. 2003). 
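The luminosity-weighted mean defining $\mlgal$ is easy to evaluate numerically. The sketch below does so for a toy Schechter luminosity function with an illustrative scaling $\Upsilon_*(L)=(L/L_*)^{0.3}$; both choices are hypothetical, not values from the text.

```python
import numpy as np

# Toy evaluation of Upsilon_gal = Int[N(L) Ups_*(L) L dL] / Int[N(L) L dL].
# The Schechter faint-end slope and Ups_*(L) = (L/L_*)^0.3 are purely
# illustrative assumptions.
alpha_lf = -1.1                          # hypothetical faint-end slope
L = np.linspace(0.01, 10.0, 100_000)     # L in units of L_*
N_gal = L**alpha_lf * np.exp(-L)         # Schechter luminosity function
ups_star = L**0.3                        # toy stellar M/L per galaxy

# Ratio of sums on a uniform grid approximates the ratio of integrals.
upsilon_gal = np.sum(N_gal * ups_star * L) / np.sum(N_gal * L)
print(upsilon_gal)   # dimensionless here; order unity by construction
```

The point of the exercise is that $\mlgal$ depends on both the LF shape and the morphological mix through $\Upsilon_*(L)$, so either can drive a cluster-to-cluster variation.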
To study the ratio $M/M_{\rm gal}$, an estimate of the stellar mass in galaxies can also be obtained from the observed total luminosity in the near infrared (since Es are the dominant component of the cluster population and their luminosity in the K band gives a reasonable measure of their stellar mass, i.e., $M_{\rm gal}\propto L_K$). However, observational results are still uncertain and controversial: a constant or weakly decreasing $M/L_K$ with $M$ is reported by Kochanek et al. (2003) for the 2MASS clusters, while an increasing $M/L_K$ is claimed by Lin, Mohr & Stanford (2003) for the data from the same survey. In any case, the universality of the cluster luminosity function is still under debate, since it appears to be appropriate in many cases, but variations in some individual clusters have also been reported (see, e.g., Yagi et al. 2002, De Propris et al. 2003, and Christlein & Zabludoff 2003 for detailed discussions and recent results). In particular, the LF of early-type (or quiescent) galaxies appears to vary significantly among clusters (but see Andreon 1998): if the richer clusters contain a proportionally larger fraction of elliptical galaxies (Balogh et al. 2002), that are characterized by a higher $\Upsilon_*$ compared to that of spirals, a cluster-dependent $\mlgal$ should be expected even in the presence of a universal LF. Cluster vs. galaxy scaling relations {#sec:cl_gal} ==================================== As discussed in Section \[sec:obs\], the cosmological collapse model predicts the same scaling relations at all mass scales and, remarkably, the edge-on FP and the (although more dispersed) Kormendy relation of clusters and ellipticals are very similar. In particular, the interpretation of the FP tilt of elliptical galaxies as the combined result of the virial theorem and strong structural and dynamical homology implies $\Upsilon_*\propto L^{0.3}$ in the B band (e.g., Faber et al. 1987; van Albada, Bertin & Stiavelli 1995; Bertin et al. 
2002), in agreement with the result for clusters (equation \[eq:ml\]). In Section \[sec:simu\_obs\] we showed that equation (\[eq:DM\_fj\]) coupled with $\Upsilon\propto L^{0.3}$ does reproduce the observed FJ relation at cluster scales, but clearly this cannot work in the case of galaxies: in fact, the FJ relation of Es is characterized by a significantly higher exponent ($\sim 4$) than that of clusters ($\sim 2$). Indeed, $\Upsilon_*$ should actually [*decrease*]{} for increasing galaxy luminosity in order to reproduce the observed FJ, at variance with all the available indications (e.g., Faber et al. 1987; J[ø]{}rgensen et al. 1996; Scodeggio et al. 1998). One is then forced to assume that 1) the relation $M\propto\sigvir^3$ does not apply in the case of Es, perhaps due to a failure of standard cosmology at small scales, or 2) evolutionary processes have modified the ratio $\sigma_*/\sigvir$, where $\sigma_*$ is the galaxy central velocity dispersion (from which the FJ relation is derived). As far as point 1) is concerned, this might be another problem encountered by the current cosmological paradigm at small scales, in addition to the well known over-prediction of the number of DM satellites (the so called “DM crisis”; see Moore et al. 1999; but see also Stöhr et al. 2002 and the recent results on the “running spectral index” obtained by Peiris et al. 2003 from the WMAP data) and the cusp-core problem for low surface brightness and dwarf galaxies (e.g., Swaters et al. 2003ab, and references therein). However, this point is at present very speculative, and thus in the following we restrict ourselves to the standard CDM paradigm, which predicts $M\propto\sigvir^3$ also at galactic mass scales. Two physical processes (namely, gas dissipation and early-time merging) certainly played a major role in galaxy evolution, thus supporting point 2). The effects of these two processes have already been discussed in the context of $k$-space by Burstein et al. 
(1997): here we will focus on the implications that can be derived from the simpler FJ relation, taking into account the additional information from recent numerical simulations (NLC03ab) and the constraints imposed on galaxy merging by the recently discovered $M_{\rm BH}$-$\sigma$ relation (Ferrarese & Merritt 2000; Gebhardt et al. 2000), according to which $M_{\rm BH}\propto\sigma_*^{\sim 4}$, with very small dispersion. We then adopt the point of view that at the epoch of the detachment from the Hubble flow, the seed galaxies were also characterized by the “universal” scaling law $M\propto\sigvir^3$, and we discuss the possibility that galaxy merging and gas dissipation produced the observed FJ of Es. We start by considering the effect of dissipationless merging alone. From this point of view, it is clear that merging played different roles in the evolutionary history of clusters and galaxies. In fact, while clusters can be thought of as formed from the [*collapse*]{} of density perturbations at scales that [*just became non-linear*]{}, galaxies are presently in a highly non-linear regime and the [*merging*]{} they suffered since their separation from the Hubble expansion can no longer be interpreted in terms of the cosmological collapse of density fluctuations. The differences between these two dynamical processes have important consequences for the present discussion. In fact, in the [*cosmological collapse*]{} case the initial conditions correspond to those of a “cold” system (i.e., $2\,T_i+U_i=V<0$), and so virialization increases the virial velocity dispersion of the end-products as $T_f=T_i-V$: the systematic increase of $\sigvir$ with $M$ in clusters is basically due to this process. At highly non-linear scales (such as, for instance, in the case of galaxies in the outskirts of clusters or groups) the situation is considerably different. 
In fact, if merging occurs, it cannot be interpreted as the collapse of a cold system, but, on the contrary, the initial conditions of the merging pair are in general characterized by a null or a positive $V$ (see, e.g., Binney & Tremaine 1987, Chap.7): under these conditions, the virial velocity dispersion of the remnant will not increase. In fact, numerical simulations of one and two-component galaxy models show that successive dissipationless merging at galactic scales, while preserving the edge-on FP, does not reproduce the FJ relation, since the end-products are characterized by a $\sigma_*$ that is too low (indeed, nearly constant) compared to the expectations of the FJ (NLC03ab). In other words, dissipationless merging at galactic scales in general will produce a relation $M\propto\sigvir^\alpha$ with $\alpha>3$. Obviously, this could be an interesting property in our context, since we are exactly looking for a mechanism able to increase $\alpha$ from 3 to 4. We also note that faint elliptical galaxies do follow a FJ relation with a best-fit slope significantly lower than 4 (Davies et al. 1983 report a value of $\sim 2.4$) and more similar to the cosmological predictions: a simple interpretation of this fact could be that small galaxies experienced fewer merging events than giant Es, so that their scaling relations are more reminiscent of their cosmological origin. However, even though dissipationless merging is able to increase the exponent of the $M$-$\sigma$ relation, several theoretical and empirical arguments clearly indicate that it alone cannot be at the origin of the spheroids. For example, why the exponent should increase to the observed value is unclear, even though it has been claimed that the FJ of Es is the result of cumulative effects of inelastic merging and passing encounters taking place in cluster environment (Funato, Makino & Ebisuzaki 1993; Funato & Makino 1999). 
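The energy bookkeeping behind the distinction between cold collapse and merging can be made explicit (a standard argument, with $T$ the kinetic and $U$ the potential energy): energy conservation, combined with the virial theorem for the end-product, gives

```latex
E = T_i + U_i = T_f + U_f, \qquad 2\,T_f + U_f = 0
\quad\Longrightarrow\quad
T_f = -E = T_i - (2\,T_i + U_i) = T_i - V ,
```

so that $V<0$ (a cold initial configuration) implies $T_f>T_i$ and a growing velocity dispersion, while $V\geq 0$ (a typical merging pair) implies $T_f\leq T_i$.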
These authors already discuss some weaknesses of the scenario they suggest (for instance, the difficulty of accounting for the analogous Tully-Fisher relation of spirals), and we add that it is not clear whether the other scaling laws are also reproduced by their simulations, nor how the FJ relation could be followed by field Es as well, if it results from dynamical interactions in the cluster environment. Moreover, a further difficulty is added by the recently discovered $M_{\rm BH}$-$\sigma$ relation: if the FJ is the cumulative result of random dynamical processes that changed the galaxy velocity dispersion, how can the $M_{\rm BH}$-$\sigma$ relation be so tight? Finally, an even stronger empirical argument weighs against substantial dissipationless merging for Es: in fact, if the merging end-products are forced to obey both the edge-on FP and the $M_{\rm BH}$-$\sigma$ relation, then they are characterized by exceedingly large effective radii with respect to real galaxies (Ciotti & van Albada 2001). Concerning the effect of gas dissipation on the predicted relations at galactic and cluster scales, we note that an important difference already resides in [*what*]{} is effectively observed in the two cases when deriving the scaling laws: the galaxy distribution, tracing the gravitational potential of the dark matter, in the case of clusters; the stellar population within a fraction of the effective radius, where stars themselves are the major contributors to the total gravitational field, in the case of Es. Since the top-hat model properly describes the (dissipationless) collapse of the DM component, it is not surprising that it cannot be accurately applied to predict the properties of the baryonic (dissipative) matter where it is dominant. 
In addition, gas dissipation is thought to be negligible for the formation and evolution of clusters, while it empirically appears to be increasingly important with mass for the galaxies, as mirrored by their $Mg_2$-$\sigma$ and color-magnitude relations, and by the observed metallicity gradients (e.g., Saglia et al. 2000; Bernardi et al. 1998; Bernardi et al. 2003a). As discussed by Ciotti & van Albada (2001), its effects also represent a possible solution to the exceedingly large effective radii found for (dissipationless) merger products forced to follow both the FP and the $M_{\rm BH}$-$\sigma$ relation. Thus, from the considerations above, gas dissipation is a good candidate to explain the difference between cluster and galaxy FJs. However, we note that [*baryonic dissipation alone*]{} cannot solve the discrepancy: in fact, its effect is to increase $\sigma_*$, thus further decreasing the exponent in the galactic FJ relation. This can be seen in a more quantitative way as follows. Given that galactic DM halos follow $M\propto \sigvir^3$, the FJ relation implies that $(M/M_*)\,\Upsilon_*\,(\sigma_*/\sigvir)^3\propto \sigma_*^{-1}$: this means that at fixed $M/M_*$ and $\Upsilon_*$, dissipation should have been more important in low mass galaxies, contrary to the existing observational evidence. In summary, the arguments presented above strongly suggest that merging [*and*]{} gas dissipation played a fundamental role in the formation and evolution of galaxies. In particular, given the competing effects of these two processes on the slope of the $M$-$\sigma$ relation, they appear to be [*both*]{} necessary for modifying the $M\propto\sigvir^3$ into the observed FJ. Coupled with the necessity of dissipative merging, the observational evidence that the bulk of stars in Es, even at high $z$, are old puts a strong constraint on the time when the mass of spheroids was assembled: say, $z\, \gsim\, 2$. 
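The dissipation constraint quoted above is obtained as follows (exponents rounded as in the text): combining $M\propto\sigvir^3$, the decomposition $M=(M/M_*)\,\Upsilon_*\,L$, and the FJ relation $L\propto\sigma_*^4$,

```latex
\left(\frac{M}{M_*}\right)\Upsilon_*\,L \propto \sigvir^3
  = \left(\frac{\sigvir}{\sigma_*}\right)^{3}\sigma_*^{3}
\quad\Longrightarrow\quad
\left(\frac{M}{M_*}\right)\Upsilon_*\left(\frac{\sigma_*}{\sigvir}\right)^{3}
  \propto \frac{\sigma_*^{3}}{L} \propto \sigma_*^{-1},
```

so at fixed $M/M_*$ and $\Upsilon_*$ the ratio $\sigma_*/\sigvir$ must decrease with increasing $\sigma_*$, i.e., dissipation would have to be stronger in low mass galaxies.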
As already recognized by several authors, this is in agreement with the available information about the star formation history of the Universe and the redshift evolution of the quasar luminosity function (see, e.g., Haehnelt & Kauffmann 2000; Ciotti & van Albada 2001; Burkert & Silk 2001; Yu & Tremaine 2002; Cavaliere & Vittorini 2002; Haiman, Ciotti & Ostriker 2003). A tight connection between the star formation and the quasar activity in Es is further supported by the striking similarity of slopes between the FJ and the $M_{\rm BH}$-$\sigma$ relations, thus suggesting a common origin for these two empirical laws. Based on these considerations, it seems likely that the scaling relations of Es are the result of the cosmological collapse of the density fluctuations at the epoch when galactic scales became non-linear, plus important modifications afterward due to early-time merging in gas-rich systems. On the contrary, since the present day non-linearity scale is that of galaxy clusters, and since the role of gas dissipation is thought to be marginal in the life of clusters, the cosmological collapse model and a mass-to-light ratio increasing with cluster mass seem to be sufficient to account for their scaling relations. Discussion and conclusions {#sec:disc} =========================== In this paper we have addressed the question of whether high mass DM halos (that are thought to host the clusters of galaxies) do define the FP and the other scaling relations observed for nearby clusters. For that purpose, we have analyzed a sample of 13 massive DM halos, obtained with high-resolution N-body simulations in a $\Lambda$CDM cosmology. After verifying that the DM halos do follow the predictions of the spherical collapse model for virialized systems, we have found that their projected properties also define FJ, Kormendy and FP–like relations. 
This latter result is not trivial and it can be traced back to the weak (structural and dynamical) homology shown by the DM halos assembled via hierarchical cosmological merging. However, the slopes of the DM halos scaling laws do not coincide with the observed ones, and we have discussed what kind of systematic variations of one or more of the structural and dynamical properties of the galaxy distribution with cluster mass could account for such differences. We have shown that two of the three basic options can be discarded just by requiring the simultaneous reproduction of the (edge-on) FP, FJ and Kormendy relations. A solution that instead works remarkably well is to assume a cluster mass-to-light ratio $\Upsilon$ increasing as a power law of the luminosity. The required normalization and slope agree well with those estimated observationally for real galaxy clusters. We have discussed two possible causes for such a trend of $\Upsilon$, namely a systematic increase of the [*galaxy*]{} mass-to-light ratio with cluster luminosity, or a decrease of the baryonic mass fraction for increasing cluster total mass, concluding that both possibilities are consistent with the available observational data. In any case, it appears that [*the FJ, Kormendy and FP relations of nearby clusters of galaxies can be explained as the result of the cosmological collapse of density fluctuations at the appropriate scales, plus a systematic trend of the total mass-to-light ratio with the cluster mass*]{}. Note that this is by no means a trivial result: in fact, it is known that numerical simulations of galaxy dissipationless merging (where the mass-to-light ratio is kept fixed) do reproduce well the edge-on FP, but fail badly with the FJ and the Kormendy relations (NLC03b). 
We next focused on the fact that Es follow a FJ relation with a significantly different slope ($\sim 4$) compared to that of clusters ($\sim 2$), but show a similar trend of the mass-to-light ratio with the system luminosity. We have discussed the implications of this observational evidence under the assumption that at the epoch of the detachment from the Hubble flow, $M\propto\sigvir^3$ also at galactic scales, as required by standard cosmology, and taking into account several empirical and theoretical pieces of evidence suggesting that gas dissipation and merging must have played an important role in galaxy evolution. Since dissipationless merging and baryonic dissipation have competing effects on the system velocity dispersion, we argue that the [*combined*]{} effects of these two processes are required to account for the slope of the FJ relation of elliptical galaxies. We therefore conclude that [*the scaling relations of Es might be the result of the cosmological collapse of density fluctuations at the epoch when galactic scales became non-linear, plus successive modifications due to (early-time) dissipative merging*]{}. This scenario seems to be supported also by the similarity between the slopes of the FJ and the $M_{\rm BH}$-$\sigma$ relations, and between the peak of the star formation rate of the Universe and that of the quasar luminosity function. Before concluding, we point out that, while no observational data are yet available for the scaling relations of galaxy clusters at high redshift, the results that we have obtained allow us to speculate on what could be expected. On one hand, the predictions of the top-hat model are the same at any redshift, and the weak homology of DM halos in numerical simulations also appears to hold at any epoch: thus, if the dependence of the cluster $\Upsilon$ on $L$ changes according to passive evolution, scaling relations with the same slopes should be found also at high redshift.
Although with some uncertainty, observational evidence for passive evolution of these cluster properties already exists. In fact, the characteristic luminosity of the cluster LF increases with redshift consistently with pure, passive luminosity evolution models (De Propris et al. 1999), and the same is found for the stellar mass-to-light ratio $\Upsilon_*$ of elliptical galaxies, as derived from the studies of their FP up to redshift $z=1.27$ (e.g., van Dokkum & Franx 1996; J[ø]{}rgensen et al. 1999; Kelson et al. 2000; Treu, M[ø]{}ller & Bertin 2002; van Dokkum & Stanford 2003). On the other hand, a more complicated evolution of $\Upsilon=\Upsilon_{\rm gal}\times M/M_{\rm gal}$ with $z$ might also be expected, and thus the redshift variation of the FP zero-point and slope could give important information on cluster and galaxy evolution. Although the fraction of Es in clusters appears not to change with redshift, a substantial increase of the relative amount of spirals with respect to S0 galaxies at higher $z$ is reported (Dressler et al. 1997; Fasano et al. 2000). In addition, $M/M_{\rm gal}$ might also increase with $z$, if clusters form by continuous accretion of galaxies from the field, in a DM potential well that has already settled on shorter time scales. Of course, a detailed prediction of the evolution of the cluster scaling relations in this latter case is much more difficult. Given the interest of the questions addressed above, it is apparent that with more available data (e.g., from the SDSS, RCS, MUNICS, 2dF, 2MASS surveys), a strong effort should be devoted to better determining the cluster scaling relations while, on theoretical grounds, a larger set of simulated clusters, spanning a wider mass range and taking into account also the presence of gas, should be produced and analyzed for different choices of the cosmological parameters.
[99]{} Adami C., Mazure A., Biviano A., Katgert P., & Rhee G., 1998a, A&A 331, 493 (A98) Adami C., Mazure A., Katgert P., & Biviano A., 1998b, A&A 336, 63 Andreon S., 1998, A&A 336, 98 Annis J., 1994, AAS 26, 1427 Bahcall, N.A.; Lubin, L.M., & Dorman, V., 1995, ApJ 447L, 81 Bahcall, N.A.; Cen, R.; Davé, R.; Ostriker, J.P., & Yu, Q., 2000, ApJ 541, 1 Bahcall, N.A., & Comeford J.M., 2002, ApJ 565, L5 Balogh M.L., Smail I., Bower R.G., Ziegler B.L., Smith G.P., Davies R.L., Gaztelu A., Kneib J.P., & Ebeling H., 2002, ApJ 566, 123 Beisbart C., Valdarnini R., & Buchert T., 2001, A&A 379, 412 Berlind, A.A., & Weinberg, D.H., 2002, ApJ 575, 587 Bernardi M., et al. 1998, ApJ 508, L43 Bernardi M., et al. 2003a, AJ 125, 1882 Bernardi M., et al. 2003b, AJ 125, 1849 Bernardi M., et al. 2003c, AJ 125, 1866 Bertin G., Ciotti L., & Del Principe M. 2002, A&A, 386, 149 Binney, J., & Tremaine, S. 1987, Galactic Dynamics, (Princeton University Press) Bullock, J.S., Kolatt, T.S., Sigad, Y., Somerville, R.S., Kravtsov, A.V., Klypin, A.A., Primack, J.R., & Dekel, A., 2001, MNRAS 321, 559 Burkert A., & Silk J., 2001, ApJ 554, L151 Burstein D., Bender R., Faber S., & Nolthenius R., 1997, AJ 114, 1365 Capelato H.V., de Carvalho R.R., & Carlberg R.G., 1995, ApJ 451, 525 Cavaliere A., & Vittorini V., 2002, ApJ 570, 114 Christlein D., & Zabludoff A., 2003, ApJ 591, 764 Ciotti L., in “3rd ESO-VLT Workshop – Galaxy Scaling Relations: Origins, Evolution and Applications”, L. da Costa and A. Renzini eds., (Kluwer: Dordrecht), 1997, p. 38 Ciotti L., Lanzoni B., & Renzini A., 1996, MNRAS 281, 1 Ciotti L., & Lanzoni B. 
1997, A&A, 321, 724 Ciotti L., & van Albada T.S., 2001, ApJ 552, L13 Cole S., & Lacey C., 1996, MNRAS 281, 716 Dantas C.C., Capelato H.V., Ribeiro A.L.B., & de Carvalho R.R, 2003, MNRAS 340, 398 Davies R.L., Efstathiou G., Fall S.M., Illingworth G., & Schechter P.L., 1983, ApJ 266, 41 De Propris R., Stanford S.A., Eisenhardt P.R., Dickinson M., Elston R., 1999, ApJ 118, 719 De Propris R., et al. 2003, MNRAS 342, 725 Djorgovski S., & Davis M. 1987, ApJ, 313, 59 Dressler A., Lynden-Bell D., Burstein D., Davies R.L., Faber S.M., Terlevich R., & Wegner G. 1987, ApJ, 313, 42 Dressler A., Oemler A.Jr., Couch W.J., Smail I., Ellis R.S., Barger A., Butcher H., Poggianti B.M., Sharples R.M., 1997, ApJ 490, 577 Eke V.R., Cole S., & Frenk C.S., 1996, MNRAS 282, 263 Eke, V.R., Navarro, J.F., & Steinmetz, M., 2001, ApJ 554, 114 Evstigneeva E.A., Reshetnikov V.P., & Sotnikova N.Ya., 2002, A&A 381, 6 Faber S.M., & Jackson R.E. 1976, ApJ, 204, 668 Faber S.M., Dressler A., Davies R.L., Burstein D., Lynden-Bell D., Terlevich R., & Wegner G. 1987, in: Nearly normal galaxies, ed. S.M. Faber, p. 
175 (Springer, New York) Fasano G., Poggianti B.M., Couch W.J., Bettoni D., Kj[æ]{}rgaard P., & Moles M., 2000, ApJ 542, 673 Ferrarese L., & Merritt D., 2000, ApJ 539, L9 Forbes D.A., & Ponman T.J., 1999, MNRAS 309, 623 Fritsch C., & Buchert T., 1999, A&A 344, 749 Fujita Y., & Takahara F., 1999a, ApJ 519, L51 Fujita Y., & Takahara F., 1999b, ApJ 519, L55 Funato Y., Makino J., & Ebisuzaki T., 1993, PASJ 45, 289 Funato Y., & Makino J., 1999, ApJ 511, 625 Gebhardt K., et al., 2000, ApJ 539, L13 Girardi M., Manzato P., Mezzetti M., Giuricin G., & Limboz F., 2002, ApJ 569, 720 Girardi M., Borgani S., Giuricin G., Mardirossian F., & Mezzetti M., 2000, ApJ 530, 62 González-García A.C., & van Albada T.S., 2003, MNRAS 342, L36 Gunn J.E., & Gott J.R., 1972, ApJ 176, 1 Haehnelt M.G., & Kauffmann G., 2000, MNRAS 318, L35 Haiman Z., Ciotti L., & Ostriker J.P., 2003, ApJ submitted, astro-ph/0304129 J[ø]{}rgensen I., Franx M., & Kj[æ]{}rgaard P. 1996, MNRAS 280, 167 J[ø]{}rgensen I., Franx M., Hjorth J., & van Dokkum P.G., 1999, MNRAS 308, 833 Kelson D.D., Illingworth G.D., van Dokkum P.G, & Franx, M., 2000, ApJ 531, 184 Kochanek C.S., White M., Huchra J., Macri L., Jarrett T.H., Schneider S.E., Mader J., 2003, ApJ 585, 161 Kormendy J., 1977, ApJ218, 333 Lacey C., & Cole S., 1994, MNRAS 271, 781 Lanzoni B., Cappi A., & Ciotti L., 2003, in “Computational astrophysics in Italy: methods and tools”, R. Capuzzo-Dolcetta ed., Mem. S.A.It. Supplement, vol. 1, p. 
145 (online edition) Lanzoni B., & Ciotti L., 2003, A&A 404, 819 Lin Y., Mohr J.J, & Stanford S.A., 2003, ApJ 591, 749 Marinoni C., & Hudson M.J., 2002, ApJ 569, 101 Mellier Y., 1999, ARAA 371, 27 Miller C.J., Melott A., & Gorman P., 1999, ApJ 526, L61 Moore B., Ghigna S., Governato F., Lake G., Quinn T., Stadel J., & Tozzi P., 1999, ApJL, 524, L19 Murtagh F., & Heck A., 1987, [*Multivariate Data Analysis*]{}, D.Reidel Publishing Company, Dordrecht, Holland Navarro J.F., Frenk C.S., White S.D.M., 1997, ApJ 490, 493 (NFW) Nipoti C., Londrillo P., & Ciotti L., in ESO Astrophysics Symposia: “The mass of galaxies at low and high redshift”, R. Bender and A. Renzini, eds. (Springer-Verlag), p.70, 2003 (NLC03a) Nipoti C., Londrillo P., & Ciotti L., 2003, MNRAS 342, 501 (NLC03b) Ostriker J.P., & Steinhardt P.J., 1995, Nature 377, 600 Pahre M.A., Djorgovski S.G., & de Carvalho R.R., 1998, AJ 116, 159 Peacock, J.A., & Smith, R.E., 2000, MNRAS 318, 1144 Peebles P.J.E., 1980, The Large-Scale Structure of the Universe (Princeton: Princeton Univ. Press) Peiris H.V., Komatsu E., Verde L., et al. 2003, ApJS, 148, 213 Pentericci L., Ciotti L., & Renzini A. 
1996, Astrophysical Letters and Communications, 33, 213 Renzini A., & Ciotti L., 1993, ApJ, 416, L49 Saglia R., Maraston C., Greggio L., Bender R., & Ziegler B., 2000, A&A 360, 911 Schade D., Barrientos L.F., & Lopez-Cruz O., 1997, ApJL 477, 17 Schaeffer R., Maurogordato S., Cappi A., & Bernardeau F., 1993, MNRAS 263, L21 (S93) Scodeggio M., Gavazzi G., Belsole E., Pierini D., & Boselli A., 1998, MNRAS 301, 1001 Scranton R., 2002, MNRAS 332, 697 Springel V., Yoshida N., & White S.D.M., 2001, New Astronomy, 6, 79 Stöhr F., White S.D.M., Tormen G., & Springel V., 2002, MNRAS 335, L84 Struble M.F., & Rood H.J., 1991, ApJS 77, 363 Subramanian K., Cen R., & Ostriker J.P., 2000, ApJ 538, 528 Swaters R.A., Madore B.F., van den Bosch F.C., & Balcells M., 2003a, ApJ 583, 732 Swaters R.A., Verheijen M.A.W., Bershady M.A., & Andersen D.R., 2003b, ApJ 587, L19 Tormen G., Bouchet F., & White S.D.M., 1997, MNRAS 286, 865 Treu T., 2001, PhD Thesis, Scuola Normale Superiore di Pisa Treu T., M[ø]{}ller P., & Bertin G., 2002, ApJ 564, L13 van Albada T.S., Bertin G., & Stiavelli M. 1995, MNRAS, 276, 1255 van den Bosch F.C., Yang X., & Mo H.J., 2003, MNRAS 340, 771 van Dokkum P.G., & Franx M., 1996, MNRAS 281, 985 van Dokkum P.G., & Stanford S.A., 2003, ApJ 585, 78 West M.J., Oemler A., & Dekel A., 1989, ApJ 346, 539 Wilson G., Kaiser N., & Luppino G.A., 2001, ApJ 556, 601 Yagi M., Kashikawa N., Sekiguchi M., Doi M., Yasuda N., Shimasaku K., & Okamura S., 2002, AJ 123, 66 Yoshida N., Sheth R.K., & Diaferio A., 2001, MNRAS 328, 669 Yu Q., & Tremaine S., 2002, MNRAS 335, 965 Ziegler B.L., Saglia R.P., Bender R., Belloni P., Greggio L., & Seitz S., 1999, A&A 346, 13 [^1]: Strictly speaking, DM halos are only [*weakly*]{} homologous systems, since the low mass halos are systematically more concentrated than the more massive ones (for a definition of weak homology, see Bertin, Ciotti & del Principe 2002).
[^2]: We recall here that the luminosity of a cluster refers to the sum of the luminosities of all its constituent galaxies, i.e., a sum over the cluster luminosity function. [^3]: Note that the constraints imposed by the FJ and the Kormendy relations should not be considered redundant with respect to those imposed by the edge-on FP: in fact, these two relations, albeit with a large scatter, describe how galaxies are distributed on the face-on FP.
--- abstract: 'In this paper, we first study a new sensitivity index that is based on higher moments and generalizes the so-called Sobol one. Further, following an idea of Borgonovo ([@borgonovo2007]), we define and study a new sensitivity index based on the Cramér von Mises distance. This new index appears to be more general than the Sobol one as it takes into account not only the variance, but the whole distribution of the random variable. Furthermore, we study the statistical properties of a Monte Carlo estimate of this new index.' author: - Fabrice Gamboa - Thierry Klein - 'Agnès Lagnoux[^1]' bibliography: - 'biblio\_MultiPick.bib' title: Sensitivity analysis based on Cramér von Mises distance --- [**Keywords: Sensitivity analysis, Cramér von Mises distance, Pick and Freeze method, functional delta-method, Anderson-Darling statistic.**]{} Introduction {#sinto} ============ A very classical problem in the study of computer code experiments (see [@sant:will:notz:2003]) is the evaluation of the relative influence of the input variables on some numerical result obtained by a computer code. In this paradigm, this study is usually called sensitivity analysis and has been widely studied (see for example [@sobol1993], [@saltelli-sensitivity], [@rocquigny2008uncertainty] and references therein). More precisely, the result of the numerical code $Y$ is seen as a function of the vector of distributed inputs $(X_r)_{r=1,\cdots,d}$ ($d\in\N^*$). Statistically speaking, we are dealing here with the noise-free nonparametric model $$Y=f(X_{1},\ldots, X_{d}), \label{momodel}$$ where $f$ is a regular unknown numerical function on the state space $E_1\times E_2\times \ldots \times E_d$ on which the input variables $(X_{1},\ldots, X_{d})$ live. Generally, the random inputs are assumed to be independent and a sensitivity analysis is performed by using the so-called Hoeffding decomposition (see [@van2000asymptotic] and [@anton84]).
In this functional decomposition, $f$ is expanded as an $L^2$-sum of uncorrelated functions involving only a part of the random inputs. For any subset $v$ of $I_d=\{1,\ldots,d\}$, this leads to an index called the Sobol index ([@sobol1993]) that measures the amount of [*randomness*]{} of $Y$ carried by the subset of input variables $(X_i)_{i\in v}$. Since nothing has been assumed on the nature of the inputs, one can consider the vector $(X_i)_{i\in v}$ as a single input. Thus, without loss of generality, let us consider the case where $v$ reduces to a singleton. The numerator $H_{v}$ of the Sobol index related to the input $X_{v}$ is $$H_{v}=\Var\left(\E\left[Y|X_{v}\right]\right)=\Var(Y)-\E\left[\left(Y-\E\left[Y|X_{v}\right]\right)^{2}\right], \label{trucmoche}$$ while the denominator of the index is nothing more than the variance of $Y$. In order to estimate $H_{v}$, the clever trick discovered by Sobol [@sobol1993] is to rewrite the variance of the conditional expectation as a covariance. Further, a well-tailored design of experiment called the Pick and Freeze scheme is considered [@janon2012asymptotic]. More precisely, let $X^v$ be the random vector such that $X^v_v=X_v$ and $X^v_i=X'_i$ if $i\neq v$, where $X'_i$ is an independent copy of $X_i$. Then, setting $$\begin{aligned} \label{def:Yv} Y^{v}:=f(X^v),\end{aligned}$$ an obvious computation leads to the nice relationship $$\label{sobol_cov} \Var (\mathbb{E}(Y|X_v))=\Cov\left(Y,Y^{v}\right).$$ The last equality leads to a natural Monte Carlo estimator (Pick and Freeze estimator) $$\begin{aligned} \label{esteffgen} T^{v}_{N, \mathrm{Cl}}&= \frac{1}{N} \sum_{j=1}^N Y_j Y_j^{{v}} - \left(\frac{1}{2N} \sum_{j=1}^N (Y_j+Y_j^{v})\right)^2,\end{aligned}$$ where, for $j=1,\cdots, N$, the $Y_j$ (resp. $Y_j^{v}$) are independent copies of $Y$ (resp. $Y^{v}$). The sharp statistical properties and some functional extensions of the Pick and Freeze method are considered in [@janon2012asymptotic], [@pickfreeze] and [@radouche].
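As a concrete illustration of the Pick and Freeze estimator $T^{v}_{N,\mathrm{Cl}}$, here is a minimal Monte Carlo sketch; the toy model, the input distributions and the sample size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x1, x2):
    # toy model (an illustrative assumption, not from the paper)
    return x1 + 2.0 * x2

N = 100_000
x1 = rng.standard_normal(N)        # frozen input X_v
x2 = rng.standard_normal(N)
x2_prime = rng.standard_normal(N)  # independent copy X'_2

y = f(x1, x2)                      # Y
y_v = f(x1, x2_prime)              # Y^v: same X_v, fresh remaining input

# T^v_{N,Cl}: Monte Carlo estimate of Var(E[Y|X_v]) = Cov(Y, Y^v)
t_v = np.mean(y * y_v) - (np.mean(y + y_v) / 2.0) ** 2

sobol_index = t_v / np.var(y)      # normalize by Var(Y)
```

For this linear model $\Var(\E[Y|X_1])=1$ and $\Var(Y)=5$, so both `t_v` and `sobol_index` should be close to $1$ and $1/5$ respectively.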
Notice that the Sobol indices and their Monte Carlo estimation are order two methods, since they derive from the $L^2$ Hoeffding functional decomposition. This is the main drawback of this kind of method. As an illustration, consider the following example. Let $X_{1}$ and $X_{2}$ be two independent random variables having the same first four moments (all equal, e.g., to 1) but not sharing the same distribution (so that some higher-order moment differs, e.g. $\E\left[ X_{1}^{5}\right]\neq \E\left[ X_{2}^{5}\right]$). Let us consider the following model $$Y=X_{1}+X_{2}+X_{1}^{2}X_{2}^{2}.$$ Then $$\Var\left(\E\left[Y|X_{1} \right]\right)=\Var(X_{1}+X_{1}^{2})=\Var(X_{2}+X_{2}^{2})=\Var\left(\E\left[Y|X_{2} \right]\right).$$ However, since $Y$ is a symmetric function of the inputs $X_{1}$ and $X_{2}$, which do not share the same distribution, $X_{1}$ and $X_{2}$ should not have the same importance. This shows the need to introduce a sensitivity index that takes into account not only the second order behavior but the whole distribution. As pointed out before, Sobol indices are based on the $L^{2}$ decomposition. As a matter of fact, Sobol indices are well adapted to measure the contribution of an input to the deviation of $Y$ around its mean. However, it seems very intuitive that the sensitivity of an extreme quantile of $Y$ could depend on sets of variables whose influence cannot be read in the variances alone. Thus the same index should not be used for every task, and we need to define better adapted indices. There are several ways to generalize the Sobol indices. One can, for example, define new indices through contrast functions based on the quantity of interest (see [@FKR13]). Unfortunately, the Monte Carlo estimators of these new indices are computationally very expensive. In [@DaVeiga13], the author presents a way to define moment-independent measures through dissimilarity distances. These measures define a unified framework that encompasses some already known sensitivity indices.
Unfortunately, the estimation of such indices relies on density ratio estimation, which can be computationally expensive. Now, as pointed out in [@borgonovo2007], [@grobobo], [@petibobo], [@ODC13] and [@Owen12], there are situations where higher order methods give a sharper analysis of the relative influence of the inputs and allow finer screening procedures. Borgonovo et al. propose and study an index based on the total variation distance (see [@borgonovo2007], [@grobobo] and [@petibobo]), while Owen et al. suggest using procedures based on higher moments (see [@ODC13], [@Owen12]). Our paper follows these tracks. We will first revisit the works of Owen et al. by studying the asymptotic properties of the multiple Pick and Freeze scheme proposed therein for the estimation of higher order Sobol indices. Further, we propose a new natural index based on the Cramér von Mises distance between the distribution of the output $Y$ and its conditional distribution when an input is fixed. We will show that this approach leads to naturally self-normalized indices, as in the case of the Sobol-Hoeffding decomposition of the variance. As a matter of fact, as for Sobol indices, the sum of all first order indices cannot exceed one. Notice that these indices extend naturally to multivariate outputs. Furthermore, we show that, surprisingly, a Pick and Freeze scheme is also available to estimate this new index. The sample size required to build such an estimator is of the same order as the size needed for the classical Sobol index estimation, allowing its use in concrete situations.\ The rest of the paper is divided into three sections. In the next section, we study the statistical properties of the multiple Pick and Freeze method proposed earlier by Owen et al. ([@ODC13], [@Owen12]). Section \[sec:cramer\] is devoted to the new index built on the Cramér von Mises distance. In the last section, we give some numerical simulations that illustrate the interest of the new index.
In particular, we revisit a real data example introduced in [@BD92] and studied in [@FH04] and [@BHP14]. Multiple Pick and Freeze method {#sec:pickfreeze} =============================== Using the classical Hoeffding decomposition, for a singleton $v\in I_{d}$, the numerator of the classical Sobol index with respect to $v$ is given by $$\begin{aligned} \label{trucmoche2} H_{v}^{2}&=\E\left[\left(\E[Y|X_v]-\E[Y]\right)^{2}\right].\end{aligned}$$ Following [@ODC13] and [@Owen12], we generalize this quantity by considering higher order moments. Indeed, for any integer $p\geq 2$, we set $$\begin{aligned} \label{trucmochep} H^{p}_{v}:=\E\left[\left(\E[Y|X_v]-\E[Y]\right)^{p}\right].\end{aligned}$$ Notice that $H_{v}=H^{2}_{v}$. The following lemma gives the Pick and Freeze representation of $H^{p}_{v}$ for $p\geq 2$. \[lemma:covp\] For any $v \in I_d$, one has $$\begin{aligned} \label{sobol_covp} \E\left[\left(\E[Y|X_v]-\E[Y]\right)^{p}\right]=\E\left[\prod_{i=1}^{p}\left(Y^{v,i}-\E[Y]\right)\right]. \end{aligned}$$ Here, $Y^{v,1}=Y$ and, for $i=2,\ldots,p$, $Y^{v,i}$ is constructed independently in the same way as $Y^{v}$ defined in equation \eqref{def:Yv}. Obviously, $H^{p}_{v}$ is non-negative for even $p$ and $${\left\lvert H^{p}_{v} \right\rvert}\leq \E\left[{\left\lvert Y-\E[Y] \right\rvert}^p\right].$$ Further, $H^{p}_{v}$ is invariant by any translation of the output. #### Estimation procedure In view of the estimation of $H^{p}_{v}$, we first expand the product in the right-hand side of \eqref{sobol_covp} to get $$\begin{aligned} H^{p}_{v} & =\sum_{l=0}^{p}\binom{p}{l}(-1)^{p-l} \E\left[Y \right]^{p-l}\E\left[\prod_{i=1}^{l}Y^{v,i}\right],\end{aligned}$$ with the usual convention $\prod_{i=1}^0 Y^{v,i}=1$.
Second, we use a Monte Carlo scheme and consider the Pick and Freeze design constituted by the following $p\times N$-sample $$\begin{pmatrix} Y^{v,i}_{j}\end{pmatrix}_{(i,j)\in I_{p}\times I_{N}}.$$ We define, for any $N\in \N^{*}$, $j\in I_N$ and $l\in I_{p}$, $$\begin{aligned} P^{v}_{l,j}=\binom{p}{l}^{-1}\sum_{k_1<\ldots < k_l\in I_p}\left(\prod_{i=1}^{l} Y^{v,k_i}_{j}\right) \quad \mbox{ and } \quad \overline{P}^{v}_{l}=\frac{1}{N}\sum_{j=1}^{N} P^{v}_{l,j}.\end{aligned}$$ The Monte Carlo estimator is then $$\begin{aligned} \label{trucserieux} H^{v}_{p,N}=\sum_{l=0}^{p}\binom{p}{l}(-1)^{p-l} \left(\overline{P}^{v}_{1}\right)^{p-l} \overline{P}^{v}_{l}.\end{aligned}$$ Notice that we generalize the estimation procedure of [@pickfreeze] and use all the available information by considering the means over the sets of indices $k_1,\ldots, k_l\in I_p$, $k_n\neq k_m$. The following theorem provides the asymptotic properties of $H^{v}_{p,N}$. \[th:TCL\] $H^{v}_{p,N}$ is consistent and asymptotically Gaussian: $$\begin{aligned} \label{eq:TCL} \sqrt{N}\left(H^{v}_{p,N} - H^{v}_{p}\right) \overset{\mathcal{L}}{\underset{N\to\infty}{\rightarrow}}\mathcal{N}\left(0,\sigma^2 \right)\end{aligned}$$ where $$\sigma^2=p \left[\Var(Y)+(p-1)\Cov(Y,Y^{v,2})\right]\left(\sum_{l=1}^p a_lb_l\right)^2,$$ $$a_l=\frac{l}{p}\E[Y]^{l-1},\qquad l=1,\ldots, p,$$ $$b_1=(-1)^{p-1}p(p-1) \E[Y]^{p-1}+\sum_{l=2}^{p-1}\binom{p}{l}(-1)^{p-l} (p-l) \E[Y]^{p-l-1} \E\left[\prod_{i=1}^l Y^{v,i}\right]$$ and $$b_l=\binom{p}{l}(-1)^{p-l}\E[Y]^{p-l},\qquad l=2,\ldots, p.$$ The consistency follows from a straightforward application of the strong law of large numbers.
The asymptotic normality is derived by two successive applications of the delta method [@van2000asymptotic].\ (1) Let $W_j^1=(Y^{v,1}_{j},\ldots, Y^{v,p}_{j})^T$ ($j=1,\ldots, N$) and let $g^1$ be the mapping from $\R^{p}$ to $\R^{p}$ whose $l$-th coordinate is given by $$g^1_l(x_1,\ldots, x_p)=\binom{p}{l}^{-1}\sum_{\begin{matrix}k_1<\ldots < k_l \\ k_{i}\in I_p,i=1,\ldots,l\end{matrix}}\left(\prod_{i=1}^{l} x_{k_i}\right).$$ Let $\Sigma^1$ be the covariance matrix of $W_j^1$. Clearly, one has $\Sigma_{ii}^1=\Var(Y)$ for $i\in I_p$, while $\Sigma_{ij}^1=\Cov(Y^{v,i},Y^{v,j})=\Cov(Y,Y^{v,2})$ for $i\neq j$. The multidimensional central limit theorem gives, with $m=(\E[Y],\ldots, \E[Y])^T$, $$\sqrt{N}\left(\frac1N \sum_{j=1}^N W_j^1-m\right)\overset{\mathcal{L}}{\underset{N\to\infty}{\rightarrow}} \mathcal{N}_{p}\left(0,\Sigma^1\right).$$ We then apply the so-called delta method to $W^1$ and $g^1$, so that $$\sqrt{N}\left(g^1\left(\overline{W}^1_N\right)-g^1\left(\E\left[W^1\right]\right)\right)\overset{\mathcal{L}}{\underset{N\to\infty}{\rightarrow}} \mathcal{N}\left(0,J_{g^1}\left(\E\left[W^1\right]\right)\Sigma^1 J_{g^1}\left(\E\left[W^1\right]\right)^T\right)$$ with $J_{g^1}\left(\E\left[W^1\right]\right)$ the Jacobian of $g^1$ at point $\E\left[W^1\right]$.
Notice that for $l\in I_p$ and $k\in I_p$, $$\frac{\partial g_l^1}{\partial x_k} \left(\E\left[W^1\right]\right)=\frac{\binom{p-1}{l-1}}{\binom{p}{l}} \E[Y]^{l-1}=\frac{l}{p}\E[Y]^{l-1}=:a_l.$$ Thus $\Sigma^2:=J_{g^1}\left(\E\left[W^1\right]\right)\Sigma^1 J_{g^1}\left(\E\left[W^1\right]\right)^T$ is given by $$\Sigma^2_{ij}=pa_ia_j \left(\Sigma_{11}^1+(p-1)\Sigma_{12}^1\right).$$ \(2) Now consider $W_j^2=(P^{v}_{1,j},\ldots, P^{v}_{p,j})^T$ ($j=1,\ldots, N$) and $g^2$ the mapping from $\R^{p}$ to $\R$ defined by $$g^2(y_1,\ldots, y_p)=\sum_{l=0}^{p}\binom{p}{l}(-1)^{p-l} y_1^{p-l} y_{l}.$$ We apply once again the delta method to $W^2$, so that $$\sqrt{N}\left(g^2\left(\overline{W}^2_N\right)-g^2\left(\E\left[W^2\right]\right)\right)\overset{\mathcal{L}}{\underset{N\to\infty}{\rightarrow}} \mathcal{N}\left(0,J_{g^2}\left(\E\left[W^2\right]\right)\Sigma^2 J_{g^2}\left(\E\left[W^2\right]\right)^T\right)$$ with $J_{g^2}\left(\E\left[W^2\right]\right)$ the Jacobian of $g^2$ at point $\E\left[W^2\right]$. Notice that $$\begin{aligned} \frac{\partial g^2}{\partial y_1} \left(\E\left[W^2\right]\right)&= (-1)^{p-1}p(p-1) \E[Y]^{p-1} \\ &+\sum_{l=2}^{p-1}\binom{p}{l}(-1)^{p-l} (p-l) \E[Y]^{p-l-1} \E\left[\prod_{i=1}^l Y^{v,i}\right]\end{aligned}$$ and, for $l=2,\ldots,p$, $$\begin{aligned} \frac{\partial g^2}{\partial y_l} \left(\E\left[W^2\right]\right)&= \binom{p}{l}(-1)^{p-l}\E[Y]^{p-l}.\end{aligned}$$ Thus the limiting variance is $$\sigma^2:=J_{g^2}\left(\E\left[W^2\right]\right)\Sigma^2 J_{g^2}\left(\E\left[W^2\right]\right)^T=p \left(\Sigma_{11}^1+(p-1)\Sigma_{12}^1\right)\left(\sum_{i=1}^p a_ib_i\right)^2,$$ where $b_i$ is the $i$-th coordinate of $\nabla g^2\left(\E\left[W^2\right]\right)$. The collection of all indices $H^{p}_{v}$ is much more informative than the classical Sobol index. Nevertheless it has several drawbacks: it may be negative when $p$ is odd.
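As a sanity check of the Pick and Freeze representation of $H^{p}_{v}$ above, here is a minimal Monte Carlo sketch for $p=3$; the toy model, the input laws and the sample size are illustrative assumptions, and for brevity it uses the plain product form of the lemma rather than the symmetrized estimator $H^{v}_{p,N}$.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x1, x2):
    # toy model (illustrative assumption): E[Y|X_1] = X_1, E[Y] = 1
    return x1 + x2

p, N = 3, 200_000
x1 = rng.exponential(1.0, N)  # frozen input X_v, skewed on purpose
# p independent copies of the remaining input give Y^{v,1},...,Y^{v,p}
yv = np.stack([f(x1, rng.standard_normal(N)) for _ in range(p)])

mu = yv.mean()  # Monte Carlo estimate of E[Y]
# Lemma: H^p_v = E[ prod_{i=1}^p (Y^{v,i} - E[Y]) ]
h_p = np.prod(yv - mu, axis=0).mean()
```

Here $\E[Y|X_1]=X_1$ and $\E[Y]=1$, so $H^{3}_{1}$ equals the third central moment of the exponential distribution, i.e. $2$; `h_p` should be close to that value.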
To overcome this sign issue, we could have introduced $\E\left[{\left\lvert \E[Y|X_i, i\in v]-\E[Y] \right\rvert}^{p}\right]$, but proceeding in such a way, we would have lost the Pick and Freeze estimation procedure. The Pick and Freeze estimation procedure is computationally expensive: it requires a $p\times N$ sample of the output $Y$. In a sense, if we want to have a good idea of the influence of an input on the law of the output, we need to estimate the first $d$ indices $H^{p}_{v}$, and hence we need to run the black-box code $K\times N$ times. Moreover, these indices are moment based, and it is well known that moment estimators are not stable when the moment order increases. In the next section, we introduce a new sensitivity index that is based on the conditional distribution of the output and requires only $3\times N$ evaluations of the code. The Cramér von Mises index {#sec:cramer} ========================== In this section the code will be denoted by $Z=f(X_{1},\ldots, X_{d})\in\R^k$. Let $F$ be the distribution function of $Z$. For any $t=(t_1,\ldots, t_k)\in\R^k$, $$F(t)=\P\left(Z\leq t\right)=\E\left[\mathbf{1}_{\{Z\leq t\}}\right]$$ and let $F^{v}(t)$ be the conditional distribution function of $Z$ given $X_{v}$: $$F^{v}(t)=\P\left(Z\leq t|X_{v}\right)=\E\left[\mathbf{1}_{\{Z\leq t\}}|X_{v}\right].$$ Notice that $\{Z\leq t\}$ means $\{Z_1\leq t_1, \ldots, Z_k\leq t_k\}$. Obviously, $\E\left[F^{v}(t)\right]=F(t)$. Now, we apply the framework presented in Section \[sec:pickfreeze\] with $Y(t)=\mathbf{1}_{\{Z\leq t\}}$ and $p=2$.
Hence, for $t\in \R^{k}$ fixed, we have a consistent and asymptotically normal estimation procedure for $$\E\left[\left(F(t)-F^{v}(t)\right)^{2}\right].$$ We define a Cramér von Mises type distance of order $2$ between $\mathcal{L}\left(Z\right)$ and $\mathcal{L}\left(Z|X_{v}\right)$ by $$\begin{aligned} \label{CVM} D_{2,CVM}^{v}:=\int_{\R^{k}}\E\left[\left(F(t)-F^{v}(t)\right)^{2}\right]dF(t).\end{aligned}$$ The rest of the section is dedicated to the estimation of $D_{2,CVM}^{v}$ and to the study of the asymptotic properties of the estimator. Notice that $$\begin{aligned} \label{CVM_esp} D_{2,CVM}^{v}=\E\left[\E\left[\left(F(Z)-F^{v}(Z)\right)^{2}\right]\right].\end{aligned}$$ Let us note that these indices are naturally adapted to multivariate outputs. Unlike the procedure for $p=2$, we did not normalize the generalized Sobol index of $Y(t)$. The purpose, which becomes clear in this section, is to avoid numerical explosion during the estimation procedure. Indeed, the normalizing term would be $F(t)(1-F(t))$, as in the Anderson-Darling statistic, which vanishes for small and large values of $t$. Nevertheless, in view of the following proposition, one can consider $4 D_{2,CVM}^{v}$ instead of $D_{2,CVM}^{v}$ in order to have an index bounded by 1, as for the Sobol index. The asymptotic properties are not affected by this renormalizing factor, so we still consider $D_{2,CVM}^{v}$. One has the following properties. 1. $0\leq D_{2,CVM}^{v}\leq \frac 14$. Moreover, if $k=1$ and $F$ is continuous, we have $0\leq D_{2,CVM}^{v}\leq \frac 16$. 2. $D_{2,CVM}^{v}$ is invariant by translation and by left-composition of $Y$ with any nonzero scaling. We then proceed to a double Monte-Carlo scheme for the estimation of $D_{2,CVM}^{v}$ and consider the following design of experiment consisting in: 1. two $N$-samples of $Z$: $(Z_j^{v,1},Z_j^{v,2})$, $1\leq j\leq N$; 2.
a third $N$-sample of $Z$ independent of $(Z_j^{v,1},Z_j^{v,2})_{1\leq j\leq N}$: $W_k$, $1\leq k\leq N$. The empirical estimator of $D_{2,CVM}^{v}$ is then given by $$\widehat D_{2,CVM}^{v}=\frac 1N \sum_{k=1}^N \left\{\frac 1N \sum_{j=1}^N \mathbf{1}_{\{Z_j^{v,1}\leq W_k\}} \mathbf{1}_{\{Z_j^{v,2}\leq W_k\}}-\left[\frac {1}{2N} \sum_{j=1}^N \left(\mathbf{1}_{\{Z_j^{v,1}\leq W_k\}}+ \mathbf{1}_{\{Z_j^{v,2}\leq W_k\}}\right)\right]^2\right\}.$$ The consistency of $\widehat D_{2,CVM}^{v}$ follows directly from the following lemma: \[lem:cv\] Let $G$ and $H$ be two bounded measurable functions. Let $(U_j)_{j\in I_N}$ and $(V_k)_{k\in I_N}$ be two independent samples of i.i.d. random variables such that $\E[G(U_1,V_1)]=0$ and $\E[H(U_1,U_2,V_1)]=0$. We define $S_N$ and $T_N$ by $$S_N=\frac{1}{N^2} \sum_{j,k=1}^N G(U_j,V_k) \quad {\textrm}{and} \quad T_N=\frac{1}{N^3} \sum_{i,j,k=1}^N H(U_i,U_j,V_k).$$ Then $S_N$ and $T_N$ converge a.s. to 0 as $N$ goes to infinity. \(i) If we prove that $\E[S_N^4]=O\left(\frac{1}{N^2}\right)$, then the Borel-Cantelli lemma yields the almost sure convergence of $S_N$ to 0. Clearly, $$\begin{aligned} \E[S_N^4]&=\frac{1}{N^8} \sum \E[G(U_{i_1},V_{j_1})G(U_{i_2},V_{j_2})G(U_{i_3},V_{j_3})G(U_{i_4},V_{j_4})]\end{aligned}$$ where the sum is taken over all the indices $i_1$, $i_2$, $i_3$, $i_4$, $j_1$, $j_2$, $j_3$, $j_4$ from 1 to $N$. The only scenarios that could lead to terms in $O\left(\frac{1}{N}\right)$ or even $O\left(1\right)$ appear when we sum over indices that are all different except two $i$'s or two $j$'s, or over indices that are all different. Nevertheless, in those cases, at least one factor of the form $\E[G(U_{i},V_{j})]$ appears.
Since the function $G$ is centered, those scenarios are then discarded.\ \ (ii) Analogously, it suffices to show that $\E[T_N^4]=O\left(\frac{1}{N^2}\right)$. The only scenarios that could lead to terms in $O\left(\frac{1}{N}\right)$ or even $O\left(1\right)$ appear when we sum over indices that are all different except two $i$'s, two $j$'s or two $k$'s, or over indices that are all different. Nevertheless, in those cases, at least one factor of the form $\E[H(U_{i},U_{j},V_{k})]$ appears. Since the function $H$ is centered, those scenarios are then discarded. \[cor:cons\_D2\_est\] $\widehat D_{2,CVM}^{v}$ is strongly consistent as $N$ goes to infinity. The proof is based on Lemma \[lem:cv\]. First, we define $Z_j=\left(Z_j^{v,1},Z_j^{v,2}\right)$, $G(Z_j,W_k)=\mathbf{1}_{\{Z_j^{v,1}\leq W_k\}} \mathbf{1}_{\{Z_j^{v,2}\leq W_k\}}$, $\widetilde F(Z_j,W_k)=\frac{1}{2}\left(\mathbf{1}_{\{Z_j^{v,1}\leq W_k\}}+ \mathbf{1}_{\{Z_j^{v,2}\leq W_k\}}\right)$ and $H(Z_i,Z_j,W_k)=\widetilde F(Z_i,W_k)\widetilde F(Z_j,W_k)$.
Second, we proceed to the following decomposition: $$\begin{aligned} \widehat D_{2,CVM}^{v}&=\frac 1N \sum_{k=1}^N \left\{\frac 1N \sum_{j=1}^N {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_j^{v,1}\leqp W_k\}} {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_j^{v,2}\leqp W_k\}}-\left[\frac {1}{2N} \sum_{j=1}^N \left({{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_j^{v,1}\leqp W_k\}}+ {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_j^{v,2}\leqp W_k\}}\right)\right]^2\right\}\\ &=\frac{1}{N^2} \sum_{j,k=1}^N {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_j^{v,1}\leqp W_k\}} {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_j^{v,2}\leqp W_k\}}-\frac {1}{4N^3} \sum_{i,j,k=1}^N \left({{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_i^{v,1}\leqp W_k\}}+ {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_i^{v,2}\leqp W_k\}}\right)\left({{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_j^{v,1}\leqp W_k\}}+ {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_j^{v,2}\leqp W_k\}}\right)\\ &=\frac{1}{N^2} \sum_{j,k=1}^N G(Z_j,W_k)-\frac {1}{N^3} \sum_{i,j,k=1}^N H(Z_i,Z_j,W_k)\\ &=\frac{1}{N^2} \sum_{j,k=1}^N \left\{G(Z_j,W_k)-\E[G(Z_j,W_k)]\right\}-\frac {1}{N^3} \sum_{i,j,k=1}^N \left\{H(Z_i,Z_j,W_k)-\E[H(Z_i,Z_j,W_k)]\right\}\\ &{\textrm}{~~~~}+ \frac {1}{N^2} \sum_{j,k=1}^N \E[G(Z_j,W_k)]- \frac {1}{N^3} \sum_{i,j,k=1}^N \E[H(Z_i,Z_j,W_k)]\\ &=\frac{1}{N^2} \sum_{j,k=1}^N \left\{G(Z_j,W_k)-\E[G(Z_j,W_k)]\right\}-\frac {1}{N^3} \sum_{i,j,k=1}^N \left\{H(Z_i,Z_j,W_k)-\E[H(Z_i,Z_j,W_k)]\right\}\\ &{\textrm}{~~~~}+ \E[G(Z_1,W_1)]- \left(1-\frac {1}{N}\right) \E[H(Z_1,Z_2,W_1)]-\frac {1}{N} \E[H(Z_1,Z_1,W_1)].\\\end{aligned}$$ The first two sums converge almost surely to 0 by Lemma \[lem:cv\]. The remaining term goes to $\E[G(Z_1,W_1)]- \E[H(Z_1,Z_2,W_1)]$ as $N$ goes to infinity.\ \ It remains to show that $D_{2,CVM}^{v}=\E[G(Z_1,W_1)]- \E[H(Z_1,Z_2,W_1)]$.
On the one hand, $$\begin{aligned} D_{2,CVM}^{v}&=\int_{\R}\E[(F(t)-F^v(t))^2]dF(t)=\E[H_v^2(W)]\\ &=\E[\Cov({{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,1}\leqp W_1\}} ,{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,2}\leqp W_1\}})]\\ &=\E_W[\E_Z[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,1}\leqp W_1\}} {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,2}\leqp W_1\}}]-\E_Z[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,1}\leqp W_1\}}]^2].\\\end{aligned}$$ On the other hand, $$\begin{aligned} &\E[G(Z_1,W_1)]- \E[H(Z_1,Z_2,W_1)]\\ &=\E[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,1}\leqp W_1\}} {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,2}\leqp W_1\}}]- \frac{1}{4}\E[\left({{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,1}\leqp W_1\}}+ {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,2}\leqp W_1\}}\right)\left({{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_2^{v,1}\leqp W_1\}}+ {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_2^{v,2}\leqp W_1\}}\right)]\\ &=\E_W[\E_Z[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,1}\leqp W_1\}} {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,2}\leqp W_1\}}]]-\E[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,1}\leqp W_1\}}{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_2^{v,2}\leqp W_1\}}]\\ &=\E_W[\E_Z[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,1}\leqp W_1\}} {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,2}\leqp W_1\}}]]-\E[\E[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,1}\leqp W_1\}}{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_2^{v,2}\leqp W_1\}}\vert W_1]]\\ &=\E_W[\E_Z[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,1}\leqp W_1\}} 
{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,2}\leqp W_1\}}]]-\E[\E[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,1}\leqp W_1\}}\vert W_1]\E[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_2^{v,2}\leqp W_1\}}\vert W_1]]\\ &=\E_W[\E_Z[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,1}\leqp W_1\}} {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,2}\leqp W_1\}}]]-\E[\E[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,1}\leqp W_1\}}\vert W_1]]\E[\E[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_2^{v,2}\leqp W_1\}}\vert W_1]]\\ &=\E_W[\E_Z[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,1}\leqp W_1\}} {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,2}\leqp W_1\}}]]-\E[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,1}\leqp W_1\}}]\E[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_2^{v,2}\leqp W_1\}}]\\ &=\E_W[\E_Z[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,1}\leqp W_1\}} {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,2}\leqp W_1\}}]]-\E[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,1}\leqp W_1\}}]^2\\ &=\E_W[\E_Z[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,1}\leqp W_1\}} {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,2}\leqp W_1\}}]-\E_Z[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_1^{v,1}\leqp W_1\}}]^2].\end{aligned}$$ We now turn to the asymptotic normality of $\widehat D_{2,CVM}^{v}$. We follow van der Vaart [@van2000asymptotic] to establish the following proposition (more precisely Theorems 20.8 and 20.9, Lemma 20.10 and Example 20.11). 
\[th:as\_norm\_D2\_est\] The sequence of estimators $\widehat D_{2,CVM}^{v}$ is asymptotically Gaussian in estimating $D_{2,CVM}^{v}$, that is, $\sqrt{N}\left(\widehat D_{2,CVM}^{v}- D_{2,CVM}^{v}\right)$ converges weakly to a centered Gaussian variable with variance $\xi^2$ given by \[eq:var\_xi\]. We define $$\begin{aligned} &\mathbb{G}_N^i(t)=\frac 1N \sum_{j=1}^N {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_j^{v,i}\leqp t\}},\; i=1,2,\\ &\mathbb{G}_N^{1,2}(t)=\frac 1N \sum_{j=1}^N {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_j^{v,1}\leqp t\}} {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z_j^{v,2}\leqp t\}},\\ &\mathbb{F}_N(t)=\frac 1N \sum_{k=1}^N {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{W_k\leqp t\}},\end{aligned}$$ and rewrite $\widehat D_{2,CVM}^{v}$ as a regular function of the empirical processes defined above: $$\begin{aligned} \widehat D_{2,CVM}^{v}&= \int \left[\mathbb{G}_N^{1,2}-\left(\frac{\mathbb{G}_N^1+\mathbb{G}_N^2}{2}\right)^2\right]d\mathbb{F}_N.\\\end{aligned}$$ Since these processes are càd-làg functions of bounded variation, we introduce the maps $\psi_1,\; \psi_2:BV_1[-\infty,+\infty]^2\mapsto \R$ and $\Psi:BV_1[-\infty,+\infty]^4\mapsto \R$ defined by $$\psi_i(F_1,F_2)=\int (F_1)^idF_2 \quad {\textrm}{and} \quad \Psi(F_1,F_2,F_3,F_4)=\psi_1(F_3,F_4)-\psi_2\left(\frac{F_1+F_2}{2},F_4\right),$$ where $BV_M[a,b]$ is the set of càd-làg functions of variation bounded by $M$.\ By Donsker’s theorem, $$\sqrt{N}\left(\mathbb{G}_N^1-F,\mathbb{G}_N^2-F,\mathbb{G}_N^{1,2}-\widetilde{G},\mathbb{F}_N-F\right)\overset{\mathcal{L}}{\underset{N\to\infty}{\rightarrow}}\mathbb{G}$$ where $G(t,s)=\P\left(Z^{v,1}\leqp t,\; Z^{v,2}\leqp s\right)$, $\widetilde{G}(t)=G(t,t)$ and $\mathbb{G}$ is a centered Gaussian process of dimension 4 with covariance function defined for $(t,s) \in \R^2$ by $$\Pi(t,s)=\E\left(X_tX_s^T\right)-\E\left(X_t\right)\E\left(X_s\right)^T$$ and
$X_t:=\left({{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z^{v,1}\leqp t\}}, {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z^{v,2}\leqp t\}},{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z^{v,1}\leqp t\}} {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z^{v,2}\leqp t\}},{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{W\leqp t\}}\right)^T$.\ By the chain rule (Theorem 20.9) and Lemma 20.10 in [@van2000asymptotic], the map $\Psi$ is Hadamard-differentiable from the domain $BV_1[-\infty,+\infty]^4$ into $\R$. Its derivative is given by $$(h_1,h_2,h_3,h_4)\mapsto (\psi_1)'_{(F_3,F_4)}(h_3,h_4)-(\psi_2)'_{\left(\frac{F_1+F_2}{2},F_4\right)}\left(\frac{h_1+h_2}{2},h_4\right)$$ where the derivatives of $\psi_1$ (resp. $\psi_2$) are given by Lemma 20.10: $$(h_1,h_2)\mapsto h_2\varphi \circ F_1\vert_{-\infty}^{+\infty}-\int h_{2-}d\varphi \circ F_1+\int \varphi'(F_1)h_1dF_2$$ taking $\varphi\equiv Id$ (resp. $\varphi(x)=x^2$), where $h_-$ denotes the left-continuous version of a càd-làg function $h$.\ Since $$\begin{aligned} \widehat D_{2,CVM}^{v}&= \Psi\left(\mathbb{G}_N^1,\mathbb{G}_N^2,\mathbb{G}_N^{1,2},\mathbb{F}_N\right),\end{aligned}$$ we apply the functional delta method (Theorem 20.8 in [@van2000asymptotic]) to deduce that $\sqrt{N}\left(\widehat D_{2,CVM}^{v}- D_{2,CVM}^{v}\right)$ converges weakly to the limit distribution $$\int h_{4-}d(F^2-\widetilde{G})+\int h_3dF-\int F(h_1+h_2)dF,$$ where $(h_1,h_2,h_3,h_4)$ denotes the limiting process $\mathbb{G}$. Since the map $\Psi$ is defined and continuous on the whole space $BV_1[-\infty,+\infty]^4$, the delta method in its stronger form (Theorem 20.8 in [@van2000asymptotic]) implies that the limit variable is the limit in distribution of the sequence $$\begin{aligned} &\Psi_{(F,F,\widetilde{G},F)}'\left(\sqrt{N}\left(\mathbb{G}_N^1-F,\mathbb{G}_N^2-F,\mathbb{G}_N^{1,2}-\widetilde{G},\mathbb{F}_N-F\right)\right)\\ &=\sqrt{N} \left[\int \left(\mathbb{F}_N-F\right)_-d\left(F^2-\widetilde G\right)+\int
\left(\mathbb{G}_N^{1,2}-\widetilde{G}-F\left(\mathbb{G}_N^{1}+\mathbb{G}_N^{2}-2F\right)\right)dF\right].\end{aligned}$$ We define $$\begin{aligned} U&:=\int {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{W< t\}} d\left(F^2(t)-G(t,t)\right)=G(W_+,W_+)-F(W_+)^2,\\ V&:=\int \left[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z^{v,1}\leqp t\}}{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z^{v,2}\leqp t\}}-\left({{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z^{v,1}\leqp t\}}+{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\{Z^{v,2}\leqp t\}}\right)F(t)\right] dF(t)=\frac 12 \left(F(Z^{v,1})^2+F(Z^{v,2})^2\right)-F(Z^{v,1}\vee Z^{v,2}).\end{aligned}$$ Obviously, $$\begin{aligned} &\E(U)=\int \left(G(t_+,t_+)-F(t_+)^2\right)dF(t),\\ &\E(U^2)=\int \left(G(t_+,t_+)-F(t_+)^2\right)^2dF(t),\\ &\E(V)=\int \left(F(t)^2-G(t,t)\right)dF(t),\\ &\E(V^2)=\frac 12 \int F(t)^4dF(t)+\iint \left[F(t\vee s)\left(F(t\vee s)-F(t)^2-F(s)^2\right)+\frac 12 F(t)^2F(s)^2\right]dG(t,s).\end{aligned}$$ By independence, the limiting variance $\xi^2$ is $$\begin{aligned} \label{eq:var_xi} \xi^2=\Var U+\Var V.\end{aligned}$$ Numerical applications {#snum} ====================== A flavour of the method in a toy model -------------------------------------- Let us consider the simple linear model $$Y=\alpha X_1+ X_2,\;\; \alpha>0,$$ where $X_1$ has the Bernoulli distribution with success probability $p$ and $X_1$, $X_2$ are independent. Assume further that $X_2$ has a continuous distribution function $F$ on $\R$ with finite variance $\sigma^2$; denote its mean by $\mu=\E[X_2]$ and assume $\sigma^2=\alpha^2p(1-p)$. With these choices, the random variables $\alpha X_1$ and $X_2$ share the same variance and $X_1$ and $X_2$ have the same first-order Sobol indices ($1/2$). On the one hand, the conditional distribution of $Y$ given $X_1=0$ is that of $X_2$, and the conditional distribution of $Y$ given $X_1=1$ is $F(\cdot-\alpha)$.
On the other hand, the conditional distribution of $Y$ given $X_2$ is determined by $$\P\left(Y=\alpha+X_2|X_2\right)=1-\P\left(Y=X_2|X_2\right)=p.$$ Hence, the distribution function of $Y$ is the mixture $pF(\cdot-\alpha)+(1-p)F(\cdot)$. Tedious computations lead to $$\begin{aligned} D_{2,CVM}^{1}= p(1-p)\int_\R(F(t)-F(t-\alpha))^2\left[(1-p)dF(t)+pdF(t-\alpha)\right] \label{H1_ex1}\end{aligned}$$ and $$\begin{aligned} D_{2,CVM}^2=\frac 16-p(1-p)\left[\frac{1}{2}-\int_\R F(t-\alpha)dF(t)\right]. \label{H2_ex1}\end{aligned}$$ As $p$ goes to $0$ (and $\alpha$ goes to infinity), $D_{2,CVM}^{1}$ goes to $0$ and $D_{2,CVM}^{2}$ goes to $1/6$, while the two classical Sobol indices remain equal to $1/2$. Our new indices shed light on the fact that, for small $p$, $X_2$ is much more influential on $Y$ than $X_1$, which matches intuition but is lost when one computes the classical Sobol indices.\ Similarly, we can compute the indices of order $q$ ($q\geqp 2$): $$\begin{aligned} H_1^q&=\alpha^q \left[p(1-p)^q+(-p)^q(1-p)\right]\\ H_2^q&=\E[(X_2-\mu)^q].\end{aligned}$$ [Some examples]{} (i) if $X_2$ is a centered Gaussian with variance $\sigma^2=\alpha^2p(1-p)$, one can easily derive an explicit formula for the second index of order $q$: $$\begin{aligned} H_2^q&=\E[(X_2-\mu)^q]=\left\{ \begin{array}{ll} 0 & {\textrm}{if}\; q\; \textrm{is an odd number}\\ \sigma^q\,\frac{q!}{2^{q/2}\cdot (q/2)!} & \textrm{else}.
\end{array} \right.\end{aligned}$$ \(ii) if $X_2$ is uniformly distributed on $[0,b]$ with $b=2 \alpha \sqrt{3p(1-p)}$, one can easily derive explicit formulas for the different indices introduced before: $$\begin{aligned} D_{2,CVM}^{1}=p(1-p)\times\left\{ \begin{array}{ll} \left(\frac{\alpha}{b}\right)^2\left(1-\frac 23 \frac{\alpha}{b}\right) & {\textrm}{if}\; \alpha\leqp b\\ 1/3 & \textrm{else}, \end{array} \right.\end{aligned}$$ $$\begin{aligned} D_{2,CVM}^2=\frac 16 -\frac{p(1-p)}{2}\left(1-\left(\frac{b-\alpha}{b}\right)^2 {{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\alpha\leqp b}\right) \end{aligned}$$ and $$\begin{aligned} H_2^q&=\E[(X_2-\mu)^q]=\left\{ \begin{array}{ll} 0 & {\textrm}{if}\; q\; \textrm{is an odd number}\\ (b/2)^q/(q+1) & \textrm{else}. \end{array} \right.\end{aligned}$$ \(iii) if $X_2$ is exponentially distributed with mean $1/\lambda=\alpha \sqrt{p(1-p)}$, one can easily derive explicit formulas for the different indices introduced before: $$\begin{aligned} D_{2,CVM}^{1}=\frac{p(1-p)}{3}(1-e^{-\lambda \alpha})^2 \quad {\textrm}{and} \quad D_{2,CVM}^2=\frac 16 -\frac{p(1-p)}{2}(1-e^{-\lambda \alpha})\end{aligned}$$ and $$\begin{aligned} H_2^q&=\E[(X_2-\mu)^q]=\frac{q!}{2}\lambda^{-q}.\end{aligned}$$ ![Example 1 - $X_2$ Gaussian distributed. []{data-label="fig:simu_borgo_gauss"}](simu_borgo_gauss.png){width="10cm" height="10cm"} ![Example 1 - $X_2$ uniformly distributed.[]{data-label="fig:simu_borgo_unif"}](simu_borgo_unif.png){width="10cm" height="10cm"} ![Example 1 - $X_2$ exponentially distributed.[]{data-label="fig:simu_borgo_expo"}](simu_borgo_expo.png){width="10cm" height="10cm"} The results are presented in Figures \[fig:simu\_borgo\_gauss\] to \[fig:simu\_borgo\_expo\]. The solid blue line (resp. the dashed red line) represents the true value of the index $D^1_{2,CVM}$ (resp. $D^2_{2,CVM}$), while the blue line marked with 'o' (resp. the dashed red line marked with '+') represents the corresponding estimate.
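These limiting behaviours can also be checked directly by simulation. The following self-contained Python sketch (our code; all function and variable names are ours, not from the paper) estimates $D^1_{2,CVM}$ and $D^2_{2,CVM}$ by the Pick-Freeze device in the Gaussian case with a small success probability $p$: the estimate of $D^1_{2,CVM}$ stays close to $0$ while that of $D^2_{2,CVM}$ approaches $1/6$, even though both first-order Sobol indices equal $1/2$.

```python
import numpy as np

def cvm_pick_freeze(y1, y2, w):
    """Empirical Pick-Freeze estimator of the Cramer-von-Mises index:
    y1, y2 share the frozen input; w is an independent third sample of Y."""
    a = y1[:, None] <= w[None, :]        # entry (j, k) is 1{y1_j <= w_k}
    b = y2[:, None] <= w[None, :]
    joint = (a & b).mean(axis=0)         # inner empirical average over j
    marg = 0.5 * (a.mean(axis=0) + b.mean(axis=0))
    return float((joint - marg ** 2).mean())  # outer average over k

rng = np.random.default_rng(1)
N, p, alpha = 3000, 0.05, 1.0
sigma = alpha * np.sqrt(p * (1 - p))     # so that Var(alpha X1) = Var(X2)

x1 = rng.binomial(1, p, N)
x2 = rng.normal(0.0, sigma, N)
x1_new = rng.binomial(1, p, N)           # fresh copy of X1
x2_new = rng.normal(0.0, sigma, N)       # fresh copy of X2
w = alpha * rng.binomial(1, p, N) + rng.normal(0.0, sigma, N)

d1 = cvm_pick_freeze(alpha * x1 + x2, alpha * x1 + x2_new, w)  # X1 frozen
d2 = cvm_pick_freeze(alpha * x1 + x2, alpha * x1_new + x2, w)  # X2 frozen
```

With these parameter choices one Monte Carlo run suffices to see the ordering; the full curves of the figures above are obtained by repeating this over a grid of $p$.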
A nonlinear model ------------------ Let us now consider the nonlinear model $$Y=\exp\{ X_1+ 2X_2\},$$ where $X_1$ and $X_2$ are independent standard Gaussian random variables. We can directly derive the density function of the output $Y$ and its distribution function: $$f_Y(y)=\frac{1}{\sqrt{10\pi}y}e^{-(\ln y)^2/10}{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{\R^+}(y)\quad {\textrm}{and} \quad F_Y(y)=\Phi\left(\frac{\ln y}{\sqrt 5}\right)$$ where $\Phi$ stands for the distribution function of the standard Gaussian random variable; its density function will be denoted by $f$ in the sequel. Then tedious computations lead to the Cramér von Mises indices $D_{2,CVM}^{1}$ and $D_{2,CVM}^{2}$: $$\begin{aligned} D_{2,CVM}^{1}=\frac{1}{\pi}\arctan 2-\frac 13\approx 0.019 \label{H1_ex2}\end{aligned}$$ and $$\begin{aligned} D_{2,CVM}^2=\frac{1}{\pi}\arctan \sqrt{19}-\frac 13\approx 0.095. \label{H2_ex2}\end{aligned}$$ First of all, the distribution function of $Y|X_1$ is given by $$\begin{aligned} F^{(1)}(t)&=\P(Y\leqp t|X_1)=\Phi\left(\frac{\ln t-X_1}{2}\right).\end{aligned}$$ Then $$\begin{aligned} D_{2,CVM}^{1}&=\int_{\R} \E\left[(F^{(1)}(t)-F_Y(t))^2\right]f_Y(t)dt\\ &=\int_{\R^+} \E\left[\left(\Phi\left(\frac{\ln t-X_1}{2}\right)-\Phi\left(\frac{\ln t}{\sqrt 5}\right)\right)^2\right]\frac{1}{\sqrt{10\pi}t}e^{-(\ln t)^2/10}dt\\ &=\int_{\R} \E\left[\left(\Phi\left(\frac{\sqrt 5 z-X_1}{2}\right)-\Phi\left(z\right)\right)^2\right]e^{-z^2/2}\frac{dz}{\sqrt{2\pi}}\\ &=\E\left[\left(\Phi(X_2)-\Phi\left(\frac{\sqrt 5 X_2-X_1}{2}\right)\right)^2\right]\end{aligned}$$ where $X_1$ and $X_2$ are independent standard Gaussian random variables.
In the same way, $$\begin{aligned} D_{2,CVM}^{2}&=\E\left[(\Phi(X_2)-\Phi\left(\sqrt 5 X_2-2X_1\right))^2\right].\end{aligned}$$ Thus we are led to compute the bivariate function $$\varphi(\alpha,\beta):=\E\left[(\Phi(X_2)-\Phi\left(\alpha X_2-\beta X_1\right))^2\right]$$ at $(\alpha,\beta)=(\sqrt 5/2,1/2)$ and $(\alpha,\beta)=(\sqrt 5,2)$. The term $\E\left[\Phi(X_2)^2\right]$ is $$\begin{aligned} \E\left[\Phi(X_2)^2\right]&= \int \Phi(z)^2 f(z)dz=\left[\frac 13 \Phi(z)^3\right]_{-\infty}^{+\infty}=\frac 13.\end{aligned}$$ We introduce independent random variables $U$, $U'$ and $V$, where $U$ and $U'$ are standard Gaussian and $V$ is a centered Gaussian with variance $\alpha^2+\beta^2$. Then the term $\E\left[\Phi\left(\alpha X_2-\beta X_1\right)^2\right]$ can be rewritten as $$\begin{aligned} \E\left[\Phi\left(\alpha X_2-\beta X_1\right)^2\right]&=\E\left[\Phi\left(V\right)^2\right]=\E\left[\E\left[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{U\leqp V}\vert V\right]^2\right]\\ &=\E\left[\E\left[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{U\leqp V}\vert V\right]\E\left[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{U'\leqp V}\vert V\right]\right]=\E\left[\E\left[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{U\leqp V}{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{U'\leqp V}\vert V\right]\right]\\ &=\E\left[{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{U\leqp V}{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{U'\leqp V}\right]=\P\left(U\leqp V,\, U'\leqp V\right).\end{aligned}$$ Let $G$ be the real-valued function defined on $\R$ by $G(a)=\P\left(U\leqp aV,\, U'\leqp aV\right)$ where $U$, $U'$ and $V$ are independent standard Gaussian random variables. We want to compute $G(\sqrt{\alpha^2+\beta^2})$.
Integrating by parts, we have $$\begin{aligned} G'(a) &=2 \int_{\R} z\Phi(az)e^{-(a^2+1)z^2/2}\frac{dz}{2\pi}\\ &=-\frac{1}{\pi(a^2+1)} \left(\left[\Phi(az)e^{-(a^2+1)z^2/2}\right]_{-\infty}^{+\infty}-a\int_{\R} f(az)e^{-(a^2+1)z^2/2}dz\right)\\ &=\frac{a}{\pi(a^2+1)}\frac{1}{\sqrt{2a^2+1}}.\end{aligned}$$ Since $G(1)=1/3$, we get $$\begin{aligned} G(a)&=\frac 13 + \int_{1}^a \frac{x}{\pi(x^2+1)}\frac{1}{\sqrt{2x^2+1}}dx=\frac 13 + \frac{1}{\pi}(\arctan \sqrt{1+2a^2}-\arctan \sqrt 3)=\frac{1}{\pi}\arctan \sqrt{1+2a^2}\\\end{aligned}$$ and $$\begin{aligned} \E\left[\Phi\left(\alpha X_2-\beta X_1\right)^2\right]&=\frac 13 + \frac{1}{\pi}(\arctan \sqrt{1+2(\alpha^2+\beta^2)}-\arctan \sqrt 3)=\frac{1}{\pi}\arctan \sqrt{1+2(\alpha^2+\beta^2)}.\end{aligned}$$ In the same way, the last term $\E\left[\Phi(X_2)\Phi\left(\alpha X_2-\beta X_1\right)\right]$ is given by $$\begin{aligned} \E\left[\Phi(X_2)\Phi\left(\alpha X_2-\beta X_1\right)\right] &=\P\left(U\leqp V,\, \sqrt{\frac{1+\beta^2}{\alpha^2}}U'\leqp V\right)\end{aligned}$$ where $U$, $U'$ and $V$ are independent standard Gaussian random variables. Recall that we only need to consider $(\alpha,\beta)=(\sqrt 5/2,1/2)$ and $(\alpha,\beta)=(\sqrt 5,2)$, in which cases $\sqrt{\frac{1+\beta^2}{\alpha^2}}=1$. Thus this last term equals $1/3$ in both cases, which leads to the result. In the previous proof, we showed that $$\begin{aligned} G(a)&:=\P\left(U\leqp aV,\, U'\leqp aV\right)=\frac{1}{\pi}\arctan \sqrt{1+2a^2}\end{aligned}$$ where $U$, $U'$ and $V$ are independent standard Gaussian random variables. Actually, this result is also a straightforward consequence of Lemma 4.3 in [@AW09] at 0 with $X=(aV-U)/\sqrt{a^2+1}$ and $Y=(aV-U')/\sqrt{a^2+1}$. Nevertheless, since our proof is different and self-contained, we have chosen to keep it.
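The closed form of $G$ lends itself to a quick numerical check. The sketch below (our code, not the authors') compares a Monte Carlo estimate of $\P(U\leqp aV,\, U'\leqp aV)$ with $\frac{1}{\pi}\arctan\sqrt{1+2a^2}$, and recovers the values $D_{2,CVM}^{1}\approx 0.019$ and $D_{2,CVM}^{2}\approx 0.095$ announced above.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n = 400_000
u, u2, v = rng.standard_normal((3, n))

def G_mc(a):
    # Monte Carlo estimate of G(a) = P(U <= aV, U' <= aV)
    return float(np.mean((u <= a * v) & (u2 <= a * v)))

def G_closed(a):
    return math.atan(math.sqrt(1 + 2 * a * a)) / math.pi

# the two values of a = sqrt(alpha^2 + beta^2) needed in the text,
# after subtracting 1/3 - 2*(1/3) for the other two terms of phi(alpha, beta)
d1 = G_closed(math.sqrt(1.5)) - 1 / 3    # = arctan(2)/pi - 1/3
d2 = G_closed(3.0) - 1 / 3               # = arctan(sqrt 19)/pi - 1/3
print(round(d1, 3), round(d2, 3))        # prints 0.019 0.095
```

Note that $G(0)=\arctan(1)/\pi=1/4=\P(U\leq 0,\,U'\leq 0)$ and $G(1)=\arctan\sqrt 3/\pi=1/3$, which the Monte Carlo estimate confirms as well.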
We can compute the indices of order $q$ ($q\geqp 2$): $$\begin{aligned} H_1^q&=\E\left[(e^{X_1+2}-e^{5/2})^q\right]\\ H_2^q&=\E\left[(e^{2X_2+1/2}-e^{5/2})^q\right].\end{aligned}$$ The results are gathered in the following table. ------------- --------------- --------------- -------- -------- $D^1_{2,CVM}$ $D^2_{2,CVM}$ $S^1$ $S^2$ True values 0.0191 0.0949 0.0118 0.3738 $N=10^2$ 0.0372 0.0960 0.1962 0.1553 $N=10^3$ 0.0192 0.0929 0.0952 0.1085 ------------- --------------- --------------- -------- -------- As a conclusion, with only $N=10^3$, the algorithm provides a precise estimation of the different indices. Moreover, in this example, the Sobol and Cramér von Mises indices give the same influence ranking of the two random inputs. Nevertheless, it seems that the estimation of the Cramér von Mises indices is more efficient at recovering the true ranking. Application: The Giant Cell Arteritis Problem --------------------------------------------- #### Context and goal \ In this subsection, we consider the realistic problem of the management of suspected giant cell arteritis posed by Buchbinder and Detsky in [@BD92]. More recently, this problem was also studied by Felli and Hazen [@FH04] and Borgonovo et al. [@BHP14]. As explained in [@BD92], “giant cell arteritis (GCA) is a vasculitis of unknown etiology that affects large and medium sized vessels and occurs almost exclusively in patients 50 years or older”. This disease may cause severe symptoms (loss of visual acuity, fever, headache, ...), and the risks of not treating it include the threat of blindness and major vessel occlusion. A patient with suspected GCA can receive a therapy based on Prednisone. Unfortunately, a treatment with high Prednisone doses may cause severe complications. Thus, when confronted with a patient with suspected GCA, the clinician must adopt a clinical strategy.
In [@BD92], the authors considered four different strategies: - Strategy A: Treat none of the patients; - Strategy B: Proceed to the biopsy and treat all the positive patients; - Strategy C: Proceed to the biopsy and treat all the patients whatever their result; - Strategy D: Treat all the patients. The clinician wants to adopt the strategy optimizing the patient outcomes measured in terms of utility. The reader is referred to [@NM53] for more details on the concept of utility. The basic idea is that a patient with perfect health is assigned a utility of 1 and the expected utility of the other patients (not perfectly healthy) is calculated by subtracting some “disutilities” from this perfect score of 1. These strategies are represented in Figures \[fig:A\] to \[fig:D\] with the different inputs involved in the computation of the utilities. ![The decision tree for the treat none alternative](diagramme_A.png "fig:"){width="10cm"} \[fig:A\] ![The decision tree for the biopsy and the treat positive alternative](diagramme_B.png "fig:"){width="12cm"} \[fig:B\] ![The decision tree for the biopsy and the treat all alternative](diagramme_C.png "fig:"){width="12cm"} \[fig:C\] ![The decision tree for the treat all alternative](diagramme_D.png "fig:"){width="10cm"} \[fig:D\] For example, in Strategy A (see Figure \[fig:A\]), the utility of a patient having GCA and developing severe GCA complications is given by $1-du_s-du_{gc}-du_{dx}$. The contribution of this sub-path to the expected utility is then $$g\times gc\times (1-du_s-du_{gc}-du_{dx}).$$ #### The input parameters \ As seen in Figures \[fig:A\] to \[fig:D\], the different strategies involve input parameters such as the proportion $g$ of patients having GCA (fixed at 0.8 as done in [@BD92]), the probability $gc$ for a patient to develop severe GCA complications, or the disutility associated with having GCA symptoms. Table \[tab:input\] summarizes the input parameters involved.
--------------------------------------------------- ----------- ------- ---------- ---------- ------------- ------------ Parameters Symbols Base Min. $m$ Max. $M$    $\alpha$    $\beta$ $\P$\[having GCA\] $g$ 0.8 – –    –    – $\P$\[developing severe complications of GCA\] $gc$ 0.3 0.05 0.5 4.179 11.011 $\P$\[developing severe iatrogenic side effects\] $pc$ 0.2 0.05 0.5 2.647 10.589 Efficacy of high dose Prednisone $e$ 0.9 0.8 1 27.787 3.087 Sensitivity of temporal artery biopsy $sens$ 0.83 0.6 1 7.554 1.547 D(major complication from GCA) $du_{gc}$ 0.8 0.3 0.9 27.454 6.864 D(Prednisone therapy) $du_p$ 0.08 0.03 0.2 4.555 52.380 D(major iatrogenic side effect) $du_{pc}$ 0.3 0.2 0.9 15.291 35.680 D(having symptoms of GCA) $du_s$ 0.12 – –    –    – D(having a temporal artery biopsy) $du_b$ 0.005 – –    –    – D(not knowing the true diagnosis) $du_{dx}$ 0.025 – –    –    – --------------------------------------------------- ----------- ------- ---------- ---------- ------------- ------------ : \[tab:input\] The data used by Buchbinder and Detsky [@BD92] in their analysis. The base values are provided by physician expertise. The values $\P[\cdot{} ]$ and $D(\cdot{} )$ refer respectively to the probability of an event and to the disutility associated with an event. The minimum and maximum values $m$ and $M$ depict each parameter’s range for the sensitivity analysis. The utilities of the different strategies when all the input parameters are set to their base values are summarized in Table \[tab:utilities\].
Treatment alternative Utility ----------------------------- ----------- A Treat none 0.6870 B Biopsy and treat positive 0.7575 C Biopsy and treat all 0.7398 D Treat all 0.7198 : \[tab:utilities\] The utilities of the different strategies when all the input parameters are set to their base value. The base values of some input parameters are reliable, while the others are quite uncertain, which leads us to consider the latter as random. As a consequence, if $Y_A$, $Y_B$, $Y_C$ and $Y_D$ represent the outcomes corresponding to the four different strategies $A$ to $D$, the clinician aims to determine $$\begin{aligned} \ \max\{\E[Y_A],\E[Y_B],\E[Y_C],\E[Y_D]\}\end{aligned}$$ with the uncertain model inputs presented in Table \[tab:input\]. A sensitivity analysis is then performed to determine the most influential input variables on the outcome. #### Estimation phase and sensitivity analysis \ As done in [@FH04] and [@BHP14], all the random inputs will be independently Beta distributed. The Beta density parameters corresponding to each random input are determined by fitting the base value as their mean and capturing 95$\%$ of the probability mass in the range defined by the minimum and maximum. The remaining 5$\%$ will be equally distributed to either side of this range if possible. Concretely, each random input will be distributed as $$Z{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{m\leqp Z< M}+U{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{Z< m}+V{{1\hspace{-0.2ex}\rule{0.12ex}{1.61ex}\hspace{0.5ex}}}_{Z\geqp M}$$ where $Z$, $U$ and $V$ are independent random variables, $Z$ is Beta distributed with parameters $(\alpha,\beta)$, and $U$ and $V$ are uniform random variables on $[0,m]$ and $[M,1]$ respectively. #### Results \ The expected values of the utilities corresponding to the distributions given in Table \[tab:input\] are $\E[Y_A]=0.6991$, $\E[Y_B]=0.7570$, $\E[Y_C]=0.7371$ and $\E[Y_D]=0.7171$.
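The censored Beta inputs described in the estimation phase can be simulated directly. The sketch below (the function name is ours) draws from the mixture defined above, using as an illustration the parameters fitted for $gc$ in Table \[tab:input\].

```python
import numpy as np

def sample_input(a, b, m, M, size, rng):
    """Draw from the censored Beta input distribution: keep the
    Beta(a, b) draw when it lands in [m, M); otherwise replace it by
    an independent uniform draw on [0, m] (below m) or [M, 1] (above M)."""
    z = rng.beta(a, b, size)
    u = rng.uniform(0.0, m, size)
    v = rng.uniform(M, 1.0, size)
    return np.where(z < m, u, np.where(z >= M, v, z))

rng = np.random.default_rng(0)
# parameters fitted for gc = P[developing severe complications of GCA]
gc = sample_input(4.179, 11.011, 0.05, 0.5, 20_000, rng)
```

By construction of $(\alpha,\beta)$, roughly 95$\%$ of the draws fall in $[m,M)$, and the remaining mass is spread uniformly over the two sides of this range.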
Table \[tab:results\_borgo\] summarizes the sensitivity measures of the seven random inputs obtained with three different methodologies: the Sobol indices associated with the output vector $Y=(Y_A, Y_B, Y_C, Y_D)$ (Multivariate) [@GJKL14], the indices presented in this paper based on the Cramér von Mises distance, and the index presented in [@BB13], named $\beta$ and defined by $$\beta_i=\E[\sup_{y\in\mathcal{Y}} \{|F_Y(y)-F_{Y|X_i}(y)|\}].$$ For the latter, we use the estimator given in [@BHP14 Table 1] adapted to the multivariate case, which is based on the tedious and costly estimation of conditional expectations. Sensitivity meas. Ranking ------------------- --------- Multivariate 1236475 Baucells et al. 1627354 Cramér von Mises 1627354 : \[tab:results\_borgo\] Sensitivity measures As a conclusion, both methodologies based on the whole distribution provide the same ranking, unlike the multivariate sensitivity indices. Nevertheless, the main advantage of the Cramér von Mises sensitivity methodology is that one can use the Pick and Freeze estimation scheme, which provides an accurate estimation that is simple to implement. In [@BHP14], the authors study a slightly different model, which explains the numerical differences between their results and those of the present paper. Furthermore, they perform a sensitivity analysis on the best alternative (the one with the greatest mean) instead of considering the multivariate output. [^1]: Institut de Mathématiques de Toulouse, 118 Route de Narbonne 31062 Toulouse Cedex 9. France. [firstname.lastname@math.univ-toulouse.fr]{}
--- bibliography: - 'biblio.bib' title: 'Measurement of dijet $\mathbf{{\textit{k}}_{T}}$ in [p–Pb]{} collisions at $\mathbf{\sqrt{{\textit{s}}_{NN}}=5.02}$ TeV' --- Acknowledgements {#acknowledgements .unnumbered} ================ The ALICE Collaboration {#app:collab} =======================
--- abstract: | In this manuscript we present exponential inequalities for spatial lattice processes which take values in a separable Hilbert space and satisfy certain dependence conditions. We consider two types of dependence: spatial data under $\alpha$-mixing conditions and spatial data which satisfy a weak dependence condition introduced by [@dedecker2005new]. We demonstrate their usefulness in the functional kernel regression model of [@ferraty2004nonparametric] where we study uniform consistency properties of the estimated regression operator on increasing subsets of the underlying function space.\ [**Keywords:**]{} Asymptotic inequalities; Functional data; Nonparametric statistics; Spatial Lattice Processes; Strong mixing; Weak dependence measures\ [**MSC 2010:**]{} Primary: 62G08; 62M40 Secondary: 37A25; 62G20 author: - 'Johannes T. N. Krebs[^1][^2]' title: | A Note on Exponential Inequalities in Hilbert Spaces\ for Spatial Processes with Applications to the\ Functional Kernel Regression Model [^3] --- This article studies the nonparametric regression problem for spatial functional data. Pioneering work in functional data analysis has been done by [@ramsay1997functional] and [@bosq_linear_2000]. The latter was among the first to consider the linear functional autoregressive model and related estimation techniques. Recently, the analysis of spatial data has gained importance in many applications such as image analysis, geophysics, astronomy and environmental science. A systematic introduction to random fields is given in [@guyon1995random] or in [@cressie1993statistics]. At the same time, technological advances make it possible to sample data at high frequencies, so that sampled data can nowadays be regarded as collections of objects in an infinite-dimensional space, the so-called functional data.
In this article, we address one problem related to functional data, more precisely the estimation of the regression operator in a nonlinear double functional regression model where both the regressor and the response are functional and where the data are generated by a spatial lattice process. We do this in the functional kernel regression model of [@ferraty2002functional], [@ferraty_nonparametric_2007] and [@ferraty2012regression]. So far, nonparametric regression for finite-dimensional spatial data has been studied in several variants: [@li2016nonparametric] studies a wavelet approach. [@krebs2017orthogonal] constructs an orthogonal series estimator for spatial data. In particular, the kernel method has been popular for regression problems which involve spatial data, e.g., see [@carbon1996kernel], [@tran1990kernel], [@hallin2004local] and [@carbon2007kernel]. Often the dependence within the spatial data or the time series is assumed to satisfy a strong mixing condition, see [@bradley2005basicMixing] for an introduction to mixing conditions. [@ferraty2004nonparametric], [@delsol2009advances] study the functional regression model for $\alpha$-mixing time series. We generalize their results to $\alpha$-mixing spatial processes in one part of the manuscript. Unfortunately, many stochastic processes lack certain smoothness conditions and are thus not $\alpha$-mixing, see e.g. [@andrews1984non]. Hence, other dependence concepts have been studied as well: [@laib2010nonparametric] consider the functional kernel regression model for stationary ergodic data. [@hormann2010weakly] study $L^p$-$m$-approximable functional data. An alternative notion of dependence has been proposed by [@dedecker2005new]: their definition of the weak dependence coefficient allows one to consider only a finite time interval in the future.
We continue with this approach and also study the functional kernel regression model for ${\mathcal{C}}$-weakly dependent spatial data, see [@maume2006exponential] for a similar application to finite-dimensional time series. [@politis1994limit] develop limit theorems for sums of weakly dependent Hilbert space-valued random variables. We give in this article exponential inequalities for Hilbert space-valued spatial data and continue with the investigations of [@ferraty2004nonparametric] and [@ferraty2012regression]: we study the uniform $a.s.$-convergence of the kernel regression estimator on increasing subsets of an infinite-dimensional function space. This paper is organized as follows: we introduce in Section \[Section\_DefinitionsNotation\] two selected dependence concepts for spatial data. We study exponential inequalities for $\alpha$-mixing Hilbert space-valued spatial processes in Section \[Section\_ExponentialInequalities\]. Moreover, we give exponential inequalities for ${\mathcal{C}}$-weakly dependent Hilbert space-valued spatial processes in Section \[Section\_ExpInequalitiesPhiMixing\]. In the last Section \[Section\_Application\], we apply the inequalities in the functional kernel regression framework of [@ferraty2004nonparametric]. Two dependence concepts for spatial processes {#Section_DefinitionsNotation} ============================================= Let ${(\Omega,{\mathcal{A}},{\mathbbm{P}})}$ be a probability space, $(T,{\mathfrak{T}})$ be a measurable space and $N\in{\mathbb{N}}_+$ be a positive natural number. We consider a generic random field $Z$ which is indexed by ${\mathbb{Z}}^N$, i.e., a collection of random variables $\{Z_{s}: {s}\in {\mathbb{Z}}^N\}$ where each $Z_s$ takes values in $T$. 
$Z$ is (strictly) stationary if for each $k\in {\mathbb{N}}_+$, for all points $s_1,\ldots,s_k \in {\mathbb{Z}}^N$ and for each translation $w\in {\mathbb{Z}}^N$, the joint distribution of the translated vector $( Z_{s_1+w},\ldots,Z_{s_k+w} )$ is equal to the joint distribution of $( Z_{s_1},\ldots,Z_{s_k} )$. Denote the maximum norm by ${\left\lVert \cdot \right\rVert}_{\max}$ and define for two subsets $I, J\subseteq {\mathbb{Z}}^N$ their distance by $ d_{\infty}(I,J) = \inf\{ {\left\lVert s-t \right\rVert}_{\max}: s\in I, t\in J \}$. Furthermore, we write ${s}\le {t}$ if and only if $s_i \le t_i$ for each $i=1,\ldots,N$. Set ${e_N} = (1,\ldots,1)\in {\mathbb{Z}}^N$. Let ${n}=(n_1,\ldots,n_N)\in {\mathbb{N}}^N$, then we write $I_{n}$ for the $N$-dimensional cube on the lattice which is spanned by ${e_N}$ and ${n}$, i.e., $I_{n} = \{ {s}\in {\mathbb{Z}}^N: {e_N}\le {s} \le {n} \}$. Consider a sequence $(n(k): k\in{\mathbb{N}})\subseteq {\mathbb{N}}^N$ such that $$\liminf_{k\rightarrow\infty } \; { \min(n_i(k):i=1,\ldots,N) }/{\max(n_i(k):i=1,\ldots,N)} > 0$$ and $\lim_{k\rightarrow \infty} n_i(k) = \infty$ for all $i=1,\ldots,N$. We say that such a sequence converges to infinity and write $n\rightarrow \infty$. Moreover, if $(A_{n(k)}:k\in{\mathbb{N}})$ is a sequence which is indexed by the sequence $(n(k):k\in{\mathbb{N}})$, we also write $A_n$ for this sequence. In particular, we characterize limits for real-valued sequences $A_n$ in this notation, i.e., we agree to write $\lim_{n\rightarrow\infty} A_n$ for $\lim_{k\rightarrow \infty} A_{n(k)}$. $\limsup$ and $\liminf$ are to be understood in the analogous way. Furthermore, we write ${\left\lVert U \right\rVert}_{{\mathbbm{P}},p}$ for the $p$-norm of a real-valued random variable $U\in{(\Omega,{\mathcal{A}},{\mathbbm{P}})}$.
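The lattice notation above can be made concrete in a few lines. The following sketch (the helper names `d_inf`, `cube_I_n` and `ratio` are ours, not from the text) computes the distance $d_{\infty}$, the index cube $I_n$ and the min/max coordinate ratio that defines convergence to infinity:

```python
from itertools import product

def d_inf(I, J):
    """Distance d_infty(I, J) between two finite index sets under the maximum norm."""
    return min(max(abs(si - ti) for si, ti in zip(s, t)) for s in I for t in J)

def cube_I_n(n):
    """The cube I_n = {s in Z^N : e_N <= s <= n} spanned by e_N = (1, ..., 1) and n."""
    return list(product(*(range(1, ni + 1) for ni in n)))

def ratio(n):
    """min_i n_i(k) / max_i n_i(k); a sequence n(k) -> infinity keeps its liminf > 0."""
    return min(n) / max(n)

I_n = cube_I_n((3, 4))            # N = 2, n = (3, 4)
assert len(I_n) == 12             # |I_n| = n_1 * n_2
assert d_inf([(1, 1)], [(4, 5)]) == 4
assert ratio((3, 4)) == 0.75
```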
The $\alpha$-mixing coefficient describes the dependence between random variables; it was introduced by [@rosenblatt1956central] and is defined for two sub-$\sigma$-algebras ${\mathcal{F}}, {\mathcal{G}}$ of ${\mathcal{A}}$ by $ \alpha({\mathcal{F}},{\mathcal{G}}) \coloneqq \sup \left\{ \left| {\mathbbm{P}}(A\cap B)-{\mathbbm{P}}(A){\mathbbm{P}}(B) \right|: A\in{\mathcal{F}}, B\in{\mathcal{G}}\right\}. $ Denote by ${\mathcal{F}}(I) \coloneqq \sigma( Z_{s}: {s}\in I )$ the $\sigma$-algebra generated by the $Z_{s}$ for $s\in I$ where $I\subseteq{\mathbb{Z}}^N$. The $\alpha$-mixing coefficient of the random field $Z$ is then defined as $$\begin{aligned} \label{StrongSpatialMixing} \alpha(k) \coloneqq \sup_{ I, J \subseteq {\mathbb{Z}}^N,\; d_{\infty}(I,J)\ge k } \alpha( {\mathcal{F}}(I),{\mathcal{F}}(J)), \quad k\in{\mathbb{N}}.\end{aligned}$$ The random field $Z$ is said to be strongly (spatial) mixing (or $\alpha$-mixing) if $\alpha(k)\rightarrow 0$ as $k\rightarrow \infty$. In general the strong mixing condition can fail even for Markov processes if certain smoothness conditions are not satisfied. For instance, consider the stationary AR(1) process $X_k = 1/2 (X_{k-1} + {\varepsilon}_k)$ where the innovations are Bernoulli distributed. This process fails to be strongly mixing, see [@andrews1984non]. In particular, $(X_k: k\in {\mathbb{N}})$ does not satisfy any mixing condition which is stricter than $\alpha$-mixing. Thus, besides the $\alpha$-mixing condition, we shall study processes which satisfy a weak dependence criterion, introduced in [@dedecker2005new]. Consider the class of (nonlinear) operators mapping from a measurable space $({\mathcal{S}},{\mathfrak{S}})$ to the real numbers. Define for such an operator the supremum norm by ${\left\lVert g \right\rVert}_{\infty} = \sup_{x\in{\mathcal{S}}} |g(x)|$ and write $$\begin{aligned} \label{DefinitionCC} {\mathcal{C}}=\{ g: {\mathcal{S}}\rightarrow{\mathbb{R}}, {\left\lVert g \right\rVert}_{\infty}<\infty\}.
\end{aligned}$$ Moreover, let ${\left\lVert \cdot \right\rVert}^{\sim}$ be a pseudo-norm on $\mathcal{C}$ (which is intended to measure the roughness of an element of ${\mathcal{C}}$). For example, a possible choice is the pseudo-norm associated with the Lipschitz- or the H[ö]{}lder-constant of the operator. Another choice could be some measure for the total variation of the operator $g$. Write $ {\mathcal{C}}_1 \coloneqq \{g\in{\mathcal{C}}, {\left\lVert g \right\rVert}^\sim \le 1 \} $ for the bounded operators which have a pseudo-norm of at most 1. We define the ${\varphi}_{{\mathcal{C}}}$-dependence coefficient between a random variable $X$ which takes values in ${\mathcal{S}}$ and a sub-$\sigma$-algebra ${\mathcal{M}}\subseteq {\mathcal{A}}$ by $$\begin{aligned} \label{DefPhiC} {\varphi}_{{\mathcal{C}}} ({\mathcal{M}},X) \coloneqq \sup\{ {\left\lVert {\mathbb{E}\left [ \, g(X)|{\mathcal{M}}\, \right ]} - {\mathbb{E}\left [ \, g(X) \, \right ]} \right\rVert}_{{\mathbbm{P}},\infty} : g\in {\mathcal{C}}_1 \}. \end{aligned}$$ It follows from this definition that $${\varphi}_{{\mathcal{C}}}({\mathcal{M}},X) = \sup\left\{ |\operatorname{Cov}(Z,g(X))| : Z \text{ is } {\mathcal{M}}\text{-measurable, } {\left\lVert Z \right\rVert}_{{\mathbbm{P}},1} \le 1 \text{ and } g\in {\mathcal{C}}_1 \right\},$$ see [@dedecker2005new] Lemma 4. In the following, we shall study the stationary spatial process $(X_s,y_s)$ where the $X_s$ take values in the space ${\mathcal{S}}$ and the $y_s$ are real-valued and bounded by a constant $B\coloneqq {\left\lVert y_s \right\rVert}_{{\mathbbm{P}},\infty}<\infty$.
In this case, we define the following variant of which corresponds to the approach of [@maume2006exponential] for finite-dimensional time series: consider the $\sigma$-algebra ${\mathcal{M}}_k \coloneqq \sigma\{ (X_s,y_s): 1\le {\left\lVert s \right\rVert}_{\max} \le k \}$ and define for $i\in{\mathbb{N}}$ $$\begin{aligned} \label{DefPhiCV} {\varphi}_{{\mathcal{C}},y_s}(i) \coloneqq \sup\left\{ {\left\lVert {\mathbb{E}\left [ \, \frac{y_s}{B} g(X_s) \Big| {\mathcal{M}}_k \, \right ]} - {\mathbb{E}\left [ \, \frac{y_s}{B} g(X_s) \, \right ]} \right\rVert}_{{\mathbbm{P}},\infty}, g\in {\mathcal{C}}_1, s \in {\mathbb{N}}^N, {\left\lVert s \right\rVert}_{\max} = k+i \right\}.\end{aligned}$$ We say that the process $\{(X_s,y_s):s\in{\mathbb{Z}}^N\}$ is ${\mathcal{C}}$-weakly dependent if the coefficients ${\varphi}_{{\mathcal{C}},y_s}(i)$ are summable. If we only consider the univariate process $\{X_s:s\in{\mathbb{Z}}^N\}$, we formally replace the $y_s$ by ones in the above definition and write ${\varphi}_{{\mathcal{C}}}$ instead of ${\varphi}_{{\mathcal{C}},1}$. If the coefficients ${\varphi}_{{\mathcal{C}}}(i)$ are summable, we say that $\{X_s: s\in{\mathbb{Z}}^N\}$ is ${\mathcal{C}}$-weakly dependent. Consider a time series $\{X_t:t\in{\mathbb{Z}}\}$ and a $\sigma$-algebra ${\mathcal{M}}_k$ generated by the time series up to some time $k$. Let $i\in{\mathbb{N}}_+$ and assume that the time series is ${\mathcal{C}}$-weakly dependent. Interpreting the definition of ${\varphi}_{{\mathcal{C}}}$ from , we see that ${\varphi}_{{\mathcal{C}}}({\mathcal{M}}_k, X_{k+i})$ considers only a finite time horizon in the future, which is one main difference between a ${\mathcal{C}}$-weakly dependent process and an ($\alpha$-)mixing process.
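The non-mixing AR(1) example of [@andrews1984non] mentioned above can be made tangible with a short simulation (an illustrative sketch under the assumption of a uniform stationary start; all names are ours): since $X_k \ge 1/2$ exactly when $\varepsilon_k = 1$, the past $X_{k-1} = 2X_k - \varepsilon_k$ is a deterministic function of the present, which is the mechanism that rules out $\alpha(k) \to 0$.

```python
import random

random.seed(0)

# X_k = (X_{k-1} + eps_k) / 2 with eps_k ~ Bernoulli(1/2);
# the stationary distribution of X_k is uniform on [0, 1].
x = random.random()
path = [x]
for _ in range(1000):
    eps = random.randint(0, 1)
    x = 0.5 * (x + eps)
    path.append(x)

# The past is a deterministic function of the present:
# eps_k = 1 iff X_k >= 1/2 (almost surely), hence X_{k-1} = 2 X_k - eps_k.
for k in range(1, len(path)):
    eps_k = 1 if path[k] >= 0.5 else 0
    reconstructed = 2 * path[k] - eps_k
    assert abs(reconstructed - path[k - 1]) < 1e-12
```

Because the whole past can be reconstructed from $X_k$, events about the remote past remain perfectly correlated with events about the present, whereas the weak dependence coefficient above only ever tests a finite window ahead.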
Exponential inequalities for alpha-mixing processes on N-dimensional lattices {#Section_ExponentialInequalities} ============================================================================= We begin with an exponential inequality for strongly mixing real-valued random fields. The proofs do not only rely on the concept of splitting the index set into big and small blocks; we additionally exploit the idea of [@merlevede2009] who give exponential inequalities for $\alpha$-mixing time series. The key idea is that the sum of a discrete time series on $\{1,\ldots,T\}$ can be understood as an integral of a piecewise constant process on the interval $(0,T]$; this interval is then partitioned into Cantor set-like elements. We generalize this concept to a spatial index set $I_n$. \[BernsteinLatticeImproved\] Let the real-valued random field $Z$ have exponentially decreasing $\alpha$-mixing coefficients, i.e., there are $c_0, c_1 \in {\mathbb{R}}_+$ such that the coefficient from satisfies $\alpha(k ) \le c_0 \exp( - c_1 k)$. The $Z_s$ have expectation zero and are bounded by $B$. Let ${n}\in {\mathbb{N}}^N$ be such that $$\begin{aligned} \min\{ n_i: i=1,\ldots,N \} \ge C' \max\{ n_i: i=1,\ldots,N \} \label{EqRatioIndex}\end{aligned}$$ for a constant $C'> 0$ and $\min\{n_i : i=1,\ldots,N\} \ge 2^{N+1}$. Define $ \tilde{C} \coloneqq 2^{-N} \wedge c_1C'^{N/(N+1)} 2^{-(N+1)}$.
Moreover, let $\beta>0$ such that $$\beta B \le \left\{ \tilde{C} / |I_{n}|^{N/(N+1)} \vee 1/|I_{n}| \right\} \vee \left\{ \left( C' \tilde{C}^{(N+1)/N^2} /2^{N+3} \right)^{ N^2/(N+1)} \wedge \frac{c_1 C'}{2^{N+2}} \Big/ |I_{n}|^{(N-1)/N} \right\}.$$ Then there are constants $A_1, A_2\in {\mathbb{R}}_+$ which depend on the lattice dimension $N$, the constant $C'$ and the bound on the mixing coefficients but not on $n\in{\mathbb{N}}^N$ and not on $B$ such that $$\begin{aligned} \begin{split}\label{BernsteinLatticeImprovedEq0} \log {\mathbb{E}\left [ \, \exp\left\{ \beta \sum_{s \in I_{n}} Z_s \right\} \, \right ]} &\le A_1 (\beta B)^2 |I_{n}| \left(1+|I_{n}|^{(N-1)/N} \log |I_{n}| \right) + A_1 \beta B |I_{n}| \exp\left( - A_2 (\beta B)^{-1/N} \right) \\ &\quad + A_1 (\beta B )^{(N+1)/N} |I_{n}| \exp\left\{ - A_2 (\beta B)^{1-(N+1)/N^2} |I_{n}|^{(N-1)/N} \right\}. \end{split}\end{aligned}$$ Throughout the proof we use the convention of abbreviating generic constants by $C$. Define $\floor{s} \coloneqq (\floor{s_1},\ldots,\floor{s_N})$ for $s\in{\mathbb{R}}^N$. We extend the process $Z$ to the entire ${\mathbb{R}}^N$ with the definition $Z_s \coloneqq Z_{\floor{s}}$. In the same way, we extend the definition of the mixing coefficients consistently, $\alpha(z) = \alpha( \floor{z})$ for $z\in{\mathbb{R}}_+$. We have $\sum_{s\in I_{n}} Z_s = \int_{({e_N},n+{e_N}]} Z_s{\,\mathrm{d}s}$; this corresponds to $\int_{(0,n]} Z_s{\,\mathrm{d}s}$ for the process which is translated by $-{e_N}$. Write $\textbf{A} \coloneqq \prod_{i=1}^N A_i$ for the volume of the cube $(0,A]$ and set $\underline{A} \coloneqq \min\{A_k: k=1,\ldots,N\}$. The proof is divided into part (A) and part (B). We begin with part (A). Consider the Laplace transform ${\mathbb{E}\left [ \, \exp\left(\beta \int_{(0,A]} Z_s{\,\mathrm{d}s} \right) \, \right ]}$ for $A\in{\mathbb{R}}^N$ such that $A$ satisfies .
Firstly, we show that for a suitable constant $C^*$ $$\begin{aligned} \label{BernsteinLatticeImprovedEq1} {\mathbb{E}\left [ \, \exp\left( \beta \int_{(0,A]} Z_s{\,\mathrm{d}s} \right) \, \right ]} \le \exp( C^* 2^{2N} \beta^2 B^2 \textbf{A} ) + c_0 \textbf{A}^{1/(N+1)} \exp\left( - \frac{c_1}{2} \underline{A}^{N/(N+1)} \right)\end{aligned}$$ if $$\beta B \le \left[\frac{1}{2^N \textbf{A}^{N/(N+1)}} \wedge \frac{c_1 C'^{N/(N+1)} }{2^{N+1} \textbf{A}^{N/(N+1)} } \right] \vee 1/\textbf{A} \text{ and } \underline{A} \ge 2^{N+1}.$$ The proof is divided into two steps. In the first step, let $\beta B \textbf{A} \le 1$. We use that $e^x \le 1 + x+x^2$ for $x\le 1$ to deduce $$\begin{aligned} {\mathbb{E}\left [ \, \exp\left\{ \beta \int_{(0,A]} Z_s {\,\mathrm{d}s} \right\} \, \right ]}&\le \exp\left\{ {\mathbb{E}\left [ \, \left( \beta \int_{(0,A]} Z_s {\,\mathrm{d}s} \right)^2 \, \right ]} \right\} \label{EqDavydov0} \\ &\le \exp\left\{ \beta^2 \int_{(0,A]} \int_{(0,A]} {\mathbb{E}\left [ \, Z_s Z_t \, \right ]} {\,\mathrm{d}s} {\,\mathrm{d}t} \right \} \label{EqDavydov}\end{aligned}$$ We can bound this last expression with a covariance inequality of [@davydov1968convergence] and obtain the upper bound $$\begin{aligned} \exp\left\{ \beta^2 \int_{(0,A]} \int_{(0,A]} \alpha( {\left\lVert s-t \right\rVert}_{\max} ) B^2 {\,\mathrm{d}s} {\,\mathrm{d}t} \right \} \le \exp( C^* \beta^2 B^2 \textbf{A} )\end{aligned}$$ for a $C^*= \eta \int_{0}^{\infty} \alpha(u) u^{N-1} {\,\mathrm{d}u}$ where $\eta$ is a constant which depends on the lattice dimension $N$. This implies and finishes the first step. In the second step, let $\beta B \textbf{A} > 1$. Set $P_k = A_k^{N/(N+1)}$ and split each coordinate of the cube $(0,A]$ into intervals of length $2P_k$. $P_k$ need not be an integer (for $k=1,\ldots,N$). Set $U \coloneqq \prod_{k=1}^N \ceil{ A_k/(2P_k)}$. So in each dimension we can cover the interval $(0,A_k]$ by at most $2 \ceil{A_k/(2P_k)}$ disjoint intervals of length $P_k$.
More precisely, we define for each $k=1,\ldots,N$ the collection of disjoint intervals $$\begin{aligned} J_{k,1} &= \bigcup_{v=1}^{\ceil{A_k/(2P_k)}} B_{k,v}^{(1)} = \bigcup_{v=1}^{\ceil{A_k/(2P_k)}} ( 2(v-1) P_k, 2(v-1) P_k + P_k], \\ J_{k,2} &=\bigcup_{v=1}^{\ceil{A_k/(2P_k)}} B_{k,v}^{(2)} = \bigcup_{v=1}^{\ceil{A_k/(2P_k)}} ( 2(v-1) P_k + P_k, 2v P_k].\end{aligned}$$ We obtain $$\begin{aligned} (0,A] &= \bigtimes_{k=1}^N ( J_{k,1} \cup J_{k,2} ) = \bigcup_{a\in \{1,2\}^N } \bigtimes_{k=1}^N J_{k,a_k} = \bigcup_{a\in \{1,2\}^N } \bigtimes_{k=1}^N \bigcup_{v_k = 1}^{ \ceil{A_k/(2P_k)}} B_{k,v_k}^{(a_k)} \\ &= \bigcup_{a\in \{1,2\}^N } \bigcup_{v_1=1}^{ \ceil{A_1/(2P_1)}} \ldots \bigcup_{v_N=1}^{ \ceil{A_N/(2P_N)}} \bigtimes_{k=1}^N B^{(a_k)}_{k,v_k} = \bigcup_{u=1}^{2^N} \bigcup_{j=1}^{U} I(u,j)\end{aligned}$$ where $I(u,j)$ equals $\bigtimes_{k=1}^N B^{(a_k)}_{k,v_k}$ for a certain $a\in\{1,2\}^N$ and $(v_1,\ldots,v_N)\in \bigtimes_{k=1}^N \{1,\ldots,\ceil{A_k/(2P_k)}\}$ for each $u=1,\ldots,2^N$ and $j=1,\ldots,U$. Consequently, the $I(u,r)$ are disjoint cubes with edge lengths $P_k$ and each has a volume of $\textbf{P} = \prod_{k=1}^N P_k$. The distance between two cubes $I(u,r)$ and $I(u,r')$ for $r\neq r'$ is at least $\underline{p}\coloneqq\min_{k=1,\ldots,N} P_k$ for each $u=1,\ldots,2^N$. 
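The blocking construction above admits a simple numerical sanity check. The following sketch (all helper names are ours) builds the families $J_{k,1}, J_{k,2}$ for $N=2$ with edge lengths $A_k$ chosen so that $A_k/(2P_k)$ is an integer, and verifies that the resulting cubes cover $(0,A]$ and that distinct cubes within one family $u$ are at distance at least $\min_k P_k$:

```python
import math
from itertools import product

def blocks_1d(A_k, P_k):
    """The families J_{k,1}, J_{k,2} of half-open intervals of length P_k in (0, A_k]."""
    m = math.ceil(A_k / (2 * P_k))
    J1 = [(2 * (v - 1) * P_k, (2 * v - 1) * P_k) for v in range(1, m + 1)]
    J2 = [((2 * v - 1) * P_k, 2 * v * P_k) for v in range(1, m + 1)]
    return J1, J2

A, P = (8, 8), (2, 2)
J = [blocks_1d(A_k, P_k) for A_k, P_k in zip(A, P)]

# The cubes I(u, j): pick one family a in {1,2}^N, then one interval per coordinate.
cubes = {}
for a in product((0, 1), repeat=len(A)):
    cubes[a] = list(product(*(J[k][a[k]] for k in range(len(A)))))

volume = sum(
    math.prod(hi - lo for lo, hi in cube) for fam in cubes.values() for cube in fam
)
assert volume == math.prod(A)  # here A_k/(2 P_k) is an integer, so the cover is exact

def dist(c1, c2):
    """Maximum-norm distance between two axis-parallel boxes."""
    return max(max(l2 - h1, l1 - h2, 0) for (l1, h1), (l2, h2) in zip(c1, c2))

# Within one family, distinct cubes are separated by at least min_k P_k.
for fam in cubes.values():
    for c1 in fam:
        for c2 in fam:
            if c1 != c2:
                assert dist(c1, c2) >= min(P)
```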
We can partition the integral as follows $$\int_{(0,A]} Z_s {\,\mathrm{d}s} = \sum_{u=1}^{2^N} \sum_{j=1}^{U} \int_{ I(u,j) } Z_s {\,\mathrm{d}s} = \sum_{u=1}^{2^N} T(u), \text{ where } T(u) = \sum_{j=1}^{U} \int_{ I(u,j) } Z_s {\,\mathrm{d}s} .$$ We use the inequality of arithmetic and geometric means to derive that $$\begin{aligned} \label{BernsteinLatticeImprovedEq2a} {\mathbb{E}\left [ \, \exp\left( \beta \int_{(0,A]} Z_s{\,\mathrm{d}s} \right) \, \right ]} \le \frac{1}{2^N} \sum_{u=1}^{2^N} {\mathbb{E}\left [ \, \exp\left(2^N \beta T(u) \right) \, \right ]}.\end{aligned}$$ Moreover, we obtain for the Laplace transform of $T(u)$ with the lemma of [@ibragimov1962some] (Lemma \[IbragimovAlphaMixing\]) the bound $$\begin{aligned} \label{BernsteinLatticeImprovedEq2} {\mathbb{E}\left [ \, \exp\left(2^N \beta T(u) \right) \, \right ]} &\le \prod_{j=1}^{U} {\mathbb{E}\left [ \, \exp\left(2^N \beta \int_{ I(u,j) } Z_s{\,\mathrm{d}s} \right) \, \right ]} + \alpha( \underline{p} ) \, U \exp\left(2^N \beta B \textbf{P} U \right).\end{aligned}$$ By assumption, we have $\underline{A} \ge 2^{N+1}$ which entails that $A_k / (2P_k) \ge 1$, thus, $\ceil{A_k/(2P_k)}\le A_k/P_k$ for each $k=1,\ldots,N$ and $U \le \textbf{A}/\textbf{P}$. Furthermore, we have $2^N\beta B \textbf{P} \le 1$, i.e., $\beta B \le 1/\left(2^N \textbf{A}^{N/(N+1)} \right)$. Next, we need the assumption that the mixing coefficients satisfy $\alpha(z) \le c_0\exp(-c_1 z)$ for all $z\in{\mathbb{R}}_+$. We use the same approximation within each cube $I(u,j)$ as in the above lines starting with Equation  and obtain $$\begin{aligned} \eqref{BernsteinLatticeImprovedEq2} &\le \exp\left( C(\beta B)^2 2^{2N} \textbf{P} U \right) + c_0 \frac{ \textbf{A}}{\textbf{P}} \exp\left( - c_1 \underline{p} + 2^N \beta B \textbf{P} U \right) \nonumber \\ &\le \exp( C^* 2^{2N} \beta^2 B^2 \textbf{A} ) + c_0 \textbf{A}^{1/(N+1)} \exp\left(-\frac{c_1}{2} \underline{A}^{N/(N+1)} \right). 
\label{BernsteinLatticeImprovedEq3}\end{aligned}$$ Here, for the $\exp$ factor in the second term, we use the requirement that $$\beta B \le \frac{c_1 C'^{N/(N+1)} }{2^{N+1} \textbf{A}^{N/(N+1)} },$$ which implies $c_1/2 \cdot \underline{A}^{N/(N+1)} \ge 2^N \beta B \textbf{A}$. Now set $\tilde{C}\coloneqq 1/2^N \wedge c_1 C'^{N/(N+1)}/2^{N+1}$. Combining with equations and , we obtain provided that both $$\begin{aligned} \label{BernsteinLatticeImprovedEq3b} \beta B \le \tilde{C} / \textbf{A}^{N/(N+1)} \vee 1/\textbf{A} \text{ and } \underline{A} \ge 2^{N+1}. \end{aligned}$$ In part (B), we assume that $$\tilde{C} / \textbf{A}^{N/(N+1)} \vee 1/\textbf{A} < \beta B \le \left( C' \tilde{C}^{(N+1)/N^2} /2^{N+3} \right)^{ N^2/(N+1)} \wedge \frac{1}{2}\frac{c_1 C'}{2^{N+1}} \frac{1}{\textbf{A}^{(N-1)/N} } .$$ We follow the ideas of [@merlevede2009] and partition the cube $(0,A]$ into Cantor set-like elements. To this end, let $\delta \in (0,1)$ be defined as follows $$\begin{aligned} \label{BernsteinLatticeImprovedEq3c} \delta \coloneqq \frac{2^{N+1}}{c_1} \beta B \frac{ \textbf{A}}{\underline{A}}. \end{aligned}$$ By assumption, we have that $\underline{A}\ge C' A_k$ for $k=1,\ldots,N$ and that $\beta B \le \frac{1}{2}\frac{c_1 C'}{2^{N+1}} \textbf{A}^{(1-N)/N}$, thus $\delta \le 1/2$. We partition each interval $(0,A_k]$ into a middle interval of length $\delta A_k$ and two outer intervals each of length $(1-\delta)A_k/2$. The outer intervals form outer cubes within the cube $(0,A]$ of measure $(1-\delta)^N/2^N \textbf{A}$; there are $2^N$ outer cubes in total. The remaining $3^N-2^N$ cubes are those which have an edge length of $\delta A_k$ in at least one dimension $k$, i.e., for which at least one edge is a middle interval. The total measure of the outer cubes is $2^N \cdot (1-\delta)^N/2^N \textbf{A} = (1-\delta)^N \textbf{A}$, the measure of the residual cubes is $(1-(1-\delta)^N)\textbf{A}$. Denote by $\{O^{(1)}_j : j=1,\ldots,2^N \}$ the collection of the outer cubes.
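The measure bookkeeping for one such Cantor-type step can be verified numerically; the following sketch (the function name is ours, the values of $A$ and $\delta$ are arbitrary) checks that the $2^N$ outer cubes carry measure $(1-\delta)^N \textbf{A}$ and the residual cubes the complement:

```python
import math

def cantor_step_measures(A, delta):
    """One Cantor-type step: each edge (0, A_k] is split into two outer intervals
    of length (1 - delta) * A_k / 2 and a middle interval of length delta * A_k."""
    vol = math.prod(A)
    outer = 2 ** len(A) * math.prod((1 - delta) * A_k / 2 for A_k in A)
    residual = vol - outer
    return outer, residual

A, delta = (4.0, 6.0, 5.0), 0.25   # N = 3
outer, residual = cantor_step_measures(A, delta)
vol = math.prod(A)
assert math.isclose(outer, (1 - delta) ** len(A) * vol)
assert math.isclose(residual, (1 - (1 - delta) ** len(A)) * vol)
```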
Then the Laplace transform can be bounded as $$\begin{aligned} {\mathbb{E}\left [ \, \exp\left( \beta \int_{(0,A]} Z_s {\,\mathrm{d}s} \right) \, \right ]} &\le {\mathbb{E}\left [ \, \exp\left( \beta \int_{ \bigcup_{j=1}^{2^N} O^{(1)}_j } Z_s {\,\mathrm{d}s} \right) \, \right ]} \exp\left\{ \beta B \textbf{A} (1-(1-\delta)^N ) \right\} \nonumber \\ \begin{split}\label{BernsteinLatticeImprovedEq4} &\le \left\{ \prod_{j=1}^{2^N} {\mathbb{E}\left [ \, \exp\left( \beta \int_{ O^{(1)}_j} Z_s {\,\mathrm{d}s} \right) \, \right ]} + \alpha( \delta \underline{A} ) 2^N \prod_{j=1}^{2^N} \exp\left( \beta B \textbf{A} \left(\frac{1-\delta}{2}\right)^N \right) \right\} \\ &\quad \cdot \exp\left\{ \beta B \textbf{A} (1-(1-\delta)^N ) \right\}, \end{split}\end{aligned}$$ where the last Equation  is once more a result of [@ibragimov1962some]. Next, use the relation $|\log x - \log y| \le |x-y|$ if $x,y \ge 1$ to obtain for the logarithm of the Laplace transform with the help of the upper bound $$\begin{aligned} &\log {\mathbb{E}\left [ \, \exp\left( \beta \int_{(0,A]} Z_s {\,\mathrm{d}s} \right) \, \right ]} \nonumber \\ \begin{split} &\le \sum_{j=1}^{2^N} \log {\mathbb{E}\left [ \, \exp\left( \beta \int_{ O^{(1)}_j} Z_s {\,\mathrm{d}s} \right) \, \right ]} \\ &\quad + 2^N c_0 \exp\left( -c_1 \underline{A}\delta + 2^N \beta B \textbf{A} \left(\frac{1-\delta}{2} \right)^N \right) + \beta B \textbf{A} \left(1-(1-\delta)^N \right).\label{BernsteinLatticeImprovedEq5} \end{split}\end{aligned}$$ We can repeat the computations for the Laplace transform on the sets $O^{(1)}_{j_1}$. By formally replacing the cube $(0,A]$ with the cube $O^{(1)}_{j_1}$, we obtain a similar bound in terms of new outer subcubes w.r.t. $O^{(1)}_{j_1}$, these are given by $$\left\{ O^{(2)}_{j_2}: j_2 = 1+(j_1-1)2^N,\ldots,2^N+(j_1-1)2^N \right\}$$ for $j_1 = 1,\ldots,2^N$. 
Here we have to replace in as well $\underline{A}$ by $\underline{A}\frac{1-\delta}{2}$ and $\textbf{A}$ by $\textbf{A} \left(\frac{1-\delta}{2}\right)^N$ . Next, define the number $l$ by $$l \coloneqq \inf\left\{ k\in {\mathbb{Z}}: (\beta B)^{(N+1)/N} \textbf{A} \left(\frac{1-\delta}{2}\right)^{Nk} \le \tilde{C}^{(N+1)/N} \right\},$$ where $\tilde{C} = 1/2^N \wedge c_1 C'^{N/(N+1)} /2^{N+1} $. Note that this definition is meaningful because we are in the case where $(\beta B)^{(N+1)/N} \textbf{A} > \tilde{C}^{(N+1)/N}$. Write $O^{(0)}_1$ for the cube $(0,A]$. After further $l-1$ iterations of , we obtain the following bound with the sets $\left\{ O^{(l)}_{j_l}: j_l= 1,\ldots, 2^{Nl} \right\}$ $$\begin{aligned} \log {\mathbb{E}\left [ \, \exp\left( \beta \int_{(0,A]} Z_s {\,\mathrm{d}s} \right) \, \right ]} &= \log {\mathbb{E}\left [ \, \exp\left( \beta \int_{O^{(0)}_1 } Z_s {\,\mathrm{d}s} \right) \, \right ]} \nonumber \\ \begin{split}\label{BernsteinLatticeImprovedEq6} &= \sum_{j_l=1}^{2^{Nl}} \log {\mathbb{E}\left [ \, \exp\left( \beta \int_{ O^{(l)}_{j_l} } Z_s {\,\mathrm{d}s} \right) \, \right ]} \\ &\quad + \sum_{j=0}^{l-1} \beta B \textbf{A} \left( \frac{1-\delta}{2} \right)^{Nj} \left(1-(1-\delta)^N \right) 2^{Nj} \\ &\quad + \sum_{j=0}^{l-1} c_0 2^{N(j+1)} \exp\left\{ - c_1 \underline{A} \left(\frac{1-\delta}{2}\right)^j \delta+ 2^N \beta B \textbf{A} \left(\frac{1-\delta}{2}\right)^{N(j+1 )} \right\}. 
\end{split}\end{aligned}$$ We can bound the three sums in ; to this end, we use the following inequalities which follow from the definition of $l$ $$2^{Nl} \le C (\beta B)^{(N+1)/N} \textbf{A}, \quad l \le C \log \textbf{A} \text{ and } \textbf{A} \left(\frac{1-\delta}{2}\right)^{N(l-1)} > \left(\frac{\tilde{C}}{\beta B}\right)^{(N+1)/N}.$$ The second sum in is at most $$\begin{aligned} \label{BernsteinLatticeImprovedEq7} \sum_{j=0}^{l-1} \beta B \textbf{A} \left( \frac{1-\delta}{2} \right)^{Nj} \left(1-(1-\delta)^N \right) 2^{Nj} &\le \beta B \textbf{A} (1-(1-\delta)^N ) l \le \beta \delta B \textbf{A} l \le C \beta \delta B \textbf{A} \log \textbf{A}.\end{aligned}$$ Next, we apply the inequality from to bound the first sum in . To this end, we need that the requirements of are satisfied: it follows from the definition of $l$ that $ \beta B \le \tilde{C} \big/ \left( \textbf{A} \left(\frac{1-\delta}{2}\right)^{Nl} \right)^{N/(N+1)}$. Moreover, we need that $\underline{A} \left( \frac{1-\delta}{2} \right)^l \ge 2^{N+1}$: using the fact that $\delta \le 1/2$, we find $$\begin{aligned} \underline{A} \left(\frac{1-\delta}{2}\right)^l \ge C' \left( \textbf{A}\left(\frac{1-\delta}{2}\right)^{N(l-1)} \right)^{1/N} \frac{1-\delta}{2} \ge C' \left( \frac{\tilde{C}}{\beta B} \right)^{(N+1)/N^2} \frac{1}{4} \ge 2^{N+1}.\end{aligned}$$ The last inequality follows because $\beta B \le \left( C' \tilde{C}^{(N+1)/N^2} /2^{N+3} \right)^{ N^2/(N+1)}$.
Hence, the first sum in can be estimated similarly as in : $$\begin{aligned} \label{BernsteinLatticeImprovedEq8} &2^{Nl} \log {\mathbb{E}\left [ \, \exp\left( \beta \int_{ O^{(l)}_{1} } Z_s {\,\mathrm{d}s} \right) \, \right ]} \nonumber \\ &\le 2^{Nl} \left\{ C^* 2^{2N} \beta^2 B^2 \textbf{A} \left( \frac{1-\delta}{2}\right)^{Nl} + c_0 \textbf{A}^{1/(N+1)} \left( \frac{1-\delta}{2}\right)^{Nl/(N+1)} \exp\left( - \frac{c_1}{2} \underline{A}^{N/(N+1)} \left( \frac{1-\delta}{2}\right)^{Nl/(N+1)} \right) \right\} \nonumber \\ &= C^* 2^{2N} (\beta B)^2 \textbf{A} (1-\delta)^{Nl} + c_0 \textbf{A}^{1/(N+1)} (1-\delta)^{Nl/(N+1)} 2^{N^2 l/(N+1)} \exp\left( - \frac{c_1}{2} \underline{A}^{N/(N+1)} \left( \frac{1-\delta}{2}\right)^{Nl/(N+1)} \right) \nonumber \\ &\le C (\beta B)^2 \textbf{A} + C \beta B \textbf{A} \exp\left( - \frac{c_1}{2} C'^{N/(N+1)} \left( \frac{1-\delta}{2}\right)^{N/(N+1)} \left( \frac{\tilde{C}}{\beta B} \right)^{1/N} \right).\end{aligned}$$ Consequently using the definition of $\delta$, we can bound and together by $$\begin{aligned} \label{BernsteinLatticeImprovedEq9} C (\beta B)^2 \textbf{A} \left(1+\textbf{A}^{1-1/N} \log \textbf{A} \right) + C \beta B \textbf{A} \exp\left( - C (\beta B)^{-1/N} \right).\end{aligned}$$ For the third sum in we need the condition that $\frac{c_1}{2} \underline{A}\delta \ge \beta B \textbf{A} (1-\delta)^N$. 
This is implied by the definition of $\delta$ from , thus, $$\begin{aligned} \label{BernsteinLatticeImprovedEq10} &\sum_{j=0}^{l-1} c_0 2^{N(j+1)} \exp\left\{ - c_1 \underline{A}\delta \left(\frac{1-\delta}{2}\right)^j + 2^N \beta B \textbf{A} \left(\frac{1-\delta}{2}\right)^{N(j+1 )} \right\} \nonumber \\ &\le \sum_{j=0}^{l-1} c_0 2^{N(j+1)} \exp\left\{ - \frac{c_1}{2} \underline{A}\delta \left(\frac{1-\delta}{2}\right)^j \right\} \nonumber \\ &\le 2^N c_0 \frac{2^{Nl}-1}{2^N-1} \exp\left\{ -\frac{c_1}{2} \delta \underline{A} \left(\frac{1-\delta}{2} \right)^l \right\} \nonumber \\ &\le C (\beta B)^{(N+1)/N} \textbf{A} \exp\left\{ -\frac{c_1}{2} \delta \underline{A} \left(\frac{1-\delta}{2} \right)^l \right\} \nonumber \\ & \le C (\beta B )^{(N+1)/N} \textbf{A} \exp\left\{ - C (\beta B)^{1-(N+1)/N^2} \textbf{A}^{1-1/N} \right\}.\end{aligned}$$ Hence, combining with yields $$\begin{aligned} \begin{split}\label{BernsteinLatticeImprovedEq11} \log {\mathbb{E}\left [ \, \exp\left\{ \beta \int_{(0,A]} Z_s{\,\mathrm{d}s} \right\} \, \right ]} &\le C (\beta B)^2 \textbf{A} \left(1+\textbf{A}^{1-1/N} \log \textbf{A} \right) + C \beta B \textbf{A} \exp\left( - C (\beta B)^{-1/N} \right) \\ &\quad + C (\beta B )^{(N+1)/N} \textbf{A} \exp\left\{ - C (\beta B)^{1-(N+1)/N^2} \textbf{A}^{1-1/N} \right\} \end{split}\end{aligned}$$ if $\tilde{C} / \textbf{A}^{N/(N+1)} <\beta B \le c_1 C'/2^{N+2} \textbf{A}^{-(N-1)/N} \wedge \left( C' \tilde{C}^{(N+1)/N^2} /2^{N+3} \right)^{ N^2/(N+1)}$. Comparing with in the case that $\beta B \le \tilde{C} /\textbf{A}^{N/(N+1)}$ yields the result. The proof of Proposition \[BernsteinLatticeImproved\] reveals that for spatial data the rate of convergence is determined by the fact that the distance between the blocks decays at a rate $\underline{p}$; however, the number of observations within a block is at least $\underline{p}^N$. Compare the last term (resp. factor) on the right-hand side of (resp.
) which in both cases is due to the $\alpha$-mixing property, see the lemma of [@ibragimov1962some]. So if $N>1$, the decreasing mixing coefficient cannot fully compensate for the sample which grows like a polynomial of degree $N$. We see this in the next corollary which shows that the exponential decay is determined by the effective sample size $|I_n|^{1/N}$. \[CorBernsteinImproved\] Let the real-valued random field $Z$ satisfy all conditions from Proposition \[BernsteinLatticeImproved\]. Then there are constants $A_1,A_2\in{\mathbb{R}}_+$ such that for all ${\varepsilon}> 0$ $$\begin{aligned} {\mathbbm{P}}\Big( |I_{n}|^{-1} \Big| \sum_{s \in I_{n}} Z_s \Big| \ge {\varepsilon}\Big) \le A_1 \exp\left( - A_2 \frac{{\varepsilon}}{B} \frac{ |I_{n}|^{1/N} }{ (\log |I_{n}|)^2 } \right).\end{aligned}$$ Choose $\beta \propto (B |I_n|^{(N-1)/N} (\log |I_n|)^2 )^{-1}$. Then we infer from Proposition \[BernsteinLatticeImproved\] that this choice is admissible (if $n$ is sufficiently large). Furthermore, we obtain with Markov’s inequality $$\begin{aligned} \label{CorBernsteinImprovedEq0} {\mathbbm{P}}\left( |I_n|^{-1}\left| \sum_{s \in I_{n}} Z_s \right| \ge {\varepsilon}\right) \le 2 \exp\left(-\beta |I_n| {\varepsilon}\right) {\mathbb{E}\left [ \, \exp\left( \beta \sum_{s\in I_n} Z_s \right) \, \right ]}.\end{aligned}$$ Thus, the expression inside the first $\exp$-factor is proportional to $ \beta |I_n| \propto |I_n|^{1/N} / \left( B (\log |I_n|)^2 \right)$. Furthermore, a comparison with the requirements of Proposition \[BernsteinLatticeImproved\] shows that it remains to compute the quantities $$\begin{aligned} & (\beta B)^2 |I_n| |I_n|^{(N-1)/N} \log |I_n| \propto |I_n|^{1/N} / (\log |I_n|)^{3} \text{ and } (\beta B)^{(N+1)/N} |I_n| \propto |I_n|^{1/N^2} / (\log |I_n| )^{2(N+1)/N}.\end{aligned}$$ Hence, the first $\exp$-factor in dominates the second $\exp$-factor and we obtain the desired result.
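The role of the effective sample size $|I_n|^{1/N}$ can be illustrated numerically (an illustrative sketch; the function name is ours): for a fixed cardinality $|I_n|$, the scale $|I_n|^{1/N}/(\log|I_n|)^2$ in the exponent of the corollary shrinks rapidly as the lattice dimension $N$ grows.

```python
import math

def effective_rate(card_In, N):
    """The scale |I_n|^{1/N} / (log |I_n|)^2 appearing in the exponent."""
    return card_In ** (1 / N) / math.log(card_In) ** 2

card = 10 ** 6
rates = [effective_rate(card, N) for N in (1, 2, 3)]
# For the same number of observations, the exponential decay weakens with N.
assert rates[0] > rates[1] > rates[2]
```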
Next, we give an exponential inequality for centered Hilbert space-valued random variables. To this end, we need two conditions: the first states that the tail of the entire distribution vanishes at an exponential rate. The second requires that the contribution of a further marginal dimension decays exponentially as well. In particular, this last assumption is not uncommon, see e.g., [@bosq_linear_2000]. \[BernsteinHilbert\] Let ${\mathcal{H}}$ be a separable Hilbert space with inner product ${\left\langle \cdot,\cdot \right\rangle}$ and orthonormal basis $\{ e_j: j\in {\mathbb{N}}\}$. Let $\{ Z_{s}: {s}\in {\mathbb{Z}}^N \}$ be a random field on ${\mathbb{Z}}^N$, $N\in{\mathbb{N}}_+$, the marginals of which take values in ${\mathcal{H}}$ and satisfy ${\mathbb{E}\left [ \, Z_{s} \, \right ]} =0$. The $Z_{s}$ satisfy uniformly in $s\in{\mathbb{Z}}^N$ the conditions $$\begin{aligned} \label{EqBernsteinHilbert0} {\mathbb{E}\left [ \, {\left\langle Z_{s},e_j \right\rangle}^2 \, \right ]} \le d_0 \exp(-d_1 j ) \text{ for all } j\in{\mathbb{N}}\text{ and } {\mathbbm{P}}( { {\left \lVert Z_{s} \right \rVert}_{{\mathcal{H}}} } \ge z ) \le \kappa_0 \exp( -\kappa_1 z^{\gamma} ) \end{aligned}$$ for positive constants $d_0,d_1,\kappa_0,\kappa_1$ and $\gamma$. The mixing coefficients of the random field decrease exponentially as in Proposition \[BernsteinLatticeImproved\] and there is a lower bound $C'$ for the ratio between the smallest and the largest coordinate of $n$ as in Equation . Moreover, let ${\varepsilon}>0$.
Then there are constants $A_1$ and $A_2$ such that $$\begin{aligned} {\mathbbm{P}}\left( |I_{n}|^{-1} { {\left \lVert \sum_{s\in I_{n}} Z_{s} \right \rVert}_{{\mathcal{H}}} } \ge {\varepsilon}\right) &\le A_1 \exp\left\{ - A_2 \left(\frac{ {\varepsilon}|I_{n}|^{1/N} }{(\log |I_{n}|)^2}\right)^{2\gamma/ (2+3\gamma)} \right\} .\end{aligned}$$ $A_1$ and $A_2$ depend on the decay rate of the mixing coefficients, on the tail parameters $\gamma,\kappa_i,d_i$ and on $C'$ but not on ${n}$. Additionally, $A_1$ depends polynomially on $|I_n|$ and ${\varepsilon}$. If additionally $\gamma \ge 1$, $A_1$ does not depend on ${\varepsilon}$ and $|I_n|$ and $$\begin{aligned} {\mathbbm{P}}\left( |I_{n}|^{-1} { {\left \lVert \sum_{{s}\in I_{n}} Z_{s} \right \rVert}_{{\mathcal{H}}} } \ge {\varepsilon}\right) &\le A_1 \exp\left\{ - A_2 \left(\frac{ {\varepsilon}|I_{n}|^{1/N} }{(\log |I_{n}|)^2}\right)^{2/5 } \right\} \\ &\quad\cdot \Biggl\{ {\varepsilon}^{-2} + \left(\frac{ {\varepsilon}|I_{n}|^{1/N} }{(\log |I_{n}|)^2}\right)^{2/5} +\left(\frac{|I_{n}|^{1/N} }{(\log |I_{n}|)^2}\right)^{1/5 }\; {\varepsilon}^{- 4/5} \Biggl\}. \end{aligned}$$ Following [@bosq_linear_2000], we decompose the sum $S_{n} = \sum_{{s}\in I_{n}} Z_{s}$ in a finite-dimensional part and a remainder. Then we bound the latter with the help of the decay in the single coordinates and apply the exponential inequality for finite-dimensional random variables to the first part. 
More precisely, the following decomposition holds for each natural number $m$ $$\begin{aligned} {\mathbbm{P}}\left( |I_{n}|^{-1} { {\left \lVert S_{n} \right \rVert}_{{\mathcal{H}}} } \ge {\varepsilon}\right) &\le {\mathbbm{P}}\left( \sum_{j=1}^m {\left\langle S_{n},e_j \right\rangle}^2 \ge (|I_{n}| {\varepsilon}/2 )^2 \right) + {\mathbbm{P}}\left( \sum_{j=m+1}^{\infty} {\left\langle S_{n},e_j \right\rangle}^2 \ge (|I_{n}| {\varepsilon}/2 )^2 \right) \nonumber \\ &\le \sum_{j=1}^m {\mathbbm{P}}\left({\left\langle S_{n},e_j \right\rangle}^2 \ge \frac{ (|I_{n}| {\varepsilon})^2}{4 m} \right) + {\mathbb{E}\left [ \, \sum_{j=m+1}^{\infty} {\left\langle S_{n},e_j \right\rangle}^2 \, \right ]} \left( \frac{ {\varepsilon}|I_n|}{2 } \right)^{-2} \nonumber \\ &\le m\cdot \max_{1\le j \le m} {\mathbbm{P}}\left( |{\left\langle S_{n},e_j \right\rangle} | \ge \frac{ |I_{n}| {\varepsilon}}{2 \sqrt{m} } \right) + \left( \frac{2}{{\varepsilon}} \right)^2 \sum_{j=m+1}^{\infty} {\mathbb{E}\left [ \, {\left\langle Z_{e_N},e_j \right\rangle}^2 \, \right ]} . \label{EqBernsteinHilbert1}\end{aligned}$$ By assumption, there are $d_0,d_1\in{\mathbb{R}}_+$ such that $\sum_{j=m+1}^{\infty} {\mathbb{E}\left [ \, {\left\langle Z_{e_N},e_j \right\rangle}^2 \, \right ]} \le \sum_{j=m+1}^{\infty} d_0 \exp(-d_1 j )$. Hence, the second term in decays at an exponential rate. Note that we do not use a covariance inequality for $\alpha$-mixing spatial processes for the second term in at this point because it would not significantly improve the overall rate of convergence. We apply the inequality from Proposition \[BernsteinLatticeImproved\] to the first term and use the assumption that the tail of the random variables decays exponentially, i.e., ${\mathbbm{P}}( { {\left \lVert Z_{s} \right \rVert}_{{\mathcal{H}}} } \ge z ) \le \kappa_0 \exp( -\kappa_1 z^{\gamma} ) $. 
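The exponential decay of the remainder term noted above is a direct geometric-series computation: $$\begin{aligned} \sum_{j=m+1}^{\infty} d_0 \exp(-d_1 j ) = \frac{ d_0 \exp(-d_1 (m+1)) }{ 1 - \exp(-d_1) } \le \frac{ d_0 }{ 1 - \exp(-d_1) }\, \exp(-d_1 m ),\end{aligned}$$ so the second term of the decomposition is bounded by a constant multiple of ${\varepsilon}^{-2} \exp(-d_1 m)$. 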
We obtain with similar arguments as in [@valenzuela2017bernstein] $$\begin{aligned} {\mathbbm{P}}\left( |{\left\langle S_{n},e_j \right\rangle} | \ge |I_{n}| {\varepsilon}\right) \le \inf_{ D > 0} \left( A_1 D^{1-\gamma} {\varepsilon}^{-1} \exp\left\{ - A_2 D^{\gamma} \right\} + A_1 \exp\left\{ -A_2 \frac{{\varepsilon}|I_{n}|^{1/N} }{D (\log |I_{n}|)^2} \right\} \right), \label{EqBernsteinHilbert2}\end{aligned}$$ where the constants $A_1$ and $A_2$ only depend on the coefficients $\kappa_0$ and $\kappa_1$ which bound the tail of the distribution, the lattice dimension $N$ and the mixing coefficients. We can approximately equate both terms in with the choice $D = \left( R({n}){\varepsilon}\right)^{1/(1+\gamma)}$, where $R({n}) \coloneqq |I_{n}|^{1/N} / (\log |I_{n}|)^2$. In particular, we obtain the following asymptotic bound if we insert for the finite-dimensional part $$\begin{aligned} \begin{split}\label{EqBernsteinHilbert3} {\mathbbm{P}}\left( |I_{n}|^{-1} { {\left \lVert S_{n} \right \rVert}_{{\mathcal{H}}} } \ge {\varepsilon}\right) &\le \inf_{m\in{\mathbb{N}}} \Biggl( A_1 m \left\{ 1+ R({n})^{ (1-\gamma)/(1+\gamma)} \left(\frac{{\varepsilon}}{\sqrt{m}}\right) ^{-2\gamma /(1+\gamma) } \right\} \\ &\qquad\qquad\qquad \cdot \exp\left\{ -A_2 \left( \frac{ R({n}){\varepsilon}}{\sqrt{m}} \right)^{\gamma/(1+\gamma)} \right\} + A_3 \frac{ \exp\left\{ -A_4 m \right\} }{{\varepsilon}^2 } \Biggl). \end{split}\end{aligned}$$ Here the constants $A_1,\ldots,A_4$ do not depend on $m$ or $n$. Again, both terms are approximately equal for the choice $m \coloneqq \floor{ ( R({n}) {\varepsilon})^{2\gamma /(2+3\gamma)} }$. 
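Both balancing choices can be verified by equating exponents. With $D = \left( R({n}){\varepsilon}\right)^{1/(1+\gamma)}$ we have $$\begin{aligned} D^{\gamma} = \left( R({n}){\varepsilon}\right)^{\gamma/(1+\gamma)} = \frac{ R({n}) {\varepsilon}}{ D },\end{aligned}$$ so both exponential factors decay at the same rate, and with $m = \left( R({n}) {\varepsilon}\right)^{2\gamma /(2+3\gamma)}$ (ignoring the rounding) $$\begin{aligned} \left( \frac{ R({n}) {\varepsilon}}{\sqrt{m}} \right)^{\gamma/(1+\gamma)} = \left( R({n}) {\varepsilon}\right)^{ \frac{\gamma}{1+\gamma} \cdot \frac{2+2\gamma}{2+3\gamma} } = \left( R({n}) {\varepsilon}\right)^{2\gamma /(2+3\gamma)} = m,\end{aligned}$$ so the two exponential terms are again of the same order. 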
In this case, reduces to $$\begin{aligned} {\mathbbm{P}}\left( |I_{n}|^{-1} { {\left \lVert S_{n} \right \rVert}_{{\mathcal{H}}} } \ge {\varepsilon}\right) &\le A_1 \exp\left\{ - A_2 \left( {\varepsilon}R({n}) \right)^{2\gamma/ (2+3\gamma)} \right\} \Biggl\{ {\varepsilon}^{-2} + \left( R({n}){\varepsilon}\right)^{2\gamma / (2+3\gamma)} \\ &\quad+ R({n})^{2(1+\gamma-\gamma^2)/[ (2+3\gamma)(1+\gamma) ] }\; {\varepsilon}^{- \gamma(3+5\gamma) /[ (2+3\gamma)(1+\gamma)] } \Biggl\}. \end{aligned}$$ This finishes the proof. Exponential inequalities for C-weakly dependent spatial processes {#Section_ExpInequalitiesPhiMixing} ================================================================= The aim of this section is to derive exponential inequalities for ${\mathcal{C}}$-weakly dependent spatial processes. We assume for the next proposition that $\{ (X_s,y_s):s\in{\mathbb{N}}^N\}$ is a stationary random field. The $X_s$ take values in the Banach space ${\mathcal{S}}$, the $y_s$ are real-valued and bounded by $B\coloneqq {\left\lVert y_s \right\rVert}_{{\mathbbm{P}},\infty} <\infty$. ${\left\lVert \cdot \right\rVert}^\sim$ is a pseudo-norm on the space of operators ${\mathcal{C}}$ from Equation . Note that ${\mathcal{C}}$ contains elements which are not necessarily linear and that the coefficients ${\varphi}_{{\mathcal{C}},y_s}(i)$ from depend on the choice of the pseudo-norm. We obtain with these assumptions: \[WeaklyCDependent\] Let $\{ (X_s,y_s):s\in{\mathbb{N}}^N\}$ be stationary such that the coefficients from satisfy $\sum_{i=1}^{\infty} {\varphi}_{{\mathcal{C}},y_s}(i)<\infty$. Let $\tilde{g}: {\mathcal{S}}\rightarrow {\mathbb{R}}$ be a bounded operator, i.e., $\sup_{x\in{\mathcal{S}}} |\tilde{g}(x)|<\infty$. Define $S_n = \sum_{s\in I_n} y_s \tilde{g}(X_s)$. 
Then there are constants $A_1,A_2$ which depend on the lattice dimension $N$ and the coefficients ${\varphi}_{{\mathcal{C}},y_s}$, but neither on $n\in{\mathbb{N}}^N$ nor on $B$, such that $$\begin{aligned} \label{ExpPhiMixing} {\mathbbm{P}}\left( |I_n|^{-1} \left| S_n - {\mathbb{E}\left [ \, S_n \, \right ]} \right| \ge {\varepsilon}\right) \le A_1 \exp\left( -A_2 {\varepsilon}^2 |I_n|^{1/N} ({\left\lVert \tilde{g} \right\rVert}^{\sim})^{-1} B^{-2} \right).\end{aligned}$$ We write ${\left\lVert \cdot \right\rVert}$ for the maximum norm on ${\mathbb{N}}^N$ and partition the sum $\sum_{s\in I_n} y_s \tilde{g}(X_s) $ as follows: we collect all indices with equal maximum norm and set $$Z_k = \sum_{\substack{ s\in I_n,\\ {\left\lVert s \right\rVert}=k }} y_s \tilde{g}(X_s). \text{ Then } \sum_{s\in I_n} y_s \tilde{g}(X_s) = \sum_{k=1}^{{\left\lVert n \right\rVert}} \sum_{\substack{ s\in I_n,\\ {\left\lVert s \right\rVert}=k }} y_s \tilde{g}(X_s) = \sum_{k=1}^{{\left\lVert n \right\rVert}} Z_k.$$ Denote by $\tilde{{\mathcal{M}}}_k$ the $\sigma$-algebra generated by $\{Z_0,\ldots,Z_k\}$. We derive from Proposition 4 of [@dedecker2003new] that $$\begin{aligned} \label{ExpIneqEq1} {\left\lVert \sum_{k=1}^{{\left\lVert n \right\rVert}} \left( Z_k -{\mathbb{E}\left [ \, Z_k \, \right ]} \right) \right\rVert}_{{\mathbbm{P}},p} \le \left( 2p \sum_{k=1}^{{\left\lVert n \right\rVert}} b_{k,{\left\lVert n \right\rVert}} \right)^{1/2},\end{aligned}$$ for $p\ge 2$, where the coefficients $b_{k,{\left\lVert n \right\rVert}}$ equal $$\begin{aligned} b_{k, {\left\lVert n \right\rVert} } = \max_{k\le l\le {\left\lVert n \right\rVert}} {\left\lVert (Z_k - {\mathbb{E}\left [ \, Z_k \, \right ]} ) \sum_{i=k}^l \left( {\mathbb{E}\left [ \, Z_i \big| \tilde{{\mathcal{M}}}_k \, \right ]} - {\mathbb{E}\left [ \, Z_i \, \right ]} \right) \right\rVert}_{{\mathbbm{P}},p/2}.\end{aligned}$$ Note that ${\left\lVert Z_k \right\rVert}_{\infty} = {\mathcal{O}}(B k^{N-1})$. 
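This estimate rests on a counting argument: the $k$-th maximum-norm shell $\{s\in\{1,\ldots,k\}^N: {\left\lVert s \right\rVert}=k\}$ contains exactly $k^N-(k-1)^N \le N k^{N-1}$ lattice points. A brute-force sketch of this count (the helper name and the tested ranges are illustrative):

```python
from itertools import product

def shell_size(k, N):
    """Count lattice points s in {1,...,k}^N whose maximum norm equals k."""
    return sum(1 for s in product(range(1, k + 1), repeat=N) if max(s) == k)

for N in (1, 2, 3):
    for k in (1, 2, 5):
        size = shell_size(k, N)
        # shell size is k^N - (k-1)^N, which is at most N * k^(N-1)
        assert size == k**N - (k - 1)**N
        assert size <= N * k**(N - 1)
```

Since every summand of $Z_k$ is bounded by a constant times $B$, this count yields the stated bound ${\left\lVert Z_k \right\rVert}_{\infty} = {\mathcal{O}}(B k^{N-1})$.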
Hence, the coefficients $b_{k,{\left\lVert n \right\rVert}}$ satisfy the inequality $$\begin{aligned} b_{k,{\left\lVert n \right\rVert}} &\le {\left\lVert Z_k - {\mathbb{E}\left [ \, Z_k \, \right ]} \right\rVert}_{\infty} \sum_{i=k}^{{\left\lVert n \right\rVert}} {\left\lVert {\mathbb{E}\left [ \, Z_i \big| \tilde{M}_k \, \right ]} - {\mathbb{E}\left [ \, Z_i \, \right ]} \right\rVert}_{\infty} \le C B^2 k^{N-1} \sum_{i=k}^{{\left\lVert n \right\rVert}} i^{N-1} {\varphi}_{{\mathcal{C}},y_s}(i-k) {\left\lVert \tilde{g} \right\rVert}^{\sim} ,\end{aligned}$$ where ${\varphi}_{{\mathcal{C}},y_s}(i)$ is defined in . Thus, is at most (modulo a constant which depends on the lattice dimension $N$) $$\begin{aligned} &\left(2p \sum_{k=1}^{{\left\lVert n \right\rVert}} B^2 k^{N-1} \sum_{i=k}^{{\left\lVert n \right\rVert}} i^{N-1}{\varphi}_{{\mathcal{C}},y_s}(i-k) {\left\lVert \tilde{g} \right\rVert}^{\sim} \right)^{1/2} =\left( 2p{\left\lVert \tilde{g} \right\rVert}^{\sim} B^2 {\left\lVert n \right\rVert}^{N-1} \sum_{i=0}^{{\left\lVert n \right\rVert}-1} {\varphi}_{{\mathcal{C}},y_s}(i) \sum_{k=1}^{{\left\lVert n \right\rVert}-i} (i+k)^{N-1} \right)^{1/2} \nonumber \\ &\le C \left( 2p{\left\lVert \tilde{g} \right\rVert}^{\sim} B^2 {\left\lVert n \right\rVert}^{N-1} \sum_{i=0}^{{\left\lVert n \right\rVert}-1} {\varphi}_{{\mathcal{C}},y_s}(i) \left( ({\left\lVert n \right\rVert}+1)^{N} - (i+1)^N \right) \right)^{1/2}. 
\label{ExpIneqEq2}\end{aligned}$$ Following Proposition 5 in [@dedecker2005new], we obtain from this $L^p$-inequality the desired exponential inequality given in Equation : with Markov’s inequality we obtain $$\begin{aligned} {\mathbbm{P}}\left( |I_n|^{-1} \left| S_n - {\mathbb{E}\left [ \, S_n \, \right ]} \right| \ge {\varepsilon}\right) &\le 1 \wedge \inf_{p\ge 2} ({\varepsilon}|I_n|)^{-p} \, {\mathbb{E}\left [ \, \left| S_n - {\mathbb{E}\left [ \, S_n \, \right ]} \right|^p \, \right ]} \\ &\le 1 \wedge \inf_{p\ge 2} C_1 \, ({\varepsilon}|I_n|)^{-p} \, \left( C_2\, p{\left\lVert \tilde{g} \right\rVert}^{\sim} B^2 {\left\lVert n \right\rVert}^{2N-1 } \sum_{i=0}^{\infty} {\varphi}_{{\mathcal{C}},y_s}(i) \right)^{p/2} \\ &\le 1 \wedge \inf_{p\ge 2} C_3 \left( C_4\, {\varepsilon}^{-2} p{\left\lVert \tilde{g} \right\rVert}^{\sim} B^2 |I_n|^{-1/N } \right)^{p/2}\end{aligned}$$ for certain constants $C_1,\ldots,C_4$. Now, as demonstrated in [@dedecker2005new], this is bounded by the $\exp$-expression in Equation . The analogue of Theorem \[BernsteinHilbert\] for ${\mathcal{C}}$-weakly dependent data is given in terms of a stationary random field $\{(X_s,Y_s): s\in{\mathbb{N}}^N \}$ where the $X_s$ are ${\mathcal{S}}$-valued and the $Y_s$ are ${\mathcal{H}}$-valued. Again, ${\mathcal{H}}$ is a separable Hilbert space which is equipped with an orthonormal basis $\{e_j:j\in{\mathbb{N}}\}$. \[WeaklyCDependentHilbert\] Assume that the tail of the distribution of the $Y_s$ admits the exponential bounds as in Equation . Set $y_{j,s} \coloneqq {\left\langle Y_s,e_j \right\rangle}$ and $y_{j,s}^{(B)} \coloneqq \min(B,\max(-B,y_{j,s}))$ for $B>0$. Moreover, assume that $$\begin{aligned} \label{UniBddPhi} \sup_{j\in{\mathbb{N}}} \sup_{B>0} \sum_{i\in{\mathbb{N}}} {\varphi}_{{\mathcal{C}},y^{(B)}_{j,s}}(i) < \infty,\end{aligned}$$ where the ${\varphi}_{{\mathcal{C}},y^{(B)}_{j,s}}$ are defined in . 
Let $\tilde{g}\in{\mathcal{C}}_1$ and set $S_n = \sum_{s\in I_n} Y_s \tilde{g}(X_s) \in {\mathcal{H}}$. Then $$\begin{aligned} &{\mathbbm{P}}\left( |I_n|^{-1} { {\left \lVert S_n - {\mathbb{E}\left [ \, S_n \, \right ]} \right \rVert}_{{\mathcal{H}}} } \ge {\varepsilon}\right) \nonumber\\ \begin{split}\label{WeaklyCDepEq0} &\le A_1 \left[ {\varepsilon}^{-2} + m + m^{(4+5\gamma)/(4+2\gamma)} {\varepsilon}^{-3\gamma/(2+\gamma)} \left(|I_n|^{1/N} \left({\left\lVert \tilde{g} \right\rVert}^{\sim}\right)^{-1} \right)^{(1-\gamma)/(2+\gamma)} \right] \\ &\quad\cdot \exp\left\{ - A_2 \left( {\varepsilon}^2 |I_n|^{1/N} \left({\left\lVert \tilde{g} \right\rVert}^{\sim}\right)^{-1} \right)^{\gamma/(2+2\gamma)} \right\}, \end{split}\end{aligned}$$ where $m= \left({\varepsilon}^2 |I_n|^{1/N} \left({\left\lVert \tilde{g} \right\rVert}^{\sim}\right)^{-1} \right)^{\gamma/(2+2\gamma)}$. In particular, if $\gamma\ge 1$, $$\begin{aligned} {\mathbbm{P}}\left( |I_n|^{-1} { {\left \lVert S_n - {\mathbb{E}\left [ \, S_n \, \right ]} \right \rVert}_{{\mathcal{H}}} } \ge {\varepsilon}\right) &\le A_1 \left[{\varepsilon}^{-2}+\left({\varepsilon}^2 |I_n|^{1/N} \left({\left\lVert \tilde{g} \right\rVert}^{\sim}\right)^{-1} \right)^{1/4} + \left({\varepsilon}^2 |I_n|^{1/N} \left({\left\lVert \tilde{g} \right\rVert}^{\sim}\right)^{-1} \right)^{9/24} {\varepsilon}^{-1} \right] \\ &\quad \cdot \exp\left\{ - A_2 \left( {\varepsilon}^2 |I_n|^{1/N} \left({\left\lVert \tilde{g} \right\rVert}^{\sim}\right)^{-1} \right)^{1/4} \right\}.\end{aligned}$$ We proceed as in the proof of Theorem \[BernsteinHilbert\] and use the result from Proposition \[WeaklyCDependent\]. 
After splitting the sum into a finite-dimensional part and an infinite-dimensional remainder, we arrive at the same situation as in Equation : $$\begin{aligned} {\mathbbm{P}}\left( |I_{n}|^{-1} { {\left \lVert S_{n}- {\mathbb{E}\left [ \, S_n \, \right ]} \right \rVert}_{{\mathcal{H}}} } \ge {\varepsilon}\right) &\le m\cdot \max_{1\le j \le m} {\mathbbm{P}}\left( |{\left\langle S_{n},e_j \right\rangle} | \ge \frac{ |I_{n}| {\varepsilon}}{2 \sqrt{m} } \right) + \left( \frac{2}{{\varepsilon}} \right)^2 \sum_{j=m+1}^{\infty}{\mathbb{E}\left [ \, {\left\langle Z_{e_N},e_j \right\rangle}^2 \, \right ]}. \end{aligned}$$ The finite-dimensional part needs to be split into a part bounded by a constant $B$ as well as a positive and a negative remainder. More precisely, we write $y_{j,s} = y_{j,s}^{(B)} + \max( y_{j,s}-B,0) + \min( y_{j,s}+B,0)$. Hence, if we additionally use the fact that the tail of the distribution of the $y_{j,s}$ is uniformly bounded, we obtain for the finite-dimensional part (similarly to Equation and using Proposition \[WeaklyCDependent\]) the bound $$\begin{aligned} {\mathbbm{P}}\left( |{\left\langle S_{n},e_j \right\rangle} |\ge \frac{ {\varepsilon}|I_n| }{2\sqrt{m}} \right) &\le A_1 \inf_{B>0} \Bigg\{ B^{1-\gamma} {\varepsilon}^{-1} m^{1/2} \exp(-A_2 B^{\gamma}) + \exp\left(-A_2 {\varepsilon}^2 |I_n|^{1/N} \left({\left\lVert \tilde{g} \right\rVert}^{\sim}\right)^{-1} B^{-2} m^{-1} \right) \Bigg\}.\end{aligned}$$ Note that the uniform boundedness of the weak dependence coefficients from Equation  is necessary in order to apply Proposition \[WeaklyCDependent\] uniformly in $j$. 
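The three-part split used here is an exact pointwise identity, $y_{j,s} = y_{j,s}^{(B)} + \max( y_{j,s}-B,0) + \min( y_{j,s}+B,0)$, as a quick numerical sketch confirms (the helper name is illustrative):

```python
def truncation_parts(y, B):
    """Split y into the clipped part y^(B) and the two exceedances."""
    y_B = min(B, max(-B, y))   # y^(B): y clipped to [-B, B]
    pos = max(y - B, 0.0)      # exceedance above B
    neg = min(y + B, 0.0)      # exceedance below -B
    return y_B, pos, neg

# the three parts always sum back to y, whichever regime y falls into
for y in (-7.5, -2.0, 0.0, 1.5, 9.0):
    assert abs(sum(truncation_parts(y, B=2.0)) - y) < 1e-12
```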
Consequently, the choice $B=\left( {\varepsilon}^2 |I_n|^{1/N} \left({\left\lVert \tilde{g} \right\rVert}^{\sim}\right)^{-1} m^{-1} \right)^{1/(2+\gamma)}$ yields $$\begin{aligned} {\mathbbm{P}}\left( |I_{n}|^{-1} { {\left \lVert S_{n}- {\mathbb{E}\left [ \, S_n \, \right ]} \right \rVert}_{{\mathcal{H}}} } \ge {\varepsilon}\right) &\le A_1 \inf_{m\in{\mathbb{N}}} \Biggl\{ m \left[1+ \left( |I_n|^{1/N} \left({\left\lVert \tilde{g} \right\rVert}^{\sim}\right)^{-1} \right)^{(1-\gamma)/(2+\gamma)} m^{3\gamma/(2(2+\gamma))} {\varepsilon}^{-3\gamma/(2+\gamma)} \right]\\ &\quad \cdot \exp\left[ - A_2 \left( {\varepsilon}^2 |I_n|^{1/N} \left({\left\lVert \tilde{g} \right\rVert}^{\sim}\right)^{-1} \right)^{\gamma/(2+\gamma)} m^{-\gamma/(2+\gamma)} \right] + \left(\frac{2}{{\varepsilon}}\right)^2\exp( -A_2 m ) \Biggl \}.\end{aligned}$$ Choosing $m$ proportional to $({\varepsilon}^2 |I_n|^{1/N} \left({\left\lVert \tilde{g} \right\rVert}^{\sim}\right)^{-1})^{\gamma/(2+2\gamma)}$ yields the rate in . Applications in the functional kernel regression model {#Section_Application} ====================================================== In this section, let $\mathcal{D}$ be a convex and compact subset of ${\mathbb{R}}^d$. The Hilbert space ${\mathcal{H}}$ is given by the function space $L^2( {\mathcal{D}}, {\mathcal{B}}({\mathcal{D}}), \nu)$ over the field ${\mathbb{R}}$, where $\nu$ is a finite measure, e.g., the Lebesgue measure or a probability measure. The inner product on ${\mathcal{H}}$ is ${\left\langle x,y \right\rangle} = \int_\mathcal{D} xy {\,\mathrm{d}\nu}$. We assume that ${\mathcal{S}}$ is a superset of the continuous functions on ${\mathcal{D}}$ and a subset of ${\mathcal{H}}$, i.e., $C^0({\mathcal{D}}) \subseteq {\mathcal{S}}\subseteq {\mathcal{H}}$. Consider a pseudo-metric $d$ on ${\mathcal{S}}$ which satisfies $d(x,y) \le { {\left \lVert x-y \right \rVert}_{{\mathcal{H}}} }= (\int_{\mathcal{D}}|x-y|^2{\,\mathrm{d}\nu})^{1/2}$ for all $x,y \in {\mathcal{S}}$. 
An example for $d$ would be a projection-based pseudo-metric. We study the strictly stationary process $( (X_s,Y_s) : s\in{\mathbb{Z}}^N)$, $N\in{\mathbb{N}}_+$, where $X_s$ takes values in ${\mathcal{S}}$ and $Y_s$ takes values in ${\mathcal{H}}$. The process satisfies the functional regression model $$\begin{aligned} \label{GenX} Y_s = \Psi(X_s) + {\varepsilon}_s,\quad s\in{\mathbb{Z}}^N, \end{aligned}$$ where the error terms ${\varepsilon}_s$ are ${\mathcal{H}}$-valued with ${\mathbb{E}\left [ \, {\varepsilon}_s | X_s \, \right ]} =0$. We estimate the operator $\Psi: {\mathcal{S}}\rightarrow{\mathcal{H}}$ with the methods from the kernel regression framework of [@ferraty2004nonparametric], [@ferraty_nonparametric_2007] and [@ferraty2012regression]. An important variable in this model is the small ball probability function, which is defined with the help of $d$ as $F_x(h) = {\mathbbm{P}}( d(X_s,x)\le h )$, for $h\ge0$. Let $K$ be a kernel function; we write $K_h \coloneqq K( \cdot /h)$ and estimate the operator $\Psi$ pointwise by $$\begin{aligned} \begin{split}\label{DefHatPsi} &\hat{\Psi}_h(x) \coloneqq \frac{ \hat{g}_h(x) }{ \hat{f}_h(x) } \in{\mathcal{H}}, \quad \text{ for } x\in {\mathcal{S}}, \text{ where }\\ &\qquad\qquad \hat{f}_h (x) \coloneqq (|I_n| F_x(h))^{-1} \sum_{s\in I_n} K_h (d(X_s,x) ) \in {\mathbb{R}}\text{ and }\\ &\qquad\qquad\qquad\qquad \hat{g}_h (x)\coloneqq (|I_n| F_x(h))^{-1} \sum_{s\in I_n} Y_s K_h ( d(X_s,x) ) \in {\mathcal{H}}. \end{split}\end{aligned}$$ ${\mathcal{H}}$ is equipped with an orthonormal basis $\{e_j:j\in{\mathbb{N}}\}$. Denote by $\psi_{j} \coloneqq {\left\langle \Psi(\cdot),e_j \right\rangle}$ the $j$-th coordinate of the operator $\Psi$ w.r.t. the orthonormal basis and by $y_{j,s} \coloneqq {\left\langle Y_s,e_j \right\rangle}$ the $j$-th coordinate of the process $Y_s$. Set $y_{j,s}^{(B)} \coloneqq \min( B, \max(-B,y_{j,s}))$ for $B\ge0$. 
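To make the pointwise estimator $\hat{\Psi}_h$ concrete, here is a minimal discretized sketch (the function name, the triangular kernel and the grid handling are illustrative choices, not from the text). Note that $F_x(h)$ cancels in the ratio $\hat{g}_h(x)/\hat{f}_h(x)$, so it need not be estimated for a pointwise evaluation:

```python
import numpy as np

def functional_nw_estimate(X, Y, x, h, grid_weights):
    """Minimal sketch of the kernel estimator hat{Psi}_h(x).

    X : (n, p) array of discretized covariate curves X_s on a grid of D
    Y : (n, q) array of discretized response curves Y_s
    x : (p,) array, the evaluation curve
    h : bandwidth
    grid_weights : (p,) quadrature weights approximating the measure nu
    """
    # d(X_s, x) approximated by the weighted L2 distance on the grid
    dist = np.sqrt(((X - x) ** 2 * grid_weights).sum(axis=1))
    # triangular kernel K(u) = max(1 - u, 0): support [0, 1], K' <= 0, K(1) = 0
    w = np.clip(1.0 - dist / h, 0.0, None)
    if w.sum() == 0.0:
        raise ValueError("empty neighbourhood: increase the bandwidth h")
    # F_x(h) cancels between hat{g}_h and hat{f}_h, leaving a weighted mean
    return (w[:, None] * Y).sum(axis=0) / w.sum()
```

With responses generated as $Y_s = \Psi(X_s) + {\varepsilon}_s$, the weighted mean approximates $\Psi(x)$ once sufficiently many curves fall within distance $h$ of $x$.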
Moreover, define $\vartheta_{x,j}(u) \coloneqq {\mathbb{E}\left [ \, \psi_j(X_s)-\psi_j(x) \,\big|\, d(X_s,x)=u \, \right ]}$ for $j\in{\mathbb{N}}$, $u\ge 0$ and $x\in{\mathcal{S}}$. We write ${\left\lVert x \right\rVert}_{\infty}$ for the essential supremum of a function $x$ on $\mathcal{D}$ w.r.t. $\nu$ and make the following assumptions: 1. \[Condition1\] $\Psi\colon {\mathcal{S}}\to{\mathcal{H}}$ is uniformly Hölder continuous of order $r$ w.r.t. ${ {\left \lVert \cdot \right \rVert}_{{\mathcal{H}}} }$, i.e., ${ {\left \lVert \Psi(x)-\Psi(y) \right \rVert}_{{\mathcal{H}}} }\le L_\Psi { {\left \lVert x-y \right \rVert}_{{\mathcal{H}}} }^r$ for some $r\in(0,1]$. For some $\delta>0$, all $0\le u \le \delta$, all $j\in{\mathbb{N}}$ and all $x\in{\mathcal{S}}$, $\vartheta_{x,j}(0)=0$, $\vartheta'_{x,j}(u)$ exists and $\vartheta'_{x,j}(u)$ is uniformly Hölder continuous of order $r$, i.e., there is a $0<L_{x,j}< \infty$ such that $|\vartheta'_{x,j}(u)-\vartheta'_{x,j}(0)|\le L_{x,j} u^r$ for all $0\le u\le \delta$. Additionally, $\sup_{x\in{\mathcal{S}}} \sum_{j\in{\mathbb{N}}} \vartheta'_{x,j}(0)^2 < \infty$ and $\sup_{x\in{\mathcal{S}}} \sum_{j\in{\mathbb{N}}} L_{x,j}^2 <\infty$. 2. \[Condition2\] the kernel $K$ has support in $[0,1]$ and has a continuous derivative $K'\le 0$. The Lipschitz constant of $K$ on $[0,1]$ is denoted by $L_K$, i.e., $\left| K(u)-K(v)\right| \le L_K |u-v|$ for all $u,v\in [0,1]$. 3. \[Condition3\] $K(1) = 0$, which implies that the kernel function is Lipschitz continuous on ${\mathbb{R}}_+$. 4. \[Condition4\] the small ball probability $F_x(h)={\mathbbm{P}}(d(X_s,x)\le h)$ is positive for all $h>0$ and for all $x\in {\mathcal{S}}$. 
The limit of the quotient $\tau_x(u)\coloneqq \lim_{h\downarrow 0 } F_x(hu) /F_x(h)$ exists for all $u\in[0,1]$ and all $x\in{\mathcal{S}}$ and it is uniform: $$\begin{aligned} \label{EqSmallBall} \lim_{h\downarrow 0} \;\sup_{x\in{\mathcal{S}}}\; \sup_{u\in[0,1]} \left| \frac{ F_x(hu)}{F_x(h)} - \tau_x(u) \right| = 0.\end{aligned}$$ 5. \[Condition5\] $M_x \coloneqq K(1)-\int_0^1 K'(u)\tau_x(u)\,{\,\mathrm{d}u} > 0$ for all $x\in{\mathcal{S}}$ and $\inf_{x\in{\mathcal{S}}}M_x >0$. 6. \[Condition6\] there is a $\delta>0$ such that the small ball probability quotient $$\begin{aligned} {\mathcal{S}}\times[0,1] \ni (z,u) \mapsto F_z(hu)/ F_x(h)\end{aligned}$$ is Lipschitz continuous for each fixed point $x\in{\mathcal{S}}$ with Lipschitz constant $L_{x} $ which is uniform in $h$ for $h\le \delta$. 7. \[Condition7\] the tail of the distribution of the $Y_s$ decays exponentially, i.e., ${\mathbbm{P}}( { {\left \lVert Y_s \right \rVert}_{{\mathcal{H}}} } \ge z ) \le \kappa_0 \exp(-\kappa_1 z^{\gamma})$ for some $\gamma \ge 1$. Furthermore, there are positive constants $d_0,d_1$ such that $${\mathbb{E}\left [ \, {\left\langle Y_s,e_j \right\rangle}^2 \, \right ]} \le d_0 \exp(-d_1 j).$$ 8. \[Condition8\] set $\tilde{\vartheta}_x(u) = {\mathbb{E}\left [ \, {\left\lVert Y_s \right\rVert} | d(X_s,x)=u \, \right ]}$. Then $\sup_{x\in{\mathcal{S}}, {\left\lVert x \right\rVert}_{\infty}\le R} \tilde{\vartheta}_x(0) = {\mathcal{O}}(R^r)$. Moreover, there is a $\delta>0$ such that for all $x\in{\mathcal{S}}$ and $0\le u\le \delta$ the derivative $\tilde{\vartheta}'_x(u)$ exists and $\sup_{x\in{\mathcal{S}}, u\le \delta} |\tilde{\vartheta}'_x(u)| < \infty$. 9. \[Condition9\] the process $\{ (X_s,Y_s): s\in {\mathbb{N}}^N \}$ is strongly spatial mixing with exponentially decreasing mixing coefficients such that $\alpha(k) \le c_0 \exp( -c_1 k)$ for $\alpha$ defined as in Equation . 10. 
\[Condition10\] the pseudo-norm on ${\mathcal{C}}$ from is defined by $ {\left\lVert g \right\rVert}^{\sim} \coloneqq \sup_{x,y\in {\mathcal{S}}, x\neq y} |g(x)-g(y)| / { {\left \lVert x-y \right \rVert}_{{\mathcal{H}}} }. $ The process $(X,Y)$ is uniformly ${\mathcal{C}}$-weakly dependent in the sense that the coordinate processes of the $Y_s$ satisfy $$\sup_{j\in{\mathbb{N}}} \sup_{B>0} \sum_{i\in{\mathbb{N}}} {\varphi}_{{\mathcal{C}},y^{(B)}_{j,s}}(i) < \infty \text{ and } \sum_{i\in {\mathbb{N}}} {\varphi}_{{\mathcal{C}}}(i) < \infty.$$ Condition \[Condition1\] ensures that the regression operator is uniformly continuous on ${\mathcal{S}}\subseteq {\mathcal{H}}$, w.r.t. the norm ${ {\left \lVert \cdot \right \rVert}_{{\mathcal{H}}} }$ which is stronger than the pseudo-metric $d$. The requirement on the conditional expectation functions is not uncommon, a similar assumption is made in [@ferraty2012regression]. It ensures in particular that the conditional expectation of the difference of the full operator $\Psi(X_s)-\Psi(x)$ admits a meaningful first order expansion w.r.t. $d(X_s,x)$. Condition \[Condition2\] contains standard assumptions on the kernel, see [@ferraty_nonparametric_2007]. For the concept of weak dependence, we need in the following that the kernel function $K$ is continuous, thus, in this case Condition \[Condition3\] is additionally necessary. Condition \[Condition4\] can be motivated by the following observation: since the underlying Hilbert space is a function space, one has in many applications that for a point $x$ in the Hilbert space ${\mathbbm{P}}( {\left\lVert X_s - x \right\rVert} \le h ) \sim C(x) {\mathbbm{P}}( {\left\lVert X_s \right\rVert} \le h)$ for $h \downarrow 0$. For further details see e.g. [@ferraty2006estimating], [@ferraty_nonparametric_2007] and [@ferraty2012regression]. The positivity of the moments $M_x$ in Condition \[Condition5\] is technical and guaranteed if $K(1)>0$. 
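As an illustration, even under Condition \[Condition3\], i.e., $K(1)=0$, the constants $M_x$ can be positive: the quadratic kernel $K(u) = 1-u^2$ on $[0,1]$ (and $K=0$ elsewhere) satisfies Conditions \[Condition2\] and \[Condition3\] with $L_K = 2$ and $K'(u) = -2u \le 0$, and in this case $$\begin{aligned} M_x = K(1)-\int_0^1 K'(u)\tau_x(u)\,{\,\mathrm{d}u} = 2\int_0^1 u\,\tau_x(u) \,{\,\mathrm{d}u},\end{aligned}$$ which is positive whenever $\tau_x$ is not almost everywhere zero on $[0,1]$. 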
Similarly, Conditions \[Condition6\] to \[Condition8\] guarantee certain technical properties of the estimator $\hat{\Psi}$ in the subsequent proofs. Condition \[Condition9\] is not unusual when the data are assumed to be $\alpha$-mixing; it is also used in [@ferraty2004nonparametric]. Condition \[Condition10\] plays the corresponding role when the data are ${\mathcal{C}}$-weakly dependent. Define on $C^0({\mathcal{D}})$ the norm $$\begin{aligned} \label{NormC0} {\left\lVert x \right\rVert}_{1,C^0({\mathcal{D}})} \coloneqq \sup_{u\in \mathcal{D} } |x(u)| + \sup_{u,v\in \mathcal{D}, u\neq v } \frac{ \left| x(u) - x(v) \right|}{ {\left\lVert u-v \right\rVert} }.\end{aligned}$$ Consider for $R>0$ the $\delta$-covering number $ N({\mathcal{G}}(R),\delta,{ {\left \lVert \cdot \right \rVert}_{{\mathcal{H}}} })$ of the set $ {\mathcal{G}}(R) \coloneqq \left \{ x\in C^0(\mathcal{D}): {\left\lVert x \right\rVert}_{1,C^0({\mathcal{D}})} \le R \right\}$ w.r.t. the norm ${ {\left \lVert \cdot \right \rVert}_{{\mathcal{H}}} }$. Then the following is well known: \[TotallyBoundednessFunctionalCase\] The set ${\mathcal{G}}(R)$ is totally bounded and there is a constant $C$ which only depends on $d$ such that the covering number w.r.t. the ${ {\left \lVert \cdot \right \rVert}_{{\mathcal{H}}} }$-norm on the function space ${\mathcal{H}}$ satisfies $ \log N({\mathcal{G}}(R),\delta,{ {\left \lVert \cdot \right \rVert}_{{\mathcal{H}}} }) \le C \lambda( {\mathcal{D}}^1 ) ( \sqrt{\nu(\mathcal{D})}R / \delta )^{d}$, where $ {\mathcal{D}}^1 \coloneqq \{u\in{\mathbb{R}}^d: \; \exists v\in{\mathcal{D}}:\, {\left\lVert u-v \right\rVert}_{\infty}\le 1\}$ and $\lambda$ denotes the Lebesgue measure. By Theorem 2.7.1 in [@van2013weak] the logarithm of the covering number of ${\mathcal{G}}(1)$ w.r.t. the supremum norm can be bounded by $\lambda( {\mathcal{D}}^1 ) (1 / \delta )^d$ times a constant which only depends on $d$. Now, note that the covering number of ${\mathcal{G}}(R)$ w.r.t. 
the 2-norm on $\mathcal{D}$ can be bounded by the $\delta/\sqrt{\nu(\mathcal{D})}$-covering number of ${\mathcal{G}}(R)$ w.r.t. the supremum norm on $\mathcal{D}$ which in turn can be bounded by the $\delta/(R\sqrt{\nu(\mathcal{D})})$-covering number of ${\mathcal{G}}(1)$ w.r.t. the supremum norm on $\mathcal{D}$. This finishes the proof. \[UnifConvHatF\] $\lim_{h\rightarrow 0} \sup\{ | {\mathbb{E}\left [ \, {K_h\left( d(X_0,x) \right)} F_x(h)^{-1} \, \right ]} - M_x |: x\in {\mathcal{S}}\} = 0$. In particular, ${\mathbb{E}\left [ \, \hat{f}_h(x) \, \right ]} \rightarrow M_x$ uniformly in $x\in{\mathcal{S}}$ for any choice of the bandwidth $h=h_n$ which vanishes if $n$ converges to infinity. The claim follows from the assumption of the uniform convergence of the small ball probability and the expansion provided in [@ferraty_nonparametric_2007]. Let $x\in{\mathcal{S}}$ be fixed, then $$\begin{aligned} \left| {\mathbb{E}\left [ \, \frac{{K_h\left( d(X_0,x) \right)}}{F_x(h) } \, \right ]}-M_x \right| &= \left| \int_0^1 K'(u) \left( \frac{ F_x(hu)}{F_x(h)} - \tau_x(u) \right) {\,\mathrm{d}u} \right| \\ &\le \int_0^1 |K'(u)|{\,\mathrm{d}u}\; \sup_{x\in{\mathcal{S}}} \sup_{u\in[0,1]} \left| \frac{ F_x(hu)}{F_x(h)} - \tau_x(u) \right| \rightarrow 0.\end{aligned}$$ The last inequality is independent of $x\in{\mathcal{S}}$. We give two results on the consistency of the estimator $\hat{\Psi}$. The first one applies to the case where the data is strongly spatial mixing, the second one applies to ${\mathcal{C}}$-weakly dependent data. For both results the number $\inf_{x\in{\mathcal{G}}(R)} F_x(h)$ will be of interest. It depends on the bandwidth $h$, the radius $R$ of the set ${\mathcal{G}}(R)$ and on the spatial process $X$ itself. So $R$, $\inf_{x\in{\mathcal{G}}(R)} F_x(h)$ and $h$ can be mutually dependent in a complex way which is of particular interest if $R$ converges to infinity. 
This has also consequences for the proofs of the upcoming Theorem \[UniformConvHatPsi\] and Theorem \[UniformConvHatPsiV\] where we need to construct a $\delta$-covering of the set of functions ${\mathcal{G}}(R)$ which depends on the radius $R$. To avoid this dependence, we choose $\delta$ only to depend on the sample size $|I_n|$ and not on the numbers $R$, $\inf_{x\in{\mathcal{G}}(R)} F_x(h)$ and $h$. \[UniformConvHatPsi\] Let Conditions (\[Condition1\]), (\[Condition2\]) and (\[Condition4\]) - (\[Condition9\]) be satisfied. Let $(n_k:k\in{\mathbb{N}})$ be a sequence in ${\mathbb{N}}^N$ which converges to infinity. Let $R_n$ be a real-valued sequence which has a limit in $(0,\infty]$ and assume that the bandwidth $h=h_n$ converges to zero such that $$\frac{ R_n^{5d/2} (\log |I_n|)^7 }{ |I_n|^{1/N\cdot 2/(5d+2)} \, \inf_{x\in{\mathcal{G}}(R_n) } F_{x} (h) } \rightarrow 0 \text{ and } \frac{R_n^r}{ |I_n|^{1/N \cdot 2/(5d+2) } \, h } \rightarrow 0.$$ Then $$\sup_{x\in {\mathcal{G}}(R_n) } { {\left \lVert \hat{\Psi}_h(x) - \Psi(x) \right \rVert}_{{\mathcal{H}}} } = {\mathcal{O}}\left( \frac{ R_n^{5d/2} (\log |I_n|)^{7} }{ |I_n|^{1/N \cdot 2/(5d+2) } \, \inf_{x\in{\mathcal{G}}(R_n) } F_x(h) } \right) + {\mathcal{O}}\left(\frac{R_n^r}{|I_n|^{1/N \cdot 2/(5d+2) } \,h } \right) + {\mathcal{O}}\left( h^r \right) \quad a.s.$$ Before we begin with the proof, we define $\delta_n \coloneqq |I_n|^{-1/N\cdot 2/(2+5d)}$ and choose a function $V(n)$ which is proportional to $$\frac{ (R_n/\delta_n)^{5d/2} (\log |I_n|)^7 }{ \inf_{x\in{\mathcal{G}}(R_n) } F_x(h)\, |I_n|^{1/N} }$$ and which we will use later. 
We follow [@collomb1977estimation] and consider the difference $\hat{\Psi}_h(x) - \Psi(x)$ on the ball ${\mathcal{G}}= {\mathcal{G}}(R)$: $$\begin{aligned} \hat{\Psi}_h(x) - \Psi(x) &= (\hat{f}_h(x))^{-1} \Bigg\{ \left( \hat{g}_h(x) - {\mathbb{E}\left [ \, \hat{g}_h(x) \, \right ]} \right) - \Psi(x)\left( \hat{f}_h(x) - {\mathbb{E}\left [ \, \hat{f}_h(x) \, \right ]} \right) \\ &\qquad \qquad \qquad +\left( {\mathbb{E}\left [ \, \hat{g}_h(x) \, \right ]} - \Psi(x){\mathbb{E}\left [ \, \hat{f}_h(x) \, \right ]} \right) \Bigg\} . \end{aligned}$$ Thus, $$\begin{aligned} \begin{split}\label{UniformConvHatPsiEq1} \sup_{x\in{\mathcal{G}}} { {\left \lVert \hat{\Psi}_h(x) - \Psi(x) \right \rVert}_{{\mathcal{H}}} } &\le \Bigg\{ \sup_{x\in{\mathcal{G}}} { {\left \lVert \hat{g}_h(x) - {\mathbb{E}\left [ \, \hat{g}_h(x) \, \right ]} \right \rVert}_{{\mathcal{H}}} } + \sup_{x\in{\mathcal{G}}} { {\left \lVert \Psi(x) \right \rVert}_{{\mathcal{H}}} } \cdot \sup_{x\in{\mathcal{G}}} \left| \hat{f}_h(x) - {\mathbb{E}\left [ \, \hat{f}_h(x) \, \right ]} \right| \\ &\qquad\qquad + \sup_{x\in{\mathcal{G}}} { {\left \lVert {\mathbb{E}\left [ \, \hat{g}_h(x) \, \right ]} - \Psi(x){\mathbb{E}\left [ \, \hat{f}_h(x) \, \right ]} \right \rVert}_{{\mathcal{H}}} } \Bigg\} \Biggl/ \inf_{x\in{\mathcal{G}}} \hat{f}_h(x). 
\end{split}\end{aligned}$$ The third term in the numerator of can be bounded by $ \sup_{x\in{\mathcal{S}}} { {\left \lVert {\mathbb{E}\left [ \, \hat{g}_h(x) \, \right ]} - \Psi(x){\mathbb{E}\left [ \, \hat{f}_h(x) \, \right ]} \right \rVert}_{{\mathcal{H}}} } = {\mathcal{O}}(h^r) $: $$\begin{aligned} { {\left \lVert {\mathbb{E}\left [ \, \hat{g}_h(x)-\Psi(x)\hat{f}_h(x) \, \right ]} \right \rVert}_{{\mathcal{H}}} }^2 &= \sum_{j\in{\mathbb{N}}} {\mathbb{E}\left [ \, (\psi_j(X_s)-\psi_j(x)) \frac{ {K_h\left( d(X_s,x) \right)} }{F_x(h)} \, \right ]} ^2 = \sum_{j\in{\mathbb{N}}} {\mathbb{E}\left [ \, \vartheta_{x,j}(d(X_s,x)) \frac{ {K_h\left( d(X_s,x) \right)} }{F_x( h)} \, \right ]} ^2 \\ &\le 2 \sum_{j\in{\mathbb{N}}} {\mathbb{E}\left [ \, \frac{ {K_h\left( d(X_s,x) \right)} }{F_x( h)} \vartheta'_{x,j}(0) d(X_s,x) \, \right ]}^2 + 2 \sum_{j\in{\mathbb{N}}} {\mathbb{E}\left [ \, \frac{ {K_h\left( d(X_s,x) \right)} }{F_x( h)} L_{x,j} h^r \, \right ]}^2.\end{aligned}$$ Note that the left-hand side of the last inequality is in ${\mathcal{O}}( h^{2r})$ uniformly in $x\in{\mathcal{S}}$ because both $\sup_{x\in{\mathcal{S}}}\sum_{j\in{\mathbb{N}}} L_{x,j}^2 <\infty$ and $\sup_{x\in{\mathcal{S}}}\sum_{j\in{\mathbb{N}}} \vartheta'_{x,j}(0)^2<\infty$ and because ${\mathbb{E}\left [ \, {K_h\left( d(X_s,x) \right)} / F_x( h) \, \right ]} $ converges uniformly to $M_x$ by Lemma \[UnifConvHatF\] and $\sup_{x\in{\mathcal{S}}} M_x < \infty$. 
The denominator in can be bounded as $$\begin{aligned} \inf_{x\in{\mathcal{G}}} \hat{f}_h(x) &\ge \inf_{x\in{\mathcal{G}}} {\mathbb{E}\left [ \, \hat{f}_h(x) \, \right ]} - \sup_{x\in {\mathcal{G}}} \left| \hat{f}_h(x) - {\mathbb{E}\left [ \, \hat{f}_h(x) \, \right ]} \right| \nonumber \\ &\ge \inf_{x\in{\mathcal{S}}} M_x - \sup_{x\in {\mathcal{S}}} \left| {\mathbb{E}\left [ \, \hat{f}_h(x) \, \right ]}-M_x \right| - \sup_{x\in{\mathcal{G}}} \left| \hat{f}_h(x) - {\mathbb{E}\left [ \, \hat{f}_h(x) \, \right ]} \right|.\label{UniformConvHatPsiEq2}\end{aligned}$$ By assumption, the infimum on the right-hand side of is positive and the first supremum converges to zero by Lemma \[UnifConvHatF\]. In order to show that the right-hand side of is positive, it remains to show that the second supremum converges to zero $a.s.$ We demonstrate this implicitly when considering the two remaining terms of the numerator of Equation  $$\sup_{x\in{\mathcal{G}}} { {\left \lVert \hat{g}_h(x) - {\mathbb{E}\left [ \, \hat{g}_h(x) \, \right ]} \right \rVert}_{{\mathcal{H}}} } \text{ and } \sup_{x\in{\mathcal{G}}} { {\left \lVert \Psi(x) \right \rVert}_{{\mathcal{H}}} } \cdot \sup_{x\in{\mathcal{G}}} \left| \hat{f}_h(x) - {\mathbb{E}\left [ \, \hat{f}_h(x) \, \right ]} \right| .$$ We can bound $\sup_{x\in{\mathcal{G}}} { {\left \lVert \Psi(x) \right \rVert}_{{\mathcal{H}}} }$ by ${ {\left \lVert \Psi(0) \right \rVert}_{{\mathcal{H}}} } + L_{\Psi} \nu({\mathcal{D}})^{r/2} R^r$. In the sequel, we write for simplicity ${\left\lVert \cdot \right\rVert}$ both for ${ {\left \lVert \cdot \right \rVert}_{{\mathcal{H}}} }$ and $|\cdot|$, so we can treat both cases at the same time. 
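The bound on $\sup_{x\in{\mathcal{G}}} { {\left \lVert \Psi(x) \right \rVert}_{{\mathcal{H}}} }$ follows directly from Condition \[Condition1\]: for $x\in{\mathcal{G}}(R)$ we have ${\left\lVert x \right\rVert}_{\infty} \le R$ by the definition of ${\left\lVert \cdot \right\rVert}_{1,C^0({\mathcal{D}})}$, hence $$\begin{aligned} { {\left \lVert x \right \rVert}_{{\mathcal{H}}} } = \left( \int_{\mathcal{D}} |x|^2 {\,\mathrm{d}\nu} \right)^{1/2} \le \nu({\mathcal{D}})^{1/2} {\left\lVert x \right\rVert}_{\infty} \le \nu({\mathcal{D}})^{1/2} R, \end{aligned}$$ and therefore ${ {\left \lVert \Psi(x) \right \rVert}_{{\mathcal{H}}} } \le { {\left \lVert \Psi(0) \right \rVert}_{{\mathcal{H}}} } + L_{\Psi} { {\left \lVert x \right \rVert}_{{\mathcal{H}}} }^{r} \le { {\left \lVert \Psi(0) \right \rVert}_{{\mathcal{H}}} } + L_{\Psi} \nu({\mathcal{D}})^{r/2} R^r$. 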
Consider the following generic situation $$\begin{aligned} \label{UniformConvHatPsiEq3} \sup_{x\in{\mathcal{G}}} {\left\lVert \frac{1}{|I_n|} \sum_{s\in I_n} \tilde{Y}^{(l)}_s \frac{ {K_h\left( d(X_s,x) \right)}}{F_x(h)} - {\mathbb{E}\left [ \, \tilde{Y}^{(l)}_s \frac{ {K_h\left( d(X_s,x) \right)}}{F_x(h)} \, \right ]} \right\rVert},\end{aligned}$$ where $\tilde{Y}^{(l)}_s = Y_{s}$ if $l=1$ and $\tilde{Y}^{(l)}_s = { {\left \lVert \Psi(0) \right \rVert}_{{\mathcal{H}}} } + L_{\Psi} \nu({\mathcal{D}})^{r/2} R^r$ if $l=0$. Next, choose a $\delta_n$-covering of ${\mathcal{G}}$ w.r.t. the norm ${ {\left \lVert \cdot \right \rVert}_{{\mathcal{H}}} }$, i.e., there are points $v_1,\ldots,v_m$ such that for all $x\in{\mathcal{G}}$ there is a point $v_j$ with the property $d(x,v_j)\le{ {\left \lVert x-v_j \right \rVert}_{{\mathcal{H}}} }<\delta_n$. The covering number $m\coloneqq N({\mathcal{G}}(R_n),\delta_n,{ {\left \lVert \cdot \right \rVert}_{{\mathcal{H}}} })$ depends on $\delta_n$. Then we can bound  as $$\begin{aligned} \label{UniformConvHatPsiEq4} \begin{split} &\max_{1\le j \le m} {\left\lVert \frac{1}{|I_n|} \sum_{s\in I_n} \tilde{Y}^{(l)}_s \frac{ {K_h\left( d(X_s,v_j) \right)}}{F_{v_j} (h)} - {\mathbb{E}\left [ \, \tilde{Y}^{(l)}_s \frac{ {K_h\left( d(X_s,v_j) \right)}}{F_{v_j}(h)} \, \right ]} \right\rVert} \\ &\quad + \max_{1\le j \le m} \sup_{x\in U_{\delta}(v_j)} {\left\lVert \frac{1}{|I_n|} \sum_{s\in I_n} \tilde{Y}^{(l)}_s \left\{ \frac{ {K_h\left( d(X_s,x) \right)}}{F_{x} (h)} - \frac{ {K_h\left( d(X_s,v_j) \right)}}{F_{v_j}(h)} \right\} \right\rVert} \\ &\quad + \max_{1\le j \le m} \sup_{x\in U_{\delta}(v_j)} {\left\lVert {\mathbb{E}\left [ \, \frac{1}{|I_n|} \sum_{s\in I_n} \tilde{Y}^{(l)}_s \left\{ \frac{ {K_h\left( d(X_s,x) \right)}}{F_{x} (h)} - \frac{ {K_h\left( d(X_s,v_j) \right)}}{F_{v_j}(h)} \right\} \, \right ]} \right\rVert}. 
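The covering number $m = N({\mathcal{G}}(R_n),\delta_n,{\left\lVert \cdot \right\rVert})$ can be made concrete in a finite-dimensional toy setting with a greedy construction. The sketch below (illustrative only, on a discretized cube in ${\mathbb{R}}^d$) shows the behavior behind the $(R_n/\delta_n)^d$ factor in the exponential bounds: shrinking $\delta$ multiplies the number of centers by roughly $2^d$ per halving.

```python
import numpy as np

def greedy_covering(points, delta):
    """Greedy delta-covering: repeatedly pick an uncovered point as a new
    center and mark everything within delta of it as covered. The number
    of centers upper-bounds the covering number up to a constant factor."""
    centers = []
    uncovered = np.ones(len(points), dtype=bool)
    while uncovered.any():
        center = points[np.flatnonzero(uncovered)[0]]
        centers.append(center)
        uncovered &= np.linalg.norm(points - center, axis=1) > delta
    return np.array(centers)

# grid discretization of the cube [-R, R]^d
R, d = 1.0, 2
axes = [np.linspace(-R, R, 40)] * d
grid = np.stack(np.meshgrid(*axes), axis=-1).reshape(-1, d)

n_coarse = len(greedy_covering(grid, delta=0.5))
n_fine = len(greedy_covering(grid, delta=0.25))
# halving delta multiplies the count by roughly 2**d
```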
\end{split}\end{aligned}$$ We begin with the first term in and show that it vanishes $a.s.$ To this end, we first consider the functional case $\tilde{Y}^{(1)}_s=Y_s$. We infer from Theorem \[BernsteinHilbert\] and Lemma \[TotallyBoundednessFunctionalCase\] that for the choices $\delta=\delta_n$, $R=R_n$ and $h=h_n$ there are generic constants such that $$\begin{aligned} &{\mathbbm{P}}\left( \max_{1\le j \le m} { {\left \lVert \frac{1}{|I_n|} \sum_{s\in I_n} Y_{s} \frac{ {K_h\left( d(X_s,v_j) \right)}}{F_{v_j} (h)} - {\mathbb{E}\left [ \, Y_s \frac{ {K_h\left( d(X_s,v_j) \right)}}{F_{v_j}(h)} \, \right ]} \right \rVert}_{{\mathcal{H}}} } \ge z\right) \label{UniformConvHatPsiEq4bb} \\ &\le m\, \max_{1\le j\le m}\, {\mathbbm{P}}\left( { {\left \lVert \frac{1}{|I_n|} \sum_{s\in I_n} Y_s {K_h\left( d(X_s,v_j) \right)} - {\mathbb{E}\left [ \, Y_s {K_h\left( d(X_s,v_j) \right)} \, \right ]} \right \rVert}_{{\mathcal{H}}} } \ge z \inf_{x\in{\mathcal{G}}} F_{x} (h) \right) \nonumber \\ &\le A_1 \exp\left\{ A_2 \left(\frac{R_n}{\delta_n}\right)^d - A_3 \left( \frac{ z \inf_{x\in{\mathcal{G}}} F_{x} (h) |I_n|^{1/N} }{(\log |I_n|)^2}\right)^{2/5} \right\} \nonumber \\ &\qquad\qquad \cdot\left\{ \left( z \inf_{x\in{\mathcal{G}}} F_{x} (h) \right)^{-2} + \left( \frac{ z \inf_{x\in{\mathcal{G}}} F_{x} (h) |I_n|^{1/N}}{ (\log |I_n|)^{2}} \right)^{2/5} + \left( z \inf_{x\in{\mathcal{G}}} F_{x} (h) \right)^{-4/5} \left(\frac{|I_n|^{1/N}}{(\log |I_n|)^2}\right)^{1/5} \right\} . \nonumber\end{aligned}$$ If we multiply the factor $z$ inside the probability of by $V(n)$, we find that this probability is still summable for a sequence $(n_k:k\in{\mathbb{N}})\subseteq {\mathbb{N}}^N$ which converges to infinity.
Hence, it follows from the first Borel-Cantelli Lemma that $$\begin{aligned} &\max_{1\le j \le m} { {\left \lVert \frac{1}{|I_n|} \sum_{s\in I_n} Y_{s} \frac{ {K_h\left( d(X_s,v_j) \right)}}{F_{v_j} (h)} - {\mathbb{E}\left [ \, Y_s \frac{ {K_h\left( d(X_s,v_j) \right)}}{F_{v_j}(h)} \, \right ]} \right \rVert}_{{\mathcal{H}}} } = {\mathcal{O}}( V(n)) \quad a.s. \\ &= {\mathcal{O}}\left( \frac{ R_n^{5d/2} (\log |I_n|)^7 }{ \inf_{x\in{\mathcal{G}}} F_{x} (h) |I_n|^{1/N\cdot 2/(5d+2)} } \right) \quad a.s.\end{aligned}$$ This means in particular that the first summand in vanishes $a.s.$ in the functional case. Consider the first term in in the scalar case $l=0$. Note that $\tilde{Y}^{(0)}_s$ is the same for all $s$. We use the same bound on the covering number as before and obtain with Corollary \[CorBernsteinImproved\] generic constants $A_1$, $A_2$ and $A_3$ such that $$\begin{aligned} &{\mathbbm{P}}\left( \max_{1\le j \le m} \left| \frac{1}{|I_n|} \sum_{s\in I_n} \tilde{Y}^{(0)}_s \frac{ {K_h\left( d(X_s,v_j) \right)}}{F_{v_j} (h)} - {\mathbb{E}\left [ \, \tilde{Y}^{(0)}_s \frac{ {K_h\left( d(X_s,v_j) \right)}}{F_{v_j}(h)} \, \right ]} \right| \ge z \right) \label{UniformConvHatPsiEq4cc} \\ &\le m\, \max_{1\le j\le m} {\mathbbm{P}}\left( \left| \frac{1}{|I_n|} \sum_{s\in I_n} {K_h\left( d(X_s,v_j) \right)} - {\mathbb{E}\left [ \, {K_h\left( d(X_s,v_j) \right)} \, \right ]} \right| \ge z \inf_{x\in{\mathcal{G}}} F_{x} (h) / \tilde{Y}^{(0)}_0 \right) \nonumber \\ &\le A_1 \exp\left\{ A_2 \left(\frac{R_n}{\delta_n}\right)^d - A_3\, \frac{ z \inf_{x\in{\mathcal{G}}} F_{x} (h) |I_n|^{1/N} }{R_n^r ( \log |I_n|)^2 } \right\} .\nonumber \end{aligned}$$ Arguing similarly to before, we infer from Equation  that $$\begin{aligned} &\max_{1\le j \le m} \left| \frac{1}{|I_n|} \sum_{s\in I_n} \tilde{Y}^{(0)}_s \frac{ {K_h\left( d(X_s,v_j) \right)}}{F_{v_j} (h)} - {\mathbb{E}\left [ \, \tilde{Y}^{(0)}_s \frac{ {K_h\left( d(X_s,v_j) \right)}}{F_{v_j}(h)} \, \right ]} \right| \\ &=
{\mathcal{O}}\left( \frac{ (R_n / \delta_n)^d R_n^r (\log |I_n|)^3 }{ \inf_{x\in{\mathcal{G}}} F_{x} (h) |I_n|^{1/N} } \vee \frac{ R_n^r (\log |I_n|)^4 }{ \inf_{x\in{\mathcal{G}}} F_{x} (h) |I_n|^{1/N} } \right) \quad a.s.\\ &= {\mathcal{O}}\left( \frac{ R_n^{5d/2} (\log |I_n|)^7 }{ \inf_{x\in{\mathcal{G}}} F_{x} (h) |I_n|^{1/N\cdot 2/(5d+2)} } \right) \quad a.s.\end{aligned}$$ for a sequence $(n_k:k\in{\mathbb{N}})\subseteq{\mathbb{N}}^N$ which converges to infinity. In particular, the first summand in Equation  vanishes $a.s.$ in the real case, too. Next, we consider the third summand in ; similar considerations apply to the second summand if we use the exponential inequalities from Section \[Section\_ExponentialInequalities\], so we do not need to inspect the second summand more closely. We use the Lipschitz continuity of the kernel on the interval \[0,1\] and the uniform Lipschitz continuity of the small ball probability and bound the third summand as $$\begin{aligned} \label{UniformConvHatPsiEq5} \begin{split} &\max_{1\le j \le m} \sup_{x\in U_{\delta}(v_j)} {\mathbb{E}\left [ \, {\left\lVert \tilde{Y}^{(l)}_s \right\rVert} \Biggl\{ \left| \frac{ {K_h\left( d(X_s,x) \right)}}{F_{v_j} (h)} - \frac{ {K_h\left( d(X_s,v_j) \right)}}{F_{v_j} (h)} \right|\cdot \frac{ F_{v_j}(h)}{F_x(h)} + \frac{{K_h\left( d(X_s,v_j) \right)}}{ F_{v_j}(h)}\, \frac{ | F_x(h)-F_{v_j}(h)| }{F_x(h)} \Biggr\} \, \right ]} . \end{split}\end{aligned}$$ We write $U_{\delta}(y)$ for the $\delta$-neighborhood of $y\in{\mathcal{S}}$ w.r.t. the metric $d$ throughout the rest of this proof.
For the difference in the kernel functions in , we need to distinguish two cases which are given by the following two inclusions $$\begin{aligned} &\left\{ X_s\in U_h(v_j)\cap U_h(x) \right\} \subseteq \{ X_s\in U_h(v_j) \} \text{ and } \\ &\qquad\qquad \{X_s\in [U_h(v_j)\setminus U_h(x)] \cup[ U_h(x)\setminus U_h(v_j) ] \} \subseteq \{ X_s \in U_{h}(v_j)\setminus U_{h-\delta_n}(v_j) \} \cup \{ X_s \in U_{h}(x)\setminus U_{h-\delta_n}(x) \}.\end{aligned}$$ Moreover, note that the quotient of the small ball probability functions in Equation  can be bounded with the help of a fixed reference point in ${\mathcal{S}}$, namely 0, as: $$\frac{ | F_x(h)-F_{v_j}(h)| }{F_x(h)} \le \frac{ F_0(h)}{\inf_{x \in {\mathcal{G}}} F_x(h)} L_0 d(x,v_j) \le \frac{ L_0 \delta_n }{\inf_{x \in {\mathcal{G}}} F_x(h)}.$$ Furthermore, we have $ F_y(h) / F_x(h) \le 1 + C \delta_n / \inf_{x\in{\mathcal{G}}} F_x(h) $, whenever $d(x,y)\le \delta_n$, using the Lipschitz continuity of the small ball probability function. Since $\delta_n /\inf_{x\in{\mathcal{G}}} F_x(h)$ converges to 0, this implies in particular that the above ratio $F_{v_j}(h)/F_x(h)$ in is bounded. Thus, modulo a constant, we obtain for  the bound $$\begin{aligned} \label{UniformConvHatPsiEq6} \begin{split} &\max_{1\le j \le m} \mathbb{E} \Biggl[ {\left\lVert \tilde{Y}^{(l)}_s \right\rVert} \frac{\delta_n}{h} \frac{ {\,\mathbbm{1}\! \left\{ X_s \in U_h(v_j) \right\} } }{F_{v_j}(h)} \\ &\qquad\qquad\qquad+ {\left\lVert \tilde{Y}^{(l)}_s \right\rVert} \frac{ {\,\mathbbm{1}\! \left\{ X_s \in U_{h}(v_j) \setminus U_{h-\delta_n}(v_j) \right\} }+ {\,\mathbbm{1}\! \left\{ X_s \in U_{h}(x) \setminus U_{h-\delta_n}(x) \right\} } }{F_{v_j}(h) } \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad +{\left\lVert \tilde{Y}^{(l)}_s \right\rVert} \frac{ {K_h\left( d(X_s,v_j) \right)}}{F_{v_j}(h)} \frac{\delta_n }{\inf_{x\in{\mathcal{G}}} F_x(h) } \Biggr].
\end{split}\end{aligned}$$ The first two terms in  are from the difference in the kernel functions, the last one from the difference in the small ball probability functions. We begin with the case $l=0$. Using the uniform convergence result of Lemma \[UnifConvHatF\], we see that the first term in Equation  is in ${\mathcal{O}}( R_n^r \delta_n / h_n) = {\mathcal{O}}( R_n^r / (h_n\, |I_n|^{1/N \cdot 2/(5d+2)} ) ) $. Similarly, the third term is in ${\mathcal{O}}( R_n^r \delta_n / \inf_{x\in{\mathcal{G}}} F_x(h)) = {\mathcal{O}}( R_n^r /(\inf_{x\in{\mathcal{G}}} F_x(h) \, |I_n|^{1/N \cdot 2/(5d+2)} ) ) $. Note that we can bound $R_n^r$ by $R_n^{5d/2} (\log |I_n|)^7$ in the last ${\mathcal{O}}$-expression. For the second term in , we use the continuity of the quotient of the small ball probability functions w.r.t. a fixed reference point to find that this summand is in ${\mathcal{O}}( R_n^r \delta_n / \inf_{x\in{\mathcal{G}}} F_x(h))$. We continue with the case $l=1$ and consider the second term in . We write $\tilde{\vartheta}_x(u)$ for the conditional expectation function ${\mathbb{E}\left [ \, {\left\lVert Y_s \right\rVert}|d(X_s,x)=u \, \right ]}$, which is assumed to be differentiable in a neighborhood of zero. So we can use a Taylor expansion for the following difference $$\begin{aligned} \begin{split}\label{UniformConvHatPsiEq7} {\mathbb{E}\left [ \, {\left\lVert Y_s \right\rVert} \frac{ {\,\mathbbm{1}\! \left\{ X_s \in U_{h}(x) \setminus U_{h-\delta_n}(x) \right\} } }{F_{x}(h) } \, \right ]} &= {\mathbb{E}\left [ \, \left(\tilde{\vartheta}_x(0) + \tilde{\vartheta}'_x(Z_{1,s} ) d(X_s,x) \right) \frac{ {\,\mathbbm{1}\! \left\{ X_s \in U_{h}(x) \right\} } }{F_{x}(h) } \, \right ]} \\ &\quad - {\mathbb{E}\left [ \, \left(\tilde{\vartheta}_x(0) + \tilde{\vartheta}'_x(Z_{2,s}) d(X_s,x) \right) \frac{ {\,\mathbbm{1}\!
\left\{ X_s \in U_{h-\delta_n}(x) \right\} } }{F_{x}(h) } \, \right ]} \end{split}\end{aligned}$$ where the random variables $Z_{1,s}$ and $Z_{2,s}$ are between $x$ and $X_s$. We can give upper bounds on : $$\begin{aligned} &\tilde{\vartheta}_x(0) \frac{ F_x(h) - F_x(h-\delta_n)}{F_x(h)} + \sup_{u\le h} |\tilde{\vartheta}'_x( u )| h \frac{ F_x(h) + F_x(h-\delta_n)}{F_x(h)} \\ &\le C \left( \sup_{x\in{\mathcal{G}}} \tilde{\vartheta}_x(0) \frac{\delta_n}{\inf_{x\in{\mathcal{G}}} F_x(h)} + \sup_{x\in{\mathcal{G}}} \sup_{u\le h} |\tilde{\vartheta}'_x(u) | h \right) \in {\mathcal{O}}\left( R^r \frac{ \delta_n }{\inf_{x\in{\mathcal{G}}} F_x(h)} + h \right).\end{aligned}$$ Similarly, we find that the first term in  is in ${\mathcal{O}}( R^r \delta_n / h )$ and that the third term is in ${\mathcal{O}}( R^r \delta_n / \inf_{x\in{\mathcal{G}}} F_x(h) )$. This proves that  converges to zero, as does the third term in . Consequently, $$\begin{aligned} &\sup_{x\in{\mathcal{G}}} { {\left \lVert \Psi(x) \right \rVert}_{{\mathcal{H}}} } \sup_{x\in{\mathcal{G}}} \left| \hat{f}_h(x)-f(x)\right| + \sup_{x\in{\mathcal{G}}} { {\left \lVert \hat{g}_h(x)-g(x) \right \rVert}_{{\mathcal{H}}} } \\ &= {\mathcal{O}}\left( \frac{ R_n^{5d/2} (\log |I_n|)^{7} }{ \inf_{x\in{\mathcal{G}}(R_n) } F_x(h)\,|I_n|^{1/N \cdot 2/(5d+2) } } \right) + {\mathcal{O}}\left(\frac{R_n^r}{h \, |I_n|^{1/N \cdot 2/(5d+2) } } \right).\end{aligned}$$ This completes the proof. Next, we give a result for ${\mathcal{C}}$-weakly dependent processes. To this end, we consider the pseudo-norm on ${\mathcal{C}}$ defined by $$\begin{aligned} \label{SemiNorm} {\left\lVert g \right\rVert}^{\sim} = \sup_{u,v\in {\mathcal{S}}, u\neq v } \frac{\left| g(u)-g(v)\right|}{d(u,v)}\end{aligned}$$ for an element $g: {\mathcal{S}}\rightarrow {\mathbb{R}}$ such that ${\left\lVert g \right\rVert}_{\infty}<\infty$. We assume for the next theorem that the kernel function $K$ is zero at 1.
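The pseudo-norm ${\left\lVert \cdot \right\rVert}^{\sim}$ is simply a Lipschitz seminorm. A small numerical check (illustrative only, with a triangular kernel $K(u)=\max(1-u,0)$, which vanishes at 1) confirms that the seminorm of the rescaled kernel $K(h^{-1}\,\cdot\,)$ scales like $h^{-1}$:

```python
import numpy as np

def lipschitz_seminorm(f, grid):
    """Grid-based proxy for ||f||~ = sup_{u != v} |f(u) - f(v)| / |u - v|."""
    vals = f(grid)
    du = np.subtract.outer(grid, grid)
    dv = np.subtract.outer(vals, vals)
    mask = du != 0.0
    return float(np.max(np.abs(dv[mask] / du[mask])))

def scaled_kernel(h):
    # triangular kernel K(u) = max(1 - u, 0): Lipschitz on [0, 1], zero at 1
    return lambda t: np.maximum(1.0 - t / h, 0.0)

grid = np.linspace(0.0, 1.0, 401)
lip_half = lipschitz_seminorm(scaled_kernel(0.5), grid)
lip_quarter = lipschitz_seminorm(scaled_kernel(0.25), grid)
# lip_quarter / lip_half ~ 2: the seminorm grows like 1/h
```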
Note that we have in this case for the pseudo-norm ${\left\lVert \cdot \right\rVert}^{\sim}$ that ${\left\lVert K(h^{-1} d(\cdot,x)) \right\rVert}^{\sim} $ is proportional to $h^{-1}$ (from the reverse triangle inequality). \[UniformConvHatPsiV\] Let Conditions (\[Condition1\])-(\[Condition8\]) and (\[Condition10\]) be satisfied. Let $(n_k:k\in{\mathbb{N}})$ be a sequence in ${\mathbb{N}}^N$ which converges to infinity. Let $R_n$ be a real-valued sequence which has a limit in $(0,\infty]$ and assume that the bandwidth $h=h_n$ converges to zero such that $$\begin{aligned} \frac{R_n^{4d} \, (\log |I_n|)^8 }{|I_n|^{1/N \cdot 1/(4d+1)} \inf_{x\in{\mathcal{G}}(R_n)} F_x(h)^2 \, h_n }\rightarrow 0.\end{aligned}$$ Then $$\sup_{x\in {\mathcal{G}}(R_n) } { {\left \lVert \hat{\Psi}_h(x) - \Psi(x) \right \rVert}_{{\mathcal{H}}} } = {\mathcal{O}}\left( \frac{R_n^{4d} \, (\log |I_n|)^8 }{|I_n|^{1/N \cdot 1/(4d+1)} \inf_{x\in{\mathcal{G}}(R_n)} F_x(h)^2 \, h_n } \right) + {\mathcal{O}}\left( h^r \right) \quad a.s.$$ The structure of the proof is the same as in Theorem \[UniformConvHatPsi\]. We can continue with the decomposition of Collomb from and it remains to demonstrate that both $$\begin{aligned} \label{EqUniformConvHatPsiV1} \sup_{x\in{\mathcal{G}}} { {\left \lVert \hat{f}_h(x) - {\mathbb{E}\left [ \, \hat{f}_h(x) \, \right ]} \right \rVert}_{{\mathcal{H}}} } \rightarrow 0 \;a.s. \text{ and } \sup_{x\in{\mathcal{B}}} { {\left \lVert \Psi(x) \right \rVert}_{{\mathcal{H}}} } \, \sup_{x\in{\mathcal{G}}} { {\left \lVert \hat{g}_h(x) - {\mathbb{E}\left [ \, \hat{g}_h(x) \, \right ]} \right \rVert}_{{\mathcal{H}}} } \rightarrow 0 \;a.s.\end{aligned}$$ with the desired rate. Therefore, we can immediately pass to the first term in . We merely have to adjust the parameters in the exponential inequalities which are given in Equations  and . 
The analogue of  now reads $$\begin{aligned} &{\mathbbm{P}}\left( \max_{1\le j \le m} { {\left \lVert \frac{1}{|I_n|} \sum_{s\in I_n} Y_{s} \frac{ {K_h\left( d(X_s,v_j) \right)}}{F_{v_j} (h)} - {\mathbb{E}\left [ \, Y_s \frac{ {K_h\left( d(X_s,v_j) \right)}}{F_{v_j}(h)} \, \right ]} \right \rVert}_{{\mathcal{H}}} } \ge z\right) \\ &\le A_1 Q_n \exp\left( A_2 \frac{R_n^d}{\delta_n^d} - A_3 (z^2 |I_n|^{1/N} \inf_{x\in{\mathcal{G}}} F_x(h)^2 h )^{1/4} \right), \end{aligned}$$ where we use a $\delta_n$-covering and apply Proposition \[WeaklyCDependentHilbert\]. The factor $Q_n$ is negligible. The analogue of  can be bounded with an application of Proposition \[WeaklyCDependent\]: $$\begin{aligned} &{\mathbbm{P}}\left( \max_{1\le j\le m} \left| \frac{1}{|I_n|} \sum_{s\in I_n} {K_h\left( d(X_s,v_j) \right)} - {\mathbb{E}\left [ \, {K_h\left( d(X_s,v_j) \right)} \, \right ]} \right| \ge z \inf_{x\in{\mathcal{G}}} F_{x} (h) / \tilde{Y}^{(0)}_0 \right) \\ &\le A_1 \exp\left( A_2 \frac{R_n^d}{\delta_n^d} - A_3 \frac{z^2 |I_n|^{1/N} \inf_{x\in{\mathcal{G}}} F_x(h)^2 h }{R_n^r} \right).\end{aligned}$$ In particular, in both cases $l=0$ and $l=1$, $$\max_{1\le j \le m} { {\left \lVert \frac{1}{|I_n|} \sum_{s\in I_n} \tilde{Y}^{(l)}_{s} \frac{ {K_h\left( d(X_s,v_j) \right)}}{F_{v_j} (h)} - {\mathbb{E}\left [ \, \tilde{Y}^{(l)}_s \frac{ {K_h\left( d(X_s,v_j) \right)}}{F_{v_j}(h)} \, \right ]} \right \rVert}_{{\mathcal{H}}} } = {\mathcal{O}}\left( \frac{ (R_n/\delta_n)^{4d} (\log|I_n|)^8 }{ |I_n|^{1/N} \inf_{x\in{\mathcal{G}}} F_x(h)^2 h } \right).$$ The analogues of the second and third terms in  are of a simpler structure because this time the kernel function is Lipschitz continuous on all of ${\mathbb{R}}$. So in particular, Equation  becomes simpler.
The analogue of the third term in  is again in $${\mathcal{O}}\left( \frac{R_n^r \delta_n}{h_n} \right)+{\mathcal{O}}\left( \frac{R_n^r \delta_n}{ \inf_{x\in{\mathcal{G}}} F_{x} (h) } \right).$$ We can proceed similarly to the proof of Theorem \[UniformConvHatPsi\] and choose $\delta_n = |I_n|^{-1/N\cdot 1/(4d+1)}$. We arrive at the conclusion that both terms in  converge to zero $a.s.$ at the stated rate. We can compare the rates of convergence of the estimate $\hat{\Psi}$ from Theorem \[UniformConvHatPsi\] and Theorem \[UniformConvHatPsiV\] with the results in [@ferraty2004nonparametric]. There, the authors consider the estimator on a compact set ${\mathcal{K}}\subseteq{\mathcal{H}}$ and assume that the data generating process is a strongly mixing time series with a one-dimensional response variable. The further technical assumptions are quite similar. Therefore, we can compare the two rates in the case where ${\mathcal{K}}\subseteq{\mathcal{G}}(R)$ and where the lattice process $(X,Y)$ is strongly mixing. For the estimate $\hat{\Psi}$, which is based on ${\mathcal{H}}$-valued spatial response variables, we obtain a rate of $${\mathcal{O}}\left( \frac{(\log |I_n|)^7 }{ |I_n|^{1/N \cdot 2/(5d+2)} \inf_{x\in {\mathcal{K}}} F_x(h) } \right) + {\mathcal{O}}\left( \frac{1}{ |I_n|^{1/N \cdot 2/(5d+2)} h } \right) + {\mathcal{O}}(h^r)$$ because the radius $R=R_n$ of the set ${\mathcal{G}}(R)$ can be chosen as constant. In the special case of time series data $((X_t,Y_t):t=1,\ldots,n)$, where the lattice dimension $N$ is one, the rate simplifies to $${\mathcal{O}}\left( \frac{ (\log n)^7 }{ n^{2/(5d+2)} \inf_{x\in {\mathcal{K}}} F_x(h) } \right) + {\mathcal{O}}\left( \frac{1}{ n^{2/(5d+2)} h } \right) + {\mathcal{O}}(h^r).$$ The rate obtained by [@ferraty2004nonparametric] is derived under the weaker condition that the one-dimensional response variables only satisfy a moment condition and not an exponential tail condition as in our case for Hilbertian response variables.
Their rate is given in terms of a parameter $s$ which characterizes the moment condition, a function which is proportional to our function $\inf_{x\in{\mathcal{K}}} F_x(h)$, and a function $\chi$ which is a bound on the maximum of $\inf_{x\in{\mathcal{K}}} F_x(h)^2$ and the joint small ball probability of $X_t$ and $X_{t'}$; for details, see [@ferraty2004nonparametric]. In their case, the rate is $${\mathcal{O}}\left( \sqrt{ \frac{\log n}{ n \inf_{x\in{\mathcal{K}}} F_x(h) } } \right) + {\mathcal{O}}\left( \sqrt{ \frac{\log n}{n } \frac{ \chi(h)}{\inf_{x\in {\mathcal{K}}} F_x(h)^2} \left\lfloor \frac{n}{\chi(h)} \right\rfloor^s } \right) + {\mathcal{O}}(h^r).$$ Hence, the structure of the rate of convergence is similar to ours; in particular, the third ${\mathcal{O}}$-expression is also due to the local approximation of $\Psi(X_t)$ by $\Psi(x)$. It is not unexpected that the rate of the first ${\mathcal{O}}$-term is slower in the case of an ${\mathcal{H}}$-valued response. In the case of a constant radius $R$, we obtain for ${\mathcal{C}}$-weakly dependent spatial data a rate of $${\mathcal{O}}\left( \frac{ (\log |I_n|)^8 }{|I_n|^{1/N\cdot 1/(4d+1)} \inf_{x\in{\mathcal{K}}} F_x(h)^2 \; h } \right) + {\mathcal{O}}(h^r).$$ Again, this rate is similar to the rate of [@ferraty2004nonparametric] (for the special case of time series data). Note that the factor $h$ in the denominator of the first ${\mathcal{O}}$-expression is due to the ${\left\lVert \cdot \right\rVert}^\sim$-norm of the scaled kernel function $K_h$. Once more, the second ${\mathcal{O}}$-expression is due to the local approximation of $\Psi(X_t)$ by $\Psi(x)$. The dimension $d$ of the domain ${\mathcal{D}}$ of the functions influences the rate negatively in our case. In the case of functional data given as curves, $d=1$, and we have the correction factors $2/7$ and $1/5$, respectively. If the dimension $d$ is larger, e.g., if we observe manifolds as functional data, the correction factor is even more pronounced.
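The dependence of the effective-sample exponents on the dimension $d$ is easy to tabulate. The sketch below evaluates $2/(5d+2)$ (Theorem \[UniformConvHatPsi\], strongly mixing case) and $1/(4d+1)$ (Theorem \[UniformConvHatPsiV\], ${\mathcal{C}}$-weakly dependent case) for small $d$:

```python
from fractions import Fraction

def rate_exponents(d):
    """Exponents of |I_n|^{1/N} in the two uniform convergence rates, as a
    function of the dimension d of the domain of the functional data."""
    strong_mixing = Fraction(2, 5 * d + 2)    # strongly mixing lattice data
    weak_dependence = Fraction(1, 4 * d + 1)  # C-weakly dependent lattice data
    return strong_mixing, weak_dependence

table = {d: rate_exponents(d) for d in (1, 2, 3)}
```

For curves ($d=1$) this reproduces the factors $2/7$ and $1/5$ quoted above; already for surfaces ($d=2$) they drop to $1/6$ and $1/9$.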
The reason for this is the increasing number of balls of radius $\delta_n$ which cover the space ${\mathcal{G}}(R)$. Furthermore, this covering is w.r.t. the norm on the Hilbert space and not w.r.t. the pseudo-metric $d$. Note that in the proofs it would be sufficient to use a $\delta_n$-covering w.r.t. $d$. However, in order to exploit this, we would have to make further assumptions on $d$. Furthermore, in many applications $d$ is a projection-based pseudo-metric. Hence, in a possible extension of the current setting, one could consider the case of a sequence of such pseudo-metrics $d_k$ which tend to the metric induced by ${ {\left \lVert \cdot \right \rVert}_{{\mathcal{H}}} }$. To conclude, we briefly discuss the influence of the lattice dimension $N$. We see that the sample $I_n$ does not enter the denominator with its full size but rather with an effective size, where $|I_n|$ is normalized by the $N$-th root. The technical reason for this behavior is explained in the short remark before Corollary \[CorBernsteinImproved\]. It is up to future research whether this factor can be removed under the current assumptions with more sophisticated techniques or whether additional assumptions are necessary. Appendix ======== \[IbragimovAlphaMixing\] Let $Z_1,\ldots,Z_n$ be real-valued, non-negative random variables, each $a.s.$ bounded. Set $\alpha \coloneqq \sup_{k\in \{1,\ldots,n-1\} } \alpha\left( \sigma( Z_i: i \le k), \sigma( Z_i: i > k) \right)$. Then $ \left| {\mathbb{E}\left [ \, \prod_{i=1}^n Z_i \, \right ]} - \prod_{i=1}^n {\mathbb{E}\left [ \, Z_i \, \right ]} \right| \le (n-1) \, \alpha\, \prod_{i=1}^n {\left\lVert Z_i \right\rVert}_{\infty}$. D. W. Andrews. Non-strong mixing autoregressive processes. *Journal of Applied Probability*, 21(4): 930–934, 1984. D. Bosq. *Linear Processes in Function Spaces: Theory and Applications*, volume 149. Springer Science & Business Media, 2000. R. C. Bradley.
Basic properties of strong mixing conditions. A survey and some open questions. *Probability Surveys*, 2(2): 107–144, 2005. M. Carbon, M. Hallin, and L. T. Tran. Kernel density estimation for random fields: the $L^1$ theory. *Journal of Nonparametric Statistics*, 6(2-3): 157–170, 1996. M. Carbon, C. Francq, and L. T. Tran. Kernel regression estimation for random fields. *Journal of Statistical Planning and Inference*, 137(3): 778–798, 2007. G. Collomb. Estimation non paramétrique de la régression par la méthode du noyau: propriété de convergence asymptotiquement normale indépendante. *Annales scientifiques de l’Université de Clermont. Mathématiques*, 65(15): 24–46, 1977. N. Cressie. *Statistics for Spatial Data*. Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics. J. Wiley, 1993. Y. A. Davydov. Convergence of distributions generated by stationary stochastic processes. *Theory of Probability & Its Applications*, 13(4): 691–696, 1968. J. Dedecker and P. Doukhan. A new covariance inequality and applications. *Stochastic Processes and their Applications*, 106(1): 63–80, 2003. J. Dedecker and C. Prieur. New dependence coefficients. Examples and applications to statistics. *Probability Theory and Related Fields*, 132(2): 203–236, 2005. L. Delsol. Advances on asymptotic normality in non-parametric functional time series analysis. *Statistics*, 43(1): 13–33, 2009. F. Ferraty and P. Vieu. The functional nonparametric model and application to spectrometric data. *Computational Statistics*, 17(4): 545–564, 2002. F. Ferraty and P. Vieu. Nonparametric models for functional data, with application in regression, time series prediction and curve discrimination. *Nonparametric Statistics*, 16(1-2): 111–125, 2004. F. Ferraty, A. Laksaci, and P. Vieu. Estimating some characteristics of the conditional distribution in nonparametric functional models.
*Statistical Inference for Stochastic Processes*, 9(1): 47–76, 2006. F. Ferraty, A. Mas, and P. Vieu. Nonparametric regression on functional data: inference and practical aspects. *Australian & New Zealand Journal of Statistics*, 49(3): 267–286, 2007. F. Ferraty, I. Van Keilegom, and P. Vieu. Regression when both response and predictor are functions. *Journal of Multivariate Analysis*, 109: 10–28, 2012. X. Guyon. *Random Fields on a Network: Modeling, Statistics, and Applications*. Springer Science & Business Media, 1995. M. Hallin, Z. Lu, and L. T. Tran. Local linear spatial regression. *The Annals of Statistics*, 32(6): 2469–2500, 2004. S. Hörmann and P. Kokoszka. Weakly dependent functional data. *The Annals of Statistics*, 38(3): 1845–1884, 2010. I. A. Ibragimov. Some limit theorems for stationary processes. *Theory of Probability & Its Applications*, 7(4): 349–382, 1962. J. T. N. Krebs. Orthogonal series estimates on strong spatial mixing data. *Journal of Statistical Planning and Inference*, 193: 15–41, 2018. N. Laib and D. Louani. Nonparametric kernel regression estimation for functional stationary ergodic data: asymptotic properties. *Journal of Multivariate Analysis*, 101(10): 2266–2281, 2010. L. Li. Nonparametric regression on random fields with random design using wavelet method. *Statistical Inference for Stochastic Processes*, 19(1): 51–69, 2016. V. Maume-Deschamps. Exponential inequalities and functional estimations for weak dependent data: applications to dynamical systems. *Stochastics and Dynamics*, 6(4): 535–560, 2006. F. Merlevède, M. Peligrad, and E. Rio. *Bernstein Inequality and Moderate Deviations under Strong Mixing Conditions*, volume 5 of *Collections*, pages 273–292. Institute of Mathematical Statistics, Beachwood, Ohio, USA, 2009. D. N. Politis and J. P. Romano. Limit theorems for weakly dependent Hilbert space valued random variables with application to the stationary bootstrap.
*Statistica Sinica*, pages 461–476, 1994. J. O. Ramsay and B. Silverman. *Functional Data Analysis*. Springer, Berlin, 1997. M. Rosenblatt. A central limit theorem and a strong mixing condition. *Proceedings of the National Academy of Sciences*, 42(1): 43–47, 1956. L. T. Tran. Kernel density estimation on random fields. *Journal of Multivariate Analysis*, 34(1): 37–53, 1990. E. Valenzuela-Domínguez, J. T. N. Krebs, and J. E. Franke. A Bernstein inequality for spatial lattice processes. *arXiv preprint arXiv:1702.02023*, 2017. A. van der Vaart and J. Wellner. *Weak Convergence and Empirical Processes: With Applications to Statistics*. Springer Series in Statistics. Springer New York, 2013. [^1]: Department of Statistics, University of California, Davis, CA, 95616, USA, email: [^2]: Corresponding author [^3]: This research was supported by the German Research Foundation (DFG), grant number KR 4977/1-1.
--- abstract: 'We identify a velocity distribution function of ideal gas particles that is compatible with the local equilibrium assumption and the fundamental thermodynamic relation satisfying the endoreversibility. We find that this distribution is a Maxwell–Boltzmann distribution with a spatially uniform temperature and a spatially varying local center-of-mass velocity. We construct the local equilibrium Carnot cycle of an ideal gas, based on this distribution, and show that the efficiency of the present cycle is given by the endoreversible Carnot efficiency using the molecular kinetic temperatures of the gas. We also obtain an analytic expression of the efficiency at maximum power of our cycle under a small temperature difference. Our theory is also confirmed by a molecular dynamics simulation.' author: - Yuki Izumida - Koji Okuda title: Molecular kinetic analysis of a local equilibrium Carnot cycle --- [^1] Introduction {#Introduction} ============ Global equilibrium between the working substance and the heat reservoir as the reversibility condition is essential for the thermodynamic cycle of heat engines to attain the maximum efficiency (Carnot efficiency) [@C; @Reif]. Denoting by $Q_h$ ($Q_c$) the heat from the hot (cold) heat reservoir with the temperature $T_h^{\rm R}$ ($T_c^{\rm R}$) ($T_h^{\rm R}>T_c^{\rm R}$) during the isothermal processes, we can express the global equilibrium as the Clausius equality applied to the Carnot cycle: $$\begin{aligned} \frac{Q_h}{T_h^{\rm R}}+\frac{Q_c}{T_c^{\rm R}}=0,\end{aligned}$$ from which the efficiency $\eta\equiv \frac{W}{Q_h}$ of the heat-energy conversion into work $W\equiv Q_h+Q_c$ is given by the Carnot value $1-\frac{T_c^{\rm R}}{T_h^{\rm R}}\equiv \eta_{\rm C}$. For this global equilibrium to hold, the heat engine should run along the cycle infinitely slowly (quasistatic limit) and hence output zero power (work per unit time). 
Curzon and Ahlborn (CA) [@CA] considered the efficiency at maximum power $\eta^*$ as a more practical figure of merit (The same subject was also considered by some authors even earlier. See [@VLF; @MP] for historical perspectives on the origin of the efficiency at maximum power and references therein). CA assumed that their heat engine cycle (CA cycle) satisfies the Fourier’s law of heat transport and the so-called endoreversibility condition [@R], which is written explicitly for a cycle as $$\begin{aligned} \frac{Q_h}{T_h}+\frac{Q_c}{T_c}=0,\label{eq.endoreversibility1}\end{aligned}$$ where $T_h$ ($T_c$) is the well-defined temperature of the working substance in contact with the hot (cold) heat reservoir during the isothermal process at a finite rate. From this, the efficiency of the CA cycle is given by the temperatures of the working substance as $$\begin{aligned} \eta=1-\frac{T_c}{T_h},\label{eq.endo_effi}\end{aligned}$$ which we call the endoreversible Carnot efficiency. This suggests that the efficiency of the endoreversible heat engine is still expressed by the Carnot-like expression, depending only on the temperatures of the working substance. CA showed that Eq. (\[eq.endo\_effi\]) at the maximum power becomes $$\begin{aligned} \eta^*=1-\sqrt{\frac{T_c^{\rm R}}{T_h^{\rm R}}}=1-\sqrt{1-\eta_{\rm C}}=\frac{\eta_{\rm C}}{2}+\frac{\eta_{\rm C}^2}{8}+O(\eta_{\rm C}^3),\label{eq.ca}\end{aligned}$$ which we call the CA efficiency. This result gave birth to the field of finite-time thermodynamics [@B; @SNSAL; @BKSST] that studies various thermodynamic systems performing finite-time transformations based on the endoreversibility. Since the universality of Eq. (\[eq.ca\]) was addressed in [@VB] based on linear irreversible thermodynamics, the efficiency at maximum power has been investigated as a fundamental problem in nonequilibrium thermodynamics [@CH; @SS2008; @IO2008; @IO2009; @ELV; @EKLV; @IO2012; @ST; @WZ; @HNE; @PDCV; @SH; @BSS; @PV; @CPV]. 
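The maximization behind Eq. (\[eq.ca\]) can be checked numerically. The sketch below uses a steady-state (Novikov-type) variant of the CA setup with linear heat transport and equal unit conductances; the conductance choice is an assumption made only for illustration, since the CA result does not depend on it. The endoreversibility constraint $q_h/T_h = q_c/T_c$ eliminates $T_c$, and a scan over $T_h$ recovers $\eta^* = 1-\sqrt{T_c^{\rm R}/T_h^{\rm R}}$:

```python
import math

def ca_max_power_efficiency(Th_R, Tc_R, n=100_000):
    """Steady-state endoreversible engine with linear heat transport and
    equal unit conductances: q_h = Th_R - Th, q_c = Tc - Tc_R. The
    endoreversibility constraint q_h/Th = q_c/Tc fixes Tc in terms of Th;
    we scan Th for maximum power P = q_h - q_c."""
    best_P, best_eta = -math.inf, None
    for i in range(1, n):
        Th = Th_R / 2 + (Th_R / 2) * i / n      # Th in (Th_R/2, Th_R)
        Tc = Th * Tc_R / (2 * Th - Th_R)        # from q_h/Th = q_c/Tc
        P = (Th_R - Th) - (Tc - Tc_R)           # power output
        if P > best_P:
            best_P, best_eta = P, 1 - Tc / Th   # endoreversible Carnot eff.
    return best_eta

eta_star = ca_max_power_efficiency(400.0, 300.0)
# Curzon-Ahlborn prediction: 1 - sqrt(Tc_R/Th_R) = 1 - sqrt(3)/2
```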
In a recent paper [@IO2015], the present authors showed that a physical origin of the endoreversibility Eq. (\[eq.endoreversibility1\]), which is usually simply assumed in finite-time thermodynamics, can be attributed to a special case of a local equilibrium assumption (see also [@R; @WZ] for similar ideas). Here, we refer to the local equilibrium assumption as an assumption where a total system is not in a global equilibrium state sharing the same intensive thermodynamic variables, while each partial system is in an equilibrium state with locally-defined thermodynamic variables [@GM]. The endoreversibility condition can then be regarded as the special case of this local equilibrium assumption applied to the heat engine constituted with the working substance and the heat reservoir: The whole working substance itself is assumed to be in a local equilibrium state with the well-defined temperature $T$ without spatial variation, where this temperature is different from that of the heat reservoir in a local equilibrium state, while the global equilibrium between them is violated. In this case, the following fundamental thermodynamic relation holds for the thermodynamic variables of the working substance, $$\begin{aligned} dU=TdS-pdV,\label{eq.ftr}\end{aligned}$$ where $S$, $U$, $p$, and $V$ are the entropy, internal energy, pressure, and volume of the working substance, respectively. Indeed, it can be shown that the endoreversibility condition Eq. (\[eq.endoreversibility1\]) holds automatically by applying the following closed-cycle condition to the cycle with constant temperatures during the isothermal processes as the CA cycle: $$\begin{aligned} \oint dS=\oint \frac{dQ}{T}=0.\end{aligned}$$ In this sense, we may say that the local equilibrium is an essential feature of the CA cycle as the endoreversible heat engine in such a manner that the global equilibrium is an essential feature of the Carnot cycle as the reversible heat engine [@IO2015]. 
However, it is still not obvious how such a macroscopic and phenomenological description using the fundamental thermodynamic relation in a finite-time process can be established from a statistical mechanics point of view, that is, from a state distribution of the working substance; clarifying this point would be of crucial importance for strengthening the foundation of finite-time thermodynamics. In the present paper, from a molecular kinetic analysis, we identify a velocity distribution of ideal gas particles, as the simplest case of the working substance, that is consistent with the local equilibrium assumption and the fundamental thermodynamic relation Eq. (\[eq.ftr\]) satisfying the endoreversibility. Based on this distribution, we construct a local equilibrium Carnot cycle and study the efficiency at maximum power of our cycle by comparing it to the CA efficiency. We also perform a molecular dynamics simulation to confirm the validity of our theory. The rest of the paper is organized as follows. In Sec. \[Molecular kinetic model\], as a preparation, we introduce our molecular kinetic model of an ideal gas system in a cylinder with a moving piston, and derive the velocity distribution of the gas particles. In Sec. \[Local equilibrium Carnot cycle\], we construct our local equilibrium Carnot cycle based on the preparation in Sec. \[Molecular kinetic model\], and study the efficiency at maximum power. The molecular dynamics simulation is also given in this section. We discuss and summarize the present paper in Sec. \[Discussion and summary\]. Molecular kinetic model {#Molecular kinetic model} ======================= ![Schematic illustration of $2$D ideal gas particles confined in a rectangular-shaped cylinder $l \times L$ with a piston on the head at $x=l$. The piston moves at a constant velocity $u=\frac{dl}{dt}$. The thermal wall with length $L_{\rm th}$ that mimics the interaction with the heat reservoir is set on the bottom of the cylinder at $x=0$. 
The local center-of-mass $x$-velocity of the particles $\bar{v}_x(x)$ is shown to change linearly from $0$ at $x=0$ (bottom of the cylinder) to $u$ at $x=l$ (moving piston). []{data-label="piston"}](piston_ver3_2017_6_pre_2nd_resubmit.eps) Ideal gas system in cylinder with moving piston {#Ideal gas system in cylinder with moving piston} ----------------------------------------------- As a preparation for constructing our local equilibrium Carnot cycle, we first develop the molecular kinetics of the working substance in a cylinder with a moving piston. We assume a two-dimensional (2D) ideal gas as the working substance for simplicity and assume that the temperature $T$ of the gas can be defined uniquely and the density of the gas is always uniform without spatial variation. Imagine that $N$ ideal gas particles with mass $m$ are in a rectangular-shaped cylinder with dimensions $l\times L$ (Fig. \[piston\]). At the bottom of the cylinder ($x=0$) is a thermal wall with length $L_{\rm th}$, which realizes contact with the heat reservoir at the temperature $T^{\rm R}$ during an isothermal process. When a particle with velocity ${\bm v}=(v_x,v_y)$ collides with the thermal wall, its velocity stochastically changes to ${\bm v}'=(v_x',v_y')$ according to a normalized probability distribution [@TTKB] (Maxwell boundary condition [@K]), $$\begin{aligned} f_{\rm th}({\bm v}')=\frac{1}{\sqrt{2\pi}}\left(\frac{m}{k_{\rm B}T^{\rm R}}\right)^{3/2} v_x' \exp \left(-\frac{m({v_x'}^2+{v_y'}^2)}{2k_{\rm B}T^{\rm R}}\right), \label{eq.distri_thermal}\end{aligned}$$ where $0< v_x' < \infty$ and $-\infty < v_y' < \infty$. This reflecting rule ensures that the temperature of the static gas becomes $T^{\rm R}$ (see also [@CLL; @BCM] and references therein for different types of thermal walls). 
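The thermal-wall rule Eq. (\[eq.distri\_thermal\]) is straightforward to sample: the outgoing $v_x'$ is flux-weighted (a Rayleigh distribution, sampled by inverse transform), while $v_y'$ is an ordinary Maxwellian component. A minimal sketch in Python (function and parameter names are ours, not from the authors' MD code); in 2D the mean kinetic energy of re-emitted particles is $\frac{3}{2}k_{\rm B}T^{\rm R}$, consistent with the expression for $q_{\rm out}$ derived later in the text:

```python
import math
import random

def sample_thermal_wall(T_R, m=1.0, kB=1.0, rng=random):
    """Draw an outgoing velocity (vx', vy') from Eq. (distri_thermal):
    vx' > 0 is flux-weighted, p(vx') ~ vx' * exp(-m vx'^2 / (2 kB T_R)),
    sampled by inverse transform; vy' is Gaussian with variance kB T_R / m."""
    vx = math.sqrt(-2.0 * kB * T_R / m * math.log(1.0 - rng.random()))
    vy = rng.gauss(0.0, math.sqrt(kB * T_R / m))
    return vx, vy

rng = random.Random(0)
T_R, n = 1.0, 200_000
e_mean = sum(0.5 * (vx * vx + vy * vy)
             for vx, vy in (sample_thermal_wall(T_R, rng=rng) for _ in range(n))) / n
# <m vx'^2/2> = kB T_R and <m vy'^2/2> = kB T_R / 2, so the mean is (3/2) kB T_R
assert abs(e_mean - 1.5) < 0.02
```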
The heat flowing into the working substance per collision is calculated as the change in the particle's kinetic energy across the collision, given by $$\begin{aligned} \frac{m(|{\bm v}'|^2-|{\bm v}|^2)}{2}.\label{eq.micro_heat}\end{aligned}$$ At the top of the cylinder ($x=l$) is a moving piston. When a particle with ${\bm v}$ collides with the piston moving with the constant velocity $u\equiv \frac{dl}{dt}$, where $t$ is the time, the particle velocity changes as ${\bm v}\to {\bm v}'=(-v_x+2u, v_y)$, where the mass of the piston is assumed to be much larger than that of the particle and the collision is perfectly elastic. The work done on the piston per collision is calculated in the same way as the heat in Eq. (\[eq.micro\_heat\]), as follows: $$\begin{aligned} -\frac{m(|{\bm v}'|^2-|{\bm v}|^2)}{2}=2mu(v_x-u).\label{eq.micro_work}\end{aligned}$$ Any particle collision with other parts of the cylinder and the particle–particle collisions are assumed to be perfectly elastic. Here, we note that, by our assumption, the gas must relax to a local equilibrium with a constant temperature $T(u)$, depending on $u$, much faster than the global equilibrium between the gas and the heat reservoir is realized. This is justified under the assumption of weak coupling between the working substance and the heat reservoir, where the time scale of the equilibration inside the gas is much shorter than the time scale of the energy exchange between the gas and the heat reservoir. With this separation of the time scales, we can regard the equilibration of the gas into the local equilibrium state with a uniform temperature via heat conduction inside the gas as instantaneous, and the dynamics of the gas during the isothermal process as reduced to that of the temperature. 
This would be realized in the case of a thermal wall with a sufficiently short length as $L_{\rm th}\ll L$ in the present setup [@IO2008], where the collision frequency with the thermal wall becomes much lower than that of the interparticle collisions. In addition, our ideal gas should be precisely regarded as a “weakly interacting nearly ideal gas," meaning that the equilibration inside the gas is caused by interparticle collisions [@IO2008]. Velocity distribution with local center-of-mass velocity {#sec.local_vel} -------------------------------------------------------- The local center-of-mass $x$ velocity $\bar{v}_x (x)$ of the particles located at position $\bm x=(x,y)$ can be uniquely determined according to the following argument: Let us consider that the length of the cylinder changes as $l'=l+u\Delta t=l\left(1+\frac{u}{l}\Delta t\right)$ after an infinitesimal time duration $\Delta t$. We also consider a partial system $x\times L$ inside the cylinder $l\times L$, where the $x$-length of the partial system also changes as $x'=x+\bar{v}_x(x)\Delta t$ with the local center-of-mass $x$-velocity $\bar{v}_x(x)$ during $\Delta t$. Before the displacement, the density of the entire system agrees with the density of the partial system with its particle number $N_x$ as $\frac{N}{lL}=\frac{N_x}{xL}$ from the uniformity of the density over the entire system. Assuming that the particle number of the partial system after the displacement $N_x'$ is conserved as $N_x'=N_x$, and using the uniformity of the density over the entire system after the displacement as $\frac{N}{l'L}=\frac{N_x'}{x'L}$, we can obtain the relation $\frac{x'}{l'}=\frac{x}{l}$. We then obtain $$\begin{aligned} x'=\frac{l'}{l}x=x+\frac{x}{l}u\Delta t,\end{aligned}$$ which identifies $\bar{v}_x(x)$ as $$\begin{aligned} \bar{v}_x(x)=\frac{x}{l}u \ \ (0\le x \le l).\label{eq.local_velocity}\end{aligned}$$ We can also validate Eq. 
(\[eq.local\_velocity\]) based on the inviscid Navier-Stokes equations (see the Appendix \[appendix\]). If we look at the particle velocities at position ${\bm x}$ in the moving frame with the local center-of-mass velocity $\bar{\bm v}=(\bar{v}_x(x),0)$, that is, under a variable transformation ${\bm v} \to \tilde{\bm v} \equiv {\bm v}-\bar{\bm v}= (v_x-\bar{v}_x(x), v_y)$, the velocity distribution measured in this frame should be equal to the usual Maxwell–Boltzmann distribution with $T$ as $$\begin{aligned} f_{\rm MB}(\tilde{{\bm v}},T)=\frac{m}{2\pi k_{\rm B}T} \exp \left(-\frac{m(\tilde{v}_x^2+\tilde{v}_y^2)}{2k_{\rm B} T}\right),\label{eq.mb_distri}\end{aligned}$$ where $T$ can be regarded as the molecular kinetic temperature defined by the averaged kinetic energy per degree of freedom measured in the moving frame as $$\begin{aligned} \frac{k_{\rm B}T}{2}\equiv \int \frac{m}{2}\tilde{v}_x^2 f_{\rm MB}(\tilde{{\bm v}},T) d\tilde{\bm v}=\int \frac{m}{2}\tilde{v}_y^2 f_{\rm MB}(\tilde{{\bm v}},T) d\tilde{\bm v}.\label{eq.eff_temp}\end{aligned}$$ Then, as the Jacobian associated with the variable transformation is unity, we obtain the velocity distribution $f({\bm v})$ of the ideal gas particles at position ${\bm x}$ from $f({\bm v})\equiv f_{\rm MB}(v_x-\bar{v}_x(x), v_y, T)$ as $$\begin{aligned} f({\bm v})=\frac{m}{2\pi k_{\rm B}T} \exp \left(-\frac{m((v_x-\bar{v}_x(x))^2+v_y^2)}{2k_{\rm B} T}\right).\label{eq.micro_distri}\end{aligned}$$ The spatially non-uniform shape of this distribution is remarkable as the temperature and the density of the gas are assumed to be spatially uniform inside the cylinder. Equation (\[eq.micro\_distri\]) is expected to recover the ordinary Maxwell–Boltzmann distribution with $T=T^{\rm R}$ in the quasistatic limit $u \to 0$, where the global equilibrium between the working substance and the heat reservoir holds. 
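A simple Monte Carlo check of the shifted distribution Eq. (\[eq.micro\_distri\]) (ours, not from the authors' simulation code): sampling velocities at a position $x$ and averaging the kinetic energy should give $k_{\rm B}T + \frac{m}{2}\bar{v}_x(x)^2$ per particle, anticipating the energy density derived in the next subsection. A sketch, with arbitrary parameter values:

```python
import math
import random

def mean_kinetic_energy(T, vbar_x, m=1.0, kB=1.0, n=200_000, seed=0):
    """Sample (vx, vy) from the shifted Maxwell-Boltzmann distribution f(v)
    of Eq. (micro_distri) and return the mean kinetic energy per particle."""
    rng = random.Random(seed)
    sigma = math.sqrt(kB * T / m)
    total = 0.0
    for _ in range(n):
        vx = rng.gauss(vbar_x, sigma)   # shifted by the local c.o.m. velocity
        vy = rng.gauss(0.0, sigma)
        total += 0.5 * m * (vx * vx + vy * vy)
    return total / n

# vbar_x(x) = (x/l) u, Eq. (local_velocity); expect kB*T + m*vbar^2/2
T, u, x_over_l = 0.8, 0.3, 0.5
vbar = x_over_l * u
assert abs(mean_kinetic_energy(T, vbar) - (T + 0.5 * vbar ** 2)) < 0.01
```

In the quasistatic limit $u\to 0$ the shift vanishes and the average reduces to $k_{\rm B}T$, as it must for the ordinary Maxwell–Boltzmann distribution.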
First law of thermodynamics as time-evolution equation of temperature {#sec.1st_law} --------------------------------------------------------------------- We introduce the first law of thermodynamics (the law of energy conservation) as a time-evolution equation of the temperature of the gas by calculating the total energy of the gas, heat flow, and power based on Eq. (\[eq.micro\_distri\]) as follows: The energy density $e({\bm x})$ of the gas at position $\bm x$ is given by $$\begin{aligned} e({\bm x})&&\equiv \frac{N}{V}\int d{\bm v}\frac{m(v_x^2+v_y^2)}{2}f({\bm v})\nonumber\\ &&=\frac{N}{V}k_{\rm B}T+\frac{N}{V}\frac{m}{2}\bar{v}_x(x)^2,\end{aligned}$$ where $V\equiv Ll$ is the volume of the cylinder. By performing a spatial integral, we obtain the total energy $E$ of the gas as $$\begin{aligned} E=\int_0^l dx \int_0^L dy e({\bm x})=Nk_{\rm B}T+\frac{Nm}{6}u^2.\label{eq.energy}\end{aligned}$$ The first term is the internal energy $U$ of the 2D ideal gas at temperature $T$, while the second term is the kinetic energy of the fluid. The heat flow from the thermal wall at the bottom of the cylinder is obtained by using Eq. 
(\[eq.micro\_distri\]) at $x=0$ [@note]: $$\begin{aligned} f({\bm v})|_{x=0}=\frac{m}{2\pi k_{\rm B}T} \exp \left(-\frac{m(v_x^2+v_y^2)}{2k_{\rm B} T}\right).\end{aligned}$$ By using this distribution, we obtain the following expression of the heat flow according to the procedure developed in [@IO2008]: We first count the number of particles $n_{\rm in}$ that collide with the thermal wall per unit time as $$\begin{aligned} n_{\rm in}&& \equiv \int_{-\infty}^{0}dv_x \int_{-\infty}^{\infty}dv_y\frac{N}{V}L_{\rm th}(-v_x) f({\bm v})|_{x=0}\nonumber\\ &&=\frac{L_{\rm th}N}{2\pi V}\sqrt{\frac{2\pi k_{\rm B}T}{m}}.\end{aligned}$$ The energy $q_{\rm in}$ flowing from the colliding particles into the thermal wall per unit time is also calculated as $$\begin{aligned} q_{\rm in}&&\equiv \int_{-\infty}^0dv_x \int_{-\infty}^{\infty}dv_y \frac{N}{V}L_{\rm th}\frac{m(v_x^2+v_y^2)}{2}(-v_x)f({\bm v})|_{x=0}\nonumber\\ &&=\frac{3L_{\rm th}Nk_{\rm B}T}{4\pi V}\sqrt{\frac{2\pi k_{\rm B}T}{m}}.\end{aligned}$$ Because the number of the reflected particles $n_{\rm out}$ per unit time should be equal to $n_{\rm in}$, we can calculate the energy flowing into the working substance as $$\begin{aligned} q_{\rm out}&&\equiv n_{\rm in}\int_0^{\infty}dv'_x \int_{-\infty}^{\infty}dv'_y\frac{m({v_x'}^2+{v'_y}^2)}{2} f_{\rm th}({\bm v}')\nonumber\\ &&=\frac{3L_{\rm th}Nk_{\rm B}T^{\rm R}}{4\pi V}\sqrt{\frac{2\pi k_{\rm B}T}{m}}\end{aligned}$$ by using Eq. (\[eq.distri\_thermal\]). 
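The flux integrals above can be cross-checked by direct numerical quadrature; a minimal sketch (ours; the grid parameters are arbitrary choices). With $m=k_{\rm B}=1$ and the prefactor $L_{\rm th}N/V$ set to unity, the closed forms reduce to $n_{\rm in}=\sqrt{T/(2\pi)}$ and $q_{\rm in}=\frac{3T}{2}\,n_{\rm in}$:

```python
import math

def gauss1d(v, T):
    """1D Maxwellian factor with m = kB = 1."""
    return math.exp(-v * v / (2.0 * T)) / math.sqrt(2.0 * math.pi * T)

def fluxes(T, vmax_sigmas=12.0, n=20000):
    """Trapezoidal quadrature of n_in and q_in (prefactor L_th*N/V set to 1).

    Uses the separability of f(v)|_{x=0}: the vy integral contributes a
    factor 1 to n_in and an energy kB*T/2 per particle to q_in."""
    vmax = vmax_sigmas * math.sqrt(T)
    h = vmax / n
    n_in = q_in = 0.0
    for k in range(n + 1):
        v = k * h
        w = 0.5 if k in (0, n) else 1.0   # trapezoid end-point weights
        g = v * gauss1d(v, T)             # flux-weighted Maxwellian
        n_in += w * g
        q_in += w * g * (0.5 * v * v + 0.5 * T)
    return n_in * h, q_in * h

T = 0.9
n_in, q_in = fluxes(T)
assert abs(n_in - math.sqrt(T / (2.0 * math.pi))) < 1e-5
assert abs(q_in - 1.5 * T * n_in) < 1e-5   # mean energy per incoming particle: 3T/2
```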
Then the heat flow $q\equiv q_{\rm out}-q_{\rm in}$ is obtained as $$\begin{aligned} q=\sqrt{\frac{2\pi k_{\rm B}T}{m}}\frac{3L_{\rm th}Nk_{\rm B}(T^{\rm R}-T)}{4\pi V}\equiv \kappa (T^{\rm R}-T),\label{eq.fourier}\end{aligned}$$ where we have defined the following thermal conductance $$\begin{aligned} \kappa \equiv \sqrt{\frac{2\pi k_{\rm B}T}{m}}\frac{3L_{\rm th}Nk_{\rm B}}{4\pi V(t)}.\label{eq.thermal_conduc}\end{aligned}$$ This depends on time $t$ through the volume change even if the temperature $T$ does not change with time. Therefore, although Eq. (\[eq.fourier\]) has the form of the linear Fourier’s law of heat transport, it is different from the setup in the original CA cycle, where $\kappa$ is assumed not to depend on $T$ and $t$ [@CA]. To calculate the power as the work done on the piston per unit time, we use Eq. (\[eq.micro\_distri\]) at $x=l$: $$\begin{aligned} f({\bm v})|_{x=l}=\frac{m}{2\pi k_{\rm B}T} \exp \left(-\frac{m((v_x-u)^2+v_y^2)}{2k_{\rm B} T}\right).\end{aligned}$$ Then the power $w$ is calculated by using this distribution and the work per collision Eq. (\[eq.micro\_work\]) as $$\begin{aligned} w&&=\int_{-\infty}^{\infty}dv_y\int_{u}^{\infty}d{v}_x 2mu (v_x-u)^2\frac{N}{V}Lf({\bm v})|_{x=l}\nonumber\\ &&=\frac{Nk_{\rm B}T}{V}Lu=p\frac{dV}{dt},\label{eq.work_flux}\end{aligned}$$ which is expressed by the product of the pressure and the time derivative of the volume, where we used the equation of state for the ideal gas $p=\frac{Nk_{\rm B}T}{V}$. By using Eqs. (\[eq.energy\]), (\[eq.fourier\]), and (\[eq.work\_flux\]), we finally obtain the first law of thermodynamics $\frac{dU}{dt}=Nk_{\rm B}\frac{dT}{dt}=q-w$ as $$\begin{aligned} Nk_{\rm B}\frac{dT(t)}{dt}=\kappa(t) (T^{\rm R}-T(t))-\frac{Nk_{\rm B}T(t)}{V(t)}\frac{dV(t)}{dt},\label{eq.1st_law_deriv}\end{aligned}$$ which serves as the time-evolution equation of the temperature of the gas. We can also validate Eq. 
(\[eq.1st\_law\_deriv\]) based on the inviscid Navier-Stokes equations (see the Appendix \[appendix\]). We note that reducing the dynamics of the gas into the time-evolution equation of the spatially uniform temperature in this way is an approximation based on the separation of the time scales (see the last paragraph in Sec. \[Ideal gas system in cylinder with moving piston\]), where validity of the results obtained under this approximation should be verified by a molecular dynamics simulation. Local equilibrium Carnot cycle {#Local equilibrium Carnot cycle} ============================== ![(Normalized) pressure–volume ($p$–$V$) diagram of the local equilibrium Carnot cycle of the 2D ideal gas under the parameters $N=100$, $k_{\rm B}=m=L=1$, $T_h^{\rm R}=1$, $T_c^{\rm R}=0.7$, $V_1=1$, $V_2=1.5$, $L_{{\rm th},h}=L_{{\rm th},c}=0.05$, and $u_h=-u_c=2\times 10^{-3}$. The thin curve represents the quasistatic (global equilibrium) cycle. The bold curve represents the local equilibrium cycle, where $p=\frac{Nk_{\rm B}T_i^{\rm st}}{V}$ with Eq. (\[eq.steady\_temp2\]) during the isothermal processes and Eq. (\[eq.p\_v\_adi\]) during the adiabatic processes. $\tilde{V}_j$ denotes the switching volume of the local equilibrium Carnot cycle as in Eq. (\[eq.switch\_finite\]) and $V_j$ denotes the corresponding volume of the quasistatic cycle.[]{data-label="t_v"}](fig_p_v_2017_6_pre_2nd_resubmit.eps) Construction of cycle --------------------- We construct the local equilibrium Carnot cycle of the ideal gas based on the preparation in Sec. \[Molecular kinetic model\]. Hereafter, the suffix $i$ ($i=h,c$) denotes the quantity during the isothermal processes in contact with the heat reservoir with the temperature $T_i^{\rm R}$. We require that the local equilibrium Carnot cycle should recover the quasistatic Carnot cycle in the quasistatic limit $u_i \to 0$. We denote by $V_j$ ($j=1, \cdots,4$) the volume at which we switch each thermodynamic process of the quasistatic cycle. 
The quasistatic Carnot cycle consists of the following successive thermodynamic processes (Fig. \[t\_v\]): (i) the isothermal expansion process in contact with the heat reservoir with $T_h^{\rm R}$ (${V}_1\to {V}_2$); (ii) the adiabatic expansion process (${V}_2\to {V}_3$); (iii) the isothermal compression process (${V}_3\to {V}_4$) in contact with the heat reservoir with $T_c^{\rm R}$; (iv) the adiabatic compression process (${V}_4\to {V}_1$). Because the adiabatic equation of the $2$D ideal gas $TV={\rm const}.$ holds for the quasistatic adiabatic process, $V_j$’s depend on each other as $$\begin{aligned} V_3=\frac{T_h^{\rm R}}{T_c^{\rm R}}V_2, \ V_4=\frac{T_h^{\rm R}}{T_c^{\rm R}}V_1,\label{eq.qs_vol}\end{aligned}$$ showing that the independent variables are only $V_1$ and $V_2$ when we fix the temperatures $T_h^{\rm R}$ and $T_c^{\rm R}$. Denoting by $\tilde{V}_j$ the volume at which we switch each thermodynamic process depending on the constant piston velocity and defining the cylinder length $\tilde{l}_j$ at the switching volume as $\tilde l_j \equiv {\tilde V}_j/L$, we design our local equilibrium cycle consisting of the successive thermodynamic processes as follows (Fig. 
\[t\_v\]): (i) the isothermal expansion process with piston velocity $u_h$ in contact with the heat reservoir with $T_h^{\rm R}$ ($\tilde{V}_1\to \tilde{V}_2$) \[the duration of this process is $t_h\equiv ({\tilde{l}_2-\tilde{l}_1})/{u_h}$, and the temperature of the working substance always takes the steady value $T_h^{\rm st}\equiv T_h(u_h)$ ($\le T_h^{\rm R}$)\]; (ii) the adiabatic expansion process with duration $\gamma t_h$ ($\tilde{V}_2\to \tilde{V}_3$); (iii) the isothermal compression process with piston velocity $u_c$ in contact with the heat reservoir with $T_c^{\rm R}$ ($\tilde{V}_3\to \tilde{V}_4$) \[the duration of this process is $t_c\equiv ({\tilde{l}_4-\tilde{l}_3})/{u_c}$, and the temperature of the working substance always takes the steady value $T_c^{\rm st}\equiv T_c(u_c)$ ($\ge T_c^{\rm R}$)\]; (iv) the adiabatic compression process with duration $\gamma t_c$ ($\tilde{V}_4\to \tilde{V}_1$). In this design, the total duration completing the adiabatic processes is proportional to $t_h+t_c$, as assumed in [@CA]. While there may be many ways of switching each process for $\tilde{V}_j$ to recover $V_j$ in the quasistatic limit $u_i\to 0$, we adopt the following switching volumes depending on $u_i$ through $T_i^{\rm st}$ as $$\begin{aligned} \tilde{V}_1=\frac{T_h^{\rm R}}{T_h^{\rm st}}V_1, \tilde{V}_2=\frac{T_h^{\rm R}}{T_h^{\rm st}}V_2, \tilde{V}_3=\frac{T_c^{\rm R}}{T_c^{\rm st}}V_3, \tilde{V}_4=\frac{T_c^{\rm R}}{T_c^{\rm st}}V_4.\label{eq.switch_finite} \end{aligned}$$ Because, as shown below, the adiabatic equation $TV={\rm const}.$ holds irrespective of $u_i$, the adiabatic processes of the local equilibrium cycle as switched by Eq. (\[eq.switch\_finite\]) always overlap with the quasistatic adiabatic ones (see Fig. \[t\_v\]) [@IO2015], and they end with the steady temperatures of the succeeding isothermal processes. To obtain the steady temperature $T_i^{\rm st}$, we consider the time-evolution equation of the gas in Eq. 
(\[eq.1st\_law\_deriv\]) during the isothermal processes: $$\begin{aligned} Nk_{\rm B}\frac{dT_i(t)}{dt}=\kappa_i(t) (T_i^{\rm R}-T_i(t))-\frac{Nk_{\rm B}T_i(t)}{V(t)}\frac{dV(t)}{dt}.\label{eq.1st_law}\end{aligned}$$ We can obtain the steady solution $T^{\rm st}_i$ of Eq. (\[eq.1st\_law\]) that satisfies $\frac{dT_i(t)}{dt}=0$ by solving a quadratic equation in $\sqrt{T^{\rm st}_i}$, $$\begin{aligned} T_i^{\rm R}-T_i^{\rm st}=\frac{u_i}{A_i}\sqrt{T_i^{\rm st}},\label{eq.steady_temp} \end{aligned}$$ where we used Eq. (\[eq.thermal\_conduc\]) and $$\begin{aligned} A_i\equiv \sqrt{\frac{2\pi k_{\rm B}}{m}}\frac{3L_{{\rm th},i}}{4\pi L},\label{eq.A}\end{aligned}$$ where we consider the heat-reservoir dependence of $L_{\rm th}$, which also leads to the heat-reservoir dependence of the thermal conductance $\kappa$ in Eq. (\[eq.thermal\_conduc\]), as in [@CA]. The solution of Eq. (\[eq.steady\_temp\]) is given by $$\begin{aligned} T^{\rm st}_i=T_i^{\rm R}-\frac{u_i}{2A_i}\sqrt{4T_i^{\rm R}+\frac{u_i^2}{A_i^2}}+\frac{u_i^2}{2A_i^2},\label{eq.steady_temp2}\end{aligned}$$ where we have chosen the minus sign as the physically relevant solution. In the quasistatic limit $u_i \to 0$, we can see that $T^{\rm st}_i$ agrees with $T_i^{\rm R}$ of the heat reservoir, as expected. This exact relation between the steady temperature and the piston velocity is a merit of our microscopic formulation using a specific working substance, and cannot be obtained by general but phenomenological approaches [@CA; @IO2015]. Since the first term on the right-hand side of Eq. (\[eq.1st\_law\]) vanishes in the adiabatic processes, we obtain the adiabatic relation $TV={\rm const}.$ between $T$ and $V$, holding irrespective of $u_i$, by directly solving Eq. (\[eq.1st\_law\]). From this we can validate Eq. (\[eq.switch\_finite\]). 
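The steady solution Eq. (\[eq.steady\_temp2\]) can be checked against a direct forward-Euler integration of Eq. (\[eq.1st\_law\]). With $m=k_{\rm B}=1$ and $V=Ll$, Eqs. (\[eq.thermal\_conduc\]) and (\[eq.A\]) give $\kappa = Nk_{\rm B}A\sqrt{T}/l$, so the equation reduces to $\frac{dT}{dt} = \left[A\sqrt{T}\,(T^{\rm R}-T) - Tu\right]/l$. A minimal sketch of ours, using the Fig. \[t\_v\] parameters ($T_h^{\rm R}=1$, $L_{\rm th}=0.05$, $L=1$, $u=2\times10^{-3}$; the time step is an arbitrary choice):

```python
import math

def T_steady(T_R, u, A):
    """Closed-form steady temperature, Eq. (steady_temp2) (minus-sign root)."""
    return T_R - (u / (2.0 * A)) * math.sqrt(4.0 * T_R + u ** 2 / A ** 2) \
        + u ** 2 / (2.0 * A ** 2)

def integrate_isothermal(T_R, u, A, l0=1.0, T0=None, dt=0.01, t_end=500.0):
    """Forward-Euler integration of dT/dt = [A*sqrt(T)*(T_R - T) - T*u] / l,
    with the cylinder length growing as l(t) = l0 + u*t."""
    T = T_R if T0 is None else T0
    t = 0.0
    while t < t_end:
        l = l0 + u * t
        T += dt * (A * math.sqrt(T) * (T_R - T) - T * u) / l
        t += dt
    return T

A = math.sqrt(2.0 * math.pi) * 3.0 * 0.05 / (4.0 * math.pi)   # Eq. (A)
T_R, u = 1.0, 2e-3
Tst = T_steady(T_R, u, A)
# the closed form satisfies the quadratic relation Eq. (steady_temp): T_R - T = (u/A)*sqrt(T)
assert abs((T_R - Tst) - (u / A) * math.sqrt(Tst)) < 1e-12
# the integrated temperature relaxes to the steady value
assert abs(integrate_isothermal(T_R, u, A) - Tst) < 1e-3
```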
The relation $$\begin{aligned} pV^2=\rm const.\label{eq.p_v_adi}\end{aligned}$$ also follows from the equation of state $p=\frac{Nk_{\rm B}T}{V}$ as is used to depict the adiabatic curves in Fig. \[t\_v\]. The entropy of the ideal gas with temperature $T$ is calculated by using $f({\bm v})$ in Eq. (\[eq.micro\_distri\]) from the Gibbs entropy formula as $$\begin{aligned} S(T,V)&&=-Nk_{\rm B}\int \frac{f({\bm v})}{V}\ln \frac{f({\bm v})}{V}d{\bm v}d{\bm x}\nonumber\\ &&=Nk_{\rm B}\ln T+Nk_{\rm B}\ln V+S_0,\label{eq.entropy}\end{aligned}$$ where $S_0$ is a constant independent of $T$ and $V$. It is then easy to confirm $$\begin{aligned} q_i(t)=\kappa_i(t)(T_i^{\rm R}-T_i^{\rm st})=T_i^{\rm st}\frac{dS}{dt}\label{eq.heat_entropy_exp}\end{aligned}$$ from Eqs. (\[eq.thermal\_conduc\]), (\[eq.steady\_temp\]) and (\[eq.entropy\]). From this definition of the entropy, it turns out that the switching volumes in Eq. (\[eq.switch\_finite\]) maintain the entropy change during the isothermal process at any piston velocity $u_i$ as $\Delta S_h \equiv Nk_{\rm B}\ln \frac{\tilde{V}_2}{\tilde{V}_1}=Nk_{\rm B}\ln \frac{{V}_2}{{V}_1}\equiv \Delta S$ and $\Delta S_c \equiv Nk_{\rm B}\ln \frac{\tilde{V}_4}{\tilde{V}_3}=Nk_{\rm B}\ln \frac{{V}_4}{{V}_3}=-\Delta S$, where we used Eq. (\[eq.qs\_vol\]) [@IO2015]. Efficiency and power -------------------- The net heat from the heat reservoir during each isothermal process is calculated as $$\begin{aligned} Q_i=\int_0^{t_i}\kappa_i(t)(T_i^{\rm R}-T^{\rm st}_i)dt=T_i^{\rm st}\Delta S_i,\label{eq.heat_avr}\end{aligned}$$ where we used Eq. (\[eq.heat\_entropy\_exp\]). This is the local-equilibrium counterpart of the quasistatic heat $T_i^{\rm R}\Delta S_i$ with $T_i^{\rm R}$ being replaced with $T_i^{\rm st}$ of the working substance [@IO2015]. From Eq. 
(\[eq.heat\_avr\]) and $\Delta S_h=-\Delta S_c=\Delta S$, the efficiency of the present local equilibrium Carnot cycle is given by $$\begin{aligned} &&\eta=1+\frac{Q_c}{Q_h}=1-\frac{T_c^{\rm st}}{T_h^{\rm st}},\label{eq.effi}\end{aligned}$$ which corresponds to the endoreversible expression of Eq. (\[eq.endo\_effi\]) in the present model, revealing that $T_i$ in Eq. (\[eq.endo\_effi\]) is the steady value of the molecular kinetic temperature of the working substance as defined in Eq. (\[eq.eff\_temp\]). By using Eqs. (\[eq.qs\_vol\]), (\[eq.switch\_finite\]), and (\[eq.steady\_temp\]), we can express the power of our cycle by using $T_i^{\rm st}$ without $u_i$ as [@CA] $$\begin{aligned} P&\equiv \frac{W}{(1+\gamma)(t_h+t_c)}=\frac{(T_h^{\rm st}-T_c^{\rm st})\Delta S}{(1+\gamma)\left(\frac{{\tilde{l}_2-\tilde{l}_1}}{{u_h}}+\frac{{\tilde{l}_4-\tilde{l}_3}}{{u_c}}\right)}\nonumber\\ &=\frac{A_hA_c\Delta S \sqrt{(T_c^{\rm R}-y)(T_h^{\rm R}-x)}(\Delta T^{\rm R}-x+y)xy}{(1+\gamma)(l_2-l_1)T_h^{\rm R} \left(A_c y\sqrt{T_c^{\rm R}-y}-A_h x \sqrt{T_h^{\rm R}-x}\right)}, \label{eq.pow_full}\end{aligned}$$ where we have defined $x \equiv T_h^{\rm R}-T_h^{\rm st}$, $y \equiv T_c^{\rm R}-T_c^{\rm st}$, and $\Delta T^{\rm R}\equiv T_h^{\rm R}-T_c^{\rm R}$. Efficiency at maximum power --------------------------- In principle, by maximizing the power Eq. (\[eq.pow\_full\]) as $\frac{\partial P}{\partial x}=\frac{\partial P}{\partial y}=0$ as done in the original CA paper [@CA], we can obtain the efficiency at maximum power of our cycle. This is, however, difficult to perform analytically in general. Therefore, we focus here on the case of a small temperature difference $\Delta T^{\rm R}$ for this analytic treatment as a guideline. In this case, we obtain the power instead of Eq. 
(\[eq.pow\_full\]) as $$\begin{aligned} P=\frac{A_hA_c\Delta S}{(1+\gamma)(l_2-l_1)\sqrt{\bar{T}^{\rm R}}}\frac{(\Delta T^{\rm R}-x+y)xy}{A_cy-A_hx},\label{eq.pow_approx} \label{eq.pow}\end{aligned}$$ to the lowest order of $\Delta T^{\rm R}$, $x$ and $y$, where $\bar{T}^{\rm R}\equiv (T_h^{\rm R}+T_c^{\rm R})/2$. By maximizing the power Eq. (\[eq.pow\]) as $\frac{\partial P}{\partial x}=\frac{\partial P}{\partial y}=0$, we easily obtain the $x$ and $y$ values at maximum power as $$\begin{aligned} x^*=\frac{\sqrt{A_c}\Delta T^{\rm R}}{2(\sqrt{A_h}+\sqrt{A_c})}, \ y^*=-\frac{\sqrt{A_h}\Delta T^{\rm R}}{2(\sqrt{A_h}+\sqrt{A_c})}.\label{eq.temp_at_maxpow}\end{aligned}$$ Then the maximum power and the efficiency at maximum power turn out to be $$\begin{aligned} &&P^*=\frac{A_hA_c\Delta S}{4(1+\gamma)(l_2-l_1)\sqrt{\bar{T}^{\rm R}}}\frac{{\Delta T^{\rm R}}^2}{(\sqrt{A_h}+\sqrt{A_c})^2},\label{eq.max_pow}\\ &&\eta^*=1-\frac{T_c^R-y^*}{T_h^R-x^*}=\frac{\eta_{\rm C}}{2-\frac{\eta_{\rm C}}{1+\sqrt{\frac{A_h}{A_c}}}},\label{eq.ss}\end{aligned}$$ respectively. This expression of $\eta^*$ is essentially the same as the Schmiedl–Seifert efficiency in a stochastic heat engine model [@SS2008]. By expanding Eq. (\[eq.ss\]) with respect to $\eta_{\rm C}$, we obtain $$\begin{aligned} \eta^*=\frac{\eta_{\rm C}}{2}+\frac{\eta_{\rm C}^2}{4\left(1+\sqrt{\frac{A_h}{A_c}}\right)}+O(\eta_{\rm C}^3).\end{aligned}$$ The linear order agrees with that of the CA efficiency Eq. (\[eq.ca\]), which has been shown to be the upper bound of $\eta^*$ in the linear response regime [@VB]. This bound is attained by heat engines with the tight-coupling property between the heat and the motion fluxes without heat-leakage [@VB], which is satisfied in our present model. The quadratic order also recovers that of the CA efficiency Eq. (\[eq.ca\]) under the symmetric condition of $A_h=A_c$, i.e., $L_{{\rm th},h}=L_{{\rm th},c}$ from Eq. (\[eq.A\]) [@ELV]. The efficiency of the same form as Eq. 
(\[eq.ss\]) has also been obtained previously, for example in the low-dissipation Carnot cycle [@EKLV], the minimally nonlinear irreversible heat engine [@IO2012], and the heat engine based on the weighted thermal flux [@ST], which describe heat engines at the lowest order of nonequilibrium away from the quasistatic limit. The reason why we have obtained Eq. (\[eq.ss\]) rather than the CA efficiency Eq. (\[eq.ca\]) can be considered as follows: A crucial difference between our model and the CA model is that, in our case, the steady temperature during the isothermal process Eq. (\[eq.steady\_temp2\]) is available as a function of the piston velocity, owing to the time-evolution equation Eq. (\[eq.1st\_law\]). Because the approximation Eq. (\[eq.pow\_approx\]) is equivalent to considering only the lowest correction to the quasistatic limit in Eq. (\[eq.steady\_temp2\]), $T_i^{\rm st}\simeq T_i^{\rm R}-\frac{u_i}{A_i}\sqrt{T_i^{\rm R}}$, together with the quasistatic-case switching volumes Eq. (\[eq.qs\_vol\]) for a small temperature difference $\Delta T^{\rm R}$, it is natural that it yields an efficiency of the form of Eq. (\[eq.ss\]), similar to the other models [@EKLV; @IO2012; @ST], rather than the CA efficiency. As the temperature difference increases, we expect that the higher-order terms in the piston velocity in Eq. (\[eq.steady\_temp2\]), together with the piston-velocity-dependent switching volumes Eq. (\[eq.switch\_finite\]) that are not adopted in the other models, may give rise to a discrepancy between our model and the other models. In Fig. \[effi\_at\_pmax\], we show $\eta^*$ obtained by maximizing Eq. (\[eq.pow\_full\]) with respect to $x$ and $y$ numerically, together with the analytical result Eq. (\[eq.ss\]), in the case of $A_h=A_c$. The CA efficiency in Eq. (\[eq.ca\]) is also shown for comparison. We can confirm that the numerical value agrees with Eq. 
(\[eq.ss\]) and the CA efficiency for small temperature differences, while it begins to deviate from these efficiencies as the temperature difference increases. ![Efficiency at maximum power $\eta^*$ under the symmetric condition of $A_h=A_c$ as a function of $\eta_{\rm C}=1-T_c^{\rm R}$, with $T_h^{\rm R}=1$. The numerical curve indicates $\eta^*$ obtained by maximizing Eq. (\[eq.pow\_full\]) with respect to $x$ and $y$ numerically.[]{data-label="effi_at_pmax"}](paper_efficiency_at_pmax_2017_6_pre_2nd_resubmit.eps) Verification by molecular dynamics simulation --------------------------------------------- To verify the validity of our theory, we performed an event-driven molecular dynamics (MD) simulation [@AW] of our local equilibrium Carnot cycle by regarding the 2D ideal gas particles as low-density hard discs [@IO2008] with diameter $d$. In Fig. \[center\_velocity\], we show the local center-of-mass $x$-velocity $\bar{v}_x(x_k)$ obtained from the simulation as follows: When the cylinder length is $l_m < l < l_m+\Delta l$ during the isothermal expansion processes, where $l_m$ is the starting point of measurement and $\Delta l$ is a small displacement, we divide the cylinder $l \times L$ into small cells $X_k \times L$ in the $x$-direction with $X_k=[X_k^{\rm min}, X_k^{\rm max}]\equiv \left[\frac{l}{N_{\rm cell}}(k-1), \frac{l}{N_{\rm cell}}k\right]$ ($k=1, \cdots, N_{\rm cell}$). At every particle–particle collision event that occurs during $l_m < l < l_m+\Delta l$ along repeated cycles, we measure the $x$ velocity of the particles belonging to each cell. We define the local center-of-mass $x$ velocity at the $k$th cell $\bar{v}_x(x_k)$ as the average of all the $x$ velocities measured in the $k$th cell, where $x_k \equiv \frac{X_k^{\rm min}+X_k^{\rm max}}{2}$. We can see that $\bar{v}_x(x_k)$ agrees well with the theoretical line Eq. (\[eq.local\_velocity\]). In Fig. 
\[efficiency\_power\], we also compare the efficiency and power obtained by summing the heat and work per collision Eqs. (\[eq.micro\_heat\]) and (\[eq.micro\_work\]) in an MD simulation with the theoretical values Eqs. (\[eq.effi\]) and (\[eq.pow\_full\]) using $T_i^{\rm st}$ in Eq. (\[eq.steady\_temp2\]) for the case of $u_h=-u_c$, which show good agreement over the whole working regime. ![Local center-of-mass $x$ velocity of the gas particles obtained from an MD simulation. The same parameters as in Fig. \[t\_v\] are used with $\gamma=0.5$, $d=0.01$, $l_m=\tilde{l}_1\simeq 1.069$, $\Delta l=0.1$, and $N_{\rm cell}=10$. We used $262400$ cycles for the average (see the main text). The theoretical line is given by Eq. (\[eq.local\_velocity\]) with $l=l_m$.[]{data-label="center_velocity"}](paper_center_velocity_latest_2017_6_pre_2nd_resubmit.eps) ![(a) Efficiency and (b) power as functions of $u_h=-u_c\equiv u$. The same parameters as in Fig. \[t\_v\], except for the piston velocity, are used, with $\gamma=0.5$ and $d=0.01$. We used 3200–76160 cycles for the average. The theoretical Carnot efficiency is $\eta_{\rm C}=0.3$.[]{data-label="efficiency_power"}](paper_efficiency_power_2017_6_pre_2nd_resubmit.eps) Discussion and summary {#Discussion and summary} ====================== We previously studied a finite-time Carnot cycle of a 2D ideal gas [@IO2008] based on molecular kinetics in a setup similar to that in the present work. Although in that work we used the usual Maxwell–Boltzmann distribution with a well-defined temperature $T$ of the gas as the velocity distribution of the particles, it was just an assumption that did not consider the spatial variation of the distribution. Because of the lack of this spatial variation, the fundamental thermodynamic relation Eq. (\[eq.ftr\]) did not hold for the model in [@IO2008]. Moreover, we constructed the finite-time Carnot cycle by switching each thermodynamic process at the same volumes as in the quasistatic cycle. 
This led to an extra heat transfer for relaxation of the working substance to a steady temperature during the isothermal processes, which does not exist in the original CA cycle [@CA]. In the present local equilibrium Carnot cycle, we have overcome these difficulties of [@IO2008] by deriving the velocity distribution with a reasonable spatial variation, Eq. (\[eq.micro\_distri\]), and by appropriately switching each thermodynamic process depending on the piston velocity so that such an extra heat transfer does not occur. In the present paper, we identified the velocity distribution Eq. (\[eq.micro\_distri\]) of the 2D ideal gas as the working substance that is compatible with the local equilibrium assumption and the fundamental thermodynamic relation satisfying the endoreversibility. We found that this distribution is the Maxwell–Boltzmann distribution with a spatially uniform temperature and the spatially varying local center-of-mass velocity Eq. (\[eq.local\_velocity\]). Based on this distribution, we obtained the time-evolution equation of the temperature of the gas. We then constructed the local equilibrium Carnot cycle by using the steady solution of this equation. We confirmed that the efficiency of the present local equilibrium Carnot cycle is given by the endoreversible Carnot efficiency using the steady values of the molecular kinetic temperatures of the working substance during the isothermal processes. We also studied the efficiency at maximum power of our cycle, and showed that it is given by the Schmiedl–Seifert efficiency [@SS2008] under a small temperature difference. We have numerically confirmed the local center-of-mass velocity Eq. (\[eq.local\_velocity\]) by performing an MD simulation. We expect that our theory provides a nonequilibrium statistical mechanics basis for endoreversible heat engines and finite-time thermodynamics. Y. I. acknowledges the financial support from JSPS KAKENHI Grant No. 16K17765.
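The cell-averaging measurement of $\bar{v}_x(x_k)$ described in the MD verification above can be sketched in a few lines of Python. The particle data here are synthetic (a unit-variance thermal component plus the drift $ux/l$ of Eq. (\[eq.local\_velocity\]), with illustrative parameter values), purely to show the binning and averaging procedure rather than the actual hard-disc dynamics:

```python
import numpy as np

# Illustrative parameters (not the paper's actual simulation settings)
u, l_m, N_cell, n_samples = 0.1, 1.069, 10, 200_000
rng = np.random.default_rng(0)

# Synthetic particle data: uniform positions in [0, l_m) and x velocities
# composed of a unit-variance thermal part plus the local drift u*x/l
x = rng.uniform(0.0, l_m, n_samples)
vx = rng.normal(0.0, 1.0, n_samples) + u * x / l_m

# Divide the cylinder into N_cell cells and average vx within each cell
edges = np.linspace(0.0, l_m, N_cell + 1)
k = np.digitize(x, edges) - 1                  # cell index of each sample
vbar = np.array([vx[k == j].mean() for j in range(N_cell)])

x_k = 0.5 * (edges[:-1] + edges[1:])           # cell centers
theory = u * x_k / l_m                         # Eq. (eq.local_velocity) at l = l_m
```

With enough samples per cell, `vbar` reproduces the linear drift profile within statistical noise, mirroring the agreement shown in Fig. \[center\_velocity\].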
Consistency with inviscid Navier-Stokes equations {#appendix} ================================================= We validate the local center-of-mass $x$ velocity Eq. (\[eq.local\_velocity\]) derived in Sec. \[sec.local\_vel\] and the first law of thermodynamics Eq. (\[eq.1st\_law\_deriv\]) derived in Sec. \[sec.1st\_law\] based on the following fluid-mechanical argument: Dynamics of a 2D compressible inviscid fluid is determined by the following mass-, momentum-, and energy-conservation equations [@EM] $$\begin{aligned} &&\frac{\partial \rho}{\partial t}+\nabla \cdot (\rho \bar{\bm v})=0,\label{eq.mass}\\ &&\frac{\partial (\rho \bar{\bm v})}{\partial t}+\nabla \cdot (\rho \bar{\bm v}\bar{\bm v})+\nabla p={\bm 0,}\label{eq.momentum}\\ &&\frac{\partial e}{\partial t}+\nabla \cdot \left((e+p)\bar{\bm v}+{\bm J}\right)=0,\label{eq.energy_density}\end{aligned}$$ respectively. Here, $\bar{\bm v}({\bm x},t)$ is the fluid velocity corresponding to our local center-of-mass velocity, $\rho({\bm x},t)$ is the mass density, $p({\bm x},t)$ is the pressure, $e({\bm x},t)$ is the energy density, and ${\bm J}({\bm x},t)$ is the heat flux. To be more specific, the fluid is a 2D ideal gas with $p({\bm x},t)=\frac{\rho({\bm x},t)}{m}k_{\rm B}T({\bm x},t)$ and $e({\bm x},t)=p({\bm x},t)+\frac{1}{2}\rho({\bm x},t)\bar{{\bm v}}^2({\bm x},t)$, where $p({\bm x},t)$ serves as the internal energy density of the 2D ideal gas. We assume that the fluid is uniform in the $y$-direction and the $y$-component of the fluid velocity vanishes as $\bar{\bm v}({\bm x},t)=(\bar{v}_x(x,t), 0)$. Equation (\[eq.momentum\]) can then be reduced to a $1$D inviscid Navier-Stokes equation: $$\begin{aligned} &&\frac{\partial \bar{v}_x}{\partial t}+\bar{v}_x\frac{\partial \bar{v}_x}{\partial x}+\frac{1}{\rho}\frac{\partial p}{\partial x}=0,\label{eq.momentum_2}\end{aligned}$$ where we used Eq. (\[eq.mass\]). 
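As a quick symbolic sanity check (a sketch using SymPy, not part of the original derivation), one can verify that the profile $\bar{v}_x=ux/(ut+l_0)$ of Eq. (\[eq.local\_velocity\]) solves this equation when the pressure is spatially uniform, and that it is compatible with Eq. (\[eq.mass\]) for $\rho(t)=mN/(Ll(t))$ with $l(t)=ut+l_0$:

```python
import sympy as sp

x, t, u, l0 = sp.symbols('x t u l0', positive=True)
v = u * x / (u * t + l0)                 # candidate velocity profile

# Eq. (eq.momentum_2) with a spatially uniform pressure (dp/dx = 0)
momentum_residual = sp.diff(v, t) + v * sp.diff(v, x)

# Eq. (eq.mass) with rho(t) = m N / (L l(t)) and l(t) = u t + l0
m, N, L = sp.symbols('m N L', positive=True)
rho = m * N / (L * (u * t + l0))
mass_residual = sp.diff(rho, t) + sp.diff(rho * v, x)
```

Both residuals simplify to zero, consistent with the separation-of-variables solution presented next.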
By assuming a spatially uniform mass density and temperature as the endoreversibility condition, $$\begin{aligned} &&\rho({\bm x},t)=\rho(t)=\frac{mN}{Ll(t)},\label{eq.endo_1}\\ &&T({\bm x},t)=T(t),\label{eq.endo_2}\end{aligned}$$ we can directly solve Eq. (\[eq.momentum\_2\]) as follows: The separation of variables $\bar{v}_x(x,t)=F(x)G(t)$ yields $$\begin{aligned} \frac{dF}{dx}=u,\label{eq.f}\\ -\frac{1}{G^2}\frac{dG}{dt}=u,\label{eq.g}\end{aligned}$$ where $u$ is an arbitrary constant independent of $x$ and $t$. By solving Eqs. (\[eq.f\]) and (\[eq.g\]), we obtain $F(x)=ux+C_1$ and $G(t)=\frac{1}{ut+C_2}$, where $C_1$ and $C_2$ are integration constants. By imposing $F(0)=0$ and $G(0)=\frac{1}{l_0}$, we obtain $$\begin{aligned} \bar{v}_x(x,t)=\frac{ux}{ut+l_0},\label{eq.fluid_vel}\end{aligned}$$ which agrees with the local center-of-mass velocity Eq. (\[eq.local\_velocity\]) by regarding $u$ and $l_0$ as the constant piston velocity and the initial position of the piston, respectively. We can confirm that the $x$ component, Eq. (\[eq.fluid\_vel\]), and the vanishing $y$ component of the fluid velocity also satisfy Eq. (\[eq.mass\]). We next consider the energy conservation equation Eq. (\[eq.energy\_density\]). From the endoreversibility condition Eqs. (\[eq.endo\_1\]) and (\[eq.endo\_2\]) and Eq. (\[eq.fluid\_vel\]), Eq. (\[eq.energy\_density\]) becomes $$\begin{aligned} \frac{Nk_{\rm B}}{V}\frac{dT}{dt}=-\nabla \cdot {\bm J}-\frac{Nk_{\rm B}T}{Vl} \frac{dl}{dt}.\label{eq.energy_2}\end{aligned}$$ Integrating both sides of this equation over space, we obtain $$\begin{aligned} Nk_{\rm B}\frac{dT}{dt}=q-p\frac{dV}{dt},\label{eq.1st_law_fluid}\end{aligned}$$ where we defined $q\equiv \int_0^L J_x(0,y,t)dy$ and used $J_y(x,0,t)=J_y(x,L,t)=J_x(l,y,t)=0$ except at the thermal wall of the cylinder. Equation (\[eq.1st\_law\_fluid\]) corresponds to the first law of thermodynamics Eq.
(\[eq.1st\_law\_deriv\]), where the detailed form of $q$ has been determined by the molecular kinetics. H. Callen, [*Thermodynamics and an Introduction to Thermostatistics*]{}, 2nd ed. (Wiley, New York, 1985). F. Reif, [*Fundamentals of Statistical and Thermal Physics*]{}, (Waveland, Illinois, 2009). F. Curzon and B. Ahlborn, Am. J. Phys. **43**, 22 (1975). A. Vaudrey, F. Lanzetta, and M. Feidt, J. Non-Equilib. Thermodyn. **39**, 199 (2014). M. Moreau and Y. Pomeau, Eur. Phys. J. Special Topics, **224**, 769 (2015). M. H. Rubin, Phys. Rev. A **19**, 1272 (1979). A. Bejan, J. Appl. Phys. **79**, 1191 (1996). P. Salamon, J. D. Nulton, G. Siragusa, T. R. Anderse, and A. Limon, Energy **26**, 307 (2001). R. S. Berry, V. A. Kazakov, S. Sieniutycz, Z. Szwast, and A. M. Tsirlin, [*Thermodynamics Optimization of Finite-Time Processes*]{}, (Wiley, Chichester, 2000). C. Van den Broeck, Phys. Rev. Lett. **95**, 190602 (2005). B. Jiménez de Cisneros and A. Calvo Hernández, Phys. Rev. Lett. **98**, 130602 (2007). T. Schmiedl and U. Seifert, Europhys. Lett. **81**, 20003 (2008). Y. Izumida and K. Okuda, Europhys. Lett. **83**, 60003 (2008). Y. Izumida and K. Okuda, Phys. Rev. E **80**, 021121 (2009). M. Esposito, K. Lindenberg, and C. Van den Broeck, Phys. Rev. Lett. **102**, 130602 (2009). M. Esposito, R. Kawai, K. Lindenberg, and C. Van den Broeck, Phys. Rev. Lett. **105**, 150603 (2010). Y. Izumida and K. Okuda, Europhys. Lett. **97**, 10004 (2012). S. Sheng and Z. C. Tu, Phys. Rev. E **89**, 012129 (2014). Y. Wang and Z. C. Tu, EPL [**9**8]{}, 40001 (2012). J. Hoppenau, M. Niemann, and A. Engel, Phys. Rev. E **87**, 062127 (2013). K. Proesmans, C. Driesen, B. Cleuren, and C. Van den Broeck, Phys. Rev. E **92**, 032105 (2015). L. Cerino, A. Puglisi, and A. Vulpiani, Phys. Rev. E **93**, 042116 (2016). T. G. Sano and H. Hayakawa, Prog. Theor. Exp. Phys. **2016**, 083A03 (2016). K. Brandner, K. Saito, and U. Seifert, Phys. Rev. X **5**, 031019 (2015). K. Proesmans and C. 
Van den Broeck, Phys. Rev. Lett. **115**, 090601 (2015). Y. Izumida and K. Okuda, New J. Phys. **17**, 085011 (2015). S. R. de Groot and P. Mazur, [*Non-equilibrium Thermodynamics*]{}, (Dover, New York, 1984). R. Tehver, F. Toigo, J. Koplik, and J. R. Banavar, Phys. Rev. E **57**, R17 (1998). W. Krauth, [*Statistical mechanics: algorithms and computations*]{}, (Oxford University Press, UK, 2006), Chap. 2. C. Cercignani, M. Lampis, A. Lentati, Transp. Th. and Stat. Phys. [**24**]{}, 1319 (1995). S. Brull, P. Charrier, and L. Mieussens, Kinetic and Related Models [**7**]{}, 219 (2014). Even when the thermal wall is put on a side of the cylinder instead of the bottom of the cylinder, as in the present setup, the local center-of-mass $x$ velocity \[Eq. (\[eq.local\_velocity\])\] and the velocity distribution \[Eq. (\[eq.micro\_distri\])\] are not affected by this change as far as the piston moves in the $x$ direction, as is clear from their derivation. However, the heat flow in this case is not the exact Fourier type as Eq. (\[eq.fourier\]), but has an extra correction term of $O(u^2)$. B. J. Alder and T. E. Wainwright, J. Chem. Phys. **31**, 459 (1959). D. J. Evans and G. Morriss, [*Statistical Mechanics of Nonequilibrium Liquids*]{}, 2nd ed. (Cambridge University Press, UK, 2008), Chap. 2. [^1]: Present address: Department of Complex Systems Science, Graduate School of Informatics, Nagoya University, Nagoya 464-8601, Japan
--- abstract: 'The tomographic Alcock-Paczynski (AP) method can result in tight cosmological constraints by using small and intermediate clustering scales of the large scale structure (LSS) of the galaxy distribution. By focusing on the redshift dependence, the AP distortion can be distinguished from the distortions produced by the redshift space distortions (RSD). In this work, we combine the tomographic AP method with other recent observational datasets of SNIa+BAO+CMB+$H_0$ to reconstruct the dark energy equation-of-state $w$ in a non-parametric form. The result favors a dynamical DE at $z\lesssim1$, and shows a mild deviation ($\lesssim2\sigma$) from $w=-1$ at $z=0.5-0.7$. We find the addition of the AP method improves the low redshift ($z\lesssim0.7$) constraint by $\sim50\%$.' author: - Zhenyu Zhang - Gan Gu - Xiaoma Wang - 'Yun-He Li' - 'Cristiano G. Sabiu' - Hyunbae Park - Haitao Miao - Xiaolin Luo - Feng Fang - 'Xiao-Dong Li' title: 'Non-parametric dark energy reconstruction using the tomographic Alcock-Paczynski test' --- Introduction ============ The late-time accelerated expansion of the Universe [@Riess1998; @Perl1999] implies either the existence of “dark energy” or the breakdown of general relativity on cosmological scales. The theoretical origin and observational measurements of cosmic acceleration, although they have attracted tremendous attention, are still far from being well explained or accurately measured [@SW1989; @Li2011; @2012IJMPD..2130002Y; @DHW2013]. The Alcock-Paczynski (AP) test [@AP1979] enables us to probe the angular diameter distance $D_A$ and the Hubble factor $H$, which can be used to place constraints on cosmological parameters. Under a certain cosmological model, the radial and tangential sizes of some distant objects or structures take the forms of $\Delta r_{\parallel} = \frac{c}{H(z)}\Delta z$ and $\Delta r_{\bot}=(1+z)D_A(z)\Delta \theta$, where $\Delta z$, $\Delta \theta$ are their redshift span and angular size, respectively.
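As a concrete sketch of this mapping (flat $\Lambda$CDM with $w=-1$; the helper names `hubble`, `comoving_distance`, `ap_sizes` and the parameter values are our own, for illustration only), one can compute how an object that is isotropic in the true cosmology acquires an apparent radial/tangential anisotropy when a wrong $\Omega_m$ is assumed:

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def hubble(z, om, h0=70.0):
    # H(z) for flat LambdaCDM (w = -1), in km/s/Mpc
    return h0 * np.sqrt(om * (1.0 + z) ** 3 + (1.0 - om))

def comoving_distance(z, om, h0=70.0, n=20000):
    # (1+z) D_A(z) equals the comoving distance in a flat universe [Mpc]
    zz = np.linspace(0.0, z, n + 1)
    f = C_KMS / hubble(zz, om, h0)
    return float((0.5 * (f[1:] + f[:-1]) * np.diff(zz)).sum())

def ap_sizes(z, dz, dtheta, om):
    # radial and tangential sizes implied by (dz, dtheta) in the assumed cosmology
    dr_par = C_KMS / hubble(z, om) * dz           # Delta r_par  = (c/H) dz
    dr_perp = comoving_distance(z, om) * dtheta   # Delta r_perp = (1+z) D_A dtheta
    return dr_par, dr_perp

# An object that is isotropic if the true Omega_m = 0.31 ...
z, dz, true_om, wrong_om = 0.6, 0.01, 0.31, 0.20
dr_par_true, _ = ap_sizes(z, dz, 1.0, true_om)
dtheta = dr_par_true / comoving_distance(z, true_om)
# ... appears stretched along the line of sight under a wrong Omega_m:
dr_par_w, dr_perp_w = ap_sizes(z, dz, dtheta, wrong_om)
ap_ratio = dr_par_w / dr_perp_w
```

For these illustrative numbers the radial-to-tangential ratio shifts by a few percent, which is the geometric signal the AP test exploits.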
Thus, if incorrect cosmological models are assumed for transforming redshifts into comoving distances, the wrongly estimated $\Delta r_{\parallel}$ and $\Delta r_{\bot}$ induce a geometric distortion, known as the AP distortion. Statistical methods which probe and quantify the AP distortion have been developed and applied to a number of galaxy redshift surveys to constrain the cosmological parameters [@Ryden1995; @Ballinger1996; @Matsubara1996; @Outram2004; @Blake2011; @LavausWandelt2012; @Alam2016; @Qingqing2016]. Recently, a novel tomographic AP method based on the redshift evolution of the AP distortion has achieved remarkably tight constraints on the cosmic expansion history parameters [@topology; @Li2014; @Li2015; @Li2016]. The method focuses on the redshift dependence to differentiate the AP effect from the distortions produced by the redshift space distortions (RSD), and has proved to be successful in dealing with galaxy clustering on relatively small scales. [@Li2016] first applied the method to the SDSS (Sloan Digital Sky Survey) BOSS (Baryon Oscillation Spectroscopic Survey) DR12 galaxies, and achieved $\sim35\%$ improvements in the constraints on $\Omega_m$ and $w$ when combining the method with external datasets of the Cosmic Microwave Background (CMB), type Ia supernovae (SNIa), baryon acoustic oscillations (BAO), and $H_0$. In this work we aim to study how the tomographic AP method can be optimised to aid in measuring and characterising dark energy. We apply the method to reconstruct the dark energy equation-of-state $w(z)$, using the non-parametric approach developed in [@Crittenden2009; @Crittenden2012; @zhao2012], which has the advantage of not assuming any [*ad hoc*]{} form of $w$. In a recent work, [@ZhaoGB:2017] used this method to reconstruct $w$ from 16 observational datasets, and claimed a $3.5\sigma$ significance level in preference of dynamical dark energy.
It would be interesting to see what the results would be if the tomographic AP method is used to reconstruct $w$, and whether the reconstructed $w$ is consistent with the results of [@ZhaoGB:2017]. The brief outline of this paper is as follows. In §\[sec:method\] we outline the tomographic AP method and how we practically implement the non-parametric modelling of $w(z)$. In §\[sec:results\] we present the results of our analysis in combination with other datasets. We conclude in §\[sec:conclusion\]. Methodology {#sec:method} =========== In pursuit of reconstructing DE in a model-independent manner, we adopt the non-parametric method of $w$ [@Crittenden2009; @zhao2012] without choosing any particular parameterization. To start, $w$ is parameterized in terms of its values at discrete steps in the scale factor $a$. Fitting a large number of uncorrelated bins would lead to extremely large uncertainties and, in fact, would prevent the Markov Chain Monte Carlo (MCMC) chains from converging due to the large number of degenerate directions in the parameter space. On the other hand, fitting only a few bins usually leads to an unphysical discrete distribution of $w$ and significantly biases the result. The solution is to introduce a prior covariance among a large number of bins based on a phenomenological two-point function, $$\xi_w(|a-a^\prime|) \equiv \left< [w(a)-w^{\rm fid}(a)][w(a^\prime)- w^{\rm fid}(a^\prime)] \right>,$$ which is chosen to take the form of [@Crittenden2009], $$\label{eq:CPZ} \xi_{\rm CPZ}(\delta a) = \xi_w (a=0) /[1 + (\delta a/a_c)^2],$$ where $\delta a\equiv|a-a^\prime|$. Clearly, $a_c$ describes the typical smoothing scale, and $\xi_w(0)$ is the normalization factor determined by the expected variance of the mean of the $w$’s, $\sigma^2_{\bar{w}}$.
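The correlation function of Eq. \[eq:CPZ\] translates directly into a bin-to-bin prior covariance matrix. A minimal sketch (here the normalization $\xi_w(0)$ is simply set to unity for illustration; in the actual analysis it is fixed by $\sigma^2_{\bar w}$):

```python
import numpy as np

def cpz_covariance(a_bins, a_c=0.06, xi0=1.0):
    # C_ij = xi_w(0) / (1 + (|a_i - a_j| / a_c)^2), Eq. (eq:CPZ)
    da = np.abs(a_bins[:, None] - a_bins[None, :])
    return xi0 / (1.0 + (da / a_c) ** 2)

# 29 bins uniform in a over [0.286, 1], i.e. z in [0, 2.5]
a_bins = np.linspace(0.286, 1.0, 29)
C = cpz_covariance(a_bins)
```

The resulting matrix is symmetric, with correlations decaying smoothly as the bin separation grows past $a_c$.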
The ‘floating’ fiducial is defined as the local average, $$w^{\rm fid}_i = \frac{1}{N_i}\sum_{|a_j - a_i| \leq a_c} w^{\rm true}_j,$$ where $N_i$ is the number of neighbouring bins lying around the $i$-th bin within the smoothing scale. In practice, one must choose the prior parameters before conducting the analysis. A very weak prior (i.e., small $a_c$ or large $\sigma^2_{\bar w}$) can match the true model on average (i.e., unbiased), but will result in a noisy reconstruction. A stronger prior reduces the variance but pulls the reconstructed results towards the peak of the prior. In this paper, we use the “weak prior” $a_c=0.06$, $\sigma_{\bar w}=0.04$, the prior which was also adopted in @zhao2012. The tests performed in [@Crittenden2009] showed that the results are largely independent of the choice of the correlation function. Also, [@Crittenden2011] showed that a stronger prior $\sigma_{\bar w}=0.02$ is already enough for reconstructing a range of models without introducing a sizeable bias. We parametrize $w$ in terms of its values at $N$ points in $a$, i.e., $$w_i=w(a_i),\ i=1,2,...,N.$$ In this analysis we choose $N=30$, where the first 29 bins are uniform in $a\in[0.286,1]$, corresponding to $z\in[0,2.5]$, and the last bin covers the wide range of $z\in[2.5,1100]$. Given the binning scheme, together with the covariance matrix $\bf C$ given by Equation \[eq:CPZ\], it is straightforward to write down the prior as a Gaussian PDF $$\mathcal{P}_{\rm prior}({\bf w}) \propto \exp\left( -\frac{1}{2}({\bf w}-{\bf w}^{\rm fid})^{\rm T} {\bf C}^{-1} ({\bf w}-{\bf w}^{\rm fid}) \right).$$ Effectively, the prior results in a new contribution to the total likelihood of the model given the datasets $D$, $${\cal P}({\bf w}|{\bf D}) \propto {\cal P}({\bf D}|{\bf w}) \times {\cal P}_{\rm prior}({\bf w}),$$ thus penalizing models that are less smooth.
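In code, the smoothness prior amounts to a Gaussian penalty on the deviation of ${\bf w}$ from its floating fiducial: smoother histories are penalized less. A hedged sketch with a toy five-bin covariance (illustrative values, not the paper's actual settings):

```python
import numpy as np

def log_prior(w, w_fid, C):
    # ln P_prior up to a constant: -(1/2) (w - w_fid)^T C^{-1} (w - w_fid)
    d = np.asarray(w, dtype=float) - np.asarray(w_fid, dtype=float)
    return -0.5 * d @ np.linalg.solve(C, d)

# Toy setup: 5 bins with a Cauchy-type prior covariance (a_c = 0.06 assumed)
a = np.linspace(0.5, 1.0, 5)
C = 1.0 / (1.0 + (np.abs(a[:, None] - a[None, :]) / 0.06) ** 2)
w_fid = -np.ones(5)

lp_smooth = log_prior(np.full(5, -1.1), w_fid, C)               # gentle offset
lp_rough = log_prior([-1.0, -0.5, -1.5, -0.5, -1.5], w_fid, C)  # oscillatory
```

The oscillatory history receives a much lower prior log-probability than the smooth one, which is exactly the behavior that keeps the MCMC reconstruction physical.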
The method is then applied to a joint dataset of recent cosmological observations including the CMB temperature and polarization anisotropies measured by full-mission Planck [@Planck2015], the “JLA” SNIa sample [@JLA], a Hubble Space Telescope measurement of $H_0=70.6\pm3.3$ km/s/Mpc [@Riess2011; @E14H0], and the BAO distance priors measured from 6dFGS [@6dFGS], SDSS MGS [@MGS], and the SDSS-III BOSS DR11 anisotropic measurements [@Anderson2013], as was also adopted in @Li2016 [@Li2018]. These datasets are then combined with the AP likelihood of SDSS-III BOSS DR12 galaxies [@Li2016; @Li2018], for which we evaluate the redshift evolution of LSS distortion induced by wrong cosmological parameters via the anisotropic correlation function, $$\label{eq:deltahatxi} \delta \hat\xi_{\Delta s}(z_i,z_j,\mu)\ \equiv\ \hat\xi_{\Delta s}(z_i,\mu) - \hat\xi_{\Delta s}(z_j,\mu).$$ $\xi_{\Delta s}(z_i,\mu)$ is the integrated correlation function which captures the information of LSS distortion within the clustering scales of interest, $$\xi_{\Delta s} (\mu) \equiv \int_{s_{\rm min}=6\ h^{-1}\ \rm{Mpc}}^{s_{\rm max}=40\ h^{-1}\ \rm{Mpc}} \xi (s,\mu)\ ds.$$ It is then normalized to remove the uncertainty from the clustering amplitude and the galaxy bias, $$\hat\xi_{\Delta s}(\mu) \equiv \frac{\xi_{\Delta s}(\mu)}{\int_{0}^{\mu_{\rm max}}\xi_{\Delta s}(\mu)\ d\mu}.$$ As described in Equation \[eq:deltahatxi\], the difference between $\hat\xi_{\Delta s}(\mu)$ measured at two different redshifts $z_i,\ z_j$ characterizes the amount of the redshift evolution of LSS distortion. SDSS DR12 has 361759 LOWZ galaxies at $0.15<z<0.43$, and 771567 CMASS galaxies at $0.43< z < 0.693$. We split these galaxies into six non-overlapping redshift bins of $0.150<z_1<0.274<z_2<0.351<z_3<0.430<z_4<0.511<z_5<0.572<z_6<0.693$ [^1] [@Li2016].
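The construction of $\hat\xi_{\Delta s}(\mu)$ — integrate $\xi(s,\mu)$ over $s\in[6,40]\ h^{-1}$Mpc, normalize by the $\mu$ integral, then difference two redshift bins as in Equation \[eq:deltahatxi\] — can be sketched as follows (the toy $\xi(s,\mu)$ grids are purely illustrative):

```python
import numpy as np

def trapz(y, x, axis=0):
    # simple trapezoidal rule along `axis` (avoids NumPy version differences)
    y = np.moveaxis(y, axis, 0)
    w = np.diff(x).reshape(-1, *([1] * (y.ndim - 1)))
    return (0.5 * (y[1:] + y[:-1]) * w).sum(axis=0)

def xi_hat_delta_s(xi, s, mu, s_min=6.0, s_max=40.0):
    # normalized integrated correlation function xi-hat_{Delta s}(mu)
    mask = (s >= s_min) & (s <= s_max)
    xi_ds = trapz(xi[mask, :], s[mask], axis=0)   # xi_{Delta s}(mu)
    return xi_ds / trapz(xi_ds, mu)               # normalize over mu

# Toy anisotropic correlation functions xi(s_i, mu_j) for two "redshift bins"
s = np.linspace(1.0, 50.0, 100)
mu = np.linspace(0.0, 1.0, 25)
xi = (s[:, None] / 10.0) ** -1.8 * (1.0 + 0.3 * mu[None, :] ** 2)
xi2 = (s[:, None] / 10.0) ** -1.7 * (1.0 + 0.25 * mu[None, :] ** 2)

xhat = xi_hat_delta_s(xi, s, mu)
delta_xhat = xi_hat_delta_s(xi2, s, mu) - xhat    # Eq. (eq:deltahatxi)
```

Because of the normalization, rescaling $\xi$ by any constant (e.g. a different galaxy bias) leaves $\hat\xi_{\Delta s}$ unchanged.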
[@Li2014; @Li2015] demonstrated that $\delta \hat\xi_{\Delta s}(z_i,z_j,\mu)$ is dominated by the AP distortion while being rather insensitive to the RSD distortion, enabling us to avoid the large contamination from the latter and probe the AP distortion information on relatively small clustering scales. The only difference in our treatment from [@Li2016] is that here we slightly improve the method and adopt a “full-covariance matrix” likelihood $$\label{eq:chisq2} {\cal P}_{\rm AP}({\bf w}|{\bf D}) \propto \exp\left( -\frac{1}{2}\ {\bf \theta}_{\rm AP}^{\rm T}\ {\bf C}_{\rm AP}^{-1}\ {\bf \theta}_{\rm AP}\right ),$$ where the vector $${\bf \theta}_{\rm AP} = \left[ \delta \hat\xi_{\Delta s}(z_2,z_1,\mu_j), \delta \hat\xi_{\Delta s}(z_3,z_2,\mu_j), \ldots, \delta \hat\xi_{\Delta s}(z_6,z_5,\mu_j)\right]$$ summarizes the redshift evolution among the six redshift bins into its $5\times n_{\mu}$ components ($n_\mu$ is the number of $\mu$ bins used for $\xi_{\Delta s}$). The covariance matrix ${\bf C}_{\rm AP}$ is estimated using the 2,000 MultiDark-Patchy mocks [@MDPATCHY]. Compared with [@Li2016], where the first redshift bin is taken as the reference, the current approach includes the statistical uncertainties in the system and avoids any dependence on which specific redshift bin is chosen as the reference. A detailed description of this improved methodology was presented in [@Li2019]. Results {#sec:results} ======= The derived constraints on $w$ as a function of redshift are plotted in Figure \[fig\_wz\]. The red solid lines represent the 68.3% CL constraints based on Planck+SNIa+BAO+$H_0$, while the AP-added results are shown as the blue filled region. The reconstructed $w(z)$ from Planck+SNIa+BAO+$H_0$ is fully consistent with the cosmological constant; the $w=-1$ line lies within the 68.3% CL region. In the plotted redshift range ($0<z<2.5$), the upper bound of $w$ is constrained to $\lesssim-0.8$, while the lower bound varies from $-1.3$ at $z=0$ to $-2.0$ at $z\gtrsim2$, depending on the redshift.
The best constrained epoch lies around $z=0.2$. These features are consistent with the previous results presented in the literature using a similar dataset [@Zhao:2017cud]. The constraints are much improved after adding AP to the combined dataset. At $z\lesssim 0.7$, i.e. the redshift range of the SDSS galaxies analyzed by the AP method, the uncertainty of $w(z)$ is reduced by $\sim$50%, reaching as small as 0.2. It then increases to 0.4–1.0 at higher redshift ($0.7<z<2.5$). This highlights the power of the AP method in constraining the properties of dark energy, as was shown in [@Li2016; @Li2018]. The most interesting finding of our study is that the result indicates a mild deviation from a constant $w=-1$. At $0.5\lesssim z\lesssim0.7$, $w>-1$ is slightly favored ($\lesssim2\sigma$). The statistical significance of this result is not large enough to claim a detection of deviation from a cosmological constant; however, this may be revisited in the near future, as the constraining power will improve considerably when combining the tomographic AP method with the upcoming DESI [@DESI] or Euclid [@EUCLID] experiments. The results also slightly favor a dynamical behavior of DE. At $z=0-0.5$, we find phantom-like dark energy $-1.2\lesssim w \lesssim-1.0$, while at higher redshift $z=0.5-0.7$ it becomes quintessence-like, $-1.0 \lesssim w \lesssim -0.6$. Theoretically, this is known as the quintom dark energy [@quintom1]. The advantage of the tomographic AP method is that it makes use of the clustering information in a series of redshift bins (rather than compressing the whole sample into a single effective redshift). Thus, it is able to capture the dynamical behavior of dark energy within narrow ranges of $\Delta z$. Our results are consistent with the $w(z)$ obtained in [@Li2018], where the authors used the Planck+SNIa+BAO+$H_0$+AP dataset to constrain the CPL parametrization $w=w_0+w_a \frac{z}{1+z}$.
They found 100% improvement in the DE figure-of-merit and a slight preference for dynamical dark energy. Benefitting from the more general form of a non-parametric $w(z)$, we are able to obtain more detailed features in the reconstruction. Finally, we note that the results with and without AP are in good agreement with each other. This implies that the information obtained from the AP effect agrees well with the other probes. Since the clustering information probed by AP is independent of that probed by BAO (see the discussion in [@zhangxue2018]), to some extent, in this analysis these two different LSS probes complement and validate each other. This is also consistent with the results of [@Li2016], where we found the contour region constrained by AP consistently overlaps with those of SNIa, BAO and CMB. Concluding Remarks {#sec:conclusion} ================== In this work, we consider a very general, non-parametric form for the evolution of the dark energy equation-of-state, $w(z)$. We obtain cosmological constraints by combining our tomographic AP method with other recent observational datasets of SNIa+BAO+CMB+$H_0$. As a result, we find that the inclusion of AP improves the low redshift ($z<0.7$) constraint by $\sim50\%$. Moreover, our result favors a dynamical DE at $z\lesssim1$, and shows a mild deviation ($\lesssim2\sigma$) from $w=-1$ at $z=0.5-0.7$. We did not discuss the systematics of the AP method in detail. This topic has been extensively studied in [@Li2016; @Li2018], where the authors found that for the current observations the systematic error is still much less than the statistical uncertainty. We note that our constraint on $w(z)$ at $z\lesssim0.7$ is the tightest within the current literature.
The accuracy we achieved is as good as that of @Zhao:2017cud in their “ALL16” combination, where they used the Planck+SNIa+BAO+$H_0$ datasets[^2], combined with the WiggleZ galaxy power spectra [@Parkinson2012], the CFHTLenS weak lensing shear angular power spectra [@Heymans2013], the $H(z)$ measurement using the relative ages of old, passively evolving galaxies based on a cosmic chronometer approach [OHD; @Moresco2016], and the Ly$\alpha$ BAO measurements [@Deblubac2015]. In comparison, we use a much smaller number of datasets to achieve a similar low-redshift $w(z)$ constraint. This highlights the great power of our tomographic AP method using anisotropic clustering on small scales. At higher redshift ($z\gtrsim0.7$) our constraint is weaker than that of @Zhao:2017cud. It would be interesting to include more datasets [e.g. the ones used in their paper, the SDSS IV high redshift results, @2019MNRAS.482.3497Z] and then re-perform this analysis. The dynamical behavior of dark energy at $z\approx0.5-0.7$ has also been found in many other works [@Zhao:2017cud; @Wang:2018fng]. Due to the limitations of current observations, it is not possible to claim a detection of dynamical dark energy at $>5\sigma$ CL. We expect this can be achieved (or falsified) in the near future aided by more advanced LSS experiments, such as DESI [@DESI], Euclid [@EUCLID], and LSST [@LSST]. We thank Gong-bo Zhao, Yuting Wang and Qing-Guo Huang for helpful discussion. XDL acknowledges the support from the NSFC grant (No. 11803094). YHL acknowledges the support of the National Natural Science Foundation of China (Grant No. 11805031) and the Fundamental Research Funds for the Central Universities (Grant No. N170503009). CGS acknowledges financial support from the National Research Foundation of Korea (Grant Nos. 2017R1D1A1B03034900, 2017R1A2B2004644 and 2017R1A4A101517).
Based on observations obtained with Planck (<http://www.esa.int/Planck>), an ESA science mission with instruments and contributions directly funded by ESA Member States, NASA, and Canada. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is <http://www.sdss3.org>. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. Ade, P.A.R., Aghanim, N., & Arnaud, M., et al. arXiv:1502.01589 Aghamousa, A., 2016, arXiv:1611.00036 Alam, S., Ata, M., & Bailey, S., et al. 2016, submitted to MNRAS (arXiv:1607.03155) Alcock, C., & Paczynski, B. 1979, Nature, 281, 358 Anderson, L., Aubourg, É., & Bailey, S. et al. 2014, MNRAS, 441, 24 Ballinger, W.E., Peacock, J.A., & Heavens, A.F. 1996, MNRAS, 282, 877 Betoule, M., Kessler, R., & Guy, J., et al. 2014, A&A, 568, 32 Beutler, F., Blake, C., & Colless, M., et al. 2011, MNRAS, 416, 3017 Blake, C., Glazebrook, K., & Davis, T. M., 2011, MNRAS, 418, 1725 Crittenden, R.G. et al. 2012, JCAP, 02, 048 Crittenden, R. 
G., Pogosian, Li, & Zhao, G.-B., 2009. JCAP, 0912, 025 Crittenden, R. G., Zhao, G.-B., & Pogosian, Li, et al. 2012. JCAP, 1202, 048 Delubac, T. et al. 2015, Astron. & Astrophys. 574, A59. Efstathiou, G. 2014, MNRAS, 440, 1138 Feng, B., Wang., X. L., & Zhang, X. M., 2005. Phys. Lett. B., 607, 35 Heymans, C. et al. 2013, Mon. Not. R. Astron. Soc. 432, 2433–2453. 1303.1808. Kim, J., Park, C., L’Huillier, B., & Hong, S. E. 2015, JKAS, 48, 213 Kitaura, F.S., Rodriguez-Torres, S., Chuang, C.-H., et al. arXiv:1509.06400 Laureijs, R., Amiaux, J., & Arduini, S., et al. 2011, arXiv:1110.3193 Marshall, Phil, Anguita, Timo, & Bianco, F. B., et al. 2017, arXiv:1708.04058 Lavaux, G., & Wandelt, B.D. 2012, ApJ, 754, 109 Li, M., Li, X.-D., Wang, S., & Wang, Y. 2011, Commun. Theor. Phys., 56, 525 Li, X.-D., Park, C., Forero-Romero, J., & Kim, J. 2014, ApJ, 796, 137 Li, X.-D., Park, C., Sabiu, C.G., & Kim, J. 2015, MNRAS, 450, 807 Li, X.-D., Park, C., & Sabiu, C.G., et al. 2016, ApJ, 832, 103 Li, X.-D., Sabiu, C.G., & Park, C., et al. 2018, ApJ, 856, 88 Li, X.-D., Miao, H, & Wang, X., et al. 2019, submitted to ApJ Mao, Q., Berlind, A.A., Scherrer, R.J., et al. 2016, submitted to ApJ Matsubara T., & Suto, Y. 1996, ApJ, 470, L1 Moresco, M. et al. 2016, J. Cosmol. Astropart. Phys. 5, 014. 1601. 01701. Outram, P.J., Shanks, T., Boyle, B.J., Croom, S.M., Hoyle, F., Loaring, N.S., Miller, L., & Smith, R.J. 2004, MNRAS, 348, 745 Park, C., & Kim, Y.-R. 2010, ApJL, 715, L185 Parkinson, D. et al. 2012, Phys. Rev. D 86, 103518. 1210.2130. Perlmutter, S., Aldering, G., & Goldhaber, G., et al. 1999, ApJ, 517, 565 Riess, A.G., Filippenko, A.V., & Challis, P., et al. 1998, AJ, 116, 1009 Riess, A.G., Macri, L., & Casertano, S., et al. 2011, ApJ, 730, 119 Ross, A.J., Samushia, L., & Howlett, C., et al. 2015, MNRAS, 449, 835 Ryden, B.S. 1995, ApJ, 452, 25 Weinberg, S. 1989, Reviews of Modern Physics, 61, 1 Weinberg, D.H, Mortonson, M.J., Eisenstein, D.J., et al. 
2013, Physics Reports, 530, 87 Wang, Y., Pogosian, L., Zhao, G.-B., & Zucca, A. 2018. accepted by ApJL Yoo, J., & Watanabe, Y. 2012, International Journal of Modern Physics D, 21, 1230002 Zhang, X., Huang, Q.-G., & Li, X.-D. 2018. Mon. Not. R. Astron. Soc., 483, 1655 Zhao, G.-B., Crittenden, R.-G., Pogosian, L., & Zhang, X, 2012. Phys. Rev. Lett., 109, 171301 Zhao, G.-B., Raveri, M., & Pogosian, L., et al. 2017, Nat. Astron., 1, 627 Zhao, G.-B. et al. 2017, Mon. Not. R. Astron. Soc. 466, 762 Zhao G.-B., et al., 2019, MNRAS, 482, 3497 [^1]: The boundaries are determined so that, for LOWZ and CMASS samples, the number of galaxies are same in each bin, respectively. [^2]: [@Zhao:2017cud] used the SDSS galaxy BAO measurements at nine effective redshifts, which are measurements at more redshift points than our adopted BAO dataset, and is expected to be more powerful in such a $w(z)$ reconstruction analysis.
--- abstract: 'We consider the evolution of the phase transition from the parent hexagonal phase $P6_{3}/mmc$ to the orthorhombic phase $Pmcn$ that occurs in several compounds of $A^{\prime }A^{\prime \prime }BX_{4}$ family as a function of the hcp lattice parameter $c/a$. For compounds of $K_{2}SO_{4}$ type with $c/a$ larger than the threshold value 1.26 the direct first-order transition $Pmcn-P6_{3}/mmc$ is characterized by the large entropy jump $\sim R\ln 2$. For compounds $Rb_{2}WO_{4}$, $K_{2}MoO_{4}$, $K_{2}WO_{4}$ with $c/a<1.26$ this transition occurs via an intermediate incommensurate $(Inc)$ phase. DSC measurements were performed in $Rb_{2}WO_{4}$ to characterize the thermodynamics of the $Pmcn-Inc-P6_{3}/mmc$ transitions. It was found that both transitions are again of the first order with entropy jumps $0.2\cdot R\ln 2$ and $0.3\cdot R\ln 2$. Therefore, at $c/a \sim 1.26$ the $A^{\prime }A^{\prime \prime }BX_{4}$ compounds reveal an unusual Lifshitz point where three first order transition lines meet. We propose the coupling of crystal elasticity with $BX_{4}$ tetrahedra orientation as a possible source of the transitions discontinuity.' address: - | L.D.Landau Institute for Theoretical Physics, 117940, Moscow, Russia\ and Departamento de Física, Universidade Federal de Minas Gerais\ Caixa Postal 702, 30161-970, Belo Horizonte, Minas Gerais, Brazil - | Departamento de Física, Universidade Federal de Minas Gerais\ Caixa Postal 702, 30161-970, Belo Horizonte, Minas Gerais, Brazil - 'L.M.M.I., Université de Toulon et du Var, BP 132, 83957 La Garde cédex, France' author: - 'I. Luk’yanchuk' - 'A. Jório' - 'P.
Saint-Grégoire' title: | Thermodynamics of the incommensurate state in $Rb_{2}WO_{4}$:\ on the Lifshitz point in $A^{\prime }A^{\prime \prime }BX_{4}$ compounds --- Introduction ============ The orientational ordering of $BX_{4}$ tetrahedra drives a rich sequence of structural phases in ionic $A^{\prime }A^{\prime \prime }BX_{4}$ compounds of the $K_{2}SO_{4}$ type. In the present communication we are interested in the nature of the phase transition from the parent high-symmetry phase $P6_{3}/mmc$ (like $\alpha$-$K_{2}SO_{4}$) to the orthorhombic phase $Pmcn$ (like $\beta$-$K_{2}SO_{4}$) that occurs at high temperatures ($T\sim 600-800\,K$) either directly: $$Pmcn\stackrel{T_{c}}{-}P6_{3}/mmc \label{Dir}$$ (e.g. in $K_{2}SO_{4}$, $Rb_{2}SeO_{4}$, $K_{2}SeO_{4}$) or via an intermediate $1q$-incommensurate ($Inc$) phase: $$Pmcn\stackrel{T_{l}}{-}Inc\stackrel{T_{i}}{-}P6_{3}/mmc \label{Inc}$$ as in the molybdates and tungstates $Rb_{2}WO_{4}$, $K_{2}MoO_{4}$, $K_{2}WO_{4}$. The latter phase has the modulation vector ${\bf q}=(0,q_{b},0)$ that can alternatively be directed along the two other equivalent directions of the $120^{\circ}$ star of the hexagonal Brillouin zone. All the transitions are of the order-disorder type and are characterized by the vertical (up/down) orientations of the $BX_{4}$ tetrahedra. Other, low-temperature transitions in $A^{\prime }A^{\prime \prime }BX_{4}$ compounds that are related to the planar orientation of the tetrahedra are beyond our consideration (for details, see Refs. [@Kur; @KurRev; @L]). From the viewpoint of the Landau theory of phase transitions, only the lock-in $Pmcn-Inc$ transition should be of first order. The $Pmcn-P6_{3}/mmc$ transition should be of second order since $Pmcn$ is a subgroup of $P6_{3}/mmc$ and neither third-order nor Lifshitz terms are present in the Landau functional. The transition $Inc-P6_{3}/mmc$ should also be of second order, as a transition to an incommensurate phase of type II. 
The recently proposed hcp Ising model [@L] correctly describes the high-temperature phase diagram of $A^{\prime }A^{\prime \prime }BX_{4}$ compounds. In this model the $Pmcn-P6_{3}/mmc$ and $Inc-P6_{3}/mmc$ transitions are of second order. The experimental properties of the $Pmcn-P6_{3}/mmc$ transitions in various compounds of the $A^{\prime }A^{\prime \prime }BX_{4}$ family are collected from Refs. [@G; @M; @F; @S; @Rao; @Lop; @W] in Table I as a function of the geometrical factor $c/a$ of their hcp structure. As was shown in our previous study [@L], this is the unique parameter that drives the actual phase sequence: for $c/a>1.26$ the transition is direct, whereas for $c/a<1.26$ the sequence (\[Inc\]) takes place. In disagreement with the theoretical prediction, the direct $Pmcn-P6_{3}/mmc$ transition is of first order, with a large jump of the molar entropy ($\sim R\ln 2$) [@S; @Lop] and of the lattice constants ($\sim 2\%$) [@M; @F]. The incommensurate phase in $Rb_{2}WO_{4}$, $K_{2}MoO_{4}$, $K_{2}WO_{4}$ has been relatively poorly studied because of the highly hygroscopic nature of these compounds [@W; @M1]. It is known that the $Inc-P6_{3}/mmc$ transition reveals a substantial discontinuity of the lattice parameter ($\sim 0.2-0.7\%$) [@W]. To characterize the thermodynamics of the $Pmcn-Inc-P6_{3}/mmc$ transitions we performed Differential Scanning Calorimetry (DSC) measurements in $Rb_{2}WO_{4}$ that are reported in Sec. II. It was found that both transitions are of first order, with entropy jumps of $0.2R\ln 2$ and $0.3R\ln 2$. The $Inc-P6_{3}/mmc$ transition is a rare example of an incommensurate transition that occurs discontinuously. At $c/a\sim 1.26$ the critical temperatures $T_{l}$, $T_{i}$, $T_{c}$ coincide and the $A^{\prime }A^{\prime \prime }BX_{4}$ compounds seem to reveal a triple Lifshitz point, which was previously found in only a few experimental systems (for a review see Ref. [@UFN]). 
The particular property of this Lifshitz point in $A^{\prime }A^{\prime \prime }BX_{4}$ compounds is that all the incoming transition lines are of first order. This possibility was theoretically studied in Ref. [@NaNo], where the discontinuities were modeled by negative fourth-order terms in the Landau functional. To our knowledge this is also a unique example of a Lifshitz point in a system where the modulation vector can be directed along more than one equivalent direction. The main question raised by these systems is why strong discontinuities appear at the $Pmcn-P6_{3}/mmc$ and $Inc-P6_{3}/mmc$ transitions. Note first that they cannot be ascribed to the fluctuation effects that have been widely studied in recent decades in relation to transitions to modulated phases [@Braz; @Kats; @Horn; @Muk; @Bar]. In such a case the first-order character is attributed to the lack of a stable fixed point, as in $BaMnF_4$ [@StGreg], and the discontinuity is expected to be small due to the smallness of the critical region. We propose that the observed discontinuities are caused by the coupling of the order parameter with the elasticity of the crystal, which is known [@Domb; @Sal; @Lar] to be able to change the order of a transition. Introducing in Sec. III the corresponding coupling into the mean-field treatment of the hcp Ising model [@L] and comparing the results with the measured jumps of the lattice constant and molar entropy, we demonstrate that this coupling can be responsible for the discontinuity of the transitions. Experiment ========== DSC experiments were performed on $Rb_{2}WO_{4}$ crystals to characterize the thermodynamics of the $Pmcn-Inc-P6_{3}/mmc$ transitions. Due to the very hygroscopic nature of the material, powder samples were prepared in a special chamber in a dry nitrogen atmosphere. The DSC experiments were performed using Mettler TA3000 equipment, between room temperature and $820\,K$. The heating/cooling rate was $5\,K/min$. 
DSC thermograms of the investigated sample show the presence of two reversible enthalpic anomalies at about $T_{l}=660\,K$ and $T_{i}=746\,K$, the lock-in and the incommensurate phase transitions, respectively (see Fig. I). The measured molar entropy jumps are $\Delta S_{T_l}=1.4\,J/K\cdot mol$ and $\Delta S_{T_i}=1.8\,J/K\cdot mol$ (approximately $80\%$ of $\Delta S_{T_i}$ is taken from the $\delta$-peak of the DSC anomaly at $T_{i}$ and the other $20\%$ from the residual specific-heat decrease in the temperature interval of about $8\,K$ below $T_{i}$, see Fig. I). These values are given in Table I in units of $R\ln 2$. A hysteresis of $12.5\,K$ was observed for the $Pmcn-Inc$ transition ($T_{l}=664\pm 0.5\,K$ on heating and $651.5\pm 0.5\,K$ on cooling). In contrast, the $Inc-P6_{3}/mmc$ transition reveals no hysteresis within the error bar of $\pm 0.5\,K$. This is consistent with the extremely small hysteresis of $1\,K$ observed for the $Pmcn-P6_{3}/mmc$ transition in $K_{2}SO_{4}$ [@Rao]. Discussion ========== The high-temperature order-disorder transitions in $A^{\prime }A^{\prime \prime }BX_{4}$ compounds are described by the on-site averages of the vertical orientation of the $BX_{4}$ tetrahedra, $\sigma _{i}=<S_{i}>$, where the pseudo-spin $S_i$ is equal to $\pm 1$ for the up/down tetrahedra orientations [@L]. The variables $\sigma_i$ are equal to zero in the disordered high-temperature phase $P6_{3}/mmc$. In the low-temperature phase $Pmcn$ they take the equal amplitudes $\sigma _{i}=\pm \sigma$ and alternate according to the $Pmcn$ symmetry. In the incommensurate phase a modulation $\sigma _{i}=\sigma _{q}(e^{i{\bf qr}_{i}}+e^{-i{\bf qr}_{i}})=2\sigma _{q}\cos {\bf qr}_{i}$ occurs. The absolute values of $\sigma _{i}$, and hence of the amplitudes $\sigma$ and $2\sigma _{q}$ (which define the corresponding order parameters), are smaller than one; the smaller they are, the more disordered the $BX_{4}$ tetrahedra. 
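The conversion of the measured jumps into the units of $R\ln 2$ quoted in Table I is a one-line check; the sketch below (Python, using the molar gas constant $R=8.314\,J/K\cdot mol$) reproduces the quoted values $0.2$ and $0.3$:

```python
import math

R = 8.314       # molar gas constant, J/(K*mol)
dS_Tl = 1.4     # measured entropy jump at the lock-in transition, J/(K*mol)
dS_Ti = 1.8     # measured entropy jump at the Inc-P6_3/mmc transition, J/(K*mol)

jump_Tl = dS_Tl / (R * math.log(2))   # in units of R ln 2
jump_Ti = dS_Ti / (R * math.log(2))
print(round(jump_Tl, 1), round(jump_Ti, 1))   # -> 0.2 0.3
```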
Because of the discontinuity of the $Pmcn-P6_{3}/mmc$ and $Inc-P6_{3}/mmc$ transitions, the amplitudes $\sigma$ and $2\sigma _{q}$ have nonvanishing values below the critical temperatures $T_{c}$ and $T_{i}$. We estimate $\sigma$ and $2\sigma _{q}$ in the ordered states from the entropy jump at the transition: $$\frac{\Delta S}{R}=<\frac{1}{2}\left((1+\sigma _{i})\ln (1+\sigma _{i})+(1-\sigma _{i})\ln (1-\sigma _{i})\right)>_{i} \label{entropy}$$ which is measured experimentally (see Table I). For $K_{2}SeO_{4}$ and $K_{2}SO_{4}$ the inequality $\Delta S>R\ln 2$ holds, which means that in the low-temperature phase the $BX_{4}$ tetrahedra are perfectly ordered ($\sigma \sim 1$) and, possibly, other degrees of freedom are involved in the transition. Taking $\sigma _{i}$ in the incommensurate phase of $Rb_{2}WO_{4}$ as $2\sigma _{q}\cos {\bf qr}_{i}$, from $\Delta S=0.3R\ln 2$ we get $2\sigma _{q}\sim 0.8$, which again demonstrates the high degree of tetrahedra ordering. In the mean-field approach to the hcp Ising model [@L], the phase transitions from $P6_{3}/mmc$ to the $Pmcn$ and $Inc$ phases were found to be continuous, and the free energy (per molecule) was expanded over the small values of the parameters $\sigma$ and $2\sigma _{q}$ as: $$f_{com}=\frac{k}{2}(T-T_{c})\sigma ^{2}+\frac{kT}{12}\sigma ^{4} \label{fcom}$$ for the $Pmcn-P6_{3}/mmc$ transition, and as: $$f_{inc}=\frac{k}{4}(T-T_{i})(2\sigma _{q})^{2}+\frac{kT}{32}(2\sigma _{q})^{4} \label{finc}$$ for the $Inc-P6_{3}/mmc$ transition. The critical temperatures $T_{c}$, $T_{i}$ are functions of the interaction parameters $J_{ij}$. They coincide at the Lifshitz point and are correlated with the geometrical factor $c/a$ as follows: $T_{c}<T_{i}$ when $c/a<1.26$ and $T_{c}>T_{i}$ when $c/a>1.26$. To account for the discontinuity of the transitions we propose that the coupling of the tetrahedra orientation with the crystal elasticity is responsible for this phenomenon. 
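The estimate $2\sigma _{q}\sim 0.8$ can be reproduced by solving Eq. (\[entropy\]) numerically, averaging over the phase of the incommensurate modulation. The bisection sketch below (Python) returns an amplitude of about $0.86$, of the order quoted above:

```python
import math

def entropy_jump(amp, n=4000):
    """Phase average of (1/2)[(1+s)ln(1+s) + (1-s)ln(1-s)] with s = amp*cos(theta)."""
    acc = 0.0
    for k in range(n):
        s = amp * math.cos(2 * math.pi * k / n)
        acc += 0.5 * ((1 + s) * math.log(1 + s) + (1 - s) * math.log(1 - s))
    return acc / n

# bisect for the amplitude 2*sigma_q at which the jump equals 0.3 R ln 2
target = 0.3 * math.log(2)
lo, hi = 0.01, 0.999
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if entropy_jump(mid) < target else (lo, mid)
print(round(0.5 * (lo + hi), 2))
```

Note that the uniform ($Pmcn$) case saturates at $\Delta S/R = \ln 2$ as $\sigma \to 1$, which is why $\Delta S > R\ln 2$ signals that additional degrees of freedom are involved.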
Our further consideration is analogous to the compressible Ising model proposed by Domb [@Domb]. For estimation purposes we consider here only the coupling with the strain $e_{3}$ along the hexagonal axis and omit the other elastic degrees of freedom. Accounting for those is difficult because of the absence of experimental data, but it could only improve our estimates. The elastic contribution to the free energy is written as: $$f_{el}=\gamma \sigma ^{2}e_{3}+\frac{1}{2}V_{mol}C_{33}e_{3}^{2} \label{felcom}$$ where $\gamma \sigma ^{2}e_{3}$ is the coupling of the order parameter with the elastic strain and $\frac{1}{2}V_{mol}C_{33}e_{3}^{2}$ is the proper elastic energy of the crystal ($V_{mol}$ being the volume per molecule). After minimization we get the strain in the $Pmcn$ phase: $e_{3}=\Delta c/c=-\gamma \sigma ^{2}/V_{mol}C_{33}$. Substituting it back into (\[felcom\]) we find that the coupling with the elastic strain renormalizes the quartic term in (\[fcom\]), and the total free energy is written as: $$\begin{aligned} f_{com}+f_{el} &=&\frac{k}{2}(T-T_{c})\sigma ^{2} \label{ftotcom} \\ &&+(kT/12-\gamma ^{2}/2C_{33}V_{mol})\sigma ^{4} \nonumber\end{aligned}$$ The quartic term becomes negative when the elastic contribution $\gamma ^{2}/2C_{33}V_{mol}$ exceeds the Ising thermal energy $kT/12$. The transition is then of first order and the amplitude of the $Pmcn$ order parameter is stabilized by the higher-order terms. We estimate the value of the coupling constant $\gamma$ from the relation $\gamma =-\Delta c/c\cdot V_{mol}C_{33}/\sigma ^{2}$ which, in $K_{2}SO_{4}$ with $C_{33}=55\cdot 10^{9}N/m^{2}$ [@C33], $\sigma ^{2}\sim 1$, $V_{mol}\sim 120\AA ^{3}$ [@M] and $\Delta c/c=0.025$, gives $\gamma \sim -1.7\cdot 10^{-19}J$. Then the elastic contribution $\gamma ^{2}/2C_{33}V_{mol}\sim 2.1\cdot 10^{-21}J$ is indeed larger than $kT_{c}/12=10^{-21}J$, which justifies the role of the elastic degrees of freedom in the discontinuity of the $Pmcn-P6_{3}/mmc$ transition. 
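The numerical estimate above is easy to reproduce. In the sketch below (Python) the value $T_{c}\approx 860\,K$ for $K_{2}SO_{4}$ is an assumption on our part, chosen consistently with the quoted $kT_{c}/12\approx 10^{-21}\,J$:

```python
k_B = 1.380649e-23    # Boltzmann constant, J/K
C33 = 55e9            # elastic constant of K2SO4, N/m^2
V_mol = 120e-30       # volume per molecule, m^3 (120 A^3)
dc_over_c = 0.025     # jump of the lattice constant
sigma2 = 1.0          # sigma^2 ~ 1 in the ordered phase
T_c = 860.0           # K; assumed value (not quoted in the text)

gamma = -dc_over_c * V_mol * C33 / sigma2     # ~ -1.7e-19 J
elastic = gamma ** 2 / (2 * C33 * V_mol)      # ~ 2.1e-21 J
thermal = k_B * T_c / 12                      # ~ 1e-21 J
print(gamma, elastic, thermal)
# quartic coefficient goes negative -> first-order transition
assert elastic > thermal
```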
Consider now the $Inc-P6_{3}/mmc$ transition. Assuming that the elastic coupling is given by $\gamma <\sigma^2_{i}>e_{3}=\frac{1}{2}\gamma (2\sigma _{q})^{2}e_{3}$, we come to the effective functional $$\begin{aligned} f_{inc}+f_{el} &=&\frac{k}{4}(T-T_{i})(2\sigma _{q})^{2} \label{ftotinc} \\ &&+(kT/32-\gamma ^{2}/8C_{33}V_{mol})(2\sigma _{q})^{4} \nonumber\end{aligned}$$ We estimate the quartic coefficient for $Rb_{2}WO_{4}$ analogously to the preceding case, taking $2\sigma _{q}\sim 0.8$ and $V_{mol}\sim 142\AA ^{3}$ [@W]. Since the elastic constant $C_{33}$ is not available, we assume it has the same value as in $K_{2}SO_{4}$. The jump of the lattice parameter $\Delta c/c$ is assumed to be of the same order, $0.007$, as in $K_{2}MoO_{4}$. The calculation gives $\gamma =-2\Delta c/c\cdot V_{mol}C_{33}/(2\sigma _{q})^{2}\sim -2\cdot 10^{-19}J$ and $\gamma ^{2}/8C_{33}V_{mol}\sim 13\cdot 10^{-22}J$, which again is larger than the bare fourth-order coefficient $kT_{i}/32=3\cdot 10^{-22}J$. To conclude, we suggest that a Lifshitz point occurs in $A^{\prime }A^{\prime \prime }BX_{4}$ compounds at $c/a\sim 1.26$, where three first-order transition lines meet. One can expect to reach this point experimentally either by preparing the solid solution $Rb_2W_xMo_{1-x}O_4$ or by submitting $K_{2}SeO_{4}$ or $Tl_{2}SeO_{4}$ (with $c/a=1.27$ and $1.26$) to a uniaxial pressure along $c$. Analyzing the experimental data, we demonstrate that the coupling of the order parameter with the crystal elasticity can be responsible for the discontinuity of the transitions. We stress another peculiar feature of the $Pmcn-P6_{3}/mmc$ and $Inc-P6_{3}/mmc$ transitions: despite the strong entropy jump ($\sim R\ln 2$), they have a very low hysteresis (less than $1K$), which cannot be explained on the basis of the available models [@Domb; @Sal]. 
It is interesting to note that the $Pmcn-P6_{3}/mmc$ transition also occurs in another compound of the $A^{\prime }A^{\prime \prime }BX_{4}$ family, $KLiSO_{4}$, which has the large ratio $c/a=1.69$. Unlike the other cases, this transition is either of second order or weakly first order, with an entropy jump of less than $0.1R\ln 2$ [@B] and with no visible jump of the lattice constants [@Bob]. It is thus quite probable that the order of the transition changes from first to second and that the $Pmcn-P6_{3}/mmc$ transition line reveals a tricritical point when $c/a$ increases. More systematic experiments, however, are needed to verify this hypothesis. We are grateful to M. A. Pimenta, R. L. Moreira, W. Selke, A. S. Chaves, F. C. de Sá Barreto and J. A. Plascak for helpful discussions and to A. M. Moreira for technical assistance. The work of I. L. was supported by the Brazilian agency Fundação de Amparo à Pesquisa do Estado de Minas Gerais (FAPEMIG) and by the Russian Foundation for Fundamental Investigations (RFFI), Grant No. 960218431a. The work of A. J. was supported by the Brazilian agency Fundação Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). M. Kurzyński and M. Halawa, Phys. Rev. [**B34**]{}, 4846 (1986). M. Kurzyński, Act. Phys. Pol. [**B6**]{}, 1101 (1995). I. Luk’yanchuk, A. Jório and M. A. Pimenta, Phys. Rev. [**B57**]{}, 5086 (1998). G. Gatow, Acta Cryst. [**15**]{}, 419 (1962). A. J. Majumdar and R. Roy, J. Phys. Chem. [**69**]{}, 1684 (1965). G. Pannetier and M. Gaultier, Bull. Soc. Chim. Fr., 1069 (1966). C. H. Shomate and B. F. Naylor, J. Am. Chem. Soc. [**67**]{}, 72 (1945). C. N. R. Rao and K. J. Rao, [*Phase Transitions in Solids*]{} (McGraw-Hill Inc., 1978). A. Lopez Echarri, M. J. Tello and P. Gili, Sol. St. Com. [**36**]{}, 1021 (1980). J. Warczewski, Phase Transitions [**1**]{}, 131 (1979). F. Tuinstra and A. J. van den Berg, Phase Transitions [**3**]{}, 275 (1983) and refs. therein. Yu. M. Vysochanskiĭ and V. Yu. Slivka, Usp. Fiz. 
Nauk [**162**]{}, 139 (1992) \[Sov. Phys. Usp. [**35**]{}, 123 (1992)\]. S. L. Qiu, Mitra Dutta, H. Z. Cummins, J. P. Wicksted and S. M. Shapiro, Phys. Rev. [**B34**]{}, 7901 (1986). S. A. Brazovsky, I. E. Dzyaloshinsky and A. R. Muratov, Zh. Eksp. Teor. Fiz. [**75**]{}, 1140 (1987) \[Sov. Phys. JETP [**48**]{}, 573 (1987)\]. E. I. Kats, V. V. Lebedev and A. R. Muratov, Phys. Reports [**228**]{}, 1 (1993). R. M. Hornreich, M. Luban and S. Shtrikman, Phys. Rev. Lett. [**35**]{}, 1678 (1975). D. Mukamel and M. Luban, Phys. Rev. [**B18**]{}, 3631 (1978). C. Barbosa, Phys. Rev. [**B42**]{}, 6363 (1990). P. Saint-Grégoire, W. Kleemann, F. J. Schafer and J. Moret, J. Physique [**49**]{}, 463 (1988). C. Domb, J. Chem. Phys. [**25**]{}, 783 (1956). S. R. Salinas, J. Phys. C: Solid State Phys. [**7**]{}, 241 (1974) and refs. therein. A. I. Larkin and S. A. Pikin, Zh. Eksp. Teor. Fiz. [**56**]{}, 1664 (1969) \[Sov. Phys.-JETP [**29**]{}, 891 (1969)\]. Landolt-Börnstein, New Series, Vol. III/18, [*Elastic, Piezoelectric and Related Constants of Crystals*]{}, edited by K.-H. Hellwege and A. H. Hellwege (Springer, 1984). T. Breczewski, P. Piskunowicz and G. Jaroma-Weiland, Acta Phys. Polonica [**A66**]{} (1984). A. Righi and R. L. Moreira, private communication. 
| Compound | $c/a$ | $T_{l},K$ | $T_{i},K$ | $(\frac{\Delta S}{R\ln 2})_{T_l}$ | $(\frac{\Delta S}{R\ln 2})_{T_i}$ | $(\frac{\Delta c}{c})_{T_l}$ | $(\frac{\Delta c}{c})_{T_i}$ |
|-----------------|--------|-----------|-----------|------|------|-----------|----------|
| $K_{2}WO_{4}$   | $1.24$ | $643$     | $707$     |      |      | $<0.2\%$  | $0.2\%$  |
| $K_{2}MoO_{4}$  | $1.24$ | $593$     | $733$     |      |      | $<0.2\%$  | $0.7\%$  |
| $Rb_{2}WO_{4}$  | $1.25$ | $660$     | $746$     | $0.2$ | $0.3$ |          |          |
| $Tl_{2}SeO_{4}$ | $1.26$ |           |           |      |      |           |          |
| $K_{2}SeO_{4}$  | $1.27$ |           |           |      |      |           |          |
| $Rb_{2}MoO_{4}$ | $1.27$ |           |           |      |      |           |          |
| $Rb_{2}SeO_{4}$ | $1.29$ |           |           |      |      |           |          |
| $Cs_{2}SeO_{4}$ | $1.29$ |           |           |      |      |           |          |
| $K_{2}SO_{4}$   | $1.29$ |           |           |      |      |           |          |
| $Tl_{2}SO_{4}$  | $1.30$ |           |           |      |      |           |          |

: The critical temperatures $T_{l}$, $T_{i}$ for the $Pmcn$ - $Inc$ - $P6_{3}/mmc$ transitions and $T_{c}$ for the $Pmcn$ - $P6_{3}/mmc$ transitions, the molar entropy jumps and the lattice parameter jumps as functions of the lattice parameter $c/a$. The entropy jumps in $Rb_{2}WO_{4}$ were measured in the present study.
--- abstract: | This article was written for the Logic in Computer Science column in the February 2015 issue of the Bulletin of the European Association for Theoretical Computer Science. The intended audience is the general computer science audience. The uncertainty principle asserts a limit to the precision with which position $x$ and momentum $p$ of a particle can be known simultaneously. You may know the probability distributions of $x$ and $p$ individually, but the joint distribution makes no physical sense. Yet Wigner exhibited such a joint distribution $f(x,p)$. There was, however, a little trouble with it: some of its values were negative. Nevertheless Wigner’s discovery attracted attention and found applications. There are other joint distributions, all with negative values, which produce the correct marginal distributions of $x$ and $p$. But only Wigner’s distribution produces the correct marginal distributions for all linear combinations of position and momentum. We offer a simple proof of the uniqueness and discuss related issues. address: - | Mathematics Department\ University of Michigan\ Ann Arbor, MI 48109–1043, U.S.A. - | Microsoft Research\ One Microsoft Way\ Redmond, WA 98052, U.S.A. author: - Andreas Blass - Yuri Gurevich title: Negative Probability --- Introduction {#sec:intro} ============ “Trying to think of negative probabilities,” wrote Richard Feynman, “gave me a cultural shock at first” [@Feynman1987]. Yet quantum physicists tolerate negative probabilities. Feynman himself studied, in the cited paper, a probabilistic trial with four outcomes with probabilities $0.6$, $-0.1$, $0.3$ and $0.2$. We were puzzled. The standard interpretation of probabilities defines the probability of an event as the limit of its relative frequency in a large number of trials. “The mathematical theory of probability gains practical value and an intuitive meaning in connection with real or conceptual experiments” [@Feller §I.1]. 
Negative probabilities are obviously inconsistent with the frequentist interpretation. Of course, that interpretation comes with a tacit assumption that every outcome is observable. In quantum physics some outcomes may be unobservable. This weakens the frequentist argument against negative probabilities but does not shed much light on the meaning of negative probabilities. In the discrete case, a probabilistic trial can be given just by a set of outcomes and a probability function that assigns nonnegative reals to outcomes. One can generalize the notion of probabilistic trial by allowing negative values of the probability function. Feynman draws an analogy between this generalization and the generalization from positive numbers, say of apples, to integers. But a negative number of apples may be naturally interpreted as the number of apples owed to other parties. We don’t know any remotely natural interpretation of negative probabilities. We attempted to have a closer look at what goes on. Heisenberg’s uncertainty principle asserts a limit to the precision with which position $x$ and momentum $p$ of a particle can be known simultaneously: $\displaystyle \sigma_x \sigma_p \ge \hbar/2$ where $\sigma_x, \sigma_p$ are the standard deviations and $\hbar$ is the (reduced) Planck constant. You may know the probability distributions (or density functions, or probability functions; we will use the three terms as synonyms) of $x$ and $p$ individually, but the joint probability distribution with these marginal distributions of $x$ and $p$ makes no physical sense[^1]. Does it make mathematical sense? More exactly, does there exist a joint distribution with the given marginal distributions of $x$ and $p$? In 1932, Eugene Wigner exhibited such a joint distribution [@Wigner1932]. There was, however, a little trouble with Wigner’s function. Some of its values were negative. 
The function, Wigner admits, “cannot be really interpreted as the simultaneous probability for coordinates and momenta.” But this, he continues, “must not hinder the use of it in calculations as an auxiliary function which obeys many relations we would expect from such a probability” [@Wigner1932]. Probabilistic functions that can take negative values became known as *quasi-probability distributions*. Richard Feynman described a specific quasi-probability distribution for discrete quantities, two components of a particle’s spin [@Feynman1987]. The uncertainty principle implies that these two quantities can’t have definite values simultaneously. So it seems plausible that an attempt to assign joint probabilities would again, as in Wigner’s case, lead to something strange — like negative probabilities. “Trying to think of negative probabilities gave me a cultural shock at first …It is usual to suppose that, since the probabilities of events must be positive, a theory which gives negative numbers for such quantities must be absurd. I should show here how negative probabilities might be interpreted” [@Feynman1987]. His attitude toward negative probabilities echoes that of Wigner: a quasi-probability distribution may be used to simplify intermediate computations. The meaning of negative probabilities remains unclear. Those intermediate computations may not have any physical sense. But if the final results make physical sense and can be tested then the use of a quasi-probability is justified. It bothered us that both Wigner and Feynman apparently pull their quasi-probability distributions from thin air. In particular, Wigner writes that his function “was found by L. Szilárd and the present author some years ago for another purpose” [@Wigner1932], but he doesn’t give a reference, and he doesn’t give even a hint about what that other purpose was. He also says that there are lots of other functions that would serve as well, but none without negative values. 
He adds that his function “seemed the simplest.” We investigated the matter and made some progress. We found a characterization of Wigner’s function that might be considered objective. \[pro:unique\] Wigner’s function is the unique quasi-distribution on the phase space that yields the correct marginal distributions not only for position and momentum but for all their linear combinations. > **Quisani[^2]:** Wait, I don’t understand the proposition. Wigner’s function is not a true distribution, so the notion of marginal distribution of Wigner’s function isn’t defined. Also, what does it mean for a marginal to be correct? > > **Authors[^3]:** The standard definition of marginals works also for quasi-probability distributions. A marginal distribution is correct if it coincides with the prediction of quantum mechanics. We’ll return to these issues in §\[sec:j2m\] and §\[sec:wigner\] respectively. > > [**Q:** ]{}To form a linear combination $ax+bp$ of position $x$ and momentum $p$ you add $x$, which has units of length like centimeters, and $p$, which has momentum units like gram centimeters per second. That makes no sense. > > [**A:** ]{}We are adding $ax$ and $bp$. Take $a$ to be a momentum and $b$ to be a length; then both $ax$ and $bp$ have units of action (like gram centimeters squared per second), so they can be added. If you want to make $ax+bp$ a pure number, divide by $\hbar$. > > [**Q:** ]{}Finally, is it obvious that Wigner’s quasi-distribution is not determined already by the correct marginal distributions for just the position and momentum, without taking into account other linear combinations? > > [**A:** ]{}This is obvious. There are modifications of Wigner’s quasi-distribution that still give the correct marginal distributions for position and momentum. For an easy example, choose a rectangle $R$ centered at the origin in the $(x,p)$ phase plane and modify Wigner’s $f(x,p)$ by adding a constant $c$ (resp. 
subtracting $c$) when $(x,p)\in R$ and the signs of $x$ and $p$ are the same (resp. different). For a smoother modification, you could add $cxp\exp(-ax^2-bp^2)$ where $a,b,c$ are positive constants (of appropriate dimensions). The idea to consider linear combinations of position and momentum came from Wigner’s paper [@Wigner1932], where he mentions that projections to such linear combinations preserve the expectations. In fact, the projections give rise to the correct marginals. This led us to the proposition. In the case of Feynman’s quasi-distribution mentioned above, one can’t use linear combinations of those two spin components to characterize the distribution. Nor is there a characterization using the spin component in yet another direction. Furthermore, if we only require the correct marginal distributions for the $x$ and $z$ spins, then there are genuine, nonnegative joint probability distributions with those marginals. Our investigation was supplemented by digging into the literature and talking to our colleagues, especially Nathan Wiebe. That brought us to “Quantum Mechanics in Phase Spaces: An Overview with Selected Papers” [@Zachos+]. It turned out that there was another, much earlier, approach to characterizing the Wigner quasi-probability distribution. The main ingredient for that earlier approach is a proposal by Hermann Weyl [@Weyl §IV.14] for associating Hermitian operators on $L^2$ to well-behaved functions $g(x,p)$ of position and momentum. José Enrique Moyal used Weyl’s correspondence to characterize Wigner’s quasi-distribution in terms of expectation values only, but for a wider class of functions, rather than in terms of the marginal distributions of just the linear functions of position and momentum [@Moyal]. There is a trade-off here. The class of functions is wider but the feature to match is narrower. 
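Returning to the smooth modification mentioned in the dialogue above: the added term $cxp\exp(-ax^2-bp^2)$ is odd in each variable separately, so it leaves the $x$ and $p$ marginals untouched, yet it changes the marginal of $x+p$. A quick numerical illustration (with the hypothetical choice $a=b=c=1$; grid parameters are arbitrary):

```python
import math

a, b, c = 1.0, 1.0, 1.0   # hypothetical positive constants

def extra(x, p):
    """The added term c*x*p*exp(-a*x^2 - b*p^2)."""
    return c * x * p * math.exp(-a * x ** 2 - b * p ** 2)

def integrate(g, lo=-8.0, hi=8.0, n=4000):
    h = (hi - lo) / n
    return h * sum(g(lo + (k + 0.5) * h) for k in range(n))

mx = integrate(lambda p: extra(1.3, p))       # x-marginal correction: odd in p -> 0
mp = integrate(lambda x: extra(x, -0.7))      # p-marginal correction: odd in x -> 0
gz = integrate(lambda x: extra(x, 2.0 - x))   # marginal of z = x + p at z = 2: nonzero
print(mx, mp, gz)
```

So the modified function still has the correct marginals for $x$ and $p$ individually, but fails for the linear combination $x+p$, exactly as the proposition requires.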
George Baker proved that any quasi-distribution on the position-momentum phase space, satisfying his “quasi-probability distributional formulation of quantum mechanics,” is the Wigner function [@Baker]. The problem of an objective characterization of Wigner’s function also attracted the attention of Wigner himself [@Wigner1971; @OW]. The volume [@Zachos+] does not contain “our” characterization of Wigner’s function, but it is known and due to Jacqueline and Pierre Bertrand [@Bertrand]. They found an astute name for the approach: tomographic. The tomographic approach gives an additional confirmation of the fact that the behavior of our quantum system is not classical. The approach can be used to establish that the behavior of some other quantum systems is not classical. It has indeed been used that way in quantum optics; see [@Smithey; @Ourjoumtsev] for example. Still, in our judgment, our proof of the uniqueness of Wigner’s function is simpler and more direct than any other in the literature, and so we present it in §\[sec:wigner\]. In §\[sec:weyl\] we establish Moyal’s characterization of Wigner’s function. §\[sec:feynman\] contains a cursory discussion related to Feynman’s four-outcome quasi-distribution. All our observations on that issue happened to be known as well, but we have yet to research the history of negative probabilities in the discrete case. We intend to address the discrete case elsewhere. > [**Q:** ]{}So what is the meaning of negative probability? > > [**A:** ]{}We don’t know. > > [**Q:** ]{}The use of negative probabilities to validate the quantum character of a quantum system reminds me of proofs by contradiction. Assume that the behavior is classical, produce a unique joint distribution, prove the existence of negative values and establish a contradiction. If this is the only use of negative probabilities then there is no need to interpret them semantically. 
> > [**A:** ]{}There are some attempts to use negative probabilities as a measure of “quantumness” [@Veitch1; @Veitch2]. We think that the jury is still out. > > [**Q:** ]{}I have yet another question. Recently, in this very column, Samson Abramsky wrote about contextuality, which is another manifestation of non-classical behavior of quantum systems [@Abramsky]. I wonder what is the relation, if any, between negative probabilities and contextuality. > > [**A:** ]{}The discussion of that relation is beyond the scope of this paper. But please have a look at Robert Spekkens’s article [@Spekkens] with a rather telling title “Negativity and contextuality are equivalent notions of nonclassicality.” Acknowledgment {#acknowledgment .unnumbered} -------------- Many thanks to Nathan Wiebe who was our guide to the literature and the state of the art on quasi-probabilities. Preliminaries {#sec:pre} ============= We tried to make the paper as accessible as possible; hence this section. We still assume some familiarity with mathematical analysis. By default in this paper integrals are from $-\infty$ to $+\infty$. The baby quantum theory that we use is covered in §3 of the book [@Hall], titled “A first approach to quantum mechanics.” Fourier transform {#sub:fourier} ----------------- The forward Fourier transform sends a function $f(x)$ to $$\hat f(\xi) = \frac1{\sqrt{2\pi}}\int f(x)\,e^{-i\xi x}\,dx,$$ and the inverse Fourier transform sends a function $g(\xi)$ to $$\check g(x) = \frac1{\sqrt{2\pi}} \int g(\xi)e^{i\xi x}\,d\xi.$$ Mathematically $x$ and $\xi$ are real variables. In applications, the dimension of $\xi$ is the inverse of that of $x$, so that $\xi x$ is a pure number. The forward and inverse Fourier transforms are defined also for functions of several variables. 
In particular, $$\begin{aligned} \hat f(\xi,\eta)&= \frac1{2\pi} \iint f(x,y)\, e^{-i(\xi x + \eta y)}\,dx\,dy,\\ \check g(x,y)&= \frac1{2\pi} \iint g(\xi,\eta) e^{i(\xi x + \eta y)}\, d\xi\,d\eta.\end{aligned}$$ > [**Q:** ]{}What about the convergence of the integrals? Are you going to ignore such details? > > [**A:** ]{}Yes, we are going to ignore such details. But Fourier transforms are used, with full mathematical rigor, even in some situations where the integrals don’t converge. > > [**Q:** ]{}I do not understand this. > > [**A:** ]{}The idea is to first define the Fourier transform as an operator on nice functions in $L^2({\mathbb R})$, for which the integrals clearly converge. Informally a function $f(x)$ is nice if it and its derivatives $f'(x), f''(x), f'''(x), \dots$ approach zero very rapidly as $x\to\infty$. The Fourier transform is an isometry on these nice functions, and the nice functions are dense in $L^2({\mathbb R})$, so the isometry extends to all of $L^2$. Details can be found in books on real analysis, like [@Kingman+] and [@Rudin]; alternatively, see [@Hall Appendix A.3.2]. Dirac’s delta function {#sub:delta} ---------------------- Dirac’s $\delta$-function is a generalized function such that for any nice function $f$, $$\int f(x) \delta(x) dx = f(0).$$ It follows that $$\int f(x) \delta(x-a) dx = \int f(x+a) \delta(x) dx = f(a).$$ Some divergent integrals, e.g. $\int e^{itx}dt$, can be seen as generalized functions in that sense. In fact, as generalized functions, $$\int e^{itx}dt = 2\pi\delta(x).$$ Indeed, $$\begin{aligned} \int dx\,f(x)\,\int e^{itx}dt &= \sqrt{2\pi} \int dt\, \frac1{\sqrt{2\pi}} \int f(x) e^{itx} dx\\ &= \sqrt{2\pi} \int \check f(t)\,dt\\ &= 2\pi \cdot \frac1{\sqrt{2\pi}} \int \check f(t)e^{-it0}dt = 2\pi f(0).\end{aligned}$$ > [**Q:** ]{}Are these nice functions the same as the nice functions mentioned earlier? > > [**A:** ]{}Yes, they are. 
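The symmetric convention of §\[sub:fourier\] can be spot-checked numerically: under it, the unit Gaussian $e^{-x^2/2}$ is its own Fourier transform. A minimal sketch (midpoint quadrature; the grid parameters are illustrative):

```python
import cmath
import math

def fourier_hat(f, xi, lo=-10.0, hi=10.0, n=4000):
    """(1/sqrt(2*pi)) * integral of f(x) exp(-i*xi*x) dx, midpoint rule."""
    h = (hi - lo) / n
    acc = 0j
    for k in range(n):
        x = lo + (k + 0.5) * h
        acc += f(x) * cmath.exp(-1j * xi * x)
    return acc * h / math.sqrt(2 * math.pi)

gauss = lambda x: math.exp(-x * x / 2)
for xi in (0.0, 0.7, 1.5):
    assert abs(fourier_hat(gauss, xi) - gauss(xi)) < 1e-8
```

The quadrature converges rapidly here because the integrand is smooth and decays fast, which is the same "nice function" property used in the text.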
Exponential operators {#sub:exponential} --------------------- The exponential $e^O$ of an operator $O$ over a topological vector space is the operator $$e^O = \sum_{k=0}^\infty \frac{O^k}{k!} = I + O + \frac12 O^2 + \frac16 O^3 + \dots$$ If $(X\psi)(x)= x\cdot\psi(x)$ then $(e^X\psi)(x)= e^x \psi(x)$, because $$(e^X\psi)(x) = \sum_{k=0}^\infty \frac1{k!} (X^k\psi)(x) = \psi(x) \cdot \sum_{k=0}^\infty \frac1{k!} x^k = \psi(x) \cdot e^x.$$ If $D$ is the derivative operator $\frac d{dx}$, then $e^{aD}\psi(x) = \psi(x+a)$. Indeed, $$\begin{aligned} e^{aD} \psi(x) &=\sum_{k=0}^\infty \frac{(aD)^k \psi(x)}{k!} = \sum_{k=0}^\infty \frac{D^k\psi(x)}{k!} a^k \\ &= \psi(x) + \frac{\psi'(x)}{1!}a + \frac{\psi''(x)}{2!}a^2 + \frac{\psi'''(x)}{3!}a^3 + \dots\end{aligned}$$ which is the Taylor series of $\psi(x+a)$ around point $x$. (Think of $a$ as $\Delta x$.) > [**Q:** ]{}What functions $\psi$ are you talking about? The Taylor series expansion of $\psi$ suggests that $\psi$ is analytic, that is real-analytic. > > [**A:** ]{}Our intention is that $\psi$ ranges over $L^2$. By the proof above, $e^{aD}$ is a shift $f(x)\mapsto f(x+a)$ on analytic functions. In particular, $e^{aD}$ is a shift on Gaussian functions $$\exp\left(-\frac{(x-b)^2}{2c^2}\right).$$ But Gaussian functions span a dense subspace of $L^2({\mathbb R})$, and there is a unique continuous extension of $e^{aD}$ to $L^2$, namely the shift $f(x)\mapsto f(x+a)$. > > [**Q:** ]{}The exponential $e^O$ has got to be a partial operator in general. > > [**A:** ]{}Yes, $e^O(x)$ is defined whenever the operators $O^k$ are defined at $x$, and the series $\sum_{k=0}^\infty \frac{O^k(x)}{k!}$ converges. Joint-to-Marginal Lemma {#sec:j2m} ======================= Let $f(x,p)$ be an ordinary probability distribution or a quasi-distribution on ${\mathbb R}^2$.
For any $z = ax+bp$ where $a,b$ are not both zero, the marginal distribution $g(z)$ of $z$ can be defined thus: $$g(z) = \begin{cases} \displaystyle \frac1b \int f(x, \frac1b (z-ax))\,dx &\mbox{if $b\ne0$}\\ \displaystyle \frac1a \int f(\frac1a (z-bp), p)\,dp &\mbox{otherwise} \end{cases}$$ Here’s a justification in the case $b\ne0$. We have $$\begin{aligned} p &= \frac1b(z-ax),\\ dp &= \frac1b(dz-a\,dx),\\ f(x,p)\,dx\,dp &= f\big(x,\frac1b(z-ax)\big)\frac1b\,dx\,dz.\end{aligned}$$ We are relying here on the formalism of differential 2-forms [@Flanders] for area elements, so that $dx\,dz$ really means $dx\wedge dz$ and we have used that $dx\wedge dx=0$. The use of differential forms makes computations like this easier, and it fits well with physics, e.g., with Maxwell’s equations and with general relativity. One could, however, avoid differential forms here and get the same result by considering the Jacobian determinant of the change of variables. For any real $u\le v$, the probability that $u\le z\le v$ should be $$\begin{aligned} \int_u^v g(z) dz &= \iint_{u\le ax+bp\le v} f(x,p)\,dx\,dp \\ &= \iint_{u\le z\le v} \frac1b f\big(x,\frac1b(z-ax)\big)\,dx\,dz \\ &= \int_u^v dz \int_{-\infty}^\infty \frac1b f\big(x,\frac1b(z-ax)\big)\,dx.\end{aligned}$$ Since the first and last expressions coincide for all $u\le v$, we have $$g(z) = \frac1b \int f\big(x,\frac1b(z-ax)\big)\,dx.$$ For any $a,b$ not both zero, the following statements are equivalent. 1. $g(z)$ is the marginal distribution of $z = ax + bp$. 2.
$\displaystyle \hat g(\zeta) = \sqrt{2\pi}\cdot\hat f(a\zeta,b\zeta).$ To prove (1)$\to$(2), suppose (1) and compare the forward Fourier transforms of $g$ and $f$: $$\begin{aligned} \hat g(\zeta)&= \frac1{\sqrt{2\pi}}\int g(z)e^{-i\zeta z}\,dz\\ &=\frac1{\sqrt{2\pi}}\iint f\big(x,\frac1b(z-ax)\big) e^{-i\zeta z}\frac1b\,dx\,dz\\ &=\frac1{\sqrt{2\pi}}\iint f(x,p)e^{-i\zeta(ax+bp)}\,dx\,dp.\\[1ex] \hat f(\xi,\eta)&= \frac1{2\pi}\,\iint f(x,p)e^{-i(\xi x+\eta p)}\,dx\,dp.\end{aligned}$$ Comparing the two, we have $\hat g(\zeta) = \sqrt{2\pi} \hat f(a\zeta,b\zeta)$. To prove (2)$\to$(1), suppose (2) and use the implication (1)$\to$(2). If $h$ is the marginal distribution of $z = ax + bp$ then $$\hat h(\zeta) = \sqrt{2\pi}\cdot\hat f(a\zeta,b\zeta) = \hat g(\zeta),$$ and therefore $g = h$. \[cor:j2m\] For any real $\alpha,\beta$ not both zero, $\hat f(\alpha,\beta) = \frac1{\sqrt{2\pi}}\, \hat g(\zeta)$ where $g(z)$ is the marginal distribution for the linear combination $z=ax+bp$ such that $\alpha=a\zeta$, $\beta=b\zeta$ for some $\zeta$. Wigner uniqueness {#sec:wigner} ================= The purpose of this section is to prove the Wigner Uniqueness proposition. For simplicity we work with one particle moving in one dimension, but everything we do in this section generalizes in a routine way to more particles in more dimensions. In classical mechanics, the position $x$ and momentum $p$ of the particle determine its current state. The set of all possible states is the phase space of the particle. By Corollary \[cor:j2m\], an ordinary distribution $f(x,p)$ on the phase space is uniquely determined by its marginal distributions for all linear combinations $ax+bp$ where $a,b$ are not both zero. In the quantum case, a state of the particle is given by a normalized (to norm 1) vector ${\ensuremath{|\psi\rangle}}$ in $L^2({\mathbb R})$.
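Before continuing with the quantum side, the classical statements above admit a quick numerical spot check. The Python sketch below (ours, purely illustrative) assumes the standard bivariate Gaussian as $f$, whose marginal along $z=ax+bp$ is a Gaussian of variance $a^2+b^2$, and verifies both the direct marginal integral and the relation $\hat g(\zeta)=\sqrt{2\pi}\,\hat f(a\zeta,b\zeta)$:

```python
import numpy as np

a, b = 1.0, 2.0
var = a ** 2 + b ** 2
grid = np.linspace(-30, 30, 200001)
dgrid = grid[1] - grid[0]

f = lambda x, p: np.exp(-(x ** 2 + p ** 2) / 2) / (2 * np.pi)

def marginal(z):
    """g(z) = (1/b) * integral f(x, (z - ax)/b) dx, as a Riemann sum."""
    return np.sum(f(grid, (z - a * grid) / b)) * dgrid / b

for z in (0.0, 0.7, -1.3):
    exact = np.exp(-z ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    assert abs(marginal(z) - exact) < 1e-8

def ft1(values, xi):
    """One-dimensional forward Fourier transform, as a Riemann sum."""
    return np.sum(values * np.exp(-1j * xi * grid)) * dgrid / np.sqrt(2 * np.pi)

# Since f factors into two copies of the 1-D standard normal density n,
# its 2-D transform is f_hat(alpha, beta) = n_hat(alpha) * n_hat(beta).
n = np.exp(-grid ** 2 / 2) / np.sqrt(2 * np.pi)
g = np.exp(-grid ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
for zeta in (0.3, 1.1):
    lhs = ft1(g, zeta)
    rhs = np.sqrt(2 * np.pi) * ft1(n, a * zeta) * ft1(n, b * zeta)
    assert abs(lhs - rhs) < 1e-8
print("marginal formula and the Fourier-slice relation check out")
```

Using a separable $f$ keeps the whole check down to one-dimensional quadratures.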
The position and momentum are given by Hermitian operators $X$ and $P$ where $$(X\psi)(x)= x\cdot\psi(x)\quad\text{and}\quad (P\psi)(x)= -i\hbar\frac{d\psi}{dx}(x).$$ For any $a,b$ not both zero, the linear combination $z = ax + bp$ is given by the Hermitian operator $Z=aX+bP$. In a state ${\ensuremath{|\psi\rangle}}$, there is a probability distribution $g(z)$ for the measured values of $z$. For a function $h(z)$ of $z$, the expectation of $h(z)$ is ${\ensuremath{\langle\psi | h(Z) | \psi\rangle}}$. The following technical lemma plays a key role in our proof of the uniqueness of Wigner’s quasi-distribution. \[lem:key\] $${\ensuremath{\langle\psi|}} e^{-i (\alpha X + \beta P)}{\ensuremath{|\psi\rangle}} = e^{i\alpha\beta\hbar/2} \int\psi^*(y)e^{-i\alpha y}\psi(y-\beta\hbar)\,dy.$$ We want to split the exponential into a factor with $X$ times a factor with $P$. This is not as easy as it might seem, because $X$ and $P$ don’t commute. We have, however, two pieces of good luck. First, there is Zassenhaus’s formula, which expresses the exponential of a sum of non-commuting quantities as a product of (infinitely) many exponentials, beginning with the two that one would expect from the commutative case, and continuing with exponentials of nested commutators: $$e^{A+B}=e^Ae^Be^{-\frac12[A,B]}\cdots,$$ where the “$\cdots$” refers to factors involving double and higher commutators. > [**Q:** ]{}You gave no reference to Zassenhaus’s paper. > > [**A:** ]{}Apparently, Zassenhaus never published this result, but there’s a paper [@Casas+] that shows how to compute the next terms. It also has a pointer to early uses of the formula. The second piece of good luck is that $[X,P]=i\hbar I$, where $I$ is the identity operator. (In the future, we’ll usually omit writing $I$ explicitly, so we’ll regard this commutator as the scalar $i\hbar$.)
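The truncated Zassenhaus identity $e^{A+B}=e^Ae^Be^{-\frac12[A,B]}$, valid whenever $[A,B]$ commutes with both $A$ and $B$, can be verified exactly on $3\times3$ Heisenberg matrices, where every exponential series terminates. A Python sketch (ours; `expm_nilpotent` is our helper, not a library function):

```python
import numpy as np

def expm_nilpotent(M):
    """Matrix exponential for a nilpotent 3x3 matrix with M^3 = 0: e^M = I + M + M^2/2."""
    return np.eye(3) + M + M @ M / 2

# Heisenberg-algebra model: A and B are multiples of elementary matrices
# whose commutator C is central, so the Zassenhaus tail vanishes.
A = 1.3 * np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
B = -0.8 * np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
C = A @ B - B @ A

assert np.allclose(C @ A, A @ C) and np.allclose(C @ B, B @ C)  # C is central
lhs = expm_nilpotent(A + B)
rhs = expm_nilpotent(A) @ expm_nilpotent(B) @ expm_nilpotent(-C / 2)
assert np.allclose(lhs, rhs)
print("e^{A+B} = e^A e^B e^{-[A,B]/2} for a central commutator")
```

Of course $X$ and $P$ themselves act on an infinite-dimensional space; the finite matrices above only model the algebraic structure of a central commutator.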
Since that commutes with everything, all the higher commutators in Zassenhaus’s formula vanish, so we can omit the “$\cdots$” from the formula. We have $${\ensuremath{\langle\psi|}} e^{-i\alpha X-i\beta P}{\ensuremath{|\psi\rangle}}\\ = {\ensuremath{\langle\psi|}} e^{-i\alpha X}e^{-i\beta P}e^{\alpha\beta[X,P]/2}{\ensuremath{|\psi\rangle}}.$$ The last of the three exponential factors here arose from Zassenhaus’s formula as $$-\frac12[-i\alpha X,-i\beta P] = \frac12\alpha\beta[X,P] = i\alpha\beta\hbar/2.$$ That factor, being a scalar, can be pulled out of the bra-ket. Taking into account §\[sub:exponential\], $${\ensuremath{\langle\psi|}} e^{-i (\alpha X + \beta P)}{\ensuremath{|\psi\rangle}}= e^{i\alpha\beta\hbar/2} \int \psi^*(y) e^{-i\alpha y} \psi(y-\beta\hbar)\,dy.$$ Now we are ready to prove the Wigner Uniqueness proposition. Suppose that a quasi-distribution $f(x,p)$ yields correct marginal distributions for all linear combinations of position and momentum. For any real $\alpha,\beta$ not both zero, let $a,b,g,\zeta$ be as in Corollary \[cor:j2m\]. Then $$\label{1} \begin{aligned} \hat f(\alpha,\beta) = \frac1{\sqrt{2\pi}} \hat g(\zeta) &=\frac1{2\pi}\int g(z)e^{-i\zeta z}\,dz\\ &=\frac1{2\pi} {\ensuremath{\langlee^{-i\zeta Z}\rangle}} =\frac1{2\pi}{\ensuremath{\langle\psi|}} e^{-i\zeta Z}{\ensuremath{|\psi\rangle}}\\ &=\frac1{2\pi}{\ensuremath{\langle\psi|}} e^{-i\zeta (aX+bP)}{\ensuremath{|\psi\rangle}}. \end{aligned}$$ By Lemma \[lem:key\], $$\label{2} \hat f(\alpha,\beta) = \frac{e^{i\alpha\beta\hbar/2}} {2\pi} \int\psi^*(y)e^{-i\alpha y}\psi(y-\beta\hbar)\,dy.$$ To get $f(x,p)$, apply the (two-dimensional) inverse Fourier transform. 
$$f(x,p)=\frac1{(2\pi)^2}\iiint \psi^*(y)e^{-i\alpha y}e^{i\alpha\beta\hbar/2}\psi(y-\beta\hbar) e^{i\alpha x}e^{i\beta p}\,dy\,d\alpha\,d\beta.$$ Collecting the three exponentials that have $\alpha$ in the exponent, and noting that $\alpha$ appears nowhere else in the integrand, perform the integration over $\alpha$ and (recall §\[sub:delta\]) get a Dirac delta function: $$\int e^{-i\alpha(y-\frac{\beta\hbar}2-x)}\, d\alpha=2\pi\delta(y-x-\frac{\beta\hbar}2).$$ That makes the integration over $y$ trivial, and what remains is $$\label{wigner} f(x,p)=\frac1{2\pi}\int\psi^*(x+\frac{\beta\hbar}2)\psi(x-\frac{\beta\hbar}2) e^{i\beta p}\,d\beta,$$ which is Wigner’s quasi-distribution. To check that Wigner’s quasi-distribution yields the correct marginal distributions, note that the derivation above is reversible. This completes the proof of the Wigner Uniqueness proposition. Weyl’s correspondence {#sec:weyl} ===================== There is another approach to characterizing the Wigner quasi-probability distribution, using the expectation values for a wide class of functions rather than the marginal distributions for just the linear functions of position and momentum. The main ingredient for this approach is a proposal by Hermann Weyl [@Weyl §IV.14] for associating a Hermitian operator on $L^2({\mathbb R})$ to any (well-behaved) function $g(x,p)$ of position and momentum. Weyl’s proposal is to first form the Fourier transform $\hat g(\alpha,\beta)$ of $g(x,p)$, and then apply the inverse Fourier transform with the Hermitian operators $X$ and $P$ in place of the classical variables $x$ and $p$.
Thus, the Weyl correspondence associates to $g(x,p)$ the operator $$g(X,P) = \frac1{2\pi} \iint \hat g(\alpha,\beta) e^{i(\alpha X+\beta P)}\,d\alpha\,d\beta.$$ If one grants that this is a reasonable way of converting phase-space functions $g(x,p)$ to operators $g(X,P)$, then a desirable property of a phase-space quasi-probability distribution $f(x,p)$ would be that the expectation of $g(X,P)$ in a quantum state ${\ensuremath{|\psi\rangle}}$ is the same as the expectation of $g(x,p)$ under $f(x,p)$. We shall show that the Wigner distribution is uniquely characterized by enjoying this desirable property for all well-behaved $g$. Indeed, the expectation of $g(X,P)$ in state ${\ensuremath{|\psi\rangle}}$ is $${\ensuremath{\langle\psi|}} g(X,P){\ensuremath{|\psi\rangle}} = \frac1{2\pi} \iint\hat g(\alpha,\beta) {\ensuremath{\langle\psi|}} e^{i(\alpha X+\beta P)}{\ensuremath{|\psi\rangle}}\,d\alpha\,d\beta,$$ and the expectation of $g(x,p)$ under the distribution $f(x,p)$ is $$\iint g(x,p)f(x,p)\,dx\,dp = \iint\hat g(\alpha,\beta)\hat f(\alpha,\beta) \,d\alpha\,d\beta.$$ This last equation is a consequence of the fact, mentioned in §\[sub:fourier\], that the Fourier transform is a unitary operator and therefore preserves the inner product structure of $L^2({\mathbb R}^2)$. Since these two expectations agree for all (well-behaved) $g$, $$\hat f(\alpha,\beta) = \frac1{2\pi} {\ensuremath{\langle\psi|}} e^{i(\alpha X+\beta P)}{\ensuremath{|\psi\rangle}}.$$ But this is the equation that was used to derive Wigner’s formula. > [**Q:** ]{}I wonder how Weyl arrived at his proposal. > > [**A:** ]{}Weyl presents his proposal in [@Weyl] without any motivation, so we don’t know how he came up with it, but we can speculate. The title of [@Weyl] indicates that Weyl was working in a group-theoretic context. As a result, the Fourier transform, expressing functions on ${\mathbb R}$ as combinations of the characters $e^{i\alpha x}$ of the group $({\mathbb R},+)$, would be in the forefront of his considerations.
Now consider his goal — to somehow convert a classical function $g(x,p)$ into an operator. Roughly speaking, he would want to substitute the operators $X$ and $P$ for the classical variables $x$ and $p$. An obvious difficulty is that the same function $g(x,p)$ might have two different expressions, for example $xp=px$, which are no longer equivalent when operators are substituted, $XP\neq PX$. So it is reasonable to try to choose, from the many expressions for a function $g(x,p)$, one particular, reasonably canonical expression, into which one can substitute $X$ and $P$. The Fourier expansion, $\frac1{2\pi}\iint \hat g(\alpha,\beta)\exp(i(\alpha x+\beta p))\,d\alpha\,d\beta$, has those properties. It depends only on the function $g$, not on how one chooses to express it, and there is no problem substituting $X$ and $P$ for $x$ and $p$. Feynman and spins {#sec:feynman} ================= Richard Feynman studied “an analogue of the Wigner function for a spin $\frac12$ system or other two state system” [@Feynman1987]. He chose the $z$ and $x$ components of the spin to serve as the analogs of the position and momentum in Wigner’s formula. [**Q:** ]{}His case should be much simpler than Wigner’s case. [**A:** ]{}Not necessarily. While the commutator $[X,P]$ is a scalar, the commutator of the $z$ and $x$ components of a spin is the $y$-component times $i\hbar$. This is but one of several complications. [**Q:** ]{}Is Feynman’s quasi-distribution determined by the correct marginals for all linear combinations of the $x$ and $z$ spins? [**A:** ]{}That question sounds reasonable until you look at it a little more closely.
To fix notation, let’s describe spin by means of the standard Pauli matrices $$X= \begin{pmatrix} 0&1\\1&0 \end{pmatrix},\qquad Y= \begin{pmatrix} 0&-i\\i&0 \end{pmatrix},\qquad Z= \begin{pmatrix} 1&0\\0&-1 \end{pmatrix}.$$ The usual matrix representation of the spins for a spin $\frac12$ particle is given by these matrices divided by 2, but it’s convenient to skip those extra factors $\frac12$; if you like, imagine that we are measuring angular momentum in units of $\hbar/2$ instead of $\hbar$. So each of our matrices has eigenvalues $\pm1$, with $+1$ meaning spin along the corresponding positive axis and $-1$ along the corresponding negative axis. For example, the two basis states of spin up and spin down along the $z$ axis are the eigenvectors for eigenvalues $1$ and $-1$ of $Z$; equivalently, they correspond to eigenvalues $1$ and $0$ for $(I+Z)/2$, where $I$ is the identity operator. This point of view is useful because it implies that, in any state ${\ensuremath{|\psi\rangle}}$, $(1+{\ensuremath{\langleZ\rangle}})/2$ is the probability that the $z$ spin is up. Here, as before, angle brackets denote expectations. Similarly, $(1-{\ensuremath{\langleZ\rangle}})/2$ is the probability that the $z$ spin is down. Of course, analogous formulas apply to the $x$ and $y$ components of the spin. Feynman, in analogy to Wigner, introduces a quasi-probability distribution $f$ for the pair of non-commuting observables $Z$ and $X$. So $f$ has four components, $f_{++},f_{+-},f_{-+}$, and $f_{--}$, as the quasi-probability of $Z$ and $X$ having the values $\pm1$ given by the subscripts of $f$. Now let’s look at a linear combination of $Z$ and $X$, for example the simplest nontrivial one, $Z+X$. 
From the point of view of quasi-probabilities, - with probability $f_{++}$, $Z$ has value 1 and $X$ has value 1, so $Z+X$ has value 2, - with probability $f_{+-}$, $Z$ has value 1 and $X$ has value $-1$, so $Z+X$ has value 0, - with probability $f_{-+}$, $Z$ has value $-1$ and $X$ has value 1, so $Z+X$ has value 0, and - with probability $f_{--}$, $Z$ has value $-1$ and $X$ has value $-1$, so $Z+X$ has value $-2$. Altogether, the possible values of $Z+X$ are 2, 0, and $-2$, with quasi-probabilities $f_{++}$, $f_{+-}+f_{-+}$, and $f_{--}$, respectively. So your proposed analog of the result for Wigner’s distribution would assume that $f$ is chosen so that these probabilities agree, in a given state, with the probabilities computed by quantum mechanics. But such agreement is impossible, because, according to quantum mechanics, the possible values of $Z+X$ are the eigenvalues of this operator, namely $\pm\sqrt2$, which are completely different from the $2,0,-2$ arising from the quasi-probabilities. It is easy to check that the possible values of any nontrivial linear combination of $Z$ and $X$ are completely different from the values arising from the quasi-probabilities. [**Q:** ]{}OK, let’s require the minimum that Feynman obviously intended, namely that $f$ should produce the correct marginal distributions of $Z$ and $X$. Does this determine $f$ uniquely? [**A:** ]{}At first sight, this looks promising. Requiring the correct marginals for the two variables, each having two possible values, gives us four equations for the four unknown components $f_{\pm\pm}$ of $f$: $$\begin{aligned} f_{++}+f_{+-}&=\frac12(1+{\ensuremath{\langleZ\rangle}})\\ f_{-+}+f_{--}&=\frac12(1-{\ensuremath{\langleZ\rangle}})\\ f_{++}+f_{-+}&=\frac12(1+{\ensuremath{\langleX\rangle}})\\ f_{+-}+f_{--}&=\frac12(1-{\ensuremath{\langleX\rangle}}).\end{aligned}$$ But there’s redundancy in the equations; only three of them are independent, so there’s one free parameter in the general solution. 
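(As an aside, the eigenvalue clash described above takes one line to confirm; this Python snippet is ours, not Feynman's.)

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# Quantum mechanics: the possible measured values of Z + X are its eigenvalues.
eigs = np.sort(np.linalg.eigvalsh(Z + X))
assert np.allclose(eigs, [-np.sqrt(2), np.sqrt(2)])

# The quasi-probability bookkeeping instead assigns Z + X the values 2, 0, -2.
quasi_values = sorted({z + x for z in (+1, -1) for x in (+1, -1)})
assert quasi_values == [-2, 0, 2]
print("eigenvalues of Z + X:", eigs, " vs. quasi-values:", quasi_values)
```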
In fact, it’s easy to write down the general solution: $$\begin{aligned} f_{++}&=\frac14(1+{\ensuremath{\langleZ\rangle}}+{\ensuremath{\langleX\rangle}}+t)\\ f_{+-}&=\frac14(1+{\ensuremath{\langleZ\rangle}}-{\ensuremath{\langleX\rangle}}-t)\\ f_{-+}&=\frac14(1-{\ensuremath{\langleZ\rangle}}+{\ensuremath{\langleX\rangle}}-t)\\ f_{--}&=\frac14(1-{\ensuremath{\langleZ\rangle}}-{\ensuremath{\langleX\rangle}}+t),\end{aligned}$$ where $t$ is arbitrary. Feynman’s formulas correspond to $t={\ensuremath{\langleY\rangle}}$, but we see no reason to prefer ${\ensuremath{\langleY\rangle}}$ over, for example, $-{\ensuremath{\langleY\rangle}}$. [**Q:** ]{}Put the freedom in choosing $t$ to some use. How about minimizing the negativity in $f$? In other words, adjust $t$ to bring $f$ as close as possible to being a genuine probability distribution. [**A:** ]{}That idea works better than we originally expected. One can get rid of the negativity altogether. For each state ${\ensuremath{|\psi\rangle}}$, there is a choice of $t$ that makes all four components of $f$ nonnegative. Indeed, write down the four inequalities $f_{\pm\pm}\geq0$ using the formulas above for these $f_{\pm\pm}$’s. Solve each one for $t$. You find two lower bounds on $t$, namely $$\begin{aligned} -1-{\ensuremath{\langleZ\rangle}}-{\ensuremath{\langleX\rangle}} &\text{ (from $f_{++}\ge0$)}\\ -1+{\ensuremath{\langleZ\rangle}}+{\ensuremath{\langleX\rangle}} &\text{ (from $f_{--}\ge0$)},\end{aligned}$$ and two upper bounds, namely $$\begin{aligned} 1+{\ensuremath{\langleZ\rangle}}-{\ensuremath{\langleX\rangle}} &\text{ (from $f_{+-}\ge0$)}\\ 1-{\ensuremath{\langleZ\rangle}}+{\ensuremath{\langleX\rangle}} &\text{ (from $f_{-+}\ge0$)}.\end{aligned}$$ An appropriate $t$ exists if and only if both of the lower bounds are less than or equal to both of the upper bounds. That gives four inequalities, which simplify to $-1\leq{\ensuremath{\langleZ\rangle}}\leq 1$ and $-1\leq{\ensuremath{\langleX\rangle}}\leq 1$.
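This bound analysis can be exercised numerically. The following Python sketch (ours, for illustration) samples random pure qubit states, confirms that each lower bound is at most each upper bound, and checks that the midpoint choice of $t$ yields a nonnegative $f$ with the correct marginals:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expect(op, psi):
    """Expectation value <psi| op |psi> for a normalized state psi."""
    return np.vdot(psi, op @ psi).real

rng = np.random.default_rng(0)
for _ in range(100):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    z, x = expect(Z, psi), expect(X, psi)

    lower = max(-1 - z - x, -1 + z + x)      # from f_{++} >= 0 and f_{--} >= 0
    upper = min(1 + z - x, 1 - z + x)        # from f_{+-} >= 0 and f_{-+} >= 0
    assert lower <= upper + 1e-12            # a valid t always exists
    t = (lower + upper) / 2

    # Components in the order f_{++}, f_{+-}, f_{-+}, f_{--}.
    f = np.array([1 + z + x + t, 1 + z - x - t, 1 - z + x - t, 1 - z - x + t]) / 4
    assert np.all(f >= -1e-12) and abs(f.sum() - 1) < 1e-12
    assert abs(f[0] + f[1] - (1 + z) / 2) < 1e-12   # correct Z marginal
    assert abs(f[0] + f[2] - (1 + x) / 2) < 1e-12   # correct X marginal
print("a nonnegative f with correct marginals exists for every sampled state")
```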
But these are always satisfied, because the eigenvalues of $Z$ and $X$ are $\pm1$. [**Q:** ]{}I am confused. The uncertainty principle asserts that you cannot measure $Z$ and $X$ at once. Accordingly, one would expect that the joint probability distribution $f_{\pm\pm}$ should not exist or, as in Wigner’s case, should have at least one negative value. [**A:** ]{}The relevant difference between Wigner’s and Feynman’s cases seems to be this. In Wigner’s case, there is naturally a rich set of marginals that the joint probability distribution is supposed to produce, namely the probability distributions of all linear combinations of the position $x$ and the momentum $p$ of the particle. In Feynman’s case, the natural set of marginals is too poor, just the probability distributions of $Z$ and $X$. [**Q:** ]{}Did Feynman find a good use for quasi-probabilities? [**A:** ]{}He introduced negative probabilities in connection with the problem of infinities in quantum field theory. “Unfortunately I never did find out how to use the freedom of allowing probabilities to be negative to solve the original problem of infinities in quantum field theory!” [@Feynman1987]. [**Q:** ]{}Still, the idea of using quasi-probability distributions to simplify intermediate computations looks attractive to me. [**A:** ]{}You are in good company. [99]{} Samson Abramsky, “Contextual semantics: From quantum mechanics to logic, databases, constraints, and complexity,” in Logic in Computer Science Column, Bulletin of EATCS 113, 26 pages (2014). George A. Baker, Jr., “Formulation of quantum mechanics based on the quasi-probability distribution induced on phase space,” Physical Review 109:6 (1958) 2198–2206. Reprinted in [@Zachos+], 235–243. Jacqueline Bertrand and Pierre Bertrand, “A tomographic approach to Wigner’s function,” Foundations of Physics 17:4 (1987) 397–405.
Fernando Casas, Ander Murua, and Mladen Nadinic, “Efficient computation of the Zassenhaus formula,” Computer Physics Communications 183 (2012) 2386–2391. William Feller, *An Introduction to Probability Theory and Its Applications, Volume 1*, John Wiley (1950), 3rd edition (1968). Richard P. Feynman, “Negative probabilities,” in *Quantum Implications: Essays in Honor of David Bohm*, ed. F. D. Peat and B. Hiley, Routledge & Kegan Paul (1987) 235–248. Reprinted in [@Zachos+], 426–439. Harley Flanders, *Differential Forms with Applications to the Physical Sciences,* Academic Press (1963). 2nd edition, Dover (1989). Brian C. Hall, *Quantum Theory for Mathematicians,* Springer-Verlag, Graduate Texts in Mathematics 267 (2013). John F. C. Kingman and S. James Taylor, *Introduction to Measure and Probability,* Cambridge Univ. Press (1966). José E. Moyal, “Quantum mechanics as a statistical theory,” Proc. Cambridge Phil. Soc. 45 (1949) 99–124. Reprinted in [@Zachos+], 167–192. G. Nogues, A. Rauschenbeutel, S. Osnaghi, P. Bertet, M. Brune, J. M. Raimond, S. Haroche, L. G. Lutterbach, and L. Davidovich, “Measurement of a negative value for the Wigner function of radiation,” Physical Review A 62, 054101 (2000). Robert F. O’Connell and Eugene P. Wigner, “Quantum-mechanical distribution functions: conditions for uniqueness,” Physics Letters 83A:4 (1981) 145–148. Alexei Ourjoumtsev, Rosa Tualle-Brouri, and Philippe Grangier, “Quantum homodyne tomography of a two-photon Fock state,” arXiv:quant-ph/0603284 (2006). Walter Rudin, *Real and Complex Analysis,* McGraw-Hill (1966). 3rd edition (1987). D. T. Smithey, M. Beck, M. G. Raymer, and A. Faridani, “Measurement of the Wigner distribution and the density matrix of a light mode using optical homodyne tomography,” Physical Review Letters 70:9 (1993) 1244–1247. Robert W. Spekkens, “Negativity and contextuality are equivalent notions of nonclassicality,” arXiv:0710.5549v2 (2008).
Victor Veitch, Christopher Ferrie, David Gross, and Joseph Emerson, “Negative quasi-probability as a resource for quantum computation,” arXiv:1201.1256 (2012). Victor Veitch, S. A. Hamed Mousavian, Daniel Gottesman, and Joseph Emerson, “The resource theory of stabilizer quantum computation,” New Journal of Physics 16 013009 (Jan. 9, 2014). Hermann Weyl, “Quantenmechanik und Gruppentheorie,” Zeitschrift für Physik 46 (1927) 1–46. Reprinted in [@Zachos+], 45–90. Eugene P. Wigner, “On the quantum correction for thermodynamic equilibrium,” Physical Review 40 (1932) 749–759. Reprinted in [@Zachos+], 100–110. Eugene P. Wigner, “Quantum mechanical distribution functions revisited,” in W. Yourgrau and A. van der Merwe (eds.), *Perspective in Quantum Theory,* MIT Press (1971) 25–36. Cosmas K. Zachos, David B. Fairlie, and Thomas L. Curtright (eds.), *Quantum Mechanics in Phase Space: An Overview with Selected Papers,* World Scientific Series in 20th Century Physics, vol. 34 (2005). [^1]: For general information about joint probability distributions and their marginal distributions see [@Feller §IX.1]. [^2]: Readers of this column may remember Quisani, an inquisitive former student of the second author. [^3]: speaking one at a time
--- abstract: 'We define the knotting probability of a knot $K$ by the probability for a random polygon (RP) or self-avoiding polygon (SAP) of $N$ segments having the knot type $K$. We show fundamental and generic properties of the knotting probability, particularly its dependence on the excluded volume. We investigate them for the SAP consisting of hard cylindrical segments of unit length and radius $r_{\rm ex}$. For various prime and composite knots we numerically show that a compact formula describes the knotting probabilities for the cylindrical SAP as a function of segment number $N$ and radius $r_{\rm ex}$. It connects the small-$N$ to the large-$N$ behavior and even to lattice knots in the case of large values of radius. As the excluded volume increases, the maximum of the knotting probability decreases for prime knots except for the trefoil knot. If it is large, the trefoil knot and its descendants are dominant among the nontrivial knots in the SAP. From the factorization property of the knotting probability we derive a relation among the estimates of a fitting parameter for all prime knots, which suggests the local knot picture. Here we remark that the cylindrical SAP gives a model of circular DNA, which is negatively charged and semiflexible, where the radius $r_{\rm ex}$ corresponds to the screening length.' author: - Erica Uehara and Tetsuo Deguchi title: 'Knotting probability of self-avoiding polygons under a topological constraint' --- Introduction ============ Statistical and dynamical properties of ring polymers under a topological constraint have attracted much interest in various branches of physics, chemistry and biology [@Kramers; @Semlyen; @Bates]. The topology of a ring polymer in solution is specified by a knot type (Fig. 1). Ring polymers with trivial topology, such as circular DNA, are observed in nature [@Vinograd]. Moreover, DNA with many knot types have been obtained in experiments [@Nature-trefoil; @DNAknots].
Topological structures related to knots or pseudo-knots have been discussed in association with protein folding [@Taylor]. Naturally occurring proteins whose ends are connected to give a circular topology have recently been discovered [@Craik]. Furthermore, a molecular knot with eight crossings at the nanoscale has been successfully synthesized quite recently [@Woltering]. Due to novel developments in experimental techniques during the last decade, ring polymers are now effectively synthesized in chemistry [@Tezuka2000; @Tezuka2001; @Grubbs; @Takano05; @Takano07; @Grayson; @Tezuka2010; @Tezuka2011; @Tezuka-book]. We define the knotting probability of a knot by the probability for a random polygon (RP) or self-avoiding polygon (SAP) consisting of $N$ segments having the given knot type. It plays a fundamental role in the topological properties of ring polymers in solution. For instance, the mean-square radius of gyration of a knotted ring polymer depends not only on the knot type but also on the characteristic length of the knotting probability, which will be defined later in the paper. The knotting probabilities have been studied for some models of RP and SAP through numerical simulations [@Vologodskii; @Michels-Wiegel; @Janse; @van; @Rensburg; @Koniaris-Muthukumar; @knotP; @JKTR; @TD95; @Deguchi-Tsurusaki1997; @Orlandini1996; @Orlandini1998; @PLA2000; @Katritch00; @Yao; @Marcone; @Stella; @Rechnitzer; @Tubiana; @UD2015], rigorous methods [@Sumners-Whittington; @Pippenger] and DNA experiments [@Rybenkov; @Shaw-Wang; @Plesa]. The knotting probabilities have been measured in experiments, first by performing the reaction process of randomly closing nicked circular DNA [@Rybenkov; @Shaw-Wang]. In those studies circular DNA is modeled as a SAP consisting of impenetrable cylinders, and the knotting probability is evaluated in simulations for segment numbers $N$ up to 60.
The results are compared with experiments where the segment number $N$ is rather small, such as $N$ less than 30. However, the knotting probability of large circular DNA such as 166 kbp has been measured recently in solid-state nanopore experiments [@Plesa]. Thus, the knotting probability for SAP with a large segment number $N$ such as $N=500$ can be systematically investigated in experiments. In the paper we show fundamental and generic properties of the knotting probability of a given knot, in particular, how it depends on the excluded volume. In order to investigate the excluded-volume effect on topological properties systematically, we introduce an off-lattice model of SAP which gives random configurations of a cyclic sequence of $N$ cylindrical segments of radius $r_{\rm ex}$, in which no pair of segments overlaps except for neighboring ones. We describe the knotting probability as a function of not only segment number $N$ but also radius $r_{\rm ex}$ by introducing a compact formula with four fitting parameters. The generic properties presented in the paper should be useful for studying the knotting probability in experiments in various fields. We show numerically how the fitting parameters for expressing the knotting probability depend on cylindrical radius $r_{\rm ex}$, i.e., the excluded-volume parameter. We also show that the four-parameter formula describes the knotting probability very well for various knots over a wide range of segment number $N$. The simulation results of the cylindrical SAP with several values of cylindrical radius lead to a systematic and unifying viewpoint on the knotting probability for many different models of ring polymers in solution. For instance, the knotting probability ratio is consistent with that of lattice SAP if the cylindrical radius $r_{\rm ex}$ is large, for example satisfying $2 r_{\rm ex}=1/4$: the diameter of cylindrical segments is given by one fourth of the bond length.
Moreover, we show that if the cylindrical radius is large, the trefoil knot and its descendants are dominant among the nontrivial knots appearing in an ensemble of SAP. We also show that the maximum of the knotting probability of the trefoil knot slightly increases with respect to the cylindrical radius, while those of other prime knots decrease exponentially with respect to it. Here we remark that the dependence of the knotting probability on the radius of cylindrical segments shown in the present study is consistent with the previous small-$N$ results [@Rybenkov; @Shaw-Wang] and generalizes them to the large-$N$ case. The cylindrical SAP model employed in the present research generates random sequences of impenetrable cylinders of unit length with radius $r_{\rm ex}$, where neighboring pairs of cylindrical segments can overlap while other pairs do not [@UD2015]. Here we recall that it is a SAP model of semi-flexible ring polymers such as circular DNA [@Rybenkov; @Shaw-Wang]. In the model the radius $r_{\rm ex}$ corresponds to the screening length, or the length scale of the screening effect due to counter ions surrounding DNA [@Schellman; @Stigter; @Rybenkov; @Shaw-Wang]. DNA molecules are negatively charged polyelectrolytes, and the screening effect of counter ions may be nontrivial [@Schellman; @Stigter]. We assume that the DNA chain is hard to bend due to electrostatic repulsive forces, and hence DNA is approximated as a sequence of long thin cylinders where some fraction of the counter ions are bound to DNA due to the Manning condensation [@LeBret]. The effective thickness of DNA molecules is determined by the concentration of counter ions in solution [@Stigter]. Typically, the bare radius of DNA corresponds to the radius $r_{\rm ex}=0.01$ in the case of cylindrical segments of unit length [@Grosberg-book]. ![Unknot (the trivial knot, $0_1$) and the prime knots with up to seven minimal crossings.
[]{data-label="fig0"}](FigKP1.pdf){width="0.8\hsize"} Let us explain important properties of the knotting probability. In a model of RP or SAP of $N$ segments we denote the knotting probability of a knot $K$ by $P_K(N)$. It was shown that the knotting probabilities for the bead-rod model are well approximated as a function of $N$ by [@Deguchi-Tsurusaki1997] $$P_K(N)=C_K\tilde{N}^{m(K)}\exp(-\tilde{N}) \, , \label{eq:4formula}$$ where $\tilde{N}$ is given by $$\tilde{N}=\frac{N-\Delta N(K)}{N_K} \, . \label{eq:finite-size}$$ We call the parameters $C_K$, $m(K)$, $N_K$ and $\Delta N(K)$ the knot coefficient (or coefficient), the knot exponent (or exponent), the characteristic length and the finite-size correction of the knotting probability of knot $K$, respectively. We derive formula (\[eq:4formula\]) by assuming the large-$N$ asymptotic expansion of the knotting probabilities. For simplicity, let us consider a model of lattice polygons. We denote by $Z_{K}(N)$ the number of lattice polygons of $N$ segments with a knot $K$ and by $Z_{All}(N)$ that with no topological constraint. The knotting probability of knot $K$ is given by $P_K(N)=Z_K(N)/Z_{All}(N)$. We assume that for topological conditions $K$, including the unconstrained case $All$, the numbers $Z_K(N)$ have the large-$N$ asymptotic expansion $$\log Z_K(N) = \kappa_K N + m_K \log N + \log Z_K^{(0)} + O(1/N) . \label{eq:ZK}$$ By taking the exponential of Eq. (\[eq:ZK\]) and introducing the parameters $N_K$ and $m(K)$ by $1/N_K = \kappa_{All}-\kappa_K$ and $m(K) = m_K-m_{All}$, respectively, we have $$P_K(N)= C_K \left( {N} /{N_K} \right)^{m(K)} \exp \left(- N/N_K \right) . \label{eq:3formula}$$ We call it the asymptotic formula of the knotting probability. We obtain Eq. (\[eq:4formula\]) by replacing $N/N_K$ in Eq. (\[eq:3formula\]) with Eq. (\[eq:finite-size\]). In several models of RP and SAP the estimates of $N_K$ for different knots are given by almost the same value, as far as investigated.
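The four-parameter formula can be evaluated directly; below is a minimal Python sketch (our own illustration; the parameter values are the trefoil-knot estimates for $r_{\rm ex}=0$ quoted later in Table \[tab:31\]).

```python
import math

def knotting_probability(N, C_K, m_K, N_K, dN_K):
    """Four-parameter formula: P_K(N) = C_K * Ntil**m(K) * exp(-Ntil),
    where Ntil = (N - Delta N(K)) / N_K is the finite-size variable."""
    Ntil = (N - dN_K) / N_K
    return C_K * Ntil ** m_K * math.exp(-Ntil)

# Illustrative parameters: trefoil-knot estimates for the cylindrical SAP
# at r_ex = 0 (Table [tab:31]).
params = dict(C_K=0.6183, m_K=0.852, N_K=257.1, dN_K=18.4)

# The curve rises, peaks near N = m(K) * N_K + Delta N(K), and then decays.
p_peak = knotting_probability(0.852 * 257.1 + 18.4, **params)
p_tail = knotting_probability(2000, **params)
```

The peak location follows from setting the derivative of Eq. (\[eq:4formula\]) with respect to $N$ to zero, as discussed in section \[sec:maxP\].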
We therefore call $N_K$ the characteristic length of the knotting probability [@JKTR], and denote it by $N_0$. For the cylindrical SAP the $N_K$s are evaluated to be the same within errors for 145 knots [@UD2015]. For a composite knot consisting of knots $K_1$ and $K_2$, denoted by $K_1 \# K_2$, its exponent and coefficient are approximately equal to the sum and the product of those of knots $K_1$ and $K_2$, respectively, in several models of RP and SAP. We call such properties the factorization properties of exponents and coefficients, respectively. It was suggested that the factorization properties are derived from the local knot conjecture, i.e. that the knotted region in a knotted SAP is localized [@Orlandini1998; @Katritch00; @Marcone]. In the present paper we shall show that the coefficients $C_K$ for almost all the prime knots are well approximated by exponentially decaying functions of radius $r_{\rm ex}$ in the cylindrical SAP model. We show it numerically for the prime knots with less than or equal to seven minimal crossings. Quite interestingly, only for the trefoil knot does the coefficient $C_{3_1}$ increase with respect to radius $r_{\rm ex}$. It follows that the majority of nontrivial knots are given by the trefoil knot and its composite knots if radius $r_{\rm ex}$ is rather large, such as $r_{\rm ex}=0.1$. The knot coefficients $C_K$ have the interesting property that the sum of the knot coefficients over all prime knots is given by 1. We call it the sum rule of knot coefficients for prime knots. We shall show that it is consistent with the formula of the knotting probability. We derive an infinite number of sum rules such that the sum of the knot coefficients over all composite knots consisting of $n$ prime knots is given by $1/n!$ for positive integers $n$. We shall show in section 5 that they are derived from the factorization properties of exponents and coefficients.
Furthermore, we suggest that the sum rules give a numerical support for the local knot conjecture. In order to investigate to what extent the asymptotic behavior dominates the knotting probability as a function of segment number $N$, we apply the three-parameter asymptotic formula (\[eq:3formula\]) to the data points of the knotting probability against segment number $N$. The estimates of the exponent $m(K)$ of a knot $K$ are much closer to integers than in the case of the four-parameter formula (\[eq:4formula\]), although the $\chi^2$ values of the fitted curves are larger than those of Eq. (\[eq:4formula\]). It seems that the results of the asymptotic formula (\[eq:3formula\]) are more similar to those of on-lattice SAP, where the estimates of the entropic exponent are given by integers [@Orlandini1996; @Orlandini1998]. Here we remark that the entropic exponent corresponds to the exponent $m(K)$ of a knot $K$ in the notation of the present paper. The paper is organized as follows. In section 2 we explain the algorithm for generating the cylindrical SAP with radius $r_{\rm ex}$ and then present the knot invariants by which we detect the knot type of a given polygon. We also give the numbers of polygons generated in the present research, which lead to the estimates of statistical errors. In section 3 we show that the four-parameter formula (\[eq:4formula\]) gives good fitted curves to the data points of the knotting probability versus segment number $N$. We exhibit fitted curves to the data of the knotting probability against $N$ for several prime and composite knots. We then present fundamental and generic properties of the knotting probability, as addressed briefly in the Introduction. In section 4 we formulate important properties of the knot coefficients $C_K$. First, we argue that the knot coefficient of a knot $K$ determines the maximum value of the knotting probability of the knot $K$.
Second, we show numerically how the parameters $C_K$ for prime knots depend on the radius of the cylindrical segments of the SAP. Third, we numerically confirm the factorization property of knot coefficients. In section 5 we show some important aspects of the knotting probability. We argue numerically that the results of the cylindrical SAP at $r_{\rm ex}=1/8$ correspond to those of lattice SAP. We derive the sum rules for knot coefficients $C_K$ from Eq. (\[eq:4formula\]) of the knotting probability. We numerically confirm the sum rule of coefficients $C_K$ for prime knots. We suggest that it gives a numerical support for the local knot conjecture. In section 6 we discuss how effective the asymptotic expansion of the knotting probability is. In section 7 we give some concluding remarks.

Numerical methods
=================

Let us explain the method for evaluating the knotting probability of a given knot $K$ for the cylindrical SAP: we generate an ensemble of cylindrical SAP by the Monte Carlo method, detect the knot type of each SAP by calculating knot invariants, and then evaluate the knotting probability of the knot $K$ for the cylindrical SAP.

Algorithm for generating cylindrical SAP
----------------------------------------

We construct an ensemble of SAP consisting of $N$ cylindrical segments with radius $r_{\rm ex}$ as follows [@UD2015]. First, we construct an initial polygon as an equilateral regular $N$-gon, where the vertices are numbered consecutively from $1$ to $N$. Second, we choose two vertices randomly out of the $N$ vertices. Suppose that they are given by numbers $p_1$ and $p_2$. We rotate the sub-chain between the vertices $p_1$ and $p_2$ around the straight line connecting them by an angle chosen randomly from $0$ to $2\pi$. Third, we check whether the rotated sub-chain has any overlap with the other part of the polygon or not.
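The three steps above can be sketched in a self-contained way. The following is a minimal, unoptimized illustration of our own (names are ours; the overlap test is a naive $O(N^2)$ check, and the segment-segment distance follows a standard closest-points routine):

```python
import numpy as np

def min_seg_dist(p1, p2, q1, q2):
    """Minimum distance between segments p1-p2 and q1-q2 (standard closest-points routine)."""
    d1, d2, r = p2 - p1, q2 - q1, p1 - q1
    a, e = d1 @ d1, d2 @ d2
    b, c, f = d1 @ d2, d1 @ r, d2 @ r
    denom = a * e - b * b
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
    t = (b * s + f) / e
    if t < 0.0:
        t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
    elif t > 1.0:
        t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return np.linalg.norm((p1 + s * d1) - (q1 + t * d2))

def has_overlap(verts, r_ex):
    """True if some pair of non-adjacent edges comes closer than 2*r_ex."""
    N = len(verts)
    for i in range(N):
        for j in range(i + 2, N):
            if i == 0 and j == N - 1:          # adjacent around the ring
                continue
            if min_seg_dist(verts[i], verts[(i + 1) % N],
                            verts[j], verts[(j + 1) % N]) < 2.0 * r_ex:
                return True
    return False

def crankshaft_step(verts, r_ex, rng):
    """Rotate the sub-chain between two random vertices about the chord
    joining them by a random angle; accept only if there is no overlap."""
    N = len(verts)
    i, j = sorted(rng.choice(N, size=2, replace=False))
    if j - i < 2:
        return verts                            # empty sub-chain, nothing to rotate
    axis = verts[j] - verts[i]
    axis = axis / np.linalg.norm(axis)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)  # Rodrigues formula
    trial = verts.copy()
    trial[i + 1:j] = verts[i] + (verts[i + 1:j] - verts[i]) @ R.T
    return verts if has_overlap(trial, r_ex) else trial

# Start from a regular N-gon with unit edges and apply 2N trial moves.
N = 20
rng = np.random.default_rng(0)
ang = 2.0 * np.pi * np.arange(N) / N
verts = np.column_stack([np.cos(ang), np.sin(ang), np.zeros(N)]) / (2.0 * np.sin(np.pi / N))
for _ in range(2 * N):
    verts = crankshaft_step(verts, r_ex=0.01, rng=rng)
```

Since the rotation is an isometry fixing both chosen vertices, every accepted configuration remains a closed equilateral polygon.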
If the distance between every pair of non-neighboring segments (or polygonal edges) of the polygon is larger than $2 r_{\rm ex}$, the polygon does not have any overlap. If it has no overlap, we employ the rotated configuration as the cylindrical SAP in the next Monte Carlo step. If it has an overlap, we employ the previous configuration of the SAP before rotation in the next Monte Carlo step. Then, we repeat this procedure many times, e.g. $2N$ times. In the case of $r_{\rm ex}=0$, the SAPs generated by the above algorithm are given by equilateral random polygons. The algorithm for generating cylindrical SAPs with $r_{\rm ex}=0$ is also called the polygonal folding method (PFM) [@Millett1994]. The ergodicity of the PFM is shown in Refs. [@Millett1994; @Kapovich1996] (see also [@Millett2011]).

Method for evaluating the knotting probability
----------------------------------------------

We detect the knot type of a given SAP by evaluating mainly the values of two knot invariants: the absolute value of the Alexander polynomial $|\Delta_K(t)|$ evaluated at $t=-1$ and the Vassiliev invariant of the second order, $v_2(K)$, for a knot $K$. If a given SAP has the same values of the two knot invariants as a knot $K$, we assume that the topology of the polygon is given by the knot $K$. For some cases we also evaluate the Vassiliev invariant of the third order, as we shall see below. Some pairs of knots have the same values of the two knot invariants in common. For example, both knot $7_4$ and knot $3_1\# 5_1$ have the same values $|\Delta_K(-1)|=15$ and $v_2(K)= 4$. Therefore, we cannot distinguish between knot $7_4$ and knot $3_1 \# 5_1$ only by calculating the two knot invariants. In order to distinguish them we evaluate the Vassiliev invariants of the third order for such polygons. The Vassiliev invariants of any order can be calculated by the method of the quasi-classical expansion of the $R$-matrix of the quantum group [@Deguchi-Tsurusaki-PLA; @JKTR].
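The classification by pairs of invariant values can be illustrated with a small lookup table. The sketch below is our own illustration; the determinant and $v_2$ values are standard knot-table entries, and under the connected sum the determinant multiplies while $v_2$ adds. It reproduces the $7_4$ versus $3_1 \# 5_1$ ambiguity mentioned above:

```python
from itertools import combinations_with_replacement

# Determinant |Delta_K(-1)| and second Vassiliev invariant v2(K) for a few
# prime knots (standard knot-table values; v2 equals the Conway coefficient a2).
PRIME_INV = {"3_1": (3, 1), "4_1": (5, -1), "5_1": (5, 3), "5_2": (7, 2),
             "7_4": (15, 4)}

def connected_sum_invariants(primes):
    """Under the connected sum # the determinant multiplies and v2 adds."""
    det, v2 = 1, 0
    for k in primes:
        d, v = PRIME_INV[k]
        det, v2 = det * d, v2 + v
    return det, v2

# Group knot types by the invariant pair (det, v2) to expose ambiguities.
table = {}
for k, inv in PRIME_INV.items():
    table.setdefault(inv, []).append(k)
for pair in combinations_with_replacement(["3_1", "4_1", "5_1", "5_2"], 2):
    table.setdefault(connected_sum_invariants(pair), []).append("#".join(pair))

ambiguous = table[(15, 4)]   # the pair shared by 7_4 and 3_1 # 5_1
```

Such collisions are exactly the cases in which the third-order Vassiliev invariant is needed.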
To calculate the Vassiliev invariants of the second and the third order, however, we employ the algorithm due to Polyak and Viro [@Polyak-Viro]. By the latter method the Vassiliev invariants are calculated only through the Gauss codes [@Murasugi] (or the Dowker codes).

Number of SAPs generated in the simulation
------------------------------------------

In the present simulation, for each value of radius $r_{\rm ex}$ we generated $2\times10^5$ polygons for $N \le 4,000$, $10^5$ polygons for $N$ satisfying $4,000 \le N \le 6,000$, $5 \times10^4$ polygons for $N$ satisfying $6,000 \le N \le 8,000$, and $4 \times10^4$ polygons for $N$ satisfying $8,000 \le N \le 10,000$. The number of segments $N$ ranges from $100$ to $3,000$ for the cylindrical SAP of zero thickness ($r_{\rm ex}= 0$), i.e. equilateral random polygons; from $100$ to $3,000$ for the cylindrical SAP with $r_{\rm ex}=0.005$ and $0.01$; from $100$ to $4,000$ with $r_{\rm ex}=0.02$; from $100$ to $5,000$ with $r_{\rm ex}=0.03$; from $100$ to $7,000$ with $r_{\rm ex}=0.04$; from $100$ to $8,000$ with $r_{\rm ex}=0.05$; and from $100$ to $10^4$ with $r_{\rm ex}=0.06$, $0.08$ and $0.1$.

Knotting probabilities of various knots
=======================================

Knotting probability for prime knots
------------------------------------

![Knotting probability of the trefoil knot ($3_1$) versus the number of segments $N$ for the cylindrical SAPs with ten different values of the cylindrical radius. The plots for the values of radius $r_{\rm ex}$ given by 0.0, 0.005, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.08 and 0.10 are depicted by circles (red), stars, upper triangles, diamonds, squares, lower triangles, saltires or Xs, crosses, circles (blue) and stars (purple), respectively. The fitted curves have the best estimates of the parameters of Eq. (\[eq:4formula\]) listed in Table \[tab:31\].
[]{data-label="fig:31"}](FigKP2.pdf){width="0.95\hsize"}

  ------------------------------------------------------------------------------------------------------------
  $r_{\rm ex}$   $C_K$                 $m(K)$                $N_K$               $\Delta N(K)$    $\chi^2$/DF
  -------------- --------------------- --------------------- ------------------- ---------------- -------------
  0              $0.6183 \pm 0.0012$   $0.852 \pm 0.014$     $257.1 \pm 1.2$     $18.4 \pm 2.2$   $0.56$
  0.005          $0.6643 \pm 0.0018$   $0.832 \pm 0.015$     $380.5 \pm 2.3$     $22. \pm 2.7$    $1.14$
  0.01           $0.7039 \pm 0.0017$   $0.856 \pm 0.013$     $517.7 \pm 3.1$     $19. \pm 2.5$    $1.19$
  0.02           $0.7644 \pm 0.0010$   $0.8929 \pm 0.0067$   $867.7 \pm 3.3$     $13.9 \pm 1.7$   $0.68$
  0.03           $0.8087 \pm 0.0013$   $0.9303 \pm 0.0074$   $1348.3 \pm 6.9$    $9.1 \pm 2.2$    $1.13$
  0.04           $0.8404 \pm 0.0014$   $0.9414 \pm 0.0064$   $2047. \pm 11.$     $6.7 \pm 2.3$    $1.15$
  0.05           $0.8607 \pm 0.0013$   $0.9369 \pm 0.0051$   $3051. \pm 17.$     $13.7 \pm 2.0$   $0.91$
  0.06           $0.8765 \pm 0.0011$   $0.9385 \pm 0.0034$   $4455. \pm 21.$     $14.2 \pm 1.5$   $0.41$
  0.08           $0.902 \pm 0.0031$    $0.9513 \pm 0.0069$   $8770. \pm 140.$    $18.4 \pm 2.9$   $0.94$
  0.1            $0.926 \pm 0.0120$    $0.9520 \pm 0.0076$   $16690. \pm 550.$   $24.5 \pm 3.5$   $1.01$
  ------------------------------------------------------------------------------------------------------------

  : Best estimates of the parameters of Eq. (\[eq:4formula\]) for the trefoil knot ($3_1$) as functions of cylindrical radius $r_{\rm ex}$, together with the $\chi^2$ value per degree of freedom (DF). []{data-label="tab:31"}

### Maximum probability of trefoil knot increases as the excluded volume of SAP increases

Let us denote by the symbol $P_K(N, r_{\rm ex})$ the knotting probability of a knot $K$ for the cylindrical SAP consisting of $N$ cylindrical segments with radius $r_{\rm ex}$. In Fig. \[fig:31\] the knotting probabilities of the trefoil knot ($3_1$) are plotted against segment number $N$ for the ten values of cylindrical radius $r_{\rm ex}= 0.0$, 0.005, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.08 and 0.1. We observe in Fig.
\[fig:31\] that the maximum value of the knotting probability of knot $3_1$ increases as radius $r_{\rm ex}$ increases. The peak height of each plot increases gradually as radius $r_{\rm ex}$ increases, while the peak position, i.e. the number of segments $N$ at which the knotting probability attains its maximum value, is shifted to the right. The peak position is approximately given by the characteristic length $N_{3_1}$ if we assume Eq. (\[eq:4formula\]). Here we remark that the exponent of the trefoil knot, $m(3_1)$, is estimated as roughly equal to 1.0, as shown later. In Fig. \[fig:41\] the knotting probabilities of the figure-eight knot ($4_1$) for the cylindrical SAP with radius $r_{\rm ex}$ are plotted against segment number $N$ for various values of radius $r_{\rm ex}$. The maximum value of the knotting probability of knot $4_1$ decreases as radius $r_{\rm ex}$ increases. The fitted curves in Figs. \[fig:31\] and \[fig:41\] are given by formula (\[eq:4formula\]). They are good, since the $\chi^2$ values are less than 2.0 for all the curves. Here we remark that the best estimates of the parameters of Eq. (\[eq:4formula\]) are listed in Tables \[tab:31\] and \[tab:41\] for knots $3_1$ and $4_1$, respectively, together with the $\chi^2$ value per degree of freedom (DF). ![Knotting probability of the figure-eight knot $4_1$ for cylindrical SAPs with ten different values of the cylindrical radius: $r_{\rm ex}=0.0$, 0.005, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.08 and 0.1. The fitted curves have the best estimates of the parameters of Eq. (\[eq:4formula\]) listed in Table \[tab:41\]. []{data-label="fig:41"}](FigKP3.pdf){width="0.95\hsize"} The knotting probabilities of the prime knots that have less than or equal to seven minimal crossings for the cylindrical SAP with radius $r_{\rm ex}$ are fitted by formula (\[eq:4formula\]). The best estimates of the parameters of Eq.
(\[eq:4formula\]) together with the $\chi^2$ value per DF are listed in Tables \[tab:56\] and \[tab:7\] in Appendix A: for knots $5_1$, $5_2$, $6_1$, $6_2$, and $6_3$ in Table \[tab:56\]; for $7_1$, $7_2$, $7_3$, $7_4$, $7_5$, $7_6$ and $7_7$ in Table \[tab:7\]. The $\chi^2$ values are smaller than 2.0 for all the fitted curves. We can show that the maximum value of the knotting probability of a knot $K$ is determined by the coefficient $C_K$. We shall show it in section \[sec:maxP\] by making use of Eq. (\[eq:4formula\]). The increase of the maximum value of the knotting probability for a nontrivial knot with respect to the excluded volume is interesting and nontrivial. Among all the prime knots we have investigated, it occurs only for the trefoil knot ($3_1$). For the other prime knots the maximum value of the knotting probability decreases as radius $r_{\rm ex}$ increases. We confirm it from the estimates of the knot coefficients $C_K$ for knot $4_1$ in Table \[tab:41\], knots $5_1$, $5_2$, $6_1$, $6_2$, and $6_3$ in Table \[tab:56\], and knots $7_1$, $7_2$, $7_3$, $7_4$, $7_5$, $7_6$ and $7_7$ in Table \[tab:7\] in Appendix A. We observe in Figs. \[fig:31\] and \[fig:41\], as well as in the plots for knots $5_1$ and $5_2$, that the maximum value of the knotting probability for a knot $K$ decreases as the minimal crossing number of the knot $K$ increases, at least among the four prime knots $3_1$, $4_1$, $5_1$ and $5_2$.

### Fitted curves with knot exponent close to 1

The estimate of the exponent $m(K)$ of a knot $K$ is roughly given by 1.0 for the four prime knots $3_1$, $4_1$, $5_1$ and $5_2$. In Table \[tab:31\] the exponent $m(3_1)$ of the trefoil knot is approximately given by 1.0. However, taking the error estimates into account, it is clearly smaller than 1.0. Here we recall that the best estimates of the parameters of Eq.
(\[eq:4formula\]) are listed in Tables \[tab:31\] and \[tab:41\], together with the $\chi^2$ value per DF, for knots $3_1$ and $4_1$, respectively.

### Small-$N$ region

When segment number $N$ is much smaller than the characteristic length $N_K$, the knotting probability of a nontrivial knot $K$ can be approximated by a linear function of $N$. We observe it in Figs. \[fig:31\] and \[fig:41\] for knots $3_1$ and $4_1$, respectively, and also for knots $5_1$ and $5_2$. In the small-$N$ region, for a fixed number of segments $N$, the knotting probability $P_K(N, r_{\rm ex})$ decreases with respect to radius $r_{\rm ex}$, as shown in Figs. \[fig:31\] and \[fig:41\]. If we assume the four-parameter formula (\[eq:4formula\]), this is a consequence of the fact that the characteristic length increases rapidly with respect to the excluded-volume parameter, i.e. the cylindrical radius $r_{\rm ex}$. In the small-$N$ region, $N \ll N_K$, formula (\[eq:4formula\]) is approximated by a linear function of $N$ as $$P_K(N, r_{\rm ex}) \approx C_K {\frac {N-\Delta N(K)}{N_K}} \, . \label{eq:small-N}$$ Here for simplicity we have assumed that the exponent $m(K)$ of a prime knot $K$ is approximately given by 1. As radius $r_{\rm ex}$ increases, the characteristic length $N_K$ increases rapidly while the coefficient $C_K$ does not change very much, so that the knotting probability decreases with respect to $r_{\rm ex}$ for a given fixed number of segments $N$.
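As a numerical check of the linear approximation, one can compare the two expressions directly. The sketch below is our own illustration, using the trefoil estimates of Table \[tab:31\] for $r_{\rm ex}=0$ and setting $m(K)=1$:

```python
import math

# Four-parameter formula and its small-N linearization, Eq. (eq:small-N)
def P_full(N, C_K, m_K, N_K, dN):
    Ntil = (N - dN) / N_K
    return C_K * Ntil ** m_K * math.exp(-Ntil)

def P_linear(N, C_K, N_K, dN):
    return C_K * (N - dN) / N_K

C_K, N_K, dN = 0.6183, 257.1, 18.4   # trefoil estimates at r_ex = 0
# Relative error of the linearization at N = 30 (N << N_K) and at N = 150
err_30 = abs(P_linear(30, C_K, N_K, dN) / P_full(30, C_K, 1.0, N_K, dN) - 1.0)
err_150 = abs(P_linear(150, C_K, N_K, dN) / P_full(150, C_K, 1.0, N_K, dN) - 1.0)
```

For $m(K)=1$ the ratio of the linear form to the full formula is exactly $e^{\tilde N}$, so the approximation is accurate only for $\tilde N \ll 1$, i.e. $N \ll N_K$.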
  -------------------------------------------------------------------------------------------------------------
  $r_{\rm ex}$   $C_K$                   $m(K)$              $N_K$                $\Delta N(K)$    $\chi^2$/DF
  -------------- ----------------------- ------------------- -------------------- ---------------- -------------
  0              $0.13113 \pm 0.00090$   $0.841 \pm 0.041$   $257.5 \pm 3.8$      $31.9 \pm 6.0$   $1.43$
  0.005          $0.1279 \pm 0.00078$    $0.793 \pm 0.027$   $383.5 \pm 4.5$      $38.0 \pm 4.4$   $0.94$
  0.01           $0.12575 \pm 0.00062$   $0.929 \pm 0.033$   $492.8 \pm 7.2$      $20.3 \pm 6.0$   $1.36$
  0.02           $0.11522 \pm 0.00047$   $0.887 \pm 0.018$   $860.8 \pm 9.3$      $26.7 \pm 4.2$   $0.89$
  0.03           $0.10542 \pm 0.00040$   $0.902 \pm 0.014$   $1355. \pm 15.$      $31.4 \pm 3.8$   $0.75$
  0.04           $0.09443 \pm 0.00046$   $0.891 \pm 0.015$   $2096. \pm 28.$      $34.3 \pm 4.5$   $0.96$
  0.05           $0.08711 \pm 0.00033$   $0.937 \pm 0.012$   $3004. \pm 39.$      $26.8 \pm 4.4$   $0.59$
  0.06           $0.07865 \pm 0.00058$   $0.932 \pm 0.019$   $4420. \pm 120.$     $27.5 \pm 7.7$   $1.34$
  0.08           $0.0663 \pm 0.0011$     $0.938 \pm 0.031$   $8820. \pm 670.$     $46. \pm 11.$    $1.82$
  0.1            $0.0595 \pm 0.0025$     $0.981 \pm 0.024$   $15700. \pm 1600.$   $39. \pm 10.$    $0.74$
  -------------------------------------------------------------------------------------------------------------

  : Best estimates of the parameters of Eq. (\[eq:4formula\]) for the figure-eight knot ($4_1$) as functions of cylindrical radius $r_{\rm ex}$, together with the $\chi^2$ value per DF. []{data-label="tab:41"}

Here we remark that the finite-size corrections $\Delta N(K)$ increase slightly as cylindrical radius $r_{\rm ex}$ increases.

### Knots with the same crossing number

Comparing the knotting probabilities of knots $5_1$ and $5_2$, that of knot $5_2$ is almost twice as large as that of knot $5_1$, although the two knots have the same minimal crossing number. We can confirm it from the estimates of the knot coefficients $C_K$ listed in Table \[tab:56\] of Appendix A. Here we remark that knot $5_1$ is a torus knot, while knot $5_2$ is a twist knot [@Murasugi]. Interestingly, the same holds for knots $7_1$ and $7_2$. The knot coefficient of knot $7_2$, which is a twist knot, is more than twice as large as that of knot $7_1$, which is a torus knot, as listed in Table \[tab:7\] of Appendix A.
Knotting probabilities of composite knots
-----------------------------------------

Let us introduce prime knots and composite knots [@Murasugi]. If a diagram of a knot $K$ is decomposed into two diagrams of nontrivial knots $K_1$ and $K_2$ by cutting the diagram of $K$ at two points, we say that $K$ is composed of the two knots and denote it by $K=K_1 \# K_2$. We also say that it is the product of them. If a knot $K$ cannot be decomposed into a product of two nontrivial knots, we say that it is prime.

### Factorization properties of exponents and coefficients

By applying Eq. (\[eq:4formula\]) to the data points of the knotting probability versus segment number $N$, we observe that the best estimate of the exponent $m(K)$ for a composite knot $K= K_1 \# K_2$ is given by the sum of the best estimates of the exponents for the constituent knots $K_1$ and $K_2$ [@JKTR] $$m(K_1 \# K_2) = m(K_1) + m(K_2) \, . \label{eq:factor0}$$ We call it the factorization property of exponents $m(K)$. An analytical derivation of Eq. (\[eq:factor0\]) was argued by assuming the local knot picture [@TD95]. In several models of RP and SAP the estimate of the exponent $m(K)$ of a composite knot is approximately given by the number of prime knots of which the composite knot consists. Similarly, in several models of RP and SAP we observe the factorization property of knot coefficients $C_K$: for a composite knot $K_1 \# K_2$ consisting of two different knots $K_1$ and $K_2$ we have $C_{K_1 \# K_2} = C_{K_1} C_{K_2}$ among the estimates of the coefficients $C_{K_1 \# K_2}$, $C_{K_1}$ and $C_{K_2}$. For a composite knot $K$ consisting of $n$ prime knots such that there are $n_j$ copies of prime knot $K_j$ and the sum of the integers $n_j$ is given by $n$, we have $$C_K = \prod_j \left( C_{K_j} \right)^{n_j} / n_j ! \, .
\label{eq:fact3}$$ We remark that in the present research we shall numerically show the factorization property for the fitting parameters $C_K$ of the four-parameter formula (\[eq:4formula\]), where the finite-size corrections $\Delta N(K)$ are taken into account. For lattice knots the factorization property among the coefficients $C_K$ for large $N$ was studied numerically by making use of the asymptotic expansion of the knotting probability [@Stella], which corresponds to Eq. (\[eq:3formula\]), where no finite-size corrections $\Delta N(K)$ are considered. It has been suggested that the factorization properties of exponents $m(K)$ and coefficients $C_K$ are favorable to the local-knot picture. Here we recall the local knot conjecture that for a RP or SAP with a nontrivial knot, the knotted part of the RP or SAP is localized in some way [@Orlandini1998; @Katritch00; @Marcone]. However, it is not trivial to verify this suggestion even numerically [@Tubiana].

### Fitted curves for composite knot of two trefoil knots

![Knotting probability of composite knot $3_1 \# 3_1$ versus the number of segments $N$ for the cylindrical SAP with ten different values of radius: $r_{\rm ex}=0.0$, 0.005, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.08 and 0.1. The fitted curves are given by applying Eq. (\[eq:4formula\]), where the best estimates of the parameters are listed in Table \[tab:3131\]. []{data-label="fig:3131"}](FigKP4.pdf){width="0.9\hsize"}

In Fig. \[fig:3131\] the numerical estimates of the knotting probability of the composite knot $3_1\# 3_1$ are plotted against the number of segments $N$ for the cylindrical SAPs with ten different values of cylindrical radius $r_{\rm ex}$. Here we remark that the composite knot $3_1\# 3_1$ is the product of two trefoil knots. The fitted curves, given by Eq. (\[eq:4formula\]), fit the data points very well. The $\chi^2$ values are small.
Here, the best estimates of the fitting parameters and the $\chi^2$ values are given in Table \[tab:3131\].

  -----------------------------------------------------------------------------------------------------------
  $r_{\rm ex}$   $C_K$                 $m(K)$                $N_K$              $\Delta N(K)$    $\chi^2$/DF
  -------------- --------------------- --------------------- ------------------ ---------------- -------------
  0              $0.2010 \pm 0.0021$   $1.759 \pm 0.018$     $261.1 \pm 1.3$    $26.3 \pm 1.8$   $0.61$
  0.005          $0.2367 \pm 0.0033$   $1.817 \pm 0.024$     $380.5 \pm 2.8$    $19.3 \pm 2.7$   $1.19$
  0.01           $0.2740 \pm 0.0031$   $1.801 \pm 0.020$     $522.2 \pm 3.9$    $24.8 \pm 2.7$   $1.26$
  0.02           $0.3212 \pm 0.0017$   $1.8430 \pm 0.0098$   $873.0 \pm 3.7$    $24.6 \pm 1.8$   $0.47$
  0.03           $0.3491 \pm 0.0033$   $1.897 \pm 0.016$     $1354. \pm 12.$    $21.1 \pm 3.7$   $1.28$
  0.04           $0.3744 \pm 0.0028$   $1.899 \pm 0.012$     $2069. \pm 16.$    $22.4 \pm 3.6$   $0.87$
  0.05           $0.3989 \pm 0.0034$   $1.899 \pm 0.013$     $3099. \pm 31.$    $33.3 \pm 4.4$   $0.86$
  0.06           $0.4028 \pm 0.0057$   $1.940 \pm 0.017$     $4337. \pm 70.$    $31.1 \pm 6.8$   $1.26$
  -----------------------------------------------------------------------------------------------------------

  : Best estimates of the parameters of Eq. (\[eq:4formula\]) for the composite knot $3_1 \# 3_1$ as functions of cylindrical radius $r_{\rm ex}$, together with the $\chi^2$ value per DF. []{data-label="tab:3131"}

It is clear in Fig. \[fig:3131\] that the maximum value of the knotting probability of the composite knot $3_1\# 3_1$ increases as cylindrical radius $r_{\rm ex}$ increases. The peak position is approximately given by twice the characteristic length $N_0$. This is compatible with the fact that the estimate of the exponent of the composite knot is approximately given by 2.0: $m(3_1 \# 3_1) = 2.0$. Here we recall that the characteristic length $N_0$ also increases with respect to the cylindrical radius. The numerical data of the knotting probability of the composite knot $3_1\# 3_1 \# 3_1$ are plotted against the number of segments $N$ for the cylindrical SAP with eight different values of cylindrical radius $r_{\rm ex}$ in Fig. \[fig:313131\] of Appendix A. The fitted curves given by Eq. (\[eq:4formula\]) are good.
The best estimates are listed in Table \[tab:313131\] of Appendix A. We suggest that the maximum value of the knotting probability of the trefoil knot $3_1$, and that of any composite knot consisting only of trefoil knots, increases as the excluded-volume parameter $r_{\rm ex}$ increases, while the maximum value of the knotting probability of any prime knot other than knot $3_1$ decreases exponentially with respect to cylindrical radius $r_{\rm ex}$.

### Fitted curves for other composite knots

In Fig. \[fig:composite\] the knotting probabilities of various composite knots consisting of knots $3_1$ and $4_1$, such as $3_1 \# 4_1$ and $3_1 \# 3_1 \# 4_1$, are plotted against the number of segments $N$ for the cylindrical SAP in the case of zero thickness, i.e. for $r_{\rm ex}=0$. The fitted curves given by Eq. (\[eq:4formula\]) are good and have small $\chi^2$ values per DF. The peak positions in Fig. \[fig:composite\] are classified into three types: those of composite knots consisting of two prime knots such as $3_1 \# 4_1$, those of composite knots consisting of three prime knots such as $3_1 \# 3_1 \# 4_1$, and those of composite knots consisting of four prime knots such as $3_1 \# 3_1 \# 3_1 \# 4_1$. We have plotted the estimates of the knotting probabilities against the number of segments $N$ for as many as 130 composite knots [@UD2015]. We observe the factorization properties of knot exponents and knot coefficients as given in Eq. (\[eq:factor0\]) and Eq. (\[eq:fact3\]), respectively. ![ Knotting probabilities of several composite knots that contain the trefoil knot $3_1$ and the figure-eight knot $4_1$ for the cylindrical SAP with zero thickness $r_{\rm ex}=0$, i.e., equilateral random polygons.
Here the data points of composite knots $3_1 \# 4_1$, $4_1 \# 4_1$, $3_1 \# 3_1 \# 3_1$, $3_1 \# 3_1 \# 4_1$, $3_1 \# 4_1 \# 4_1$, $3_1 \# 3_1 \# 3_1 \# 3_1$, $3_1 \# 3_1 \# 3_1 \# 4_1$ and $3_1 \# 3_1 \# 4_1 \# 4_1$ are depicted by circles, diamonds, upper triangles, stars, squares, lower triangles, saltires or Xs, and crosses, respectively. The fitted curves are given by Eq. (\[eq:4formula\]). []{data-label="fig:composite"}](FigKP5.pdf){width="1.0\hsize"}

Fundamental properties of knot coefficients
===========================================

Knot coefficients determine the maximum of knotting probability {#sec:maxP}
---------------------------------------------------------------

We now show that the knot coefficients $C_K$ mainly determine the maximum value of the knotting probability of a knot $K$. We recall that the estimates of the exponents $m(K)$ for several prime knots are close to 1.0 but, within errors, not equal to 1.0. Here we also recall that the best estimates of the exponents $m(K)$ defined in Eq. (\[eq:4formula\]) for knots $3_1$ and $4_1$ are given in Tables \[tab:31\] and \[tab:41\], respectively, and those for the knots with five, six and seven minimal crossings are listed in Tables \[tab:56\] and \[tab:7\], respectively, in Appendix A. The best estimates of the exponent $m(K)$ range from 0.8 to 1.2 for the prime knots. The best estimate of $m(K)$ increases slightly as cylindrical radius $r_{\rm ex}$ increases. Let us express the maximum value of the knotting probability in terms of the fitting parameters of Eq. (\[eq:4formula\]).
By taking the derivative with respect to $N$ we have $$\frac{dP_K(N,r_{\rm ex})}{dN}=\frac{C_K}{N_K}\tilde{N}^{m(K)-1}(m(K)-\tilde{N})\exp(-\tilde{N}) \, . \label{Eq005}$$ Therefore, the knotting probability of a nontrivial knot $K$ attains its maximum value at $\tilde{N}=m(K)$, i.e. at $$N = m(K) N_K + \Delta N(K) .$$ The maximum value of the knotting probability of knot $K$ is thus given by $${\rm Max Prob}(K) = C_K m(K)^{m(K)}\exp(-m(K)) \, . \label{Eq006}$$ The value of $m(K)^{m(K)} \exp[-m(K)]$ does not change very much when the value of the exponent $m(K)$ varies from $0.8$ to $1.2$. In fact, the value of $m(K)^{m(K)}\exp(-m(K))$ is given by $0.376$ for $m(K)=0.8$ and $0.368$ for $m(K)=1$. Therefore, the maximum value of the knotting probability of a given prime knot $K$ depends mainly on the coefficient $C_K$.

How the knot coefficients of prime knots depend on the cylindrical radius
-------------------------------------------------------------------------

The coefficients $C_K$ of the prime knots $K$ with up to seven crossings are plotted against cylindrical radius $r_{\rm ex}$ in Fig. \[figCK\] on a semi-logarithmic scale. ![Coefficients $C_K$ versus cylindrical radius $r_{\rm ex}$ for the prime knots with crossing number less than or equal to seven, on a semi-logarithmic scale. The fitted curves are given by applying Eq. (\[eq:C31\]) to the data of the trefoil knot and Eq. (\[eq:CKprime\]) to the data of the other prime knots. []{data-label="figCK"}](FigKP6.pdf){width="0.9\hsize"} In Fig. \[figCK\] it is clear that the coefficients $C_K$ for the prime knots other than knot $3_1$ decay exponentially with respect to cylindrical radius $r_{\rm ex}$. Furthermore, as the minimal crossing number of a knot $K$ increases, the absolute value of the gradient of the line fitted to the data points increases. Let us express the coefficient $C_K$ of a prime knot $K$ as a function of cylindrical radius $r_{\rm ex}$.
For any given prime knot other than the trefoil knot $3_1$, we introduce as a fitting formula an exponentially decaying function of cylindrical radius $r_{\rm ex}$: $$C_K(r_{\rm ex})=a_1(K) \exp(-b_1(K) r_{\rm ex}) \, . \label{eq:CKprime}$$ Here parameter $b_1(K)$ denotes the decay constant with respect to the cylindrical radius $r_{\rm ex}$, and parameter $a_1(K)$ the corresponding amplitude. In Table \[Tab006\] the best estimates of the fitting parameters $a_1(K)$ and $b_1(K)$ of Eq. (\[eq:CKprime\]) are listed together with the $\chi^2/{\rm DF}$ values. In Fig. \[figCK\] the fitted curves are given by Eq. (\[eq:CKprime\]). The fitted curves describe the data well: the $\chi^2$ values per DF are less than 2.0 for the seven-crossing knots, although they are larger for the knots with six or fewer crossings.

  Knot type $K$   $a_1(K)$                  $b_1(K)$           $\chi^2/{\rm DF}$
  --------------- ------------------------- ------------------ -------------------
  $4_1$           $0.1357 \pm 0.0013$       $8.82 \pm 0.27$    $8.70$
  $5_1$           $0.04387 \pm 0.00042$     $20.81 \pm 0.34$   $3.56$
  $5_2$           $0.07741 \pm 0.00046$     $21.91 \pm 0.22$   $3.02$
  $6_1$           $0.02234 \pm 0.00029$     $34.30 \pm 0.56$   $3.96$
  $6_2$           $0.02389 \pm 0.00050$     $35.97 \pm 0.82$   $7.82$
  $6_3$           $0.01580 \pm 0.00037$     $40.8 \pm 1.1$     $5.56$
  $7_1$           $0.003022 \pm 0.000038$   $47.02 \pm 0.79$   $0.42$
  $7_2$           $0.00665 \pm 0.00010$     $47.52 \pm 0.82$   $0.76$
  $7_3$           $0.00538 \pm 0.00011$     $48.0 \pm 1.2$     $1.17$
  $7_4$           $0.002866 \pm 0.000071$   $50.2 \pm 1.5$     $0.90$
  $7_5$           $0.00778 \pm 0.00014$     $51.22 \pm 0.99$   $1.83$
  $7_6$           $0.009159 \pm 0.000071$   $52.02 \pm 0.44$   $0.41$
  $7_7$           $0.00624 \pm 0.00021$     $55.6 \pm 1.7$     $1.80$

  : Best estimates of the parameters in Eq. (\[eq:CKprime\]), which expresses the coefficients $C_K$ of the prime knots other than the trefoil knot as a function of cylindrical radius $r_{\rm ex}$. []{data-label="Tab006"}

In Table \[Tab006\] we observe that as the minimal crossing number of knot $K$ increases, the best estimate of $a_1(K)$ becomes smaller while that of $b_1(K)$ becomes larger.
That is, if a knot $K$ is more complex than a given fixed knot, the knotting probability of the knot $K$ is smaller than that of the given fixed knot. For the trefoil knot $3_1$ we express the coefficient of the knotting probability, $C_{3_1}$, as a function of cylindrical radius $r_{\rm ex}$ by the following function: $$C_{3_1}(r_{\rm ex}) = a_0(3_1) (1 -a_1(3_1) \exp(- b_1(3_1) r_{\rm ex})) \, . \label{eq:C31}$$ The coefficient of the trefoil knot $C_{3_1}$ approaches a constant value $a_0(3_1) \approx 0.92$ exponentially with respect to cylindrical radius $r_{\rm ex}$. The best estimates are given by $a_0(3_1)= 0.919 \pm 0.003$, $a_1(3_1)=0.327 \pm 0.002$ and $b_1(3_1)= 33.1 \pm 0.8$ with $\chi^2/{\rm DF}= 1.8$. We conclude that the fitted curve is good since the $\chi^2$ value per DF is less than 2.0. The coefficient $C_{3_1}$ of the trefoil knot increases gradually from $C_{3_1}=0.62$ at $r_{\rm ex}=0$ to $C_{3_1}=0.91$ at $r_{\rm ex}=0.1$ as cylindrical radius $r_{\rm ex}$ increases. Thus, the coefficient $C_{3_1}$ becomes large relative to those of the other prime knots as cylindrical radius $r_{\rm ex}$ becomes large. As an illustration, the ratios among the coefficients $C_K$ at two values of the cylindrical radius are given as follows: $$C_{3_1}:C_{4_1}:C_{5_1}:C_{5_2} \sim 14 : 3 : 1 : 1.8, \quad {\rm for} \, r_{\rm ex}=0 \label{Eq007}$$ $$C_{3_1}:C_{4_1}:C_{5_1}:C_{5_2} \sim 119 : 7 : 1 : 1.4, \quad {\rm for} \, r_{\rm ex}=0.1 \label{Eq008}$$ Thus, the maximum probability of knot $3_1$ is $119$ times larger than that of knot $5_1$ in the case of $r_{\rm ex}=0.1$. We thus suggest that the maximum value of the knotting probability of knot $3_1$ does not decrease even if the excluded volume becomes very large.
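As a numerical cross-check, evaluating Eq. (\[eq:C31\]) with the best-estimate parameters quoted above reproduces the two values of $C_{3_1}$ mentioned in the text. The following Python sketch is illustrative only; the parameter values are the best estimates of the fit:

```python
import math

# Best-estimate parameters of Eq. (eq:C31) for the trefoil knot 3_1,
# as quoted in the text: a_0 = 0.919, a_1 = 0.327, b_1 = 33.1.
a0, a1, b1 = 0.919, 0.327, 33.1

def C_trefoil(r_ex):
    """Coefficient C_{3_1} as a function of cylindrical radius r_ex."""
    return a0 * (1.0 - a1 * math.exp(-b1 * r_ex))

# Reproduces the values quoted in the text:
print(round(C_trefoil(0.0), 2))  # 0.62
print(round(C_trefoil(0.1), 2))  # 0.91
```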
Here we recall that the maximum knotting probability of a prime knot $K$ is almost determined by the coefficient $C_K$, and that the coefficient of knot $3_1$ increases gradually as the cylindrical radius $r_{\rm ex}$ increases; hence a large number of SAPs equivalent to knot $3_1$ or its composite knots are generated when the cylindrical radius $r_{\rm ex}$ is large. When the cylindrical radius is large, such as $r_{\rm ex}=0.1$, the majority of nontrivial knots are given by the trefoil knot and its composite knots.

Factorization of knot coefficients for composite knots
-------------------------------------------------------

We now show numerically that the factorization property of knot coefficients $C_K$ holds for composite knots $K$ consisting of two prime knots ($n=2$) and of three prime knots ($n=3$), respectively. We shall show it for the best estimates of the knot coefficients $C_K$ of the fitted curves given by Eq. (\[eq:4formula\]) applied to the data of the knotting probabilities. Let us recall the factorization property of knot coefficients $C_K$ explicitly in the case of $n=2$: If a knot $K$ is given by the product of two different prime knots $K_1$ and $K_2$, the coefficient of the composite knot $K=K_1 \# K_2$ is given by $$C_{K_1 \# K_2} = C_{K_1} C_{K_2} , \label{eq:fac1}$$ while if it consists of the product of a pair of the same prime knot $K_1$, the coefficient of the composite knot $K_1 \# K_1$ is given by $$C_{K_1 \# K_1} = {C_{K_1}^2}/{2 !} \, . \label{eq:fac2}$$ Assuming the factorization properties (\[eq:fac1\]) and (\[eq:fac2\]) for the coefficients $C_K$ of composite knots $K=K_1 \# K_2$ consisting of two prime knots $K_1$ and $K_2$, we can evaluate numerically the coefficients $C_K$ of the composite knots $K$. Here we make use of expressions (\[eq:CKprime\]) and (\[eq:C31\]) as functions of cylindrical radius $r_{\rm ex}$ and take their products.
For instance, in the case of the composite knot of two figure-eight knots $K=4_1 \# 4_1$ we evaluate the coefficient $C_{4_1 \# 4_1}$ by $$C_{4_1 \# 4_1}(r_{\rm ex}) = a_1(4_1)^2 \exp\left( - 2 b_1(4_1) r_{\rm ex} \right)/2! . \label{eq:ab4141}$$ We thus have two methods for evaluating knot coefficients $C_K$ for composite knots $K$ consisting of two prime knots $K_1$ and $K_2$. In the first method, by applying Eq. (\[eq:4formula\]) we derive fitted curves to the data points of the knotting probabilities of composite knots $K=K_1 \# K_2$, and evaluate the coefficients $C_K$ by the best estimates of the fitting parameters of Eq. (\[eq:4formula\]). In the second method, we evaluate the knot coefficients $C_K$ for composite knots consisting of prime knots $K_1$ and $K_2$ by taking the product of the coefficients $C_{K_1}$ and $C_{K_2}$ given by (\[eq:CKprime\]) and (\[eq:C31\]) as functions of cylindrical radius.

![Coefficient $C_K$ of a composite knot $K=K_1 \# K_2$ versus cylindrical radius $r_{\rm ex}$. The data points for knots $3_1 \# 3_1$, $3_1 \# 4_1$, $3_1 \# 5_1$, $3_1 \# 5_2$, and $4_1 \# 4_1$ are depicted by filled circles, diamonds, upper triangles, stars, and squares, respectively. They are obtained from the fitted curves given by Eq. (\[eq:4formula\]). To each composite knot $K=K_1 \# K_2$ the dotted curve is given by the product of $C_{K_1}$ and $C_{K_2}$ as functions of cylindrical radius $r_{\rm ex}$ given in Eqs. (\[eq:CKprime\]) and (\[eq:C31\]). []{data-label="figCKCK"}](FigKP7.pdf){width="1.0\hsize"}

In Fig. \[figCKCK\] we plot the estimates of the coefficient $C_K$ of a composite knot $K=K_1 \# K_2$ consisting of two prime knots $K_1$ and $K_2$ against cylindrical radius $r_{\rm ex}$. They are shown by the data points in Fig. \[figCKCK\]. Here we recall that the best estimate of $C_K$ is evaluated by applying Eq.
(\[eq:4formula\]) to the knotting probabilities of the composite knot $K$ plotted against segment number $N$ for the cylindrical SAP with a given value of radius $r_{\rm ex}$. To each of the five composite knots, the dotted curve is drawn by making use of the expression as a function of cylindrical radius $r_{\rm ex}$ given by the product of Eqs. (\[eq:CKprime\]) and (\[eq:C31\]). For instance, we recall (\[eq:ab4141\]) for the composite knot of two figure-eight knots. The dotted curves are very close to the data points and almost overlap them, as shown in Fig. \[figCKCK\]. The agreement between the results of the two methods is quite remarkable, since the dotted curves contain no fitting parameters. We have thus numerically confirmed that the factorization properties (\[eq:fac1\]) and (\[eq:fac2\]) for $n=2$ hold among the best estimates of coefficients $C_K$ for the cylindrical SAP with various values of the cylindrical radius.

![Coefficients $C_K$ for composite knots $K$ consisting of three prime knots: $K_1 \# K_2 \# K_3$ versus cylindrical radius $r_{\rm ex}$. The data points for knots $3_1 \# 3_1 \# 3_1$, $3_1 \# 3_1 \# 4_1$, $3_1 \# 3_1 \# 5_1$, $3_1 \# 3_1 \# 5_2$, and $3_1 \# 4_1 \# 4_1$ are depicted by filled circles, diamonds, upper triangles, stars, and squares, respectively. They are evaluated by applying Eq. (\[eq:4formula\]) to the data of the knotting probabilities. The dotted curves are given by taking the product of the coefficients $C_K$ of the constituent prime knots. For instance, the coefficient $C_{3_1 \# 3_1 \# 3_1}$ is calculated as $\left( C_{3_1}\right)^3/3!$. []{data-label="figCKCKCK"}](FigKP8.pdf){width="1.0\hsize"}

For composite knots $K=K_1 \# K_2 \# K_3$ consisting of three prime knots, we have the same two methods for evaluating the knot coefficients $C_K$. First, by applying Eq.
(\[eq:4formula\]) we derive fitted curves to the data points of the knotting probabilities of composite knots $K=K_1 \# K_2 \# K_3$ plotted against segment number $N$, and evaluate the coefficients $C_K$ as the best estimates of the fitting parameters. Second, we evaluate $C_K$ by taking the product of $C_{K_1}$, $C_{K_2}$ and $C_{K_3}$ given by Eqs. (\[eq:CKprime\]) and (\[eq:C31\]). In Fig. \[figCKCKCK\] we plot the estimates of the coefficient $C_K$ of a composite knot $K=K_1 \# K_2 \# K_3$ consisting of three prime knots $K_1$, $K_2$ and $K_3$ against cylindrical radius $r_{\rm ex}$. Here we recall that the best estimate of $C_K$ for a composite knot $K$ is obtained by applying Eq. (\[eq:4formula\]) to the knotting probability of the composite knot $K$ for the cylindrical SAP with a given value of radius $r_{\rm ex}$, and it is shown by a data point in Fig. \[figCKCKCK\]. To each of the composite knots $K$ the dotted curve is drawn by making use of the expressions as functions of cylindrical radius $r_{\rm ex}$ given by Eqs. (\[eq:CKprime\]) and (\[eq:C31\]). The dotted curves are computed from the coefficients of the constituent prime knots as functions of cylindrical radius $r_{\rm ex}$, and contain no parameters fitted to the data points of Fig. \[figCKCKCK\]. Nevertheless, the dotted curves are very close to the data points. We have thus numerically confirmed that the factorization properties (\[eq:fact3\]) for $n=3$ hold in the cylindrical SAP for various values of cylindrical radius.
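The factorized evaluation used for the dotted curves can be sketched as follows. This is an illustrative Python sketch, not the authors' code; the parameter values are the best estimates of Eqs. (\[eq:C31\]) and (\[eq:CKprime\]) quoted above, and the resulting numbers only approximately reproduce the directly fitted coefficients.

```python
import math
from math import factorial

# Best-estimate parameters quoted in the text and in Table Tab006.
def C_31(r):   # Eq. (eq:C31), trefoil knot
    return 0.919 * (1.0 - 0.327 * math.exp(-33.1 * r))

def C_41(r):   # Eq. (eq:CKprime), figure-eight knot
    return 0.1357 * math.exp(-8.82 * r)

def C_composite(prime_coeffs, multiplicities, r):
    """Factorized coefficient of a composite knot, Eqs. (eq:fac1)-(eq:fact3):
    product over constituent prime knots of C^{n}/n!."""
    c = 1.0
    for C, n in zip(prime_coeffs, multiplicities):
        c *= C(r) ** n / factorial(n)
    return c

# C_{4_1 # 4_1} = C_{4_1}^2 / 2!  and  C_{3_1 # 3_1 # 3_1} = C_{3_1}^3 / 3!
c4141 = C_composite([C_41], [2], 0.0)
c313131 = C_composite([C_31], [3], 0.0)
print(c4141, c313131)  # approximately 0.0092 and 0.0394 at r_ex = 0
```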
Various properties of the fitting parameters for knotting probability
=====================================================================

Thickness dependence of the knotting probability in the small segment number region
-----------------------------------------------------------------------------------

We show the connection of the thickness dependence of knot coefficients $C_K$ to the gradient of the knotting probability as a linear function of segment number $N$ in the small-$N$ region studied in Refs. [@Rybenkov; @Shaw-Wang]. If the number of segments $N$ is much smaller than the characteristic length of the knotting probability $N_0$, then the four-parameter formula is approximated by a linear function of $N$ as Eq. (\[eq:small-N\]). Here, the gradient of the graph as a function of $N$ is given by $C_K/N_K$, and it is approximately expressed as a function of cylindrical radius $r_{\rm ex}$ as $${\frac {C_{3_1} (r_{\rm ex} )} { N_{3_1}(r_{\rm ex}) } } = \frac { a_0(3_1) (1- a_1(3_1) \exp(- b_1(3_1) r_{\rm ex}) ) } {c_0(3_1) + c_1(3_1) \exp(d_1(3_1) r_{\rm ex}) } \label{eq:grad31}$$ for the trefoil knot and $${\frac {C_K(r_{\rm ex} )} {N_K(r_{\rm ex} ) }} = \frac {a_1(K) \exp(- b_1(K) r_{\rm ex})} {c_0(K) + c_1(K) \exp(d_1(K) r_{\rm ex})} \label{eq:gradient}$$ for prime knots other than the trefoil knot. In Refs. [@Rybenkov; @Shaw-Wang] the graph of the knotting probability of a knot against segment number $N$ is expressed as a linear function of $N$, and its gradient is approximated by an exponentially decaying function of the diameter of cylindrical segments. This gradient corresponds to the ratio $C_K/N_K$ in the present paper. The thickness dependence of the gradient $C_K/N_K$ expressed as Eq. (\[eq:gradient\]) is more complex than an exponentially decaying function. We can show that the expressions of Eqs. (\[eq:grad31\]) and (\[eq:gradient\]) for the gradient $C_K/N_K$ generalize the thickness dependence given in Refs.
[@Rybenkov; @Shaw-Wang] valid in the small-$N$ region to much wider ranges of segment number $N$: The expressions (\[eq:grad31\]) and (\[eq:gradient\]) coincide with the approximate exponential dependence of the knotting probability on the thickness of cylindrical segments shown in Refs. [@Rybenkov; @Shaw-Wang] for small $N$, and they are valid also for large $N$. Let us express the approximate exponential dependence as $\exp( - \gamma_K r_{\rm ex})$ in terms of cylindrical radius $r_{\rm ex}$. In Ref. [@Rybenkov] the decay constants $\gamma_K$ are estimated as 44, 62, 84 and 84 for the trefoil knot $3_1$, the figure-eight knot $4_1$, knot $5_1$ and knot $5_2$, respectively. It is easy to show explicitly that the graph of $C_K(r_{\rm ex} ) / N_K (r_{\rm ex} )$ versus cylindrical radius $r_{\rm ex}$ almost completely overlaps with that of $ \left( C_K(0) / N_K (0) \right) \exp( - \gamma_K r_{\rm ex}) $ versus cylindrical radius $r_{\rm ex}$ for each knot $K$ among the four prime knots: $3_1$, $4_1$, $5_1$ and $5_2$.

Connection to lattice knots: Universal ratios of knotting probabilities of prime knots
---------------------------------------------------------------------------------------

It is numerically shown in Ref. [@Rechnitzer] that the asymptotic behavior of the ratio of knotting probabilities does not depend on the types of lattices for some knots. It is suggested that the ratio of the knotting probability of a knot $K_1$ to that of a knot $K_2$ is given by a universal value for each pair of knots $K_1$ and $K_2$. From the viewpoint of formula (\[eq:4formula\]) the knotting probability ratio in the large-$N$ limit is expressed in terms of the ratio of coefficients $C_K$, provided that the two knots share the same exponent $m(K)$ and characteristic length $N_K$: $$\begin{aligned} {\frac {P(N, r_{\rm ex}, K_1)} {P(N, r_{\rm ex}, K_2)}} = \frac {C_{K_1}} {C_{K_2}} . \end{aligned}$$ Let us now apply Eq. (\[eq:CKprime\]), which expresses the thickness dependence of the knot coefficients $C_K(r_{\rm ex})$.
Here we recall that the best estimates of the parameters $a_1(K)$ and $b_1(K)$ are given in Table \[Tab006\]. The knotting probability ratio of $4_1$ to $5_1$ is given by 15 in Ref. [@Rechnitzer]. Applying formula (\[eq:CKprime\]) we have $$\frac {C_{4_1}} {C_{5_1}} = \frac {0.136 \exp(-8.8 \, r_{\rm ex})} {0.044 \exp( - 21.0 \, r_{\rm ex})} = 3.09 \exp(12.2 \, r_{\rm ex}) \, .$$ By equating it to the ratio 15 we obtain the estimate of the corresponding cylindrical radius $$r_{\rm ex} = 0.13 .$$ Similarly, for the knotting probability ratio of $4_1$ to $5_2$, by setting $C_{4_1}/C_{5_2}= 9$ we obtain $$r_{\rm ex} = 0.12 .$$ It is interesting to note that both values of the radius $r_{\rm ex}$ are close to 1/8; here the diameter $2 r_{\rm ex}$ is given by 1/4. We suggest that the results of the cylindrical SAP with $r_{\rm ex}=1/8$ are consistent with those of the lattice SAP. We conjecture that if we evaluate the statistical length of the SAP on a lattice, the characteristic length of the knotting probability for the lattice SAP corresponds to that of the off-lattice SAP with $r_{\rm ex}=1/8$.

Sum rules of knot coefficients and factorization properties
-----------------------------------------------------------

We now argue that knot coefficients $C_K$ satisfy simple relations, which we call sum rules. Furthermore, we show that the sum rules are consistent with the factorization properties of coefficients $C_K$ for composite knots. Let us classify all knots into classes of knots consisting of $n$ prime knots. For instance, we have $n=1$ for prime knots, $n=2$ for composite knots consisting of two prime knots, etc. Suppose that a given composite knot $K$ consists of $n$ prime knots. Then, we denote the number of constituent prime knots $n$ by $|K|$, i.e. $|K|=n$.
For simplicity, we assume that the characteristic lengths $N_K$ for all knots $K$ are given by the same number $N_0$, that the exponent $m(K)$ is given by the number of constituent prime knots of $K$, and that the finite-size corrections $\Delta N(K)$ vanish. We remark that the sum of the knotting probabilities over all knots is given by 1. We therefore have $$\begin{aligned} 1 & = & \sum_{n=0}^{\infty} \sum_{|K|=n} P_K(N) \nonumber \\ & = & \sum_{n=0}^{\infty} \sum_{|K|=n} C_K x^{n} \exp(-x) \, . \label{eq:sumPK} \end{aligned}$$ Here variable $x$ is defined by $x=N/N_0$. The symbol $\sum_{|K|=n}$ denotes the sum over all composite knots that consist of $n$ prime knots. It follows from Eq. (\[eq:sumPK\]) that we have $$e^{x} = \sum_{n=0}^{\infty} x^{n} \sum_{|K|=n} C_K \, .$$ Through the Taylor expansion of the exponential function we have an infinite number of conditions for the knot coefficients $C_K$: $$\sum_{|K|=n} C_K = 1/n ! . \label{eq:sum-rule}$$ That is, the sum of the knot coefficients over the knots consisting of $n$ prime knots is given by $1/n!$. In the case of $n=2$, by expressing the composite knot $K$ as $K=K_1 \# K_2$ we have $$\begin{aligned} & & \sum_{|K|=2} C_K = \sum_{K_1<K_2: prime} C_{K_1 \# K_2} + \sum_{K_1 =K_2: prime} C_{K_1 \# K_2} \nonumber \\ & = & {\frac 1 2} \sum_{K_1: prime} \sum_{K_2 \ne K_1: prime} C_{K_1 \# K_2} + \sum_{K_1 =K_2: prime} C_{K_1 \# K_2} \nonumber \\\end{aligned}$$ If we assume that $ C_{K_1 \# K_2} = C_{K_1} C_{K_2}$ for different constituent knots $K_1 \ne K_2$ and $ C_{K_1 \# K_1} = C_{K_1}^2 /2! $ for a pair of the same knot, then we have $$\sum_{|K|=2} C_K = \left( \sum_{K_1: prime} C_{K_1} \right)^2 / 2!$$ Thus, the sum rule for $n=2$ (i.e., the condition for $n=2$) is derived if the sum of the coefficients $C_{K_1}$ over all the prime knots $K_1$ is given by 1: $$\sum_{K_1: prime} C_{K_1} = 1 . \label{eq:sum-prime}$$ We call Eq. (\[eq:sum-prime\]) the sum rule for $n=1$.
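The combinatorial step above can be checked numerically: for any set of "prime" coefficients, the factorized coefficients of all $n$-fold composites sum to $\left(\sum_{K_1} C_{K_1}\right)^n/n!$. The following sketch assumes only the factorization rule; the coefficient values are arbitrary toy numbers, not the fitted ones:

```python
from itertools import combinations_with_replacement
from math import factorial, isclose
from collections import Counter

# Toy coefficients of "prime knots"; their sum plays the role of
# sum_{K_1: prime} C_{K_1} in Eq. (eq:sum-prime).
primes = [0.62, 0.14, 0.04, 0.08, 0.02]
s = sum(primes)

def composite_sum(n):
    """Sum of factorized coefficients over all composites of n primes.

    Each composite is a multiset of prime indices; its coefficient is
    the product of C^{m}/m! over the multiplicities m, as in the
    factorization property for composite knots."""
    total = 0.0
    for multiset in combinations_with_replacement(range(len(primes)), n):
        term = 1.0
        for idx, mult in Counter(multiset).items():
            term *= primes[idx] ** mult / factorial(mult)
        total += term
    return total

# The multinomial identity behind the sum rules:
for n in (2, 3, 4):
    assert isclose(composite_sum(n), s ** n / factorial(n))
```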
If a composite knot $K$ consists of $n$ prime knots where $n= n_1 + n_2 + \cdots $ and $n_1$ knots are given by the same prime knot $K_1$, $n_2$ knots by $K_2$, etc., we have the factorization properties (\[eq:fact3\]) as follows. $$C_K = \left( C_{K_1} \right) ^{n_1}/n_1 ! \, \cdot \, \left( C_{K_2} \right) ^{n_2}/n_2 !\cdots . \label{eq:facn}$$ Then, the sum of coefficients $C_K$ over all composite knots consisting of $n$ prime knots is expressed in terms of the sum of coefficients over all prime knots as $$\sum_{|K|=n} C_K = \left( \sum_{K_1: prime} C_{K_1} \right)^n / n !$$ It follows that all the conditions (\[eq:sum-rule\]) are derived from the condition (\[eq:sum-prime\]) that the sum of coefficients $C_{K_1}$ over all prime knots $K_1$ is given by 1.

  $r_{\rm ex}$   Sum of prime knot coefficients: $\sum_{|K|=1} C_K$
  -------------- ----------------------------------------------------
  0.0            $0.9737 \pm 0.0049$
  0.005          $0.9855 \pm 0.0050$
  0.01           $0.9967 \pm 0.0038$
  0.02           $1.0021 \pm 0.0027$
  0.03           $1.0056 \pm 0.0029$
  0.04           $1.0051 \pm 0.0029$
  0.05           $0.9990 \pm 0.0022$
  0.06           $0.9969 \pm 0.0024$
  0.08           $0.9921 \pm 0.0049$
  0.1            $1.0039 \pm 0.0166$

  : The sum of coefficients $C_K$ over prime knots for cylindrical SAPs with ten different values of cylindrical radius $r_{\rm ex}$. We take the sum over the prime knots with up to seven crossings from $r_{\rm ex}=0$ to $r_{\rm ex}=0.04$, up to six crossings for $r_{\rm ex}=0.05, 0.06$, and up to five crossings for $r_{\rm ex}=0.08, 0.10$. []{data-label="tab:sumrule"}

Let us confirm the sum rule (\[eq:sum-prime\]) numerically. In Table \[tab:sumrule\] we present the sum of the best estimates of coefficients $C_K$ over the prime knots with up to some number of crossings, for each of the ten values of the cylindrical radius $r_{\rm ex}$. We observe that the sum rule (\[eq:sum-prime\]) is numerically satisfied within errors by the coefficients $C_K$ of the prime knots we have investigated.
It is impressive to see how well the sum rule holds within errors, although we do not assume that $m(K)=1$ or $\Delta N(K)= 0$ for all prime knots. Here we recall that the sum rule is derived from the knotting probability formula (\[eq:4formula\]) by assuming some properties of the fitting parameters, such as the factorization properties of exponents $m(K)$ and coefficients $C_K$. We suggest that the confirmation of the sum rule for prime knots (i.e., $n=1$) shown in Table \[tab:sumrule\] gives numerical support for the validity of the local knot picture in the knotting probability. Here we assume that the factorization properties of knot exponents and knot coefficients for the knotting probability are derived from the local knot picture, in which the knotted region of a knotted SAP is localized within the whole configuration of the SAP.

Asymptotic expansion of the knotting probability
================================================

We now investigate how far the three-parameter asymptotic formula (\[eq:3formula\]) is effective for describing the knotting probability as a function of the segment number $N$. Here we recall that the four-parameter fitting formula (\[eq:4formula\]) is derived by modifying the asymptotic formula (\[eq:3formula\]), and also that it corresponds to the asymptotic expansion (\[eq:ZK\]) of the logarithm of the partition function $Z_K(N)$ of an RP or SAP with fixed knot $K$ with respect to the inverse of the segment number $N$. We also recall that the ratio of the partition function $Z_K(N)$ to the partition function $Z(N)$ without topological constraint corresponds to the knotting probability $P_K(N)$ of the knot $K$. It seems that the results of the asymptotic formula are closer to those of on-lattice models of SAP than those of the four-parameter formula (\[eq:4formula\]).
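The difference between the three- and four-parameter forms can be illustrated numerically. In the sketch below we write $P(N)=C\,\tilde{N}^{m}\exp(-\tilde{N})$ with $\tilde{N}=(N-\Delta N)/N_K$, where $\Delta N=0$ gives the three-parameter asymptotic form; the parameter values are illustrative numbers of the order of the trefoil estimates, not fitted ones:

```python
import math

def P(N, C, m, N_K, dN=0.0):
    """Knotting-probability formula: dN = 0 gives the three-parameter
    asymptotic form, dN > 0 the four-parameter form with the
    finite-size correction."""
    x = (N - dN) / N_K
    return C * x ** m * math.exp(-x)

# Illustrative parameters of the order of the trefoil estimates.
C, m, N_K, dN = 0.62, 1.0, 250.0, 40.0

def rel_diff(N):
    """Relative difference of the two forms at segment number N."""
    p3, p4 = P(N, C, m, N_K), P(N, C, m, N_K, dN)
    return abs(p3 - p4) / p4

# The finite-size correction matters much more at small N:
print(rel_diff(100.0), rel_diff(1000.0))  # about 0.42 versus 0.11
```

This makes concrete why the four-parameter formula is preferable in the range $N=100$ to $N=1000$, while the asymptotic form suffices at large $N$.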
We apply the asymptotic formula (\[eq:3formula\]) to the data of the knotting probabilities of various knots plotted against the number of segments $N$ for the cylindrical SAP with several different values of cylindrical radius $r_{\rm ex}$. The best estimates of the parameters of Eq. (\[eq:3formula\]) are listed in Tables \[tab:3para3141\] and \[tab:3para5152\] in Appendix B for the four knots: $3_1$, $4_1$, $5_1$ and $5_2$. We find that the $\chi^2$ values of the fitted curves are larger than those of the four-parameter formula (\[eq:4formula\]), but they are not very large. Furthermore, the estimates of the exponent $m(K)$ for prime knots are closer to 1 than those of the four-parameter formula (\[eq:4formula\]). Here we recall that the estimates of the exponent $m(K)$ are given by integers for on-lattice SAP. We suggest that the finite-size effect is significant in the knotting probability for off-lattice models of RP or SAP. If we neglect the small-$N$ region and consider only the large-$N$ region, we expect that the knotting probability is well approximated by the asymptotic formula (\[eq:3formula\]). However, for the off-lattice models with the number of segments from $N=100$ to $N=1000$, the finite-size effects are important, and hence the knotting probability is well approximated by the four-parameter formula (\[eq:4formula\]), in which we take into account corrections due to the finite-size effect.

Concluding Remarks
==================

The knotting probability is an important quantity characterizing the topological properties of knotted ring polymers in solution. In particular, the characteristic length of the knotting probability plays a central role not only in the knotting probability itself but also in the scaling behavior of the RP or SAP under a topological constraint. We have studied the dependence of the knotting probability on the thickness of cylindrical segments for the cylindrical SAP.
We have numerically shown that the maximum value of the knotting probability of the trefoil knot increases with respect to the radius of cylindrical segments, $r_{\rm ex}$, while those of the other prime knots decrease exponentially with respect to it. Here we recall that cylindrical radius $r_{\rm ex}$ is the excluded-volume parameter of the cylindrical SAP. We expect that the results of the knotting probability for the cylindrical SAP are useful for studying the knotting probabilities of various other models of semi-flexible ring polymers. Through the results of the cylindrical SAP we can predict possible topological effects in other models of RP or SAP. For instance, the cylindrical SAP model interpolates between the off-lattice RP or SAP and the lattice SAP. In the case of zero radius, i.e. for $r_{\rm ex}=0$, the cylindrical SAP model reduces to the model of equilateral RP. For $r_{\rm ex}=1/8$ it becomes rather close to the lattice SAP.

Acknowledgements {#acknowledgements .unnumbered}
================

One of the authors (T.D.) would like to thank many participants of the Workshop on Knots and Links in Biological and Soft Matter Systems, September 26-30, 2016, ICTP, Trieste, Italy, for helpful discussions. The present research is partially supported by the Grant-in-Aid for Scientific Research No. 26310206.

Best estimates of fitting parameters for some knots
===================================================

For the trivial knot
--------------------

We denote the knotting probability of the trivial knot for the cylindrical SAP of $N$ hard cylindrical segments with radius $r_{\rm ex}$ by $P_{0_1}(N, r_{\rm ex})$. We also call it the unknotting probability of the SAP. It was shown that the unknotting probability decays exponentially with respect to segment number $N$ for a model of random polygons [@Michels-Wiegel] and for the bead-rod model of SAP with different bead radii [@Koniaris-Muthukumar].
We express the unknotting probability as a function of $N$ most simply by $$P_{0_1}(N,r_{\rm ex})=C_0\exp(-{N}/{N_0}) \, . \label{eq:unknot}$$ There are two fitting parameters in Eq. (\[eq:unknot\]): $N_0$ and $C_0$, which are different from the four parameters employed in Ref. [@UD2015]. The best estimates of $N_0$ and $C_0$ are listed in Table \[tab:knot01\]. We call $N_{0}$ the characteristic length for the knotting probability of the trivial knot and $C_0$ the coefficient of the knotting probability of the unknot.

  $r_{\rm ex}$   $C_0$                   $N_{0}$             $\chi^2/{\rm DF}$
  -------------- ----------------------- ------------------- -------------------
  0              $1.0531 \pm 0.0034$     $245.89 \pm 0.58$   $2.36$
  0.005          $1.0045 \pm 0.0061$     $356.0 \pm 1.5$     $9.01$
  0.01           $0.9909 \pm 0.0056$     $483.4 \pm 2.1$     $10.64$
  0.02           $0.9815 \pm 0.0039$     $817.7 \pm 3.0$     $8.38$
  0.03           $0.9852 \pm 0.0026$     $1290.2 \pm 4.0$    $5.55$
  0.04           $0.9865 \pm 0.0017$     $1965.6 \pm 4.8$    $3.12$
  0.05           $0.9905 \pm 0.0012$     $2903.1 \pm 6.0$    $1.90$
  0.06           $0.99262 \pm 0.00099$   $4205. \pm 10.$     $1.54$
  0.08           $0.99648 \pm 0.00079$   $8290. \pm 23.$     $0.80$
  0.1            $0.99789 \pm 0.00044$   $15348. \pm 47.$    $0.43$

  : Best estimates of the parameters in Eq. (\[eq:unknot\]) for the knotting probability of the trivial knot (the unknot) $0_1$ and the $\chi^2$ values per DF. []{data-label="tab:knot01"}

For nontrivial knots
--------------------

![Knotting probability of composite knot $3_1 \sharp 3_1 \sharp 3_1$ versus the number of segments $N$. []{data-label="fig:313131"}](FigKP9.pdf){width="0.80\hsize"}

In Fig. \[fig:313131\] the knotting probabilities of the composite knot $3_1 \sharp 3_1 \sharp 3_1$ are plotted against segment number $N$ for different values of radius $r_{\rm ex}$. The best estimates of the parameters for Eq. (\[eq:4formula\]) are listed in Table \[tab:313131\].
[c|ccccc]{} $r_{\rm ex}$ & $C_K$ & $m(K)$ & $N_{K}$ & $\Delta N(K)$ & $\chi^2/{\rm DF}$\ \ 0 & $0.04376 \pm 0.00038$ & $0.897 \pm 0.059$ & $255.8 \pm 5.5$ & $38.3 \pm 7.7$ & $1.13$\ 0.005 & $0.03952 \pm 0.00057$ & $0.798 \pm 0.061$ & $386. \pm 11.$ & $45.8 \pm 9.0$ & $1.71$\ 0.01 & $0.03595 \pm 0.00021$ & $0.944 \pm 0.038$ & $495.4 \pm 8.5$ & $25.4 \pm 6.6$ & $0.57$\ 0.02 & $0.02899 \pm 0.00026$ & $0.923 \pm 0.043$ & $843. \pm 21.$ & $32.9 \pm 9.2$ & $1.36$\ 0.03 & $0.02307 \pm 0.00021$ & $0.895 \pm 0.032$ & $1343. \pm 33.$ & $43.2 \pm 7.7$ & $0.95$\ 0.04 & $0.01881 \pm 0.00024$ & $0.929 \pm 0.041$ & $2006. \pm 74.$ & $41. \pm 12.$ & $1.58$\ 0.05 & $0.01535 \pm 0.00015$ & $0.932 \pm 0.028$ & $2974. \pm 95.$ & $49.1 \pm 8.5$ & $0.71$\ 0.06 & $0.01283 \pm 0.00024$ & $0.938 \pm 0.044$ & $4360. \pm 280.$ & $55. \pm 14.$ & $1.49$\ 0.08 & $0.0095 \pm 0.0003$ & $1.022 \pm 0.067$ & $7800. \pm 1100.$ & $31. \pm 26.$ & $0.93$\ 0.1 & $0.0074 \pm 0.0011$ & $1.086 \pm 0.095$ & $13000. \pm 4300.$ & $57. \pm 37.$ & $1.41$\ \ 0 & $0.07693 \pm 0.00041$ & $0.892 \pm 0.034$ & $250.8 \pm 3.2$ & $41.8 \pm 4.3$ & $0.70$\ 0.005 & $0.06943 \pm 0.00048$ & $0.871 \pm 0.039$ & $371.0 \pm 6.1$ & $38.2 \pm 5.8$ & $1.04$\ 0.01 & $0.06306 \pm 0.00032$ & $0.959 \pm 0.033$ & $482.1 \pm 7.1$ & $26.5 \pm 5.6$ & $0.75$\ 0.02 & $0.04985 \pm 0.00029$ & $0.906 \pm 0.027$ & $844. \pm 14.$ & $35.0 \pm 5.6$ & $0.90$\ 0.03 & $0.03959 \pm 0.00032$ & $0.92 \pm 0.031$ & $1339. \pm 32.$ & $41.1 \pm 7.5$ & $1.45$\ 0.04 & $0.03200 \pm 0.00019$ & $0.951 \pm 0.021$ & $1971. \pm 37.$ & $35.5 \pm 6.3$ & $0.66$\ 0.05 & $0.02578 \pm 0.00020$ & $0.934 \pm 0.023$ & $3023. \pm 80.$ & $46.7 \pm 7.4$ & $0.79$\ 0.06 & $0.02119 \pm 0.00029$ & $0.938 \pm 0.033$ & $4340. \pm 210.$ & $55. \pm 11.$ & $1.36$\ 0.08 & $0.01431 \pm 0.00043$ & $0.942 \pm 0.054$ & $8100. \pm 1000.$ & $63. \pm 16.$ & $1.51$\ 0.1 & $0.0108 \pm 0.0010$ & $1.022 \pm 0.064$ & $13300. \pm 3100.$ & $67. 
\pm 23.$ & $1.11$\ \ 0 & $0.02255 \pm 0.00026$ & $0.808 \pm 0.045$ & $271.4 \pm 5.3$ & $60.4 \pm 5.$ & $0.72$\ 0.005 & $0.01888 \pm 0.00026$ & $0.872 \pm 0.063$ & $383. \pm 11.$ & $57.2 \pm 7.5$ & $1.14$\ 0.01 & $0.01596 \pm 0.00011$ & $0.921 \pm 0.034$ & $501.8 \pm 8.7$ & $50.4 \pm 4.7$ & $0.32$\ 0.02 & $0.01092 \pm 0.00016$ & $0.891 \pm 0.056$ & $875. \pm 33.$ & $52. \pm 10.$ & $1.21$\ 0.03 & $0.007856 \pm 0.000080$ & $1.005 \pm 0.049$ & $1243. \pm 43.$ & $45. \pm 11.$ & $0.70$\ 0.04 & $0.00544 \pm 0.00012$ & $0.892 \pm 0.055$ & $2050. \pm 120.$ & $69. \pm 12.$ & $1.26$\ 0.05 & $0.004112 \pm 0.000070$ & $0.987 \pm 0.058$ & $2840. \pm 170.$ & $44. \pm 18.$ & $0.73$\ 0.06 & $0.003126 \pm 0.000086$ & $1.035 \pm 0.082$ & $3960. \pm 410.$ & $41. \pm 29.$ & $1.01$\ \ 0 & $0.02420 \pm 0.00028$ & $0.963 \pm 0.082$ & $249.9 \pm 7.2$ & $38. \pm 10.$ & $1.20$\ 0.005 & $0.02017 \pm 0.00023$ & $0.953 \pm 0.068$ & $360. \pm 10.$ & $44.2 \pm 8.9$ & $1.01$\ 0.01 & $0.01675 \pm 0.00023$ & $0.944 \pm 0.073$ & $492. \pm 18.$ & $48. \pm 10.$ & $1.46$\ 0.02 & $0.01141 \pm 0.00015$ & $0.862 \pm 0.043$ & $859. \pm 26.$ & $62.9 \pm 6.9$ & $0.93$\ 0.03 & $0.00782 \pm 0.00016$ & $0.873 \pm 0.058$ & $1373. \pm 70.$ & $68. \pm 10.$ & $1.61$\ 0.04 & $0.005412 \pm 0.000084$ & $1.036 \pm 0.069$ & $1785. \pm 99.$ & $43. \pm 18.$ & $1.17$\ 0.05 & $0.003926 \pm 0.000086$ & $1.012 \pm 0.077$ & $2710. \pm 220.$ & $52. \pm 22.$ & $1.30$\ 0.06 & $0.003132 \pm 0.000068$ & $1.222 \pm 0.097$ & $3430. \pm 310.$ & $-16. \pm 41.$ & $0.68$\ \ 0 & $0.01593 \pm 0.00024$ & $0.832 \pm 0.065$ & $260.5 \pm 7.2$ & $60.3 \pm 6.9$ & $0.92$\ 0.005 & $0.01319 \pm 0.00020$ & $1.063 \pm 0.084$ & $337. \pm 11.$ & $35. \pm 11.$ & $0.79$\ 0.01 & $0.01057 \pm 0.00017$ & $0.943 \pm 0.086$ & $489. \pm 21.$ & $49. \pm 12.$ & $1.30$\ 0.02 & $0.00666 \pm 0.00014$ & $0.863 \pm 0.068$ & $869. \pm 42.$ & $64. \pm 11.$ & $1.35$\ 0.03 & $0.00430 \pm 0.00011$ & $0.885 \pm 0.084$ & $1378. \pm 92.$ & $43. 
\pm 20.$ & $1.24$\ 0.04 & $0.003081 \pm 0.000080$ & $0.929 \pm 0.074$ & $1960. \pm 140.$ & $68. \pm 16.$ & $1.21$\ 0.05 & $0.002045 \pm 0.000081$ & $0.829 \pm 0.074$ & $3220. \pm 340.$ & $84. \pm 13.$ & $1.34$\ 0.06 & $0.001581 \pm 0.000055$ & $1.22 \pm 0.15$ & $3070. \pm 420.$ & $19. \pm 55.$ & $1.07$\ [c|ccccc]{} $r_{\rm ex}$ & $C_K$ & $m(K)$ & $N_{K}$ & $\Delta N(K)$ & $\chi^2/{\rm DF}$\ \ 0 & $0.003015 \pm 0.000072$ & $0.97 \pm 0.16$ & $256. \pm 15.$ & $43. \pm 19.$ & $0.65$\ 0.005 & $0.00241 \pm 0.000076$ & $0.84 \pm 0.12$ & $371. \pm 22.$ & $71. \pm 11.$ & $0.69$\ 0.01 & $0.00182 \pm 0.00016$ & $1.29 \pm 0.30$ & $431. \pm 47.$ & $12. \pm 44.$ & $1.32$\ 0.02 & $0.001158 \pm 0.000054$ & $1.16 \pm 0.25$ & $744. \pm 95.$ & $24. \pm 49.$ & $1.33$\ 0.03 & $0.000757 \pm 0.000027$ & $1.02 \pm 0.16$ & $1320. \pm 170.$ & $55. \pm 34.$ & $0.88$\ 0.04 & $0.000429 \pm 0.000039$ & $0.84 \pm 0.21$ & $2410. \pm 600.$ & $68. \pm 50.$ & $1.36$\ \ 0 & $0.00677 \pm 0.00019$ & $1.1 \pm 0.15$ & $238. \pm 13.$ & $45. \pm 16.$ & $1.27$\ 0.005 & $0.00534 \pm 0.00022$ & $1.09 \pm 0.20$ & $339. \pm 26.$ & $43. \pm 24.$ & $2.18$\ 0.01 & $0.004117 \pm 0.000069$ & $1.037 \pm 0.091$ & $478. \pm 21.$ & $53. \pm 11.$ & $0.61$\ 0.02 & $0.002458 \pm 0.000082$ & $0.89 \pm 0.12$ & $878. \pm 72.$ & $57. \pm 21.$ & $1.38$\ 0.03 & $0.001609 \pm 0.000052$ & $0.94 \pm 0.11$ & $1360. \pm 120.$ & $72. \pm 18.$ & $1.12$\ 0.04 & $0.001018 \pm 0.000035$ & $1.16 \pm 0.18$ & $1710. \pm 230.$ & $45. \pm 43.$ & $1.43$\ \ 0 & $0.00550 \pm 0.00022$ & $1.16 \pm 0.18$ & $228. \pm 14.$ & $39. \pm 18.$ & $1.17$\ 0.005 & $0.00413 \pm 0.00010$ & $0.95 \pm 0.13$ & $366. \pm 22.$ & $55. \pm 15.$ & $0.99$\ 0.01 & $0.003430 \pm 0.000077$ & $1.04 \pm 0.13$ & $472. \pm 27.$ & $40. \pm 18.$ & $0.81$\ 0.02 & $0.001995 \pm 0.000059$ & $0.845 \pm 0.088$ & $877. \pm 57.$ & $67. \pm 14.$ & $0.75$\ 0.03 & $0.001255 \pm 0.000059$ & $0.84 \pm 0.11$ & $1420. \pm 150.$ & $83. 
\pm 13.$ & $1.28$\ 0.04 & $0.00081 \pm 0.000033$ & $0.927 \pm 0.094$ & $1920. \pm 200.$ & $92.6 \pm 9.4$ & $0.89$\ \ 0 & $0.00278 \pm 0.00018$ & $1.23 \pm 0.25$ & $239. \pm 18.$ & $19. \pm 29.$ & $0.8$\ 0.005 & $0.002228 \pm 0.000062$ & $0.99 \pm 0.18$ & $373. \pm 28.$ & $42. \pm 23.$ & $0.68$\ 0.01 & $0.001783 \pm 0.000050$ & $0.95 \pm 0.16$ & $504. \pm 40.$ & $38. \pm 26.$ & $0.68$\ 0.02 & $0.001011 \pm 0.000030$ & $0.93 \pm 0.10$ & $816. \pm 58.$ & $76. \pm 12.$ & $0.55$\ 0.03 & $0.000659 \pm 0.000027$ & $1.05 \pm 0.19$ & $1180. \pm 160.$ & $62. \pm 32.$ & $1.09$\ 0.04 & $0.000378 \pm 0.000041$ & $1.38 \pm 0.42$ & $1670. \pm 370.$ & $-90. \pm 150.$ & $0.88$\ \ 0 & $0.00772 \pm 0.00016$ & $1.06 \pm 0.12$ & $246. \pm 10.$ & $44. \pm 13.$ & $0.88$\ 0.005 & $0.00625 \pm 0.00014$ & $0.98 \pm 0.12$ & $353. \pm 19.$ & $52. \pm 14.$ & $1.15$\ 0.01 & $0.004647 \pm 0.000075$ & $0.906 \pm 0.071$ & $506. \pm 19.$ & $58.9 \pm 9.$ & $0.49$\ 0.02 & $0.002754 \pm 0.000045$ & $1.053 \pm 0.09$ & $775. \pm 40.$ & $44. \pm 16.$ & $0.61$\ 0.03 & $0.001645 \pm 0.000037$ & $1.12 \pm 0.12$ & $1104. \pm 86.$ & $53. \pm 23.$ & $0.9$\ 0.04 & $0.001065 \pm 0.000036$ & $1.23 \pm 0.16$ & $1580. \pm 170.$ & $67. \pm 37.$ & $1.28$\ \ 0 & $0.00919 \pm 0.00018$ & $1.04 \pm 0.12$ & $248. \pm 10.$ & $46. \pm 12.$ & $1.14$\ 0.005 & $0.00705 \pm 0.00011$ & $0.958 \pm 0.081$ & $366. \pm 13.$ & $54.5 \pm 9.3$ & $0.63$\ 0.01 & $0.00549 \pm 0.00010$ & $1.08 \pm 0.10$ & $468. \pm 22.$ & $44. \pm 14.$ & $0.82$\ 0.02 & $0.003167 \pm 0.000077$ & $1.12 \pm 0.14$ & $738. \pm 51.$ & $22. \pm 28.$ & $1.12$\ 0.03 & $0.001907 \pm 0.000049$ & $1.036 \pm 0.096$ & $1241. \pm 97.$ & $80. \pm 14.$ & $1.19$\ 0.04 & $0.001165 \pm 0.000032$ & $1.1 \pm 0.14$ & $1780. \pm 200.$ & $39. \pm 37.$ & $0.98$\ \ 0 & $0.00602 \pm 0.00050$ & $1.42 \pm 0.21$ & $246. \pm 15.$ & $22. \pm 23.$ & $1.77$\ 0.005 & $0.00475 \pm 0.00016$ & $1.26 \pm 0.12$ & $388. \pm 18.$ & $29. 
\pm 16.$ & $0.7$\ 0.01 & $0.003500 \pm 0.000084$ & $1.19 \pm 0.11$ & $526. \pm 27.$ & $32. \pm 17.$ & $0.58$\ 0.02 & $0.002139 \pm 0.000046$ & $1.12 \pm 0.11$ & $847. \pm 59.$ & $61. \pm 17.$ & $0.93$\ 0.03 & $0.001099 \pm 0.000048$ & $0.97 \pm 0.16$ & $1390. \pm 190.$ & $66. \pm 28.$ & $1.43$\ 0.04 & $0.000673 \pm 0.000034$ & $1.28 \pm 0.24$ & $1600. \pm 260.$ & $41. \pm 61.$ & $1.3$\ [c|ccccc]{} $r_{\rm ex}$ & $C_K$ & $m(K)$ & $N_K$ & $\Delta N(K)$ & $\chi^2/{\rm DF}$\ \ 0 & $0.0390 \pm 0.0023$ & $2.806 \pm 0.058$ & $257.8 \pm 2.9$ & $14.1 \pm 5.2$ & $1.69$\ 0.005 & $0.0546 \pm 0.0024$ & $2.810 \pm 0.044$ & $379.7 \pm 3.9$ & $15.4 \pm 4.9$ & $1.11$\ 0.01 & $0.0651 \pm 0.0023$ & $2.853 \pm 0.035$ & $511.0 \pm 4.8$ & $16.6 \pm 4.6$ & $0.73$\ 0.02 & $0.0848 \pm 0.0026$ & $2.869 \pm 0.030$ & $859.8 \pm 8.2$ & $19.2 \pm 5.6$ & $0.71$\ 0.03 & $0.0944 \pm 0.0044$ & $2.927 \pm 0.044$ & $1336. \pm 23.$ & $15. \pm 10.$ & $1.17$\ 0.04 & $0.1085 \pm 0.0042$ & $2.904 \pm 0.035$ & $2049. \pm 32.$ & $25. \pm 11.$ & $0.81$\ 0.05 & $0.1106 \pm 0.0075$ & $2.960 \pm 0.055$ & $2948. \pm 84.$ & $20. \pm 21.$ & $1.16$\ 0.06 & $0.139 \pm 0.011$ & $2.851 \pm 0.057$ & $4660. \pm 180.$ & $61. \pm 25.$ & $0.87$\
--- abstract: 'We study how autonomous robots can learn by themselves to improve their depth estimation capability. In particular, we investigate a self-supervised learning setup in which stereo vision depth estimates serve as targets for a convolutional neural network (CNN) that transforms a single still image to a dense depth map. After training, the stereo and mono estimates are fused with a novel fusion method that preserves high-confidence stereo estimates, while leveraging the CNN estimates in the low-confidence regions. The main contribution of the article is to show that the fused estimates outperform the stereo vision estimates alone. Experiments are performed on the KITTI dataset, and on board a Parrot SLAMDunk, showing that even rather limited CNNs can help provide stereo-vision-equipped robots with more reliable depth maps for autonomous navigation.' author: - 'Diogo Martins, Kevin van Hecke, Guido de Croon' title: | **Fusion of stereo and still monocular depth estimates\ in a self-supervised learning context** --- Self-supervised learning, monocular depth estimation, stereo vision, convolutional neural networks Introduction {#sec:introduction} ============ Accurate 3D information of the environment is essential to several tasks in the field of robotics, such as navigation and mapping. Current state-of-the-art technologies for robust depth estimation rely on powerful active sensors like Light Detection And Ranging (LIDAR). Although smaller-scale solutions such as the Microsoft Kinect exist, they are still too heavy when the available payload and power consumption are limited, such as on board Micro Air Vehicles (MAVs). RGB cameras provide a good alternative, as they can be light, small, and consume little power. The traditional setup for depth estimation from images consists of a stereo system. Stereo vision has been extensively studied and is considered a reliable method.
For instance, NASA’s rover Curiosity was equipped with stereo vision [@NASACuriosity] to help detect potential obstacles in the desired trajectory. However, stereo vision exhibits limited performance in regions with low texture or repetitive patterns, and when objects appear differently in the two views or are partly occluded. Moreover, the resolution of the cameras and the distance between them (the baseline) also limit the effective range of accurate depth estimation. Monocular depth estimation is also possible. Multi-view monocular methods [@Engel2014LSDSLAMLD] work in a way similar to stereo vision: single images are captured at different time steps and structures are matched across views. However, in contrast to stereo, the baseline is not known, which hampers the retrieval of absolute depth. This is a main challenge in this area and typically requires additional sensors.

[Figure \[fig:mergingOverview\]: We propose to merge depth estimates from stereo vision with monocular depth estimates from a still image. The robot can learn to estimate depths from still images by using stereo vision depths in a self-supervised learning approach. We show that fusing dense stereo vision and still-mono depth gives better results than stereo alone. The diagram shows a stereo pair feeding both a stereo algorithm and a CNN operating on the left image; sparse stereo estimates serve as SSL targets for the CNN, and the dense stereo map and CNN map are fused into a merged map that is compared against ground truth.]

Depth estimation from single still images [@Eigen2014DepthMP; @Eigen2015PredictingDS; @Thesis:Janivecky], “still-mono”, provides an alternative to multi-view methods in general. In this case, depth estimation relies on the appearance of the scene and the relationships between its components by means of features such as texture gradients and color [@Saxena2005LearningDF]. The main advantage of still-mono compared to stereo vision is that, since only one view is considered, *a priori* there are no limitations in performance imposed by the way objects appear in the field of view or by their disposition in the scene. Thus, single mono estimators should not have problems related to very close or very far objects, nor when these are partly occluded. As still-mono depth estimation is less amenable to mathematical analysis than stereo vision, still-mono estimators often rely on learning strategies to infer depths from images [@Thesis:Janivecky; @Saxena2005LearningDF]. Thus, feature extraction for depth prediction is done by minimizing the error on a training set.
Consequently, there are no guarantees that the model will generalize well to the operational environment, especially if there is a large gap between the operational and training environments. A solution to this problem is to have the robot learn depth estimation directly in its own environment. In [@garg2016unsupervised] a very elegant method was proposed, making use of the known geometry of the two cameras. In essence, this method trains a deep neural network to predict a disparity map that is then used, together with the known geometrical transformations, to reconstruct (or predict) the right image. Follow-up studies have obtained highly accurate depth estimation results in this manner [@zhong2017self; @godard2017unsupervised]. In this article, we explore an alternative path to self-supervised learning of depth estimation, in which we assume that a robot is already equipped with a functional stereo vision algorithm. The disparities of this stereo vision algorithm serve as supervised targets for training a deep neural network to estimate disparities from a single still image. Specifically, only sparse disparities in high-confidence image regions are used for the training process. The main contribution of this article is that we show that the *fusion* of the resulting monocular and stereo vision depth estimates gives more accurate results than the stereo vision disparities alone. Fig. \[fig:mergingOverview\] shows an overview of the proposed self-supervised learning setup. Related work {#sec:relatedWork} ============ Depth estimation from single still images {#subsec:litDepthSingle} ----------------------------------------- Humans are able to perceive depth with one eye, even when not moving. To this end, we make use of different monocular cues such as occlusion, texture gradients and defocus [@FoundVision; @Saxena2005LearningDF]. Various computer vision algorithms have been developed over the years to mimic this capability.
The first approaches to monocular depth estimation used vectors of hand-crafted features to statistically model the scene. These vectors characterize small image patches preserving local structures and include features such as texture energy, texture gradients and haze, computed at different scales. Methods such as Markov Random Fields (MRFs) have been successfully used for regression [@Saxena2005LearningDF], while, for instance, Support Vector Machines (SVMs) have been used to classify each pixel into discrete distance classes [@Bipin2015AutonomousNO]. In the context of monocular depth estimation, CNNs are the current state of the art [@Mancini2017TowardDI; @Eigen2015PredictingDS; @Liu2016LearningDF]. The use of CNNs forgoes the need for hand-crafted features. However, large amounts of data are required to ensure full convergence of the solution, such that the weight space is properly explored. Although different network architectures can be successfully employed, a common approach consists of stacking two or more networks that make depth predictions at different resolutions. One of the networks makes a global, coarse depth prediction that is consecutively refined by the other stacked networks. These networks explore local context and incorporate finer-scale details in the global prediction. Additional information, such as depth gradients, can also be incorporated [@Thesis:Janivecky]. Eigen *et al.* [@Eigen2014DepthMP] developed the pioneering study considering CNNs for depth prediction. An architecture consisting of two stacked networks making predictions at different resolutions was used. This architecture was further improved [@Eigen2015PredictingDS] by adding one more network for refinement and by performing the tasks of depth estimation, surface normal estimation and semantic labelling jointly.
Since this first implementation, several other studies have followed, using different architectures [@Mancini2016FastRM], posing depth estimation as a classification problem [@Cao2016EstimatingDF] or considering a different loss function [@Laina2016DeeperDP]. Common to these ‘earlier’ deep learning studies is that high-quality dense depth maps are used as ground truth during training. These maps are typically collected using additional hardware, such as LIDAR technology or a Microsoft Kinect, and are manually processed in order to remove noise or correct wrong depth estimates. More recent work has focused on obtaining training data more easily and on transferring the learned monocular depth estimation more successfully to the real world. For example, in [@mancini2017toward] a Fully Convolutional Network (FCN) is trained to estimate distances in various, visually highly realistic, simulated environments, in which ground-truth distance values are readily available. As mentioned in the introduction, very successful methods have recently been introduced that learn to estimate distances in a still image by minimizing the reconstruction loss of the right image when estimating disparities in the left image, and vice versa [@garg2016unsupervised; @zhong2017self; @godard2017unsupervised]. Some of these methods are called ‘unsupervised’ by their authors. However, the main learning mechanism is supervised learning, and in a robotic context the supervised targets would be generated from the robot’s own sensory inputs. Hence, we discuss these methods under the subsection on self-supervised learning.
Fusion of monocular and multi-view depth estimates {#subsec:fusionSparseDense} -------------------------------------------------- Different approaches have been considered to explore how monocular and multi-view cues (stereo can be posed as a particular case of multi-view in which the views are horizontally aligned) can be combined to increase the accuracy of depth estimation. In [@Saxena2007DepthEU] MRFs are used to model depths in an over-segmented image according to an input vector of features. This vector includes (i) monocular cues such as edge filters and texture variations, (ii) the disparity map resulting from stereo matching and (iii) relationships between different small image patches. This model was trained on a data set collected using a laser scanner. After running the model on the available test set, the conclusion was that the accuracy of depth estimation increases when information from monocular and stereo cues is considered jointly. A different approach was presented by Facil *et al*. [@Fcil2016DeepSA]. Instead of jointly considering monocular and stereo cues, the starting point consists of two finished depth maps: one dense depth map generated by a single-view estimator [@Eigen2014DepthMP] and one sparse depth map computed using a monocular multi-view method [@Engel2014LSDSLAMLD]. The underlying idea is that by combining the reliable structure of the scene given by the CNN’s map with the accuracy of selected low-error points from the sparse map, it should be possible to generate a final, more accurate depth prediction. The introduced merging operation is a weighted interpolation of depths over the set of pixels in the multi-view map. The main contribution is an algorithm that improves the depth estimate by merging sparse multi-view with dense mono.
However, two remarks must be made: (i) this study was limited to the fusion of sparse multi-view and dense mono depth and did not explore the fusion of two dense depth maps, and (ii) the CNN was trained and tested in the same environment, which means that its performance was expected to be good. We hypothesize that if the CNN were tested in a different environment its performance would be lower, affecting the overall performance of the merging algorithm. Therefore, it is important to incorporate strategies that help reduce the gap between a CNN’s training and operational environments. Self-supervised learning is one possible option. Self-Supervised Learning {#subsec:litSSL} ------------------------ Self-supervised learning (SSL) is a learning setup in which robots perform supervised learning, with the targets generated from their own sensors. A typical setup of SSL is one in which the robot uses a trusted primary sensor cue to train a secondary sensor cue. The benefit the robot draws from this typically lies in the different nature of the sensor cues. For instance, one of the first successful applications of SSL was in the context of the DARPA Grand Challenge, in which the robot Stanley [@Thrun2006StanleyTR] used laser-based technology as supervisory input to train a color model for terrain classification with a camera. As the camera could see the road beyond the range of the laser scanner, using the camera in regions not covered by the laser extended the amount of terrain that was properly labeled as drivable or not. Having more information about the terrain ahead helped the team to drive faster and consequently win the challenge. Self-supervised learning of monocular depth estimation is a more recent topic [@van2015persistent; @Hecke2016PersistentSL; @lamers2016self]. [@Hecke2016PersistentSL] conducted the first study in which stereo vision was used as supervisory input to teach a single mono estimator how to predict depths.
However, as the focus of the study was more on the behavioral aspects of SSL and all algorithms had to run on a computationally limited robot in space [@van2017self], only the average depth of the scene was learned. Of course, the average depth does not suffice when aiming for more complex navigation behaviors. Hence, in [@Thesis:Paquim] a preliminary study was performed on how SSL can be used to train a dense single still mono estimator. Also [@garg2016unsupervised; @zhong2017self; @godard2017unsupervised] learn monocular dense depth estimation, but then by using an image reconstruction loss. Some of these articles use the term ‘unsupervised learning’, as there is no human supervision. Although it is just a matter of semantics, we would put them in the category of ‘self-supervised learning’, since the learning process is supervised and - when used on a robot - the targets come from the robot itself (with the right image values as learning targets when estimating depths in the left image). The current study is inspired by [@GuidoC], in which a first study was conducted to understand under which conditions it is beneficial to use ‘SSL fusion’, and in particular, the fusion of a trusted primary sensor cue and a trained secondary sensory cue. Both theoretical and empirical evidence was found that SSL fusion leads to better results when the secondary cue becomes accurate enough. SSL fusion was shown to work on a rather limited real-world case study of height estimation with a sonar and barometer. The goal of this article is not as much to obtain the best depth estimation results known to date, but to present a more complex, real-world case study of SSL fusion. To this end we perform SSL fusion of dense stereo and monocular depth estimates, with the latter learning from sparse stereo targets. 
The potential merit of this approach lies in showing that the concept of SSL fusion can also generalize to complex real-world cases, where the trusted primary cue is as accurate and reliable as stereo vision. Methodology overview {#sec:approach} ===================== \[subsec:sslfusionOverview\] Figure \[fig:mergingOverview\] illustrates the overall composition of the framework for SSL fusion of stereo and still-mono depth estimates. It can be broken down into four different ‘blocks’: (i) stereo estimation, (ii) still-mono estimation, (iii) fusion of depth estimates and (iv) SSL. We expect that the fusion of stereo and monocular vision will give more accurate, dense results than stereo vision alone. This expectation is based on at least two reasons, the first of which is of a geometrical nature (see fig. \[fig:occlusionfilling\]). In fig. \[fig:occlusionfilling\], the two cameras face a brown wall in the background and a black object close by. Stereo vision cannot provide depth information in the blue regions, either because these are not in the field of view of both cameras or because the dark object occludes them in one of the cameras. If no post-processing were applied, the robot would be “blind” in these areas. However, a single monocular estimator has no problem providing depth estimates in those regions, as it only requires one view. The second reason is that while stereo depth estimation relies on triangulation, still-mono depth estimation relies on very different visual cues, such as texture density, known object sizes, defocus, etc. Hence, some problems of stereo vision are in principle not a problem for still-mono: uniform or repetitive textures, very close objects, etc.
[Figure \[fig:occlusionfilling\]: Two cameras $C_1$ and $C_2$ face a wall in the background with a close-by object in between. The sketch marks the areas that are out of sight of both cameras and the regions occluded in one of the two views, where stereo vision cannot provide depth estimates.]

Monocular depth estimation {#subsec:monoEstimate} -------------------------- The monocular depth estimation is performed with the Fully Convolutional Network (FCN) used in [@mancini2017toward]. The basis of this network is the well-known VGG network of [@simonyan2014very], pruned of its fully connected layers. Of the 16 layers of the truncated VGG network, the first 8 were kept fixed, while the others were fine-tuned for the task of depth estimation. In order to accommodate this task, in [@mancini2017toward] two deconvolutional layers were added to the network that bring the neural representation back to the desired depth map resolution.
In [@mancini2017toward], the FCN was trained on depth maps obtained from various visually highly realistic simulated environments. In the current study, we will train and fine-tune the same layers, but then using sparse stereo-based disparity measurements as supervised targets. Specifically, we first apply the algorithm of [@Hirschmller2008StereoPB] as implemented in OpenCV. Only the disparities at image locations with sufficient vertical contrast are used for training. To this end, we apply a vertical Sobel filter and threshold the output to obtain a binarized map. We use this map as a confidence map for the stereo disparity map. We use the KITTI data set [@Geiger2012CVPR], employing their provided standard partitioning of training and validation set. The FCN was trained for 1000 epochs. In each epoch 32 images were loaded, and from these images we sampled 100 times a smaller batch of 8 images for training. The loss function used was the mean absolute depth estimation error: $l = \frac{1}{N} \sum_{(x,y) \in C} |Z_{m_{(x,y)}} - Z_{s_{(x,y)}}|$, where $C$ is the set of confident stereo vision estimates. After training, the average absolute loss on the training set is $l = 0.01$. Dense stereo and dense still mono depth fusion {#subsec:denseFusion} ---------------------------------------------- In contrast to [@Fcil2016DeepSA], we propose the fusion of dense stereo vision and dense still mono. 
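Before detailing the fusion, the sparse self-supervised training scheme just described can be sketched as follows. This is a minimal numpy approximation, not the article's implementation: the article uses OpenCV's semi-global matching and Sobel filter, while here the hand-rolled Sobel response and the threshold value are illustrative assumptions.

```python
import numpy as np

# 3x3 Sobel kernel responding to horizontal intensity gradients
# (what the article calls a vertical Sobel filter).
SOBEL = np.array([[-1, 0, 1],
                  [-2, 0, 2],
                  [-1, 0, 1]], dtype=np.float32)

def sobel_contrast(img):
    """Magnitude of the Sobel response of a 2-D grayscale image."""
    h, w = img.shape
    padded = np.pad(img.astype(np.float32), 1, mode="edge")
    out = np.zeros((h, w), dtype=np.float32)
    for dy in range(3):
        for dx in range(3):
            out += SOBEL[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return np.abs(out)

def confident_targets(left_img, disparity, thresh=50.0):
    """Binarized confidence map and sparse disparity targets for SSL:
    only pixels with enough contrast and a valid disparity supervise the CNN."""
    mask = (sobel_contrast(left_img) > thresh) & (disparity > 0)
    return mask, np.where(mask, disparity, 0.0)

def sparse_l1_loss(pred, target, mask):
    """Mean absolute error over the confident stereo pixels only."""
    return float(np.abs(pred - target)[mask].mean()) if mask.any() else 0.0
```

This realizes the loss $l = \frac{1}{N} \sum_{(x,y) \in C} |Z_{m_{(x,y)}} - Z_{s_{(x,y)}}|$ over the confident set $C$; the kernel size and threshold are stand-ins for the actual values used in the experiments.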
There are five main principles behind the fusion operation: (i) as the CNN is better at estimating relative depths [@Fcil2016DeepSA], its output should be scaled to the stereo range, (ii) when a pixel is occluded only monocular estimates are preserved, (iii) when stereo is considered reliable, its estimates are preserved, (iv) when in a region of low stereo confidence and if the relative depth estimates are dissimilar, then the CNN is trusted more, and (v) again when in a region of low stereo confidence but if the relative depth estimates are similar, then the stereo is trusted more. The scaling is done as follows. $$Z_{m_{(x,y)}} \leftarrow \textrm{min}(Z_s) + r_s \cdot \frac{Z_{m_{(x,y)}} - \textrm{min}(Z_m)}{r_m}$$ where $r_m = \textrm{max}(Z_m)-\textrm{min}(Z_m)$ and $r_s = \textrm{max}(Z_s) - \textrm{min}(Z_s)$, and $(x,y)$ is a pixel coordinate in the image. If the stereo output is invalid, as in the case of occluded regions, the depth in the fused map is set to the monocular estimate: $$Z_{(x',y')} \leftarrow Z_{m_{(x',y')}}$$ where $(x',y')$ is an invalid stereo image coordinate. For the remaining coordinates, the depths are fused according to: $$\begin{gathered} Z_{(x,y)} \leftarrow W_{c_{(x,y)}} \cdot Z_{s_{(x,y)}} + \left ( 1 - W_{c_{(x,y)}} \right ) \cdot \\ \biggl ( W_{s_{(x,y)}} \cdot Z_{s_{(x,y)}} + \left ( 1 - W_{s_{(x,y)}} \right ) \cdot Z_{m_{(x,y)}} \biggr ) \end{gathered}$$ where $W_{c_{(x,y)}}$ is a weight dependent on the confidence of the stereo map at pixel $(x,y)$, and $W_{s_{(x,y)}}$ a weight evaluating the ratio between the normalized estimates from the CNN and from the stereo algorithm at pixel $(x,y)$. These weights are defined below. Since stereo vision involves finding correspondences in the same image row, it relies on vertical contrasts in the image. Hence, we make $W_{c_{(x,y)}}$ dependent on such contrasts. Specifically, we convolve the image with a vertical Sobel filter and apply a threshold to obtain a binary map. 
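The scaling, occlusion fill-in, and confidence-weighted blending above (with $W_s$ as defined in eq. \[eq:ratioWeight\]) can be combined in a short numpy sketch. It assumes positive depth maps and takes the per-pixel confidence $W_c$ as given; function and variable names are ours, not the article's.

```python
import numpy as np

def fuse_depth(Z_s, Z_m, W_c, valid_s):
    """Fuse a dense stereo depth map Z_s with a monocular CNN depth map Z_m.

    W_c     : per-pixel stereo confidence in [0, 1] (blurred edge map).
    valid_s : boolean mask, False where stereo produced no valid estimate.
    All depths are assumed positive."""
    # (i) rescale the monocular map to the stereo depth range
    z_lo, z_hi = Z_s[valid_s].min(), Z_s[valid_s].max()
    Z_m = z_lo + (z_hi - z_lo) * (Z_m - Z_m.min()) / (Z_m.max() - Z_m.min())

    # ratio of the normalized estimates: W_s -> 1 when mono and stereo
    # agree (trust stereo), W_s -> 0 when they differ (trust mono)
    N_m = Z_m / Z_m.max()
    N_s = np.where(valid_s, Z_s, z_hi) / z_hi
    W_s = np.minimum(N_m, N_s) / np.maximum(N_m, N_s)

    fused = W_c * Z_s + (1.0 - W_c) * (W_s * Z_s + (1.0 - W_s) * Z_m)
    # (ii) occluded / invalid stereo pixels keep the monocular estimate
    return np.where(valid_s, fused, Z_m)
```

The final $5 \times 5$ median-filter smoothing applied in the article is omitted here for brevity.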
This map is subsequently convolved with a Gaussian blur filter of a relatively large size and renormalized so that the maximal edge value would result in $W_{c_{(x,y)}}=1$. The blurring is performed to capture the fact that pixels close to an edge will likely still be well-matched due to the edge falling into their support region (e.g., matching window in a block matching scheme). Please see fig. \[fig:mergeWeightsSparseCNN\] for the resulting confidence map $W_c$. If $W_{c_{(x,y)}} < 1$, the monocular and stereo estimates will be fused together with the help of the weight $W_{s_{(x,y)}}$. In the proposed fusion, more weight will be given to the stereo vision estimate, if $Z_{s_{(x,y)}}$ and $Z_{m_{(x,y)}}$ are close together. However, when they are far apart, more weight will be placed on $Z_{m_{(x,y)}}$. The reasoning behind this is that typically monocular depth estimates capture quite well the rough structure of the scene, while stereo vision estimates are typically more accurate, but when wrong can result in quite large outliers. This leads to the following formula: $$\label{eq:ratioWeight} W_{s_{(x,y)}} = \begin{cases} \frac{ N_{Z_{m}(x,y)} }{N_{Z_{s}(x,y)}} \enspace if \enspace N_{Z_{s}(x,y)} > N_{Z_{m}(x,y)} \\ \frac{ N_{Z_{s}(x,y)} }{N_{Z_{m}(x,y)}} \enspace if \enspace N_{Z_{s}(x,y)} < N_{Z_{m}(x,y)} \end{cases}$$ where $N_{Z_{m}(x,y)} = Z_{m_{(x,y)}} / \textrm{max}(Z_m)$ and $N_{Z_{s}(x,y)} = Z_{s_{(x,y)}} / \textrm{max}(Z_s)$. Finally, after the merging operation a median filter with a $5 \times 5$ kernel is used to smooth the final depth map and reduce even more overall noise. Off-line experimental results {#sec:experimentalResults} ============================= To evaluate the performance of the merging algorithms the error metrics commonly found in the literature [@Eigen2014DepthMP] are used: - Threshold error: % of $y$ s.t. 
$max(\frac{y}{y^*}, \frac{y^*}{y}) = \delta < thr$ - Mean absolute relative difference: $\frac{1}{|N|} \sum_{y \in N} \frac{|y-y^*|}{y^*}$ - Mean squared relative difference: $\frac{1}{|N|} \sum_{y \in N} \frac{||y-y^*||^2}{y^*}$ - Mean linear RMSE: $\sqrt{\frac{1}{|N|} \sum_{y \in N}||y-y^*||^2}$ - Mean log RMSE:$\sqrt{\frac{1}{|N|} \sum_{y \in N}||\log y-\log y^*||^2}$ - Log scale invariant error:\ $\frac{1}{2N} \sum_{y \in N} \left( \log y - \log y^* + \frac{1}{N}\sum_{y \in N}(\log y^* - \log y) \right)^2$ , where $y$ and $y^*$ are the estimated and corresponding ground truth depth in meters, respectively, and $N$ is the set of points. The main results of the experiments are summarized in Table \[table\_errors\]. Note that stereo vision is evaluated separately on non-occluded pixels with ground truth and on all pixels with ground-truth. The other estimators in the table are always applied to all ground-truth pixels. The results of the proposed fusion scheme are shown on the right in the table (FCN), and on the left the results are shown for three variants that all leave out one part of the merging algorithm. Surprisingly, a version of the merging algorithm without monocular scaling actually works the best, and also outperforms the stereo vision algorithm more clearly than the merging algorithm with scaling in Table \[table\_errors\]. Still, for what follows, we report on the fusion results with scaling. 
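The error metrics listed above translate directly into code; a small numpy helper (naming is ours) could look like this:

```python
import numpy as np

def depth_metrics(y, y_star):
    """Eigen-style depth error metrics for predictions y and ground truth
    y_star (both arrays of positive depths in meters)."""
    y = np.asarray(y, dtype=float).ravel()
    y_star = np.asarray(y_star, dtype=float).ravel()
    ratio = np.maximum(y / y_star, y_star / y)
    d = np.log(y) - np.log(y_star)
    return {
        "delta_1.25":   float(np.mean(ratio < 1.25)),
        "delta_1.25^2": float(np.mean(ratio < 1.25 ** 2)),
        "delta_1.25^3": float(np.mean(ratio < 1.25 ** 3)),
        "abs_rel":      float(np.mean(np.abs(y - y_star) / y_star)),
        "sq_rel":       float(np.mean((y - y_star) ** 2 / y_star)),
        "rmse":         float(np.sqrt(np.mean((y - y_star) ** 2))),
        "rmse_log":     float(np.sqrt(np.mean(d ** 2))),
        # scale-invariant log error: half the variance of the log residuals
        "scale_inv":    float(0.5 * np.mean((d - d.mean()) ** 2)),
    }
```

Note that the scale-invariant error vanishes for any prediction that is off by a constant multiplicative factor, which is why the unscaled monocular estimates can still score well on it.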
------------------------------------- ---------- ----------- ---------- ----------- ---------- ----------- -------------- ---------- ---------- -----------
                                      SSL        Fused SSL   SSL        Fused SSL   SSL        Fused SSL   Non-occ only   Incl Occ   SSL        Fused SSL
threshold $\delta \textless 1.25$     0.38       0.52        0.72       **0.85**    0.25       0.77        0.92           0.88       0.38       0.60
threshold $\delta \textless 1.25^2$   0.68       0.82        0.91       **0.96**    0.44       0.95        0.97           0.92       0.68       0.95
threshold $\delta \textless 1.25^3$   0.84       0.93        0.96       **0.98**    0.59       0.97        0.98           0.93       0.84       0.98
abs relative difference               0.31       0.25        0.20       **0.20**    0.48       0.23        0.26           0.29       0.31       0.24
sqr relative difference               3.38       2.20        **2.16**   3.00        4.98       3.50        7.61           7.63       3.38       3.17
RMSE (linear)                         10.22      7.86        7.67       **5.47**    10.24      5.92        6.29           7.19       10.22      6.14
RMSE (log)                            0.48       0.36        0.27       **0.24**    1.01       0.33        0.28           2.46       0.48       0.30
RMSE (log, scale inv.)                0.05       0.04        0.03       **0.03**    0.31       0.05        0.04           2.95       0.05       0.03
------------------------------------- ---------- ----------- ---------- ----------- ---------- ----------- -------------- ---------- ---------- -----------

![image](./fig5as.png){width="\textwidth"} ![image](./fig5bs.png){width="\textwidth"}

In order to get insight into the fusion process, we investigate the absolute errors of the stereo and monocular depth estimators as a function of the ground-truth depth obtained with the laser scanner. The results can be seen in fig. \[fig:DVE\]. We make five main observations. First, in comparison to the FCN monocular estimator, stereo vision in general gives more accurate depth estimates, also at the larger depths. Second, it can be seen that the monocular estimator can provide depth values closer than the minimum range of stereo vision, which was limited to a maximal disparity of 64 pixels. Third, the accuracy of stereo vision becomes increasingly ‘wavy’ towards 80 meters. This is due to the nature of stereo vision, in which the depth covered per additional pixel of disparity increases nonlinearly.
The employed code determines subpixel disparity estimates up to a sixteenth of a pixel, but this does not fully prevent the increasing error for between-pixel disparities further away. Fourth, stereo vision has a big absolute error peak at the low distances. This is due to large outliers, where stereo vision finds a better match at very large distances. Fifth, one may think that the monocular depth estimation far away is too poor for fusion. However, one has to realize that these results are obtained without scaling the monocular estimates - which can go beyond 80 meters, resulting in large errors. Moreover, investigation of the error $(y-y^*)$ shows that the monocular estimate is not biased in general. Finally, one has to realize that the majority of the pixels in the KITTI dataset lies close by, as can be seen in fig. \[fig:hist\_distances\]. Hence, the closer pixels are most important for the fusion result. Figures \[fig:comparison\_a\] and \[fig:comparison\_b\] provide a qualitative assessment of the results. They illustrate three of the findings. First, the stereo vision depth maps show that close-by objects are often judged to be very far away (viz. the peak in fig. \[fig:DVE\]). The most evident example is the Recreational Vehicle (RV) in the third column of fig. \[fig:comparison\_b\]. Second, the proposed fusion scheme is able to remove many of the stereo vision errors. Evidently, all occluded areas are filled in by monocular vision, improving depth estimation there. It also removes many of the small image patches mistakenly judged as very far away by the stereo vision. However, fusion does not always solve the entire problem - the aforementioned RV is an evident example of this. Indeed, the corresponding image in the fifth row shows that the fusion scheme puts a high weight on the stereo vision estimates (red), while the error in these regions is much lower for mono vision (blue in the sixth row).
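The ‘wavy’ growth of the stereo quantization error discussed above follows from the standard disparity-to-depth relation $Z = fB/d$ for a rectified pair: the depth change caused by one disparity step grows roughly with $Z^2/(fB)$. A quick numerical check (the focal length and baseline below are illustrative, KITTI-like values, not taken from the article):

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d.
f_px = 721.0       # focal length in pixels (assumed, KITTI-like)
B_m = 0.54         # baseline in meters (assumed, KITTI-like)
step = 1.0 / 16.0  # subpixel disparity resolution of the employed code

def depth(d_px):
    """Depth in meters for a disparity of d_px pixels."""
    return f_px * B_m / d_px

def quant_step(Z):
    """Depth change caused by one subpixel disparity step at range Z."""
    d = f_px * B_m / Z
    return depth(d - step) - depth(d)

for Z in (5.0, 20.0, 80.0):
    print(f"Z = {Z:5.1f} m -> quantization step ~ {quant_step(Z):.3f} m")
```

With these numbers, a sixteenth-of-a-pixel step corresponds to millimeters at 5 m but on the order of a meter at 80 m, which is consistent with the increasing waviness of the stereo error curve.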
To help the reader interpret the fifth and sixth rows: ideally, if an image coordinate is colored in the sixth row (meaning that one method has a much higher error than the other), the image in the fifth row (the confidence map) should have the same color. Third, the red areas in the images in the sixth row illustrate that monocular estimates are indeed less good than stereo vision at long distances. On-board experimental results {#sec:onboard} ============================= In order to investigate whether SSL stereo and mono fusion can also lead to better depth estimation on a computationally restricted robot, we have performed tests on board a small drone. The experimental setup consisted of a Parrot SLAMDunk coupled to a Parrot Bebop drone and aligned with its longitudinal body axis. The stereo estimation used the semi-global block matching algorithm [@Hirschmller2008StereoPB] also used in the off-line experiments. On board, we used the raw disparity maps without any type of post-processing. For monocular depth perception we used a very light-weight Convolutional Neural Network (CNN), i.e., only the coarse network of Ivanecky’s CNN [@Thesis:Janivecky]. Due to computational limitations it was not possible to run the full network on board. In these experiments, we used the network weights that were trained on the images of the NYU V2 data set; the network predicts depths up to 10 meters. To test the performance of the merging algorithm, the drone was flown both indoors and outdoors. The algorithms ran in real time at 4 frames per second, with all processing being done on board. Selected frames and corresponding depth maps from the test flights are shown in fig. \[fig:flightResults\]. There are clear differences between the three depth estimates. The stereo algorithm provides a semi-dense solution contaminated with a lot of noise (sparse purple estimates). Its performance is significantly deteriorated by the presence of the propeller blades in the lower regions of the images.
The coarse network provides a solution with little detail, but one from which the global composition of the scene can be understood. Finally, the merged depth map provides the most reliable solution. Except for the first row, where the poor monocular prediction induces errors in the final estimate, the merged map has more detail, less noise, and better describes the relative positions of the objects. Although very preliminary, and in the absence of a ground-truth sensor, these results are promising for the on-board application of the proposed self-supervised fusion scheme. CONCLUSION {#sec:conclusion} ========== In this article we investigated the fusion of a stereo vision depth estimator with a self-supervised learned monocular depth estimator. To this end, we presented a novel algorithm for dense depth fusion that preserves stereo estimates in high stereo confidence areas and uses the output of a CNN to correct for possibly wrong stereo estimates in low confidence and occluded regions. The experimental results show that the proposed self-supervised fusion indeed leads to better results. The analysis suggests that in our experiments, stereo vision is more accurate than monocular vision at most distances, except close by. We identify three main directions of future research. First, the current fusion of stereo and mono vision still involves a predetermined fusion scheme, even though the accuracy of the monocular estimates may depend on the environment and hardware. For this reason, in [@GuidoC], the robot used the trusted primary cue to determine the uncertainty of the learned secondary cue in an online process. This uncertainty was then used for fusing the two cues. A similar setup should be investigated here. Second, the performance obtained with our proposed fusion scheme is significantly lower than that of the image-reconstruction-based SSL setup in [@godard2017unsupervised]. 
We did not perform any thorough investigation of network structure, training procedure, etc. to optimize the monocular estimation performance, but such an effort would be of interest. Third, and foremost, we selected this task, as stereo vision is typically considered to be very reliable. The fact that even sub-optimal monocular estimators can be fused with stereo to improve the robot’s depth estimation, is encouraging for finding other application areas of SSL fusion. [10]{} Kumar Bipin, Vishakh Duggal, and K. Madhava Krishna. Autonomous navigation of generic monocular quadcopter in natural environment. , pages 1063–1070, 2015. Yuanzhouhan Cao, Zifeng Wu, and Chunhua Shen. Estimating depth from monocular images as classification using deep fully convolutional residual networks. , abs/1605.02305, 2016. Guido de Croon. Self-supervised learning: When is fusion of the primary and secondary sensor cue useful? , 2017. David Eigen and Rob Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. , pages 2650–2658, 2015. David Eigen, Christian Puhrsch, and Rob Fergus. Depth map prediction from a single image using a multi-scale deep network. In [*NIPS*]{}, 2014. Jakob Engel, Thomas Schops, and Daniel Cremers. Lsd-slam: Large-scale direct monocular slam. In [*ECCV*]{}, 2014. Jose M. Facil, Alejo Concha, Luis Montesano, and Javier Civera. Deep single and direct multi-view depth fusion. , abs/1611.07245, 2016. Ravi Garg, Vijay Kumar BG, Gustavo Carneiro, and Ian Reid. Unsupervised cnn for single view depth estimation: Geometry to the rescue. In [*European Conference on Computer Vision*]{}, pages 740–756. Springer, 2016. Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In [*Conference on Computer Vision and Pattern Recognition (CVPR)*]{}, 2012. Cl[é]{}ment Godard, Oisin Mac Aodha, and Gabriel J Brostow. 
Unsupervised monocular depth estimation with left-right consistency. In [*CVPR*]{}, volume 2, page 7, 2017. Steven B. Goldberg, Mark W. Maimone, and Larry Matthies. Stereo vision and rover navigation software for planetary exploration. , 2002. Heiko Hirschm[ü]{}ller. Stereo processing by semiglobal matching and mutual information. , 30, 2008. Jan Ivanecky. . Master’s thesis, Brno University of Technology, 2016. Guido de Croon Joao Paquim. . Master’s thesis, Delft University of Technology, 2016. Iro Laina, Christian Rupprecht, Vasileios Belagiannis, Federico Tombari, and Nassir Navab. Deeper depth prediction with fully convolutional residual networks. In [*3DV*]{}, 2016. Kevin Lamers, Sjoerd Tijmons, Christophe De Wagter, and Guido de Croon. Self-supervised monocular distance learning on a lightweight micro air vehicle. In [*Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on*]{}, pages 1779–1784. IEEE, 2016. Fayao Liu, Chunhua Shen, Guosheng Lin, and Ian D. Reid. Learning depth from single monocular images using deep convolutional neural fields. , 38:2024–2039, 2016. Michele Mancini, Gabriele Costante, Paolo Valigi, and Thomas A. Ciarfuglia. Fast robust monocular depth estimation for obstacle detection with fully convolutional networks. , pages 4296–4303, 2016. Michele Mancini, Gabriele Costante, Paolo Valigi, Thomas A. Ciarfuglia, Jeffrey Delmerico, and Davide Scaramuzza. Toward domain independence for learning-based monocular depth estimation. , 2(3):1778–1785, 2017. Ashutosh Saxena, Sung H. Chung, and Andrew Y. Ng. Learning depth from single monocular images. In [*NIPS*]{}, 2005. Ashutosh Saxena, Jamie Schulte, and Andrew Y. Ng. Depth estimation using monocular and stereo cues. In [*IJCAI*]{}, 2007. 
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. , 2014. Sebastian Thrun, Michael Montemerlo, Hendrik Dahlkamp, David Stavens, Andrei Aron, James Diebel, Philip Fong, John Gale, Morgan Halpenny, Gabriel Hoffmann, Kenny Lau, Celia M. Oakley, Mark Palatucci, Vaughan Pratt, Pascal Stang, Sven Strohband, Cedric Dupont, Lars-Erik Jendrossek, Christian Koelen, Charles Markey, Carlo Rummel, Joe van Niekerk, Eric Jensen, Philippe Alessandrini, Gary R. Bradski, Bob Davies, Scott Ettinger, Adrian Kaehler, Ara V. Nefian, and Pamela Mahoney. Stanley: The robot that won the darpa grand challenge. , 23:661–692, 2006. Kevin van Hecke, Guido C. H. E. de Croon, Laurens van der Maaten, Daniel Hennes, and Dario Izzo. Persistent self-supervised learning principle: from stereo to monocular vision for obstacle avoidance. , abs/1603.08047, 2016. Kevin van Hecke, Guido CHE de Croon, Daniel Hennes, Timothy P Setterfield, Alvar Saenz-Otero, and Dario Izzo. Self-supervised learning as an enabling technology for future space exploration robots: Iss experiments on monocular distance learning. , 140:1–9, 2017. KG Van Hecke. Persistent self-supervised learning principle: Study and demonstration on flying robots. 2015. Brian Wandell. . Sinauer Associates Inc. Yiran Zhong, Yuchao Dai, and Hongdong Li. Self-supervised learning for stereo matching with self-improving ability. , 2017.
--- abstract: 'Eukaryotic cell motility involves a complex network of interactions between biochemical components and mechanical processes. The cell employs this network to polarize and induce shape changes that give rise to membrane protrusions and retractions, ultimately leading to locomotion of the entire cell body. The combination of a nonlinear reaction-diffusion model of cell polarization, noisy bistable kinetics, and a dynamic phase field for the cell shape permits us to capture the key features of this complex system to investigate several motility scenarios, including amoeboid and fan-shaped forms as well as intermediate states with distinct displacement mechanisms. We compare the numerical simulations of our model to live cell imaging experiments of motile [*Dictyostelium discoideum*]{} cells under different developmental conditions. The dominant parameters of the mathematical model that determine the different motility regimes are identified and discussed.' author: - 'Eduardo Moreno[^1]' - 'Sven Flemming[^2]' - 'Francesc Font[^3]' - 'Matthias Holschneider[^4]' - 'Carsten Beta[^5]' - 'Sergio Alonso[^6]' nocite: '[@*]' title: 'Modeling cell crawling strategies with a bistable model: From amoeboid to fan-shaped cell motion' --- *Keywords*: Pattern formation, *Dictyostelium discoideum*, cell motility, amoeboid crawling, keratocyte motion Introduction ============ The biochemical and biophysical mechanisms involved in cell motility have been extensively studied during the past years. They are among the most intriguing problems in cell biology, ranging from single cells to multicellular organisms. Before the cell begins to move, it has to define a front and a back to specify an axis of propagation. This process is known as cell polarization [@mogilner2012cell]. It sets the direction in which protrusions are formed that drive the cell forward. 
Cell locomotion has been extensively studied using keratocytes, which move in a highly persistent fashion and adopt a characteristic fan-like shape [@mogilner_experiment_2019]. Also neutrophils have been intensely investigated. They display a less persistent movement with more frequent random changes in direction that is known as amoeboid motility [@haastert_chemotaxis_2004]. A well-established model system to study actin-driven motility in eukaryotic cells is the social amoeba [*Dictyostelium discoideum*]{} ([*D. discoideum*]{}) [@annesley_dictyostelium_2009]. The cells of this highly motile single-celled microorganism typically display pseudopod-based amoeboid motility, but other forms, such as blebbing motility or keratocyte-like behavior, have been observed as well. Many aspects of cell motility such as cytoskeletal mechanics [@goehring2013cell], intracellular signaling dynamics [@beta2017intracellular; @rappel2017mechanisms], or membrane deformation [@allard2013traveling] have been modeled using mathematical and computational methods. Cell polarity formation, which is a key feature of motility mechanisms determining the front and back of the cell, often shows bistable dynamics. A reaction-diffusion system with bistable kinetics is thus a common choice to model the intracellular polarity dynamics [@jilkine2011comparison]. Bistable conditions of an intracellular dynamical process can be obtained by a mass-controlling mechanism between the cytosolic and membrane-attached concentrations of biochemical components [@mori2008wave; @otsuji2007mass]. This may be relevant at different levels of the cytoskeleton, for example, when different forms of actin are involved [@beta2008bistable; @schroth-diez_propagating_2009; @beta2010bistability] or at the level of the related signaling pathways, involving phospholipids and enzymes at the cell membrane [@matsuoka2018mutual; @altschuler2008spontaneous]. 
Cell polarity may also be induced by an external chemical gradient [@iglesias2008navigating]. There are several mathematical tools to simultaneously model the pattern formation process inside the cell and the dynamics of the cell border, which is required to obtain a full description of a crawling cell. One of the most commonly employed methods to model such a free-boundary problem is to introduce an additional phase field, which is one inside and zero outside the cell and keeps the correct boundary conditions while the borders are moving [@kockelkoren2003computational], even in the limit of a sharp interface between the interior and the exterior of the cell [@camley2013periodic]. The first attempts to employ phase-field modeling to study cell locomotion were applied to keratocyte motility [@shao2010computational; @ziebert2011model; @shao2012coupling], because the persistence of motion of these cells facilitates the implementation of the model. These models have also been extended to discuss, for example, the rotary motion of keratocytes [@Camley17] and the interactions among adjacent cells [@lober2014modeling]. Later, the use of the phase field has been extended to model other generic properties of moving cells [@najem2013phase; @kulawiak2016modeling] and, in particular, has also been employed to describe the random motion of amoeboid cells, such as neutrophils or [*D. discoideum*]{}. The phase field approach has been employed to model the viscoelasticity of the cell [@moure2016computational; @moure2017phase], the effect of biochemical waves in the interior of the cell [@taniguchi2013phase], as well as wave-induced cytofission of cells [@flemming2019]. The random motion of the cell requires a stochastic bistable process in combination with a phase field [@Alonso18] to successfully recover the fluctuating displacements and shape deformations. 
Such a stochastic bistable model is able to capture the cell-to-cell variability observed in the motion patterns of amoeboid [*D. discoideum*]{} cells by tuning a single model parameter [@Alonso18]. However, [*D. discoideum*]{} cells are known to show a more diverse spectrum of motility modes, for example when certain genes are knocked out, when phosphoinositide levels are artificially altered, or under specific developmental conditions. This includes a phenotype reminiscent of keratocyte motility, where the cell adopts a fan-like shape and moves persistently, perpendicular to the elongated axis of the cell body (the so-called fan-shaped phenotype), and a form where cells adopt a pancake-like shape, moving erratically without a clear direction of polarization [@asano2004keratocyte; @miao2017altering; @cao2019plasticity]. Here, we perform a systematic analysis of a previously introduced model that is based on a stochastic bistable reaction-diffusion system in combination with a dynamic phase field [@Alonso18]. Such a phenomenological model may provide a better understanding of how to relate the experimental parameters to specific cellular behaviors, because cell-to-cell variability often masks such a relation. Along with the model, we analyze experimental data of a non-axenic [*D. discoideum*]{} wildtype cell line (DdB) that carries a knockout of the RasGAP homologue NF1 (DdB NF1 null cells). In this cell line, amoeboid and fan-shaped cells are observed, depending on the developmental conditions. A detailed comparison of the experimental data to simulations of the stochastic bistable phase field model is presented. By tuning the intensity of the noise and the area covered by the bistable field, the model simulations recover similar motility phenotypes as observed in experiments, ranging from highly persistent fan-shaped cells to standard amoeboid motion. 
Furthermore, the simulations predict intermediate unstable states and also a transition from straight to rotary motion of the fan-shaped cells. These forms of motility, which have so far been neglected in [*D. discoideum*]{}, were also observed in the experimental data and are systematically studied in the framework of our mathematical model. Materials and Methods ===================== Experimental Methods -------------------- All experiments were performed with non-axenic *D. discoideum* DdB NF1 KO cells [@bloomfield2015neurofibromin], which were cultivated in 10 cm dishes with Sorensen’s buffer (8 g KH$_2$PO$_4$, 1.16 g Na$_2$HPO$_4$, pH 6.0) supplemented with 50 $\mu$M MgCl$_2$, 50 $\mu$M CaCl$_2$ and *Klebsiella aerogenes* at an OD$_{600}$ of 2. The cells expressed Lifeact-GFP via the episomal plasmid SF99, which is based on a new set of vectors for gene expression in non-axenic *D. discoideum* strains [@paschke2018rapid]. Plasmids were transformed as described before [@paschke2018rapid] with an ECM2001 electroporator using three square wave pulses of 500 V for 30 ms in electroporation cuvettes with a gap of 1 mm. G418 (5 $\mu$g/ml) and Hygromycin (33 $\mu$g/ml) were used as selection markers. The phenotype of DdB NF1 KO cells differs between individual cells of a population and especially between different developmental stages. When cultivated in buffer supplemented with bacteria, cells are in the vegetative state and the predominant phenotype is amoeboid with very little movement due to the abundance of bacteria. After several hours of starvation, cells enter the developed state and the probability to observe a fan-shaped phenotype is increased. Preparation of the cells for experiments therefore differed between experiments. Cells were washed to remove the bacteria and (i) suspended in Sorensen’s buffer immediately after washing to obtain mainly amoeboid cells with high motility or (ii) starved for 3-6 hours to obtain a high percentage of fan-shaped cells. 
After starvation, cells were seeded in microscopy dishes at low density for imaging. Usually, in the beginning of an experiment many cells showed the amoeboid or the intermediate phenotype with regular switches from amoeboid to fan-shaped motility and vice versa. The percentage of fan-shaped cells increased over time and the fan-shaped phenotype became more stable. An increase in the number of fan-shaped cells during development has also been described for the *D. discoideum* Ax2 AmiB knockout strain [@asano2004keratocyte]. Note, however, that the effects of cell development on the phenotype of DdB NF1 KO cells showed a high day-to-day variability, and best results were accomplished with fresh *K. aerogenes* cells. The cells were transferred to a 35 mm glass bottom microscopy dish (FluoroDish, World Precision Instruments) and diluted to a concentration enabling imaging of single cells. For imaging, an LSM 780 (Zeiss, Jena) with a 488 nm argon laser and a 63x or a 40x oil objective lens was used. Computational Model ------------------- We investigate different types of cell motility based on a minimal model that couples a concentration field accounting for the complex biochemical reactions occurring in the interior of the cell to an auxiliary phase field describing the evolution of the cell shape. The use of a phase field is a well-established approach to deal with problems of evolving domains/geometries without the need of explicitly tracking the domain boundaries, which has been exploited to tackle moving boundary problems of different nature such as crack propagation [@boettinger2002phase], solidification [@pons2010helical], or fluid interface motion [@folch1999phase]. In particular, the versatility of this approach has been used for modelling cell-shape evolution and locomotion [@shao2012coupling; @taniguchi2013phase; @Camley17]. The model we use here has been previously introduced in [@Alonso18] and will be summarized below. 
In what follows, we will consider the dynamics of a generic activatory biochemical component at the substrate-attached cell membrane and, thus, restrict ourselves to an idealized 2D geometry. The phase field $\phi(\boldsymbol{x},t)$ smoothly varies between the values of $\phi=1$ inside and $\phi=0$ outside the cell, respectively. The phase field allows us to implicitly impose no-flux boundary conditions at the cell border, which we assume to be where the phase field takes the value of $\phi=0.5$. Following the work by Shao *et al.* [@shao2012coupling], the phase field evolves according to the equation $$\label{pf} \tau \frac{\partial \phi}{\partial t} = \gamma \left(\nabla^2 \phi -\frac{G'(\phi)}{\epsilon^2}\right) - \beta \left(\int \phi\, dA - A_0 \right)\left| \nabla \phi \right| + \alpha\, \phi\, c \left| \nabla \phi \right| \,,$$ where $G(\phi) = 18\,\phi^2\,(1-\phi)^2$ is a double-well potential. The phase field equation is the result of a force balance involving forces of different nature acting on the cell body. The terms with $| \nabla \phi|$ affect the border of the cell, while the others affect the volume. The first term on the right hand side of eq.  corresponds to the surface energy of the cell membrane, where $\gamma$ is the surface tension (note that its value is obtained assuming a cell height of 0.15 $\mu$m [@shao2012coupling]) and $\epsilon$ the width of the cell boundary. The second term ensures that the cell area is kept close to $A_0$. The last term represents the active force of the biochemical field $c(\boldsymbol{x},t)$ on the cell membrane. The parameters that control the impact of the area conservation constraint and the active force, $\beta$ and $\alpha$, respectively, are kept constant in our simulations. The term on the left hand side of eq.  accounts for cell-substrate friction. A complete derivation of eq.  can be found in [@shao2010computational]. 
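A single explicit Euler update of the phase-field equation on a periodic grid can be sketched as follows. This is our own minimal discretization with the parameter values of Table \[table1\] as defaults, not necessarily the scheme used in the original study; the function name is illustrative.

```python
import numpy as np

def phase_field_step(phi, c, dx=0.15, dt=0.002, tau=2.0, gamma=2.0,
                     eps=0.75, beta=22.22, alpha=3.0, A0=113.0):
    """One explicit Euler step of the phase-field force balance."""
    # 5-point Laplacian with periodic boundaries
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi) / dx**2
    # central-difference gradient magnitude |grad phi|
    gx = (np.roll(phi, -1, 0) - np.roll(phi, 1, 0)) / (2.0 * dx)
    gy = (np.roll(phi, -1, 1) - np.roll(phi, 1, 1)) / (2.0 * dx)
    grad = np.sqrt(gx**2 + gy**2)
    # G'(phi) for the double-well potential G = 18 phi^2 (1 - phi)^2
    dG = 36.0 * phi * (1.0 - phi) * (1.0 - 2.0 * phi)
    area = phi.sum() * dx**2          # current cell area, int phi dA
    rhs = (gamma * (lap - dG / eps**2)          # surface energy
           - beta * (area - A0) * grad          # area conservation
           + alpha * phi * c * grad)            # active force of c
    return phi + dt * rhs / tau
```

Note that both uniform states $\phi=0$ and $\phi=1$ are stationary under this update ($G'$, the Laplacian, and $|\nabla\phi|$ all vanish), so only the diffuse interface region evolves.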
Cell polarization is implemented in our model by assuming noisy bistable dynamics resulting from a non-linear reaction-diffusion equation for the concentration $c(\boldsymbol{x},t)$. The biochemical field $c(\boldsymbol{x},t)$ represents a dimensionless generic concentration variable that accounts for different subcellular components that promote the growth of filamentous actin (F-actin), such as active Ras, PI3K or PIP$_3$ [@swaney2010eukaryotic]. It is related to the intensity of the Lifeact-GFP marker for F-actin in our experiments. Imaging experiments with [*D. discoideum*]{} typically show rich dynamical patterns in the cell cortex and at the cell membrane. Deriving a detailed model that captures the full complexity of the underlying biochemical reactions is unfeasible. To overcome this difficulty, and aiming for mathematical simplicity, we take a similar approach as in previous studies [@mori2008wave; @Camley17; @Alonso18] and formulate a simple reaction-diffusion equation, where the non-linear reaction kinetics leading to bistability is modelled by a cubic polynomial in the variable $c(\boldsymbol{x},t)$. In addition, we introduce a term accounting for degradation of the biochemical component $c$. The equation reads $$\label{eq1} \frac{\partial (\phi c) }{\partial t} = \nabla \left(\phi D \nabla c\right) + \phi [k_a\, c\,(1-c)(c-\delta(c)) - \rho\, c] + \,\phi\,(1-\phi)\,\xi(x,t),$$ where $k_a$ is the reaction rate, $\rho$ the degradation rate, and $D$ the diffusivity of the biochemical component. The last term on the right hand side introduces noise at the cell membrane, which allows us to account for the stochastic nature of the reaction-diffusion processes occurring within the cell. The noise intensity along with the reaction rate are key parameters in our model that allow the transition between different forms of cell motility. 
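The bistable character of the local kinetics can be verified directly. The sketch below (function names are ours) fixes the feedback parameter at its reference value $\delta=\delta_0=0.5$, i.e., it neglects the dynamic global feedback, and locates the fixed points of the reaction term numerically:

```python
def reaction(c, k_a=2.0, delta=0.5, rho=0.02):
    """Local reaction kinetics: cubic bistable term minus degradation."""
    return k_a * c * (1.0 - c) * (c - delta) - rho * c

def fixed_points(k_a=2.0, delta=0.5, rho=0.02, n=4000):
    """Find zeros of the reaction term on [0, 1.2] by bracketing + bisection."""
    roots = []
    grid = [1.2 * i / n for i in range(n + 1)]
    for a, b in zip(grid[:-1], grid[1:]):
        fa = reaction(a, k_a, delta, rho)
        fb = reaction(b, k_a, delta, rho)
        if fa == 0.0:
            roots.append(a)
        elif fa * fb < 0.0:
            for _ in range(60):  # bisection on the bracketing interval
                m = 0.5 * (a + b)
                if reaction(a, k_a, delta, rho) * reaction(m, k_a, delta, rho) <= 0.0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    return roots
```

For $k_a=2\,s^{-1}$, $\delta=0.5$, and $\rho=0.02\,s^{-1}$ this yields fixed points near $c=0$ (stable), $c\approx0.52$ (unstable), and $c\approx0.98$ (stable), so small perturbations decay while sufficiently large patches of $c$ grow toward the upper state.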
The stochastic variable $\xi(x,t)$ follows Ornstein-Uhlenbeck dynamics $$\label{noise} \frac{d \xi}{d t} = -k_{\eta}\, \xi + \eta\,,$$ where $\eta$ is a Gaussian white noise with zero mean, $\langle \eta \rangle=0$, and a variance of $\langle \eta(x,t) \eta(x\sp{\prime},t\sp{\prime})\rangle=2\sigma^2\delta(x-x\sp{\prime})\delta(t-t\sp{\prime})$. The reaction-diffusion equation aims at reproducing the pattern activity on the substrate-attached cell membrane observed in our experiments. A control of the size of the patterns is important, since the patterned area rarely covers the entire cell membrane. Previous experiments with giant [*D. discoideum*]{} cells revealed that, after a critical size is reached, wave patterns tend to modify their shape rather than actually growing into larger areas [@Gerhardt4507]. Therefore, a dynamic control in the form of a global feedback on the parameter $\delta(c)$ is implemented. It affects the pattern dynamics and prevents the system from being completely covered by $c$, depending on the value of $C_0$. The control term reads $$\label{delta} \delta(c) = \delta_0 + M\left( \int \phi\, c\, dA - C_0 \right)\,,$$ where the parameter $C_0$ represents the average area covered by component $c$. The control mechanism shown in eq.  dynamically changes the value of the unstable fixed point of the system. This enforces that the amount of component $c$ inside the cell is constant on average. In our simulations, we will specifically show the effect of changing $C_0$ on the cell trajectories. We integrated Eqs. - on a square domain of 300$\times$300 pixels with periodic boundary conditions using standard finite differences. The pixel size is given by $\Delta x = \Delta y = 0.15$ $\mu$m and the integration time step is $\Delta t=0.002$ s. The values and definitions of the parameters of the model can be found in Table \[table1\]. 
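The Ornstein-Uhlenbeck noise can be integrated with an Euler-Maruyama step; the white-noise variance $2\sigma^2\,\delta(t-t')$ translates into a Gaussian increment of amplitude $\sigma\sqrt{2\,\Delta t}$ per step. The following is a minimal sketch at a single membrane point (our own discretization; function name and single-point restriction are illustrative), using the parameter values of Table \[table1\]:

```python
import random

def ou_path(steps=2000, dt=0.002, k_eta=0.1, sigma=0.15, seed=1):
    """Euler-Maruyama integration of the Ornstein-Uhlenbeck noise
    at a single membrane point."""
    rng = random.Random(seed)
    amp = sigma * (2.0 * dt) ** 0.5   # sqrt(2 sigma^2 dt)
    xi, path = 0.0, []
    for _ in range(steps):
        # deterministic relaxation toward zero plus a Gaussian kick
        xi += -k_eta * xi * dt + amp * rng.gauss(0.0, 1.0)
        path.append(xi)
    return path
```

The relaxation rate $k_\eta$ sets the correlation time of the noise ($1/k_\eta = 10$ s here), so the fluctuations driving the membrane are temporally correlated rather than white.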
Cell trajectories and velocities were obtained by finding and tracking the center of mass of cells from the numerical simulations. In the following sections, we will study and analyze different cell motility modes. The transitions between these modes are obtained by varying three parameters: the noise intensity, the average membrane coverage with the activatory component $c$, and the activity rate of the biochemical field. In general, the bistable kinetics of $c$ will drive the formation of patches of high concentration of $c$ on a background of low $c$ concentration. The coherent effects of these patches on directed cell locomotion will be disturbed and interrupted by the impact of noise, which will favor nucleation events and the formation of new patches in other regions of the membrane. Therefore, the dynamics of our model can be qualitatively understood as a competition between the coordinated effects of pattern formation and the randomizing impact of noise on cell locomotion.
Parameter & Value & Units & Meaning & Value Reference\
$D$ & 0.5 & $\mu m^2 /s$ & Diffusion coefficient & [@Alonso18]\
$k_a$ & 2-5 & $s^{-1}$ & Reaction rate & [@Alonso18]\
$\rho$ & 0.02 & $s^{-1}$ & Degradation rate & [@Alonso18]\
$\sigma$ & 0.15 & $s^{-2}$ & Noise strength & [@Alonso18]\
$\tau$ & 2 & $pN s \mu m^{-2}$ & Membrane dynamics time-scale & 1 [@shao2010computational], 2.62 [@ziebert2011model]\
$\gamma$ & 2 & $pN$ & Surface tension & 1 [@shao2010computational]\
$\epsilon$ & 0.75 & $\mu m$ & Membrane thickness & 1 [@shao2010computational]\
$\beta$ & 22.22 & $pN \mu m^{-3}$ & Parameter for total area constraint & [@Alonso18]\
$A_0$ & 113 & $\mu m^{2}$ & Area of the cell & [@Alonso18]\
$\delta_0$ & 0.5 & - & Bistability critical parameter & [@Alonso18]\
$M$ & 0.045 & $\mu m^{-2}$ & Strength of the global feedback input & [@Alonso18]\
$k_{\eta}$ & 0.1 & $s^{-1}$ & Ornstein-Uhlenbeck rate & [@Alonso18]\
$\alpha$ & 3 & $pN \mu m^{-1}$ & Active tension & 0.5-3 [@ziebert2011model]\
$C_0$ & 28, 56, 84 & $\mu m^{2}$ & Maximum area coverage by $c$ & [@Alonso18]\
Results ======= We performed a systematic study of the model described in the previous section. By modifying the values of the biochemical reaction rate $k_a$, the intracellular area covered by the concentration $c$, defined as $C_0$, and the noise strength $\sigma$, we obtained a diverse set of cell shapes, trajectories, and speeds. An overview of the studied cases is presented in Figure 1, where different cell shapes and average speeds are shown in the plane spanned by the parameters $\sigma$ and $C_0$. We identified four different types of motility with distinct shapes and trajectories: amoeboid cells, characterized by a motion parallel to the elongation axis; fan-shaped cells that move perpendicular to the elongation axis; intermediate states that combine features of both amoeboid and fan-shaped types; and oscillatory cells, where the concentration $c$ is almost homogeneously distributed inside the cell with only small fluctuations at the border. ![\[Figura1\] Phase diagram of the cell shapes obtained from the model and their respective average speeds for different values of the spatio-temporal noise, the reaction rate, and the maximum area coverage of the concentration $c$. The intracellular concentration $c$ is indicated by the intensity of the sky-blue color. Cell speed was computed as the sum of the velocities at each time step divided by the number of time steps. 
Colored boxes classify cells into four types: amoeboid cells inside the red box, fan-shaped and similar cases inside the green box, intermittent cases at the transition from the amoeboid to the fan-shaped regime inside the yellow box, and oscillatory cells inside the purple box.[]{data-label="fig:dfas"}](f1.png){width="100.00000%"} The transitions between the four cell types, summarized in , are presented in more detail in , where cell shapes are shown for $k_a=2s^{-1}$ and $k_a=5s^{-1}$ separately as a function of the parameter $C_0$, which is changed in smaller increments here. In both diagrams, the simulations in the left top panels show cell types that mimic the vegetative and starving states of [*D. discoideum*]{}, analyzed in more detail in [@Alonso18]. The right bottom panels in A and B correspond to stable fan-shaped and rotating fan-shaped cells, respectively, which are shapes related to particular mutations of [*D. discoideum*]{} [@asano2004keratocyte; @miao2017altering]. In between these two limits, different dynamical regimes have been obtained during the computational study of our model, some of which have also been observed in our experiments with [*D. discoideum*]{}, as we describe below. ![\[Figura2\] Phase diagrams of the cell shape obtained from the model simulations when varying area coverage and noise intensity. The computational cells are obtained for $k_a=2s^{-1}$ (A) and $k_a=5s^{-1}$ (B). The intracellular concentration $c$ is indicated by the intensity of the sky-blue color.[]{data-label="fig:allcases"}](f2.png){width="100.00000%"} Amoeboid motion. ---------------- If we set $C_0=28$ $\mu m^2$, the biochemical component $c$ roughly occupies one quarter of the total cell area, which is fixed at $113$ $\mu m^2$ (corresponding to a circular cell with radius 6 $\mu m$). Under these conditions, the concentration $c$ accumulates in the front part of the cell, reminiscent of the typical amoeboid shape of [*D. discoideum*]{} cells. 
In this scenario, we observe different trajectories depending on the noise intensity. A large noise intensity translates into slow and random cell motion, whereas a low intensity leads to faster and much more persistent motion. ![\[fig:amosimexp\] Numerical and experimental results for amoeboid motion. (A) Sequence of several snapshots of a numerical simulation with $C_0=28\mu m^2$ and $k_a=5s^{-1}$. (B) Sequence of several snapshots of a vegetative DdB NF1 KO cell showing the amoeboid phenotype. (C) Example cell trajectories tracked from four simulations over 600 $s$, two with $k_a=2s^{-1}$ and two with $k_a=5s^{-1}$; the two short trajectories correspond to $k_a=2s^{-1}$ and the two larger ones to $k_a=5s^{-1}$, in both cases with $C_0=28\mu m^2$. (D) Examples of cell trajectories tracked from four vegetative DdB NF1 KO cells showing the amoeboid phenotype in experiments over 600 $s$.](f3.jpg){width="100.00000%"} A comparison between the dynamics of living [*D. discoideum*]{} cells under amoeboid conditions and cell dynamics generated by our model is shown in . The numerical simulations shown in A and C correspond to low values of $C_0$ and a noise intensity of 100$\%$ of the total given in Table 1. C, showing trajectories for $k_a=2s^{-1}$ and $k_a=5s^{-1}$, demonstrates that the parameter $k_a$ controls cell polarization [@Alonso18]. A value of $k_a=2s^{-1}$ induces a diffuse trajectory in a small area of space and a random appearance of protrusions formed by fluctuating amounts of concentration $c$ along the cell membrane. For $k_a=5s^{-1}$ the cell explores larger areas due to a continuous and more stable accumulation of $c$ in one region of the membrane that sets the direction of motion (see A). A representative sequence of snapshots of an experimental observation of amoeboid motion of a [*D. discoideum*]{} cell is shown in B, where the accumulation of actin is clearly visible at the cell front. 
In addition, we present examples of individual experimentally recorded cell trajectories that cover the behavioral diversity of amoeboid [*D. discoideum*]{} cells in D.

Intermediate dynamics.
----------------------

By increasing the parameter $C_0$ to $56$ $\mu m^2$, which corresponds to half of the total area of the cell covered by the biochemical species $c$, and maintaining the noise intensity between 75$\%$ and 100$\%$ of the total given in Table 1, we find a different motile behaviour in our model simulations. Under these conditions, the results of the numerical simulations resemble the amoeboid shapes described in the previous section; however, the repeated appearance of an additional large protrusion strongly modifies the trajectories of the simulated cells, see A. Initially, the amount of $c$ is concentrated in one region at the cell border, clearly defining a leading edge. From time to time, a part of the total amount of $c$ changes position at the cell border, thus triggering an instability of the initial leading edge. This drives the formation of a new protrusion, where eventually most of the total amount of $c$ will accumulate and define a new cell front. In B, we show an example of similar intermittent behaviour that was observed in our experiments with [*D. discoideum*]{} cells, which frequently switch from amoeboid to fan-shaped motility and vice versa. C, where we present a comparison between an experimental trajectory and three trajectories obtained from numerical simulations, demonstrates how these dynamics generate trajectories with abrupt changes in direction.

![\[Figura1\] Numerical and experimental results for the intermediate unstable case. (A) Sequence of six snapshots of a numerical simulation with $C_0=56\mu m^2$, $k_a=5s^{-1}$. (B) Sequence of snapshots of a DdB NF1 KO cell starved for 4 hours prior to imaging, showing the intermediate phenotype.
(C) Comparison between the trajectories of the center of mass of three numerical simulations (solid lines) and a trajectory of the center of mass of the DdB NF1 KO cell shown in B (dotted line). In all four cases, cells were tracked over $1032 s$.[]{data-label="fig:unssimexp"}](f4.jpg){width="80.00000%"}

By further increasing the parameter $C_0$ to $84$ $\mu m^2$, corresponding to 75% of the total area of the cell covered by concentration $c$, another distinct behaviour is obtained in the numerical simulations, see the purple box in . Keeping $k_a=2s^{-1}$ and a noise intensity similar to the previous case, an oscillatory behaviour of the cell border is observed due to saturation of $c$ inside the cell. Noise-driven small displacements and a [*circular*]{} cell shape characterize this regime. It resembles previous experimental observations of a so-called pancake phenotype [@edwards2018insight].

Fan-shaped motion.
------------------

For the values of $C_0$ employed in the previous section but low noise intensity, the shape of the numerically obtained cells becomes more elongated perpendicular to the direction of motion than parallel to it. Moreover, their elongated shape is stable over time, and they move in a highly persistent fashion. Together these features characterize the so-called fan-shaped motion of [*D. discoideum*]{} cells [@miao2017altering]. The overall appearance and motion characteristics of fan-shaped cells share many similarities with keratocytes, even though the internal organization of the motility apparatus is clearly different. For $C_0=56\,\mu m^2$, corresponding to a concentration $c$ covering half of the total cell area, and a noise intensity that is reduced to 50$\%$ or less of its maximal value, a rounded elongated shape, reminiscent of a keratocyte, is observed in the numerical simulations, see for example the results in .
The trajectories are straight and persistent for lower noise intensities and become more erratic for high noise levels.

![\[Figura1\] Numerical and experimental results for fan-shaped motility. (A) Sequence of three snapshots taken from a numerical simulation of the computed cell with parameter values $C_0=84\mu m^2$, $k_a=2s^{-1}$ and noise intensity set to $10$ percent. (B) Snapshots of a DdB NF1 KO cell starved for 4 hours prior to imaging, showing the fan-shaped phenotype. (C) Four examples of trajectories of the center of mass of numerically simulated fan-shaped cells tracked over $600$ $s$. (D) Four trajectories of the center of mass of DdB NF1 KO cells starved for 4 hours prior to imaging, showing the fan-shaped phenotype, tracked over 600 $s$ in experiments.[]{data-label="fig:kersimexp"}](f5.jpg){width="100.00000%"}

By further increasing the covered area to $C_0=84\,\mu m^2$, we obtain similar fan-shaped cells. For $k_a=2s^{-1}$ the simulation produces rounded cell shapes that move at a reduced speed in a highly random fashion. Comparing A and B reveals the qualitative similarities between fan-shaped cells obtained from simulations under these conditions and the experimentally observed dynamics of fan-shaped [*D. discoideum*]{} cells. The model satisfactorily reproduces the experimental features of the cell motion. The four different realizations of trajectories, generated in numerical simulations and presented in C, display persistent motion similar to that of the straight cell trajectories observed in experiments, see D. These trajectories resemble the trajectories of fan-shaped cells with $C_0=56\mu m^2$ and intermediate noise intensity, as discussed previously. Thus, depending on the value of the reaction rate and the noise intensity, we see remarkable differences between cell shapes and trajectories.

Rotational trajectories of fan-shaped cells.
--------------------------------------------

Numerical simulations with $C_0=84\mu m^2$ and $k_a=5s^{-1}$ produce fan-shaped cells with a more elongated and curved shape. Depending on the noise intensity, different scenarios are obtained, ranging from irregular shapes and trajectories at high levels of noise to regular shapes and circular trajectories for lower noise levels (see ). Under these conditions the trajectories may also reveal rotational dynamics. In A and B, we show examples of rotational dynamics observed in a simulation and in an experiment, respectively, finding a qualitative similarity between them. The corresponding trajectories are displayed in C for comparison, along with a third trajectory of another simulation. Despite the differences in radius and frequency of rotation between simulations and experiment, the main characteristics of a periodic rotary motion are reproduced. Note that the concentration patterns inside the simulated cells resemble a half-moon shape, which is typical for fan-shaped cells with both straight and rotational trajectories.

![\[Figura1\] Numerical and experimental results for the rotational fan-shaped case. (A) Sequence of four snapshots obtained in the numerical simulations for $C_0=84\mu m^2$, $k_a=5s^{-1}$ and noise intensity set to $10$ percent. (B) Sequence of snapshots of a DdB NF1 KO cell starved for 5 hours prior to imaging, showing a fan-shaped cell with rotational movement. (C) Comparison of the trajectories of two simulations (solid lines) and one experimental realization (dotted line). The three lines correspond to trajectories tracked over more than $2000 s$.[]{data-label="fig:rotsimexp"}](f6.jpg){width="100.00000%"}

The transition from straight to rotational motion for different values of $C_0$ and $k_a$ is shown in a phase diagram in . The noise intensity was kept constant at 10$\%$ in all cases.
For low values of $C_0$ the trajectories of the simulated cells are straight and only sometimes exhibit a slight curvature, depending on the realization and the parameter values. With increasing parameter $C_0$, irregular trajectories are observed, combining straight pieces with rapid rotations and giving rise to a highly erratic motion. For values of $C_0$ between 80$\mu m^2$ and 90$\mu m^2$ and $k_a$ larger than 3$s^{-1}$, the simulations produce rotating cells as shown in . For even higher values of $C_0$, after a small region with curved trajectories, the cell surface is almost saturated with the concentration $c$, and due to the low noise intensity, no significant net motion is observed. Thus, we have found that rotational trajectories arise for specific combinations of the reaction rate and the area coverage, and that they are favoured by low values of the noise intensity. Finally, we also investigated whether rotational trajectories can be induced by other factors. In , we present a trajectory phase diagram spanned by the diffusion coefficient ($D$) and the surface tension ($\gamma$). The simulations indicate that rotational modes giving rise to circular trajectories are obtained by increasing the diffusion coefficient and by reducing the surface tension. This analysis agrees with the results obtained with a similar model for keratocyte dynamics described in [@Camley17].

Cell shapes and velocities in numerical simulations are comparable to typical experimental values.
--------------------------------------------------------------------------------------------------

As we described in the previous sections, the shapes and trajectories of the cells vary strongly from one case to another. Amoeboid cells produce fluctuating displacements, while fan-shaped cells exhibit persistent and also rotational motion. Finally, an intermediate case between the amoeboid and fan-shaped phenotypes produces motion with characteristic features of both cases.
The great majority of shapes and dynamics have also been observed in experiments, and a good qualitative agreement between experimental and numerical results was found. In this section we perform a more quantitative comparison between the experimental and the numerical results, and we define an index that clearly differentiates amoeboid and fan-shaped cells and permits us to characterize the transition between both cases. There are several indices that are commonly used in studies of cell migration [@gorelik2014quantitative], such as the directionality ratio, the mean square displacement (MSD), and the directional autocorrelation. Here we compute the directionality ratio to characterize the transition between amoeboid and fan-shaped motion. It is defined as the distance between the starting point and the endpoint of the cell trajectory, divided by the length of the actual trajectory: $$\label{DRa} \ DR = \frac{\left | \vec{X_N}-\vec{X_0}\right |}{\sum_{n=0}^{N-1} \left | \vec{X_{n+1}}-\vec{X_n}\right |},$$ where $\vec{X_0}$ and $\vec{X_N}$ are the initial and final positions, respectively. This ratio is close to 1 for a straight trajectory and close to 0 for a highly curved trajectory. In the following results, DR was computed for every $\Delta t$ using the positions of the cell trajectories. First, we compare the directionality ratio for a set of simulations corresponding to vegetative amoeboid cells ($k_a=2s^{-1}$), which produce low values of the directionality ratio and low velocities because of their random dynamics. Here, the relation between random motion and velocity can be interpreted as follows. A low noise intensity or a large activity rate will lead to fewer nucleation events of new patches of $c$ at the membrane, favoring stable movement in one particular direction, which will result in higher velocities.
On the contrary, a high noise or a small activity rate will generate more nucleation events at different positions of the membrane, which will compete with each other, thus pushing the cell membrane in different directions and resulting in lower instantaneous velocities. Second, a set of simulations corresponding to starvation-developed amoeboid cells ($k_a=5s^{-1}$) is considered, which produce intermediate values of the directionality ratio because of their more persistent motion. Finally, we analyze a set of fan-shaped cells, which produce large directionality ratios because of their highly persistent movement. All these results are presented in A, where we see that the three types of cells are located in different regions of the parameter space. The numerical results for the directionality ratio are in good agreement with the experimental measurements of this quantity, see B, where the experimental data is plotted for a similar number of cells. Fan-shaped cells in both cases have large directionality ratios due to their persistent motion. The set of amoeboid cells naturally divides into two subsets: one with low directionality ratios and low speeds, and the other one with larger directionality ratios (although lower than in the fan-shaped cases) and speeds spreading over a wide range. In the numerical simulations, we have produced equivalent subsets by changing the parameter $k_a$. Further similarities between the numerical results and the experimental realizations are found when comparing panels C and D in . Here, the averages of the directionality ratio over time, with their respective errors, are shown for the amoeboid and the fan-shaped cells, for the numerical simulations in C and the experimental recordings in D. For a better comparison, we included the experimental curves from panel D in C and the simulation curves from panel C in D, in both cases as dotted lines and with their respective errors.
The good agreement between the two cases is remarkable, although a systematic slight decrease of the directionality ratio appears in the experimental case, which may be related to the noisier dynamics of the cell outline in the experimental measurements. In E we display the directionality ratio for several cases obtained in the simulations and for the two experimental cases discussed above. We compare the directionality ratios in a box plot representation of the different cases. First, we observe that the circular motion of fan-shaped cells gives rise to the smallest value of the directionality ratio, as expected, because the motion is confined to a small region of space. The second observation is that the intermediate cases, discussed in the previous sections, also give rise to intermediate values of the directionality ratio. The largest directionality ratios are observed for the stable fan-shaped cells. Note that we did not take into account other experimental cases because of the low number of recordings available in some of the experiments. Finally, due to the cell shape diversity in experiments and simulations, we have also analyzed the shape in a quantitative way. Quantities such as aspect ratio, ellipticity, or circularity are commonly computed when comparing cell shapes, and cell morphology has been analyzed in several earlier works [@collenburg2017activity; @lustig2019noninvasive; @lee2014automated; @frank2016frequent]. In this work, we focus on the circularity measure, which quantifies how closely the shape of a marked region approaches that of a circle. Circularity takes values between 0 and 1, where 1 corresponds to a perfect circle. Mathematically, the circularity is defined as $CR=\frac{4\pi A}{P^2}$, where $A$ is the area and $P$ is the perimeter of the cell.
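Both motility indices introduced above, the directionality ratio and the circularity, can be computed directly from tracked positions and segmented cell outlines. The following is a minimal Python sketch; the function and variable names are our own illustration, not part of the analysis pipeline used in this work:

```python
import numpy as np

def directionality_ratio(positions):
    """DR: net displacement divided by total path length.

    positions: (N+1, 2) array of center-of-mass coordinates.
    Returns a value in [0, 1]; 1 corresponds to a straight path.
    """
    positions = np.asarray(positions, dtype=float)
    net = np.linalg.norm(positions[-1] - positions[0])
    total = np.linalg.norm(np.diff(positions, axis=0), axis=1).sum()
    return net / total if total > 0 else 0.0

def circularity(area, perimeter):
    """CR = 4*pi*A / P^2; equals 1 for a perfect circle."""
    return 4.0 * np.pi * area / perimeter**2

# A straight trajectory gives DR = 1.
print(directionality_ratio([(0, 0), (1, 0), (2, 0), (3, 0)]))  # 1.0

# A circle of radius r has A = pi r^2 and P = 2 pi r, so CR = 1.
r = 2.0
print(circularity(np.pi * r**2, 2 * np.pi * r))  # 1.0
```

In practice, the area and perimeter would be measured per frame from the segmented cell mask, yielding one circularity value per snapshot along the trajectory.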
Along the trajectories of the cells, we calculate in each frame the value of the circularity for amoeboid and fan-shaped cells using the images obtained from simulations and experiments. In we show a box plot representing the circularity ratio for the cases mentioned above. Here we see the tendency of fan-shaped cells to oscillate around values close to 1 due to their rounder shape. On the other hand, amoeboid cells present a larger variation in the value of their circularity parameter because of their irregular, fluctuating shape. From the results we also notice a good agreement between simulations and experiments for both scenarios. Only a small difference appears in the fan-shaped case, where we obtained a larger number of outliers in the analysis of the experimental data.

![Time series of the log amplitudes of displacement vectors. (A) Amoeboid and (B) fan-shaped cells recorded in experiments. (C) Amoeboid and (D) fan-shaped cells produced by numerical model simulations. We consider 15 different trajectories for each type of cell. \[fig:timeseries\_amplitude\]](f11.pdf){width="100.00000%"}

Correlation analysis of cell trajectories
-----------------------------------------

To compare the cell trajectories obtained from model simulations and experimental recordings in more detail, we analyzed the correlation structure of the trajectories of both amoeboid and fan-shaped cells. For this we chose a representation in polar coordinates, so that all the displacement vectors that connect the adjacent data points of a trajectory are represented by an amplitude (absolute value of the displacement) and a phase (angle with respect to the laboratory frame). The time series of the log amplitudes and the phases are displayed in and , respectively. In the case of the log amplitudes, the time series are stationary and fluctuate around a constant mean value.
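The polar representation described above can be obtained from a tracked trajectory as follows. This is an illustrative sketch with our own naming, assuming positions sampled at a fixed time step:

```python
import numpy as np

def displacement_polar(positions):
    """Decompose a trajectory into per-step log amplitude and phase.

    positions: (N+1, 2) array of tracked positions at fixed time steps.
    Returns two length-N time series: the log of the displacement
    amplitude and the unwrapped phase (angle in the laboratory frame).
    """
    steps = np.diff(np.asarray(positions, dtype=float), axis=0)
    log_amplitude = np.log(np.linalg.norm(steps, axis=1))
    # Unwrap so that slow drifts of the direction of motion stay
    # visible instead of jumping by 2*pi.
    phase = np.unwrap(np.arctan2(steps[:, 1], steps[:, 0]))
    return log_amplitude, phase

# On a circular arc sampled uniformly, the step length is constant
# (stationary log amplitude) while the phase drifts steadily.
t = np.linspace(0.0, np.pi / 2, 50)
xy = np.column_stack([np.cos(t), np.sin(t)])
log_a, phi = displacement_polar(xy)
print(np.allclose(log_a, log_a[0]))  # True
```

The unwrapped phase is what makes the drift over ranges larger than $2\pi$ visible in the amoeboid time series.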
Overall, the magnitudes and time scales of the fluctuations are comparable between simulations and experiments and also between the amoeboid and fan-shaped cases. Only for the simulated fan-shaped cells is the magnitude of the fluctuations smaller. In contrast to the amplitudes, the time series of the phases are not stationary. They furthermore reflect that the amoeboid cases reorient much more rapidly than the fan-shaped cases, i.e. in the given time interval, the phase drifts over a much larger range for the amoeboid cells than for the fan-shaped cells. This difference is particularly pronounced for the simulated fan-shaped cells, where the phase remains almost constant over the entire measurement time, see D.

![Time series of the phases of displacement vectors. (A) Amoeboid and (B) fan-shaped cells recorded in experiments. (C) Amoeboid and (D) fan-shaped cells produced by numerical model simulations. We consider 15 different trajectories for each type of cell. \[fig:timeseries\_phase\]](f12.pdf){width="100.00000%"}

From the time series, we computed the corresponding autocovariances. For a real-valued scalar time series $X_i$, $i=0,\dots, n-1$ of length $n$, we take the following estimator of the autocovariance ($|k| < n-1$): $$\label{ACV} \hat\gamma(k) = \frac{1}{n-|k|} \sum_{i=0}^{n-|k|-1} ( X_{i+|k|} - \mu)(X_i - \mu),\quad \mu = \frac{1}{n} \sum_{i=0}^{n-1} X_i.$$ In , the autocovariance of the log amplitude is shown for both experimental and model trajectories of amoeboid and fan-shaped cells. For the log amplitude of the model trajectories, we observe average correlation times that are slightly larger than for the experimental trajectories; they differ by a factor of approximately two. Nevertheless, in all cases the correlation time is rather short (of the order of seconds). In particular, no significant difference is observed between the amoeboid and fan-shaped cases.
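The autocovariance estimator $\hat\gamma(k)$ defined above can be implemented directly. A minimal sketch (our own naming), demonstrated on white noise, for which the autocovariance is the variance at lag zero and close to zero at all other lags:

```python
import numpy as np

def autocovariance(x, max_lag):
    """Estimate gamma(k) = (1/(n-k)) * sum_i (x_{i+k} - mu)(x_i - mu)
    for lags k = 0, ..., max_lag - 1, with mu the sample mean."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x - x.mean()
    return np.array([np.dot(d[k:], d[:n - k]) / (n - k)
                     for k in range(max_lag)])

# For uncorrelated unit-variance noise, gamma(0) is close to 1 and
# gamma(k > 0) is close to 0.
rng = np.random.default_rng(0)
gamma = autocovariance(rng.standard_normal(10_000), max_lag=5)
print(gamma)
```

Note the $1/(n-|k|)$ normalization, which matches the estimator above; other conventions divide by $n$ at all lags.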
Note, however, that the variances differ between the amoeboid and fan-shaped cases, which is particularly pronounced in the case of the model trajectories.

![Autocovariance calculated from the time series of the log amplitudes of displacement vectors shown in . (A) Amoeboid and (B) fan-shaped cells recorded in experiments. (C) Amoeboid and (D) fan-shaped cells produced by numerical model simulations. We consider 15 different trajectories for each type of cell. Thick dashed red lines correspond to the mean autocovariances. \[fig:covariance\_amplitude\]](f13.pdf){width="100.00000%"}

In the case of the phase, we calculated the autocovariance based on the time series of the rate of phase change between adjacent displacement vectors (the time series of the rate of phase change is stationary, while the phase time series is not). In , the autocovariance of the rate of phase change is shown for all cases. While in the amoeboid cases and for the simulated fan-shaped cells the correlations decay to zero within 4 s, the fan-shaped cells in the experiment show markedly larger correlation times, see B. On the other hand, the variances are comparable in all cases except for the simulated fan-shaped cells, where the variance is smaller by a factor of 100 than in the other cases, see D.

![Autocovariance calculated from the time series of the phase change rate of displacement vectors. (A) Amoeboid and (B) fan-shaped cells recorded in experiments. (C) Amoeboid and (D) fan-shaped cells produced by numerical model simulations. We consider 15 different trajectories for each type of cell. Thick dashed red lines correspond to the mean autocovariances. \[fig:covariance\_phase\]](f14.pdf){width="100.00000%"}

From our correlation analysis, we thus conclude that the cell trajectories produced by our model simulations correctly capture the main correlation structure of the experimental trajectories. Only in the case of the fan-shaped cells can qualitative differences be observed.
In the experimental case, an increased correlation time in the change rate of the phase leads to smooth and persistent trajectories. The model trajectories, in contrast, do not show increased correlations in the phase change rate. Here, smooth and straight trajectories originate from a strongly decreased variance of the phase change rate.

Discussion
==========

We have studied a mathematical model that consists of biochemical dynamics in the form of a bistable reaction-diffusion equation, including a noise description based on an Ornstein-Uhlenbeck process. The biochemical dynamics is coupled to a phase field to account for the deformable cell border. The model was previously introduced in [@Alonso18] to characterize the dynamics of vegetative and starvation-developed amoeboid [*D. discoideum*]{} cells. In that case, we found good agreement between the cell shape evolution, the intracellular patterns, and the center of mass movement. Here, we have systematically explored the entire relevant range of the parameter space and qualitatively reproduced different motility regimes observed in [*D. discoideum*]{} cells. The phase diagram of the model was explored by changing parameters such as the noise strength, the coverage area, and the rate responsible for cell polarization, giving rise to a series of different motility scenarios, as presented above. The numerical results reproduced the dynamics of [*D. discoideum*]{} cells that had been difficult for experimentalists to catalog. In general, the comparison of trajectories, cell shapes, and cell speeds between numerical simulations of the model and experimental data showed good agreement. We also performed a more quantitative comparison, based on the correlation structure of the amplitude and phase of the displacement vectors.
Also in this case, good agreement was found; only minor differences in the case of the fan-shaped cells revealed that the model imposes smoothness on the trajectories of fan-shaped cells by reducing the variance of the phase change rate, while the experimental data show an increased correlation time instead. In future studies, longer model trajectories will be generated to explore the anomalous behavior in the diffusive properties that has recently been reported also for [*D. discoideum*]{} cells [@makarava_quantifying_2014; @cherstvy_non-gaussianity_2018]. We should distinguish between the characteristic times obtained, for example, from the autocorrelation functions and the persistence time, which refer to different concepts. In contrast to the calculations using autocorrelation functions, persistence times are typically calculated from mean square displacements (MSD) [@takagi2008functional; @hiraiwa2014relevance]. The behavior of the model critically depends on the choice of the model parameters. Together with noise, realistic dynamics of intracellular patterns and cell shape changes are produced when the correct characteristic temporal scales are used. The parameter $C_0$ corresponds to the area covered by the biochemical component $c$. It takes into account membrane deformations due to local accumulation of the biochemical component and, together with the reaction rate, reproduces variations in cell speed and persistence of motion. Our model shows similarities to the bistable reaction-diffusion model coupled to a phase field described in [@Camley17], where the motion of keratocytes is investigated.
In particular, a systematic study of the transition between straight and circular motion of cells moving in a keratocyte-like fashion, equivalent to our fan-shaped cells, was performed there: a high tension tends to stabilize the cell motion to a straight trajectory, whereas large diffusion coefficients or small velocities tend to push the cell towards rotation [@Camley17]. In the study presented here, we have investigated the connection between keratocyte-like behavior and amoeboid motility, apparently associated with different mechanisms and cell types. We studied the transition between the persistent fan-shaped phenotype (keratocyte-like) and the amoeboid case, also showing intermediate dynamics, and directly compared our model to experimental data obtained from recordings of [*D. discoideum*]{}. With respect to the transition between straight and circularly moving fan-shaped cells, our results shown in  are consistent with the earlier predictions [@Camley17]. Transitions between amoeboid and fan-shaped motility modes have also been described recently in a wave-generating two-component reaction-diffusion model by Cao *et al.* [@cao2019plasticity] that specifically emphasizes the role of cell deformation mechanics. While our results are compatible with the findings of Cao *et al.*, our model clearly shows that the richness of different motility modes does not require intracellular traveling waves but can already be observed for intracellular kinetics that relies on a single dynamical variable only. In contrast to these simple modeling approaches based on generic reaction-diffusion systems, there are also more complex descriptions following a different biophysical approach, including more detailed biochemical reactions and mechanical forces. For example, keratocyte motion has been extensively studied in [@maree2006polarization], combining biochemical and mechanical aspects to model how epidermal fish keratocytes form a leading edge, polarize, and maintain their shape and polarity.
There are also more complex models in which the transitions between straight and circular trajectories have been studied. For example, in [@Nickaeen17] a minimal mechanical model is presented that consists of two equations, one for the force balance of the actin network and a second, reaction-diffusion equation that describes the concentration of myosin; there, the transitions occur for small values of the Péclet number. On the other hand, there are reductionist approaches to keratocyte motion, see for example [@Camley17] and [@ziebert2011model], which display similar levels of complexity as our model. Both types of descriptions contribute to a better understanding of the experimentally observed dynamics and can be readily extended in different directions. For example, we are currently working on the implementation of more complex biochemical models into the phase field description. In particular, we can extend the model to more closely recover the detailed dynamics of certain intracellular reactions, such as, for example, the phosphorylation of PIP$_2$ to PIP$_3$ or the dynamics of the associated kinases and phosphatases that affect cell polarization, membrane deformation, and pseudopod formation [@van2017coupled; @fukushima2019excitable]. Furthermore, the phase field framework will also allow us to implement cell-cell interactions [@lober_collisions_2015], the behavior of cells under confined stimuli [@gerhardt_signaling_2014], in enclosed environments [@nagel_geometry-driven_2014; @winkler_confinement_2019], in the presence of external chemical gradients [@najem2013phase], and also in three dimensions [@cao_minimal_2019]. Here, we have restricted ourselves to a comparison of our model to [*D. discoideum*]{} cells.
However, we found that our model is also able to describe more diverse situations observed, for example, in keratocytes, where, close to the transition to circular motion, bipedal motion was observed that relies on local alternation of cell displacements during persistent motion [@Barnhart10].

Conclusions
===========

In summary, we have studied a model based on a bistable reaction-diffusion equation with Ornstein-Uhlenbeck noise for the intracellular biochemistry, coupled to a dynamical phase field that describes the cell membrane dynamics. The results obtained from the numerical integration of the model show that essential features of the amoeboid and fan-shaped motion observed in experiments with motile [*D. discoideum*]{} cells are reproduced by our model. We found close qualitative agreement between the numerical simulations and the experiments and, in some cases, motility measures such as the directionality ratio even showed quantitative agreement. The study of the correlation structure of the cell displacements furthermore allowed us to perform a quantitative comparison of the cell trajectories from our model simulations with experimental data. Based on our simulations, we furthermore conclude that a continuous transition between amoeboid and fan-shaped motion is a realistic scenario, as some of the intermediate states predicted by the simulations have been confirmed in experiments with [*D. discoideum*]{} cells. We speculate that the same model can also be employed to describe the motion of other cell types with different motion strategies, such as keratocytes or fibroblasts.

Acknowledgment {#acknowledgment .unnumbered}
==============

E.M. and C.B. acknowledge funding by the Deutsche Forschungsgemeinschaft in the framework of Sonderforschungsbereich 1294, project B02. S.F. and C.B. gratefully acknowledge funding by the Deutsche Forschungsgemeinschaft in the framework of Sonderforschungsbereich 937, project A09. S.A. and E.M.
acknowledge support from MICINN (Spain) and FEDER (European Union) under project PGC2018-095456-B-I00. E.M. also acknowledges financial support from CONACYT. F.F. acknowledges financial support from the [*Juan de la Cierva*]{} programme (grant IJC2018-038463-I) of the Spanish MICINN, from the [*Obra Social la Caixa*]{} through the programme [*Recerca en Matemàtica Col·laborativa*]{}, and from the CERCA Programme of the [*Generalitat de Catalunya*]{}.

[1]{} Jun Allard and Alex Mogilner. Traveling waves in actin dynamics and cell motility. , 25(1):107–115, 2013. Sergio Alonso, Maike Stange, and Carsten Beta. Modeling random crawling, membrane deformation and intracellular polarity of motile amoeboid cells. , 13(8):1–22, 08 2018. Steven J Altschuler, Sigurd B Angenent, Yanqin Wang, and Lani F Wu. On the spontaneous emergence of cell polarity. , 454(7206):886, 2008. Sarah J. Annesley and Paul R. Fisher. Dictyostelium discoideum—a model for many reasons. , 329(1-2):73–91, September 2009. Yukako Asano, Takafumi Mizuno, Takahide Kon, Akira Nagasaki, Kazuo Sutoh, and Taro QP Uyeda. Keratocyte-like locomotion in amiB-null dictyostelium cells. , 59(1):17–27, 2004. Erin L. Barnhart, Greg M. Allen, Frank J[ü]{}licher, and Julie A. Theriot. Bipedal locomotion in crawling cells. , 98(6):933 – 942, 2010. Carsten Beta. Bistability in the actin cortex. , 3(1):12, 2010. Carsten Beta, Gabriel Amselem, and Eberhard Bodenschatz. A bistable mechanism for directional sensing. , 10(8):083015, 2008. Carsten Beta and Karsten Kruse. Intracellular oscillations and waves. , 8:239–264, 2017. Gareth Bloomfield, David Traynor, Sophia P Sander, Douwe M Veltman, Justin A Pachebat, and Robert R Kay. Neurofibromin controls macropinocytosis and phagocytosis in dictyostelium. , 4:e04940, 2015. William J Boettinger, James A Warren, Christoph Beckermann, and Alain Karma. Phase-field simulation of solidification. , 32(1):163–194, 2002. Brian A Camley, Yanxiang Zhao, Bo Li, Herbert Levine, and Wouter-Jan Rappel.
Periodic migration in a physical model of cells on micropatterns. , 111(15):158102, 2013. Brian A. Camley, Yanxiang Zhao, Bo Li, Herbert Levine, and Wouter-Jan Rappel. Crawling and turning in a minimal reaction-diffusion cell motility model: Coupling cell shape and biochemistry. , 95:012401, Jan 2017. Yuansheng Cao, Elisabeth Ghabache, Yuchuan Miao, Cassandra Niman, Hiroyuki Hakozaki, Samara L. Reck-Peterson, Peter N. Devreotes, and Wouter-Jan Rappel. A minimal computational model for three-dimensional cell migration. , 16(161):20190619, December 2019. Yuansheng Cao, Elisabeth Ghabache, and Wouter-Jan Rappel. Plasticity of cell migration resulting from mechanochemical coupling. , 8, 2019. Andrey G. Cherstvy, Oliver Nagel, Carsten Beta, and Ralf Metzler. Non-[Gaussianity]{}, population heterogeneity, and transient superdiffusion in the spreading dynamics of amoeboid cells. , 20(35):23034–23054, 2018. Lena Collenburg, Niklas Beyersdorf, Teresa Wiese, Christoph Arenz, Essa M Saied, Katrin Anne Becker-Flegler, Sibylle Schneider-Schaulies, and Elita Avota. The activity of the neutral sphingomyelinase is important in t cell recruitment and directional migration. , 8:1007, 2017. Marc Edwards, Huaqing Cai, Bedri Abubaker-Sharif, Yu Long, Thomas J Lampert, and Peter N Devreotes. Insight from the maximal activation of the signal transduction excitable network in dictyostelium discoideum. , 115(16):E3722–E3730, 2018. Sven Flemming, Francesc Font, Sergio Alonso, and Carsten Beta. How cortical waves drive fission of motile cells. , 117(12):6330–6338, 2020. R Folch, J Casademunt, A Hern[á]{}ndez-Machado, and L Ramirez-Piscina. Phase-field model for hele-shaw flows with arbitrary viscosity contrast. i. theoretical approach. , 60(2):1724, 1999. Viktoria Frank, Stefan Kaufmann, Rebecca Wright, Patrick Horn, Hiroshi Y Yoshikawa, Patrick Wuchter, Jeppe Madsen, Andrew L Lewis, Steven P Armes, Anthony D Ho, et al. 
Frequent mechanical stress suppresses proliferation of mesenchymal stem cells from human bone marrow without loss of multipotency. , 6(1):1–12, 2016. Seiya Fukushima, Satomi Matsuoka, and Masahiro Ueda. Excitable dynamics of ras triggers spontaneous symmetry breaking of pip3 signaling in motile cells. , 132(5):jcs224121, 2019. Matthias Gerhardt, Mary Ecke, Michael Walz, Andreas Stengl, Carsten Beta, and G[ü]{}nther Gerisch. Actin and pip3 waves in giant cells reveal the inherent length scale of an excited state. , 127(20):4507–4517, 2014. Matthias Gerhardt, Michael Walz, and Carsten Beta. Signaling in chemotactic amoebae remains spatially confined to stimulated membrane regions. , 127(23):5115–5125, December 2014. Nathan W Goehring and Stephan W Grill. Cell polarity: mechanochemical patterning. , 23(2):72–80, 2013. Roman Gorelik and Alexis Gautreau. Quantitative and unbiased analysis of directional persistence in cell migration. , 9(8):1931, 2014. Peter J. M. Van Haastert and Peter N. Devreotes. Chemotaxis: signalling the way forward. , 5(8):626–634, August 2004. Tetsuya Hiraiwa, Akihiro Nagamatsu, Naohiro Akuzawa, Masatoshi Nishikawa, and Tatsuo Shibata. Relevance of intracellular polarity to accuracy of eukaryotic chemotaxis. , 11(5):056002, 2014. Pablo A Iglesias and Peter N Devreotes. Navigating through models of chemotaxis. , 20(1):35–40, 2008. Alexandra Jilkine and Leah Edelstein-Keshet. A comparison of mathematical models for polarization of single eukaryotic cells in response to guided cues. , 7(4):e1001121, 2011. Julien Kockelkoren, Herbert Levine, and Wouter-Jan Rappel. Computational approach for modeling intra-and extracellular dynamics. , 68(3):037702, 2003. Dirk Alexander Kulawiak, Brian A Camley, and Wouter-Jan Rappel. Modeling contact inhibition of locomotion of colliding cells migrating on micropatterned substrates. , 12(12):e1005239, 2016. Chen-Yu Lee, Sukryool Kang, Andrew D Chisholm, and Pamela C Cosman. 
Automated cell junction tracking with modified active contours guided by sift flow. In [*2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI)*]{}, pages 290–293. IEEE, 2014. Jakob L[ö]{}ber, Falko Ziebert, and Igor S Aranson. Modeling crawling cell movement on soft engineered substrates. , 10(9):1365–1373, 2014. Jakob L[ö]{}ber, Falko Ziebert, and Igor S. Aranson. Collisions of deformable cells lead to collective migration. , 5:9172, March 2015. Maayan Lustig, Qingling Feng, Yohan Payan, Amit Gefen, and Dafna Benayahu. Noninvasive continuous monitoring of adipocyte differentiation: From macro to micro scales. , 25(1):119–128, 2019. Natallia Makarava, Stephan Menz, Matthias Theves, Wilhelm Huisinga, Carsten Beta, and Matthias Holschneider. Quantifying the degree of persistence in random amoeboid motion based on the [Hurst]{} exponent of fractional [Brownian]{} motion. , 90(4):042703, October 2014. Athanasius FM Mar[é]{}e, Alexandra Jilkine, Adriana Dawes, Ver[ô]{}nica A Grieneisen, and Leah Edelstein-Keshet. Polarization and movement of keratocytes: a multiscale modelling approach. , 68(5):1169–1211, 2006. Satomi Matsuoka and Masahiro Ueda. Mutual inhibition between pten and pip3 generates bistability for polarity in motile cells. , 9(1):4481, 2018. Yuchuan Miao, Sayak Bhattacharya, Marc Edwards, Huaqing Cai, Takanari Inoue, Pablo A Iglesias, and Peter N Devreotes. Altering the threshold of an excitable signal transduction network changes cell migratory modes. , 19(4):329, 2017. Alex Mogilner, Jun Allard, and Roy Wollman. Cell polarity: quantitative modeling as a tool in cell biology. , 336(6078):175–179, 2012. Alex Mogilner, Erin L. Barnhart, and Kinneret Keren. Experiment, theory, and the keratocyte: [An]{} ode to a simple model for cell motility. , page S1084952119300369, November 2019. Yoichiro Mori, Alexandra Jilkine, and Leah Edelstein-Keshet. Wave-pinning and cell polarity from a bistable reaction-diffusion system. , 94(9):3684–3697, 2008. 
Adrian Moure and Hector Gomez. Computational model for amoeboid motion: Coupling membrane and cytosol dynamics. , 94(4):042423, 2016. Adrian Moure and Hector Gomez. Phase-field model of cellular migration: Three-dimensional simulations in fibrous networks. , 320:162–197, 2017. Oliver Nagel, Can Guven, Matthias Theves, Meghan Driscoll, Wolfgang Losert, and Carsten Beta. Geometry-[Driven]{} [Polarity]{} in [Motile]{} [Amoeboid]{} [Cells]{}. , 9(12):e113382, December 2014. Sara Najem and Martin Grant. Phase-field approach to chemotactic driving of neutrophil morphodynamics. , 88(3):034702, 2013. Masoud Nickaeen, Igor L. Novak, Stephanie Pulford, Aaron Rumack, Jamie Brandon, Boris M. Slepchenko, and Alex Mogilner. A free-boundary model of a motile cell explains turning behavior. , 13(11):1–22, 11 2017. Mikiya Otsuji, Shuji Ishihara, Kozo Kaibuchi, Atsushi Mochizuki, Shinya Kuroda, et al. A mass conserved reaction–diffusion system captures properties of cell polarity. , 3(6):e108, 2007. Peggy Paschke, David A Knecht, Augustinas Silale, David Traynor, Thomas D Williams, Peter A Thomason, Robert H Insall, Jonathan R Chubb, Robert R Kay, and Douwe M Veltman. Rapid and efficient genetic engineering of both wild type and axenic strains of dictyostelium discoideum. , 13(5):e0196809, 2018. Antonio J Pons and Alain Karma. Helical crack-front instability in mixed-mode fracture. , 464(7285):85–89, 2010. Wouter-Jan Rappel and Leah Edelstein-Keshet. Mechanisms of cell polarization. , 3:43–53, 2017. Britta Schroth-Diez, Silke Gerwig, Mary Ecke, Reiner Hegerl, Stefan Diez, and Günther Gerisch. Propagating waves separate two states of actin organization in living cells. , 3(6):412–427, December 2009. Danying Shao, Herbert Levine, and Wouter-Jan Rappel. Coupling actin flow, adhesion, and morphology in a computational cell motility model. , 109(18):6851–6856, 2012. Danying Shao, Wouter-Jan Rappel, and Herbert Levine. Computational model for cell morphodynamics. , 105(10):108104, 2010. 
Kristen F Swaney, Chuan-Hsiang Huang, and Peter N Devreotes. Eukaryotic chemotaxis: a network of signaling pathways controls motility, directional sensing, and polarity. , 39:265–289, 2010. Hiroaki Takagi, Masayuki J Sato, Toshio Yanagida, and Masahiro Ueda. Functional analysis of spontaneous cell movement under different physiological conditions. , 3(7), 2008. Daisuke Taniguchi, Shuji Ishihara, Takehiko Oonuki, Mai Honda-Kitahara, Kunihiko Kaneko, and Satoshi Sawai. Phase geometries of two-dimensional excitable waves govern self-organized morphodynamics of amoeboid cells. , 110(13):5016–5021, 2013. Peter JM van Haastert, Ineke Keizer-Gunnink, and Arjan Kortholt. Coupled excitable ras and f-actin activation mediates spontaneous pseudopod formation and directed cell movement. , 28(7):922–934, 2017. Benjamin Winkler, Igor S. Aranson, and Falko Ziebert. Confinement and substrate topography control cell migration in a [3D]{} computational model. , 2(1):1–11, July 2019. Falko Ziebert, Sumanth Swaminathan, and Igor S Aranson. Model for self-polarization and motility of keratocyte fragments. , 9(70):1084–1092, 2011. [^1]: eduardo.moreno.ramos@upc.edu [^2]: svenflem@uni-potsdam.de [^3]: ffont@crm.cat [^4]: hols@uni-potsdam.de [^5]: beta@uni-potsdam.de [^6]: s.alonso@upc.edu
--- abstract: 'This study proposes a novel channel model called the *modular arithmetic erasure channel,* which is a general class of arbitrary-input erasure-like channels that contains the binary erasure channel (BEC) and several other previously-known erasure-like channels. For this channel model, we give recursive formulas for Ar[i]{}kan-like polar transforms so that its channel polarization can be simulated easily. In other words, similar to the polar transforms for BECs, we show that the synthetic channels of modular arithmetic erasure channels are again equivalent to channels of the same model with certain transition probabilities, which can be easily calculated by explicit recursive formulas. We also show that Ar[i]{}kan-like polar transforms for modular arithmetic erasure channels exhibit *multilevel channel polarization,* a phenomenon that appears in the study of non-binary polar codes; thus, modular arithmetic erasure channels serve as informative toy problems for multilevel channel polarization. Furthermore, as a solution to an open problem in non-binary polar codes for special cases, we determine exactly and algorithmically the limiting proportions of partially noiseless synthetic channels, called the *asymptotic distribution* of multilevel channel polarization, for modular arithmetic erasure channels.' author: - title: | Modular Arithmetic Erasure Channels\ and Their Multilevel Channel Polarization --- Non-binary polar codes; multilevel channel polarization; partially noiseless channels; asymptotic distribution; generalized erasure channels. Introduction ============ Ar[i]{}kan [@arikan_it2009] proposed binary polar codes as a class of provably symmetric-capacity-achieving codes with deterministic constructions and low encoding/decoding complexity for binary-input discrete memoryless channels (DMCs). Analyses of polar codes mainly concentrate on the polar transforms, which asymptotically produce noiseless and useless synthetic channels.
This phenomenon is called channel polarization, and the limiting proportions of noiseless and useless synthetic channels can be fully and simply characterized by the symmetric capacity of an initial channel. In non-binary polar codes, there are two types of channel polarization: strong channel polarization [@mori_tanaka_it2014; @sasoglu_isit2012] and multilevel channel polarization [@nasser_it2016_ergodic1; @nasser_it2017_ergodic2; @nasser_telatar_it2016; @park_barg_it2013; @sahebi_pradhan_it2013]. Strong channel polarization asymptotically yields extremal synthetic channels as in the binary case, i.e., channels that are either *noiseless* or *useless.* On the other hand, multilevel channel polarization allows convergence to several types of *partially noiseless* synthetic channels. It was independently shown in [@mori_tanaka_it2014; @sasoglu_isit2012; @nasser_it2016_ergodic1; @nasser_it2017_ergodic2; @nasser_telatar_it2016; @park_barg_it2013; @sahebi_pradhan_it2013] that both strong and multilevel channel polarization can achieve the symmetric capacity, by establishing the rate of polarization for the Bhattacharyya parameters. Although the limiting proportions of noiseless and useless synthetic channels are fully and simply characterized by the symmetric capacity in the context of strong channel polarization, the limiting proportions of partially noiseless synthetic channels remain an open problem in the context of multilevel channel polarization (see [@nasser_PhD Section 9.2.1]). In this study, we call these limiting proportions the *asymptotic distribution* of multilevel channel polarization. For more details on the notions of strong and multilevel channel polarization, refer to Sections \[sect:strong\] and \[sect:multilevel\], respectively. To construct and analyze polar codes, channel parameters of the synthetic channels created by the polar transforms are needed.
Commonly used channel parameters are the symmetric capacity and the Bhattacharyya parameter; however, the computational complexities of evaluating them for the synthetic channels grow double-exponentially as the number of polar transforms increases. This is a main obstacle to constructing and analyzing polar codes. In the binary-input case, Tal and Vardy [@tal_vardy_it2013] solved this issue by proposing approximation algorithms for the synthetic channels at each polar transform. Such approximation algorithms were recently extended to arbitrary input alphabets by Gulcu, Ye, and Barg [@gulcu_ye_barg_it2018]. On the other hand, fortunately, the binary erasure channel (BEC) avoids this computational complexity without any approximation argument. More precisely, if the initial channels are BECs, then every synthetic channel is equivalent to another BEC with a certain erasure probability, as will be shown in [Section \[sect:bec\]]{}. Obviously, the asymptotic distribution of strong channel polarization of a BEC can be simply characterized by the underlying erasure probability. Therefore, BECs are excellent toy problems in the study of binary polar codes. For non-binary polar codes, easily-analyzable channel models akin to BECs have been proposed by Park and Barg [@park_barg_it2013 Section III] and by Sahebi and Pradhan [@sahebi_pradhan_it2013 Figs. 3 and 4], and recursive formulas of their polar transforms were given therein[^1]. The main contributions of this paper can be broadly divided into the following two parts: Firstly, we propose a novel channel model called a *modular arithmetic erasure channel* in [Definition \[def:V\]]{} of [Section \[sect:maec\]]{}, which can be naturally reduced to BECs; $q$-ary erasure channels ($q$-ECs) in the naïve sense (see, e.g., [@mackay_2003 p. 589]); $q$-ary input ordered erasure channels (OECs) proposed by Park and Barg [@park_barg_isit2011 p.
2285] when $q$ is a prime power; and Sahebi and Pradhan’s senary-input erasure-like channels [@sahebi_pradhan_it2013 Fig. 4: Channel 2]. For our erasure-like channel model, in [Theorem \[th:recursive\_V\]]{} of [Section \[sect:recursive\]]{}, we show the ease of analyzing the polar transforms, which can be seen as weighted sums [@abbe_li_madiman_2017] over the ring $\mathbb{Z}/q\mathbb{Z}$ of integers modulo $q$. More precisely, similar to the polar transforms for BECs, we show in [Theorem \[th:recursive\_V\]]{} that the synthetic channels whose initial channels are modular arithmetic erasure channels are again equivalent to other modular arithmetic erasure channels with certain transition probabilities. Secondly, we characterize the asymptotic distribution of multilevel channel polarization for modular arithmetic erasure channels, i.e., each limiting proportion of partially noiseless synthetic channels. Before considering an arbitrary input alphabet size $q$, we restrict our attention to the case where $q$ is a prime power, and we then simply characterize in [Theorem \[th:primepower\]]{} of [Section \[sect:primepower\]]{} the asymptotic distribution of multilevel channel polarization for modular arithmetic erasure channels. Extending this simple argument, in [Section \[sect:composite\]]{}, we next examine the asymptotic distribution in the case where $q$ is an arbitrary positive integer, and give Algorithm \[alg:main\] to calculate the asymptotic distribution as a computable quantity. The rest of this paper is organized as follows: In [Section \[sect:preliminaries\]]{}, we introduce the basic notations used in this study together with detailed notions of strong and multilevel channel polarization. Modular arithmetic erasure channels are proposed in [Section \[sect:ease\]]{}; then, the ease of analyzing polar transforms for this channel model is characterized, as in the polar transforms for BECs.
The asymptotic distribution of multilevel channel polarization for modular arithmetic erasure channels is solved in [Section \[sect:asymptotic\_distribution\_MAEC\]]{}. [Section \[sect:conclusion\]]{} concludes this study. Preliminaries and Problem Presentations {#sect:preliminaries} ======================================= Basic Notations of DMCs and Channel Parameters {#sect:notations} ---------------------------------------------- In this study, discrete memoryless channels (DMCs) are given as follows: The input alphabet of a channel is denoted by a finite set $\mathcal{X}$ having two or more elements; and the output alphabet of a channel is denoted by a nonempty and countable set $\mathcal{Y}$. The transition probability of a channel from an input symbol $x \in \mathcal{X}$ to an output symbol $y \in \mathcal{Y}$ is denoted by $W(y \mid x)$. Let $W : \mathcal{X} \to \mathcal{Y}$, or simply $W$, be a shorthand for such a channel. We shall denote by $q = |\mathcal{X}|$ the input alphabet size of a channel $W$, where $|\cdot|$ denotes the cardinality of a finite set. 
The *$\alpha$-symmetric capacity* of a channel $W$, which is the $\alpha$-mutual information [@ho_verdu_isit2015; @verdu_ita2015] between input and output of $W$ under a uniform input distribution on $\mathcal{X}$, is defined by $$\begin{aligned} I_{\alpha}( W ) \coloneqq \begin{dcases} \frac{ \alpha }{ \alpha - 1 } \log \Bigg( \sum_{y \in \mathcal{Y}} \bigg( \sum_{x \in \mathcal{X}} \frac{ 1 }{ q } W(y \mid x)^{\alpha} \bigg)^{1/\alpha} \Bigg) & \mathrm{if} \ \alpha \in (0, 1) \cup (1, \infty) , \\ \min_{y \in \mathcal{Y}} \bigg( \log \frac{ q }{ |\{ x \in \mathcal{X} \mid W(y \mid x) > 0 \}| } \bigg) & \mathrm{if} \ \alpha = 0 , \\ \sum_{y \in \mathcal{Y}} \sum_{x \in \mathcal{X}} \frac{ 1 }{ q } W(y \mid x) \log \frac{ W(y \mid x) }{ \sum_{x^{\prime} \in \mathcal{X}} (1/q) W(y \mid x^{\prime}) } & \mathrm{if} \ \alpha = 1 , \\ \log \bigg( \sum_{y \in \mathcal{Y}} \max_{x \in \mathcal{X}} W(y \mid x) \bigg) & \mathrm{if} \ \alpha = \infty \end{dcases} \label{def:alpha}\end{aligned}$$ for each order $\alpha \in [0, \infty]$. Unless stated otherwise, suppose throughout this paper that the base of logarithms is $q$. In particular, if $\alpha = 1$, then $I_{\alpha}( W )$ coincides with the symmetric capacity $I(W)$, i.e., the *symmetric capacity* of a channel $W$ can be defined by $I(W) \coloneqq I_{1}( W )$. As shown in the following remark, the $\alpha$-symmetric capacity $I_{\alpha}( W )$ contains many channel parameters used in coding problems, also in polar codes. 
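To make the case analysis of \[def:alpha\] concrete, the following Python sketch (our illustration; the dictionary-based channel representation and function names are not from the paper) evaluates $I_{\alpha}(W)$ directly from the definition, with logarithms to base $q$. For a BEC with erasure probability $\varepsilon$, it reproduces $I_{1}(W) = 1 - \varepsilon$ and $I_{0}(W) = 0$.

```python
import math

def bec(eps):
    """BEC with erasure probability eps: inputs {0, 1}, outputs {0, 1, 'e'}."""
    return {0: {0: 1 - eps, 'e': eps}, 1: {1: 1 - eps, 'e': eps}}

def I_alpha(W, alpha):
    """alpha-symmetric capacity of W = {x: {y: prob}}, logarithms to base q."""
    q = len(W)
    ys = set(y for row in W.values() for y in row)
    if alpha == 0:
        # minimum over outputs y of log_q(q / |{x : W(y|x) > 0}|)
        return min(math.log(q / sum(1 for x in W if W[x].get(y, 0.0) > 0), q)
                   for y in ys)
    if alpha == 1:
        total = 0.0
        for y in ys:
            py = sum(W[x].get(y, 0.0) for x in W) / q  # output probability
            for x in W:
                p = W[x].get(y, 0.0)
                if p > 0:
                    total += (p / q) * math.log(p / py, q)
        return total
    if alpha == math.inf:
        return math.log(sum(max(W[x].get(y, 0.0) for x in W) for y in ys), q)
    # generic order alpha in (0, 1) or (1, infinity)
    s = sum((sum(W[x].get(y, 0.0) ** alpha for x in W) / q) ** (1 / alpha)
            for y in ys)
    return alpha / (alpha - 1) * math.log(s, q)
```

For instance, `I_alpha(bec(0.25), 1)` returns the symmetric capacity $0.75$, matching $I(\mathrm{BEC}(\varepsilon)) = 1 - \varepsilon$.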
\[remark:connections\_alpha\_mutual\] For each $\alpha \in (0, 1) \cup (1, \infty)$, we readily see the following identities: $$\begin{aligned} I_{\alpha}( W ) & = \frac{ \alpha }{ 1 - \alpha } E_{0}\bigg( \frac{ 1 - \alpha }{ \alpha } , W \bigg) , \\ I_{1/2}( W ) & = E_{0}( 1, W ) = \log \bigg( \frac{ q }{ 1 + (q-1) Z( W ) } \bigg) , \\ I_{\infty}( W ) & = (\log q) + \log \Big( 1 - P_{\mathrm{e}}( W ) \Big) ,\end{aligned}$$ where $$\begin{aligned} P_{\mathrm{e}}( W ) \coloneqq 1 - \sum_{y \in \mathcal{Y}} \frac{ 1 }{ q } \max_{x \in \mathcal{X}} W(y \mid x)\end{aligned}$$ denotes the average probability of maximum likelihood decoding error for uncoded communication via a channel $W$; $$\begin{aligned} Z( W ) \coloneqq \frac{ 1 }{ q (q - 1) } \sum_{\substack{ x, x^{\prime} \in \mathcal{X} : \\ x \neq x^{\prime} }} \sum_{y \in \mathcal{Y}} \sqrt{ W(y \mid x) \, W(y \mid x^{\prime}) }\end{aligned}$$ denotes the average Bhattacharyya distance of a channel $W$ [@sasoglu_telatar_arikan_itw2009]; and $$\begin{aligned} E_{0}(\rho, W) \coloneqq - \log \Bigg( \sum_{y \in \mathcal{Y}} \bigg( \sum_{x \in \mathcal{X}} \frac{ 1 }{ q } W(y \mid x)^{\frac{ 1 }{ 1 + \rho }} \bigg)^{1+\rho} \Bigg)\end{aligned}$$ denotes Gallager’s reliability function $E_{0}$ of a channel $W$ under a uniform input distribution for $\rho \in (-1, \infty)$ [@gallager_1968 Equation (5.6.14)]. In the study of polar codes, the average Bhattacharyya distance $Z( W )$ is often used to analyze the rate of polarization (see, e.g., [@sasoglu_telatar_arikan_itw2009; @park_barg_it2013; @sahebi_pradhan_it2013; @nasser_telatar_it2016; @nasser_it2017_ergodic2]); and the behavior of Gallager’s reliability function $E_{0}$ under polar transforms on binary alphabets was studied in depth by Alsan [@alsan_it2014] and by Alsan and Telatar [@alsan_telatar_it2014].
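The identities of this remark can be verified numerically for a concrete channel. The sketch below (our illustration; the helper names are ours) computes $Z(W)$, $P_{\mathrm{e}}(W)$, $I_{1/2}(W)$, and $I_{\infty}(W)$ from their definitions for a BEC, for which $Z(\mathrm{BEC}(\varepsilon)) = \varepsilon$, and checks that $I_{1/2}(W) = \log ( q / (1 + (q-1) Z(W)) )$ and $I_{\infty}(W) = \log q + \log(1 - P_{\mathrm{e}}(W))$.

```python
import math

def bec(eps):
    """BEC with erasure probability eps: inputs {0, 1}, outputs {0, 1, 'e'}."""
    return {0: {0: 1 - eps, 'e': eps}, 1: {1: 1 - eps, 'e': eps}}

def outputs(W):
    return set(y for row in W.values() for y in row)

def bhattacharyya(W):
    """Average Bhattacharyya distance Z(W) of W = {x: {y: prob}}."""
    q = len(W)
    z = sum(math.sqrt(W[x].get(y, 0.0) * W[xp].get(y, 0.0))
            for x in W for xp in W if x != xp for y in outputs(W))
    return z / (q * (q - 1))

def error_prob(W):
    """Average ML decoding error probability P_e(W) for uncoded transmission."""
    q = len(W)
    return 1 - sum(max(W[x].get(y, 0.0) for x in W) for y in outputs(W)) / q

def I_half(W):
    """I_{1/2}(W) evaluated directly from the definition (log base q)."""
    q = len(W)
    s = sum((sum(math.sqrt(W[x].get(y, 0.0)) for x in W) / q) ** 2
            for y in outputs(W))
    return -math.log(s, q)

def I_inf(W):
    """I_infinity(W) evaluated directly from the definition (log base q)."""
    q = len(W)
    return math.log(sum(max(W[x].get(y, 0.0) for x in W) for y in outputs(W)), q)
```

With $W = \mathrm{BEC}(0.3)$ and $q = 2$, one obtains $Z(W) = 0.3$ and $I_{1/2}(W) = \log_{2}(2/1.3)$, in agreement with the remark.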
Ar[i]{}kan-like Polar Transforms with Quasigroups ------------------------------------------------- We now introduce polar transforms with a quasigroup operation[^2] $\ast$ on a non-binary alphabet $\mathcal{X}$. For two given channels $W_{1} : \mathcal{X} \to \mathcal{Y}_{1}$ and $W_{2} : \mathcal{X} \to \mathcal{Y}_{2}$, the polar transform produces two synthetic channels: the worse channel $W_{1} \boxast W_{2} : \mathcal{X} \to \mathcal{Y}_{1} \times \mathcal{Y}_{2}$ defined by $$\begin{aligned} (W_{1} \boxast W_{2}) (y_{1}, y_{2} \mid u_{1}) & \coloneqq \sum_{u_{2}^{\prime} \in \mathcal{X}} \frac{ 1 }{ q } W_{1}(y_{1} \mid u_{1} \ast u_{2}^{\prime}) \, W_{2}(y_{2} \mid u_{2}^{\prime}) ; \label{def:minus}\end{aligned}$$ and the better channel $W_{1} \varoast W_{2} : \mathcal{X} \to \mathcal{Y}_{1} \times \mathcal{Y}_{2} \times \mathcal{X}$ defined by $$\begin{aligned} (W_{1} \varoast W_{2}) (y_{1}, y_{2}, u_{1} \mid u_{2}) & \coloneqq \frac{ 1 }{ q } W_{1}(y_{1} \mid u_{1} \ast u_{2}) \, W_{2}(y_{2} \mid u_{2}) . \label{def:plus}\end{aligned}$$ As these polar transforms are analogues of the polar transform with the $2 \times 2$ kernel, in this paper, we call them Ar[i]{}kan-like polar transforms. Ar[i]{}kan-like polar transforms with distinct initial channels $W_{1} \neq W_{2}$ have appeared in the study of polar codes for non-stationary memoryless channels [@alsan_telatar_it2016; @mahdavifar_isit2017]. In particular, in the case where both $W_{1}$ and $W_{2}$ are identical to an initial channel $W : \mathcal{X} \to \mathcal{Y}$, these are the standard polar transforms for a stationary memoryless channel $W$; and we then write $W^{-} \coloneqq W \boxast W$ and $W^{+} \coloneqq W \varoast W$.
Formally, these transition probability distributions can be defined by $$\begin{aligned} W^{-}(y_{1}, y_{2} \mid u_{1}) & \coloneqq \sum_{u_{2}^{\prime} \in \mathcal{X}} \frac{ 1 }{ q } W(y_{1} \mid u_{1} \ast u_{2}^{\prime}) \, W(y_{2} \mid u_{2}^{\prime}) , \\ W^{+}(y_{1}, y_{2}, u_{1} \mid u_{2}) & \coloneqq \frac{ 1 }{ q } W(y_{1} \mid u_{1} \ast u_{2}) \, W(y_{2} \mid u_{2})\end{aligned}$$ for each $(u_{1}, u_{2}, y_{1}, y_{2}) \in \mathcal{X}^{2} \times \mathcal{Y}^{2}$. In the case where $W$ is stationary, after the $n$-step polar transforms for an integer $n \in \mathbb{N}$, the synthetic channel $W^{{\boldsymbol{s}}} : \mathcal{X} \to \mathcal{Y}^{2^{n}} \times \mathcal{X}^{w({\boldsymbol{s}})}$ is created by $$\begin{aligned} W^{{\boldsymbol{s}}} & \coloneqq ( \cdots (W^{s_{1}})^{s_{2}} \cdots )^{s_{n}} \label{def:n-step}\end{aligned}$$ for each ${\boldsymbol{s}} = s_{1}s_{2} \cdots s_{n} \in \{ -, + \}^{n}$, where the function[^3] $w : \{ -, + \}^{\ast} \to \mathbb{N}_{0}$ is recursively defined by[^4] $$\begin{aligned} w( s_{1}, \dots, s_{n} ) \coloneqq \begin{cases} 2 \, w( s_{1}, \dots, s_{n-1} ) & \text{if} \ n \ge 1 \ \mathrm{and} \ s_{n} = - , \\ 2 \, w( s_{1}, \dots, s_{n-1} ) + 1 & \text{if} \ n \ge 1 \ \mathrm{and} \ s_{n} = + , \\ 0 & \mathrm{otherwise} , \end{cases} \label{def:weight}\end{aligned}$$ and $\{ -, + \}^{\ast} \coloneqq \{ \epsilon, -, +, --, -+, +-, ++, \dots \}$ denotes the set of $\{ -, + \}$-valued finite-length sequences containing the empty sequence $\epsilon$. Namely, the output alphabet size $|\mathcal{Y}^{2^{n}} \times \mathcal{X}^{w({\boldsymbol{s}})}|$ of the synthetic channel $W^{{\boldsymbol{s}}}$ grows double-exponentially as the number $n$ of polar transforms increases. Difficulties of constructing and analyzing polar codes are mainly due to this issue. As a special case, it is well-known that the polar transforms for BECs can avoid such a computational difficulty. 
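Although the output alphabets grow double-exponentially with $n$, a single transform step on a small alphabet is easy to compute exhaustively. The following Python sketch (our illustration; the table-based representation and names are ours) builds the synthetic channels $W^{-}$ and $W^{+}$ under the group $(\mathbb{Z}/q\mathbb{Z}, +)$ and, for a BEC, recovers the well-known erasure-probability recursion $\varepsilon^{-} = 2\varepsilon - \varepsilon^{2}$ and $\varepsilon^{+} = \varepsilon^{2}$, together with the conservation $I(W^{-}) + I(W^{+}) = 2 I(W)$.

```python
import math

def bec(eps):
    """BEC with erasure probability eps: inputs {0, 1}, outputs {0, 1, 'e'}."""
    return {0: {0: 1 - eps, 'e': eps}, 1: {1: 1 - eps, 'e': eps}}

def capacity(W):
    """Symmetric capacity I(W) of W = {x: {y: prob}}, log base q."""
    q = len(W)
    ys = set(y for row in W.values() for y in row)
    total = 0.0
    for y in ys:
        py = sum(W[x].get(y, 0.0) for x in W) / q
        for x in W:
            p = W[x].get(y, 0.0)
            if p > 0:
                total += (p / q) * math.log(p / py, q)
    return total

def polar_transform(W):
    """One Arikan step with u1 * u2 = (u1 + u2) mod q; returns (W-, W+)."""
    q = len(W)
    minus = {u1: {} for u1 in range(q)}
    plus = {u2: {} for u2 in range(q)}
    for u1 in range(q):
        for u2 in range(q):
            for y1, p1 in W[(u1 + u2) % q].items():
                for y2, p2 in W[u2].items():
                    joint = p1 * p2 / q
                    minus[u1][(y1, y2)] = minus[u1].get((y1, y2), 0.0) + joint
                    plus[u2][(y1, y2, u1)] = plus[u2].get((y1, y2, u1), 0.0) + joint
    return minus, plus

eps = 0.4
Wm, Wp = polar_transform(bec(eps))
```

Here `capacity(Wm)` equals $1 - (2\varepsilon - \varepsilon^{2}) = 0.36$ and `capacity(Wp)` equals $1 - \varepsilon^{2} = 0.84$, so the symmetric capacity is conserved by the transform.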
Indeed, if all initial channels are BECs, then every synthetic channel is equivalent in a certain sense[^5] to another BEC with a certain erasure probability. This means that it suffices to propagate erasure probabilities by certain recursive formulas, as will be summarized in [Proposition \[prop:bec\]]{} of [Section \[sect:bec\]]{}. In the study of non-binary polar codes, some easily-analyzable non-binary channel models like BECs have been proposed by Park and Barg [@park_barg_it2013 Section III] in the case where $q = 2^{r}$ is a power of two (see [Example \[ex:oec\]]{} of [Section \[sect:maec\]]{}), and by Sahebi and Pradhan [@sahebi_pradhan_it2013 Fig. 4: Channel 2] in the case where $q = 6$ (see [Example \[ex:sahebi\_pradhan\]]{} of [Section \[sect:maec\]]{}). In [Definition \[def:V\]]{} of [Section \[sect:maec\]]{}, we propose a more general easily-analyzable channel model that contains them; and we give certain recursive formulas for Ar[i]{}kan-like polar transforms in [Theorem \[th:recursive\_V\]]{} of [Section \[sect:recursive\]]{}. Strong Channel Polarization {#sect:strong} --------------------------- In the case where the input alphabet size $q$ is a prime number, Şaşoğlu et al. [@sasoglu_telatar_arikan_itw2009] showed that for any $q$-ary input DMC $W: \mathcal{X} \to \mathcal{Y}$ and any fixed $\delta \in (0, 1)$, both equalities $$\begin{aligned} \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ I( W^{{\boldsymbol{s}}} ) > 1 - \delta \Big\} \Big| & = I( W ) , \label{eq:strong1} \\ \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ I( W^{{\boldsymbol{s}}} ) < \delta \Big\} \Big| & = 1 - I( W ) \label{eq:strong2}\end{aligned}$$ hold under the polar transforms in which $(\mathcal{X}, \ast)$ forms the cyclic group $(\mathbb{Z}/q\mathbb{Z}, +)$.
The left-hand sides of \[eq:strong1\] and \[eq:strong2\] are the limiting proportions of *almost noiseless* and *almost useless* synthetic channels, respectively. Moreover, Equations \[eq:strong1\] and \[eq:strong2\] imply that the limiting proportion of *intermediate* synthetic channels is zero, i.e., $$\begin{aligned} \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \delta \le I( W^{{\boldsymbol{s}}} ) \le 1 - \delta \Big\} \Big| = 0 \label{eq:strong}\end{aligned}$$ for every fixed $\delta > 0$. In this paper, we call the phenomena \[eq:strong1\]–\[eq:strong\] the *strong channel polarization*[^6]. Moreover, for any $q \ge 2$, whether prime or composite, Şaşoğlu [@sasoglu_isit2012] gave a sufficient condition on the quasigroup operations[^7] $\ast$ used in the polar transforms \[def:minus\] and \[def:plus\] for the strong polarization to hold. Furthermore, Mori and Tanaka[^8] [@mori_tanaka_it2014] considered the polar transforms \[def:minus\] and \[def:plus\] with the quasigroup operation $\ast$ defined by the field operations of $\mathbb{F}_{q}$, and they showed a necessary and sufficient condition for the strong polarization under such an operation. As shown in \[eq:strong1\] and \[eq:strong2\], the asymptotic distributions of noiseless and useless synthetic channels, respectively, can always be exactly characterized by the symmetric capacity $I(W)$ alone[^9]. Multilevel Channel Polarization {#sect:multilevel} ------------------------------- In contrast to [Section \[sect:strong\]]{}, in the case where the input alphabet size $q$ is a composite number, there are quasigroups $(\mathcal{X}, \ast)$ employed in the polar transforms for which the strong channel polarization does not hold in general (cf. [@sasoglu_isit2012 Example 1]).
That is, there is a $q$-ary input DMC $W$ such that the limiting proportion of intermediate synthetic channels $W^{{\boldsymbol{s}}}$ is positive: $$\begin{aligned} \liminf_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \delta \le I( W^{{\boldsymbol{s}}} ) \le 1 - \delta \Big\} \Big| > 0 \label{eq:positive_intermediate}\end{aligned}$$ for some fixed $\delta > 0$. In those cases, another type of polarization phenomenon called *multilevel channel polarization*[^10] has been examined by several researchers. The notion of multilevel channel polarization is introduced later in this subsection. In the case where $q$ is a power of two, i.e., $q = 2^{r}$ for some positive integer $r$, Park and Barg [@park_barg_it2013] established the multilevel polarization theorem under the polar transforms with the cyclic group $(\mathbb{Z}/q\mathbb{Z}, +)$. Independently of [@park_barg_it2013], in the case where $q$ is a prime power, Sahebi and Pradhan [@sahebi_pradhan_it2013] examined the multilevel polarization theorem and generalized it to arbitrary composite numbers $q$ under the polar transforms with an arbitrary finite abelian group $(\mathcal{X}, +)$. Nasser and Telatar [@nasser_telatar_it2016] established the multilevel polarization theorem under the polar transforms with an arbitrary finite quasigroup $(\mathcal{X}, \ast)$. Nasser [@nasser_it2016_ergodic1; @nasser_it2017_ergodic2] further clarified the necessary and sufficient condition of multilevel channel polarization for algebraic structures $(\mathcal{X}, \ast)$ satisfying weaker postulates than quasigroups. In the context of multilevel channel polarization, the limiting proportion of intermediate synthetic channels is allowed to be positive, as shown in \[eq:positive\_intermediate\]. Then, notions of *partially noiseless* channels are required to achieve the symmetric capacity for arbitrary input DMCs.
Such notions were, however, introduced independently by several authors [@park_barg_it2013; @sahebi_pradhan_it2013; @nasser_it2016_ergodic1; @nasser_it2017_ergodic2; @nasser_telatar_it2016] in different forms. In particular, descriptions of multilevel channel polarization are slightly complicated if $(\mathcal{X}, \ast)$ is a quasigroup [@nasser_telatar_it2016] or a weaker algebraic structure [@nasser_it2016_ergodic1; @nasser_it2017_ergodic2]. As a simple instance, following [@nasser_telatar_it2016 Section VI], we now briefly introduce a notion of multilevel channel polarization under the polar transforms with a finite group $\mathcal{X} = G$ as follows: Let $N \lhd G$ be a shorthand for a normal subgroup $N$ of a group $G$. For a channel $W : G \to \mathcal{Y}$ and a normal subgroup $N \lhd G$, the *homomorphism channel* $W[N] : G/N \to \mathcal{Y}$ is defined by $$\begin{aligned} W[N](y \mid a N) \coloneqq \frac{ 1 }{ |N| } \sum_{x \in a N} W(y \mid x) , \label{def:homomorphism}\end{aligned}$$ where the quotient group of $G$ by $N \lhd G$ is denoted by $G/N$. Then, Nasser and Telatar [@nasser_telatar_it2016 Theorem 6] showed that[^11] $$\begin{aligned} & \sum_{N \lhd G} \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \big| I(W^{{\boldsymbol{s}}}) - \log [G : N] \big| < \delta \ \mathrm{and} \ \big| I(W^{{\boldsymbol{s}}}[N]) - \log [G : N] \big| < \delta \Big\} \Big| = 1 \label{eq:multilevel}\end{aligned}$$ for every fixed $\delta > 0$, where $[G : N] = |G/N|$ denotes the index of a normal subgroup $N$ in a group $G$. We now consider each term of the summation in \[eq:multilevel\]. It is clear that the left-hand sides of \[eq:strong1\] and \[eq:strong2\] coincide with the terms of the summation corresponding to the trivial subgroup $N = \{ e \}$ and to the whole group $N = G$, respectively, where $e \in G$ denotes the identity element. Thus, it can be verified that the strong channel polarization is a special case of multilevel channel polarization \[eq:multilevel\].
Moreover, the other terms of the summation are the limiting proportions of partially noiseless synthetic channels $W^{{\boldsymbol{s}}}$, because the condition $$\begin{aligned} \big| I(W^{{\boldsymbol{s}}}[N]) - \log [G : N] \big| < \delta \label{eq:partially_noiseless}\end{aligned}$$ implies that the homomorphism channel $W^{{\boldsymbol{s}}}[N]$ is almost noiseless for sufficiently small $\delta$. Together with , note that the condition $$\begin{aligned} \big| I(W^{{\boldsymbol{s}}}) - \log [G : N] \big| < \delta\end{aligned}$$ implies that the almost noiseless homomorphism channel $W^{{\boldsymbol{s}}}[N]$ has almost the same symmetric capacity as the original channel $W^{{\boldsymbol{s}}}$; this is the reason why polar codes can achieve the symmetric capacity under multilevel channel polarization. Although the limiting proportions and are completely determined, the asymptotic distribution of multilevel channel polarization, i.e., all terms of the summation of , still remains an open problem, as noted in [@nasser_PhD Section 9.2.1]. In the special case where $q$ is a power of two, Park and Barg observed that the asymptotic distribution of multilevel channel polarization for OECs introduced in [Example \[ex:oec\]]{} simply coincides with the underlying probability vector [@park_barg_it2013 Section III][^12]. In [Section \[sect:asymptotic\_distribution\_MAEC\]]{}, for an arbitrary number $q$, we tackle this open problem for our proposed channel models, called modular arithmetic erasure channels, defined in [Definition \[def:V\]]{}, which contain Park and Barg’s OECs (see [Example \[ex:oec\]]{}). Easily-Analyzable Channel Models for Polar Transforms {#sect:ease} ===================================================== In this section, we propose a general type of erasure-like channel that contains already-known erasure-like channels. We call our erasure-like channel model a modular arithmetic erasure channel.
For modular arithmetic erasure channels, we show that Ar[i]{}kan-like polar transforms under the ring $\mathbb{Z}/q\mathbb{Z}$ of integers modulo $q$ can be easily analyzed, i.e., every synthetic channel is again equivalent to a certain modular arithmetic erasure channel, as with the polar transform for BECs. To describe this ease, we first introduce an equivalence relation between two channels in [Section \[sect:output\_equiv\]]{}. Employing this equivalence relation, we then illustrate the ease of analyzing the polar transforms for BECs in [Section \[sect:bec\]]{}. In [Section \[sect:maec\]]{}, we give [Definition \[def:V\]]{}, the formal definition of modular arithmetic erasure channels. [Section \[sect:recursive\]]{} shows the ease of analyzing the polar transforms for modular arithmetic erasure channels, and its proof is given in [Section \[sect:proof\_recursive\_V\]]{}. Output Degradedness and Equivalence of Channels {#sect:output_equiv} ----------------------------------------------- To describe the ease of analyzing polar transforms, we now introduce an equivalence relation between two channels having the same input alphabet $\mathcal{X}$ as follows: \[def:output\_equiv\] A channel $W : \mathcal{X} \to \mathcal{Y}$ is said to be *degraded* with respect to another channel $\tilde{W} : \mathcal{X} \to \mathcal{Z}$ if there exists an intermediate channel $Q : \mathcal{Z} \to \mathcal{Y}$ fulfilling $$\begin{aligned} W(y \mid x) & = \sum_{z \in \mathcal{Z}} \tilde{W}(z \mid x) \, Q(y \mid z)\end{aligned}$$ for every $(x, y) \in \mathcal{X} \times \mathcal{Y}$. We denote this relation between two channels $W$ and $\tilde{W}$ by $W \preceq \tilde{W}$. In particular, we say that $W$ and $\tilde{W}$ are *equivalent* if $W \preceq \tilde{W}$ and $\tilde{W} \preceq W$; and we denote this equivalence by $W \equiv \tilde{W}$.
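As a small numerical check of [Definition \[def:output\_equiv\]]{}, the following Python sketch (the dictionary-based channel representation and function names are ours) composes $\tilde{W} = \mathrm{BEC}(0.2)$ with a hypothetical intermediate channel $Q$ that resolves the erasure symbol by a fair coin, yielding a degraded channel $W$, and verifies that degradation cannot increase the symmetric capacity:

```python
import math

def symmetric_capacity(W, q):
    # I(W) in bits for a uniform input over an alphabet of size q;
    # W is a dict: input x -> dict of output y -> W(y|x)
    outs = {y for row in W.values() for y in row}
    p_y = {y: sum(W[x].get(y, 0.0) for x in W) / q for y in outs}
    return sum((p / q) * math.log2(p / p_y[y])
               for x in W for y, p in W[x].items() if p > 0)

def degrade(W_tilde, Q):
    # W(y|x) = sum_z W_tilde(z|x) Q(y|z), as in the definition of degradedness
    W = {x: {} for x in W_tilde}
    for x, row in W_tilde.items():
        for z, p in row.items():
            for y, qyz in Q[z].items():
                W[x][y] = W[x].get(y, 0.0) + p * qyz
    return W

# hypothetical example: W_tilde = BEC(0.2); Q resolves the erasure '?' by a fair
# coin, so W is a binary symmetric channel with crossover probability 0.1
W_tilde = {0: {0: 0.8, '?': 0.2}, 1: {1: 0.8, '?': 0.2}}
Q = {0: {0: 1.0}, 1: {1: 1.0}, '?': {0: 0.5, 1: 0.5}}
W = degrade(W_tilde, Q)
```

Here $I(\tilde{W}) = 1 - 0.2 = 0.8$ bits, while the degraded $W$ has the strictly smaller capacity $1 - h_{2}(0.1)$ of a binary symmetric channel.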
Note that a different notion of the equivalence relation between two joint distributions has been discussed by Mori and Tanaka [@mori_tanaka_it2014 Section IV] and Gulcu, Ye, and Barg [@gulcu_ye_barg_it2018 Definition 3] in the study of non-binary polar codes. In this study, we employ the channel equivalence $\equiv$ in the sense of [Definition \[def:output\_equiv\]]{}, as with [@tal_vardy_it2013 Section III] and [@nasser_it2017_ergodic2 Definition 11]. The following lemma shows that the above equivalence relation $\equiv$ preserves the $\alpha$-symmetric capacity defined in [Definition \[def:alpha\]]{}. \[lem:degraded\_mutual\] For any $\alpha \in [0, \infty]$, it holds that $$\begin{aligned} W \preceq \tilde{W} \quad \Longrightarrow \quad I_{\alpha}( W ) \le I_{\alpha}( \tilde{W} ) . \label{eq:degraded_alpha_mutual}\end{aligned}$$ Consequently, for any $\alpha \in [0, \infty]$, it holds that $$\begin{aligned} W \equiv \tilde{W} \quad \Longrightarrow \quad I_{\alpha}( W ) = I_{\alpha}( \tilde{W} ) . \label{eq:equiv_alpha_mutual}\end{aligned}$$ Equation  is a direct consequence of the data-processing lemma[^13] for the $\alpha$-mutual information [@polyanskiy_verdu_allerton2010 Theorem 5-2)]. It is worth mentioning that [Lemma \[lem:degraded\_mutual\]]{} is a minor extension of [@tal_vardy_it2013 Lemma 3], because the $\alpha$-symmetric capacity $I_{\alpha}( W )$ subsumes the following channel parameters: the symmetric capacity $I( W )$; the average Bhattacharyya distance $Z( W )$; and the probability of error $P_{\mathrm{e}}( W )$ (see [Remark \[remark:connections\_alpha\_mutual\]]{}). The following lemma shows that the channel relation of [Definition \[def:output\_equiv\]]{} is invariant under the polar transforms and .
\[lem:invariant\_degradedness\] Given four channels $W_{1} : \mathcal{X} \to \mathcal{Y}_{1}$, $\tilde{W}_{1} : \mathcal{X} \to \mathcal{Z}_{1}$, $W_{2} : \mathcal{X} \to \mathcal{Y}_{2}$, and $\tilde{W}_{2} : \mathcal{X} \to \mathcal{Z}_{2}$, it holds that $$\begin{aligned} W_{1} \preceq \tilde{W}_{1} \ \mathrm{and} \ W_{2} \preceq \tilde{W}_{2} \quad \Longrightarrow \quad W_{1} \boxast W_{2} \preceq \tilde{W}_{1} \boxast \tilde{W}_{2} \ \mathrm{and} \ W_{1} \varoast W_{2} \preceq \tilde{W}_{1} \varoast \tilde{W}_{2} . \label{eq:invariant_degradedness}\end{aligned}$$ Consequently, it holds that $$\begin{aligned} W_{1} \equiv \tilde{W}_{1} \ \mathrm{and} \ W_{2} \equiv \tilde{W}_{2} \quad \Longrightarrow \quad W_{1} \boxast W_{2} \equiv \tilde{W}_{1} \boxast \tilde{W}_{2} \ \mathrm{and} \ W_{1} \varoast W_{2} \equiv \tilde{W}_{1} \varoast \tilde{W}_{2} . \label{eq:invariant_equivalence}\end{aligned}$$ By [Definition \[def:output\_equiv\]]{}, there exist two channels $Q_{1} : \mathcal{Z}_{1} \to \mathcal{Y}_{1}$ and $Q_{2} : \mathcal{Z}_{2} \to \mathcal{Y}_{2}$ satisfying $$\begin{aligned} W_{1}(y_{1} \mid x_{1}) & = \sum_{z_{1} \in \mathcal{Z}_{1}} Q_{1}(y_{1} \mid z_{1}) \, \tilde{W}_{1}(z_{1} \mid x_{1}) , \\ W_{2}(y_{2} \mid x_{2}) & = \sum_{z_{2} \in \mathcal{Z}_{2}} Q_{2}(y_{2} \mid z_{2}) \, \tilde{W}_{2}(z_{2} \mid x_{2}) .\end{aligned}$$ For each $(u_{1}, y_{1}, y_{2}) \in \mathcal{X} \times \mathcal{Y}_{1} \times \mathcal{Y}_{2}$, we have $$\begin{aligned} (W_{1} \boxast W_{2})(y_{1}, y_{2} \mid u_{1}) & = \sum_{u_{2}^{\prime} \in \mathcal{X}} \frac{1}{q} W_{1}(y_{1} \mid u_{1} \ast u_{2}^{\prime}) \, W_{2}(y_{2} \mid u_{2}^{\prime}) \notag \\ & = \sum_{u_{2}^{\prime} \in \mathcal{X}} \frac{1}{q} \bigg( \sum_{z_{1} \in \mathcal{Z}_{1}} Q_{1}(y_{1} \mid z_{1}) \, \tilde{W}_{1}(z_{1} \mid u_{1} \ast u_{2}^{\prime}) \bigg) \bigg( \sum_{z_{2} \in \mathcal{Z}_{2}} Q_{2}(y_{2} \mid z_{2}) \, \tilde{W}_{2}(z_{2} \mid u_{2}^{\prime}) \bigg) \notag \\ & = 
\sum_{(z_{1}, z_{2}) \in \mathcal{Z}_{1} \times \mathcal{Z}_{2}} Q_{1}(y_{1} \mid z_{1}) \, Q_{2}(y_{2} \mid z_{2}) \sum_{u_{2}^{\prime} \in \mathcal{X}} \frac{1}{q} \tilde{W}_{1}(z_{1} \mid u_{1} \ast u_{2}^{\prime}) \, \tilde{W}_{2}(z_{2} \mid u_{2}^{\prime}) \notag \\ & = \sum_{(z_{1}, z_{2}) \in \mathcal{Z}_{1} \times \mathcal{Z}_{2}} Q_{1, 2}(y_{1}, y_{2} \mid z_{1}, z_{2}) \, (\tilde{W}_{1} \boxast \tilde{W}_{2})(z_{1}, z_{2} \mid u_{1}) ,\end{aligned}$$ which implies that $W_{1} \boxast W_{2} \preceq \tilde{W}_{1} \boxast \tilde{W}_{2}$, where the product channel $Q_{1, 2} : \mathcal{Z}_{1} \times \mathcal{Z}_{2} \to \mathcal{Y}_{1} \times \mathcal{Y}_{2}$ is given by $$\begin{aligned} Q_{1, 2}(y_{1}, y_{2} \mid z_{1}, z_{2}) & = Q_{1}(y_{1} \mid z_{1}) \, Q_{2}(y_{2} \mid z_{2}) .\end{aligned}$$ Similarly, for each $(u_{1}, u_{2}, y_{1}, y_{2}) \in \mathcal{X}^{2} \times \mathcal{Y}_{1} \times \mathcal{Y}_{2}$, we see that $$\begin{aligned} (W_{1} \varoast W_{2})(y_{1}, y_{2}, u_{1} \mid u_{2}) & = \frac{ 1 }{ q } W_{1}(y_{1} \mid u_{1} \ast u_{2}) \, W_{2}(y_{2} \mid u_{2}) \notag \\ & = \frac{ 1 }{ q } \bigg( \sum_{z_{1} \in \mathcal{Z}_{1}} Q_{1}(y_{1} \mid z_{1}) \, \tilde{W}_{1}(z_{1} \mid u_{1} \ast u_{2}) \bigg) \bigg( \sum_{z_{2} \in \mathcal{Z}_{2}} Q_{2}(y_{2} \mid z_{2}) \, \tilde{W}_{2}(z_{2} \mid u_{2}) \bigg) \notag \\ & = \sum_{(z_{1}, z_{2}) \in \mathcal{Z}_{1} \times \mathcal{Z}_{2}} Q_{1}(y_{1} \mid z_{1}) \, Q_{2}(y_{2} \mid z_{2}) \, \bigg( \frac{ 1 }{ q } \tilde{W}_{1}(z_{1} \mid u_{1} \ast u_{2}) \, \tilde{W}_{2}(z_{2} \mid u_{2}) \bigg) \notag \\ & = \sum_{(z_{1}, z_{2}) \in \mathcal{Z}_{1} \times \mathcal{Z}_{2}} Q_{1, 2}(y_{1}, y_{2} \mid z_{1}, z_{2}) \, (\tilde{W}_{1} \varoast \tilde{W}_{2})(z_{1}, z_{2}, u_{1} \mid u_{2}) \notag \\ & = \sum_{(z_{1}, z_{2}, u_{1}^{\prime}) \in \mathcal{Z}_{1} \times \mathcal{Z}_{2} \times \mathcal{X}} \hat{Q}_{1, 2}(y_{1}, y_{2}, u_{1} \mid z_{1}, z_{2}, u_{1}^{\prime}) \, (\tilde{W}_{1} \varoast 
\tilde{W}_{2})(z_{1}, z_{2}, u_{1}^{\prime} \mid u_{2}) ,\end{aligned}$$ which implies that $W_{1} \varoast W_{2} \preceq \tilde{W}_{1} \varoast \tilde{W}_{2}$, where the channel $\hat{Q}_{1, 2} : \mathcal{Z}_{1} \times \mathcal{Z}_{2} \times \mathcal{X} \to \mathcal{Y}_{1} \times \mathcal{Y}_{2} \times \mathcal{X}$ is given by $$\begin{aligned} \hat{Q}_{1, 2}(y_{1}, y_{2}, u_{1} \mid z_{1}, z_{2}, u_{1}^{\prime}) & = \begin{cases} Q_{1, 2}(y_{1}, y_{2} \mid z_{1}, z_{2}) & \mathrm{if} \ u_{1} = u_{1}^{\prime} , \\ 0 & \mathrm{if} \ u_{1} \neq u_{1}^{\prime} . \end{cases}\end{aligned}$$ As $\preceq$ forms a preorder (i.e., a reflexive and transitive relation) on channels having the same input alphabet $\mathcal{X}$, note that is a direct consequence of . This completes the proof of [Lemma \[lem:invariant\_degradedness\]]{}. It is also worth mentioning that [Lemma \[lem:invariant\_degradedness\]]{} is a straightforward extension of [@tal_vardy_it2013 Lemma 5], and this lemma is an important tool for characterizing the ease of analyzing polar transforms for modular arithmetic erasure channels defined in [Definition \[def:V\]]{} of [Section \[sect:maec\]]{}. Ease of Analyzing Polar Transforms for Binary Erasure Channels {#sect:bec} -------------------------------------------------------------- In this subsection, we consider the binary alphabet $|\mathcal{X}| = 2$, i.e., suppose that $q = 2$. The binary erasure channel (BEC) $W_{\mathrm{BEC}( \varepsilon )} : \mathcal{X} \to \mathcal{Y}$ with erasure probability $0 \le \varepsilon \le 1$ is defined by $$\begin{aligned} W_{\mathrm{BEC}(\varepsilon)}(y \mid x) \coloneqq \begin{cases} 1 - \varepsilon & \mathrm{if} \ y = x , \\ \varepsilon & \mathrm{if} \ y = \: ? , \\ 0 & \mathrm{otherwise} , \end{cases} \label{def:bec}\end{aligned}$$ where $\mathcal{Y} = \mathcal{X} \cup \{ ? \}$. For short, we denote by $\mathrm{BEC}( \varepsilon )$ this channel model. Consider the polar transforms of and with group[^14] $(\mathcal{X}, \ast)$.
As summarized in the following proposition, it is well known that both synthetic channels $\mathrm{BEC}( \varepsilon ) \boxast \mathrm{BEC}( \varepsilon^{\prime} )$ and $\mathrm{BEC}( \varepsilon ) \varoast \mathrm{BEC}( \varepsilon^{\prime} )$ with not necessarily identical erasure probabilities $\varepsilon$ and $\varepsilon^{\prime}$ are equivalent in the sense of [Definition \[def:output\_equiv\]]{} to other BECs with certain erasure probabilities. \[prop:bec\] For any $0 \le \varepsilon, \varepsilon^{\prime} \le 1$, it holds that $$\begin{aligned} \mathrm{BEC}( \varepsilon ) \boxast \mathrm{BEC}( \varepsilon^{\prime} ) & \equiv \mathrm{BEC}( \varepsilon + \varepsilon^{\prime} - \varepsilon \varepsilon^{\prime}) , \label{eq:bec_minus} \\ \mathrm{BEC}( \varepsilon ) \varoast \mathrm{BEC}( \varepsilon^{\prime} ) & \equiv \mathrm{BEC}( \varepsilon \varepsilon^{\prime}) . \label{eq:bec_plus}\end{aligned}$$ By [Lemma \[lem:degraded\_mutual\]]{} and [Proposition \[prop:bec\]]{}, channel parameters of the synthetic channels $\mathrm{BEC}( \varepsilon ) \boxast \mathrm{BEC}( \varepsilon^{\prime} )$ and $\mathrm{BEC}( \varepsilon ) \varoast \mathrm{BEC}( \varepsilon^{\prime} )$ can be calculated from the erasure probabilities $\varepsilon$ and $\varepsilon^{\prime}$ alone. Moreover, combining [Lemma \[lem:invariant\_degradedness\]]{} and [Proposition \[prop:bec\]]{}, we readily see that if the initial channels are BECs, then every synthetic channel can be reduced to a BEC whose erasure probability can be calculated by explicit recursive formulas.
In the case of stationary channels , we summarize this fact as follows: \[cor:bec\] For each ${\boldsymbol{s}} \in \{ -, + \}^{\ast}$ and each $0 \le \varepsilon \le 1$, it holds that $$\begin{aligned} \mathrm{BEC}( \varepsilon )^{{\boldsymbol{s}}} \equiv \mathrm{BEC}( \varepsilon^{{\boldsymbol{s}}} ) ,\end{aligned}$$ where the erasure probability $0 \le \varepsilon^{{\boldsymbol{s}}} \le 1$ can be recursively calculated by $$\begin{aligned} \left\{ \begin{array}{l} \varepsilon^{{\boldsymbol{s}}-} = 2 \varepsilon^{{\boldsymbol{s}}} - ( \varepsilon^{{\boldsymbol{s}}} )^{2} , \\ \varepsilon^{{\boldsymbol{s}}+} = ( \varepsilon^{{\boldsymbol{s}}} )^{2} . \end{array} \right. \label{eq:recursive_bec}\end{aligned}$$ Namely, to analyze the polar transforms of stationary BECs, it suffices to propagate erasure probabilities by the recursive formulas . [Corollary \[cor:bec\]]{} is well known. Moreover, we can verify that[^15] for any fixed $\delta > 0$, $$\begin{aligned} \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ I( W_{\mathrm{BEC}(\varepsilon)}^{{\boldsymbol{s}}} ) > 1 - \delta \Big\} \Big| & = 1 - \varepsilon , \label{eq:noiseless_bec} \\ \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ I( W_{\mathrm{BEC}(\varepsilon)}^{{\boldsymbol{s}}} ) < \delta \Big\} \Big| & = \varepsilon , \label{eq:useless_bec} \end{aligned}$$ which imply that the asymptotic distribution of strong channel polarization for a BEC can be simply characterized by the underlying erasure probability $\varepsilon$. Based on these observations, BECs serve as a very useful toy model in the study of binary polar codes from a pedagogical point of view.
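The recursion above is easy to run over all $2^{n}$ sign sequences; the following sketch (function names are ours) verifies numerically that the mean of the erasure probabilities is preserved at every level (the process is a martingale, since $\frac{1}{2}(2\varepsilon - \varepsilon^{2}) + \frac{1}{2}\varepsilon^{2} = \varepsilon$) and that the proportion of intermediate synthetic channels shrinks as $n$ grows:

```python
def polarize_bec(eps, n):
    # apply eps^- = 2e - e^2 and eps^+ = e^2 through n levels;
    # returns the 2**n synthetic erasure probabilities eps^s
    levels = [eps]
    for _ in range(n):
        levels = [e_new for e in levels for e_new in (2*e - e*e, e*e)]
    return levels

def intermediate_fraction(eps, n, delta):
    # proportion of sign sequences s with delta <= eps^s <= 1 - delta,
    # i.e. with symmetric capacity bounded away from both 0 and 1
    final = polarize_bec(eps, n)
    return sum(delta <= e <= 1 - delta for e in final) / len(final)
```

For example, `intermediate_fraction(0.5, 14, 0.01)` is noticeably smaller than the same fraction at $n = 6$, in line with the limits above.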
Similarly to [Proposition \[prop:bec\]]{}, we will later establish, in [Theorem \[th:recursive\_V\]]{} of [Section \[sect:recursive\]]{}, the ease of analyzing polar transforms for our proposed general type of erasure-like channels, called modular arithmetic erasure channels, defined in [Definition \[def:V\]]{} of the next subsection. Modular Arithmetic Erasure Channels {#sect:maec} ----------------------------------- In this subsection, we propose a general type of erasure channel in terms of modular arithmetic, based on the ring of integers modulo $q$. To define such a class of erasure channels, we now introduce standard notation from elementary number theory as follows: Let $\mathbb{Z} \coloneqq \{ \dots, -2, -1, 0, 1, 2, \dots \}$ be the set of integers. Given two positive integers $a, b \in \mathbb{N}$, define the following three sets: $$\begin{aligned} a \mathbb{Z} & \coloneqq \{ a z \mid z \in \mathbb{Z} \} = \{ \dots, -2 a, - a, 0, a, 2 a, \dots \} , \\ b + a \mathbb{Z} & \coloneqq \{ b + z \mid z \in a \mathbb{Z} \} = \{ \dots, b - 2 a, b - a, b, b + a, b + 2 a, \dots \} , \\ \frac{ \mathbb{Z} }{ a\mathbb{Z} } & \coloneqq \{ z + a \mathbb{Z} \mid z \in \mathbb{Z} \} = \{ a \mathbb{Z}, 1 + a \mathbb{Z}, \dots, (a-1) + a \mathbb{Z} \} .\end{aligned}$$ For two positive integers $a, b \in \mathbb{N}$, let $a|b$ be a shorthand for “$a$ divides $b$,” which means that there exists a positive integer $c \in \mathbb{N}$ satisfying $a c = b$. If we define the sumset $\mathcal{S} + \mathcal{T} \coloneqq \{ s + t \mid s \in \mathcal{S} \ \mathrm{and} \ t \in \mathcal{T} \}$ for given two subsets $\mathcal{S}, \mathcal{T} \subset \mathbb{Z}$, then it is clear that[^16] $a \mathbb{Z} + b \mathbb{Z} = a \mathbb{Z}$ whenever $a|b$.
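To make the coset notation concrete, the following sketch (the pair representation is ours) stores a coset $z + d\mathbb{Z}$ as the pair $(d, z \bmod d)$; the union $\bigcup_{d|q} \mathbb{Z}/d\mathbb{Z}$ then has exactly $\sum_{d|q} d$ distinct elements, e.g., $1 + 2 + 3 + 6 = 12$ for $q = 6$:

```python
def divisors(q):
    # positive divisors of q
    return [d for d in range(1, q + 1) if q % d == 0]

def coset(z, d):
    # represent the coset z + d*Z by the pair (d, z mod d); d = 1 gives Z itself
    return (d, z % d)

def output_alphabet(q):
    # the union over d|q of Z/dZ, as a set of distinct cosets
    return {coset(z, d) for d in divisors(q) for z in range(d)}
```

For instance, `coset(5, 3) == coset(2, 3)` reflects $5 + 3\mathbb{Z} = 2 + 3\mathbb{Z}$.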
Using these notations, we define modular arithmetic erasure channels as follows: \[def:V\] For a given probability vector[^17] ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$, the modular arithmetic erasure channel $V_{{\boldsymbol{\varepsilon}}} : \mathcal{X} \to \mathcal{Y}$ is defined by $$\begin{aligned} V_{{\boldsymbol{\varepsilon}}}(y \mid x) \coloneqq \begin{dcases} \varepsilon_{d} & \text{\emph{if $y = x + d \mathbb{Z}$ for some divisor $d$ of $q$}} , \\ 0 & \mathrm{otherwise} \end{dcases} \label{eq:V}\end{aligned}$$ for each $(x, y) \in \mathcal{X} \times \mathcal{Y}$, where the input and output alphabets are given by $$\begin{aligned} \mathcal{X} & = \frac{ \mathbb{Z} }{ q\mathbb{Z} } , \\ \mathcal{Y} & = \bigcup_{d|q} \frac{ \mathbb{Z} }{ d\mathbb{Z} } = \{ z + d \mathbb{Z} \mid \text{\emph{$z \in \mathbb{Z}$ and $d$ divides $q$}} \} , \label{def:alphabetY}\end{aligned}$$ respectively. We sometimes denote this channel model by $\mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} )$, as with $\mathrm{BEC}( \cdot )$. Note that a modular arithmetic erasure channel $V_{{\boldsymbol{\varepsilon}}}$ is determined by only two parameters: an input alphabet size $q$ and an underlying probability vector ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$. Thus, the notation $\mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} )$ specifies them. It can be easily verified that $V_{{\boldsymbol{\varepsilon}}}$ is symmetric in the sense of Gallager’s definition [@gallager_1968 p. 94] (see also [@ieeeit2018-01 Definition 4]), i.e., its channel capacity coincides with the symmetric capacity $I( V_{{\boldsymbol{\varepsilon}}} )$ (cf. [@gallager_1968 Theorem 4.5.2]). The following proposition gives formulas for the $\alpha$-symmetric capacity of a modular arithmetic erasure channel $V_{{\boldsymbol{\varepsilon}}}$.
\[prop:I(V)\] For any probability vector ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$ and any order $\alpha \in [0, \infty]$, it holds that $$\begin{aligned} I_{\alpha}( V_{{\boldsymbol{\varepsilon}}} ) & = \begin{dcases} \frac{ \alpha }{ \alpha - 1 } \log \bigg( \sum_{d|q} d^{(\alpha-1)/\alpha} \, \varepsilon_{d} \bigg) & \mathrm{if} \ \alpha \in (0, 1) \cup (1, \infty) , \\ \min_{d|q : \varepsilon_{d} > 0} \Big( \log d \Big) & \mathrm{if} \ \alpha = 0 , \\ \sum_{d|q} (\log d) \, \varepsilon_{d} & \mathrm{if} \ \alpha = 1 , \\ \log \bigg( \sum_{d|q} d \, \varepsilon_{d} \bigg) & \mathrm{if} \ \alpha = \infty . \end{dcases}\end{aligned}$$ See [Appendix \[app:proof\_prop:I(V)\]]{}. \[rem:I(V)\] By [Remark \[remark:connections\_alpha\_mutual\]]{} and [Proposition \[prop:I(V)\]]{}, after some algebra, we observe that $$\begin{aligned} Z( V_{{\boldsymbol{\varepsilon}}} ) & = \frac{ 1 }{ q - 1 } \Bigg( \sum_{d|q} \bigg( \frac{ q }{ d } \bigg) \, \varepsilon_{d} - 1 \Bigg) , \\ P_{\mathrm{e}}( V_{{\boldsymbol{\varepsilon}}} ) & = 1 - \sum_{d|q} \bigg( \frac{ d }{ q } \bigg) \, \varepsilon_{d} .\end{aligned}$$ [Proposition \[prop:I(V)\]]{} and [Remark \[rem:I(V)\]]{} tell us that the channel parameters of a modular arithmetic erasure channel $\mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} )$ can be calculated from only its input alphabet size $q$ and its underlying probability vector ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$. As another interpretation, a modular arithmetic erasure channel $\mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} )$ can be seen as a kind of additive noise channel[^18] as follows: The input symbol is modeled by a random variable $X$ taking values in $\mathbb{Z}/q\mathbb{Z}$, and the noise symbol is modeled by a random variable $Z$ taking values in $\{ d \mathbb{Z} \mid \text{$d$ divides $q$} \}$ with the probability law $\mathbb{P}( Z = d \mathbb{Z} ) = \varepsilon_{d}$ for each $d|q$.
Then, the output symbol is modeled by the random variable $Y = X + Z$, which is an analogue of an additive noise channel. In this case, it can be verified that the conditional probability distribution $P_{Y|X}$ of $Y$ given $X$ is equal to the transition probability distribution $V_{{\boldsymbol{\varepsilon}}}$ given in . A graphical representation of this analogue of an additive noise channel is illustrated in [Fig. \[fig:additive\]]{}. [Figure: the input $X \in \mathbb{Z}/q\mathbb{Z}$ and the noise $Z \in \{ d \mathbb{Z} \mid \text{$d$ divides $q$} \}$ enter an adder $\bigoplus$, producing the output $Y = X + Z \in \{ x + d\mathbb{Z} \mid x \in \mathbb{Z}/q\mathbb{Z} \ \mathrm{and} \ \text{$d$ divides $q$} \}$.] Roughly speaking, a modular arithmetic erasure channel $V_{{\boldsymbol{\varepsilon}}}$ behaves as follows: if the receiver observes the output symbol $y = x + d \mathbb{Z}$ through the channel $V_{{\boldsymbol{\varepsilon}}}$ for some $d|q$, then the receiver can uniquely and exactly estimate the transmitted input symbol $x$ modulo $d$. More precisely, from an observed output symbol $y = x + d\mathbb{Z}$, one can recover the transmitted input symbol $x$ correctly modulo $d_{1}$ for each $d_{1}|d$, but cannot do so modulo $d_{2}$ for any $d_{2} \! \nmid \! d$. In this sense, the input symbol $x$ is erased modular-arithmetically. As shown in the following examples, the modular arithmetic erasure channel can be reduced to the following erasure channels: the BEC, the $q$-EC in an ordinary sense (see, e.g., [@mackay_2003 p. 589]), the OEC introduced by Park and Barg [@park_barg_isit2011 p. 2285], and a special senary-input channel model given by Sahebi and Pradhan [@sahebi_pradhan_it2013 Fig. 4: Channel 2]. \[ex:bec\] Consider the case where $q = 2$, i.e., $\mathcal{X} = \mathbb{Z}/2\mathbb{Z}$.
Then, the output alphabet of [Definition \[def:V\]]{} is given by $\mathcal{Y} = (\mathbb{Z}/\mathbb{Z}) \cup (\mathbb{Z}/2\mathbb{Z}) = \{ \mathbb{Z}, 2 \mathbb{Z}, 1 + 2 \mathbb{Z} \}$. For a given probability vector ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d | q} = ( \varepsilon_{1}, \varepsilon_{2} )$, the transition probability of the erasure channel $V_{{\boldsymbol{\varepsilon}}} : \mathcal{X} \to \mathcal{Y}$ is determined by $$\begin{aligned} V_{{\boldsymbol{\varepsilon}}}(y \mid x) = \begin{cases} \varepsilon_{2} & \mathrm{if} \ y = x , \\ \varepsilon_{1} & \mathrm{if} \ y = \mathbb{Z} , \\ 0 & \mathrm{otherwise} \end{cases}\end{aligned}$$ for each $(x, y) \in \mathcal{X} \times \mathcal{Y}$. This channel model is essentially the same[^19] as the BEC with erasure probability $\varepsilon_{1}$, i.e., the erasure symbol ‘$?$’ corresponds to $\mathbb{Z}$ (see of [Section \[sect:bec\]]{}). Let $q \ge 2$ be an arbitrary positive integer. Suppose that an underlying probability vector ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$ satisfies $\varepsilon_{1} + \varepsilon_{q} = 1$, i.e., $\varepsilon_{d} = 0$ for every $d|q$ with $1 < d < q$. Then, the transition probability of the erasure channel $V_{{\boldsymbol{\varepsilon}}} : \mathcal{X} \to \mathcal{Y}$ is determined by $$\begin{aligned} V_{{\boldsymbol{\varepsilon}}}(y \mid x) = \begin{cases} \varepsilon_{q} & \mathrm{if} \ y = x , \\ \varepsilon_{1} & \mathrm{if} \ y = \mathbb{Z} , \\ 0 & \mathrm{otherwise} \end{cases}\end{aligned}$$ for each $(x, y) \in \mathcal{X} \times \mathcal{Y}$.
In this case, the output alphabet $\mathcal{Y}$ and the underlying probability vector ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$ can be reduced to $$\begin{aligned} \mathcal{Y}^{\prime} & = \bigg( \frac{ \mathbb{Z} }{ \mathbb{Z} } \bigg) \cup \bigg( \frac{ \mathbb{Z} }{ q\mathbb{Z} } \bigg) , \\ {\boldsymbol{\varepsilon}}^{\prime} & = ( \varepsilon_{1}, \varepsilon_{q} ) ,\end{aligned}$$ respectively; and the channel $V_{{\boldsymbol{\varepsilon}}^{\prime}} : \mathcal{X} \to \mathcal{Y}^{\prime}$ is essentially the same as the $q$-EC [@mackay_2003 p. 589] with erasure probability $\varepsilon_{1}$, i.e., the erasure symbol corresponds to $\mathbb{Z}$ as in [Example \[ex:bec\]]{}. Note that if $\varepsilon_{1} + \varepsilon_{q} < 1$, i.e., if there exists a divisor $1 < d < q$ satisfying $\varepsilon_{d} > 0$, then $V_{{\boldsymbol{\varepsilon}}} : \mathcal{X} \to \mathcal{Y}$ differs essentially from a $q$-EC. \[ex:oec\] Let $q$ be a prime power, i.e., $q = p^{r}$ for some prime number $p$ and some positive integer $r$. Note that each divisor $d|q$ can be written as $d = p^{t}$ for some $0 \le t \le r$.
For a given probability vector ${\boldsymbol{\varepsilon}} = ( \varepsilon_{1}, \varepsilon_{p}, \varepsilon_{p^{2}}, \dots, \varepsilon_{p^{r-1}}, \varepsilon_{p^{r}} )$, the transition probability of the erasure channel $V_{{\boldsymbol{\varepsilon}}} : \mathcal{X} \to \mathcal{Y}$ is determined by $$\begin{aligned} V_{{\boldsymbol{\varepsilon}}}(y \mid x) & = \begin{cases} \varepsilon_{p^{t}} & \mathrm{if} \ y = x + p^{t} \mathbb{Z} \ \mathrm{for} \ \mathrm{some} \ 0 \le t \le r , \\ 0 & \mathrm{otherwise} \end{cases} \notag \\ & = \begin{cases} \varepsilon_{p^{r}} & \mathrm{if} \ y = x , \\ \varepsilon_{p^{r-1}} & \mathrm{if} \ y = x + p^{r-1} \mathbb{Z} , \\ \vdots & \vdots \\ \varepsilon_{p} & \mathrm{if} \ y = x + p \mathbb{Z} , \\ \varepsilon_{1} & \mathrm{if} \ y = \mathbb{Z} , \\ 0 & \mathrm{otherwise} \end{cases}\end{aligned}$$ for each $(x, y) \in \mathcal{X} \times \mathcal{Y}$. This channel model is essentially the same as the OEC proposed by Park and Barg [@park_barg_isit2011 p. 2285]. Note that Sahebi and Pradhan’s quaternary-input erasure channel [@sahebi_pradhan_it2013 Fig. 3: Channel 1] is also essentially the same as the quaternary-input OEC. \[ex:sahebi\_pradhan\] Consider the case where $q = 6$, i.e., the input alphabet is given by $\mathcal{X} = \mathbb{Z}/6\mathbb{Z}$. Then, the output alphabet is given by $\mathcal{Y} = (\mathbb{Z}/\mathbb{Z}) \cup (\mathbb{Z}/2\mathbb{Z}) \cup (\mathbb{Z}/3\mathbb{Z}) \cup (\mathbb{Z}/6\mathbb{Z}) = \{ \mathbb{Z}, 2 \mathbb{Z}, 1 + 2 \mathbb{Z}, 3 \mathbb{Z}, 1 + 3 \mathbb{Z}, 2 + 3 \mathbb{Z}, 6 \mathbb{Z}, 1 + 6 \mathbb{Z}, 2 + 6 \mathbb{Z}, 3 + 6 \mathbb{Z}, 4 + 6 \mathbb{Z}, 5 + 6 \mathbb{Z} \}$. 
For a given probability vector ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q} = ( \varepsilon_{1}, \varepsilon_{2}, \varepsilon_{3}, \varepsilon_{6} )$, the transition probability of the erasure channel $V_{{\boldsymbol{\varepsilon}}} : \mathcal{X} \to \mathcal{Y}$ is determined by $$\begin{aligned} V_{{\boldsymbol{\varepsilon}}}(y \mid x) = \begin{cases} \varepsilon_{6} & \mathrm{if} \ y = x , \\ \varepsilon_{3} & \mathrm{if} \ y = x + 3 \mathbb{Z} , \\ \varepsilon_{2} & \mathrm{if} \ y = x + 2 \mathbb{Z} , \\ \varepsilon_{1} & \mathrm{if} \ y = \mathbb{Z} , \\ 0 & \mathrm{otherwise} \end{cases}\end{aligned}$$ for each $(x, y) \in \mathcal{X} \times \mathcal{Y}$. This channel model is essentially the same as the senary-input erasure-like channel proposed by Sahebi and Pradhan [@sahebi_pradhan_it2013 Fig. 4: Channel 2]. Recursive Formulas of Polar Transforms {#sect:recursive} -------------------------------------- We now consider polar transforms for $q$-ary input channels under the ring $(\mathbb{Z}/q\mathbb{Z}, +, \cdot)$ of integers modulo $q$. Let $\mathcal{X} = \mathbb{Z}/q\mathbb{Z}$ be the input alphabet, and let $\gamma \in \mathbb{Z}/q\mathbb{Z}$ be a unit[^20] of the ring. 
For two given channels $W_{1} : \mathcal{X} \to \mathcal{Y}_{1}$ and $W_{2} : \mathcal{X} \to \mathcal{Y}_{2}$ that are not necessarily identical, the polar transform[^21] produces two synthetic channels: the worse channel $W_{1} \boxast_{\gamma} W_{2} : \mathcal{X} \to \mathcal{Y}_{1} \times \mathcal{Y}_{2}$ defined by $$\begin{aligned} (W_{1} \boxast_{\gamma} W_{2}) (y_{1}, y_{2} \mid u_{1}) \coloneqq \sum_{u_{2}^{\prime} \in \mathcal{X}} \frac{ 1 }{ q } \, W_{1}(y_{1} \mid u_{1} + \gamma u_{2}^{\prime}) \, W_{2}(y_{2} \mid u_{2}^{\prime}) ; \label{def:minus_two}\end{aligned}$$ and the better channel $W_{1} \varoast_{\gamma} W_{2} : \mathcal{X} \to \mathcal{Y}_{1} \times \mathcal{Y}_{2} \times \mathcal{X}$ defined by $$\begin{aligned} (W_{1} \varoast_{\gamma} W_{2})(y_{1}, y_{2}, u_{1} \mid u_{2}) \coloneqq \frac{ 1 }{ q } \, W_{1}(y_{1} \mid u_{1} + \gamma u_{2}) \, W_{2}(y_{2} \mid u_{2}) . \label{def:plus_two}\end{aligned}$$ These polar transforms and with a unit $\gamma \in \mathbb{Z}/q\mathbb{Z}$ arise in the context of entropy weighted sums (see [@abbe_li_madiman_2017]). Recall that $\mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} )$ denotes the $q$-ary input modular arithmetic erasure channel $V_{{\boldsymbol{\varepsilon}}}$ defined in [Definition \[def:V\]]{} with underlying probability vector ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$.
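These transforms can be implemented directly for channels with finite output alphabets. The following sketch (the dictionary-based representation and function names are ours) builds both synthetic channels for a small erasure-like channel over $\mathbb{Z}/3\mathbb{Z}$ and checks the standard one-step conservation identity $I(W_{1} \boxast_{\gamma} W_{2}) + I(W_{1} \varoast_{\gamma} W_{2}) = I(W_{1}) + I(W_{2})$, which holds because $(u_{1}, u_{2}) \mapsto (u_{1} + \gamma u_{2}, u_{2})$ is a bijection for a unit $\gamma$:

```python
import math
from itertools import product

def sym_capacity(W, q):
    # symmetric capacity I(W) in bits; W: input x -> {output y: W(y|x)}
    outs = {y for row in W.values() for y in row}
    p_y = {y: sum(W[x].get(y, 0.0) for x in W) / q for y in outs}
    return sum((p / q) * math.log2(p / p_y[y])
               for x in W for y, p in W[x].items() if p > 0)

def polar_transform(W1, W2, q, gamma=1):
    # worse channel (boxast) and better channel (varoast) over Z/qZ, unit gamma
    minus = {u1: {} for u1 in range(q)}
    plus = {u2: {} for u2 in range(q)}
    for u1, u2 in product(range(q), repeat=2):
        for y1, p1 in W1[(u1 + gamma * u2) % q].items():
            for y2, p2 in W2[u2].items():
                minus[u1][(y1, y2)] = minus[u1].get((y1, y2), 0.0) + p1 * p2 / q
                key = (y1, y2, u1)
                plus[u2][key] = plus[u2].get(key, 0.0) + p1 * p2 / q
    return minus, plus

# a small erasure-like test channel over Z/3Z: reveal x w.p. 0.6, erase w.p. 0.4
W = {x: {('sym', x): 0.6, '?': 0.4} for x in range(3)}
W_minus, W_plus = polar_transform(W, W, 3, gamma=2)
```

Here $I(W) = 0.6 \log_{2} 3$, and the two synthetic capacities straddle it while summing to $2 I(W)$.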
The following theorem is one of our main results: the synthetic channels $\mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} ) \boxast_{\gamma} \mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}}^{\prime} )$ and $\mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} ) \varoast_{\gamma} \mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}}^{\prime} )$ are equivalent to the modular arithmetic erasure channels $\mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} \boxast {\boldsymbol{\varepsilon}}^{\prime} )$ and $\mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} \varoast {\boldsymbol{\varepsilon}}^{\prime} )$, respectively, with certain underlying probability vectors ${\boldsymbol{\varepsilon}} \boxast {\boldsymbol{\varepsilon}}^{\prime}$ and ${\boldsymbol{\varepsilon}} \varoast {\boldsymbol{\varepsilon}}^{\prime}$, as in the recursive formulas of the Ar[i]{}kan-like polar transform for BECs (see [Proposition \[prop:bec\]]{}). \[th:recursive\_V\] Let $q \ge 2$ be an integer, let $\gamma \in \mathbb{Z}/q\mathbb{Z}$ be a unit of the ring, and let ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$ and ${\boldsymbol{\varepsilon}}^{\prime} = ( \varepsilon_{d}^{\prime} )_{d|q}$ be two probability vectors.
Then, it holds that $$\begin{aligned} \mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} ) \boxast_{\gamma} \mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}}^{\prime} ) & \equiv \mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} \boxast {\boldsymbol{\varepsilon}}^{\prime} ) , \label{eq:maec_minus_recursive} \\ \mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} ) \varoast_{\gamma} \mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}}^{\prime} ) & \equiv \mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} \varoast {\boldsymbol{\varepsilon}}^{\prime} ) , \label{eq:maec_plus_recursive} \end{aligned}$$ where two probability vectors ${\boldsymbol{\varepsilon}} \boxast {\boldsymbol{\varepsilon}}^{\prime} \coloneqq ( \varepsilon_{d}^{\boxast} )_{d|q}$ and ${\boldsymbol{\varepsilon}} \varoast {\boldsymbol{\varepsilon}}^{\prime} \coloneqq ( \varepsilon_{d}^{\varoast} )_{d|q}$ are given by $$\begin{aligned} \varepsilon_{d}^{\boxast} & = \varepsilon_{d}^{\boxast}( {\boldsymbol{\varepsilon}}, {\boldsymbol{\varepsilon}}^{\prime} ) \coloneqq \sum_{\substack{ d_{1}|q, d_{2}|q : \\ \gcd(d_{1}, d_{2}) = d }} \varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} , \label{def:eps_minus} \\ \varepsilon_{d}^{\varoast} & = \varepsilon_{d}^{\varoast}( {\boldsymbol{\varepsilon}}, {\boldsymbol{\varepsilon}}^{\prime} ) \coloneqq \sum_{\substack{ d_{1}|q, d_{2}|q : \\ {\operatorname{lcm}}(d_{1}, d_{2}) = d }} \varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} , \label{def:eps_plus}\end{aligned}$$ respectively, for each $d|q$. We will prove [Theorem \[th:recursive\_V\]]{} in the next subsection. Two simple examples of [Theorem \[th:recursive\_V\]]{} are given as follows: \[ex:recursive\_2\] Consider the case where $q = 2$, i.e., the modular arithmetic erasure channel $\mathrm{MAEC}_{2}( \varepsilon_{1}, \varepsilon_{2} )$ coincides with the BEC with erasure probability $\varepsilon_{1}$ (see [Example \[ex:bec\]]{}). Let $\gamma \in \mathbb{Z}/2\mathbb{Z}$ be a unit. 
By [Theorem \[th:recursive\_V\]]{}, the worse channel $\mathrm{MAEC}_{2}( \varepsilon_{1}, \varepsilon_{2} ) \boxast_{\gamma} \mathrm{MAEC}_{2}( \varepsilon_{1}^{\prime}, \varepsilon_{2}^{\prime} )$ is equivalent to $\mathrm{MAEC}_{2}( \varepsilon_{1}^{\boxast}, \varepsilon_{2}^{\boxast} )$, where $$\begin{aligned} \left\{ \begin{array}{l} \varepsilon_{2}^{\boxast} = \varepsilon_{2} \, \varepsilon_{2}^{\prime} , \\ \varepsilon_{1}^{\boxast} = \varepsilon_{1} \, \varepsilon_{1}^{\prime} + \varepsilon_{1} \, \varepsilon_{2}^{\prime} + \varepsilon_{1}^{\prime} \, \varepsilon_{2} . \end{array} \right.\end{aligned}$$ Since $\varepsilon_{1} + \varepsilon_{2} = 1$ and $\varepsilon_{1}^{\prime} + \varepsilon_{2}^{\prime} = 1$, it can be verified that $$\begin{aligned} \varepsilon_{1}^{\boxast} = \varepsilon_{1} + \varepsilon_{1}^{\prime} - \varepsilon_{1} \, \varepsilon_{1}^{\prime} ,\end{aligned}$$ which coincides with the corresponding recursive formula of [Proposition \[prop:bec\]]{}. Moreover, by [Theorem \[th:recursive\_V\]]{}, the better channel $\mathrm{MAEC}_{2}( \varepsilon_{1}, \varepsilon_{2} ) \varoast_{\gamma} \mathrm{MAEC}_{2}( \varepsilon_{1}^{\prime}, \varepsilon_{2}^{\prime} )$ is equivalent to $\mathrm{MAEC}_{2}( \varepsilon_{1}^{\varoast}, \varepsilon_{2}^{\varoast} )$, where $$\begin{aligned} \left\{ \begin{array}{l} \varepsilon_{2}^{\varoast} = \varepsilon_{1}^{\prime} \, \varepsilon_{2} + \varepsilon_{1} \, \varepsilon_{2}^{\prime} + \varepsilon_{2} \, \varepsilon_{2}^{\prime} , \\ \varepsilon_{1}^{\varoast} = \varepsilon_{1} \, \varepsilon_{1}^{\prime} , \end{array} \right.\end{aligned}$$ which coincides with the corresponding recursive formula of [Proposition \[prop:bec\]]{}. \[ex:recursive\_6\] Consider the case where $q = 6$ (see [Example \[ex:sahebi\_pradhan\]]{}). Let $\gamma \in \mathbb{Z}/6\mathbb{Z}$ be a unit.
By [Theorem \[th:recursive\_V\]]{}, the worse channel $\mathrm{MAEC}_{6}( \varepsilon_{1}, \varepsilon_{2}, \varepsilon_{3}, \varepsilon_{6} ) \boxast_{\gamma} \mathrm{MAEC}_{6}( \varepsilon_{1}^{\prime}, \varepsilon_{2}^{\prime}, \varepsilon_{3}^{\prime}, \varepsilon_{6}^{\prime} )$ is equivalent to $\mathrm{MAEC}_{6}( \varepsilon_{1}^{\boxast}, \varepsilon_{2}^{\boxast}, \varepsilon_{3}^{\boxast}, \varepsilon_{6}^{\boxast} )$, where $$\begin{aligned} \left\{ \begin{array}{l} \varepsilon_{6}^{\boxast} = \varepsilon_{6} \, \varepsilon_{6}^{\prime} , \\ \varepsilon_{3}^{\boxast} = \varepsilon_{3} \, \varepsilon_{3}^{\prime} + \varepsilon_{3} \, \varepsilon_{6}^{\prime} + \varepsilon_{6} \, \varepsilon_{3}^{\prime} , \\ \varepsilon_{2}^{\boxast} = \varepsilon_{2} \, \varepsilon_{2}^{\prime} + \varepsilon_{2} \, \varepsilon_{6}^{\prime} + \varepsilon_{6} \, \varepsilon_{2}^{\prime} , \\ \varepsilon_{1}^{\boxast} = \varepsilon_{1} \, \varepsilon_{1}^{\prime} + \varepsilon_{1} \, \varepsilon_{2}^{\prime} + \varepsilon_{1} \, \varepsilon_{3}^{\prime} + \varepsilon_{1} \, \varepsilon_{6}^{\prime} + \varepsilon_{2} \, \varepsilon_{1}^{\prime} + \varepsilon_{2} \, \varepsilon_{3}^{\prime} + \varepsilon_{3} \, \varepsilon_{1}^{\prime} + \varepsilon_{3} \, \varepsilon_{2}^{\prime} + \varepsilon_{6} \, \varepsilon_{1}^{\prime} . \end{array} \right. 
\label{eq:recursive_6_minus}\end{aligned}$$ Moreover, by [Theorem \[th:recursive\_V\]]{}, the better channel $\mathrm{MAEC}_{6}( \varepsilon_{1}, \varepsilon_{2}, \varepsilon_{3}, \varepsilon_{6} ) \varoast_{\gamma} \mathrm{MAEC}_{6}( \varepsilon_{1}^{\prime}, \varepsilon_{2}^{\prime}, \varepsilon_{3}^{\prime}, \varepsilon_{6}^{\prime} )$ is equivalent to $\mathrm{MAEC}_{6}( \varepsilon_{1}^{\varoast}, \varepsilon_{2}^{\varoast}, \varepsilon_{3}^{\varoast}, \varepsilon_{6}^{\varoast} )$, where $$\begin{aligned} \left\{ \begin{array}{l} \varepsilon_{6}^{\varoast} = \varepsilon_{1} \, \varepsilon_{6}^{\prime} + \varepsilon_{2} \, \varepsilon_{3}^{\prime} + \varepsilon_{2} \, \varepsilon_{6}^{\prime} + \varepsilon_{3} \, \varepsilon_{2}^{\prime} + \varepsilon_{3} \, \varepsilon_{6}^{\prime} + \varepsilon_{6} \, \varepsilon_{1}^{\prime} + \varepsilon_{6} \, \varepsilon_{2}^{\prime} + \varepsilon_{6} \, \varepsilon_{3}^{\prime} + \varepsilon_{6} \, \varepsilon_{6}^{\prime} , \\ \varepsilon_{3}^{\varoast} = \varepsilon_{1} \, \varepsilon_{3}^{\prime} + \varepsilon_{3} \, \varepsilon_{1}^{\prime} + \varepsilon_{3} \, \varepsilon_{3}^{\prime} , \\ \varepsilon_{2}^{\varoast} = \varepsilon_{1} \, \varepsilon_{2}^{\prime} + \varepsilon_{2} \, \varepsilon_{1}^{\prime} + \varepsilon_{2} \, \varepsilon_{2}^{\prime} , \\ \varepsilon_{1}^{\varoast} = \varepsilon_{1} \, \varepsilon_{1}^{\prime} . \end{array} \right. \label{eq:recursive_6_plus}\end{aligned}$$ Note that the minus-transform formula above can be reduced to Sahebi and Pradhan’s recursive formula [@sahebi_pradhan_it2013 Equation (4)], while the plus-transform formula corrects the error in Sahebi and Pradhan’s recursive formula [@sahebi_pradhan_it2013 Equation (3)].
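The general gcd- and lcm-indexed recursions of [Theorem \[th:recursive\_V\]]{} are easy to evaluate numerically and to compare against the expanded $q = 6$ formulas above. The following Python sketch does exactly this; all function and variable names are ours, not taken from any existing library.

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def divisors(q):
    return [d for d in range(1, q + 1) if q % d == 0]

def minus(q, eps, eps_p):
    """Worse-channel vector: sum eps[d1] * eps_p[d2] over gcd(d1, d2) = d."""
    return {d: sum(eps[d1] * eps_p[d2]
                   for d1 in divisors(q) for d2 in divisors(q)
                   if gcd(d1, d2) == d)
            for d in divisors(q)}

def plus(q, eps, eps_p):
    """Better-channel vector: sum eps[d1] * eps_p[d2] over lcm(d1, d2) = d."""
    return {d: sum(eps[d1] * eps_p[d2]
                   for d1 in divisors(q) for d2 in divisors(q)
                   if lcm(d1, d2) == d)
            for d in divisors(q)}

# q = 6: the divisors are 1, 2, 3, 6; take two arbitrary probability vectors.
eps   = {1: 0.1, 2: 0.2, 3: 0.3, 6: 0.4}
eps_p = {1: 0.4, 2: 0.3, 3: 0.2, 6: 0.1}
em, ep = minus(6, eps, eps_p), plus(6, eps, eps_p)

# Both outputs are again probability vectors indexed by the divisors of 6.
assert abs(sum(em.values()) - 1) < 1e-12
assert abs(sum(ep.values()) - 1) < 1e-12
# Spot checks against the expanded q = 6 formulas above.
assert em[6] == eps[6] * eps_p[6]
assert ep[1] == eps[1] * eps_p[1]
```

Since every pair $(d_{1}, d_{2})$ of divisors of $q$ has both $\gcd(d_{1}, d_{2})$ and ${\operatorname{lcm}}(d_{1}, d_{2})$ dividing $q$, both outputs sum to one, as the assertions confirm.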
It is worth mentioning that whereas the polar transforms depend on the unit $\gamma \in \mathbb{Z}/q\mathbb{Z}$ in general (see [@abbe_li_madiman_2017]), [Theorem \[th:recursive\_V\]]{} shows that the polar transforms for modular arithmetic erasure channels are independent of the choice of a unit $\gamma \in \mathbb{Z}/q\mathbb{Z}$. Based on this observation, we may assume without loss of generality that $\gamma = 1 + q\mathbb{Z} \in \mathbb{Z}/q\mathbb{Z}$ in the analyses of polar transforms for modular arithmetic erasure channels. Consider polar transforms for stationary channels $W : \mathcal{X} \to \mathcal{Y}$ with $\mathcal{X} = \mathbb{Z}/q\mathbb{Z}$ as follows: the polar transforms produce the worse channel $W^{-} : \mathcal{X} \to \mathcal{Y}^{2}$ and the better channel $W^{+} : \mathcal{X} \to \mathcal{Y}^{2} \times \mathcal{X}$ as $$\begin{aligned} W^{-}(y_{1}, y_{2} \mid u_{1}) & \coloneqq \sum_{u_{2}^{\prime} \in \mathcal{X}} \frac{ 1 }{ q } W(y_{1} \mid u_{1} + u_{2}^{\prime}) \, W(y_{2} \mid u_{2}^{\prime}) , \\ W^{+}(y_{1}, y_{2}, u_{1} \mid u_{2}) & \coloneqq \frac{ 1 }{ q } W(y_{1} \mid u_{1} + u_{2}) \, W(y_{2} \mid u_{2}) ,\end{aligned}$$ respectively. Then, after the $n$-step polar transforms for an integer $n \in \mathbb{N}$, the synthetic channel $W^{{\boldsymbol{s}}} : \mathcal{X} \to \mathcal{Y}^{2^{n}} \times \mathcal{X}^{w( {\boldsymbol{s}} )}$ is created by $$\begin{aligned} W^{{\boldsymbol{s}}} \coloneqq ( \cdots ( W^{s_{1}} )^{s_{2}} \cdots )^{s_{n}}\end{aligned}$$ for each ${\boldsymbol{s}} \in \{ -, + \}^{n}$, where the mapping $w : \{ -, + \}^{\ast} \to \mathbb{N}_{0}$ is as defined earlier. Similarly to [Corollary \[cor:bec\]]{}, combining [Lemma \[lem:invariant\_degradedness\]]{} and [Theorem \[th:recursive\_V\]]{}, we readily obtain the following corollary.
\[cor:recursive\_V\] Let $q \ge 2$ be an integer, let $\gamma \in \mathbb{Z}/q\mathbb{Z}$ be a unit of the ring, and let ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$ be a probability vector. For any sequence ${\boldsymbol{s}} \in \{ -, + \}^{\ast}$, it holds that $$\begin{aligned} \mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} )^{{\boldsymbol{s}}} \equiv \mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}}^{{\boldsymbol{s}}} ) ,\end{aligned}$$ where the probability vector ${\boldsymbol{\varepsilon}}^{{\boldsymbol{s}}} = ( \varepsilon_{d}^{{\boldsymbol{s}}} )_{d|q}$ is recursively given by $$\begin{aligned} \left\{ \begin{array}{l} \varepsilon_{d}^{{\boldsymbol{s}}-} = \displaystyle \sum_{\substack{ d_{1}|q, d_{2}|q : \\ \gcd(d_{1}, d_{2}) = d }} \varepsilon_{d_{1}}^{{\boldsymbol{s}}} \varepsilon_{d_{2}}^{{\boldsymbol{s}}} , \\ \varepsilon_{d}^{{\boldsymbol{s}}+} = \displaystyle \sum_{\substack{ d_{1}|q, d_{2}|q : \\ {\operatorname{lcm}}(d_{1}, d_{2}) = d }} \varepsilon_{d_{1}}^{{\boldsymbol{s}}} \varepsilon_{d_{2}}^{{\boldsymbol{s}}} \end{array} \right. \label{def:eps_s}\end{aligned}$$ for each $d|q$. \ It follows from [Lemma \[lem:degraded\_mutual\]]{}, [Proposition \[prop:I(V)\]]{}, and [Corollary \[cor:recursive\_V\]]{} that it suffices to propagate the probability vector ${\boldsymbol{\varepsilon}}^{{\boldsymbol{s}}}$ by the recursive formulas for calculating the $\alpha$-symmetric capacity $I_{\alpha}( V_{{\boldsymbol{\varepsilon}}}^{{\boldsymbol{s}}} )$. Some numerical examples of this fact are plotted in [Fig. \[fig:example\_recursive\_formulas\]]{}, which illustrates the symmetric capacities $I( V_{{\boldsymbol{\varepsilon}}}^{{\boldsymbol{s}}} )$ of synthetic channels $V_{{\boldsymbol{\varepsilon}}}^{{\boldsymbol{s}}}$ in which each initial channel $V_{{\boldsymbol{\varepsilon}}}$ is a modular arithmetic erasure channel. From [Fig. 
\[fig:example\_recursive\_formulas\]]{}, we can conjecture that polarization for modular arithmetic erasure channels exhibits multilevel channel polarization in general. On the other hand, whereas [Fig. \[subfig:maec\_6\_case1\]]{} appears to exhibit multilevel channel polarization with $q=6$, [Fig. \[subfig:maec\_6\_case2\]]{} appears to exhibit strong channel polarization with the same input alphabet size $q=6$. Moreover, in [Fig. \[subfig:maec\_45\]]{}, we observe that the limiting proportion of useless synthetic channels $V_{{\boldsymbol{\varepsilon}}}^{{\boldsymbol{s}}}$, i.e., those with $I( V_{{\boldsymbol{\varepsilon}}}^{{\boldsymbol{s}}} ) \approx 0$, approaches zero as $n \to \infty$, while the limiting proportion of noiseless synthetic channels $V_{{\boldsymbol{\varepsilon}}}^{{\boldsymbol{s}}}$, i.e., those with $I( V_{{\boldsymbol{\varepsilon}}}^{{\boldsymbol{s}}} ) \approx 1$, does not approach one as $n \to \infty$. To understand the general law behind these behaviors, in [Section \[sect:asymptotic\_distribution\_MAEC\]]{}, we fully characterize the asymptotic distribution of multilevel channel polarization for modular arithmetic erasure channels $\mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} )$ for every input alphabet size $q$. Using the results in [Section \[sect:asymptotic\_distribution\_MAEC\]]{}, we will clarify the asymptotic distributions of Figs. \[subfig:maec\_6\_case1\] and \[subfig:maec\_6\_case2\] in [Example \[ex:asymptotic\_distribution\_6\]]{}; the asymptotic distribution of [Fig. \[subfig:maec\_45\]]{} below [Corollary \[cor:mu\_d\]]{}; and the asymptotic distribution of [Fig. \[subfig:maec\_512\]]{} below [Theorem \[th:primepower\]]{}. Before we move to [Section \[sect:asymptotic\_distribution\_MAEC\]]{}, we give a proof of [Theorem \[th:recursive\_V\]]{} in the following subsection.
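Numerical experiments of this kind can be reproduced by propagating the divisor-indexed vector ${\boldsymbol{\varepsilon}}^{{\boldsymbol{s}}}$ as in [Corollary \[cor:recursive\_V\]]{}. The sketch below additionally assumes the closed form $I( V_{{\boldsymbol{\varepsilon}}} ) = \sum_{d|q} \varepsilon_{d} \log_{q} d$ for the symmetric capacity in $q$-ary units (our reading of [Proposition \[prop:I(V)\]]{}, which is not restated in this section); all names are ours.

```python
from math import gcd, log

def lcm(a, b):
    return a * b // gcd(a, b)

def divisors(q):
    return [d for d in range(1, q + 1) if q % d == 0]

def step(q, eps, sign):
    """One polar step on the divisor-indexed vector (gcd for '-', lcm for '+')."""
    combine = gcd if sign == '-' else lcm
    return {d: sum(eps[d1] * eps[d2]
                   for d1 in divisors(q) for d2 in divisors(q)
                   if combine(d1, d2) == d)
            for d in divisors(q)}

def capacity(q, eps):
    # Assumed closed form: the output reveals x mod d with probability
    # eps[d], so the symmetric capacity in q-ary units is sum_d eps[d] log_q d.
    return sum(p * log(d, q) for d, p in eps.items() if d > 1)

q, n = 6, 8
eps0 = {d: 0.25 for d in divisors(q)}  # uniform over the divisors 1, 2, 3, 6
caps = []
for bits in range(2 ** n):  # enumerate all 2^n sign sequences s
    eps = eps0
    for i in range(n):
        eps = step(q, eps, '-' if (bits >> i) & 1 else '+')
    caps.append(capacity(q, eps))

# Mutual information is conserved by each polar step, so the average
# capacity stays at I(V_eps0) = 0.25 * (log_6 2 + log_6 3 + log_6 6) = 0.5,
# while the individual capacities cluster near 0, log_6 2, log_6 3, and 1.
assert abs(sum(caps) / len(caps) - 0.5) < 1e-9
```

The conservation check follows from $\log \gcd(d_{1}, d_{2}) + \log {\operatorname{lcm}}(d_{1}, d_{2}) = \log d_{1} + \log d_{2}$, which gives $I(W^{-}) + I(W^{+}) = 2 I(W)$ under the assumed capacity formula.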
Proof of [Theorem \[th:recursive\_V\]]{} {#sect:proof_recursive_V} ---------------------------------------- In this study, we define the congruence between $a$ and $b$ modulo $d \in \mathbb{N}$ by $$\begin{aligned} a \equiv b \pmod{d} \overset{\text{def}}{\iff} a + d \mathbb{Z} = b + d \mathbb{Z} , \label{def:congruence}\end{aligned}$$ where $a$ and $b$ are elements belonging either to $\mathbb{Z}$ or to $\mathbb{Z}/c\mathbb{Z}$ for an arbitrary positive integer $c \in \mathbb{N}$ satisfying $d|c$. It is worth mentioning that even if $a \in \mathbb{Z}/c_{1}\mathbb{Z}$ and $b \in \mathbb{Z}/c_{2}\mathbb{Z}$ with distinct $c_{1}, c_{2} \in \mathbb{N}$ satisfying $d|\gcd(c_{1}, c_{2})$, or even if $a \in \mathbb{Z}$ and $b \in \mathbb{Z}/c\mathbb{Z}$ for some $c \in \mathbb{N}$ satisfying $d|c$, this congruence is still well-defined; and this fact is useful in our analyses, especially in the proof of [Theorem \[th:recursive\_V\]]{} later. To prove [Theorem \[th:recursive\_V\]]{}, noting the definition of the congruence, we employ the following well-known result in elementary number theory. \[lem:CRT\] Let $d_{1}, d_{2} \in \mathbb{N}$. For every $a$ and $b$, the system of two congruences $$\begin{aligned} z & \equiv a \pmod{d_{1}} , \label{eq:congruence_d1} \\ z & \equiv b \pmod{d_{2}} \label{eq:congruence_d2} \end{aligned}$$ has a solution $z$ if and only if $$\begin{aligned} a \equiv b \pmod{\gcd(d_{1}, d_{2})} . \label{eq:condition_gcd}\end{aligned}$$ In particular, when the solution $z$ exists, it is unique modulo ${\operatorname{lcm}}(d_{1}, d_{2})$. We now introduce two useful notations. Let $P$ be a probability distribution on $\mathcal{X}$, and let $W : \mathcal{X} \to \mathcal{Y}$ be a channel. 
Then, the output distribution $PW$ on $\mathcal{Y}$ of $W$ with input distribution $P$ is defined by $$\begin{aligned} PW( y ) \coloneqq \sum_{x \in \mathcal{X}} P( x ) \, W(y \mid x) \label{def:output}\end{aligned}$$ for each $y \in \mathcal{Y}$; and the backward channel $\overline{W}_{P} : \mathcal{Y} \to \mathcal{X}$ of $W$ with input distribution $P$ is defined by $$\begin{aligned} \overline{W}_{P}(x \mid y) \coloneqq \frac{ P( x ) \, W(y \mid x) }{ PW( y ) } \label{def:backward}\end{aligned}$$ for each $(x, y) \in \mathcal{X} \times \mathcal{Y}$. When $P$ is uniform, we drop the subscript $P$ and write $\overline{W}_{P}$ as $\overline{W}$ for short. Let ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$ and ${\boldsymbol{\varepsilon}}^{\prime} = ( \varepsilon_{d}^{\prime} )_{d|q}$ be two probability vectors. Consider two modular arithmetic erasure channels $V_{{\boldsymbol{\varepsilon}}} : \mathcal{X} \to \mathcal{Y}$ and $V_{{\boldsymbol{\varepsilon}}^{\prime}} : \mathcal{X} \to \mathcal{Y}$ defined in [Definition \[def:V\]]{}, where we recall that $\mathcal{X} = \mathbb{Z}/q\mathbb{Z}$. By the construction of the output alphabet $\mathcal{Y}$, each output symbol $y \in \mathcal{Y}$ can be written as $y = z + d \mathbb{Z}$ for some $z \in \mathbb{Z}$ and some $d|q$. From this perspective, for convenience, we often write output symbols in $\mathcal{Y}$ as $z + d \mathbb{Z} \in \mathcal{Y}$ in this proof.
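For concreteness, the output distribution $PW$ and the backward channel $\overline{W}_{P}$ can be implemented directly; the following toy sketch (names ours) instantiates both notations for a binary erasure channel.

```python
def output_distribution(P, W, X, Y):
    """P W(y) = sum_x P(x) W(y | x)."""
    return {y: sum(P[x] * W[x][y] for x in X) for y in Y}

def backward_channel(P, W, X, Y):
    """bar W_P(x | y) = P(x) W(y | x) / P W(y), for outputs with P W(y) > 0."""
    PW = output_distribution(P, W, X, Y)
    return {y: {x: P[x] * W[x][y] / PW[y] for x in X}
            for y in Y if PW[y] > 0}

# Toy example: a binary erasure channel with erasure probability 0.5.
X, Y = [0, 1], ['0', '1', '?']
W = {0: {'0': 0.5, '1': 0.0, '?': 0.5},
     1: {'0': 0.0, '1': 0.5, '?': 0.5}}
P = {0: 0.5, 1: 0.5}  # uniform input, as used throughout the proof
bw = backward_channel(P, W, X, Y)

# The erasure output carries no information: the posterior stays uniform,
# whereas a non-erased output pins the input down exactly.
assert bw['?'] == {0: 0.5, 1: 0.5}
assert bw['0'] == {0: 1.0, 1: 0.0}
```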
If the input distribution $P$ is a uniform distribution on $\mathcal{X}$, i.e., if $P( x ) = 1/q$ for each $x \in \mathcal{X} = \mathbb{Z}/q\mathbb{Z}$, then the output distribution of $V_{{\boldsymbol{\varepsilon}}} : \mathcal{X} \to \mathcal{Y}$ is given by $$\begin{aligned} PV_{{\boldsymbol{\varepsilon}}}( z + d \mathbb{Z} ) \overset{\eqref{def:output}}{=} \sum_{x \in \mathbb{Z}/q\mathbb{Z}} \frac{ 1 }{ q } \, V_{{\boldsymbol{\varepsilon}}}(z + d \mathbb{Z} \mid x) \overset{\eqref{eq:V}}{=} \sum_{\substack{ x \in \mathbb{Z}/q\mathbb{Z} : \\ x \equiv z \ (\mathrm{mod} \, d) }} \frac{ \varepsilon_{d} }{ q } = \frac{ q }{ d } \frac{ \varepsilon_{d} }{ q } = \frac{ \varepsilon_{d} }{ d } \label{eq:output_V}\end{aligned}$$ for each $z + d\mathbb{Z} \in \mathcal{Y}$. In addition, the backward channel of $V_{{\boldsymbol{\varepsilon}}} : \mathcal{X} \to \mathcal{Y}$ with uniform input distribution $P$ is given by $$\begin{aligned} \overline{V_{{\boldsymbol{\varepsilon}}}}(x \mid z + d\mathbb{Z}) \overset{\eqref{def:backward}}{=} \frac{ 1 }{ q } \frac{ V_{{\boldsymbol{\varepsilon}}}(z + d\mathbb{Z} \mid x) }{ PV_{{\boldsymbol{\varepsilon}}}( z + d\mathbb{Z} ) } \overset{\eqref{eq:output_V}}{=} \frac{ 1 }{ q } \frac{ V_{{\boldsymbol{\varepsilon}}}(z + d\mathbb{Z} \mid x) }{ (\varepsilon_{d}/d) } \overset{\eqref{eq:V}}{=} \begin{dcases} \frac{ d }{ q } & \mathrm{if} \ x \equiv z \pmod{d} , \\ 0 & \mathrm{otherwise} , \end{dcases} \label{eq:backward_V}\end{aligned}$$ provided that $\varepsilon_{d} > 0$, for each $(x, z + d \mathbb{Z}) \in \mathcal{X} \times \mathcal{Y}$. 
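The closed forms $\varepsilon_{d}/d$ and $d/q$ just derived can be verified exactly with rational arithmetic. The sketch below assumes the channel law $V_{{\boldsymbol{\varepsilon}}}(z + d\mathbb{Z} \mid x) = \varepsilon_{d}$ if $x \equiv z \pmod{d}$ and $0$ otherwise, as used in the displays above; the code names are ours.

```python
from fractions import Fraction

def divisors(q):
    return [d for d in range(1, q + 1) if q % d == 0]

def V(eps, y, x):
    """Assumed channel law: emit the coset y = (z mod d, d) iff x lies in it."""
    z, d = y
    return eps[d] if x % d == z % d else Fraction(0)

q = 6
eps = {1: Fraction(1, 10), 2: Fraction(1, 5),
       3: Fraction(3, 10), 6: Fraction(2, 5)}
outputs = [(z, d) for d in divisors(q) for z in range(d)]

for (z, d) in outputs:
    # Output distribution under the uniform input: P V(z + dZ) = eps[d] / d.
    pv = sum(Fraction(1, q) * V(eps, (z, d), x) for x in range(q))
    assert pv == eps[d] / d
    # Backward channel: uniform (= d/q) on the coset z + dZ, zero elsewhere.
    for x in range(q):
        post = Fraction(1, q) * V(eps, (z, d), x) / pv
        assert post == (Fraction(d, q) if x % d == z else Fraction(0))
```

Each coset $z + d\mathbb{Z}$ contains exactly $q/d$ inputs, which is precisely why the factor $q/d$ appears in the display above.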
Similarly, for the other erasure channel $V_{{\boldsymbol{\varepsilon}}^{\prime}} : \mathcal{X} \to \mathcal{Y}$ with possibly different underlying probability vector ${\boldsymbol{\varepsilon}}^{\prime} = ( \varepsilon_{d}^{\prime} )_{d|q}$, it follows that $$\begin{aligned} PV_{{\boldsymbol{\varepsilon}}^{\prime}}( z + d \mathbb{Z} ) = \frac{ \varepsilon_{d}^{\prime} }{ d } \label{eq:output_V_prime}\end{aligned}$$ for each $z + d \mathbb{Z} \in \mathcal{Y}$; and $$\begin{aligned} \overline{V_{{\boldsymbol{\varepsilon}}^{\prime}}}(x \mid z + d \mathbb{Z}) = \begin{dcases} \frac{ d }{ q } & \mathrm{if} \ x \equiv z \pmod{d} , \\ 0 & \mathrm{otherwise} , \end{dcases} \label{eq:backward_V_prime}\end{aligned}$$ provided that $\varepsilon_{d}^{\prime} > 0$, for each $(x, z + d \mathbb{Z}) \in \mathcal{X} \times \mathcal{Y}$. Given a unit $\gamma \in \mathbb{Z}/q\mathbb{Z}$, consider the worse channel $V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}} : \mathcal{X} \to \mathcal{Y}^{2}$ and the better channel $V_{{\boldsymbol{\varepsilon}}} \varoast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}} : \mathcal{X} \to \mathcal{Y}^{2} \times \mathcal{X}$ defined by the minus and plus transforms, respectively. We first prove the assertion of [Theorem \[th:recursive\_V\]]{} for the worse channel.
### Proof for Worse Channel $V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}}$ The output distribution of $V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}} : \mathcal{X} \to \mathcal{Y}^{2}$ with uniform input distribution $P$ is given by $$\begin{aligned} P(V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}})(y_{1}, y_{2}) \, & \overset{\mathclap{\eqref{def:output}}}{=} \, \sum_{u_{1} \in \mathcal{X}} \frac{ 1 }{ q } (V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}})(y_{1}, y_{2} \mid u_{1}) \notag \\ & \overset{\mathclap{\eqref{def:minus_two}}}{=} \, \sum_{u_{1} \in \mathcal{X}} \frac{ 1 }{ q } \sum_{u_{2}^{\prime} \in \mathcal{X}} \frac{ 1 }{ q } V_{{\boldsymbol{\varepsilon}}}(y_{1} \mid u_{1} + \gamma u_{2}^{\prime}) \, V_{{\boldsymbol{\varepsilon}}^{\prime}}(y_{2} \mid u_{2}^{\prime}) \notag \\ & \overset{\mathclap{\text{(a)}}}{=} \bigg( \sum_{u_{1} \in \mathcal{X}} \frac{ 1 }{ q } V_{{\boldsymbol{\varepsilon}}}(y_{1} \mid u_{1}) \bigg) \bigg( \sum_{u_{2}^{\prime} \in \mathcal{X}} \frac{ 1 }{ q } V_{{\boldsymbol{\varepsilon}}^{\prime}}(y_{2} \mid u_{2}^{\prime}) \bigg) \notag \\ & \overset{\mathclap{\eqref{def:output}}}{=} \, PV_{{\boldsymbol{\varepsilon}}}( y_{1} ) \, PV_{{\boldsymbol{\varepsilon}}^{\prime}}( y_{2} ) \label{eq:output_Vminus}\end{aligned}$$ for each $y_{1}, y_{2} \in \mathcal{Y}$, where (a) follows from the fact that $a \mapsto a + \gamma b$ forms a bijection[^22] on $\mathbb{Z}/q\mathbb{Z}$ for each $b \in \mathbb{Z}/q\mathbb{Z}$. 
Moreover, the backward channel of $V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}} : \mathcal{X} \to \mathcal{Y}^{2}$ with uniform input distribution $P$ is given by $$\begin{aligned} \overline{(V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}})}(u_{1} \mid y_{1}, y_{2}) \, & \overset{\mathclap{\eqref{def:backward}}}{=} \, \frac{ 1 }{ q } \frac{ (V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}})(y_{1}, y_{2} \mid u_{1}) }{ P(V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}})(y_{1}, y_{2}) } \notag \\ & \overset{\mathclap{\eqref{eq:output_Vminus}}}{=} \, \frac{ 1 }{ q } \frac{ (V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}})(y_{1}, y_{2} \mid u_{1}) }{ PV_{{\boldsymbol{\varepsilon}}}( y_{1} ) \, PV_{{\boldsymbol{\varepsilon}}^{\prime}}( y_{2} ) } \notag \\ & \overset{\mathclap{\eqref{def:minus_two}}}{=} \, \sum_{u_{2}^{\prime} \in \mathcal{X}} \bigg( \frac{ 1 }{ q } \frac{ V_{{\boldsymbol{\varepsilon}}}(y_{1} \mid u_{1} + \gamma u_{2}^{\prime}) }{ PV_{{\boldsymbol{\varepsilon}}}( y_{1} ) } \bigg) \bigg( \frac{ 1 }{ q } \frac{ V_{{\boldsymbol{\varepsilon}}^{\prime}}(y_{2} \mid u_{2}^{\prime}) }{ PV_{{\boldsymbol{\varepsilon}}^{\prime}}( y_{2} ) } \bigg) \notag \\ & \overset{\mathclap{\eqref{def:backward}}}{=} \sum_{u_{2}^{\prime} \in \mathcal{X}} \overline{V_{{\boldsymbol{\varepsilon}}}}(u_{1} + \gamma u_{2}^{\prime} \mid y_{1}) \, \overline{V_{{\boldsymbol{\varepsilon}}^{\prime}}}(u_{2}^{\prime} \mid y_{2}) , \label{eq:backward_Vminus}\end{aligned}$$ provided that $PV_{{\boldsymbol{\varepsilon}}}( y_{1} ) \, PV_{{\boldsymbol{\varepsilon}}^{\prime}}( y_{2} ) > 0$, for each $(u_{1}, y_{1}, y_{2}) \in \mathcal{X} \times \mathcal{Y}^{2}$. 
It follows from the backward channels computed above that $$\begin{aligned} \overline{V_{{\boldsymbol{\varepsilon}}}}(u_{1} + \gamma u_{2}^{\prime} \mid z_{1} + d_{1} \mathbb{Z}) \, \overline{V_{{\boldsymbol{\varepsilon}}^{\prime}}}(u_{2}^{\prime} \mid z_{2} + d_{2} \mathbb{Z}) = \begin{dcases} \frac{ d_{1} d_{2} }{ q^{2} } & \mathrm{if} \ u_{1} + \gamma u_{2}^{\prime} \equiv z_{1} \pmod{ d_{1} } , \\ & \qquad\quad\ \; u_{2}^{\prime} \equiv z_{2} \pmod{d_{2}} , \\ 0 & \mathrm{otherwise} , \end{dcases} \label{eq:bV1_times_bV2}\end{aligned}$$ provided that $\varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} > 0$, for each $(u_{1}, u_{2}^{\prime}, z_{1} + d_{1} \mathbb{Z}, z_{2} + d_{2} \mathbb{Z}) \in \mathcal{X}^{2} \times \mathcal{Y}^{2}$. Note that the system of two congruences $$\begin{aligned} u_{1} + \gamma u_{2}^{\prime} & \equiv z_{1} \pmod{d_{1}} , \\ u_{2}^{\prime} & \equiv z_{2} \pmod{d_{2}}\end{aligned}$$ appearing above can be rewritten as $$\begin{aligned} u_{2}^{\prime} & \equiv \gamma^{-1}( z_{1} - u_{1} ) \pmod{d_{1}} , \label{equiv:system1} \\ u_{2}^{\prime} & \equiv z_{2} \pmod{d_{2}} ; \label{equiv:system2} \end{aligned}$$ and thus, it follows from [Lemma \[lem:CRT\]]{} that this system has a unique solution $u_{2}^{\prime} \in \mathbb{Z}/{\operatorname{lcm}}(d_{1}, d_{2})\mathbb{Z}$ if and only if $$\begin{aligned} \gamma^{-1}( z_{1} - u_{1} ) \equiv z_{2} \pmod{\gcd( d_{1}, d_{2})} , \label{eq:solution_condition}\end{aligned}$$ which is equivalent to $$\begin{aligned} u_{1} & \equiv z_{1} - \gamma z_{2} \pmod{ \gcd(d_{1}, d_{2}) } .\end{aligned}$$ Therefore, for every $(u_{1}, u_{2}^{\prime}, z_{1} + d_{1} \mathbb{Z}, z_{2} + d_{2} \mathbb{Z}) \in \mathcal{X}^{2} \times \mathcal{Y}^{2}$ satisfying $\varepsilon_{d_{1}} \varepsilon_{d_{2}}^{\prime} > 0$, there exists a representative $r \in (\gamma^{-1} (z_{1} - u_{1}) + d_{1} \mathbb{Z}) \cap (z_{2} + d_{2} \mathbb{Z})$ such that $$\begin{aligned} \overline{V_{{\boldsymbol{\varepsilon}}}}(u_{1} + \gamma u_{2}^{\prime} \mid z_{1} + d_{1}
\mathbb{Z}) \, \overline{V_{{\boldsymbol{\varepsilon}}^{\prime}}}(u_{2}^{\prime} \mid z_{2} + d_{2} \mathbb{Z}) = \begin{dcases} \frac{ d_{1} d_{2} }{ q^{2} } & \mathrm{if} \ u_{1} \equiv z_{1} - \gamma z_{2} \pmod{ \gcd(d_{1}, d_{2}) } , \\ & \quad u_{2}^{\prime} \equiv r \pmod{{\operatorname{lcm}}(d_{1}, d_{2})} , \\ 0 & \mathrm{otherwise} ; \end{dcases} \label{eq:product_bV1_bV2}\end{aligned}$$ and hence, we have $$\begin{aligned} \overline{(V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}})}(u_{1} \mid z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}) \, & \overset{\mathclap{\eqref{eq:backward_Vminus}}}{=} \, \sum_{u_{2}^{\prime} \in \mathbb{Z}/q\mathbb{Z}} \overline{V_{{\boldsymbol{\varepsilon}}}}(u_{1} + \gamma u_{2}^{\prime} \mid z_{1} + d_{1}\mathbb{Z}) \, \overline{V_{{\boldsymbol{\varepsilon}}^{\prime}}}(u_{2}^{\prime} \mid z_{2} + d_{2}\mathbb{Z}) \notag \\ & \overset{\mathclap{\eqref{eq:product_bV1_bV2}}}{=} \, \begin{dcases} \sum_{\substack{ u_{2}^{\prime} \in \mathbb{Z}/q\mathbb{Z} : \\ u_{2}^{\prime} \equiv r \ (\mathrm{mod} \, {\operatorname{lcm}}(d_{1}, d_{2})) }} \frac{ d_{1} d_{2} }{ q^{2} } & \mathrm{if} \ u_{1} \equiv z_{1} - \gamma z_{2} \pmod{ \gcd(d_{1}, d_{2}) } , \\ 0 & \mathrm{otherwise} \end{dcases} \notag \\ & = \begin{dcases} \frac{ q }{ {\operatorname{lcm}}(d_{1}, d_{2}) } \frac{ d_{1} d_{2} }{ q^{2} } & \mathrm{if} \ u_{1} \equiv z_{1} - \gamma z_{2} \pmod{ \gcd(d_{1}, d_{2}) } , \\ 0 & \mathrm{otherwise} \end{dcases} \notag \\ & = \begin{dcases} \frac{ \gcd(d_{1}, d_{2}) }{ q } & \mathrm{if} \ u_{1} \equiv z_{1} - \gamma z_{2} \pmod{ \gcd(d_{1}, d_{2}) } , \\ 0 & \mathrm{otherwise} \end{dcases} \label{eq:bV_minus}\end{aligned}$$ provided that $\varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} > 0$, for each $(u_{1}, z_{1} + d_{1} \mathbb{Z}, z_{2} + d_{2} \mathbb{Z}) \in \mathcal{X} \times \mathcal{Y}^{2}$. 
Therefore, we have $$\begin{aligned} (V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}})(z_{1} + d_{1} \mathbb{Z}, z_{2} + d_{2} \mathbb{Z} \mid u_{1}) \, & \overset{\mathclap{\eqref{def:backward}}}{=} \, q \, \overline{(V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}})}(u_{1} \mid z_{1} + d_{1} \mathbb{Z}, z_{2} + d_{2} \mathbb{Z}) \, P(V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}})(z_{1} + d_{1} \mathbb{Z}, z_{2} + d_{2} \mathbb{Z}) \notag \\ & \overset{\mathclap{\eqref{eq:output_Vminus}}}{=} \, q \, \overline{(V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}})}(u_{1} \mid z_{1} + d_{1} \mathbb{Z}, z_{2} + d_{2} \mathbb{Z}) \, PV_{{\boldsymbol{\varepsilon}}}( z_{1} + d_{1} \mathbb{Z} ) \, PV_{{\boldsymbol{\varepsilon}}^{\prime}}(z_{2} + d_{2} \mathbb{Z}) \notag \\ & \overset{\mathclap{\text{(a)}}}{=} \, q \, \overline{(V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}})}(u_{1} \mid z_{1} + d_{1} \mathbb{Z}, z_{2} + d_{2} \mathbb{Z}) \, \frac{ \varepsilon_{d_{1}} }{ d_{1} } \frac{ \varepsilon_{d_{2}}^{\prime} }{ d_{2} } \notag \\ & \overset{\mathclap{\eqref{eq:bV_minus}}}{=} \, \begin{dcases} \frac{ \varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} }{ {\operatorname{lcm}}(d_{1}, d_{2}) } & \mathrm{if} \ u_{1} \equiv z_{1} - \gamma z_{2} \pmod{ \gcd(d_{1}, d_{2}) } , \\ 0 & \mathrm{otherwise} \end{dcases} \label{eq:Vminus}\end{aligned}$$ for each $(u_{1}, z_{1} + d_{1} \mathbb{Z}, z_{2} + d_{2} \mathbb{Z}) \in \mathcal{X} \times \mathcal{Y}^{2}$, where (a) follows from the output distributions of $V_{{\boldsymbol{\varepsilon}}}$ and $V_{{\boldsymbol{\varepsilon}}^{\prime}}$ computed above.
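The closed form just obtained for the worse channel can be confirmed by brute force against its defining sum over $u_{2}^{\prime}$, again under the assumed channel law $V_{{\boldsymbol{\varepsilon}}}(z + d\mathbb{Z} \mid x) = \varepsilon_{d}$ iff $x \equiv z \pmod{d}$; all names in the sketch are ours.

```python
from fractions import Fraction
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def divisors(q):
    return [d for d in range(1, q + 1) if q % d == 0]

def V(eps, y, x):
    z, d = y
    return eps[d] if x % d == z % d else Fraction(0)

def worse(q, eps, eps_p, gamma, y1, y2, u1):
    """Defining sum of the minus transform: average over the auxiliary u2'."""
    return sum(V(eps, y1, (u1 + gamma * u2) % q) * V(eps_p, y2, u2)
               for u2 in range(q)) / q

q, gamma = 6, 5  # gamma = 5 is a unit of Z/6Z
eps   = {1: Fraction(1, 10), 2: Fraction(1, 5),
         3: Fraction(3, 10), 6: Fraction(2, 5)}
eps_p = {1: Fraction(2, 5), 2: Fraction(3, 10),
         3: Fraction(1, 5), 6: Fraction(1, 10)}
outputs = [(z, d) for d in divisors(q) for z in range(d)]

for (z1, d1) in outputs:
    for (z2, d2) in outputs:
        for u1 in range(q):
            lhs = worse(q, eps, eps_p, gamma, (z1, d1), (z2, d2), u1)
            # Closed form: eps[d1] eps'[d2] / lcm(d1, d2) on the congruence
            # class u1 = z1 - gamma * z2 (mod gcd(d1, d2)), zero elsewhere.
            if (u1 - (z1 - gamma * z2)) % gcd(d1, d2) == 0:
                assert lhs == eps[d1] * eps_p[d2] / lcm(d1, d2)
            else:
                assert lhs == 0
```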
Finally, to prove the equivalence relation of [Definition \[def:output\_equiv\]]{} between $V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}} : \mathcal{X} \to \mathcal{Y}^{2}$ and $V_{{\boldsymbol{\varepsilon}} \boxast {\boldsymbol{\varepsilon}}^{\prime}} : \mathcal{X} \to \mathcal{Y}$ with underlying probability vector ${\boldsymbol{\varepsilon}} \boxast {\boldsymbol{\varepsilon}}^{\prime} = ( \varepsilon_{d}^{\boxast} )_{d|q}$ given in , it suffices to show the existence of intermediate channels $Q_{1} : \mathcal{Y}^{2} \to \mathcal{Y}$ and $Q_{2} : \mathcal{Y} \to \mathcal{Y}^{2}$ between $V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}} : \mathcal{X} \to \mathcal{Y}^{2}$ and $V_{{\boldsymbol{\varepsilon}} \boxast {\boldsymbol{\varepsilon}}^{\prime}} : \mathcal{X} \to \mathcal{Y}$. If we define the channel $Q_{1} : \mathcal{Y}^{2} \to \mathcal{Y}$ by $$\begin{aligned} Q_{1}(z + d \mathbb{Z} \mid z_{1} + d_{1} \mathbb{Z}, z_{2} + d_{2} \mathbb{Z}) = \begin{cases} 1 & \mathrm{if} \ \gcd(d_{1}, d_{2}) = d \ \mathrm{and} \ z_{1} - \gamma z_{2} \equiv z \pmod{d} , \\ 0 & \mathrm{otherwise} \end{cases} \label{def:Q1}\end{aligned}$$ for each $(z + d \mathbb{Z}, z_{1} + d_{1} \mathbb{Z}, z_{2} + d_{2} \mathbb{Z}) \in \mathcal{Y}^{3}$, then a direct calculation shows $$\begin{aligned} \sum_{y_{1}, y_{2} \in \mathcal{Y}} (V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}})(y_{1}, y_{2} \mid u_{1}) \, Q_{1}(z + d \mathbb{Z} \mid y_{1}, y_{2}) \, & = \sum_{d_{1}|q} \sum_{y_{1} \in \mathbb{Z}/d_{1}\mathbb{Z}} \sum_{d_{2}|q} \sum_{y_{2} \in \mathbb{Z}/d_{2}\mathbb{Z}} (V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}})(y_{1}, y_{2} \mid u_{1}) \, Q_{1}(z + d \mathbb{Z} \mid y_{1}, y_{2}) \notag \\ & \overset{\mathclap{\eqref{def:Q1}}}{=} \, \sum_{\substack{ d_{1}|q, d_{2}|q : \\ \gcd( d_{1}, d_{2} ) = d }} \sum_{\substack{ 
y_{1} \in \mathbb{Z}/d_{1}\mathbb{Z} , \\ y_{2} \in \mathbb{Z}/d_{2}\mathbb{Z} : \\ y_{1} - \gamma y_{2} \equiv z \pmod{d} }} (V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}})(y_{1}, y_{2} \mid u_{1}) \notag \\ & \overset{\mathclap{\eqref{eq:Vminus}}}{=} \, \begin{dcases} \sum_{\substack{ d_{1}|q, d_{2}|q : \\ \gcd( d_{1}, d_{2} ) = d }} \sum_{\substack{ y_{1} \in \mathbb{Z}/d_{1}\mathbb{Z} , \\ y_{2} \in \mathbb{Z}/d_{2}\mathbb{Z} : \\ y_{1} - \gamma y_{2} \equiv z \pmod{d} }} \frac{ \varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} }{ {\operatorname{lcm}}(d_{1}, d_{2}) } & \mathrm{if} \ u_{1} \equiv z \pmod{ d } , \notag \\ 0 & \mathrm{otherwise} \end{dcases} \notag \\ & = \, \begin{dcases} \sum_{\substack{ d_{1}|q, d_{2}|q : \\ \gcd( d_{1}, d_{2} ) = d }} \frac{ d_{1} d_{2} }{ \gcd(d_{1}, d_{2}) } \frac{ \varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} }{ {\operatorname{lcm}}(d_{1}, d_{2}) } & \mathrm{if} \ u_{1} \equiv z \pmod{ d } , \notag \\ 0 & \mathrm{otherwise} \end{dcases} \notag \\ & = \, \begin{dcases} \sum_{\substack{ d_{1}|q, d_{2}|q : \\ \gcd( d_{1}, d_{2} ) = d }} \varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} & \mathrm{if} \ u_{1} \equiv z \pmod{ d } , \notag \\ 0 & \mathrm{otherwise} \end{dcases} \notag \\ & \overset{\mathclap{\eqref{def:eps_minus}}}{=} \, \begin{dcases} \varepsilon_{d}^{\boxast}( {\boldsymbol{\varepsilon}}, {\boldsymbol{\varepsilon}}^{\prime} ) & \mathrm{if} \ u_{1} \equiv z \pmod{ d } , \notag \\ 0 & \mathrm{otherwise} \end{dcases} \notag \\ & \overset{\mathclap{\eqref{eq:V}}}{=} \, V_{{\boldsymbol{\varepsilon}} \boxast {\boldsymbol{\varepsilon}}^{\prime}}(z + d \mathbb{Z} \mid u_{1}) \label{eq:equivalence_condition_minus1}\end{aligned}$$ for each $(u_{1}, z + d \mathbb{Z}) \in \mathcal{X} \times \mathcal{Y}$. 
Similarly, if we define the DMC $Q_{2} : \mathcal{Y} \to \mathcal{Y}^{2}$ by $$\begin{aligned} Q_{2}(z_{1} + d_{1} \mathbb{Z}, z_{2} + d_{2} \mathbb{Z} \mid z + d \mathbb{Z}) = \begin{dcases} \frac{ \varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} }{ \varepsilon_{d}^{\boxast} \, {\operatorname{lcm}}(d_{1}, d_{2}) } & \mathrm{if} \ \gcd( d_{1}, d_{2} ) = d \ \mathrm{and} \ z_{1} - \gamma z_{2} \equiv z \pmod{d} , \\ 0 & \mathrm{otherwise} \end{dcases} \label{def:Q2}\end{aligned}$$ for each $(z + d \mathbb{Z}, z_{1} + d_{1} \mathbb{Z}, z_{2} + d_{2} \mathbb{Z}) \in \mathcal{Y}^{3}$, then a simple calculation yields $$\begin{aligned} \sum_{y \in \mathcal{Y}} V_{{\boldsymbol{\varepsilon}} \boxast {\boldsymbol{\varepsilon}}^{\prime}}(y \mid u_{1}) \, Q_{2}(z_{1} + d_{1} \mathbb{Z}, z_{2} + d_{2} \mathbb{Z} \mid y) \, & = \sum_{d|q} \sum_{y \in \mathbb{Z}/d\mathbb{Z}} V_{{\boldsymbol{\varepsilon}} \boxast {\boldsymbol{\varepsilon}}^{\prime}}(y \mid u_{1}) \, Q_{2}(z_{1} + d_{1} \mathbb{Z}, z_{2} + d_{2} \mathbb{Z} \mid y) \notag \\ & \overset{\mathclap{\eqref{def:Q2}}}{=} \, \sum_{\substack{ d|q : \\ d = \gcd(d_{1}, d_{2}) }} \sum_{\substack{ y \in \mathbb{Z}/d\mathbb{Z} : \\ y \equiv z_{1} - \gamma z_{2} \ (\mathrm{mod} \, d) }} V_{{\boldsymbol{\varepsilon}} \boxast {\boldsymbol{\varepsilon}}^{\prime}}(y \mid u_{1}) \, \frac{ \varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} }{ \varepsilon_{d}^{\boxast} \, {\operatorname{lcm}}(d_{1}, d_{2}) } \notag \\ & = V_{{\boldsymbol{\varepsilon}} \boxast {\boldsymbol{\varepsilon}}^{\prime}}((z_{1} - \gamma z_{2}) + \gcd(d_{1}, d_{2}) \mathbb{Z} \mid u_{1}) \, \frac{ \varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} }{ \varepsilon_{\gcd(d_{1}, d_{2})}^{\boxast} \, {\operatorname{lcm}}(d_{1}, d_{2}) } \notag \\ & \overset{\mathclap{\eqref{eq:V}}}{=} \, \begin{dcases} \frac{ \varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} }{ {\operatorname{lcm}}(d_{1}, d_{2}) } & \mathrm{if} \ u_{1} \equiv z_{1} - \gamma z_{2} \pmod{ 
\gcd(d_{1}, d_{2}) } , \\ 0 & \mathrm{otherwise} \end{dcases} \notag \\ & \overset{\mathclap{\eqref{eq:Vminus}}}{=} \, (V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}})(z_{1} + d_{1} \mathbb{Z}, z_{2} + d_{2} \mathbb{Z} \mid u_{1}) \label{eq:equivalence_condition_minus2}\end{aligned}$$ for each $(u_{1}, z_{1} + d_{1} \mathbb{Z}, z_{2} + d_{2} \mathbb{Z}) \in \mathcal{X} \times \mathcal{Y}^{2}$. Therefore, it follows from the two identities above that $V_{{\boldsymbol{\varepsilon}}} \boxast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}} : \mathcal{X} \to \mathcal{Y}^{2}$ is equivalent in the sense of [Definition \[def:output\_equiv\]]{} to $V_{{\boldsymbol{\varepsilon}} \boxast {\boldsymbol{\varepsilon}}^{\prime}} : \mathcal{X} \to \mathcal{Y}$. This completes the proof of the first assertion of [Theorem \[th:recursive\_V\]]{}. ### Proof for Better Channel $V_{{\boldsymbol{\varepsilon}}} \varoast_{\gamma} V_{{\boldsymbol{\varepsilon}}^{\prime}}$ We next prove the assertion of [Theorem \[th:recursive\_V\]]{} for the synthetic channel $V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}} : \mathcal{X} \to \mathcal{Y}^{2} \times \mathcal{X}$ created by the plus transform.
The output distribution of $V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}} : \mathcal{X} \to \mathcal{Y}^{2} \times \mathcal{X}$ with uniform input distribution $P$ is given by $$\begin{aligned} P(V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}})( y_{1}, y_{2}, u_{1} ) \, & \overset{\mathclap{\eqref{def:output}}}{=} \, \sum_{u_{2}^{\prime} \in \mathcal{X}} \frac{ 1 }{ q } (V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}})( y_{1}, y_{2}, u_{1} \mid u_{2}^{\prime} ) \notag \\ & \overset{\mathclap{\eqref{def:plus_two}}}{=} \sum_{u_{2}^{\prime} \in \mathcal{X}} \frac{ 1 }{ q^{2} } V_{{\boldsymbol{\varepsilon}}}( y_{1} \mid u_{1} + \gamma u_{2}^{\prime} ) \, V_{{\boldsymbol{\varepsilon}}^{\prime}}( y_{2} \mid u_{2}^{\prime} ) \notag \\ & = \, PV_{{\boldsymbol{\varepsilon}}}( y_{1} ) \, PV_{{\boldsymbol{\varepsilon}}^{\prime}}( y_{2} ) \sum_{u_{2}^{\prime} \in \mathcal{X}} \Bigg( \frac{ 1 }{ q } \frac{ V_{{\boldsymbol{\varepsilon}}}( y_{1} \mid u_{1} + \gamma u_{2}^{\prime} ) }{ PV_{{\boldsymbol{\varepsilon}}}( y_{1} ) } \Bigg) \Bigg( \frac{ 1 }{ q } \frac{ V_{{\boldsymbol{\varepsilon}}^{\prime}}( y_{2} \mid u_{2}^{\prime} ) }{ PV_{{\boldsymbol{\varepsilon}}^{\prime}}( y_{2} ) } \Bigg) \notag \\ & \overset{\mathclap{\eqref{def:backward}}}{=} \, PV_{{\boldsymbol{\varepsilon}}}( y_{1} ) \, PV_{{\boldsymbol{\varepsilon}}^{\prime}}( y_{2} ) \sum_{u_{2}^{\prime} \in \mathcal{X}} \overline{V_{{\boldsymbol{\varepsilon}}}}( y_{1} \mid u_{1} + \gamma u_{2}^{\prime} ) \, \overline{V_{{\boldsymbol{\varepsilon}}^{\prime}}}( y_{2} \mid u_{2}^{\prime} ) \notag \\ & \overset{\mathclap{\eqref{eq:backward_Vminus}}}{=} \, PV_{{\boldsymbol{\varepsilon}}}( y_{1} ) \, PV_{{\boldsymbol{\varepsilon}}^{\prime}}( y_{2} ) \, \overline{(V_{{\boldsymbol{\varepsilon}}} \boxast V_{{\boldsymbol{\varepsilon}}^{\prime}})}(u_{1} \mid y_{1}, y_{2}) , \label{eq:output_Vplus}\end{aligned}$$ provided that 
$PV_{{\boldsymbol{\varepsilon}}}( y_{1} ) \, PV_{{\boldsymbol{\varepsilon}}^{\prime}}( y_{2} ) > 0$, for each $(u_{1}, y_{1}, y_{2}) \in \mathcal{X} \times \mathcal{Y}^{2}$. Moreover, the backward channel of $V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}} : \mathcal{X} \to \mathcal{Y}^{2} \times \mathcal{X}$ with uniform input distribution $P$ is given by $$\begin{aligned} \overline{(V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}})}(u_{2} \mid y_{1}, y_{2}, u_{1}) \, & \overset{\mathclap{\eqref{def:backward}}}{=} \, \frac{ 1 }{ q } \frac{ (V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}})(y_{1}, y_{2}, u_{1} \mid u_{2}) }{ P(V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}})( y_{1}, y_{2}, u_{1} ) } \notag \\ & \overset{\mathclap{\eqref{eq:output_Vplus}}}{=} \, \frac{ 1 }{ q } \frac{ (V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}})(y_{1}, y_{2}, u_{1} \mid u_{2}) }{ PV_{{\boldsymbol{\varepsilon}}}( y_{1} ) \, PV_{{\boldsymbol{\varepsilon}}^{\prime}}( y_{2} ) \, \overline{(V_{{\boldsymbol{\varepsilon}}} \boxast V_{{\boldsymbol{\varepsilon}}^{\prime}})}(u_{1} \mid y_{1}, y_{2}) } \notag \\ & \overset{\mathclap{\eqref{def:plus_two}}}{=} \, \frac{ 1 }{ q^{2} } \frac{ V_{{\boldsymbol{\varepsilon}}}( y_{1} \mid u_{1} + \gamma u_{2} ) \, V_{{\boldsymbol{\varepsilon}}^{\prime}}( y_{2} \mid u_{2} ) }{ PV_{{\boldsymbol{\varepsilon}}}( y_{1} ) \, PV_{{\boldsymbol{\varepsilon}}^{\prime}}( y_{2} ) \, \overline{(V_{{\boldsymbol{\varepsilon}}} \boxast V_{{\boldsymbol{\varepsilon}}^{\prime}})}(u_{1} \mid y_{1}, y_{2}) } \notag \\ & \overset{\mathclap{\eqref{def:backward}}}{=} \, \frac{ \overline{V_{{\boldsymbol{\varepsilon}}}}( u_{1} + \gamma u_{2} \mid y_{1} ) \, \overline{V_{{\boldsymbol{\varepsilon}}^{\prime}}}( u_{2} \mid y_{2} ) }{ \overline{(V_{{\boldsymbol{\varepsilon}}} \boxast 
V_{{\boldsymbol{\varepsilon}}^{\prime}})}(u_{1} \mid y_{1}, y_{2}) } , \label{eq:backward_Vplus}\end{aligned}$$ provided that $PV_{{\boldsymbol{\varepsilon}}}( y_{1} ) \, PV_{{\boldsymbol{\varepsilon}}^{\prime}}( y_{2} ) \, \overline{(V_{{\boldsymbol{\varepsilon}}} \boxast V_{{\boldsymbol{\varepsilon}}^{\prime}})}(u_{1} \mid y_{1}, y_{2}) > 0$, for each $(u_{1}, u_{2}, y_{1}, y_{2}) \in \mathcal{X}^{2} \times \mathcal{Y}^{2}$, where we note that $$\begin{aligned} PV_{{\boldsymbol{\varepsilon}}}( z_{1} + d_{1}\mathbb{Z} ) > 0 & \overset{\eqref{eq:output_V}}{\iff} \varepsilon_{d_{1}} > 0, \label{eq:condition1} \\ PV_{{\boldsymbol{\varepsilon}}^{\prime}}( z_{2} + d_{2}\mathbb{Z} ) > 0 & \overset{\eqref{eq:output_V_prime}}{\iff} \varepsilon_{d_{2}}^{\prime} > 0, \label{eq:condition2} \\ \overline{(V_{{\boldsymbol{\varepsilon}}} \boxast V_{{\boldsymbol{\varepsilon}}^{\prime}})}(u_{1} \mid z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}) > 0 & \overset{\eqref{eq:bV_minus}}{\iff} z_{1} - \gamma z_{2} \equiv u_{1} \pmod{\gcd(d_{1}, d_{2})} . 
\label{eq:condition3} \end{aligned}$$ With attention to the above conditions, for every $(u_{1}, u_{2}, z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}) \in \mathcal{X}^{2} \times \mathcal{Y}^{2}$ satisfying $\varepsilon_{d_{1}} \varepsilon_{d_{2}}^{\prime} > 0$ and $z_{1} - \gamma z_{2} \equiv u_{1} \pmod{\gcd(d_{1}, d_{2})}$, we observe that $$\begin{aligned} \overline{(V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}})}(u_{2} \mid z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}, u_{1} ) \, & \overset{\mathclap{\eqref{eq:backward_Vplus}}}{=} \, \frac{ \overline{V_{{\boldsymbol{\varepsilon}}}}( u_{1} + \gamma u_{2} \mid z_{1} + d_{1}\mathbb{Z} ) \, \overline{V_{{\boldsymbol{\varepsilon}}^{\prime}}}( u_{2} \mid z_{2} + d_{2}\mathbb{Z} ) }{ \overline{(V_{{\boldsymbol{\varepsilon}}} \boxast V_{{\boldsymbol{\varepsilon}}^{\prime}})}(u_{1} \mid z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}) } \notag \\ & \overset{\mathclap{\eqref{eq:bV_minus}}}{=} \, \frac{ q }{ \gcd(d_{1}, d_{2}) } \, \overline{V_{{\boldsymbol{\varepsilon}}}}( u_{1} + \gamma u_{2} \mid z_{1} + d_{1}\mathbb{Z} ) \, \overline{V_{{\boldsymbol{\varepsilon}}^{\prime}}}( u_{2} \mid z_{2} + d_{2}\mathbb{Z} ) \notag \\ & \overset{\mathclap{\eqref{eq:product_bV1_bV2}}}{=} \begin{dcases} \frac{ {\operatorname{lcm}}(d_{1}, d_{2}) }{ q } & \mathrm{if} \ u_{1} + \gamma u_{2} \equiv z_{1} \pmod{d_{1}} , \\ & \qquad\quad\ \; u_{2} \equiv z_{2} \pmod{d_{2}} , \\ 0 & \mathrm{otherwise} \end{dcases} \notag \\ & \overset{\mathclap{\text{(a)}}}{=} \begin{dcases} \frac{ {\operatorname{lcm}}(d_{1}, d_{2}) }{ q } & \mathrm{if} \ u_{2} \equiv r \pmod{{\operatorname{lcm}}(d_{1}, d_{2})} , \\ 0 & \mathrm{otherwise} , \end{dcases} \label{eq:backward_Vplus_complete}\end{aligned}$$ where (a) follows from [Lemma \[lem:CRT\]]{} with some solution $r \in (\gamma^{-1} (z_{1} - u_{1}) + d_{1} \mathbb{Z}) \cap (z_{2} + d_{2} \mathbb{Z})$ of the system of two congruences $$\begin{aligned} u_{1} + 
\gamma u_{2} & \equiv z_{1} \pmod{d_{1}} , \\ u_{2} & \equiv z_{2} \pmod{d_{2}}\end{aligned}$$ with respect to $u_{2} \in \mathbb{Z}/q\mathbb{Z}$ for given $(u_{1}, z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}) \in \mathcal{X} \times \mathcal{Y}^{2}$. Therefore, we have $$\begin{aligned} & (V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}})(z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}, u_{1} \mid u_{2}) \notag \\ & \quad \overset{\mathclap{\eqref{def:backward}}}{=} \, q \, P(V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}})( z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}, u_{1} ) \, \overline{(V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}})}(u_{2} \mid z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}, u_{1} ) \notag \\ & \quad \overset{\mathclap{\eqref{eq:output_Vplus}}}{=} \, q \, PV_{{\boldsymbol{\varepsilon}}}( z_{1} + d_{1}\mathbb{Z} ) \, PV_{{\boldsymbol{\varepsilon}}^{\prime}}( z_{2} + d_{2}\mathbb{Z} ) \, \overline{(V_{{\boldsymbol{\varepsilon}}} \boxast V_{{\boldsymbol{\varepsilon}}^{\prime}})}(u_{1} \mid z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}) \, \overline{(V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}})}(u_{2} \mid z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}, u_{1} ) \notag \\ & \quad \overset{\mathclap{\text{(a)}}}{=} \, q \, \frac{ \varepsilon_{d_{1}} }{ d_{1} } \frac{ \varepsilon_{d_{2}}^{\prime} }{ d_{2} } \, \overline{(V_{{\boldsymbol{\varepsilon}}} \boxast V_{{\boldsymbol{\varepsilon}}^{\prime}})}(u_{1} \mid z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}) \, \overline{(V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}})}(u_{2} \mid z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}, u_{1} ) \notag \\ & \quad \overset{\mathclap{\eqref{eq:bV_minus}}}{=} \, \begin{dcases} \varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} \, \frac{ q }{ d_{1} d_{2} } \frac{ \gcd( d_{1}, 
d_{2} ) }{ q } \, \overline{(V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}})}(u_{2} \mid z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}, u_{1} ) & \mathrm{if} \ z_{1} - \gamma z_{2} \equiv u_{1} \pmod{ \gcd(d_{1}, d_{2}) } , \\ 0 & \mathrm{otherwise} \end{dcases} \notag \\ & \quad = \, \begin{dcases} \frac{ \varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} }{ {\operatorname{lcm}}(d_{1}, d_{2}) } \, \overline{(V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}})}(u_{2} \mid z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}, u_{1} ) & \mathrm{if} \ z_{1} - \gamma z_{2} \equiv u_{1} \pmod{ \gcd(d_{1}, d_{2}) } , \\ 0 & \mathrm{otherwise} \end{dcases} \notag \\ & \quad \overset{\mathclap{\eqref{eq:backward_Vplus_complete}}}{=} \, \begin{dcases} \frac{ \varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} }{ {\operatorname{lcm}}(d_{1}, d_{2}) } \frac{ {\operatorname{lcm}}(d_{1}, d_{2}) }{ q } & \mathrm{if} \ z_{1} - \gamma z_{2} \equiv u_{1} \pmod{ \gcd(d_{1}, d_{2}) } , \\ & \qquad \quad \ \, u_{2} \equiv r \pmod{ {\operatorname{lcm}}(d_{1}, d_{2}) } \\ 0 & \mathrm{otherwise} \end{dcases} \notag \\ & \quad = \, \begin{dcases} \frac{ \varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} }{ q } & \mathrm{if} \ z_{1} - \gamma z_{2} \equiv u_{1} \pmod{ \gcd(d_{1}, d_{2}) } , \\ & \qquad \quad \ \, u_{2} \equiv r \pmod{ {\operatorname{lcm}}(d_{1}, d_{2}) } \\ 0 & \mathrm{otherwise} \end{dcases} \label{eq:Vplus}\end{aligned}$$ for each $(u_{1}, u_{2}, z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}) \in \mathcal{X}^{2} \times \mathcal{Y}^{2}$ satisfying the right sides of the conditions –, where (a) follows from and . 
Note that by the definition of , we readily see that $(V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}})(z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}, u_{1} \mid u_{2}) = 0$ for every $(u_{1}, u_{2}, z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}) \in \mathcal{X}^{2} \times \mathcal{Y}^{2}$ satisfying $\varepsilon_{d_{1}} \varepsilon_{d_{2}}^{\prime} = 0$. Moreover, it follows from [Lemma \[lem:CRT\]]{} that $(V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}})(z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}, u_{1} \mid u_{2}) = 0$ for every $(u_{1}, u_{2}, z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}) \in \mathcal{X}^{2} \times \mathcal{Y}^{2}$ in which $z_{1} - \gamma z_{2} \equiv u_{1} \pmod{ \gcd(d_{1}, d_{2}) }$ does not hold. Hence, Equation  holds for every $(u_{1}, u_{2}, z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}) \in \mathcal{X}^{2} \times \mathcal{Y}^{2}$. Finally, to prove the equivalence relation of [Definition \[def:output\_equiv\]]{} between $V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}} : \mathcal{X} \to \mathcal{Y}^{2} \times \mathcal{X}$ and $V_{{\boldsymbol{\varepsilon}} \varoast {\boldsymbol{\varepsilon}}^{\prime}} : \mathcal{X} \to \mathcal{Y}$, it suffices to show the existence of intermediate channels $Q_{3} : \mathcal{Y}^{2} \times \mathcal{X} \to \mathcal{Y}$ and $Q_{4} : \mathcal{Y} \to \mathcal{Y}^{2} \times \mathcal{X}$ between $V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}} : \mathcal{X} \to \mathcal{Y}^{2} \times \mathcal{X}$ and $V_{{\boldsymbol{\varepsilon}} \varoast {\boldsymbol{\varepsilon}}^{\prime}} : \mathcal{X} \to \mathcal{Y}$. 
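As an aside, the role of [Lemma \[lem:CRT\]]{} here is easy to check by brute force: the two congruences in $u_{2}$ are simultaneously solvable if and only if $z_{1} - \gamma z_{2} \equiv u_{1} \pmod{\gcd(d_{1}, d_{2})}$, and in that case the solution set is exactly one residue class modulo ${\operatorname{lcm}}(d_{1}, d_{2})$. The following Python sketch is illustrative only (it is not part of the proof); it assumes $\gamma$ is coprime to $q$, with the concrete values $q = 12$ and $\gamma = 5$ chosen purely for demonstration.

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def divisors(q):
    return [d for d in range(1, q + 1) if q % d == 0]

def solutions(q, gamma, u1, z1, d1, z2, d2):
    """All u2 in Z/qZ with u1 + gamma*u2 = z1 (mod d1) and u2 = z2 (mod d2)."""
    return [u2 for u2 in range(q)
            if (u1 + gamma * u2 - z1) % d1 == 0 and (u2 - z2) % d2 == 0]

def check_crt(q, gamma):
    for d1 in divisors(q):
        for d2 in divisors(q):
            for z1 in range(d1):
                for z2 in range(d2):
                    for u1 in range(q):
                        sols = solutions(q, gamma, u1, z1, d1, z2, d2)
                        compatible = (z1 - gamma * z2 - u1) % gcd(d1, d2) == 0
                        # solvable iff the compatibility condition holds
                        assert (len(sols) > 0) == compatible
                        if sols:
                            # the solutions form one residue class mod lcm(d1, d2)
                            r = sols[0]
                            assert len(sols) == q // lcm(d1, d2)
                            assert all((s - r) % lcm(d1, d2) == 0 for s in sols)
    return True
```

Running `check_crt(12, 5)` exhausts all $(u_{1}, z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z})$ and confirms both claims.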
If we define the channel $Q_{3} : \mathcal{Y}^{2} \times \mathcal{X} \to \mathcal{Y}$ by $$\begin{aligned} Q_{3}(z + d\mathbb{Z} \mid z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}, u_{1}) = \begin{cases} 1 & \mathrm{if} \ {\operatorname{lcm}}(d_{1}, d_{2}) = d \ \mathrm{and} \ z \equiv r \pmod{ d } , \\ & \qquad \qquad \ \; \qquad z_{1} - \gamma z_{2} \equiv u_{1} \pmod{ \gcd(d_{1}, d_{2}) } , \\ 0 & \mathrm{otherwise} \end{cases} \label{def:Q3}\end{aligned}$$ for each $(u_{1}, z + d\mathbb{Z}, z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}) \in \mathcal{X} \times \mathcal{Y}^{3}$, then a direct calculation shows $$\begin{aligned} & \sum_{y_{1} \in \mathcal{Y}} \sum_{y_{2} \in \mathcal{Y}} \sum_{u_{1} \in \mathcal{X}} (V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}})(y_{1}, y_{2}, u_{1} \mid u_{2}) \, Q_{3}(z + d\mathbb{Z} \mid y_{1}, y_{2}, u_{1}) \notag \\ & \qquad = \sum_{d_{1}|q} \sum_{y_{1} \in \mathbb{Z}/d_{1}\mathbb{Z}} \sum_{d_{2}|q} \sum_{y_{2} \in \mathbb{Z}/d_{2}\mathbb{Z}} \sum_{u_{1} \in \mathbb{Z}/q\mathbb{Z}} (V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}})(y_{1}, y_{2}, u_{1} \mid u_{2}) \, Q_{3}(z + d\mathbb{Z} \mid y_{1}, y_{2}, u_{1}) \notag \\ & \qquad \overset{\mathclap{\eqref{def:Q3}}}{=} \sum_{\substack{ d_{1}|q, d_{2}|q : \\ {\operatorname{lcm}}(d_{1}, d_{2}) = d }} \sum_{\substack{ u_{1} \in \mathbb{Z}/q\mathbb{Z}, \\ y_{1} \in \mathbb{Z}/d_{1}\mathbb{Z} , \\ y_{2} \in \mathbb{Z}/d_{2}\mathbb{Z} : \\ y_{1} - \gamma y_{2} \equiv u_{1} \ (\mathrm{mod} \, \gcd(d_{1}, d_{2})) }} (V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}})(y_{1}, y_{2}, u_{1} \mid u_{2}) \, \mathbbm{1}[ z \equiv r \pmod{ d } ] \notag \\ & \qquad \overset{\mathclap{\eqref{eq:Vplus}}}{=} \, \begin{dcases} \sum_{\substack{ d_{1}|q, d_{2}|q : \\ {\operatorname{lcm}}(d_{1}, d_{2}) = d }} \frac{ q }{ {\operatorname{lcm}}(d_{1}, d_{2}) } \frac{ d_{1} d_{2} }{ \gcd(d_{1}, d_{2}) } 
\frac{ \varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} }{ q } & \mathrm{if} \ u_{2} \equiv z \pmod{d} , \\ 0 & \mathrm{otherwise} \end{dcases} \notag \\ & \qquad = \, \begin{dcases} \sum_{\substack{ d_{1}|q, d_{2}|q : \\ {\operatorname{lcm}}(d_{1}, d_{2}) = d }} \varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} & \mathrm{if} \ u_{2} \equiv z \pmod{d} , \\ 0 & \mathrm{otherwise} \end{dcases} \notag \\ & \qquad \overset{\mathclap{\eqref{def:eps_plus}}}{=} \, \begin{cases} \varepsilon_{d}^{\varoast}({\boldsymbol{\varepsilon}}, {\boldsymbol{\varepsilon}}^{\prime}) & \mathrm{if} \ u_{2} \equiv z \pmod{d} , \\ 0 & \mathrm{otherwise} \end{cases} \notag \\ & \qquad \overset{\mathclap{\eqref{eq:V}}}{=} \, V_{{\boldsymbol{\varepsilon}} \varoast {\boldsymbol{\varepsilon}}^{\prime}}(z + d\mathbb{Z} \mid u_{2}) \label{eq:equivalence_condition_plus1}\end{aligned}$$ for each $(u_{2}, z + d\mathbb{Z}) \in \mathcal{X} \times \mathcal{Y}$, where $$\begin{aligned} \mathbbm{1}[ A ] \coloneqq \begin{cases} 1 & \text{if $A$ is true} , \\ 0 & \text{if $A$ is false} \end{cases}\end{aligned}$$ denotes the indicator function of a condition $A$. 
Similarly, if we define the channel $Q_{4} : \mathcal{Y} \to \mathcal{Y}^{2} \times \mathcal{X}$ by $$\begin{aligned} Q_{4}(z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}, u_{1} \mid z + d\mathbb{Z}) = \begin{dcases} \frac{ \varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} }{ q \, \varepsilon_{d}^{\varoast}( {\boldsymbol{\varepsilon}} , {\boldsymbol{\varepsilon}}^{\prime} ) } & \mathrm{if} \ {\operatorname{lcm}}(d_{1}, d_{2}) = d \ \mathrm{and} \ z \equiv r \pmod{ d } , \\ & \qquad \qquad \ \; \qquad z_{1} - \gamma z_{2} \equiv u_{1} \pmod{ \gcd(d_{1}, d_{2}) } , \\ 0 & \mathrm{otherwise} \end{dcases} \label{def:Q4}\end{aligned}$$ for each $(u_{1}, z + d\mathbb{Z}, z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}) \in \mathcal{X} \times \mathcal{Y}^{3}$, then a simple calculation yields $$\begin{aligned} & \sum_{y \in \mathcal{Y}} V_{{\boldsymbol{\varepsilon}} \varoast {\boldsymbol{\varepsilon}}^{\prime}}(y \mid u_{2}) \, Q_{4}(z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}, u_{1} \mid y) \notag \\ & \qquad = \sum_{d|q} \sum_{y \in \mathbb{Z}/d\mathbb{Z}} V_{{\boldsymbol{\varepsilon}} \varoast {\boldsymbol{\varepsilon}}^{\prime}}(y \mid u_{2}) \, Q_{4}(z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}, u_{1} \mid y) \notag \\ & \qquad \overset{\mathclap{\eqref{def:Q4}}}{=} \sum_{\substack{ d|q : \\ d = {\operatorname{lcm}}(d_{1}, d_{2}) }} \sum_{\substack{ y \in \mathbb{Z}/d\mathbb{Z} : \\ y \equiv r \ (\mathrm{mod} \, d) }} V_{{\boldsymbol{\varepsilon}} \varoast {\boldsymbol{\varepsilon}}^{\prime}}(y \mid u_{2}) \, \frac{ \varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} }{ q \, \varepsilon_{d}^{\varoast}( {\boldsymbol{\varepsilon}} , {\boldsymbol{\varepsilon}}^{\prime} ) } \, \mathbbm{1} [ z_{1} - \gamma z_{2} \equiv u_{1} \pmod{ \gcd(d_{1}, d_{2}) } ] \notag \\ & \qquad = V_{{\boldsymbol{\varepsilon}} \varoast {\boldsymbol{\varepsilon}}^{\prime}}(r + {\operatorname{lcm}}(d_{1}, d_{2}) \mathbb{Z} \mid u_{2}) \, \frac{ \varepsilon_{d_{1}} \, 
\varepsilon_{d_{2}}^{\prime} }{ q \, \varepsilon_{{\operatorname{lcm}}(d_{1}, d_{2})}^{\varoast}( {\boldsymbol{\varepsilon}} , {\boldsymbol{\varepsilon}}^{\prime} ) } \, \mathbbm{1} [ z_{1} - \gamma z_{2} \equiv u_{1} \pmod{ \gcd(d_{1}, d_{2}) } ] \notag \\ & \qquad \overset{\mathclap{\eqref{eq:V}}}{=} \, \begin{dcases} \frac{ \varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} }{ q } & \mathrm{if} \ z_{1} - \gamma z_{2} \equiv u_{1} \pmod{ \gcd(d_{1}, d_{2}) } , \\ & \qquad \quad \ \, u_{2} \equiv r \pmod{ {\operatorname{lcm}}(d_{1}, d_{2}) } \\ 0 & \mathrm{otherwise} \end{dcases} \notag \\ & \qquad \overset{\mathclap{\eqref{eq:Vplus}}}{=} \, (V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}})(z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}, u_{1} \mid u_{2}) \label{eq:equivalence_condition_plus2}\end{aligned}$$ for each $(u_{1}, u_{2}, z_{1} + d_{1}\mathbb{Z}, z_{2} + d_{2}\mathbb{Z}) \in \mathcal{X}^{2} \times \mathcal{Y}^{2}$. Therefore, it follows from and that $V_{{\boldsymbol{\varepsilon}}} \varoast V_{{\boldsymbol{\varepsilon}}^{\prime}} : \mathcal{X} \to \mathcal{Y}^{2} \times \mathcal{X}$ is equivalent in the sense of [Definition \[def:output\_equiv\]]{} to $V_{{\boldsymbol{\varepsilon}} \varoast {\boldsymbol{\varepsilon}}^{\prime}} : \mathcal{X} \to \mathcal{Y}$. This completes the proof of written in [Theorem \[th:recursive\_V\]]{}; thus, all assertions of [Theorem \[th:recursive\_V\]]{} are proved. Asymptotic Distributions of Multilevel Channel Polarization {#sect:asymptotic_distribution_MAEC} =========================================================== Let $q \ge 2$ be an integer, and let ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$ be an arbitrary probability vector. In this section, we characterize the asymptotic distribution of multilevel channel polarization for modular arithmetic erasure channels $\mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} )$ defined in [Definition \[def:V\]]{}. 
To this end, we now define the average value of the recursive formula over all sequences ${\boldsymbol{s}} \in \{ -, + \}^{n}$ of length $n$ as $$\begin{aligned} \mu_{d}^{(n)} \coloneqq \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \varepsilon_{d}^{{\boldsymbol{s}}} \label{def:mu_d}\end{aligned}$$ for each $d|q$ and each $n \in \mathbb{N}$. Note that since ${\boldsymbol{\varepsilon}}^{{\boldsymbol{s}}} = ( \varepsilon_{d}^{{\boldsymbol{s}}} )_{d|q}$ forms a probability vector for each ${\boldsymbol{s}} \in \{ -, + \}^{\ast}$, the vector $( \mu_{d}^{(n)} )_{d|q}$ also forms a probability vector for each $n \in \mathbb{N}$. Moreover, we define the limit $$\begin{aligned} \mu_{d}^{(\infty)} \coloneqq \lim_{n \to \infty} \mu_{d}^{(n)} \label{def:mu_d_infty}\end{aligned}$$ for each $d|q$ when the limit exists. As will be shown later, the limit $\mu_{d}^{(\infty)}$ always exists for every $d|q$, and the probability vector $( \mu_{d}^{(\infty)} )_{d|q}$ coincides with the asymptotic distribution of multilevel channel polarization for $\mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} )$. We summarize this fact in the following corollary. \[cor:multilevel\] Let $q \ge 2$ be an integer, and let ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$ be a probability vector. 
For any fixed $\delta > 0$, it holds that $$\begin{aligned} \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \big| I(V_{{\boldsymbol{\varepsilon}}}^{{\boldsymbol{s}}}) - \log d \big| < \delta \ \mathrm{and} \ \big| I(V_{{\boldsymbol{\varepsilon}}}^{{\boldsymbol{s}}}[\ker \varphi_{d}]) - \log d \big| < \delta \Big\} \Big| = \mu_{d}^{(\infty)} \label{eq:asymptotic_distribution}\end{aligned}$$ for every $d|q$, where $V_{{\boldsymbol{\varepsilon}}}^{{\boldsymbol{s}}}[\ker \varphi_{d}]$ denotes the homomorphism channel of $V_{{\boldsymbol{\varepsilon}}}^{{\boldsymbol{s}}}$ defined in ; the function $\varphi_{d} : x \mapsto (x + d\mathbb{Z})$ denotes the natural projection; and $\ker \varphi_{d} \coloneqq \{ x \in \mathbb{Z}/q\mathbb{Z} \mid \varphi_{d}( x ) = d \mathbb{Z} \}$ denotes the kernel of $\varphi_{d}$. [Corollary \[cor:multilevel\]]{} is a direct consequence of [Theorem \[th:polarization\]]{} which will be presented in [Section \[sect:asymptotic\_distribution\]]{}; and we defer their proofs until [Section \[sect:asymptotic\_distribution\]]{}. It follows from [Corollary \[cor:multilevel\]]{} that $$\begin{aligned} \sum_{d|q} \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \big| I(V_{{\boldsymbol{\varepsilon}}}^{{\boldsymbol{s}}}) - \log d \big| < \delta \ \mathrm{and} \ \big| I(V_{{\boldsymbol{\varepsilon}}}^{{\boldsymbol{s}}}[\ker \varphi_{d}]) - \log d \big| < \delta \Big\} \Big| = 1 , \label{eq:multilevel_sum_V}\end{aligned}$$ which is a version of . Therefore, [Corollary \[cor:multilevel\]]{} characterizes each term of the sum of for every modular arithmetic erasure channel $\mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} )$. 
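The averages $\mu_{d}^{(n)}$ can be computed exactly for small $n$ by enumerating all $2^{n}$ sequences ${\boldsymbol{s}}$ and applying the minus and plus transforms of the erasure profile, i.e., the sums over $\gcd(d_{1}, d_{2}) = d$ and ${\operatorname{lcm}}(d_{1}, d_{2}) = d$, respectively. The following Python sketch is an illustration of these definitions, not a reference implementation.

```python
from math import gcd
from itertools import product

def lcm(a, b):
    return a * b // gcd(a, b)

def divisors(q):
    return [d for d in range(1, q + 1) if q % d == 0]

def minus(eps, q):
    # eps_d^- = sum over d1, d2 with gcd(d1, d2) = d of eps_{d1} * eps_{d2}
    out = dict.fromkeys(divisors(q), 0.0)
    for d1, d2 in product(divisors(q), repeat=2):
        out[gcd(d1, d2)] += eps[d1] * eps[d2]
    return out

def plus(eps, q):
    # eps_d^+ = sum over d1, d2 with lcm(d1, d2) = d of eps_{d1} * eps_{d2}
    out = dict.fromkeys(divisors(q), 0.0)
    for d1, d2 in product(divisors(q), repeat=2):
        out[lcm(d1, d2)] += eps[d1] * eps[d2]
    return out

def mu(eps, q, n):
    """Average of eps^s over all 2^n sign sequences s of length n."""
    vecs = [eps]
    for _ in range(n):
        vecs = [t(v, q) for v in vecs for t in (minus, plus)]
    return {d: sum(v[d] for v in vecs) / len(vecs) for d in divisors(q)}
```

For a prime power such as $q = 4$ the average stays at the initial vector, in accordance with [Theorem \[th:primepower\]]{} below, whereas for a composite modulus such as $q = 6$ it already moves after a single step.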
Namely, the asymptotic distribution of multilevel channel polarization for $\mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} )$ is indeed the probability vector $( \mu_{d}^{(\infty)} )_{d|q}$; hence, we often call the probability vector $( \mu_{d}^{(\infty)} )_{d|q}$ the asymptotic distribution. Fortunately, in the case where $q = p^{r}$ is a prime power, the asymptotic distribution $( \mu_{d}^{(\infty)} )_{d|q}$ simply coincides with the initial probability vector ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$. However, in the case where $q = p_{1}^{r_{1}} p_{2}^{r_{2}} \cdots p_{m}^{r_{m}}$ is a composite number having two or more prime factors $p_{i}^{r_{i}}$, the asymptotic distribution $( \mu_{d}^{(\infty)} )_{d|q}$ does not coincide with ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$ in general. In [Section \[sect:primepower\]]{}, we first determine the asymptotic distribution $( \mu_{d}^{(\infty)} )_{d|q}$ in the special case where $q = p^{r}$ is a prime power. In [Section \[sect:composite\]]{}, we then determine it in the general case where $q = p_{1}^{r_{1}} p_{2}^{r_{2}} \cdots p_{m}^{r_{m}}$ is a composite number; and we give Algorithm \[alg:main\], which calculates $( \mu_{d}^{(\infty)} )_{d|q}$ from a given initial probability vector ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$. In [Section \[sect:asymptotic\_distribution\]]{}, we finally prove that $( \mu_{d}^{(\infty)} )_{d|q}$ is indeed the asymptotic distribution of multilevel channel polarization for $\mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} )$. 
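One can also watch the polarization described in [Corollary \[cor:multilevel\]]{} numerically. The sketch below assumes the symmetric-capacity formula $I(V_{{\boldsymbol{\varepsilon}}}) = \sum_{d|q} \varepsilon_{d} \log d$ for $\mathrm{MAEC}_{q}({\boldsymbol{\varepsilon}})$ (an erasure to level $d$ reveals the input modulo $d$); this formula is an assumption of the sketch and is not derived in this section. Since $\gcd(d_{1}, d_{2}) \, {\operatorname{lcm}}(d_{1}, d_{2}) = d_{1} d_{2}$, one polarization step preserves the average capacity.

```python
from math import gcd, log
from itertools import product

def lcm(a, b):
    return a * b // gcd(a, b)

def divisors(q):
    return [d for d in range(1, q + 1) if q % d == 0]

def minus(eps, q):
    # gcd-based profile transform (the "worse" channel)
    out = dict.fromkeys(divisors(q), 0.0)
    for d1, d2 in product(divisors(q), repeat=2):
        out[gcd(d1, d2)] += eps[d1] * eps[d2]
    return out

def plus(eps, q):
    # lcm-based profile transform (the "better" channel)
    out = dict.fromkeys(divisors(q), 0.0)
    for d1, d2 in product(divisors(q), repeat=2):
        out[lcm(d1, d2)] += eps[d1] * eps[d2]
    return out

def capacity(eps):
    # assumed formula: I(V_eps) = sum_d eps_d * log d (in nats)
    return sum(p * log(d) for d, p in eps.items() if d > 1)

def leaf_profiles(eps, q, n):
    """Erasure profiles eps^s for all 2^n sign sequences s."""
    vecs = [eps]
    for _ in range(n):
        vecs = [t(v, q) for v in vecs for t in (minus, plus)]
    return vecs
```

As $n$ grows, the capacities of the leaf profiles cluster near the values $\log d$ for $d|q$, while their average remains $I(V_{{\boldsymbol{\varepsilon}}})$.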
Special Cases of $( \mu_{d}^{(\infty)} )_{d|q}$: The Input Alphabet Size $q = p^{r}$ is a Prime Power {#sect:primepower} ----------------------------------------------------------------------------------------------------- In this section, unless stated otherwise, assume that $q = p^{r}$ for some prime number $p$ and some positive integer $r$. Note that in this case, the modular arithmetic erasure channel $\mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} )$ is essentially the same as the OEC (see [Example \[ex:oec\]]{}), and the probability vector ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$ can be written as ${\boldsymbol{\varepsilon}} = ( \varepsilon_{p^{i}} )_{i = 0}^{r}$. Then, we can observe the following proposition. \[prop:primepower\_conservation\] Let $q$ be a prime power. For any probability vectors ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$ and ${\boldsymbol{\varepsilon}}^{\prime} = ( \varepsilon_{d}^{\prime} )_{d|q}$, it holds that $$\begin{aligned} \varepsilon_{d}^{\boxast} + \varepsilon_{d}^{\varoast} = \varepsilon_{d} + \varepsilon_{d}^{\prime}\end{aligned}$$ for every $d|q$, where $\varepsilon_{d}^{\boxast}$ and $\varepsilon_{d}^{\varoast}$ are defined in and , respectively, depending on both ${\boldsymbol{\varepsilon}}$ and ${\boldsymbol{\varepsilon}}^{\prime}$. 
For each $i = 0, 1, \dots, r$, we have $$\begin{aligned} \varepsilon_{p^{i}}^{\boxast} & = \sum_{\substack{ d_{1}|q, d_{2}|q : \\ \gcd(d_{1}, d_{2}) = p^{i} }} \varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} \notag \\ & = \sum_{j = 0}^{r} \sum_{k = 0}^{r} \varepsilon_{p^{j}} \, \varepsilon_{p^{k}}^{\prime} \, \mathbbm{1}[ \min\{ j, k \} = i ] \notag \\ & = \sum_{j = 0}^{r} \sum_{k = 0}^{r} \varepsilon_{p^{j}} \, \varepsilon_{p^{k}}^{\prime} \, \Big( \mathbbm{1}[ i = j \le k ] + \mathbbm{1}[ i = k < j ] \Big) , \\ \varepsilon_{p^{i}}^{\varoast} & = \sum_{\substack{ d_{1}|q, d_{2}|q : \\ {\operatorname{lcm}}(d_{1}, d_{2}) = p^{i} }} \varepsilon_{d_{1}} \, \varepsilon_{d_{2}}^{\prime} \notag \\ & = \sum_{j = 0}^{r} \sum_{k = 0}^{r} \varepsilon_{p^{j}} \, \varepsilon_{p^{k}}^{\prime} \, \mathbbm{1}[ \max\{ j, k \} = i ] \notag \\ & = \sum_{j = 0}^{r} \sum_{k = 0}^{r} \varepsilon_{p^{j}} \, \varepsilon_{p^{k}}^{\prime} \, \Big( \mathbbm{1}[ k < j = i ] + \mathbbm{1}[ j \le k = i ] \Big) .\end{aligned}$$ Hence, for each $i = 0, 1, \dots, r$, it holds that $$\begin{aligned} \varepsilon_{p^{i}}^{\boxast} + \varepsilon_{p^{i}}^{\varoast} & = \sum_{j = 0}^{r} \sum_{k = 0}^{r} \varepsilon_{p^{j}} \, \varepsilon_{p^{k}}^{\prime} \, \Big( \mathbbm{1}[ i = j \le k ] + \mathbbm{1}[ i = k < j ] + \mathbbm{1}[ k < j = i ] + \mathbbm{1}[ j \le k = i ] \Big) \notag \\ & = \sum_{j = 0}^{r} \sum_{k = 0}^{r} \varepsilon_{p^{j}} \, \varepsilon_{p^{k}}^{\prime} \, \Big( \mathbbm{1}[ i = j ] + \mathbbm{1}[ i = k ] \Big) \notag \\ & = \varepsilon_{p^{i}} \sum_{k = 0}^{r} \varepsilon_{p^{k}}^{\prime} + \varepsilon_{p^{i}}^{\prime} \sum_{j = 0}^{r} \varepsilon_{p^{j}} \notag \\ & = \varepsilon_{p^{i}} + \varepsilon_{p^{i}}^{\prime} .\end{aligned}$$ This completes the proof of [Proposition \[prop:primepower\_conservation\]]{}. 
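The identity just proved can also be confirmed numerically from the defining sums. The following Python sketch is illustrative: `conserves` checks $\varepsilon_{d}^{\boxast} + \varepsilon_{d}^{\varoast} = \varepsilon_{d} + \varepsilon_{d}^{\prime}$ for every $d|q$, which holds for a prime power such as $q = 8$ but fails in general for a composite modulus such as $q = 6$.

```python
from math import gcd
from itertools import product

def lcm(a, b):
    return a * b // gcd(a, b)

def divisors(q):
    return [d for d in range(1, q + 1) if q % d == 0]

def transforms(eps1, eps2, q):
    """The pair (eps^minus, eps^plus) computed from the defining sums."""
    box = dict.fromkeys(divisors(q), 0.0)   # gcd-based profile
    var = dict.fromkeys(divisors(q), 0.0)   # lcm-based profile
    for d1, d2 in product(divisors(q), repeat=2):
        box[gcd(d1, d2)] += eps1[d1] * eps2[d2]
        var[lcm(d1, d2)] += eps1[d1] * eps2[d2]
    return box, var

def conserves(eps1, eps2, q, tol=1e-12):
    box, var = transforms(eps1, eps2, q)
    return all(abs(box[d] + var[d] - eps1[d] - eps2[d]) < tol
               for d in divisors(q))
```

For instance, `conserves` returns `True` for any two profiles over the divisors of $8$, and `False` for the uniform profile over the divisors of $6$.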
Clearly, by setting ${\boldsymbol{\varepsilon}} = {\boldsymbol{\varepsilon}}^{\prime}$, [Proposition \[prop:primepower\_conservation\]]{} can be reduced to the identity $$\begin{aligned} \frac{ 1 }{ 2 } \Big[ \varepsilon_{d}^{{\boldsymbol{s}}-} + \varepsilon_{d}^{{\boldsymbol{s}}+} \Big] & = \varepsilon_{d}^{{\boldsymbol{s}}} \label{eq:martingale_primepower}\end{aligned}$$ for every $d|q$ and every ${\boldsymbol{s}} \in \{ -, + \}^{\ast}$, which is a martingale-like property of the recursive formulas with respect to the polar transforms. Indeed, we observe from that $$\begin{aligned} \mu_{d}^{(n)} = \varepsilon_{d} \label{eq:primepower_conservation_mu_d}\end{aligned}$$ for every $d|q$ and every $n \in \mathbb{N}$, where $\mu_{d}^{(n)}$ is defined in . Equation  straightforwardly proves the following theorem. \[th:primepower\] If $q$ is a prime power, then $ \mu_{d}^{(\infty)} = \varepsilon_{d} $ for every $d|q$. Therefore, [Theorem \[th:primepower\]]{} shows that if $q = p^{r}$ is a prime power, then the asymptotic distribution $( \mu_{d}^{(\infty)} )_{d|q}$ coincides with the initial probability vector ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$, i.e., the right-hand side of given in [Corollary \[cor:multilevel\]]{} is equal to the corresponding probability mass of ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$. Hence, it can be verified that the asymptotic distribution of [Fig. \[subfig:maec\_512\]]{} is given by $\mu_{d}^{(\infty)} = 1/10$ for each $d|q$. Note again that the modular arithmetic erasure channel is essentially the same as the OEC, proposed by Park and Barg [@park_barg_isit2011 p. 2285], in the case where $q = p^{r}$ is a prime power. Unfortunately, if $q$ has two or more prime factors, then [Proposition \[prop:primepower\_conservation\]]{} does not hold in general. 
Thus, whereas [Theorem \[th:primepower\]]{} is very simple, characterizing the asymptotic distribution $( \mu_{d}^{(\infty)} )_{d|q}$ with respect to the initial probability vector ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$ is hard in the case where $q$ is a general composite number. To gain insight into this problem, we now give an alternative, more roundabout proof of [Theorem \[th:primepower\]]{} as follows. For each integer $a \ge 1$ and each sequence ${\boldsymbol{s}} \in \{ -, + \}^{\ast}$, we define $$\begin{aligned} T^{{\boldsymbol{s}}}( a ) & \coloneqq \sum_{i = a}^{r} \varepsilon_{p^{i}}^{{\boldsymbol{s}}} , \label{def:T} \\ B^{{\boldsymbol{s}}}( a ) & \coloneqq \sum_{i = 0}^{a-1} \varepsilon_{p^{i}}^{{\boldsymbol{s}}} , \label{def:B}\end{aligned}$$ where ${\boldsymbol{\varepsilon}}^{{\boldsymbol{s}}} = ( \varepsilon_{d}^{{\boldsymbol{s}}} )_{d|q} = ( \varepsilon_{p^{i}}^{{\boldsymbol{s}}} )_{i = 0}^{r}$ is recursively defined in . If the sequence ${\boldsymbol{s}}$ is empty, then we omit the superscripts ${\boldsymbol{s}}$ as $T( a )$ and $B( a )$. Clearly, it holds that $$\begin{aligned} T^{{\boldsymbol{s}}}( a ) + B^{{\boldsymbol{s}}}( a ) = \sum_{i = 0}^{r} \varepsilon_{p^{i}}^{{\boldsymbol{s}}} = 1 \label{eq:sum_TB}\end{aligned}$$ for each $a \ge 1$ and each ${\boldsymbol{s}} \in \{ -, + \}^{\ast}$. The following lemma gives recursive formulas of and with respect to the polar transforms. 
\[lem:recursive\_TB\] For each integer $a \ge 1$ and each sequence ${\boldsymbol{s}} \in \{ -, + \}^{\ast}$, it holds that $$\begin{aligned} T^{{\boldsymbol{s}}-}( a ) & = T^{{\boldsymbol{s}}}( a )^{2} , \label{eq:T_minus} \\ B^{{\boldsymbol{s}}-}( a ) & = 2 \, B^{{\boldsymbol{s}}}( a ) \, T^{{\boldsymbol{s}}}( a ) + B^{{\boldsymbol{s}}}( a )^{2} , \label{eq:B_minus} \\ T^{{\boldsymbol{s}}+}( a ) & = 2 \, B^{{\boldsymbol{s}}}( a ) \, T^{{\boldsymbol{s}}}( a ) + T^{{\boldsymbol{s}}}( a )^{2} , \\ B^{{\boldsymbol{s}}+}( a ) & = B^{{\boldsymbol{s}}}( a )^{2} .\end{aligned}$$ We now prove the assertion for the minus transform. A straightforward calculation yields $$\begin{aligned} \varepsilon_{p^{i}}^{{\boldsymbol{s}}-} \, & \overset{\mathclap{\eqref{def:eps_s}}}{=} \sum_{\substack{ d_{1}|p^{r}, d_{2}|p^{r} : \\ \gcd(d_{1}, d_{2}) = p^{i} }} \varepsilon_{d_{1}}^{{\boldsymbol{s}}} \varepsilon_{d_{2}}^{{\boldsymbol{s}}} \notag \\ & = \sum_{j = i}^{r} \sum_{k = i}^{r} \varepsilon_{p^{j}}^{{\boldsymbol{s}}} \, \varepsilon_{p^{k}}^{{\boldsymbol{s}}} \, \mathbbm{1}[ \min\{ j, k \} = i ] \notag \\ & = \varepsilon_{p^{i}}^{{\boldsymbol{s}}} \, \Bigg( \sum_{j = i}^{r} \varepsilon_{p^{j}}^{{\boldsymbol{s}}} + \sum_{k = i + 1}^{r} \varepsilon_{p^{k}}^{{\boldsymbol{s}}} \Bigg)\end{aligned}$$ for each $i = 0, 1, \dots, r$. 
Then, we have $$\begin{aligned} T^{{\boldsymbol{s}}-}( a ) & = \sum_{i = a}^{r} \varepsilon_{p^{i}}^{{\boldsymbol{s}}-} \notag \\ & = \sum_{i = a}^{r} \varepsilon_{p^{i}}^{{\boldsymbol{s}}} \, \Bigg( \sum_{j = i}^{r} \varepsilon_{p^{j}}^{{\boldsymbol{s}}} + \sum_{k = i + 1}^{r} \varepsilon_{p^{k}}^{{\boldsymbol{s}}} \Bigg) \notag \\ & \overset{\mathclap{\text{(a)}}}{=} \sum_{i = a}^{r} \varepsilon_{p^{i}}^{{\boldsymbol{s}}} \, \Bigg( \sum_{j = i}^{r} \varepsilon_{p^{j}}^{{\boldsymbol{s}}} + \sum_{k = a}^{i-1} \varepsilon_{p^{k}}^{{\boldsymbol{s}}} \Bigg) \notag \\ & = \sum_{i = a}^{r} \varepsilon_{p^{i}}^{{\boldsymbol{s}}} \sum_{j = a}^{r} \varepsilon_{p^{j}}^{{\boldsymbol{s}}} \notag \\ & = T^{{\boldsymbol{s}}}( a )^{2} ,\end{aligned}$$ where (a) follows from the fact that $\mathbbm{1}[ a \le i \le r ] \mathbbm{1}[ i < k \le r ] = \mathbbm{1}[ a \le i < k \le r ] = \mathbbm{1}[ a \le k \le r ] \mathbbm{1}[ a \le i < k ]$. This is indeed . Moreover, it follows from and that $$\begin{aligned} B^{{\boldsymbol{s}}-}( a ) & = 1 - T^{{\boldsymbol{s}}-}( a ) \notag \\ & = 1 - T^{{\boldsymbol{s}}}( a )^{2} \notag \\ & = \Big( 1 - T^{{\boldsymbol{s}}}( a ) \Big) \Big( 1 + T^{{\boldsymbol{s}}}( a ) \Big) \notag \\ & = \Big( T^{{\boldsymbol{s}}}( a ) + B^{{\boldsymbol{s}}}( a ) - T^{{\boldsymbol{s}}}( a ) \Big) \Big( T^{{\boldsymbol{s}}}( a ) + B^{{\boldsymbol{s}}}( a ) + T^{{\boldsymbol{s}}}( a ) \Big) \notag \\ & = B^{{\boldsymbol{s}}}( a ) \Big( 2 \, T^{{\boldsymbol{s}}}( a ) + B^{{\boldsymbol{s}}}( a ) \Big) \notag \\ & = 2 \, T^{{\boldsymbol{s}}}( a ) \, B^{{\boldsymbol{s}}}( a ) + B^{{\boldsymbol{s}}}( a )^{2} ,\end{aligned}$$ which is indeed . The assertion for the plus transform can be dually proved; and this completes the proof of [Lemma \[lem:recursive\_TB\]]{}. 
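The recursive formulas of [Lemma \[lem:recursive\_TB\]]{} can be checked numerically. In the sketch below, the minus transform combines exponent pairs by $\min$ (i.e., by the $\gcd$ of the divisors $p^{j}$ and $p^{k}$), exactly as in the proof above; the plus transform is implemented by $\max$ (i.e., by the $\operatorname{lcm}$), which is an assumption dual to the minus rule and consistent with the formulas for $T^{{\boldsymbol{s}}+}( a )$ and $B^{{\boldsymbol{s}}+}( a )$. The probability vector is an illustrative choice.

```python
# Sketch: numerical check of the recursive T/B formulas for q = p^r.
from itertools import product
from math import isclose

def polar_minus(eps):
    r = len(eps) - 1
    out = [0.0] * (r + 1)
    for j, k in product(range(r + 1), repeat=2):
        out[min(j, k)] += eps[j] * eps[k]   # gcd(p^j, p^k) = p^min(j, k)
    return out

def polar_plus(eps):
    # assumed dual rule: pairs combined by max (lcm of p^j and p^k)
    r = len(eps) - 1
    out = [0.0] * (r + 1)
    for j, k in product(range(r + 1), repeat=2):
        out[max(j, k)] += eps[j] * eps[k]   # lcm(p^j, p^k) = p^max(j, k)
    return out

def T(eps, a): return sum(eps[a:])
def B(eps, a): return sum(eps[:a])

eps = [0.1, 0.2, 0.3, 0.4]                  # illustrative vector, r = 3
for a in range(1, len(eps)):
    t, b = T(eps, a), B(eps, a)
    assert isclose(T(polar_minus(eps), a), t ** 2)              # T^- = T^2
    assert isclose(B(polar_minus(eps), a), 2 * b * t + b ** 2)  # B^- = 2BT + B^2
    assert isclose(T(polar_plus(eps), a), 2 * b * t + t ** 2)   # T^+ = 2BT + T^2
    assert isclose(B(polar_plus(eps), a), b ** 2)               # B^+ = B^2
```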
[Lemma \[lem:recursive\_TB\]]{} shows that, after combining the probability masses of $( \varepsilon_{d}^{{\boldsymbol{s}}} )_{d|q}$ into the two sums and , the resulting recursive formulas coincide with the recursive formulas of the polar transforms for BECs (see [Corollary \[cor:bec\]]{} and [Example \[ex:recursive\_2\]]{}). The following lemma is a straightforward consequence of [Lemma \[lem:recursive\_TB\]]{}. \[lem:martingale\_primepower\] For each integer $a \ge 1$ and each sequence ${\boldsymbol{s}} \in \{ -, + \}^{\ast}$, it holds that $$\begin{aligned} \frac{ 1 }{ 2 } \Big[ T^{{\boldsymbol{s}}-}( a ) + T^{{\boldsymbol{s}}+}( a ) \Big] & = T^{{\boldsymbol{s}}}( a ) , \\ \frac{ 1 }{ 2 } \Big[ B^{{\boldsymbol{s}}-}( a ) + B^{{\boldsymbol{s}}+}( a ) \Big] & = B^{{\boldsymbol{s}}}( a ) .\end{aligned}$$ Consequently, it holds that $$\begin{aligned} \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} T^{{\boldsymbol{s}}}( a ) & = T( a ) , \\ \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} B^{{\boldsymbol{s}}}( a ) & = B( a )\end{aligned}$$ for all integers $n \ge 1$ and $a \ge 1$. [Lemma \[lem:martingale\_primepower\]]{} is a martingale-like property for and ; and it immediately proves [Theorem \[th:primepower\]]{}. It suffices to verify that $$\begin{aligned} \mu_{p^{i}}^{(n)} = \varepsilon_{p^{i}} \label{eq:induction_hypothesis}\end{aligned}$$ for every $n \in \mathbb{N}$ and every $i = 0, 1, \dots, r$, as with . We prove by induction on $i$. It follows from [Lemma \[lem:martingale\_primepower\]]{} that $$\begin{aligned} \mu_{1}^{(n)} & = \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \varepsilon_{1}^{{\boldsymbol{s}}} \notag \\ & = \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} B^{{\boldsymbol{s}}}( 1 ) \notag \\ & = B( 1 ) \notag \\ & = \varepsilon_{1}\end{aligned}$$ for every $n \in \mathbb{N}$, which implies with $i = 0$. Let $0 \le k < r$ be an integer. Suppose that holds for every $n \in \mathbb{N}$ and every $i = 0, 1, \dots, k$.
Then, it also follows from [Lemma \[lem:martingale\_primepower\]]{} that $$\begin{aligned} \mu_{p^{k+1}}^{(n)} & = \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \varepsilon_{p^{k+1}}^{{\boldsymbol{s}}} \notag \\ & = \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \sum_{i = 0}^{k+1} \varepsilon_{p^{i}}^{{\boldsymbol{s}}} - \sum_{j = 0}^{k} \varepsilon_{p^{j}} \notag \\ & = \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} B^{{\boldsymbol{s}}}( k+2 ) - \sum_{j = 0}^{k} \varepsilon_{p^{j}} \notag \\ & = B( k+2 ) - \sum_{j = 0}^{k} \varepsilon_{p^{j}} \notag \\ & = \sum_{i = 0}^{k+1} \varepsilon_{p^{i}} - \sum_{j = 0}^{k} \varepsilon_{p^{j}} \notag \\ & = \varepsilon_{p^{k+1}}\end{aligned}$$ for every $n \in \mathbb{N}$, where the second equality uses the induction hypothesis ; this completes the proof of [Theorem \[th:primepower\]]{}. Finally, it is worth mentioning that the above indirect proof of [Theorem \[th:primepower\]]{} can be extended to the case where $q$ is a general composite number with slightly more complicated arguments. Indeed, the idea behind this proof is an informative stepping stone to the next subsection.

General Cases of $( \mu_{d}^{(\infty)} )_{d|q}$: The Input Alphabet Size $q = p_{1}^{r_{1}} p_{2}^{r_{2}} \cdots p_{m}^{r_{m}}$ is a Composite Number {#sect:composite}
-----------------------------------------------------------------------------------------------------------------------------------------------------

Henceforth, assume that the input alphabet size $q$ is factorized as[^23] $q = p_{1}^{r_{1}} p_{2}^{r_{2}} \cdots p_{m}^{r_{m}}$ with distinct prime numbers $p_{1}, p_{2}, \dots, p_{m}$ and nonnegative integers $r_{1}, r_{2}, \dots, r_{m}$. If a positive divisor $d$ of $q$ is factorized as $d = p_{1}^{t_{1}} p_{2}^{t_{2}} \cdots p_{m}^{t_{m}}$, then we write it as $d = \langle {\boldsymbol{t}} \rangle$ for short, where ${\boldsymbol{t}} = (t_{1}, t_{2}, \dots, t_{m})$.
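The correspondence $d = \langle {\boldsymbol{t}} \rangle$ can be made concrete with a small script (a sketch; the factorization $q = 2^{1} \cdot 3^{1} = 6$ is an illustrative choice):

```python
# Sketch: the bijection d = <t> between divisors of q and exponent tuples t
# with 0 <= t <= r, for the illustrative factorization q = 2^1 * 3^1 = 6.
from itertools import product

primes, r = [2, 3], [1, 1]

def from_tuple(t):
    """<t> = prod_k p_k^{t_k}."""
    d = 1
    for p, tk in zip(primes, t):
        d *= p ** tk
    return d

tuples = list(product(*(range(rk + 1) for rk in r)))
divisors = sorted(from_tuple(t) for t in tuples)
# the tuples with 0 <= t <= r enumerate exactly the divisors of q
assert divisors == [1, 2, 3, 6]
```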
Namely, defining a partial order ${\boldsymbol{t}} \le {\boldsymbol{u}}$ between two $m$-tuples ${\boldsymbol{t}}$ and ${\boldsymbol{u}}$ by $t_{i} \le u_{i}$ for every $i = 1, 2, \dots, m$, we observe that $d$ divides $q$ if and only if ${\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}$ for $d = \langle {\boldsymbol{t}} \rangle$ and $q = \langle {\boldsymbol{r}} \rangle$, where ${\boldsymbol{0}} = (0, \dots, 0)$ denotes the zero vector. As in and , the key idea of our analyses is that for each integers $i$ and $j$ satisfying $1 \le i < j \le m$, we combine the probability masses $( \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} )_{{\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}}$ into the following four masses[^24]:$$\begin{aligned} \theta_{i, j}^{{\boldsymbol{s}}}(a, b) & \coloneqq \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , \\ t_{i} \ge a, t_{j} \ge b } } \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} , \label{def:theta} \\ \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) & \coloneqq \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , \\ t_{i} \ge a, t_{j} < b } } \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} , \label{def:lambda} \\ \rho_{i, j}^{{\boldsymbol{s}}}(a, b) & \coloneqq \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , \\ t_{i} < a, t_{j} \ge b } } \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} , \label{def:rho} \\ \beta_{i, j}^{{\boldsymbol{s}}}(a, b) & \coloneqq \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , \\ t_{i} < a, t_{j} < b } } \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} \label{def:beta} \end{aligned}$$ for each integers $a, b \ge 1$, and each sequence ${\boldsymbol{s}} \in \{ -, + \}^{\ast}$, where 
${\boldsymbol{\varepsilon}}^{{\boldsymbol{s}}} = ( \varepsilon_{d}^{{\boldsymbol{s}}} )_{d|q} = ( \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} )_{{\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}}$ is recursively defined in with an initial probability vector ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$. If the sequence ${\boldsymbol{s}}$ is empty, then we omit the superscripts ${\boldsymbol{s}}$ as $\theta_{i, j}(a, b)$, $\lambda_{i, j}(a, b)$, $\rho_{i, j}(a, b)$, and $\beta_{i, j}(a, b)$. Note that $$\begin{aligned} \theta_{i, j}^{{\boldsymbol{s}}}(a, b) + \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) + \rho_{i, j}^{{\boldsymbol{s}}}(a, b) + \beta_{i, j}^{{\boldsymbol{s}}}(a, b) = \sum_{d|q} \varepsilon_{d}^{{\boldsymbol{s}}} = 1 \label{eq:sum_theta_lambda_rho_beta}\end{aligned}$$ for each $1 \le i < j \le m$, each $a, b \ge 1$, and each ${\boldsymbol{s}} \in \{ -, + \}^{\ast}$. A simple example of – is given as follows: \[ex:theta\_lambda\_rho\_beta\_6\] Consider the case where $q = 6$ (see Examples \[ex:sahebi\_pradhan\] and \[ex:recursive\_6\]). Set $m = 2$, $(p_{1}, p_{2}) = (2, 3)$, and $(r_{1}, r_{2}) = (1, 1)$. Let $( \varepsilon_{d} )_{d|q} = (\varepsilon_{1}, \varepsilon_{2}, \varepsilon_{3}, \varepsilon_{6})$ be an initial four-dimensional probability vector. Since $m = 2$, it suffices to consider the case where $(i, j) = (1, 2)$. For every $a, b \ge 2$ and every ${\boldsymbol{s}} \in \{ -, + \}^{\ast}$, we observe that $$\begin{aligned} & \left\{ \begin{array}{l} \theta_{1, 2}^{{\boldsymbol{s}}}(1, 1) = \varepsilon_{6}^{{\boldsymbol{s}}} , \\ \lambda_{1, 2}^{{\boldsymbol{s}}}(1, 1) = \varepsilon_{2}^{{\boldsymbol{s}}} , \\ \rho_{1, 2}^{{\boldsymbol{s}}}(1, 1) = \varepsilon_{3}^{{\boldsymbol{s}}} , \\ \beta_{1, 2}^{{\boldsymbol{s}}}(1, 1) = \varepsilon_{1}^{{\boldsymbol{s}}} . \end{array} \right. 
\label{eq:theta_lambda_rho_beta_6} \\ & \left\{ \begin{array}{l} \theta_{1, 2}^{{\boldsymbol{s}}}(a, 1) = 0 , \\ \lambda_{1, 2}^{{\boldsymbol{s}}}(a, 1) = 0 , \\ \rho_{1, 2}^{{\boldsymbol{s}}}(a, 1) = \varepsilon_{3}^{{\boldsymbol{s}}} + \varepsilon_{6}^{{\boldsymbol{s}}} , \\ \beta_{1, 2}^{{\boldsymbol{s}}}(a, 1) = \varepsilon_{1}^{{\boldsymbol{s}}} + \varepsilon_{2}^{{\boldsymbol{s}}} . \end{array} \right. \\ & \left\{ \begin{array}{l} \theta_{1, 2}^{{\boldsymbol{s}}}(1, b) = 0 , \\ \lambda_{1, 2}^{{\boldsymbol{s}}}(1, b) = \varepsilon_{2}^{{\boldsymbol{s}}} + \varepsilon_{6}^{{\boldsymbol{s}}} , \\ \rho_{1, 2}^{{\boldsymbol{s}}}(1, b) = 0 , \\ \beta_{1, 2}^{{\boldsymbol{s}}}(1, b) = \varepsilon_{1}^{{\boldsymbol{s}}} + \varepsilon_{3}^{{\boldsymbol{s}}} . \end{array} \right. \\ & \left\{ \begin{array}{l} \theta_{1, 2}^{{\boldsymbol{s}}}(a, b) = 0 , \\ \lambda_{1, 2}^{{\boldsymbol{s}}}(a, b) = 0 , \\ \rho_{1, 2}^{{\boldsymbol{s}}}(a, b) = 0 , \\ \beta_{1, 2}^{{\boldsymbol{s}}}(a, b) = \varepsilon_{1}^{{\boldsymbol{s}}} + \varepsilon_{2}^{{\boldsymbol{s}}} + \varepsilon_{3}^{{\boldsymbol{s}}} + \varepsilon_{6}^{{\boldsymbol{s}}} = 1 . 
\end{array} \right.\end{aligned}$$ We now give formulas for – under the recursive formulas as follows: \[lem:formulas\] For any ${\boldsymbol{s}} \in \{ -, + \}^{\ast}$, $1 \le i < j \le m$, and $a, b \ge 1$, it holds that $$\begin{aligned} & \left\{ \begin{array}{l} \theta_{i, j}^{{\boldsymbol{s}}-}(a, b) = \theta_{i, j}^{{\boldsymbol{s}}}(a, b)^{2} , \\ \lambda_{i, j}^{{\boldsymbol{s}}-}(a, b) = \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \, \big[ \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) + 2 \, \theta_{i, j}^{{\boldsymbol{s}}}(a, b) \big] , \\ \rho_{i, j}^{{\boldsymbol{s}}-}(a, b) = \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \, \big[ \rho_{i, j}^{{\boldsymbol{s}}}(a, b) + 2 \, \theta_{i, j}^{{\boldsymbol{s}}}(a, b) \big] , \\ \beta_{i, j}^{{\boldsymbol{s}}-}(a, b) = \beta_{i, j}^{{\boldsymbol{s}}}(a, b) \, \big[ 2 - \beta_{i, j}^{{\boldsymbol{s}}}(a, b) \big] + 2 \, \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \, \rho_{i, j}^{{\boldsymbol{s}}}(a, b) , \end{array} \right. \\ & \left\{ \begin{array}{l} \theta_{i, j}^{{\boldsymbol{s}}+}(a, b) = \theta_{i, j}^{{\boldsymbol{s}}}(a, b) \, \big[ 2 - \theta_{i, j}^{{\boldsymbol{s}}}(a, b) \big] + 2 \, \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \, \rho_{i, j}^{{\boldsymbol{s}}}(a, b) , \\ \lambda_{i, j}^{{\boldsymbol{s}}+}(a, b) = \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \, \big[ \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) + 2 \, \beta_{i, j}^{{\boldsymbol{s}}}(a, b) \big] , \\ \rho_{i, j}^{{\boldsymbol{s}}+}(a, b) = \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \, \big[ \rho_{i, j}^{{\boldsymbol{s}}}(a, b) + 2 \, \beta_{i, j}^{{\boldsymbol{s}}}(a, b) \big] , \\ \beta_{i, j}^{{\boldsymbol{s}}+}(a, b) = \beta_{i, j}^{{\boldsymbol{s}}}(a, b)^{2} . \end{array} \right.\end{aligned}$$ By symmetry, it suffices to prove only for the minus transforms. Fix a sequence ${\boldsymbol{s}} \in \{ -, + \}^{\ast}$, indices $1 \le i < j \le m$, and integers $a, b \ge 1$ arbitrarily. 
A direct calculation shows $$\begin{aligned} \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}-} & \overset{\mathclap{\eqref{def:eps_s}}}{=} \sum_{\substack{ d_{1}|q, d_{2}|q : \\ \gcd(d_{1}, d_{2}) = \langle {\boldsymbol{t}} \rangle }} \varepsilon_{d_{1}}^{{\boldsymbol{s}}} \, \varepsilon_{d_{2}}^{{\boldsymbol{s}}} \notag \\ & = \sum_{{\boldsymbol{u}} : {\boldsymbol{0}} \le {\boldsymbol{u}} \le {\boldsymbol{r}}} \sum_{{\boldsymbol{v}} : {\boldsymbol{0}} \le {\boldsymbol{v}} \le {\boldsymbol{r}}} \varepsilon_{\langle {\boldsymbol{u}} \rangle}^{{\boldsymbol{s}}} \, \varepsilon_{\langle {\boldsymbol{v}} \rangle}^{{\boldsymbol{s}}} \prod_{k = 1}^{m} \mathbbm{1}[ t_{k} = \min\{ u_{k}, v_{k} \} ] \notag \\ & = \sum_{{\boldsymbol{u}} : {\boldsymbol{0}} \le {\boldsymbol{u}} \le {\boldsymbol{r}}} \sum_{{\boldsymbol{v}} : {\boldsymbol{0}} \le {\boldsymbol{v}} \le {\boldsymbol{r}}} \varepsilon_{\langle {\boldsymbol{u}} \rangle}^{{\boldsymbol{s}}} \, \varepsilon_{\langle {\boldsymbol{v}} \rangle}^{{\boldsymbol{s}}} \prod_{k = 1}^{m} \Big( \mathbbm{1}[ t_{k} = u_{k} \le v_{k} ] + \mathbbm{1}[ t_{k} = v_{k} < u_{k} ] \Big) \notag \\ & = \sum_{{\boldsymbol{b}} \in \{ 0, 1 \}^{m}} \sum_{{\boldsymbol{u}} : {\boldsymbol{0}} \le {\boldsymbol{u}} \le {\boldsymbol{r}}} \sum_{{\boldsymbol{v}} : {\boldsymbol{0}} \le {\boldsymbol{v}} \le {\boldsymbol{r}}} \varepsilon_{\langle {\boldsymbol{u}} \rangle}^{{\boldsymbol{s}}} \, \varepsilon_{\langle {\boldsymbol{v}} \rangle}^{{\boldsymbol{s}}} \prod_{k = 1}^{m} \Big( \mathbbm{1}[ t_{k} = u_{k} \le v_{k} ] \mathbbm{1}[ b_{k} = 0 ] + \mathbbm{1}[ t_{k} = v_{k} < u_{k} ] \mathbbm{1} [ b_{k} = 1 ] \Big) \notag \\ & = \sum_{{\boldsymbol{b}} \in \{ 0, 1 \}^{m}} \sum_{{\boldsymbol{u}} : {\boldsymbol{0}} \le {\boldsymbol{u}} \le {\boldsymbol{r}}} \varepsilon_{\langle {\boldsymbol{w}}^{(0)} \rangle}^{{\boldsymbol{s}}} \, \varepsilon_{\langle {\boldsymbol{w}}^{(1)} \rangle}^{{\boldsymbol{s}}} \prod_{k = 1}^{m} \Big( \mathbbm{1}[ t_{k} \le 
u_{k} ] \mathbbm{1}[ b_{k} = 0 ] + \mathbbm{1}[ t_{k} < u_{k} ] \mathbbm{1} [ b_{k} = 1 ] \Big) \label{eq:eps_minus_proof}\end{aligned}$$ for every ${\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}$, where ${\boldsymbol{u}} = (u_{1}, \dots, u_{m})$, ${\boldsymbol{v}} = (v_{1}, \dots, v_{m})$, and ${\boldsymbol{b}} = (b_{1}, \dots, b_{m})$; and both ${\boldsymbol{w}}^{(0)} = (w_{1}^{(0)}, \dots, w_{m}^{(0)})$ and ${\boldsymbol{w}}^{(1)} = (w_{1}^{(1)}, \dots, w_{m}^{(1)})$ are defined as functions of $({\boldsymbol{b}}, {\boldsymbol{t}}, {\boldsymbol{u}})$ so that $$\begin{aligned} w_{k}^{(0)} & = \begin{cases} t_{k} & \mathrm{if} \ b_{k} = 0 , \\ u_{k} & \mathrm{if} \ b_{k} = 1 , \end{cases} \\ w_{k}^{(1)} & = \begin{cases} u_{k} & \mathrm{if} \ b_{k} = 0 , \\ t_{k} & \mathrm{if} \ b_{k} = 1 , \end{cases}\end{aligned}$$ respectively, for each $k = 1, 2, \dots, m$. Defining an $m$-tuple ${\boldsymbol{c}} = (c_{1}, \dots, c_{m})$ by $$\begin{aligned} c_{k} = \begin{cases} a & \mathrm{if} \ k = i , \\ b & \mathrm{if} \ k = j , \\ 0 & \mathrm{otherwise} \end{cases}\end{aligned}$$ for each $k = 1, 2, \dots, m$, we observe that $$\begin{aligned} \theta_{i, j}^{{\boldsymbol{s}}-}(a, b) & = \sum_{{\boldsymbol{t}} : {\boldsymbol{c}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}} \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}-} \notag \\ & \overset{\mathclap{\eqref{eq:eps_minus_proof}}}{=} \sum_{{\boldsymbol{t}} : {\boldsymbol{c}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}} \sum_{{\boldsymbol{b}} \in \{ 0, 1 \}^{m}} \sum_{{\boldsymbol{u}} : {\boldsymbol{0}} \le {\boldsymbol{u}} \le {\boldsymbol{r}}} \varepsilon_{\langle {\boldsymbol{w}}^{(0)} \rangle}^{{\boldsymbol{s}}} \, \varepsilon_{\langle {\boldsymbol{w}}^{(1)} \rangle}^{{\boldsymbol{s}}} \prod_{k = 1}^{m} \Big( \mathbbm{1}[ t_{k} \le u_{k} ] \mathbbm{1}[ b_{k} = 0 ] + \mathbbm{1}[ t_{k} < u_{k} ] \mathbbm{1} [ b_{k} = 1 ] \Big) \notag \\ & = \sum_{{\boldsymbol{t}} : {\boldsymbol{c}} \le
{\boldsymbol{t}} \le {\boldsymbol{r}}} \sum_{{\boldsymbol{b}} \in \{ 0, 1 \}^{m}} \sum_{{\boldsymbol{u}} : {\boldsymbol{0}} \le {\boldsymbol{u}} \le {\boldsymbol{r}}} \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} \, \varepsilon_{\langle {\boldsymbol{u}} \rangle}^{{\boldsymbol{s}}} \prod_{k = 1}^{m} \Big( \mathbbm{1}[ t_{k} \le u_{k} ] \mathbbm{1}[ b_{k} = 0 ] + \mathbbm{1}[ c_{k} \le u_{k} < t_{k} ] \mathbbm{1} [ b_{k} = 1 ] \Big) \notag \\ & = \sum_{{\boldsymbol{t}} : {\boldsymbol{c}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}} \sum_{{\boldsymbol{u}} : {\boldsymbol{0}} \le {\boldsymbol{u}} \le {\boldsymbol{r}}} \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} \, \varepsilon_{\langle {\boldsymbol{u}} \rangle}^{{\boldsymbol{s}}} \prod_{k = 1}^{m} \Big( \mathbbm{1}[ t_{k} \le u_{k} ] + \mathbbm{1}[ c_{k} \le u_{k} < t_{k} ] \Big) \notag \\ & = \sum_{{\boldsymbol{t}} : {\boldsymbol{c}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}} \sum_{{\boldsymbol{u}} : {\boldsymbol{0}} \le {\boldsymbol{u}} \le {\boldsymbol{r}}} \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} \, \varepsilon_{\langle {\boldsymbol{u}} \rangle}^{{\boldsymbol{s}}} \prod_{k = 1}^{m} \mathbbm{1}[ c_{k} \le u_{k} ] \notag \\ & = \sum_{{\boldsymbol{t}} : {\boldsymbol{c}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}} \sum_{{\boldsymbol{u}} : {\boldsymbol{c}} \le {\boldsymbol{u}} \le {\boldsymbol{r}}} \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} \, \varepsilon_{\langle {\boldsymbol{u}} \rangle}^{{\boldsymbol{s}}} \notag \\ & = \bigg( \sum_{{\boldsymbol{t}} : {\boldsymbol{c}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}} \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} \bigg)^{2} \notag \\ & = \theta_{i, j}^{{\boldsymbol{s}}}(a, b)^{2} . 
\label{eq:formulas1_proof}\end{aligned}$$ Similarly, we have $$\begin{aligned} \lambda_{i, j}^{{\boldsymbol{s}}-}(a, b) & = \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , \\ t_{i} \ge a, t_{j} < b } } \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}-} \notag \\ & \overset{\mathclap{\eqref{eq:eps_minus_proof}}}{=} \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , \\ t_{i} \ge a, t_{j} < b } } \sum_{{\boldsymbol{b}} \in \{ 0, 1 \}^{m}} \sum_{{\boldsymbol{u}} : {\boldsymbol{0}} \le {\boldsymbol{u}} \le {\boldsymbol{r}}} \varepsilon_{\langle {\boldsymbol{w}}^{(0)} \rangle}^{{\boldsymbol{s}}} \, \varepsilon_{\langle {\boldsymbol{w}}^{(1)} \rangle}^{{\boldsymbol{s}}} \prod_{k = 1}^{m} \Big( \mathbbm{1}[ t_{k} \le u_{k} ] \mathbbm{1}[ b_{k} = 0 ] + \mathbbm{1}[ t_{k} < u_{k} ] \mathbbm{1} [ b_{k} = 1 ] \Big) \notag \\ & = \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , \\ t_{i} \ge a, t_{j} < b } } \sum_{{\boldsymbol{b}} \in \{ 0, 1 \}^{m}} \sum_{\substack{ {\boldsymbol{u}} : {\boldsymbol{0}} \le {\boldsymbol{u}} \le {\boldsymbol{r}} , \\ u_{j} < b}} \varepsilon_{\langle {\boldsymbol{w}}^{(0)} \rangle}^{{\boldsymbol{s}}} \, \varepsilon_{\langle {\boldsymbol{w}}^{(1)} \rangle}^{{\boldsymbol{s}}} \prod_{k = 1}^{m} \Big( \mathbbm{1}[ t_{k} \le u_{k} ] \mathbbm{1}[ b_{k} = 0 ] + \mathbbm{1}[ t_{k} < u_{k} ] \mathbbm{1} [ b_{k} = 1 ] \Big) \notag \\ & \qquad {} + \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , \\ t_{i} \ge a, t_{j} < b } } \sum_{{\boldsymbol{b}} \in \{ 0, 1 \}^{m}} \sum_{\substack{ {\boldsymbol{u}} : {\boldsymbol{0}} \le {\boldsymbol{u}} \le {\boldsymbol{r}} , \\ u_{j} \ge b}} \varepsilon_{\langle {\boldsymbol{w}}^{(0)} \rangle}^{{\boldsymbol{s}}} \, \varepsilon_{\langle {\boldsymbol{w}}^{(1)} \rangle}^{{\boldsymbol{s}}} \prod_{k = 1}^{m} \Big( 
\mathbbm{1}[ t_{k} \le u_{k} ] \mathbbm{1}[ b_{k} = 0 ] + \mathbbm{1}[ t_{k} < u_{k} ] \mathbbm{1} [ b_{k} = 1 ] \Big) \notag \\ & = \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , \\ t_{i} \ge a, t_{j} < b } } \sum_{{\boldsymbol{b}} \in \{ 0, 1 \}^{m}} \sum_{\substack{ {\boldsymbol{u}} : {\boldsymbol{0}} \le {\boldsymbol{u}} \le {\boldsymbol{r}} , \\ u_{j} < b}} \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} \, \varepsilon_{\langle {\boldsymbol{u}} \rangle}^{{\boldsymbol{s}}} \Big( \mathbbm{1}[ t_{i} \le u_{i} ] \mathbbm{1}[ b_{i} = 0 ] + \mathbbm{1}[ a \le u_{i} < t_{i} ] \mathbbm{1} [ b_{i} = 1 ] \Big) \notag \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad {} \times \prod_{k = 1 : k \neq i}^{m} \Big( \mathbbm{1}[ t_{k} \le u_{k} ] \mathbbm{1}[ b_{k} = 0 ] + \mathbbm{1}[ 0 \le u_{k} < t_{k} ] \mathbbm{1} [ b_{k} = 1 ] \Big) \notag \\ & \qquad {} + \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , \\ t_{i} \ge a, t_{j} < b } } \sum_{{\boldsymbol{b}} \in \{ 0, 1 \}^{m}} \sum_{\substack{ {\boldsymbol{u}} : {\boldsymbol{0}} \le {\boldsymbol{u}} \le {\boldsymbol{r}} , \\ u_{j} \ge b}} \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} \, \varepsilon_{\langle {\boldsymbol{u}} \rangle}^{{\boldsymbol{s}}} \Big( \mathbbm{1}[ a \le u_{i} \le t_{i} ] \mathbbm{1}[ b_{i} = 0 ] + \mathbbm{1}[ t_{i} < u_{i} ] \mathbbm{1} [ b_{i} = 1 ] \Big) \notag \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad {} \times \prod_{k = 1 : k \neq i}^{m} \Big( \mathbbm{1}[ 0 \le u_{k} \le t_{k} ] \mathbbm{1}[ b_{k} = 0 ] + \mathbbm{1}[ t_{k} < u_{k} ] \mathbbm{1} [ b_{k} = 1 ] \Big) \notag \\ & \overset{\text{(a)}}{=} \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , \\ t_{i} \ge a, t_{j} < b } } \sum_{\substack{ {\boldsymbol{u}} : {\boldsymbol{0}} \le {\boldsymbol{u}} \le {\boldsymbol{r}} , \\ u_{j} < b}}
\varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} \, \varepsilon_{\langle {\boldsymbol{u}} \rangle}^{{\boldsymbol{s}}} \Big( \mathbbm{1}[ t_{i} \le u_{i} ] + \mathbbm{1}[ a \le u_{i} < t_{i} ] \Big) \prod_{k = 1 : k \neq i}^{m} \Big( \mathbbm{1}[ t_{k} \le u_{k} ] + \mathbbm{1}[ 0 \le u_{k} < t_{k} ] \Big) \notag \\ & \qquad {} + 2 \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , \\ t_{i} \ge a, t_{j} < b } } \sum_{\substack{ {\boldsymbol{u}} : {\boldsymbol{0}} \le {\boldsymbol{u}} \le {\boldsymbol{r}} , \\ u_{j} \ge b}} \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} \, \varepsilon_{\langle {\boldsymbol{u}} \rangle}^{{\boldsymbol{s}}} \Big( \mathbbm{1}[ a \le u_{i} \le t_{i} ] + \mathbbm{1}[ t_{i} < u_{i} ] \Big) \prod_{k = 1 : k \neq i}^{m} \Big( \mathbbm{1}[ 0 \le u_{k} \le t_{k} ] + \mathbbm{1}[ t_{k} < u_{k} ] \Big) \notag \\ & = \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , \\ t_{i} \ge a, t_{j} < b } } \sum_{\substack{ {\boldsymbol{u}} : {\boldsymbol{0}} \le {\boldsymbol{u}} \le {\boldsymbol{r}} , \\ u_{j} < b}} \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} \, \varepsilon_{\langle {\boldsymbol{u}} \rangle}^{{\boldsymbol{s}}} \, \mathbbm{1}[ a \le u_{i} ] + 2 \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , \\ t_{i} \ge a, t_{j} < b } } \sum_{\substack{ {\boldsymbol{u}} : {\boldsymbol{0}} \le {\boldsymbol{u}} \le {\boldsymbol{r}} , \\ u_{j} \ge b}} \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} \, \varepsilon_{\langle {\boldsymbol{u}} \rangle}^{{\boldsymbol{s}}} \, \mathbbm{1}[ a \le u_{i} ] \notag \\ & = \left( \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , \\ t_{i} \ge a, t_{j} < b } } \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} \right)^{2} + 2 \left( \sum_{\substack{ 
{\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , \\ t_{i} \ge a, t_{j} < b } } \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} \right) \left( \sum_{\substack{ {\boldsymbol{u}} : {\boldsymbol{0}} \le {\boldsymbol{u}} \le {\boldsymbol{r}} , \\ u_{i} \ge a, u_{j} \ge b}} \varepsilon_{\langle {\boldsymbol{u}} \rangle}^{{\boldsymbol{s}}} \right) \notag \\ & = \lambda_{i, j}^{{\boldsymbol{s}}}(a, b)^{2} + 2 \, \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \, \theta_{i, j}^{{\boldsymbol{s}}}(a, b) , \label{eq:formulas2_proof}\end{aligned}$$ where the factor $2$ in (a) comes from the fact that $t_{j} < b$ and $u_{j} \ge b$ imply $\mathbbm{1}[ t_{j} < u_{j} ] = 1$. Since $\lambda_{i, j}^{{\boldsymbol{s}}}(a, b) = \rho_{j, i}^{{\boldsymbol{s}}}(b, a)$, we readily see from that $$\begin{aligned} \rho_{i, j}^{{\boldsymbol{s}}-}(a, b) = \rho_{i, j}^{{\boldsymbol{s}}}(a, b)^{2} + 2 \, \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \, \theta_{i, j}^{{\boldsymbol{s}}}(a, b) . 
\label{eq:formulas3_proof}\end{aligned}$$ Finally, as $\theta_{i, j}^{{\boldsymbol{s}}}(a, b) + \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) + \rho_{i, j}^{{\boldsymbol{s}}}(a, b) + \beta_{i, j}^{{\boldsymbol{s}}}(a, b) = 1$ (see ), it follows from – that $$\begin{aligned} \beta_{i, j}^{{\boldsymbol{s}}-}(a, b) & = 1 - \theta_{i, j}^{{\boldsymbol{s}}-}(a, b) - \lambda_{i, j}^{{\boldsymbol{s}}-}(a, b) - \rho_{i, j}^{{\boldsymbol{s}}-}(a, b) \notag \\ & = 1 - \theta_{i, j}^{{\boldsymbol{s}}}(a, b)^{2} - \big[ \lambda_{i, j}^{{\boldsymbol{s}}}(a, b)^{2} + 2 \, \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \, \theta_{i, j}^{{\boldsymbol{s}}}(a, b) \big] - \big[ \rho_{i, j}^{{\boldsymbol{s}}}(a, b)^{2} + 2 \, \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \, \theta_{i, j}^{{\boldsymbol{s}}}(a, b) \big] \notag \\ & = 1 - \theta_{i, j}^{{\boldsymbol{s}}}(a, b)^{2} - \big[ \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) + \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \big]^{2} - 2 \, \theta_{i, j}^{{\boldsymbol{s}}}(a, b) \, \big[ \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) + \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \big] + 2 \, \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \, \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \notag \\ & = 1 - \big[ \theta_{i, j}^{{\boldsymbol{s}}}(a, b) + \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) + \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \big]^{2} + 2 \, \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \, \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \notag \\ & = \big[ 1 - \theta_{i, j}^{{\boldsymbol{s}}}(a, b) - \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) - \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \big] \, \big[ 1 + \theta_{i, j}^{{\boldsymbol{s}}}(a, b) + \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) + \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \big] + 2 \, \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \, \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \notag \\ & = \beta_{i, j}^{{\boldsymbol{s}}}(a, b) \big[ 1 + \theta_{i, j}^{{\boldsymbol{s}}}(a, b) + \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) + \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \big] + 2 \, \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \, \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \notag \\ & = \beta_{i, j}^{{\boldsymbol{s}}}(a, b) \big[ 2 - \beta_{i, j}^{{\boldsymbol{s}}}(a, b) \big] + 2 \, \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \, \rho_{i, j}^{{\boldsymbol{s}}}(a, b) .\end{aligned}$$ This completes the proof of [Lemma \[lem:formulas\]]{}. Similarly to [Lemma \[lem:martingale\_primepower\]]{}, [Lemma \[lem:formulas\]]{} characterizes the average values of – over a one-step polar transform, as shown in the following lemma. \[lem:quasi-conservation\] For any ${\boldsymbol{s}} \in \{ -, + \}^{\ast}$, $1 \le i < j \le m$, and $a, b \ge 1$, it holds that $$\begin{aligned} \frac{ 1 }{ 2 } \Big[ \theta_{i, j}^{{\boldsymbol{s}}-}(a, b) + \theta_{i, j}^{{\boldsymbol{s}}+}(a, b) \Big] & = \theta_{i, j}^{{\boldsymbol{s}}}(a, b) + \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \, \rho_{i, j}^{{\boldsymbol{s}}}(a, b) , \\ \frac{ 1 }{ 2 } \Big[ \lambda_{i, j}^{{\boldsymbol{s}}-}(a, b) + \lambda_{i, j}^{{\boldsymbol{s}}+}(a, b) \Big] & = \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \, \big[ 1 - \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \big] , \\ \frac{ 1 }{ 2 } \Big[ \rho_{i, j}^{{\boldsymbol{s}}-}(a, b) + \rho_{i, j}^{{\boldsymbol{s}}+}(a, b) \Big] & = \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \, \big[ 1 - \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \big] , \\ \frac{ 1 }{ 2 } \Big[ \beta_{i, j}^{{\boldsymbol{s}}-}(a, b) + \beta_{i, j}^{{\boldsymbol{s}}+}(a, b) \Big] & = \beta_{i, j}^{{\boldsymbol{s}}}(a, b) + \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \, \rho_{i, j}^{{\boldsymbol{s}}}(a, b) .\end{aligned}$$ [Lemma \[lem:quasi-conservation\]]{} follows straightforwardly from [Lemma \[lem:formulas\]]{}. The idea of [Lemma \[lem:quasi-conservation\]]{} comes from the conservation property $[I(W^{-}) + I(W^{+})]/2 = I(W)$; note, however, that in general these quantities are not conserved under the polar transform.
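The averages in [Lemma \[lem:quasi-conservation\]]{} can be verified numerically for $q = 6$ at $(a, b) = (1, 1)$, where $\theta_{1, 2}(1, 1) = \varepsilon_{6}$, $\lambda_{1, 2}(1, 1) = \varepsilon_{2}$, $\rho_{1, 2}(1, 1) = \varepsilon_{3}$, and $\beta_{1, 2}(1, 1) = \varepsilon_{1}$. The sketch below assumes that the minus and plus transforms pair divisors via $\gcd$ and $\operatorname{lcm}$, respectively (the $\gcd$ form matches the proof of [Lemma \[lem:formulas\]]{}; the $\operatorname{lcm}$ form is inferred by duality), and the initial vector is an illustrative choice.

```python
# Sketch: check the one-step averages of theta, lambda, rho, beta for q = 6.
from itertools import product
from math import gcd, isclose

DIV = [1, 2, 3, 6]                         # divisors of q = 6
IDX = {d: i for i, d in enumerate(DIV)}

def lcm(x, y):
    return x * y // gcd(x, y)

def transform(eps, combine):
    """One polar transform: pair divisors and combine by gcd (minus) or lcm (plus)."""
    out = [0.0] * len(DIV)
    for d1, d2 in product(DIV, repeat=2):
        out[IDX[combine(d1, d2)]] += eps[IDX[d1]] * eps[IDX[d2]]
    return out

eps = [0.1, 0.2, 0.3, 0.4]                 # (eps_1, eps_2, eps_3, eps_6), illustrative
minus, plus = transform(eps, gcd), transform(eps, lcm)

beta, lam, rho, theta = eps                # masses at (a, b) = (1, 1)
assert isclose((minus[IDX[6]] + plus[IDX[6]]) / 2, theta + lam * rho)
assert isclose((minus[IDX[2]] + plus[IDX[2]]) / 2, lam * (1 - rho))
assert isclose((minus[IDX[3]] + plus[IDX[3]]) / 2, rho * (1 - lam))
assert isclose((minus[IDX[1]] + plus[IDX[1]]) / 2, beta + lam * rho)
```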
In fact, the equalities in [Lemma \[lem:quasi-conservation\]]{} yield the sub- and supermartingale-like inequalities $$\begin{aligned} \frac{ 1 }{ 2 } \Big[ \theta_{i, j}^{{\boldsymbol{s}}-}(a, b) + \theta_{i, j}^{{\boldsymbol{s}}+}(a, b) \Big] & \ge \theta_{i, j}^{{\boldsymbol{s}}}(a, b) , \label{ineq:sub_theta} \\ \frac{ 1 }{ 2 } \Big[ \lambda_{i, j}^{{\boldsymbol{s}}-}(a, b) + \lambda_{i, j}^{{\boldsymbol{s}}+}(a, b) \Big] & \le \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) , \label{ineq:super_lambda} \\ \frac{ 1 }{ 2 } \Big[ \rho_{i, j}^{{\boldsymbol{s}}-}(a, b) + \rho_{i, j}^{{\boldsymbol{s}}+}(a, b) \Big] & \le \rho_{i, j}^{{\boldsymbol{s}}}(a, b) , \label{ineq:super_rho} \\ \frac{ 1 }{ 2 } \Big[ \beta_{i, j}^{{\boldsymbol{s}}-}(a, b) + \beta_{i, j}^{{\boldsymbol{s}}+}(a, b) \Big] & \ge \beta_{i, j}^{{\boldsymbol{s}}}(a, b) \label{ineq:sub_beta}\end{aligned}$$ when the sequence ${\boldsymbol{s}} \in \{ -, + \}^{\ast}$ is drawn as a uniformly distributed Bernoulli process, i.e., when $V_{{\boldsymbol{\varepsilon}}}^{{\boldsymbol{s}}}$ is regarded as a polarization process. The following lemma establishes a useful relation between $\lambda_{i, j}^{{\boldsymbol{s}}}(a, b)$ and $\rho_{i, j}^{{\boldsymbol{s}}}(a, b)$; it shows that the inequality between them is invariant under arbitrary polar transforms ${\boldsymbol{s}} \in \{ -, + \}^{\ast}$. \[lem:ineq\] For each $1 \le i < j \le m$ and $a, b \ge 1$, it holds that $\lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \le \rho_{i, j}^{{\boldsymbol{s}}}(a, b)$ for every ${\boldsymbol{s}} \in \{ -, + \}^{\ast}$ if and only if $\lambda_{i, j}(a, b) \le \rho_{i, j}(a, b)$. Let $1 \le i < j \le m$ and $a, b \ge 1$ be given. By the symmetry $\lambda_{i, j}^{{\boldsymbol{s}}}(a, b) = \rho_{j, i}^{{\boldsymbol{s}}}(b, a)$, it suffices to prove the “if” part. We prove the lemma by induction on the length of ${\boldsymbol{s}}$. If the sequence ${\boldsymbol{s}}$ is empty, then the claim is trivial.
Hence, it suffices to show that if $\lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \le \rho_{i, j}^{{\boldsymbol{s}}}(a, b)$, then both $\lambda_{i, j}^{{\boldsymbol{s}}-}(a, b) \le \rho_{i, j}^{{\boldsymbol{s}}-}(a, b)$ and $\lambda_{i, j}^{{\boldsymbol{s}}+}(a, b) \le \rho_{i, j}^{{\boldsymbol{s}}+}(a, b)$ hold. It follows from [Lemma \[lem:formulas\]]{} that $$\begin{aligned} \lambda_{i, j}^{{\boldsymbol{s}}-}(a, b) & = \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \big[ \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) + 2 \, \theta_{i, j}^{{\boldsymbol{s}}}(a, b) \big] \notag \\ & \overset{\mathclap{\text{(a)}}}{\le} \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \big[ \rho_{i, j}^{{\boldsymbol{s}}}(a, b) + 2 \, \theta_{i, j}^{{\boldsymbol{s}}}(a, b) \big] \notag \\ & = \rho_{i, j}^{{\boldsymbol{s}}-}(a, b) , \label{eq:minus_ineq}\end{aligned}$$ where (a) follows by the hypothesis $\lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \le \rho_{i, j}^{{\boldsymbol{s}}}(a, b)$. Similar to , we also have $$\begin{aligned} \lambda_{i, j}^{{\boldsymbol{s}}+}(a, b) & = \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \big[ \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) + 2 \, \beta_{i, j}^{{\boldsymbol{s}}}(a, b) \big] \notag \\ & \le \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \big[ \rho_{i, j}^{{\boldsymbol{s}}}(a, b) + 2 \, \beta_{i, j}^{{\boldsymbol{s}}}(a, b) \big] \notag \\ & = \rho_{i, j}^{{\boldsymbol{s}}+}(a, b) . \label{eq:plus_ineq}\end{aligned}$$ This completes the proof of [Lemma \[lem:ineq\]]{}. 
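The invariance in [Lemma \[lem:ineq\]]{} can be stress-tested by exhausting all sign sequences of a fixed length (a sketch for $q = 6$, where $\lambda_{1, 2}(1, 1) = \varepsilon_{2}$ and $\rho_{1, 2}(1, 1) = \varepsilon_{3}$; the $\gcd$/$\operatorname{lcm}$ form of the minus/plus transforms and the initial vector are assumptions for illustration):

```python
# Sketch: the order eps_2^s <= eps_3^s survives every transform sequence s.
from itertools import product
from math import gcd

DIV = [1, 2, 3, 6]
IDX = {d: i for i, d in enumerate(DIV)}

def lcm(x, y):
    return x * y // gcd(x, y)

def transform(eps, combine):
    out = [0.0] * len(DIV)
    for d1, d2 in product(DIV, repeat=2):
        out[IDX[combine(d1, d2)]] += eps[IDX[d1]] * eps[IDX[d2]]
    return out

eps0 = [0.1, 0.2, 0.3, 0.4]         # lambda = eps_2 = 0.2 <= rho = eps_3 = 0.3
for signs in product([gcd, lcm], repeat=4):   # all 16 length-4 sequences s
    eps = eps0
    for op in signs:
        eps = transform(eps, op)
    assert eps[IDX[2]] <= eps[IDX[3]] + 1e-12   # lambda^s <= rho^s persists
```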
We now define the average value of – as follows: $$\begin{aligned} \mu_{i, j}^{(n)}[\theta](a, b) & \coloneqq \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \theta_{i, j}^{{\boldsymbol{s}}}(a, b) , \label{def:mu_theta} \\ \mu_{i, j}^{(n)}[\lambda](a, b) & \coloneqq \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) , \label{def:mu_lambda} \\ \mu_{i, j}^{(n)}[\rho](a, b) & \coloneqq \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \rho_{i, j}^{{\boldsymbol{s}}}(a, b) , \label{def:mu_rho} \\ \mu_{i, j}^{(n)}[\beta](a, b) & \coloneqq \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \beta_{i, j}^{{\boldsymbol{s}}}(a, b) . \label{def:mu_beta}\end{aligned}$$ For convenience, when $n = 0$, we write $\mu_{i, j}^{(0)}[\theta](a, b) \coloneqq \theta_{i, j}(a, b)$, $\mu_{i, j}^{(0)}[\lambda](a, b) \coloneqq \lambda_{i, j}(a, b)$, $\mu_{i, j}^{(0)}[\rho](a, b) \coloneqq \rho_{i, j}(a, b)$, and $\mu_{i, j}^{(0)}[\beta](a, b) \coloneqq \beta_{i, j}(a, b)$. Unlike [Lemma \[lem:martingale\_primepower\]]{}, these quantities are not preserved with respect to $n \in \mathbb{N}$. However, as shown in the following lemma, the difference between $\mu_{i, j}^{(n)}[\lambda](a, b)$ and $\mu_{i, j}^{(n)}[\rho](a, b)$, as well as several sums of pairs of these quantities, is preserved with respect to $n \in \mathbb{N}$.
\[lem:martingale\] For any $n \ge 0$, $1 \le i < j \le m$, and $a, b \ge 1$, it holds that $$\begin{aligned} \mu_{i, j}^{(n)}[\lambda](a, b) - \mu_{i, j}^{(n)}[\rho](a, b) & = \lambda_{i, j}(a, b) - \rho_{i, j}(a, b) , \label{eq:lambda_minus_rho} \\ \mu_{i, j}^{(n)}[\theta](a, b) + \mu_{i, j}^{(n)}[\lambda](a, b) & = \theta_{i, j}(a, b)+ \lambda_{i, j}(a, b) , \label{eq:theta_plus_lambda} \\ \mu_{i, j}^{(n)}[\theta](a, b) + \mu_{i, j}^{(n)}[\rho](a, b) & = \theta_{i, j}(a, b) + \rho_{i, j}(a, b) , \label{eq:theta_plus_rho} \\ \mu_{i, j}^{(n)}[\beta](a, b) + \mu_{i, j}^{(n)}[\lambda](a, b) & = \beta_{i, j}(a, b) + \lambda_{i, j}(a, b) , \label{eq:beta_plus_lambda} \\ \mu_{i, j}^{(n)}[\beta](a, b) + \mu_{i, j}^{(n)}[\rho](a, b) & = \beta_{i, j}(a, b) + \rho_{i, j}(a, b) . \label{eq:beta_plus_rho} \end{aligned}$$ Let $1 \le i < j \le m$ and $a, b \ge 1$ be given. For each $n \in \mathbb{N}_{0}$, we have $$\begin{aligned} \mu_{i, j}^{(n+1)}[\lambda](a, b) - \mu_{i, j}^{(n+1)}[\rho](a, b) & = \frac{ 1 }{ 2^{n+1} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \Big( \lambda_{i, j}^{{\boldsymbol{s}}-}(a, b) + \lambda_{i, j}^{{\boldsymbol{s}}+}(a, b) \Big) - \frac{ 1 }{ 2^{n+1} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \Big( \rho_{i, j}^{{\boldsymbol{s}}-}(a, b) + \rho_{i, j}^{{\boldsymbol{s}}+}(a, b) \Big) \notag \\ & \overset{\mathclap{\text{(a)}}}{=} \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \big[ 1 - \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \big] - \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \big[ 1 - \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \big] \notag \\ & = \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) - \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \notag \\ & = \mu_{i, j}^{(n)}[\lambda](a, b) - \mu_{i, j}^{(n)}[\rho](a, b) , 
\label{eq:proof_martingale}\end{aligned}$$ where (a) follows by [Lemma \[lem:quasi-conservation\]]{}. This proves the first equality by induction. The remaining equalities – can be proved similarly via [Lemma \[lem:quasi-conservation\]]{}, as in . This completes the proof of [Lemma \[lem:martingale\]]{}. [Lemma \[lem:martingale\]]{} implies that the left-hand sides of – have martingale-like properties with respect to a polarization process $V_{{\boldsymbol{\varepsilon}}}^{{\boldsymbol{s}}}$ when ${\boldsymbol{s}}$ is regarded as a uniformly distributed Bernoulli process. It is worth mentioning that [Lemma \[lem:martingale\]]{} is crucial for evaluating the limits of – as $n \to \infty$. The existence of these limits is ensured by the following lemma. \[lem:convergent\] The four sequences $( \mu_{i, j}^{(n)}[\theta](a, b) )_{n=1}^{\infty}$, $( \mu_{i, j}^{(n)}[\lambda](a, b) )_{n=1}^{\infty}$, $( \mu_{i, j}^{(n)}[\rho](a, b) )_{n=1}^{\infty}$, and $( \mu_{i, j}^{(n)}[\beta](a, b) )_{n=1}^{\infty}$ are convergent for each $1 \le i < j \le m$ and $a, b \ge 1$. Let $1 \le i < j \le m$ and $a, b \ge 1$ be given. It follows from – that

- the number $\mu_{i, j}^{(n)}[\theta](a, b)$ is nondecreasing as $n$ increases;
- the number $\mu_{i, j}^{(n)}[\lambda](a, b)$ is nonincreasing as $n$ increases;
- the number $\mu_{i, j}^{(n)}[\rho](a, b)$ is nonincreasing as $n$ increases; and
- the number $\mu_{i, j}^{(n)}[\beta](a, b)$ is nondecreasing as $n$ increases.

Therefore, since these numbers are bounded as $$\begin{aligned} 0 & \le \mu_{i, j}^{(n)}[\theta](a, b) \le 1 , \\ 0 & \le \mu_{i, j}^{(n)}[\lambda](a, b) \le 1 , \\ 0 & \le \mu_{i, j}^{(n)}[\rho](a, b) \le 1 , \\ 0 & \le \mu_{i, j}^{(n)}[\beta](a, b) \le 1\end{aligned}$$ for every $n \in \mathbb{N}_{0}$, we obtain the claim of [Lemma \[lem:convergent\]]{}.
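The one-step conservation behind [Lemma \[lem:martingale\]]{} and the monotonicity behind [Lemma \[lem:convergent\]]{} can both be spot-checked numerically. The Python sketch below (names are ours) again uses the recursions of [Lemma \[lem:formulas\]]{} for $\lambda$ and their mirrored versions for $\rho$, together with the fact that $\theta$, $\lambda$, $\rho$, $\beta$ are the masses of four disjoint, exhaustive events and hence sum to one.

```python
import random

def step_averages(theta, lam, rho, beta):
    """Average of the minus/plus transforms of lam and rho after one polar
    step (Lemma [lem:formulas] for lam; the mirrored recursions for rho)."""
    lam_avg = (lam * (lam + 2 * theta) + lam * (lam + 2 * beta)) / 2
    rho_avg = (rho * (rho + 2 * theta) + rho * (rho + 2 * beta)) / 2
    return lam_avg, rho_avg

random.seed(0)
for _ in range(10_000):
    v = [random.random() for _ in range(4)]
    total = sum(v)
    theta, lam, rho, beta = (x / total for x in v)  # quadrant masses sum to 1
    lam_avg, rho_avg = step_averages(theta, lam, rho, beta)
    # the difference lam - rho is conserved, as in the proof above
    assert abs((lam_avg - rho_avg) - (lam - rho)) < 1e-12
    # the averages themselves only decrease: lam_avg = lam * (1 - rho) <= lam
    assert lam_avg <= lam + 1e-12 and rho_avg <= rho + 1e-12
```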
By [Lemma \[lem:convergent\]]{}, we can define the following limits: $$\begin{aligned} \mu_{i, j}^{(\infty)}[\theta](a, b) & \coloneqq \lim_{n \to \infty} \mu_{i, j}^{(n)}[\theta](a, b) , \\ \mu_{i, j}^{(\infty)}[\lambda](a, b) & \coloneqq \lim_{n \to \infty} \mu_{i, j}^{(n)}[\lambda](a, b) , \\ \mu_{i, j}^{(\infty)}[\rho](a, b) & \coloneqq \lim_{n \to \infty} \mu_{i, j}^{(n)}[\rho](a, b) , \\ \mu_{i, j}^{(\infty)}[\beta](a, b) & \coloneqq \lim_{n \to \infty} \mu_{i, j}^{(n)}[\beta](a, b) .\end{aligned}$$ The following lemma shows that these limits admit closed-form expressions in terms of the initial probability vector ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$. \[lem:fujisaki\] For any $1 \le i < j \le m$ and $a, b \ge 1$, it holds that $$\begin{aligned} \mu_{i, j}^{(\infty)}[\theta](a, b) & = \theta_{i, j}(a, b) + \min\{ \lambda_{i, j}(a, b), \rho_{i, j}(a, b) \} , \label{eq:theta_inf} \\ \mu_{i, j}^{(\infty)}[\lambda](a, b) & = \big| \lambda_{i, j}(a, b) - \rho_{i, j}(a, b) \big|^{+} , \label{eq:lambda_inf} \\ \mu_{i, j}^{(\infty)}[\rho](a, b) & = \big| \rho_{i, j}(a, b) - \lambda_{i, j}(a, b) \big|^{+} , \label{eq:rho_inf} \\ \mu_{i, j}^{(\infty)}[\beta](a, b) & = \beta_{i, j}(a, b) + \min\{ \lambda_{i, j}(a, b), \rho_{i, j}(a, b) \} , \label{eq:beta_inf} \end{aligned}$$ where $| c |^{+} \coloneqq \max\{ 0, c \}$ for $c \in \mathbb{R}$. Let $1 \le i < j \le m$ and $a, b \ge 1$ be given. Since $\lambda_{i, j}^{{\boldsymbol{s}}}(a, b) = \rho_{j, i}^{{\boldsymbol{s}}}(b, a)$, we may assume without loss of generality that $\lambda_{i, j}(a, b) \le \rho_{i, j}(a, b)$.
A simple calculation yields $$\begin{aligned} \mu_{i, j}^{(n+1)}[\lambda](a, b) & = \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \frac{ 1 }{ 2 } \Big( \lambda_{i, j}^{{\boldsymbol{s}}-}(a, b) + \lambda_{i, j}^{{\boldsymbol{s}}+}(a, b) \Big) \notag \\ & \overset{\mathclap{\text{(a)}}}{=} \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \big[ 1 - \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \big] \notag \\ & \overset{\mathclap{\text{(b)}}}{\le} \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \big[1 - \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \big] \notag \\ & \overset{\mathclap{\text{(c)}}}{=} \mu_{i, j}^{(n)}[\lambda](a, b) - \nu_{i, j}^{(n)}[\lambda](a, b) , \label{eq:lambda_mu_nu}\end{aligned}$$ where (a) follows by [Lemma \[lem:quasi-conservation\]]{}, (b) follows by [Lemma \[lem:ineq\]]{}, and (c) follows by the definition of the second moment: $$\begin{aligned} \nu_{i, j}^{(n)}[\lambda](a, b) \coloneqq \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \lambda_{i, j}^{{\boldsymbol{s}}}(a, b)^{2} .\end{aligned}$$ It follows from that $$\begin{aligned} 0 \le \nu_{i, j}^{(n)}[\lambda](a, b) \le \mu_{i, j}^{(n)}[\lambda](a, b) - \mu_{i, j}^{(n+1)}[\lambda](a, b) , \label{eq:convergence_nu}\end{aligned}$$ and the squeeze theorem shows that $\nu_{i, j}^{(n)}[\lambda](a, b) \to 0$ as $n \to \infty$, because $\mu_{i, j}^{(n)}[\lambda](a, b) - \mu_{i, j}^{(n+1)}[\lambda](a, b) \to 0$ as $n \to \infty$ (cf. [Lemma \[lem:convergent\]]{}). 
On the other hand, we observe that $$\begin{aligned} \mu_{i, j}^{(n)}[\lambda](a, b)^{2} & = \Bigg[ \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \Bigg]^{2} \notag \\ & = \frac{ 1 }{ 2^{2n} } \sum_{{\boldsymbol{s}}_{1} \in \{ -, + \}^{n}} \Bigg[ \lambda_{i, j}^{{\boldsymbol{s}}_{1}}(a, b)^{2} + \sum_{\substack{ {\boldsymbol{s}}_{2} \in \{ -, + \}^{n} : \\ {\boldsymbol{s}}_{2} \neq {\boldsymbol{s}}_{1} }}\lambda_{i, j}^{{\boldsymbol{s}}_{1}}(a, b) \, \lambda_{i, j}^{{\boldsymbol{s}}_{2}}(a, b) \Bigg] \notag \\ & \le \frac{ 1 }{ 2^{2n} } \sum_{{\boldsymbol{s}}_{1} \in \{ -, + \}^{n}} \Bigg[ \lambda_{i, j}^{{\boldsymbol{s}}_{1}}(a, b)^{2} + \sum_{\substack{ {\boldsymbol{s}}_{2} \in \{ -, + \}^{n} : \\ \lambda_{i, j}^{{\boldsymbol{s}}_{2}}(a, b) \ge \lambda_{i, j}^{{\boldsymbol{s}}_{1}}(a, b) }} \lambda_{i, j}^{{\boldsymbol{s}}_{2}}(a, b)^{2} + \sum_{\substack{ {\boldsymbol{s}}_{3} \in \{ -, + \}^{n} : \\ \lambda_{i, j}^{{\boldsymbol{s}}_{3}}(a, b) < \lambda_{i, j}^{{\boldsymbol{s}}_{1}}(a, b) }} \lambda_{i, j}^{{\boldsymbol{s}}_{1}}(a, b)^{2} \Bigg] \notag \\ & \le \frac{ 1 }{ 2^{2n} } \sum_{{\boldsymbol{s}}_{1} \in \{ -, + \}^{n}} \Bigg[ \lambda_{i, j}^{{\boldsymbol{s}}_{1}}(a, b)^{2} + \sum_{{\boldsymbol{s}}_{2} \in \{ -, + \}^{n}} \lambda_{i, j}^{{\boldsymbol{s}}_{2}}(a, b)^{2} + (2^{n}-1) \, \lambda_{i, j}^{{\boldsymbol{s}}_{1}}(a, b)^{2} \Bigg] \notag \\ & = 2 \, \nu_{i, j}^{(n)}[\lambda](a, b) ,\end{aligned}$$ which implies that $$\begin{aligned} 0 \le \mu_{i, j}^{(n)}[\lambda](a, b) \le \sqrt{ 2 \, \nu_{i, j}^{(n)}[\lambda](a, b) } . \label{ineq:Holder}\end{aligned}$$ Note that the second inequality of can be seen as a version of Hölder’s inequality. Then, it also follows by the squeeze theorem that $\mu_{i, j}^{(\infty)}[\lambda](a, b) = 0$, because $\nu_{i, j}^{(n)}[\lambda](a, b) \to 0$ as $n \to \infty$ (cf. ). 
Hence, we have $$\begin{aligned} \mu_{i, j}^{(\infty)}[\rho](a, b) & = \mu_{i, j}^{(\infty)}[\rho](a, b) - \mu_{i, j}^{(\infty)}[\lambda](a, b) \notag \\ & = \lim_{n \to \infty} \Big( \mu_{i, j}^{(n)}[\rho](a, b) - \mu_{i, j}^{(n)}[\lambda](a, b) \Big) \notag \\ & \overset{\mathclap{\text{(a)}}}{=} \rho_{i, j}(a, b) - \lambda_{i, j}(a, b) , \label{eq:deriving_mu-rho-infty} \\ \mu_{i, j}^{(\infty)}[\theta](a, b) & = \mu_{i, j}^{(\infty)}[\theta](a, b) + \mu_{i, j}^{(\infty)}[\lambda](a, b) \notag \\ & = \lim_{n \to \infty} \Big( \mu_{i, j}^{(n)}[\theta](a, b) + \mu_{i, j}^{(n)}[\lambda](a, b) \Big) \notag \\ & \overset{\mathclap{\text{(b)}}}{=} \theta_{i, j}(a, b) + \lambda_{i, j}(a, b) , \\ \mu_{i, j}^{(\infty)}[\beta](a, b) & = \mu_{i, j}^{(\infty)}[\beta](a, b) + \mu_{i, j}^{(\infty)}[\lambda](a, b) \notag \\ & = \lim_{n \to \infty} \Big( \mu_{i, j}^{(n)}[\beta](a, b) + \mu_{i, j}^{(n)}[\lambda](a, b) \Big) \notag \\ & \overset{\mathclap{\text{(c)}}}{=} \beta_{i, j}(a, b) + \lambda_{i, j}(a, b) ,\end{aligned}$$ where (a)–(c) follow by [Lemma \[lem:martingale\]]{}. The counterpart hypothesis $\lambda_{i, j}(a, b) \ge \rho_{i, j}(a, b)$ is handled symmetrically, and we obtain –. This completes the proof of [Lemma \[lem:fujisaki\]]{}. If $q$ is a semiprime, i.e., if $q = p_{1} p_{2}$ for some distinct prime numbers $p_{1}$ and $p_{2}$, then [Lemma \[lem:fujisaki\]]{} immediately yields the asymptotic distribution $( \mu_{d}^{(\infty)} )_{d|q}$ defined in , as shown in the following example. \[ex:asymptotic\_distribution\_6\] Consider the case where $q = 6 = 2 \cdot 3$ (see Examples \[ex:sahebi\_pradhan\]–\[ex:theta\_lambda\_rho\_beta\_6\]).
It follows from of [Example \[ex:theta\_lambda\_rho\_beta\_6\]]{} and [Lemma \[lem:fujisaki\]]{} that $$\begin{aligned} \left\{ \begin{array}{l} \mu_{6}^{(\infty)} = \mu_{1, 2}^{(\infty)}[\theta](1, 1) = \varepsilon_{6} + \min\{ \varepsilon_{2}, \varepsilon_{3} \} , \\ \mu_{2}^{(\infty)} = \mu_{1, 2}^{(\infty)}[\lambda](1, 1) = | \varepsilon_{2} - \varepsilon_{3} |^{+} , \\ \mu_{3}^{(\infty)} = \mu_{1, 2}^{(\infty)}[\rho](1, 1) = | \varepsilon_{3} - \varepsilon_{2} |^{+} , \\ \mu_{1}^{(\infty)} = \mu_{1, 2}^{(\infty)}[\beta](1, 1) = \varepsilon_{1} + \min\{ \varepsilon_{2}, \varepsilon_{3} \} \end{array} \right.\end{aligned}$$ for every initial probability vector ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q} = (\varepsilon_{1}, \varepsilon_{2}, \varepsilon_{3}, \varepsilon_{6})$. Therefore, the asymptotic distribution of [Fig. \[subfig:maec\_6\_case1\]]{} is given by $( \mu_{1}^{(\infty)}, \mu_{2}^{(\infty)}, \mu_{3}^{(\infty)}, \mu_{6}^{(\infty)} ) = ( 3/10, 0, 3/10, 2/5 )$; and the asymptotic distribution of [Fig. \[subfig:maec\_6\_case2\]]{} is given by $( \mu_{1}^{(\infty)}, \mu_{2}^{(\infty)}, \mu_{3}^{(\infty)}, \mu_{6}^{(\infty)} ) = ( 1/2, 0, 0, 1/2 )$. Similar to –, the following theorem shows that the limit $\mu_{d}^{(\infty)}$ defined in always exists for each $d|q$, and the asymptotic distribution[^25] $( \mu_{d}^{(\infty)} )_{d|q}$ can be algorithmically calculated. 
Algorithm \[alg:main\] begins with the following initialization (Lines 1–3): initialize $( \mu_{d}^{(\infty)} )_{d|q}$ to the zero vector $(0, \dots, 0)$; set $\xi \longleftarrow 0$; and set ${\boldsymbol{t}} = (t_{1}, \dots, t_{m}) \longleftarrow {\boldsymbol{0}} = (0, \dots, 0)$. \[th:mu\_d\] The probability vector $( \mu_{d}^{(\infty)} )_{d|q}$ can be calculated by Algorithm \[alg:main\] running in[^26] $\mathrm{O}( \omega(q) \, \Omega(q) \, \tau( q ) )$, where $\omega( q ) \le m$ denotes the number of distinct prime factors of $q$; $\Omega(q) \coloneqq \sum_{i = 1}^{m} r_{i}$ denotes the number of prime factors of $q$ with multiplicity; and $\tau( q ) \coloneqq \prod_{i = 1}^{m} (r_{i}+1)$ denotes the number of positive divisors of $q$. Note that even if $\omega( q ) = 1$, i.e., even if $q$ is a prime power, Algorithm \[alg:main\] still works well by setting $m = 2$ and $r_{2} = 0$, i.e., the input alphabet size is denoted by $q = p_{1}^{r_{1}} = p_{1}^{r_{1}} p_{2}^{0} = p_{1}^{r_{1}} p_{2}^{r_{2}}$. However, fortunately, [Theorem \[th:primepower\]]{} of [Section \[sect:primepower\]]{} shows that $( \mu_{d}^{(\infty)} )_{d|q} = ( \varepsilon_{d} )_{d|q}$, and we do not need to use Algorithm \[alg:main\] in the case where $q$ is a prime power. If $\omega( q ) \ge 2$, then $m = \omega(q)$ is sufficient. First, suppose that ${\boldsymbol{t}}^{(0)} = (t_{1}^{(0)}, \dots, t_{m}^{(0)}) = (0, \dots, 0) = {\boldsymbol{0}}$ as in Line 3 of Algorithm \[alg:main\]. That is, consider the first step of the while loop in Lines 4–15 of Algorithm \[alg:main\]. If $\lambda_{i, j}(t_{i}^{(0)}+1, t_{j}^{(0)}+1) = \lambda_{i, j}(1, 1) \le \rho_{i, j}(1, 1) = \rho_{i, j}(t_{i}^{(0)}+1, t_{j}^{(0)}+1)$ as in Line 7 of Algorithm \[alg:main\], then it follows from [Lemma \[lem:fujisaki\]]{} that $$\begin{aligned} \mu_{i, j}^{(\infty)}[\lambda](t_{i}^{(0)}+1, t_{j}^{(0)}+1) = \mu_{i, j}^{(\infty)}[\lambda](1, 1) = 0 .
\label{eq:alg1_lambda0}\end{aligned}$$ Given that $$\begin{aligned} \mu_{i, j}^{(n)}[\lambda](a, b) = \sum_{{\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}, t_{i} \ge a, t_{j} < b} \mu_{\langle {\boldsymbol{t}} \rangle}^{(n)} ,\end{aligned}$$ Equation implies that $ \mu_{\langle {\boldsymbol{t}} \rangle}^{(\infty)} = 0 $ for every ${\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}$ satisfying $0 = t_{i}^{(0)} < t_{i} \le r_{i}$ and $t_{j} \le t_{j}^{(0)} = 0$. Similarly, we observe that if $\lambda_{i, j}(t_{i}^{(0)}+1, t_{j}^{(0)}+1) = \lambda_{i, j}(1, 1) > \rho_{i, j}(1, 1) = \rho_{i, j}(t_{i}^{(0)}+1, t_{j}^{(0)}+1)$ as in Line 10 of Algorithm \[alg:main\], then $ \mu_{\langle {\boldsymbol{t}} \rangle}^{(\infty)} = 0 $ for every ${\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}$ satisfying $t_{i} \le t_{i}^{(0)} = 0$ and $0 = t_{j}^{(0)} < t_{j} \le r_{j}$. Therefore, by the while loop in Lines 5–12 of Algorithm \[alg:main\], one can obtain the integer $k$ such that for each $1 \le k^{\prime} \le m$ satisfying $k^{\prime} \neq k$, it holds that $ \mu_{\langle {\boldsymbol{t}} \rangle}^{(\infty)} = 0 $ for every ${\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}$ satisfying $t_{k} \le t_{k}^{(0)} = 0$ and $0 = t_{k^{\prime}}^{(0)} < t_{k^{\prime}} \le r_{k^{\prime}}$. Note that $l = k$ if $k < m$.
Given that $$\begin{aligned} \mu_{i, j}^{(n)}[\beta](a, b) = \sum_{{\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}, t_{i} < a, t_{j} < b} \mu_{\langle {\boldsymbol{t}} \rangle}^{(n)} , \label{eq:mu_beta_eps}\end{aligned}$$ we have $$\begin{aligned} \mu_{\langle {\boldsymbol{t}}^{(0)} \rangle}^{(\infty)} & = \mu_{l, m}^{(\infty)}[\beta](t_{l}^{(0)}+1, t_{m}^{(0)}+1) = \mu_{l, m}^{(\infty)}[\beta](1, 1) , \label{eq:sol_alg_first1} \\ \mu_{\langle {\boldsymbol{t}} \rangle}^{(\infty)} & = 0 \qquad \mathrm{for} \ \mathrm{every} \ {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} \ \mathrm{satisfying} \ t_{k} = 0 \ \mathrm{and} \ 1 \le t_{k^{\prime}} \le r_{k^{\prime}} \ \mathrm{for} \ \mathrm{some} \ k^{\prime} \neq k . \label{eq:sol_alg_first2} \end{aligned}$$ Note that it follows from [Lemma \[lem:fujisaki\]]{} that $$\begin{aligned} \mu_{l, m}^{(\infty)}[\beta](1, 1) = \beta_{l, m}(1, 1) + \min\{ \lambda_{l, m}(1, 1), \rho_{l, m}(1, 1) \} .\end{aligned}$$ Therefore, by the first step ${\boldsymbol{t}}^{(0)} = {\boldsymbol{0}}$ of the while loop in Lines 4–15 of Algorithm \[alg:main\], one can obtain $\mu_{\langle {\boldsymbol{t}} \rangle}^{(\infty)}$ for every ${\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}$ satisfying $t_{k} \le t_{k}^{(0)} = 0$, as in and . To continue the while loop, after ${\boldsymbol{t}}^{(1)} = (t_{1}^{(1)}, \dots, t_{m}^{(1)})$ is created in Line 15 of Algorithm \[alg:main\] as $$\begin{aligned} t_{k^{\prime}}^{(1)} = \begin{cases} t_{k^{\prime}}^{(0)} & \mathrm{if} \ k^{\prime} \neq k , \\ t_{k^{\prime}}^{(0)} + 1 & \mathrm{if} \ k^{\prime} = k , \end{cases}\end{aligned}$$ for each $1 \le k^{\prime} \le m$, we go back to Line 4 of Algorithm \[alg:main\] whenever $0 \le \xi < 1$. The case $\xi = 1$ occurs if and only if $\beta_{l, m}(1, 1) = 1$.
In this case, we have $\mu_{\langle {\boldsymbol{t}}^{(0)} \rangle}^{(\infty)} = 1$ and $\mu_{\langle {\boldsymbol{t}} \rangle}^{(\infty)} = 0$ for every ${\boldsymbol{t}} \neq {\boldsymbol{t}}^{(0)}$; and we just finish the algorithm. Second, suppose that for some ${\boldsymbol{0}} \le {\boldsymbol{t}}^{(h)} = (t_{1}^{(h)}, \dots, t_{m}^{(h)}) \le {\boldsymbol{r}}$ with $0 \le h \le \Omega( q )$, the value $\mu_{\langle {\boldsymbol{t}} \rangle}^{(\infty)}$ has been already solved for every ${\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}$ satisfying $t_{\bar{k}} < t_{\bar{k}}^{(h)}$ for some $1 \le \bar{k} \le m$. That is, consider the $h$th-step of the while loop in Lines 4–15 of Algorithm \[alg:main\]. By Lines 2 and 14 of Algorithm \[alg:main\], it follows that $$\begin{aligned} \xi & = \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , \\ t_{\bar{k}} < t_{\bar{k}}^{(h)} \, \text{for some} \, 1 \le \bar{k} \le m }} \mu_{\langle {\boldsymbol{t}} \rangle}^{(\infty)} \notag \\ & = \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , \\ t_{\bar{k}} < t_{\bar{k}}^{(h)} \, \text{for some} \, 1 \le \bar{k} \le m }} \varepsilon_{\langle {\boldsymbol{t}} \rangle} . \label{eq:xi_h}\end{aligned}$$ Similar to the previous paragraph, by the while loop in Line 6–12 of Algorithm \[alg:main\], one can obtain the integer $k$ such that for each $1 \le k^{\prime} \le m$ satisfying $k^{\prime} \neq k$, it holds that $ \mu_{\langle {\boldsymbol{t}} \rangle}^{(\infty)} = 0 $ for every ${\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}$ satisfying $t_{k} \le t_{k}^{(h)}$ and $t_{k^{\prime}}^{(h)} < t_{k^{\prime}} \le r_{k^{\prime}}$. Note also that $l = k$ if $k < m$. 
Therefore, given that , it follows from that $$\begin{aligned} \mu_{\langle {\boldsymbol{t}}^{(h)} \rangle}^{(\infty)} & = \mu_{l, m}^{(\infty)}[\beta](t_{l}^{(h)}+1, t_{m}^{(h)}+1) - \xi , \\ \mu_{\langle {\boldsymbol{t}} \rangle}^{(\infty)} & = 0 \qquad \mathrm{for} \ \mathrm{every} \ {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} \ \mathrm{satisfying} \ t_{k} \le t_{k}^{(h)} \ \mathrm{and} \ t_{k^{\prime}}^{(h)} < t_{k^{\prime}} \le r_{k^{\prime}} \ \mathrm{for} \ \mathrm{some} \ k^{\prime} \neq k .\end{aligned}$$ Note that it follows from [Lemma \[lem:fujisaki\]]{} that $$\begin{aligned} \mu_{l, m}^{(\infty)}[\beta](t_{l}^{(h)}+1, t_{m}^{(h)}+1) = \beta_{l, m}(t_{l}^{(h)}+1, t_{m}^{(h)}+1) + \min\{ \lambda_{l, m}(t_{l}^{(h)}+1, t_{m}^{(h)}+1), \rho_{l, m}(t_{l}^{(h)}+1, t_{m}^{(h)}+1) \} .\end{aligned}$$ Then, by setting ${\boldsymbol{0}} \le {\boldsymbol{t}}^{(h+1)} \le {\boldsymbol{r}}$ as $$\begin{aligned} t_{k^{\prime}}^{(h+1)} = \begin{cases} t_{k^{\prime}}^{(h)} & \mathrm{if} \ k^{\prime} \neq k , \\ t_{k^{\prime}}^{(h)} + 1 & \mathrm{if} \ k^{\prime} = k \end{cases} \label{eq:t_next}\end{aligned}$$ for each $1 \le k^{\prime} \le m$, we observe that $\mu_{\langle {\boldsymbol{t}} \rangle}^{(\infty)}$ has been solved for every ${\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}$ satisfying $t_{\bar{k}} < t_{\bar{k}}^{(h+1)}$ for some $1 \le \bar{k} \le m$. Note that is done in Line 15 of Algorithm \[alg:main\]. If $0 \le \xi < 1$, then $$\begin{aligned} 0 \le \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , \\ t_{\bar{k}} < t_{\bar{k}}^{(h+1)} \, \text{for some} \, 1 \le \bar{k} \le m }} \mu_{\langle {\boldsymbol{t}} \rangle}^{(\infty)} < 1 ,\end{aligned}$$ and we go back to Line 4 of Algorithm \[alg:main\]. Note that ${\boldsymbol{0}} \le {\boldsymbol{t}}^{(h+1)} \le {\boldsymbol{r}}$ if $\xi < 1$ (cf. ).
On the other hand, if $\xi = 1$, then $$\begin{aligned} \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , \\ t_{\bar{k}} < t_{\bar{k}}^{(h+1)} \, \text{for some} \, 1 \le \bar{k} \le m }} \mu_{\langle {\boldsymbol{t}} \rangle}^{(\infty)} = 1 , \end{aligned}$$ which implies that the asymptotic distribution $( \mu_{d}^{(\infty)} )_{d|q} = ( \mu_{\langle {\boldsymbol{t}} \rangle}^{(\infty)} )_{{\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}}$ is solved. Note that if $h = \Omega( q )$, i.e., if ${\boldsymbol{t}}^{(h)} = {\boldsymbol{r}}$, then $\xi = 1$ always holds after Line 14 of Algorithm \[alg:main\]. Finally, we verify the computational complexity of Algorithm \[alg:main\]. By Line 15 of Algorithm \[alg:main\], the while loop in Lines 4–15 of Algorithm \[alg:main\] is repeated at most $\Omega( q ) + 1 = r_{1} + r_{2} + \dots + r_{m} + 1$ times. The while loop in Lines 6–12 of Algorithm \[alg:main\] is repeated at most $m-1$ times. In Line 7 of Algorithm \[alg:main\], both $\lambda_{i, j}(t_{i} + 1, t_{j} + 1)$ and $\rho_{i, j}(t_{i} + 1, t_{j} + 1)$ can be calculated from a given initial probability vector $( \varepsilon_{d} )_{d|q}$ with at most $\tau( q ) = (r_{1}+1) (r_{2}+1) \cdots (r_{m}+1)$ additions. Similarly, in Line 13 of Algorithm \[alg:main\], the values $\beta_{l, m}(t_{l}+1, t_{m}+1)$, $\lambda_{l, m}(t_{l}+1, t_{m}+1)$, and $\rho_{l, m}(t_{l}+1, t_{m}+1)$ can also be calculated from a given initial probability vector $( \varepsilon_{d} )_{d|q}$ with at most $\tau( q ) = (r_{1}+1) (r_{2}+1) \cdots (r_{m}+1)$ additions. Therefore, we conclude that Algorithm \[alg:main\] runs in $\mathrm{O}( \omega( q ) \, \Omega( q ) \, \tau( q ) )$. Note that the calculations in Algorithm \[alg:main\] involve only additions and subtractions, i.e., there is neither multiplication nor division. This completes the proof of [Theorem \[th:mu\_d\]]{}. By [Theorem \[th:mu\_d\]]{}, we can immediately observe the following corollary.
\[cor:mu\_d\] For any initial probability vector ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$, there exists a sequence $( {\boldsymbol{t}}^{(h)} = (t_{1}^{(h)}, \dots, t_{m}^{(h)}) )_{h = 0}^{\Omega( q )}$ satisfying (i) ${\boldsymbol{0}} = {\boldsymbol{t}}^{(0)} \le {\boldsymbol{t}}^{(1)} \le \cdots \le {\boldsymbol{t}}^{(\Omega( q ))} = {\boldsymbol{r}}$ and (ii) $\mu_{\langle {\boldsymbol{t}} \rangle}^{(\infty)} > 0$ only if ${\boldsymbol{t}} = {\boldsymbol{t}}^{(h)}$ for some $0 \le h \le \Omega( q )$. Consequently, the asymptotic distribution $( \mu_{d}^{(\infty)} )_{d|q}$ has at most $\Omega( q ) + 1$ positive probability masses. By Algorithm \[alg:main\], we can solve the asymptotic distribution of [Fig. \[subfig:maec\_45\]]{} as $( \mu_{1}^{(\infty)}, \mu_{3}^{(\infty)}, \mu_{5}^{(\infty)}, \mu_{9}^{(\infty)}, \mu_{15}^{(\infty)}, \mu_{45}^{(\infty)} ) = (0, 0, 1/3, 0, 1/3, 1/3)$. A more complicated example of Algorithm \[alg:main\] is given as follows: \[ex:mu\_d\] Consider an erasure channel $V$ defined in [Definition \[def:V\]]{} with an initial probability vector $(\varepsilon_{d})_{d|q}$ as follows: The input alphabet size is $q = 4500 = 2^{2} \cdot 3^{2} \cdot 5^{3}$, where note that the set of positive divisors $d$ of $q$ is $\{ 1, \allowbreak 2, \allowbreak 3, \allowbreak 4, \allowbreak 5, \allowbreak 6, \allowbreak 9, \allowbreak 10, \allowbreak 12, \allowbreak 15, \allowbreak 18, \allowbreak 20, \allowbreak 25, \allowbreak 30, \allowbreak 36, \allowbreak 45, \allowbreak 50, \allowbreak 60, \allowbreak 75, \allowbreak 90, \allowbreak 100, \allowbreak 125, \allowbreak 150, \allowbreak 180, \allowbreak 225, \allowbreak 250, \allowbreak 300, \allowbreak 375, \allowbreak 450, \allowbreak 500, \allowbreak 750, \allowbreak 900, \allowbreak 1125, \allowbreak 1500, \allowbreak 2250, \allowbreak 4500 \}$.
The initial probability vector $(\varepsilon_{d})_{d|q}$ is given by[^27] $(\varepsilon_{d})_{d|q} = (1/150) \times (0, \allowbreak 1, \allowbreak 2, \allowbreak 3, \allowbreak 4, \allowbreak 5, \allowbreak 6, \allowbreak 7, \allowbreak 8, \allowbreak 9, \allowbreak 0, \allowbreak 1, \allowbreak 2, \allowbreak 3, \allowbreak 4, \allowbreak 5, \allowbreak 6, \allowbreak 7, \allowbreak 8, \allowbreak 9, \allowbreak 0, \allowbreak 1, \allowbreak 2, \allowbreak 3, \allowbreak 4, \allowbreak 5, \allowbreak 6, \allowbreak 7, \allowbreak 8, \allowbreak 9, \allowbreak 0, \allowbreak 1, \allowbreak 2, \allowbreak 3, \allowbreak 4, \allowbreak 5)$. Then, Algorithm \[alg:main\] solves the asymptotic distribution $(\mu_{d}^{(\infty)})_{d|q} = (29/150, \allowbreak 0, \allowbreak 0, \allowbreak 0, \allowbreak 1/15, \allowbreak 0, \allowbreak 0, \allowbreak 0, \allowbreak 0, \allowbreak 11/150, \allowbreak 0, \allowbreak 0, \allowbreak 0, \allowbreak 9/50, \allowbreak 0, \allowbreak 0, \allowbreak 0, \allowbreak 0, \allowbreak 0, \allowbreak 0, \allowbreak 0, \allowbreak 0, \allowbreak 11/75, \allowbreak 0, \allowbreak 0, \allowbreak 0, \allowbreak 0, \allowbreak 0, \allowbreak 1/150, \allowbreak 0, \allowbreak 0, \allowbreak 7/75, \allowbreak 0, \allowbreak 0, \allowbreak 0, \allowbreak 6/25)$. We summarize this result in [Table \[table:mu\_d\]]{}. The calculation process of Algorithm \[alg:main\] is shown in [Appendix \[app:example\_of\_algorithm\]]{}. 
| divisor $d$ | $1$ | $2$ | $3$ | $4$ | $5$ | $6$ | $9$ | $10$ | $12$ | $15$ | $18$ | $20$ | $25$ | $30$ | $36$ | $45$ | $50$ | $60$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| $\varepsilon_{d}$ | $0$ | $\sfrac{1}{150}$ | $\sfrac{2}{150}$ | $\sfrac{3}{150}$ | $\sfrac{4}{150}$ | $\sfrac{5}{150}$ | $\sfrac{6}{150}$ | $\sfrac{7}{150}$ | $\sfrac{8}{150}$ | $\sfrac{9}{150}$ | $0$ | $\sfrac{1}{150}$ | $\sfrac{2}{150}$ | $\sfrac{3}{150}$ | $\sfrac{4}{150}$ | $\sfrac{5}{150}$ | $\sfrac{6}{150}$ | $\sfrac{7}{150}$ |
| $\mu_{d}^{(\infty)}$ | $\sfrac{29}{150}$ | $0$ | $0$ | $0$ | $\sfrac{1}{15}$ | $0$ | $0$ | $0$ | $0$ | $\sfrac{11}{150}$ | $0$ | $0$ | $0$ | $\sfrac{9}{50}$ | $0$ | $0$ | $0$ | $0$ |

| divisor $d$ | $75$ | $90$ | $100$ | $125$ | $150$ | $180$ | $225$ | $250$ | $300$ | $375$ | $450$ | $500$ | $750$ | $900$ | $1125$ | $1500$ | $2250$ | $4500$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| $\varepsilon_{d}$ | $\sfrac{8}{150}$ | $\sfrac{9}{150}$ | $0$ | $\sfrac{1}{150}$ | $\sfrac{2}{150}$ | $\sfrac{3}{150}$ | $\sfrac{4}{150}$ | $\sfrac{5}{150}$ | $\sfrac{6}{150}$ | $\sfrac{7}{150}$ | $\sfrac{8}{150}$ | $\sfrac{9}{150}$ | $0$ | $\sfrac{1}{150}$ | $\sfrac{2}{150}$ | $\sfrac{3}{150}$ | $\sfrac{4}{150}$ | $\sfrac{5}{150}$ |
| $\mu_{d}^{(\infty)}$ | $0$ | $0$ | $0$ | $0$ | $\sfrac{11}{75}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $\sfrac{1}{150}$ | $0$ | $0$ | $\sfrac{7}{75}$ | $0$ | $0$ | $0$ | $\sfrac{6}{25}$ |

: Example of Algorithm \[alg:main\] with the setting of [Example \[ex:mu\_d\]]{}. The input alphabet size is $q = 4500 = 2^{2} \cdot 3^{2} \cdot 5^{3}$.
An initial probability vector $( \varepsilon_{d} )_{d|q}$ and its resultant asymptotic distribution $( \mu_{d}^{(\infty)} )_{d|q}$ are summarized in the table.[]{data-label="table:mu_d"}

[graph\_input4500\_times28.png]{}: the horizontal axis shows the indices of ${\boldsymbol{s}}$ (sorted in increasing order of $I( V_{{\boldsymbol{\varepsilon}}}^{{\boldsymbol{s}}} )$), and the plot is annotated with the eight limiting proportions $\mu_{1}^{(\infty)} = \sfrac{29}{150} \approx 0.193333$, $\mu_{5}^{(\infty)} = \sfrac{1}{15} \approx 0.066667$, $\mu_{15}^{(\infty)} = \sfrac{11}{150} \approx 0.073333$, $\mu_{30}^{(\infty)} = \sfrac{9}{50} = 0.18$, $\mu_{150}^{(\infty)} = \sfrac{11}{75} \approx 0.146667$, $\mu_{450}^{(\infty)} = \sfrac{1}{150} \approx 0.006667$, $\mu_{900}^{(\infty)} = \sfrac{7}{75} \approx 0.093333$, and $\mu_{4500}^{(\infty)} = \sfrac{6}{25} = 0.24$.

Figure \[fig:mu\_d\] shows an example of multilevel channel polarization for the modular arithmetic erasure channel $\mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} )$ given in [Example \[ex:mu\_d\]]{} (see also [Table \[table:mu\_d\]]{}), where note that [Fig. \[fig:mu\_d\]]{} is plotted via [Proposition \[prop:I(V)\]]{} and the recursive formulas (see also the discussion below [Corollary \[cor:recursive\_V\]]{}). In this subsection, we have solved the asymptotic distribution $( \mu_{d}^{(\infty)} )_{d|q}$ algorithmically in the case where $q$ is a general composite number. In the next subsection, we will show that $( \mu_{d}^{(\infty)} )_{d|q}$ is indeed the asymptotic distribution of multilevel channel polarization for a modular arithmetic erasure channel $\mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} )$.
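To make the procedure of [Theorem \[th:mu\_d\]]{} concrete, the following Python sketch reconstructs Algorithm \[alg:main\] from its proof. Function and variable names are ours, coordinates are $0$-indexed, and $m \ge 2$ is assumed; run on the data of [Example \[ex:mu\_d\]]{} with exact rational arithmetic, it reproduces [Table \[table:mu\_d\]]{}.

```python
from fractions import Fraction
from itertools import product

def solve_asymptotic(r, eps):
    """Sketch of Algorithm [alg:main]: compute (mu_d^infty)_{d|q}.

    r   : exponents (r_1, ..., r_m) of q = p_1^{r_1} ... p_m^{r_m}, m >= 2
    eps : dict mapping exponent tuples t (0 <= t <= r) to eps_<t>
    """
    m = len(r)
    # lambda_{i,j}(a,b), rho_{i,j}(a,b), beta_{i,j}(a,b) of the initial vector
    lam = lambda i, j, a, b: sum(p for t, p in eps.items() if t[i] >= a and t[j] < b)
    rho = lambda i, j, a, b: sum(p for t, p in eps.items() if t[i] < a and t[j] >= b)
    beta = lambda i, j, a, b: sum(p for t, p in eps.items() if t[i] < a and t[j] < b)

    mu = {t: Fraction(0) for t in eps}
    t, xi = (0,) * m, Fraction(0)   # current corner t^(h); mass assigned so far
    while True:
        # pairwise comparisons (Lines 5-12): k is the coordinate incremented
        # next; l is the champion entering the final comparison against m
        c = 0
        for j in range(1, m):
            if j == m - 1:
                l = c
            if lam(c, j, t[c] + 1, t[j] + 1) <= rho(c, j, t[c] + 1, t[j] + 1):
                c = j
        k = c
        # point mass at t^(h) via Lemma [lem:fujisaki], minus the mass xi
        # already placed on earlier corners (Lines 13-14)
        a, b = t[l] + 1, t[m - 1] + 1
        mu[t] = beta(l, m - 1, a, b) + min(lam(l, m - 1, a, b),
                                           rho(l, m - 1, a, b)) - xi
        xi += mu[t]
        if xi == 1 or t == tuple(r):
            return mu
        t = tuple(tc + 1 if i == k else tc for i, tc in enumerate(t))

# Example [ex:mu_d]: q = 4500 = 2^2 * 3^2 * 5^3, eps_d = (0, 1, ..., 9)/150
# cycling over the divisors d = 2^{t1} 3^{t2} 5^{t3} in increasing order
r = (2, 2, 3)
ts = sorted(product(range(3), range(3), range(4)),
            key=lambda t: 2 ** t[0] * 3 ** t[1] * 5 ** t[2])
eps = {t: Fraction(i % 10, 150) for i, t in enumerate(ts)}
mu = solve_asymptotic(r, eps)
# positive masses at d = 1, 5, 15, 30, 150, 450, 900, 4500, as in the table
```

Each iteration assigns the mass $\beta_{l, m} + \min\{ \lambda_{l, m}, \rho_{l, m} \} - \xi$ to the current corner and advances the winning coordinate, tracing the staircase path of [Corollary \[cor:mu\_d\]]{}.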
Asymptotic Distribution Characterized by $( \mu_{d}^{(\infty)} )_{d|q}$ {#sect:asymptotic_distribution} ----------------------------------------------------------------------- The following theorem shows that, for asymptotically almost every polarization sequence $( {\boldsymbol{s}} = s_{1} s_{2} \cdots s_{n} )_{n=1}^{\infty}$, the vector $( \varepsilon_{d}^{{\boldsymbol{s}}} )_{d|q}$ tends to a unit vector $(0, \dots, 0, 1, 0, \dots, 0)$ as $n$ goes to infinity, and the limiting proportions of these unit vectors are exactly characterized by the asymptotic distribution $( \mu_{d}^{(\infty)} )_{d|q}$. \[th:polarization\] For any fixed $\delta \in (0, 1)$, it holds that $$\begin{aligned} \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \delta \le \varepsilon_{d}^{{\boldsymbol{s}}} \le 1 - \delta \Big\} \Big| & = 0 , \label{eq:proportion0} \\ \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \varepsilon_{d}^{{\boldsymbol{s}}} > 1 - \delta \Big\} \Big| & = \mu_{d}^{(\infty)} \label{eq:proportion1}\end{aligned}$$ for every $d|q$, where $(\mu_{d}^{(\infty)})_{d|q}$ can be calculated by Algorithm \[alg:main\] (cf. [Theorem \[th:mu\_d\]]{}). Note that if $q$ is a prime power, then $(\mu_{d}^{(\infty)})_{d|q}$ simply coincides with the initial probability vector $( \varepsilon_{d} )_{d|q}$, as shown in [Theorem \[th:primepower\]]{}. To prove [Theorem \[th:polarization\]]{}, we give the following simple and useful lemma. \[lem:additive\] For each $n \in \mathbb{N}$, let a nonempty collection $\mathcal{F}_{n}$ of subsets of a set be a field[^28], and let $f_{n} : \mathcal{F}_{n} \to [0, 1]$ be an additive set function. For each $i \in \mathbb{N}$, let $( S_{i, n} )_{n}$ be a sequence of sets such that $S_{i, n} \in \mathcal{F}_{n}$ for every $n \in \mathbb{N}$ and $f_{n}( S_{i, n} ) \to 1$ as $n \to \infty$.
Then, it holds that $$\begin{aligned} \lim_{n \to \infty} f_{n} \bigg( \bigcap_{i=1}^{k} S_{i, n} \bigg) = 1 \quad \mathrm{for} \ \mathrm{every} \ k \in \mathbb{N} .\end{aligned}$$ We prove [Lemma \[lem:additive\]]{} by induction on $k$. Define $$\begin{aligned} S_{n}^{(k)} \coloneqq \bigcap_{i = 1}^{k} S_{i, n}\end{aligned}$$ for each $k, n \in \mathbb{N}$. By hypothesis, it is clear that $$\begin{aligned} \lim_{n \to \infty} f_{n}\big( S_{n}^{(1)} \big) = \lim_{n \to \infty} f_{n}\big( S_{1, n} \big) = 1 .\end{aligned}$$ Suppose that $$\begin{aligned} \lim_{n \to \infty} f_{n}\big( S_{n}^{(k-1)} \big) = 1\end{aligned}$$ for a fixed integer $k \ge 2$. Then, we have $$\begin{aligned} 1 & = \lim_{n \to \infty} f_{n}\big( S_{n}^{(k-1)} \big) \notag \\ & \ge \liminf_{n \to \infty} f_{n}\big( S_{n}^{(k)} \big) \notag \\ & = \liminf_{n \to \infty} \Big( f_{n}\big( S_{n}^{(k-1)} \big) + f_{n}\big( S_{k, n} \big) - f_{n}\big( S_{n}^{(k-1)} \cup S_{k, n} \big) \Big) \notag \\ & \ge \liminf_{n \to \infty} f_{n}\big( S_{n}^{(k-1)} \big) + \liminf_{n \to \infty} f_{n}\big( S_{k, n} \big) - \limsup_{n \to \infty} f_{n}\big( S_{n}^{(k-1)} \cup S_{k, n} \big) \notag \\ & \ge 1 + 1 - 1 \notag \\ & = 1 ,\end{aligned}$$ which implies that $$\begin{aligned} \lim_{n \to \infty} f_{n}\big( S_{n}^{(k)} \big) = 1 .\end{aligned}$$ This completes the proof of [Lemma \[lem:additive\]]{}. This proof is inspired by Alsan and Telatar’s simple proof of polarization [@alsan_telatar_it2016 Theorem 1]. Let $1 \le i < j \le m$ and $a, b \ge 1$ be given. Define $$\begin{aligned} \nu_{i, j}^{(n)}[\theta](a, b) \coloneqq \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \theta_{i, j}^{{\boldsymbol{s}}}(a, b)^{2}\end{aligned}$$ for each $n \in \mathbb{N}$.
Then, we have that for a fixed $\delta \in (0, 1)$, $$\begin{aligned} \nu_{i, j}^{(n+1)}[\theta](a, b) & = \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \frac{ 1 }{ 2 } \Big[ \theta_{i, j}^{{\boldsymbol{s}}-}(a, b)^{2} + \theta_{i, j}^{{\boldsymbol{s}}+}(a, b)^{2} \Big] \notag \\ & \overset{\mathclap{\text{(a)}}}{=} \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \Bigg[ \bigg( \frac{ 1 }{ 2 } \Big( \theta_{i, j}^{{\boldsymbol{s}}-}(a, b) + \theta_{i, j}^{{\boldsymbol{s}}+}(a, b) \Big) \bigg)^{2} + \bigg( \frac{ 1 }{ 2 } \Big(\theta_{i, j}^{{\boldsymbol{s}}-}(a, b) - \theta_{i, j}^{{\boldsymbol{s}}+}(a, b) \Big) \bigg)^{2} \Bigg] \notag \\ & \overset{\mathclap{\text{(b)}}}{=} \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \Bigg[ \Big( \theta_{i, j}^{{\boldsymbol{s}}}(a, b) + \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \, \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \Big)^{2} + \Big( \theta_{i, j}^{{\boldsymbol{s}}}(a, b) \, \big[ 1 - \theta_{i, j}^{{\boldsymbol{s}}}(a, b) \big] + \lambda_{i, j}^{{\boldsymbol{s}}}(a, b) \, \rho_{i, j}^{{\boldsymbol{s}}}(a, b) \Big)^{2} \Bigg] \notag \\ & \ge \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \Big[ \theta_{i, j}^{{\boldsymbol{s}}}(a, b)^{2} + \theta_{i, j}^{{\boldsymbol{s}}}(a, b)^{2} \big[1 - \theta_{i, j}^{{\boldsymbol{s}}}(a, b) \big]^{2} \Big] \notag \\ & \ge \nu_{i, j}^{(n)}[\theta](a, b) + \frac{ 1 }{ 2^{n} } \sum_{\substack{ {\boldsymbol{s}} \in \{ -, + \}^{n} : \\ \delta \le \theta_{i, j}^{{\boldsymbol{s}}}(a, b) \le 1 - \delta }} \theta_{i, j}^{{\boldsymbol{s}}}(a, b)^{2} \big[1 - \theta_{i, j}^{{\boldsymbol{s}}}(a, b) \big]^{2} \notag \\ & \ge \nu_{i, j}^{(n)}[\theta](a, b) + \frac{ 1 }{ 2^{n} } \sum_{\substack{ {\boldsymbol{s}} \in \{ -, + \}^{n} : \\ \delta \le \theta_{i, j}^{{\boldsymbol{s}}}(a, b) \le 1 - \delta }} \delta^{2} (1 - \delta)^{2} , \label{eq:nu_theta}\end{aligned}$$ where (a) follows from the identity $$\begin{aligned} \frac{ x^{2} + y^{2} }{ 2 
} = \Big( \frac{ x + y }{ 2 } \Big)^{2} + \Big( \frac{ x - y }{ 2 } \Big)^{2} ,\end{aligned}$$ and (b) follows by [Lemma \[lem:formulas\]]{}. This implies that the sequence $\big( \nu_{i, j}^{(n)}[\theta](a, b) \big)_{n=1}^{\infty}$ is nondecreasing. As $\nu_{i, j}^{(n)}[\theta](a, b) \le 1$ for every $n \in \mathbb{N}$, the sequence $\big( \nu_{i, j}^{(n)}[\theta](a, b) \big)_{n=1}^{\infty}$ is convergent; thus, it holds that $\nu_{i, j}^{(n+1)}[\theta](a, b) - \nu_{i, j}^{(n)}[\theta](a, b) \to 0$ as $n \to \infty$. We get from \eqref{eq:nu_theta} that $$\begin{aligned} 0 \le \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \delta \le \theta_{i, j}^{{\boldsymbol{s}}}(a, b) \le 1 - \delta \Big\} \Big| \le \frac{ \nu_{i, j}^{(n+1)}[\theta](a, b) - \nu_{i, j}^{(n)}[\theta](a, b) }{ \delta^{2} (1 - \delta)^{2} } .\end{aligned}$$ As $\delta \in (0, 1)$ is a fixed number that does not depend on $n \in \mathbb{N}$, this implies that $$\begin{aligned} \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \delta \le \theta_{i, j}^{{\boldsymbol{s}}}(a, b) \le 1 - \delta \Big\} \Big| = 0 . \label{eq:theta_limit}\end{aligned}$$ We now prove \eqref{eq:proportion0}. It follows from [Corollary \[cor:mu\_d\]]{} that there exist an integer $0 \le \tilde{m} \le m$ and a sequence $( {\boldsymbol{t}}^{(h)} )_{h = 0}^{\tilde{m}}$ such that (i) ${\boldsymbol{0}} = {\boldsymbol{t}}^{(0)} \le {\boldsymbol{t}}^{(1)} \le \cdots \le {\boldsymbol{t}}^{(\tilde{m})} = {\boldsymbol{r}}$; (ii) ${\boldsymbol{t}}^{(h)} \neq {\boldsymbol{t}}^{(h^{\prime})}$ whenever $h \neq h^{\prime}$; and (iii) $\mu_{\langle {\boldsymbol{t}} \rangle}^{(\infty)} > 0$ if and only if ${\boldsymbol{t}} = {\boldsymbol{t}}^{(h)}$ for some $0 \le h \le \tilde{m}$.
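As an aside (our own numerical sketch, not part of the original proof), the mechanism just used, a bounded nondecreasing sequence $\nu^{(n)}$ forcing the middle fraction to vanish, can be observed directly. The sketch assumes the simplest case $q = 2$, where $\mathrm{MAEC}_{2}( {\boldsymbol{\varepsilon}} )$ is a binary erasure channel and the one-step transform of the erasure probability is the standard BEC polar transform $\varepsilon^{-} = 2\varepsilon - \varepsilon^{2}$, $\varepsilon^{+} = \varepsilon^{2}$; this reduction is an assumption here, not a formula quoted from the present section.

```python
# Track all 2^n erasure probabilities of the BEC polarization process.
eps0, n_steps, delta = 0.3, 16, 0.1
levels = [[eps0]]
for _ in range(n_steps):
    prev = levels[-1]
    # '-' branch degrades the channel, '+' branch upgrades it.
    levels.append([2 * e - e * e for e in prev] + [e * e for e in prev])

means = [sum(lv) / len(lv) for lv in levels]               # analogue of mu^(n)
nus = [sum(e * e for e in lv) / len(lv) for lv in levels]  # analogue of nu^(n)
final = levels[-1]
mid = sum(delta <= e <= 1 - delta for e in final) / len(final)
high = sum(e > 1 - delta for e in final) / len(final)

assert all(abs(m - eps0) < 1e-9 for m in means)           # the mean is preserved
assert all(b >= a - 1e-12 for a, b in zip(nus, nus[1:]))  # nu is nondecreasing
assert mid < 0.25          # the middle fraction is already small at n = 16
assert 0.15 < high < 0.35  # the fraction near 1 approaches eps0 = 0.3
```

The preserved mean and the nondecreasing $\nu^{(n)}$ mirror the argument above, and the shrinking middle fraction is the $q = 2$ instance of \eqref{eq:theta_limit}.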
If $\mu_{d}^{(\infty)} = 0$, then we observe that for a fixed $\delta \in (0, 1)$, $$\begin{aligned} 0 & = \mu_{d}^{(\infty)} \notag \\ & = \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{-, +\}^{n}} \varepsilon_{d}^{{\boldsymbol{s}}} \notag \\ & \ge \limsup_{n \to \infty} \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{-, +\}^{n} : \varepsilon_{d}^{{\boldsymbol{s}}} \ge \delta} \varepsilon_{d}^{{\boldsymbol{s}}} \notag \\ & \ge \limsup_{n \to \infty} \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{-, +\}^{n} : \varepsilon_{d}^{{\boldsymbol{s}}} \ge \delta} \delta \notag \\ & = \delta \limsup_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \varepsilon_{d}^{{\boldsymbol{s}}} \ge \delta \Big\} \Big| ,\end{aligned}$$ which implies that $$\begin{aligned} \mu_{d}^{(\infty)} = 0 \quad \Longrightarrow \quad \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \varepsilon_{d}^{{\boldsymbol{s}}} < \delta \Big\} \Big| = 1 . \label{eq:eps_vanish}\end{aligned}$$ Therefore, it suffices to verify that $$\begin{aligned} \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \delta \le \varepsilon_{\langle {\boldsymbol{t}}^{(h)} \rangle}^{{\boldsymbol{s}}} \le 1 - \delta \Big\} \Big| = 0 \label{eq:intermediate_eps_sequence}\end{aligned}$$ for every $h = 0, 1, \dots, \tilde{m}$. We prove \eqref{eq:intermediate_eps_sequence} by backward induction on $h$. First, consider the base case $h = \tilde{m}$; note that ${\boldsymbol{t}}^{(\tilde{m})} = {\boldsymbol{r}}$ and $\langle {\boldsymbol{t}}^{(\tilde{m})} \rangle = \langle {\boldsymbol{r}} \rangle = q$.
Since ${\boldsymbol{t}}^{(\tilde{m}-1)} \le {\boldsymbol{t}}^{(\tilde{m})}$ and ${\boldsymbol{t}}^{(\tilde{m}-1)} \neq {\boldsymbol{t}}^{(\tilde{m})}$, there exists an index $1 \le i \le m$ satisfying $t_{i}^{(\tilde{m}-1)} < t_{i}^{(\tilde{m})}$, which implies that $\mu_{\langle {\boldsymbol{t}} \rangle}^{(\infty)} = 0$ for every ${\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}$ satisfying ${\boldsymbol{t}} \neq {\boldsymbol{r}}$ and $(t_{i}, t_{j}) = (r_{i}, r_{j})$ for some $j \neq i$. For such an appropriate choice of $(i, j)$, we have $$\begin{aligned} 0 & \overset{\mathclap{\text{(a)}}}{=} \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \theta_{i, j}^{{\boldsymbol{s}}}(r_{i}, r_{j}) \le 1 - \frac{ \delta }{ \tau( q ) } \Big\} \Big| \notag \\ & \ge \limsup_{n \to \infty} \frac{ 1 }{ 2^{n} } \left| \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \theta_{i, j}^{{\boldsymbol{s}}}(r_{i}, r_{j}) \le 1 - \frac{ \delta }{ \tau( q ) } \Big\} \cap \left( \bigcap_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}, {\boldsymbol{t}} \neq {\boldsymbol{r}}, \\ (t_{i}, t_{j}) = (r_{i}, r_{j}) }} \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} < \frac{ \delta }{ \tau( q ) } \Big\} \right) \right| \notag \\ & \overset{\mathclap{\text{(b)}}}{\ge} \limsup_{n \to \infty} \frac{ 1 }{ 2^{n} } \left| \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \varepsilon_{q}^{{\boldsymbol{s}}} \le 1 - \delta \Big\} \cap \left( \bigcap_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}, {\boldsymbol{t}} \neq {\boldsymbol{r}}, \\ (t_{i}, t_{j}) = (r_{i}, r_{j}) }} \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} < \frac{ \delta }{ \tau( q ) } \Big\} \right) \right| \notag \\ & 
\overset{\mathclap{\text{(c)}}}{=} \limsup_{n \to \infty} \frac{ 1 }{ 2^{n} } \left( \Big| \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \varepsilon_{q}^{{\boldsymbol{s}}} \le 1 - \delta \Big\} \Big| + \left| \bigcap_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}, {\boldsymbol{t}} \neq {\boldsymbol{r}}, \\ (t_{i}, t_{j}) = (r_{i}, r_{j}) }} \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} < \frac{ \delta }{ \tau( q ) } \Big\} \right| \right. \notag \\ & \qquad \qquad \qquad \left. {} - \left| \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \varepsilon_{q}^{{\boldsymbol{s}}} \le 1 - \delta \Big\} \cup \left( \bigcap_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}, {\boldsymbol{t}} \neq {\boldsymbol{r}}, \\ (t_{i}, t_{j}) = (r_{i}, r_{j}) }} \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} < \frac{ \delta }{ \tau( q ) } \Big\} \right) \right| \right) \notag \\ & \overset{\mathclap{\text{(d)}}}{\ge} \limsup_{n \to \infty} \frac{ 1 }{ 2^{n} } \left( \Big| \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \varepsilon_{q}^{{\boldsymbol{s}}} \le 1 - \delta \Big\} \Big| + \left| \bigcap_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}, {\boldsymbol{t}} \neq {\boldsymbol{r}}, \\ (t_{i}, t_{j}) = (r_{i}, r_{j}) }} \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} < \frac{ \delta }{ \tau( q ) } \Big\} \right| - 2^{n} \right) \notag \\ & \overset{\mathclap{\text{(e)}}}{\ge} \limsup_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \varepsilon_{q}^{{\boldsymbol{s}}} \le 1 - \delta \Big\} \Big| + \liminf_{n \to \infty} \frac{ 1 }{ 2^{n} } \left| 
\bigcap_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}, {\boldsymbol{t}} \neq {\boldsymbol{r}}, \\ (t_{i}, t_{j}) = (r_{i}, r_{j}) }} \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} < \frac{ \delta }{ \tau( q ) } \Big\} \right| - 1 \notag \\ & \overset{\mathclap{\text{(f)}}}{=} \limsup_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \varepsilon_{q}^{{\boldsymbol{s}}} \le 1 - \delta \Big\} \Big| , \label{eq:eps_q_vanish}\end{aligned}$$ where (a) follows from \eqref{eq:theta_limit} applied with $\sfrac{\delta}{\tau(q)}$ in place of $\delta$, i.e., $$\begin{aligned} 1 & = \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \frac{ \delta }{ \tau( q ) } \le \theta_{i, j}^{{\boldsymbol{s}}}(r_{i}, r_{j}) \le 1 - \frac{ \delta }{ \tau( q ) } \Big\}^{\complement} \Big| \notag \\ & \le \liminf_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \theta_{i, j}^{{\boldsymbol{s}}}(r_{i}, r_{j}) \le 1 - \frac{ \delta }{ \tau( q ) } \Big\}^{\complement} \Big| \notag \\ & \le 1 \end{aligned}$$ with $\tau( q ) \coloneqq \prod_{i = 1}^{m}(r_{i} + 1)$; (b) follows from the identities $$\begin{aligned} \theta_{i, j}^{{\boldsymbol{s}}}(r_{i}, r_{j}) \overset{\eqref{def:theta}}{=} \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , \\ (t_{i}, t_{j}) = (r_{i}, r_{j}) }} \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} = \varepsilon_{q}^{{\boldsymbol{s}}} + \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , {\boldsymbol{t}} \neq {\boldsymbol{r}} , \\ (t_{i}, t_{j}) = (r_{i}, r_{j}) }} \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}}\end{aligned}$$ and the inclusions $$\begin{aligned} & \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \theta_{i, j}^{{\boldsymbol{s}}}(r_{i}, r_{j}) \le 1 - \frac{ \delta }{ \tau( q ) } \Big\} \cap \left(
\bigcap_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}, {\boldsymbol{t}} \neq {\boldsymbol{r}}, \\ (t_{i}, t_{j}) = (r_{i}, r_{j}) }} \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} < \frac{ \delta }{ \tau( q ) } \Big\} \right) \notag \\ & \qquad \supset \left\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \middle| \ \delta \le \varepsilon_{q}^{{\boldsymbol{s}}} \le 1 - \frac{ \delta }{ \tau( q ) } - \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , {\boldsymbol{t}} \neq {\boldsymbol{r}} \\ (t_{i}, t_{j}) = (r_{i}, r_{j}) }} \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} \right\} \cap \left( \bigcap_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}, {\boldsymbol{t}} \neq {\boldsymbol{r}}, \\ (t_{i}, t_{j}) = (r_{i}, r_{j}) }} \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} < \frac{ \delta }{ \tau( q ) } \Big\} \right) \notag \\ & \qquad \supset \left\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \middle| \ \delta \le \varepsilon_{q}^{{\boldsymbol{s}}} \le 1 - \frac{ \delta }{ \tau( q ) } - \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}} , {\boldsymbol{t}} \neq {\boldsymbol{r}} \\ (t_{i}, t_{j}) = (r_{i}, r_{j}) }} \frac{ \delta }{ \tau( q ) } \right\} \cap \left( \bigcap_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}, {\boldsymbol{t}} \neq {\boldsymbol{r}}, \\ (t_{i}, t_{j}) = (r_{i}, r_{j}) }} \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} < \frac{ \delta }{ \tau( q ) } \Big\} \right) \notag \\ & \qquad \supset \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \varepsilon_{q}^{{\boldsymbol{s}}} \le 1 - \frac{ \delta }{ \tau( 
q ) } - (\tau(q) - 1) \frac{ \delta }{ \tau( q ) } \Big\} \cap \left( \bigcap_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}, {\boldsymbol{t}} \neq {\boldsymbol{r}}, \\ (t_{i}, t_{j}) = (r_{i}, r_{j}) }} \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} < \frac{ \delta }{ \tau( q ) } \Big\} \right) \notag \\ & \qquad = \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \varepsilon_{q}^{{\boldsymbol{s}}} \le 1 - \delta \Big\} \cap \left( \bigcap_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}, {\boldsymbol{t}} \neq {\boldsymbol{r}}, \\ (t_{i}, t_{j}) = (r_{i}, r_{j}) }} \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} < \frac{ \delta }{ \tau( q ) } \Big\} \right) ; \label{eq:inclusions}\end{aligned}$$ (c) follows by the inclusion-exclusion principle; (d) follows from the fact that $$\begin{aligned} \left| \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \varepsilon_{q}^{{\boldsymbol{s}}} \le 1 - \delta \Big\} \cup \left( \bigcap_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}, {\boldsymbol{t}} \neq {\boldsymbol{r}}, \\ (t_{i}, t_{j}) = (r_{i}, r_{j}) }} \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} < \frac{ \delta }{ \tau( q ) } \Big\} \right) \right| \le 2^{n} ;\end{aligned}$$ (e) follows from the fact that $$\begin{aligned} \limsup_{n \to \infty} (a_{n} + b_{n}) \ge \limsup_{n \to \infty} a_{n} + \liminf_{n \to \infty} b_{n} \label{eq:limsup_liminf}\end{aligned}$$ for two sequences $(a_{n})_{n}$ and $(b_{n})_{n}$; and (f) follows from [Lemma \[lem:additive\]]{} and \eqref{eq:eps_vanish}.
Thus, it follows from \eqref{eq:eps_q_vanish} that $$\begin{aligned} \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \varepsilon_{\langle {\boldsymbol{t}}^{(\tilde{m})} \rangle}^{{\boldsymbol{s}}} \le 1 - \delta \Big\} \Big| = 0 .\end{aligned}$$ We now suppose that for some integer $0 \le h < \tilde{m}$, it holds that $$\begin{aligned} \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \varepsilon_{\langle {\boldsymbol{t}}^{(h^{\prime})} \rangle}^{{\boldsymbol{s}}} \le 1 - \delta \Big\} \Big| = 0 \qquad \mathrm{for} \ \mathrm{all} \ h < h^{\prime} \le \tilde{m} . \label{eq:hypo_eps_h}\end{aligned}$$ Note that $\mu_{\langle {\boldsymbol{t}}^{(h^{\prime})} \rangle}^{(\infty)} > 0$ for every $h \le h^{\prime} \le \tilde{m}$; and $\mu_{\langle {\boldsymbol{t}} \rangle}^{(\infty)} = 0$ for every ${\boldsymbol{t}}^{(h)} \le {\boldsymbol{t}} \le {\boldsymbol{r}}$ satisfying ${\boldsymbol{t}} \neq {\boldsymbol{t}}^{(h^{\prime})}$ for all $h \le h^{\prime} \le \tilde{m}$. If $h > 0$, then since ${\boldsymbol{t}}^{(h-1)} \le {\boldsymbol{t}}^{(h)}$ and ${\boldsymbol{t}}^{(h-1)} \neq {\boldsymbol{t}}^{(h)}$, there exists an index $1 \le i \le m$ satisfying $t_{i}^{(h-1)} < t_{i}^{(h)}$, which implies that $\mu_{\langle {\boldsymbol{t}} \rangle}^{(\infty)} = 0$ for every ${\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}$ satisfying ${\boldsymbol{t}} \neq {\boldsymbol{t}}^{(h^{\prime})}$ for all $h \le h^{\prime} \le \tilde{m}$ and $(t_{i}, t_{j}) \ge (t_{i}^{(h)}, t_{j}^{(h)})$ for some $j \neq i$. If $h = 0$, then it is obvious that $\mu_{\langle {\boldsymbol{t}} \rangle}^{(\infty)} = 0$ for every ${\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}$ satisfying ${\boldsymbol{t}} \neq {\boldsymbol{t}}^{(h^{\prime})}$ for all $0 \le h^{\prime} \le \tilde{m}$.
For such an appropriate choice of $(i, j)$, similar to \eqref{eq:eps_q_vanish}, we have $$\begin{aligned} 0 & \overset{\mathclap{\text{(a)}}}{=} \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \theta_{i, j}^{{\boldsymbol{s}}}(t_{i}^{(h)}, t_{j}^{(h)}) \le 1 - \frac{ \delta }{ \tau( q ) } \Big\} \Big| \notag \\ & \ge \limsup_{n \to \infty} \frac{ 1 }{ 2^{n} } \left| \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \theta_{i, j}^{{\boldsymbol{s}}}(t_{i}^{(h)}, t_{j}^{(h)}) \le 1 - \frac{ \delta }{ \tau( q ) } \Big\} \cap \left( \bigcap_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}, \\ {\boldsymbol{t}} \neq {\boldsymbol{t}}^{(h^{\prime})} \, \forall h^{\prime} \ge h , \\ (t_{i}, t_{j}) \ge (t_{i}^{(h)}, t_{j}^{(h)}) }} \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} < \frac{ \delta }{ \tau( q ) } \Big\} \right) \right| \notag \\ & \overset{\mathclap{\text{(b)}}}{\ge} \limsup_{n \to \infty} \frac{ 1 }{ 2^{n} } \left| \Bigg\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Bigg| \ \delta \le \sum_{h^{\prime} = h}^{\tilde{m}} \varepsilon_{\langle {\boldsymbol{t}}^{(h^{\prime})} \rangle}^{{\boldsymbol{s}}} \le 1 - \delta \Bigg\} \cap \left( \bigcap_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}, \\ {\boldsymbol{t}} \neq {\boldsymbol{t}}^{(h^{\prime})} \, \forall h^{\prime} \ge h , \\ (t_{i}, t_{j}) \ge (t_{i}^{(h)}, t_{j}^{(h)}) }} \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} < \frac{ \delta }{ \tau( q ) } \Big\} \right) \right| \notag \\ & \overset{\mathclap{\text{(c)}}}{\ge} \limsup_{n \to \infty} \frac{ 1 }{ 2^{n} } \left( \Bigg| \Bigg\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Bigg| \ \delta \le \sum_{h^{\prime} = h}^{\tilde{m}} \varepsilon_{\langle {\boldsymbol{t}}^{(h^{\prime})}
\rangle}^{{\boldsymbol{s}}} \le 1 - \delta \Bigg\} \Bigg| + \left| \bigcap_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}, \\ {\boldsymbol{t}} \neq {\boldsymbol{t}}^{(h^{\prime})} \, \forall h^{\prime} \ge h , \\ (t_{i}, t_{j}) \ge (t_{i}^{(h)}, t_{j}^{(h)}) }} \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} < \frac{ \delta }{ \tau( q ) } \Big\} \right| - 2^{n} \right) \notag \\ & \overset{\mathclap{\text{(d)}}}{\ge} \limsup_{n \to \infty} \frac{ 1 }{ 2^{n} } \Bigg| \Bigg\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Bigg| \ \delta \le \sum_{h^{\prime} = h}^{\tilde{m}} \varepsilon_{\langle {\boldsymbol{t}}^{(h^{\prime})} \rangle}^{{\boldsymbol{s}}} \le 1 - \delta \Bigg\} \Bigg| + \liminf_{n \to \infty} \frac{ 1 }{ 2^{n} } \left| \bigcap_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}, \\ {\boldsymbol{t}} \neq {\boldsymbol{t}}^{(h^{\prime})} \, \forall h^{\prime} \ge h , \\ (t_{i}, t_{j}) \ge (t_{i}^{(h)}, t_{j}^{(h)}) }} \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} < \frac{ \delta }{ \tau( q ) } \Big\} \right| - 1 \notag \\ & \overset{\mathclap{\text{(e)}}}{=} \limsup_{n \to \infty} \frac{ 1 }{ 2^{n} } \Bigg| \Bigg\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Bigg| \ \delta \le \sum_{h^{\prime} = h}^{\tilde{m}} \varepsilon_{\langle {\boldsymbol{t}}^{(h^{\prime})} \rangle}^{{\boldsymbol{s}}} \le 1 - \delta \Bigg\} \Bigg| , \label{eq:eps_sum_vanish}\end{aligned}$$ where (a) follows from \eqref{eq:theta_limit}; (b) follows from the identities $$\begin{aligned} \theta_{i, j}^{{\boldsymbol{s}}}(t_{i}^{(h)}, t_{j}^{(h)}) & \overset{\eqref{def:theta}}{=} \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}, \\ (t_{i}, t_{j}) \ge (t_{i}^{(h)}, t_{j}^{(h)}) }} \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} = \left(
\sum_{h^{\prime} = h}^{\tilde{m}} \varepsilon_{\langle {\boldsymbol{t}}^{(h^{\prime})} \rangle}^{{\boldsymbol{s}}} \right) + \left( \sum_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}, \\ {\boldsymbol{t}} \neq {\boldsymbol{t}}^{(h^{\prime})} \, \forall h^{\prime} \ge h , \\ (t_{i}, t_{j}) \ge (t_{i}^{(h)}, t_{j}^{(h)}) }} \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} \right) \label{eq:inclusions_2}\end{aligned}$$ and the inclusions as in \eqref{eq:inclusions}; (c) follows by the inclusion-exclusion principle and the fact that $$\begin{aligned} \left| \Bigg\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Bigg| \ \delta \le \sum_{h^{\prime} = h}^{\tilde{m}} \varepsilon_{\langle {\boldsymbol{t}}^{(h^{\prime})} \rangle}^{{\boldsymbol{s}}} \le 1 - \delta \Bigg\} \cup \left( \bigcap_{\substack{ {\boldsymbol{t}} : {\boldsymbol{0}} \le {\boldsymbol{t}} \le {\boldsymbol{r}}, \\ {\boldsymbol{t}} \neq {\boldsymbol{t}}^{(h^{\prime})} \, \forall h^{\prime} \ge h , \\ (t_{i}, t_{j}) \ge (t_{i}^{(h)}, t_{j}^{(h)}) }} \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \varepsilon_{\langle {\boldsymbol{t}} \rangle}^{{\boldsymbol{s}}} < \frac{ \delta }{ \tau( q ) } \Big\} \right) \right| \le 2^{n} ;\end{aligned}$$ (d) follows from \eqref{eq:limsup_liminf}; and (e) follows from [Lemma \[lem:additive\]]{} and \eqref{eq:eps_vanish}. Hence, it follows from \eqref{eq:eps_sum_vanish} that $$\begin{aligned} \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \Bigg| \Bigg\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Bigg| \ \delta \le \sum_{h^{\prime} = h}^{\tilde{m}} \varepsilon_{\langle {\boldsymbol{t}}^{(h^{\prime})} \rangle}^{{\boldsymbol{s}}} \le 1 - \delta \Bigg\} \Bigg| & = 0 .
\label{eq:eps_sum_vanish2}\end{aligned}$$ Furthermore, we observe that $$\begin{aligned} 0 & \overset{\mathclap{\text{(a)}}}{=} \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \Bigg| \Bigg\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Bigg| \ \delta \le \sum_{h^{\prime} = h}^{\tilde{m}} \varepsilon_{\langle {\boldsymbol{t}}^{(h^{\prime})} \rangle}^{{\boldsymbol{s}}} \le 1 - \frac{ \delta }{ \tilde{m} } \Bigg\} \Bigg| \notag \\ & \ge \limsup_{n \to \infty} \frac{ 1 }{ 2^{n} } \Bigg| \Bigg\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Bigg| \ \delta \le \sum_{h^{\prime} = h}^{\tilde{m}} \varepsilon_{\langle {\boldsymbol{t}}^{(h^{\prime})} \rangle}^{{\boldsymbol{s}}} \le 1 - \frac{ \delta }{ \tilde{m} } \Bigg\} \cap \Bigg( \bigcap_{h^{\prime} = h + 1}^{\tilde{m}} \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \frac{ \delta }{ \tilde{m} } \le \varepsilon_{\langle {\boldsymbol{t}}^{(h^{\prime})} \rangle}^{{\boldsymbol{s}}} \le 1 - \frac{ \delta }{ \tilde{m} } \Big\}^{\complement} \Bigg) \Bigg| \notag \\ & \overset{\mathclap{\text{(b)}}}{\ge} \limsup_{n \to \infty} \frac{ 1 }{ 2^{n} } \Bigg| \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \varepsilon_{\langle {\boldsymbol{t}}^{(h)} \rangle}^{{\boldsymbol{s}}} \le 1 - \delta \Big\} \cap \Bigg( \bigcap_{h^{\prime} = h + 1}^{\tilde{m}} \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \frac{ \delta }{ \tilde{m} } \le \varepsilon_{\langle {\boldsymbol{t}}^{(h^{\prime})} \rangle}^{{\boldsymbol{s}}} \le 1 - \frac{ \delta }{ \tilde{m} } \Big\}^{\complement} \Bigg) \Bigg| \notag \\ & \overset{\mathclap{\text{(c)}}}{\ge} \limsup_{n \to \infty} \frac{ 1 }{ 2^{n} } \Bigg( \Big| \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \varepsilon_{\langle {\boldsymbol{t}}^{(h)} \rangle}^{{\boldsymbol{s}}} \le 1 - \delta \Big\} \Big| + \Bigg| \bigcap_{h^{\prime} = h + 1}^{\tilde{m}} \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \frac{ \delta }{ \tilde{m} } \le \varepsilon_{\langle {\boldsymbol{t}}^{(h^{\prime})} \rangle}^{{\boldsymbol{s}}} \le 1 -
\frac{ \delta }{ \tilde{m} } \Big\}^{\complement} \Bigg| - 2^{n} \Bigg) \notag \\ & \overset{\mathclap{\text{(d)}}}{\ge} \limsup_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \varepsilon_{\langle {\boldsymbol{t}}^{(h)} \rangle}^{{\boldsymbol{s}}} \le 1 - \delta \Big\} \Big| + \liminf_{n \to \infty} \frac{ 1 }{ 2^{n} } \Bigg| \bigcap_{h^{\prime} = h + 1}^{\tilde{m}} \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \frac{ \delta }{ \tilde{m} } \le \varepsilon_{\langle {\boldsymbol{t}}^{(h^{\prime})} \rangle}^{{\boldsymbol{s}}} \le 1 - \frac{ \delta }{ \tilde{m} } \Big\}^{\complement} \Bigg| - 1 \notag \\ & \overset{\mathclap{\text{(e)}}}{=} \limsup_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \varepsilon_{\langle {\boldsymbol{t}}^{(h)} \rangle}^{{\boldsymbol{s}}} \le 1 - \delta \Big\} \Big| , \label{eq:eps_h_vanish}\end{aligned}$$ where (a) follows from \eqref{eq:eps_sum_vanish2}; (b) follows by the inclusions as in \eqref{eq:inclusions} and \eqref{eq:inclusions_2}; (c) follows by the inclusion-exclusion principle and the fact that $$\begin{aligned} \Bigg| \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \varepsilon_{\langle {\boldsymbol{t}}^{(h)} \rangle}^{{\boldsymbol{s}}} \le 1 - \delta \Big\} \cup \Bigg( \bigcap_{h^{\prime} = h + 1}^{\tilde{m}} \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \frac{ \delta }{ \tilde{m} } \le \varepsilon_{\langle {\boldsymbol{t}}^{(h^{\prime})} \rangle}^{{\boldsymbol{s}}} \le 1 - \frac{ \delta }{ \tilde{m} } \Big\}^{\complement} \Bigg) \Bigg| \le 2^{n} ;\end{aligned}$$ (d) follows from \eqref{eq:limsup_liminf}; and (e) follows from [Lemma \[lem:additive\]]{} and the hypothesis \eqref{eq:hypo_eps_h}.
Therefore, it follows from \eqref{eq:eps_h_vanish} that $$\begin{aligned} \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \varepsilon_{\langle {\boldsymbol{t}}^{(h)} \rangle}^{{\boldsymbol{s}}} \le 1 - \delta \Big\} \Big| & = 0 , \label{eq:eps_h_vanish2}\end{aligned}$$ which implies, by this backward induction together with \eqref{eq:eps_vanish}, that \eqref{eq:proportion0} of [Theorem \[th:polarization\]]{} holds, i.e., $$\begin{aligned} \lim_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{-, +\}^{n} \ \Big| \ \delta \le \varepsilon_{d}^{{\boldsymbol{s}}} \le 1 - \delta \Big\} \Big| & = 0\end{aligned}$$ for every fixed $\delta \in (0, 1)$ and every $d|q$. Finally, we prove \eqref{eq:proportion1} of [Theorem \[th:polarization\]]{}. It follows by the definition that $$\begin{aligned} \mu_{d}^{(n)} & = \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}} \varepsilon_{d}^{{\boldsymbol{s}}} \notag \\ & \le \frac{ 1 }{ 2^{n} } \sum_{\substack{ {\boldsymbol{s}} \in \{ -, + \}^{n} : \\ \varepsilon_{d}^{{\boldsymbol{s}}} < \delta }} \delta + \frac{ 1 }{ 2^{n} } \sum_{\substack{ {\boldsymbol{s}} \in \{ -, + \}^{n} : \\ \delta \le \varepsilon_{d}^{{\boldsymbol{s}}} \le 1 - \delta }} (1 - \delta) + \frac{ 1 }{ 2^{n} } \sum_{\substack{ {\boldsymbol{s}} \in \{ -, + \}^{n} : \\ \varepsilon_{d}^{{\boldsymbol{s}}} > 1 - \delta }} 1 \notag \\ & = \delta + \frac{ 1 }{ 2^{n} } \sum_{\substack{ {\boldsymbol{s}} \in \{ -, + \}^{n} : \\ \delta \le \varepsilon_{d}^{{\boldsymbol{s}}} \le 1 - \delta }} (1 - 2 \delta) + \frac{ 1 }{ 2^{n} } \sum_{\substack{ {\boldsymbol{s}} \in \{ -, + \}^{n} : \\ \varepsilon_{d}^{{\boldsymbol{s}}} > 1 - \delta }} (1 - \delta) , \notag\end{aligned}$$ which implies together with \eqref{eq:proportion0} that $$\begin{aligned} \mu_{d}^{(\infty)} \le \delta + (1 - \delta) \liminf_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \varepsilon_{d}^{{\boldsymbol{s}}} > 1 - \delta \Big\} \Big| . \label{eq:proportion1_1}\end{aligned}$$ In addition, we also get $$\begin{aligned} \mu_{d}^{(n)} & = \frac{ 1 }{ 2^{n} } \sum_{{\boldsymbol{s}} \in \{ -, + \}^{n}}
\varepsilon_{d}^{{\boldsymbol{s}}} \notag \\ & \ge \frac{ 1 }{ 2^{n} } \sum_{\substack{ {\boldsymbol{s}} \in \{ -, + \}^{n} : \\ \delta \le \varepsilon_{d} \le 1 - \delta }} \delta + \frac{ 1 }{ 2^{n} } \sum_{\substack{ {\boldsymbol{s}} \in \{ -, + \}^{n} : \\ \varepsilon_{d} > 1 - \delta }} (1-\delta) ,\end{aligned}$$ which also implies together with that $$\begin{aligned} (1-\delta) \limsup_{n \to \infty} \frac{ 1 }{ 2^{n} } \Big| \Big\{ {\boldsymbol{s}} \in \{ -, + \}^{n} \ \Big| \ \varepsilon_{d}^{(n)} > 1 - \delta \Big\} \Big| \le \mu_{d}^{(\infty)} \label{eq:proportion1_2}\end{aligned}$$ As $\delta > 0$ can be chosen arbitrarily small, as in Alsan and Telatar’s proof of [@alsan_telatar_it2016 Theorem 1], it follows from and that . This completes the proof of [Theorem \[th:polarization\]]{}. Considering the input alphabet $\mathcal{X} = \mathbb{Z}/q\mathbb{Z}$ as a cyclic group $(\mathbb{Z}/q\mathbb{Z}, +)$, similar to , we can conclude a multilevel polarization theorem of modular arithmetic erasure channels $\mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} )$, as summarized in [Corollary \[cor:multilevel\]]{} of [Section \[sect:asymptotic\_distribution\_MAEC\]]{}. We now give a proof of [Corollary \[cor:multilevel\]]{} shortly as follows: Let $V_{{\boldsymbol{\varepsilon}}}$ be a modular arithmetic erasure channel $\mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} )$ defined in [Definition \[def:V\]]{}. 
It follows from that for each $d|q$, the homomorphism channel $V_{{\boldsymbol{\varepsilon}}}[\ker \varphi_{d}] : \mathcal{X} / \ker \varphi_{d} \to \mathcal{Y}$ is given by $$\begin{aligned} V_{{\boldsymbol{\varepsilon}}}[\ker \varphi_{d}](y \mid x) & = \begin{dcases} \bar{\varepsilon}_{d_{1}} & \mathrm{if} \ y = x + d_{1}\mathbb{Z} \ \mathrm{for} \ \mathrm{some} \ d_{1}|d , \\ 0 & \mathrm{otherwise} , \end{dcases}\end{aligned}$$ where the probability vector $\bar{{\boldsymbol{\varepsilon}}} = ( \bar{\varepsilon}_{d_{1}} )_{d_{1}|d}$ is given by $$\begin{aligned} \bar{\varepsilon}_{d_{1}} \coloneqq \varepsilon_{d_{1}} + \sum_{\substack{ d_{2}|q : \\ d_{2} \neq d_{1}, \, \gcd( d_{2}, d ) = d_{1} }} \varepsilon_{d_{2}} ;\end{aligned}$$ that is, the erasure mass of each $d_{2}|q$ is folded onto the level $\gcd( d_{2}, d )$, since observing $x$ modulo $d_{2}$ reveals exactly $x$ modulo $\gcd( d_{2}, d )$ about the coset $x + d\mathbb{Z}$. Thus, the channel $V_{{\boldsymbol{\varepsilon}}}[\ker \varphi_{d}]$ is also a modular arithmetic erasure channel $\mathrm{MAEC}_{d}( \bar{{\boldsymbol{\varepsilon}}} )$. It follows from [Proposition \[prop:I(V)\]]{} that $$\begin{aligned} I( V_{{\boldsymbol{\varepsilon}}}[\ker \varphi_{d}] ) = \sum_{d_{1}|d} \Bigg( \varepsilon_{d_{1}} + \sum_{\substack{ d_{2}|q : \\ d_{2} \neq d_{1}, \, \gcd( d_{2}, d ) = d_{1} }} \varepsilon_{d_{2}} \Bigg) \log d_{1} . \label{eq:homomorphism_I}\end{aligned}$$ Given a divisor $d|q$, let us denote by ${\boldsymbol{u}}_{d} = ( u_{d^{\prime}} )_{d^{\prime}|q} = (0, \dots, 0, 1, 0, \dots, 0)$ the unit vector for which $u_{d} = 1$. It follows from that if $( \varepsilon_{d}^{{\boldsymbol{s}}} )_{d|q}$ approaches ${\boldsymbol{u}}_{d}$ as the number $n$ in a sequence of polarization processes $( {\boldsymbol{s}} = s_{1} s_{2} \cdots s_{n} )_{n=1}^{\infty}$ goes to infinity, then both $I( V_{{\boldsymbol{\varepsilon}}}^{{\boldsymbol{s}}} )$ and $I( V_{{\boldsymbol{\varepsilon}}}^{{\boldsymbol{s}}}[ \ker \varphi_{d} ] )$ tend to $\log d$. Therefore, [Theorem \[th:polarization\]]{} directly provides [Corollary \[cor:multilevel\]]{}.
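As a numerical sanity check of the reduction above, the induced vector $\bar{{\boldsymbol{\varepsilon}}}$ can be obtained by folding the erasure mass of each $d_{2}|q$ onto the level $\gcd(d_{2}, d)$. The following is a minimal sketch (the function and variable names are ours, and the concrete instance $q = 12$, $d = 6$ with a uniform ${\boldsymbol{\varepsilon}}$ is a hypothetical example, not taken from the text):

```python
from math import gcd, log2

def divisors(n):
    return [k for k in range(1, n + 1) if n % k == 0]

def induced_vector(q, d, eps):
    # Fold the erasure mass of each divisor d2 of q onto the level
    # gcd(d2, d): observing x modulo d2 reveals exactly x modulo
    # gcd(d2, d) about the coset x + d*Z.
    eps_bar = {d1: 0.0 for d1 in divisors(d)}
    for d2 in divisors(q):
        eps_bar[gcd(d2, d)] += eps[d2]
    return eps_bar

# Hypothetical instance: q = 12, d = 6, uniform eps over the divisors of 12.
q, d = 12, 6
eps = {d2: 1.0 / len(divisors(q)) for d2 in divisors(q)}
eps_bar = induced_vector(q, d, eps)
assert abs(sum(eps_bar.values()) - 1.0) < 1e-12  # still a probability vector
I_bits = sum(e * log2(d1) for d1, e in eps_bar.items())  # mutual information in bits
```

The last line evaluates $I( V_{{\boldsymbol{\varepsilon}}}[\ker \varphi_{d}] ) = \sum_{d_{1}|d} \bar{\varepsilon}_{d_{1}} \log d_{1}$ directly from $\bar{{\boldsymbol{\varepsilon}}}$, as in [Proposition \[prop:I(V)\]]{}.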
Conclusion {#sect:conclusion} ========== We have proposed a general class of erasure-like channels called *modular arithmetic erasure channels*, and have characterized the *asymptotic distribution* of multilevel channel polarization for the proposed channel model. Compared with strong channel polarization, multilevel channel polarization is harder to formulate and to characterize; in particular, the asymptotic distribution of multilevel channel polarization, i.e., the limiting proportions of partially noiseless synthetic channels, had not been characterized. To tackle this problem, we introduced modular arithmetic erasure channels in [Definition \[def:V\]]{} of [Section \[sect:maec\]]{} as an informative toy model. Similar to the recursive formulas of the polar transforms for BECs (see [Proposition \[prop:bec\]]{} of [Section \[sect:bec\]]{}), we gave the recursive formulas and for modular arithmetic erasure channels in [Theorem \[th:recursive\_V\]]{}. Moreover, as a partial solution to an open problem in the context of multilevel channel polarization, [Theorem \[th:polarization\]]{} and [Corollary \[cor:multilevel\]]{} characterized the asymptotic distribution of multilevel channel polarization for modular arithmetic erasure channels by a certain probability vector $( \mu_{d}^{(\infty)} )_{d|q}$ defined in . Furthermore, [Theorem \[th:primepower\]]{} showed that $( \mu_{d}^{(\infty)} )_{d|q}$ coincides with the underlying probability vector ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$ of the initial modular arithmetic erasure channel $\mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} )$ in the case where $q$ is a prime power; and [Theorem \[th:mu\_d\]]{} showed that Algorithm \[alg:main\] can compute $( \mu_{d}^{(\infty)} )_{d|q}$ from the initial ${\boldsymbol{\varepsilon}} = ( \varepsilon_{d} )_{d|q}$ for general composite numbers $q$.
As future work, characterizing the asymptotic distribution of multilevel channel polarization for general DMCs is of great interest toward establishing a full multilevel polarization theorem. In addition, performance comparisons between strong and multilevel channel polarization would help in designing better non-binary polar codes. We hope that modular arithmetic erasure channels will prove helpful in tackling these problems. In this study, we have only considered the polar transforms under the ring $\mathbb{Z}/q\mathbb{Z}$ of integers modulo $q$, i.e., under the structure of a cyclic group $(\mathbb{Z}/q\mathbb{Z}, +)$. On the other hand, Abbe and Telatar [@abbe_telatar_it2012] considered polar codes for $m$-user multiple access channels by setting $\mathcal{X} = \mathbb{F}_{2}^{m} = \mathbb{F}_{2} \times \mathbb{F}_{2} \times \dots \times \mathbb{F}_{2}$, which forms an elementary abelian group. Nasser and Telatar [@nasser_telatar_it2016 Section VII] and Nasser [@nasser_it2017_fourier] also considered this problem by letting $\mathcal{X}$ be an arbitrary finite abelian group. In the study of polar codes for multiple access channels, a more general algebraic structure of $\mathcal{X}$ was also considered by Nasser [@nasser_it2017_ergodic2 Section V]. To generalize this study from two-terminal communications to multiple access channels, extending the structure of the input alphabet $\mathcal{X}$ from cyclic groups $(\mathbb{Z}/q\mathbb{Z}, +)$ to more general algebraic structures is also of significant interest.
Proof of [Proposition \[prop:I(V)\]]{} {#app:proof_prop:I(V)} ====================================== A direct calculation shows that $$\begin{aligned} I_{\alpha}( V_{{\boldsymbol{\varepsilon}}} ) & = \frac{ \alpha }{ \alpha - 1 } \log \Bigg( \sum_{y \in \mathcal{Y}} \bigg( \sum_{x \in \mathcal{X}} \frac{ 1 }{ q } V_{{\boldsymbol{\varepsilon}}}(y \mid x)^{\alpha} \bigg)^{1/\alpha} \Bigg) \notag \\ & = \frac{ \alpha }{ \alpha - 1 } \log \Bigg( \sum_{d|q} \sum_{y \in \mathbb{Z}/d\mathbb{Z}} \bigg( \sum_{x \in \mathbb{Z}/q\mathbb{Z}} \frac{ 1 }{ q } V_{{\boldsymbol{\varepsilon}}}(y \mid x)^{\alpha} \bigg)^{1/\alpha} \Bigg) \notag \\ & = \frac{ \alpha }{ \alpha - 1 } \log \Bigg( \sum_{d|q} \sum_{y \in \mathbb{Z}/d\mathbb{Z}} \bigg( \sum_{\substack{ x \in \mathbb{Z}/q\mathbb{Z} : \\ x \equiv y \ (\mathrm{mod} \, d) }} \frac{ 1 }{ q } \, \varepsilon_{d}^{\alpha} \bigg)^{1/\alpha} \Bigg) \notag \\ & = \frac{ \alpha }{ \alpha - 1 } \log \Bigg( \sum_{d|q} \sum_{y \in \mathbb{Z}/d\mathbb{Z}} \bigg( \frac{ q }{ d } \frac{ 1 }{ q } \, \varepsilon_{d}^{\alpha} \bigg)^{1/\alpha} \Bigg) \notag \\ & = \frac{ \alpha }{ \alpha - 1 } \log \Bigg( \sum_{d|q} \sum_{y \in \mathbb{Z}/d\mathbb{Z}} \frac{ \varepsilon_{d} }{ d^{1/\alpha} } \Bigg) \notag \\ & = \frac{ \alpha }{ \alpha - 1 } \log \Bigg( \sum_{d|q} \frac{ \varepsilon_{d} }{ d^{(1/\alpha) - 1} } \Bigg) \notag \\ & = \frac{ \alpha }{ \alpha - 1 } \log \Bigg( \sum_{d|q} d^{(\alpha - 1)/\alpha} \, \varepsilon_{d} \Bigg)\end{aligned}$$ for each $\alpha \in (0, 1) \cup (1, \infty)$. 
The rest of formulas can be verified as follows: $$\begin{aligned} I_{0}( V_{{\boldsymbol{\varepsilon}}} ) & = \min_{y \in \mathcal{Y}} \bigg( \log \frac{ q }{ | \{ x \in \mathcal{X} \mid V_{{\boldsymbol{\varepsilon}}}(y \mid x) > 0 \} | } \bigg) \notag \\ & = \min_{d|q} \min_{y \in \mathbb{Z}/d\mathbb{Z}} \bigg( \log \frac{ q }{ | \{ x \in \mathcal{X} \mid V_{{\boldsymbol{\varepsilon}}}(y \mid x) > 0 \} | } \bigg) \notag \\ & = \min_{d|q : \varepsilon_{d} > 0} \min_{y \in \mathbb{Z}/d\mathbb{Z}} \bigg( \log \frac{ q }{ (q/d) } \bigg) \notag \\ & = \min_{d|q : \varepsilon_{d} > 0} \Big( \log d \Big) , \\ I( V_{{\boldsymbol{\varepsilon}}} ) = I_{1}( V_{{\boldsymbol{\varepsilon}}} ) & = \sum_{y \in \mathcal{Y}} \sum_{x \in \mathcal{X}} \frac{ 1 }{ q } V_{{\boldsymbol{\varepsilon}}}(y \mid x) \log \frac{ V_{{\boldsymbol{\varepsilon}}}(y \mid x) }{ \sum_{x^{\prime} \in \mathcal{X}} (1/q) V_{{\boldsymbol{\varepsilon}}}(y \mid x^{\prime}) } \notag \\ & = \sum_{d|q} \sum_{y \in \mathbb{Z}/d\mathbb{Z}} \sum_{x \in \mathbb{Z}/q\mathbb{Z}} \frac{ 1 }{ q } V_{{\boldsymbol{\varepsilon}}}(y \mid x) \log \frac{ V_{{\boldsymbol{\varepsilon}}}(y \mid x) }{ \sum_{x^{\prime} \in \mathbb{Z}/q\mathbb{Z}} (1/q) V_{{\boldsymbol{\varepsilon}}}(y \mid x^{\prime}) } \notag \\ & = \sum_{d|q} \sum_{y \in \mathbb{Z}/d\mathbb{Z}} \sum_{\substack{ x \in \mathbb{Z}/q\mathbb{Z} : \\ x \equiv y \ (\mathrm{mod} \, d) }} \frac{ 1 }{ q } \, \varepsilon_{d} \log \frac{ \varepsilon_{d} }{ \sum_{x^{\prime} \in \mathbb{Z}/q\mathbb{Z} : x^{\prime} \equiv y \ (\mathrm{mod} \, d)} (1/q) \, \varepsilon_{d} } \notag \\ & = \sum_{d|q} d \, \frac{ q }{ d } \, \frac{ 1 }{ q } \, \varepsilon_{d} \log d \notag \\ & = \sum_{d|q} (\log d) \, \varepsilon_{d} , \\ I_{\infty}( V_{{\boldsymbol{\varepsilon}}} ) & = \log \bigg( \sum_{y \in \mathcal{Y}} \max_{x \in \mathcal{X}} V_{{\boldsymbol{\varepsilon}}}(y \mid x) \bigg) \notag \\ & = \log \bigg( \sum_{d|q} \sum_{y \in \mathbb{Z}/d\mathbb{Z}} \max_{x \in 
\mathbb{Z}/q\mathbb{Z}} V_{{\boldsymbol{\varepsilon}}}(y \mid x) \bigg) \notag \\ & = \log \bigg( \sum_{d|q} \sum_{y \in \mathbb{Z}/d\mathbb{Z}} \varepsilon_{d} \bigg) \notag \\ & = \log \bigg( \sum_{d|q} d \, \varepsilon_{d} \bigg) .\end{aligned}$$ This completes the proof of [Proposition \[prop:I(V)\]]{}. Example of Algorithm \[alg:main\] {#app:example_of_algorithm} ================================= We show an example of the calculation process of Algorithm \[alg:main\] in the setting of [Example \[ex:mu\_d\]]{} as follows: - $m = 3$; - $(p_{1}, p_{2}, p_{3}) = (2, 3, 5)$; - ${\boldsymbol{r}} = (r_{1}, r_{2}, r_{3}) = (2, 2, 3)$; - the input alphabet size $q = p_{1}^{r_{1}} p_{2}^{r_{2}} p_{3}^{r_{3}} = 2^{2} \cdot 3^{2} \cdot 5^{3} = 4500$; - the initial probability vector $( \varepsilon_{d} )_{d|q} = (\varepsilon_{1}, \varepsilon_{2}, \dots, \varepsilon_{4500})$ is given as [Example \[ex:mu\_d\]]{} (see also [Table \[table:mu\_d\]]{}). Note that $d = p_{1}^{t_{1}} p_{2}^{t_{2}} p_{3}^{t_{3}} = \langle t_{1}, t_{2}, t_{3} \rangle = \langle {\boldsymbol{t}} \rangle$. In Lines 1–3 of Algorithm \[alg:main\], we first initialize as follows: - $( \mu_{d}^{(\infty)} )_{d|q} = ( \mu_{1}^{(\infty)}, \mu_{2}^{(\infty)}, \dots, \mu_{4500}^{(\infty)} ) = (0, 0, \dots, 0)$; - $\xi = 0$; and - ${\boldsymbol{t}} = (t_{1}, t_{2}, t_{3}) = (0, 0, 0)$. It is clear that the condition $0 \le \xi = 0 < 1$ of Line 4 holds. Consider the first step of the while loop in Lines 4–15 of Algorithm \[alg:main\] with the following parameters: $\xi = 0$ and ${\boldsymbol{t}} = (t_{1}, t_{2}, t_{3}) = (0, 0, 0)$. Set $(i, j) = (1, 2)$ as in Line 5, and go to the while loop in Lines 6–12 of Algorithm \[alg:main\]. 
It can be verified that $$\begin{aligned} \lambda_{1, 2}(1, 1) & = \sum_{u_{1} = 1}^{2} \sum_{u_{2} = 0}^{0} \sum_{u_{3} = 0}^{3} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{2} + \varepsilon_{4} + \varepsilon_{10} + \varepsilon_{20} + \varepsilon_{50} + \varepsilon_{100} + \varepsilon_{250} + \varepsilon_{500} = \frac{ 16 }{ 75 } , \\ \rho_{1, 2}(1, 1) & = \sum_{u_{1} = 0}^{0} \sum_{u_{2} = 1}^{2} \sum_{u_{3} = 0}^{3} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{3} + \varepsilon_{9} + \varepsilon_{15} + \varepsilon_{45} + \varepsilon_{75} + \varepsilon_{225} + \varepsilon_{375} + \varepsilon_{1125} = \frac{ 43 }{ 150 } .\end{aligned}$$ Since $\lambda_{1, 2}(1, 1) < \rho_{1, 2}(1, 1)$, store $(k, l) = (2, 1)$ as in Line 8; reset $(i, j) = (2, 3)$ as in Line 9; and go back to Line 6. It can be verified that $$\begin{aligned} \lambda_{2, 3}(1, 1) & = \sum_{u_{1} = 0}^{2} \sum_{u_{2} = 1}^{2} \sum_{u_{3} = 0}^{0} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{3} + \varepsilon_{6} + \varepsilon_{9} + \varepsilon_{12} + \varepsilon_{18} + \varepsilon_{36} = \frac{ 1 }{ 6 } , \\ \rho_{2, 3}(1, 1) & = \sum_{u_{1} = 0}^{2} \sum_{u_{2} = 0}^{0} \sum_{u_{3} = 1}^{3} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{5} + \varepsilon_{10} + \varepsilon_{20} + \varepsilon_{25} + \varepsilon_{50} + \varepsilon_{100} + \varepsilon_{125} + \varepsilon_{250} + \varepsilon_{500} = \frac{ 7 }{ 30 } .\end{aligned}$$ Since $\lambda_{2, 3}(1, 1) < \rho_{2, 3}(1, 1)$, store $(k, l) = (3, 2)$ as in Line 8; reset $(i, j) = (3, 4)$ as in Line 9; and go back to Line 6. As $j = 4 > 3 = m$, the while loop in Lines 6–12 of Algorithm \[alg:main\] is finished and we go to Line 13. 
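The quantities $\lambda_{i, j}(a, b)$, $\rho_{i, j}(a, b)$, and $\beta_{i, j}(a, b)$ appearing throughout this walkthrough are box-shaped sums over exponent vectors ${\boldsymbol{u}}$. They can be reproduced with a small helper sketch (our naming, not the paper's; the indices $i, j$ are zero-based here, and ${\boldsymbol{\varepsilon}}$ is stored as a map from exponent tuples to masses):

```python
from itertools import product

def _box(eps, r, special):
    # Sum eps over all exponent vectors u with u_h in 0..r_h, except
    # that the coordinates listed in `special` use the given ranges.
    ranges = [special.get(h, range(0, r[h] + 1)) for h in range(len(r))]
    return sum(eps.get(u, 0.0) for u in product(*ranges))

def lam(eps, r, i, j, a, b):
    # lambda_{i,j}(a, b): u_i ranges over a..r_i and u_j over 0..b-1.
    return _box(eps, r, {i: range(a, r[i] + 1), j: range(0, b)})

def rho(eps, r, i, j, a, b):
    # rho_{i,j}(a, b): u_i ranges over 0..a-1 and u_j over b..r_j.
    return _box(eps, r, {i: range(0, a), j: range(b, r[j] + 1)})

def beta(eps, r, i, j, a, b):
    # beta_{i,j}(a, b): u_i ranges over 0..a-1 and u_j over 0..b-1.
    return _box(eps, r, {i: range(0, a), j: range(0, b)})
```

With $r = (2, 2, 3)$ and the masses of [Example \[ex:mu\_d\]]{} keyed by $(u_{1}, u_{2}, u_{3})$, these helpers would reproduce the fractions computed in this appendix.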
It can be verified that $$\begin{aligned} \beta_{2, 3}(1, 1) = \sum_{u_{1} = 0}^{2} \sum_{u_{2} = 0}^{0} \sum_{u_{3} = 0}^{0} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{1} + \varepsilon_{2} + \varepsilon_{4} = \frac{ 2 }{ 75 } .\end{aligned}$$ Since $\lambda_{2, 3}(1, 1) < \rho_{2, 3}(1, 1)$, we get in Line 13 that $$\begin{aligned} \mu_{\langle 0, 0, 0 \rangle}^{(\infty)} = \mu_{1}^{(\infty)} = \beta_{2, 3}(1, 1) + \lambda_{2, 3}(1, 1) - \xi = \frac{ 29 }{ 150 } .\end{aligned}$$ Resetting $\xi = 29/150$ and $t_{3} = 1$ as in Lines 14 and 15, respectively, i.e., ${\boldsymbol{t}} = (t_{1}, t_{2}, t_{3}) = (0, 0, 1)$, we go back to Line 4. As $0 \le \xi = 29/150 < 1$, we continue the while loop in Lines 4–15 of Algorithm \[alg:main\]. Consider the second step of the while loop in Lines 4–15 of Algorithm \[alg:main\] with the following parameters: $\xi = 29/150$ and ${\boldsymbol{t}} = (t_{1}, t_{2}, t_{3}) = (0, 0, 1)$. Set $(i, j) = (1, 2)$ as in Line 5, and go to the while loop in Lines 6–12 of Algorithm \[alg:main\]. Since $\lambda_{2, 3}(1, 1) < \rho_{2, 3}(1, 1)$, store $(k, l) = (3, 2)$ as in Line 8; reset $(i, j) = (3, 4)$ as in Line 9; and go back to Line 6. 
It can be verified that $$\begin{aligned} \lambda_{2, 3}(1, 2) & = \sum_{u_{1} = 0}^{2} \sum_{u_{2} = 1}^{2} \sum_{u_{3} = 0}^{1} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{3} + \varepsilon_{6} + \varepsilon_{9} + \varepsilon_{12} + \varepsilon_{15} + \varepsilon_{18} + \varepsilon_{30} + \varepsilon_{36} + \varepsilon_{45} + \varepsilon_{60} + \varepsilon_{90} + \varepsilon_{180} = \frac{ 61 }{ 150 } , \\ \rho_{2, 3}(1, 2) & = \sum_{u_{1} = 0}^{2} \sum_{u_{2} = 0}^{0} \sum_{u_{3} = 2}^{3} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{25} + \varepsilon_{50} + \varepsilon_{100} + \varepsilon_{125} + \varepsilon_{250} + \varepsilon_{500} = \frac{ 23 }{ 150 } .\end{aligned}$$ Since $\lambda_{2, 3}(1, 2) > \rho_{2, 3}(1, 2)$, store $(k, l) = (2, 2)$ as in Line 11; reset $(i, j) = (2, 4)$ as in Line 12; and go back to Line 6. As $j = 4 > 3 = m$, the while loop in Lines 6–12 of Algorithm \[alg:main\] is finished and we go to Line 13. It can be verified that $$\begin{aligned} \beta_{2, 3}(1, 2) = \sum_{u_{1} = 0}^{2} \sum_{u_{2} = 0}^{0} \sum_{u_{3} = 0}^{1} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{1} + \varepsilon_{2} + \varepsilon_{4} + \varepsilon_{5} + \varepsilon_{10} + \varepsilon_{20} = \frac{ 8 }{ 75 } .\end{aligned}$$ Since $\lambda_{2, 3}(1, 2) > \rho_{2, 3}(1, 2)$, we get in Line 13 that $$\begin{aligned} \mu_{\langle 0, 0, 1 \rangle}^{(\infty)} = \mu_{5}^{(\infty)} = \beta_{2, 3}(1, 2) + \rho_{2, 3}(1, 2) - \xi = \frac{ 1 }{ 15 } .\end{aligned}$$ Resetting $\xi = (29/150) + (1/15) = 39/150$ and $t_{2} = 1$ as in Lines 14 and 15, respectively, i.e., ${\boldsymbol{t}} = (t_{1}, t_{2}, t_{3}) = (0, 1, 1)$, we go back to Line 4. As $0 \le \xi = 39/150 < 1$, we continue the while loop in Lines 4–15 of Algorithm \[alg:main\].
Consider the third step of the while loop in Lines 4–15 of Algorithm \[alg:main\] with the following parameters: $\xi = 39/150$ and ${\boldsymbol{t}} = (t_{1}, t_{2}, t_{3}) = (0, 1, 1)$. Set $(i, j) = (1, 2)$ as in Line 5, and go to the while loop in Lines 6–12 of Algorithm \[alg:main\]. It can be verified that $$\begin{aligned} \lambda_{1, 2}(1, 2) & = \sum_{u_{1} = 1}^{2} \sum_{u_{2} = 0}^{1} \sum_{u_{3} = 0}^{3} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} \notag \\ & = \varepsilon_{2} + \varepsilon_{4} + \varepsilon_{6} + \varepsilon_{10} + \varepsilon_{12} + \varepsilon_{20} + \varepsilon_{30} + \varepsilon_{50} + \varepsilon_{60} + \varepsilon_{100} + \varepsilon_{150} + \varepsilon_{250} + \varepsilon_{300} + \varepsilon_{500} + \varepsilon_{750} + \varepsilon_{1500} = \frac{ 11 }{ 25 } , \\ \rho_{1, 2}(1, 2) & = \sum_{u_{1} = 0}^{0} \sum_{u_{2} = 2}^{2} \sum_{u_{3} = 0}^{3} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{9} + \varepsilon_{45} + \varepsilon_{225} + \varepsilon_{1125} = \frac{ 17 }{ 150 } .\end{aligned}$$ Since $\lambda_{1, 2}(1, 2) > \rho_{1, 2}(1, 2)$, store $(k, l) = (1, 1)$ as in Line 11; reset $(i, j) = (1, 3)$ as in Line 12; and go back to Line 6.
It can be verified that $$\begin{aligned} \lambda_{1, 3}(1, 2) & = \sum_{u_{1} = 1}^{2} \sum_{u_{2} = 0}^{2} \sum_{u_{3} = 0}^{1} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{2} + \varepsilon_{4} + \varepsilon_{6} + \varepsilon_{10} + \varepsilon_{12} + \varepsilon_{18} + \varepsilon_{20} + \varepsilon_{30} + \varepsilon_{36} + \varepsilon_{60} + \varepsilon_{90} + \varepsilon_{180} = \frac{ 17 }{ 50 } , \\ \rho_{1, 3}(1, 2) & = \sum_{u_{1} = 0}^{0} \sum_{u_{2} = 0}^{2} \sum_{u_{3} = 2}^{3} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{25} + \varepsilon_{75} + \varepsilon_{125} + \varepsilon_{225} + \varepsilon_{375} + \varepsilon_{1125} = \frac{ 4 }{ 25 } .\end{aligned}$$ Since $\lambda_{1, 3}(1, 2) > \rho_{1, 3}(1, 2)$, store $(k, l) = (1, 1)$ as in Line 11; reset $(i, j) = (1, 4)$ as in Line 12; and go back to Line 6. As $j = 4 > 3 = m$, the while loop in Lines 6–12 of Algorithm \[alg:main\] is finished and we go to Line 13. It can be verified that $$\begin{aligned} \beta_{1, 3}(1, 2) = \sum_{u_{1} = 0}^{0} \sum_{u_{2} = 0}^{2} \sum_{u_{3} = 0}^{1} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{1} + \varepsilon_{3} + \varepsilon_{5} + \varepsilon_{9} + \varepsilon_{15} + \varepsilon_{45} = \frac{ 13 }{ 75 } .\end{aligned}$$ Since $\lambda_{1, 3}(1, 2) > \rho_{1, 3}(1, 2)$, we get in Line 13 that $$\begin{aligned} \mu_{\langle 0, 1, 1 \rangle}^{(\infty)} = \mu_{15}^{(\infty)} = \beta_{1, 3}(1, 2) + \rho_{1, 3}(1, 2) - \xi = \frac{ 11 }{ 150 } .\end{aligned}$$ Resetting $\xi = (39/150) + (11/150) = 1/3$ and $t_{1} = 1$ as in Lines 14 and 15, respectively, i.e., ${\boldsymbol{t}} = (t_{1}, t_{2}, t_{3}) = (1, 1, 1)$, we go back to Line 4. As $0 \le \xi = 1/3 < 1$, we continue the while loop in Lines 4–15 of Algorithm \[alg:main\].
Consider the fourth step of the while loop in Lines 4–15 of Algorithm \[alg:main\] with the following parameters: $\xi = 1/3$ and ${\boldsymbol{t}} = (t_{1}, t_{2}, t_{3}) = (1, 1, 1)$. Set $(i, j) = (1, 2)$ as in Line 5, and go to the while loop in Lines 6–12 of Algorithm \[alg:main\]. It can be verified that $$\begin{aligned} \lambda_{1, 2}(2, 2) & = \sum_{u_{1} = 2}^{2} \sum_{u_{2} = 0}^{1} \sum_{u_{3} = 0}^{3} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{4} + \varepsilon_{12} + \varepsilon_{20} + \varepsilon_{60} + \varepsilon_{100} + \varepsilon_{300} + \varepsilon_{500} + \varepsilon_{1500} = \frac{ 37 }{ 150 } , \\ \rho_{1, 2}(2, 2) & = \sum_{u_{1} = 0}^{1} \sum_{u_{2} = 2}^{2} \sum_{u_{3} = 0}^{3} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{9} + \varepsilon_{18} + \varepsilon_{45} + \varepsilon_{90} + \varepsilon_{225} + \varepsilon_{450} + \varepsilon_{1125} + \varepsilon_{2250} = \frac{ 19 }{ 75 } .\end{aligned}$$ Since $\lambda_{1, 2}(2, 2) < \rho_{1, 2}(2, 2)$, store $(k, l) = (2, 1)$ as in Line 8; reset $(i, j) = (2, 3)$ as in Line 9; and go back to Line 6. 
It can be verified that $$\begin{aligned} \lambda_{2, 3}(2, 2) & = \sum_{u_{1} = 0}^{2} \sum_{u_{2} = 2}^{2} \sum_{u_{3} = 0}^{1} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{9} + \varepsilon_{18} + \varepsilon_{36} + \varepsilon_{45} + \varepsilon_{90} + \varepsilon_{180} = \frac{ 9 }{ 50 } , \\ \rho_{2, 3}(2, 2) & = \sum_{u_{1} = 0}^{2} \sum_{u_{2} = 0}^{1} \sum_{u_{3} = 2}^{3} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} \notag \\ & = \varepsilon_{25} + \varepsilon_{50} + \varepsilon_{75} + \varepsilon_{100} + \varepsilon_{125} + \varepsilon_{150} + \varepsilon_{250} + \varepsilon_{300} + \varepsilon_{375} + \varepsilon_{500} + \varepsilon_{750} + \varepsilon_{1500} = \frac{ 49 }{ 150 } .\end{aligned}$$ Since $\lambda_{2, 3}(2, 2) < \rho_{2, 3}(2, 2)$, store $(k, l) = (3, 2)$ as in Line 8; reset $(i, j) = (3, 4)$ as in Line 9; and go back to Line 6. As $j = 4 > 3 = m$, the while loop in Lines 6–12 of Algorithm \[alg:main\] is finished and we go to Line 13. It can be verified that $$\begin{aligned} \beta_{2, 3}(2, 2) = \sum_{u_{1} = 0}^{2} \sum_{u_{2} = 0}^{1} \sum_{u_{3} = 0}^{1} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{1} + \varepsilon_{2} + \varepsilon_{3} + \varepsilon_{4} + \varepsilon_{5} + \varepsilon_{6} + \varepsilon_{10} + \varepsilon_{12} + \varepsilon_{15} + \varepsilon_{20} + \varepsilon_{30} + \varepsilon_{60} = \frac{ 1 }{ 3 } .\end{aligned}$$ Since $\lambda_{2, 3}(2, 2) < \rho_{2, 3}(2, 2)$, we get in Line 13 that $$\begin{aligned} \mu_{\langle 1, 1, 1 \rangle}^{(\infty)} = \mu_{30}^{(\infty)} = \beta_{2, 3}(2, 2) + \lambda_{2, 3}(2, 2) - \xi = \frac{ 9 }{ 50 } .\end{aligned}$$ Resetting $\xi = (1/3) + (9/50) = 77/150$ and $t_{3} = 2$ as in Lines 14 and 15, respectively, i.e., ${\boldsymbol{t}} = (t_{1}, t_{2}, t_{3}) = (1, 1, 2)$, we go back to Line 4. As $0 \le \xi = 77/150 < 1$, we continue the while loop in Lines 4–15 of Algorithm \[alg:main\].
Consider the fifth step of the while loop in Lines 4–15 of Algorithm \[alg:main\] with the following parameters: $\xi = 77/150$ and ${\boldsymbol{t}} = (t_{1}, t_{2}, t_{3}) = (1, 1, 2)$. Set $(i, j) = (1, 2)$ as in Line 5, and go to the while loop in Lines 6–12 of Algorithm \[alg:main\]. Since $\lambda_{1, 2}(2, 2) < \rho_{1, 2}(2, 2)$, store $(k, l) = (2, 1)$ as in Line 8; reset $(i, j) = (2, 3)$ as in Line 9; and go back to Line 6. It can be verified that $$\begin{aligned} \lambda_{2, 3}(2, 3) & = \sum_{u_{1} = 0}^{2} \sum_{u_{2} = 2}^{2} \sum_{u_{3} = 0}^{2} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{9} + \varepsilon_{18} + \varepsilon_{36} + \varepsilon_{45} + \varepsilon_{90} + \varepsilon_{180} + \varepsilon_{225} + \varepsilon_{450} + \varepsilon_{900} = \frac{ 4 }{ 15 } , \\ \rho_{2, 3}(2, 3) & = \sum_{u_{1} = 0}^{2} \sum_{u_{2} = 0}^{1} \sum_{u_{3} = 3}^{3} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{125} + \varepsilon_{250} + \varepsilon_{375} + \varepsilon_{500} + \varepsilon_{750} + \varepsilon_{1500} = \frac{ 1 }{ 6 } .\end{aligned}$$ Since $\lambda_{2, 3}(2, 3) > \rho_{2, 3}(2, 3)$, store $(k, l) = (2, 2)$ as in Line 11; reset $(i, j) = (2, 4)$ as in Line 12; and go back to Line 6. As $j = 4 > 3 = m$, the while loop in Lines 6–12 of Algorithm \[alg:main\] is finished and we go to Line 13.
It can be verified that $$\begin{aligned} \beta_{2, 3}(2, 3) & = \sum_{u_{1} = 0}^{2} \sum_{u_{2} = 0}^{1} \sum_{u_{3} = 0}^{2} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} \notag \\ & = \varepsilon_{1} + \varepsilon_{2} + \varepsilon_{3} + \varepsilon_{4} + \varepsilon_{5} + \varepsilon_{6} + \varepsilon_{10} + \varepsilon_{12} + \varepsilon_{15} + \varepsilon_{20} + \varepsilon_{25} + \varepsilon_{30} + \varepsilon_{50} + \varepsilon_{60} + \varepsilon_{75} + \varepsilon_{100} + \varepsilon_{150} + \varepsilon_{300} = \frac{ 37 }{ 75 } .\end{aligned}$$ Since $\lambda_{2, 3}(2, 3) > \rho_{2, 3}(2, 3)$, we get in Line 13 that $$\begin{aligned} \mu_{\langle 1, 1, 2 \rangle}^{(\infty)} = \mu_{150}^{(\infty)} = \beta_{2, 3}(2, 3) + \rho_{2, 3}(2, 3) - \xi = \frac{ 11 }{ 75 } .\end{aligned}$$ Resetting $\xi = (77/150) + (11/75) = 33/50$ and $t_{2} = 2$ as in Lines 14 and 15, respectively, i.e., ${\boldsymbol{t}} = (t_{1}, t_{2}, t_{3}) = (1, 2, 2)$, we go back to Line 4. As $0 \le \xi = 33/50 < 1$, we continue the while loop in Lines 4–15 of Algorithm \[alg:main\]. Consider the sixth step of the while loop in Lines 4–15 of Algorithm \[alg:main\] with the following parameters: $\xi = 33/50$ and ${\boldsymbol{t}} = (t_{1}, t_{2}, t_{3}) = (1, 2, 2)$. Set $(i, j) = (1, 2)$ as in Line 5, and go to the while loop in Lines 6–12 of Algorithm \[alg:main\]. 
It can be verified that $$\begin{aligned} \lambda_{1, 2}(2, 3) & = \sum_{u_{1} = 2}^{2} \sum_{u_{2} = 0}^{2} \sum_{u_{3} = 0}^{3} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{4} + \varepsilon_{12} + \varepsilon_{20} + \varepsilon_{36} + \varepsilon_{60} + \varepsilon_{100} + \varepsilon_{180} + \varepsilon_{300} + \varepsilon_{500} + \varepsilon_{900} + \varepsilon_{1500} + \varepsilon_{4500} = \frac{ 1 }{ 3 } , \\ \rho_{1, 2}(2, 3) & = \sum_{u_{1} = 0}^{1} \sum_{u_{2} = 3}^{2} \sum_{u_{3} = 0}^{3} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = 0 .\end{aligned}$$ Since $\lambda_{1, 2}(2, 3) > \rho_{1, 2}(2, 3)$, store $(k, l) = (1, 1)$ as in Line 11; reset $(i, j) = (1, 3)$ as in Line 12; and go back to Line 6. It can be verified that $$\begin{aligned} \lambda_{1, 3}(2, 3) & = \sum_{u_{1} = 2}^{2} \sum_{u_{2} = 0}^{2} \sum_{u_{3} = 0}^{2} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{4} + \varepsilon_{12} + \varepsilon_{20} + \varepsilon_{36} + \varepsilon_{60} + \varepsilon_{100} + \varepsilon_{180} + \varepsilon_{300} + \varepsilon_{900} = \frac{ 11 }{ 50 } , \\ \rho_{1, 3}(2, 3) & = \sum_{u_{1} = 0}^{1} \sum_{u_{2} = 0}^{2} \sum_{u_{3} = 3}^{3} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{125} + \varepsilon_{250} + \varepsilon_{375} + \varepsilon_{750} + \varepsilon_{1125} + \varepsilon_{2250} = \frac{ 19 }{ 150 } .\end{aligned}$$ Since $\lambda_{1, 3}(2, 3) > \rho_{1, 3}(2, 3)$, store $(k, l) = (1, 1)$ as in Line 11; reset $(i, j) = (1, 4)$ as in Line 12; and go back to Line 6. As $j = 4 > 3 = m$, the while loop in Lines 6–12 of Algorithm \[alg:main\] is finished and we go to Line 13. 
It can be verified that $$\begin{aligned} \beta_{1, 3}(2, 3) & = \sum_{u_{1} = 0}^{1} \sum_{u_{2} = 0}^{2} \sum_{u_{3} = 0}^{2} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} \notag \\ & = \varepsilon_{1} + \varepsilon_{2} + \varepsilon_{3} + \varepsilon_{5} + \varepsilon_{6} + \varepsilon_{9} + \varepsilon_{10} + \varepsilon_{15} + \varepsilon_{18} + \varepsilon_{25} + \varepsilon_{30} + \varepsilon_{45} + \varepsilon_{50} + \varepsilon_{75} + \varepsilon_{90} + \varepsilon_{150} + \varepsilon_{225} + \varepsilon_{450} = \frac{ 27 }{ 50 } .\end{aligned}$$ Since $\lambda_{1, 3}(2, 3) > \rho_{1, 3}(2, 3)$, we get in Line 13 that $$\begin{aligned} \mu_{\langle 1, 2, 2 \rangle}^{(\infty)} = \mu_{450}^{(\infty)} = \beta_{1, 3}(2, 3) + \rho_{1, 3}(2, 3) - \xi = \frac{ 1 }{ 150 } .\end{aligned}$$ Resetting $\xi = (33/50) + (1/150) = 2/3$ and $t_{1} = 2$ as in Lines 14 and 15, respectively, i.e., ${\boldsymbol{t}} = (t_{1}, t_{2}, t_{3}) = (2, 2, 2)$, we go back to Line 4. As $0 \le \xi = 2/3 < 1$, we continue the while loop in Lines 4–15 of Algorithm \[alg:main\]. Consider the seventh step of the while loop in Lines 4–15 of Algorithm \[alg:main\] with the following parameters: $\xi = 2/3$ and ${\boldsymbol{t}} = (t_{1}, t_{2}, t_{3}) = (2, 2, 2)$. Set $(i, j) = (1, 2)$ as in Line 5, and go to the while loop in Lines 6–12 of Algorithm \[alg:main\]. It can be verified that $$\begin{aligned} \lambda_{1, 2}(3, 3) & = \sum_{u_{1} = 3}^{2} \sum_{u_{2} = 0}^{2} \sum_{u_{3} = 0}^{3} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = 0 , \\ \rho_{1, 2}(3, 3) & = \sum_{u_{1} = 0}^{2} \sum_{u_{2} = 3}^{2} \sum_{u_{3} = 0}^{3} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = 0 .\end{aligned}$$ Since $\lambda_{1, 2}(3, 3) = \rho_{1, 2}(3, 3)$, store $(k, l) = (2, 1)$ as in Line 8; reset $(i, j) = (2, 3)$ as in Line 9; and go back to Line 6. 
It can be verified that $$\begin{aligned} \lambda_{2, 3}(3, 3) & = \sum_{u_{1} = 0}^{2} \sum_{u_{2} = 3}^{2} \sum_{u_{3} = 0}^{2} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = 0 , \\ \rho_{2, 3}(3, 3) & = \sum_{u_{1} = 0}^{2} \sum_{u_{2} = 0}^{2} \sum_{u_{3} = 3}^{3} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{125} + \varepsilon_{250} + \varepsilon_{375} + \varepsilon_{500} + \varepsilon_{750} + \varepsilon_{1125} + \varepsilon_{1500} + \varepsilon_{2250} + \varepsilon_{4500} = \frac{ 6 }{ 25 } .\end{aligned}$$ Since $\lambda_{2, 3}(3, 3) < \rho_{2, 3}(3, 3)$, store $(k, l) = (3, 2)$ as in Line 8; reset $(i, j) = (3, 4)$ as in Line 9; and go back to Line 6. As $j = 4 > 3 = m$, the while loop in Lines 6–12 of Algorithm \[alg:main\] is finished and we go to Line 13. It can be verified that $$\begin{aligned} \beta_{2, 3}(3, 3) & = \sum_{u_{1} = 0}^{2} \sum_{u_{2} = 0}^{2} \sum_{u_{3} = 0}^{2} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \varepsilon_{1} + \varepsilon_{2} + \varepsilon_{3} + \varepsilon_{4} + \varepsilon_{5} + \varepsilon_{6} + \varepsilon_{9} + \varepsilon_{10} + \varepsilon_{12} + \varepsilon_{15} + \varepsilon_{18} + \varepsilon_{20} + \varepsilon_{25} + \varepsilon_{30} \notag \\ & \qquad \qquad \qquad {} + \varepsilon_{36} + \varepsilon_{45} + \varepsilon_{50} + \varepsilon_{60} + \varepsilon_{75} + \varepsilon_{90} + \varepsilon_{100} + \varepsilon_{150} + \varepsilon_{180} + \varepsilon_{225} + \varepsilon_{300} + \varepsilon_{450} + \varepsilon_{900} = \frac{ 19 }{ 25 } .\end{aligned}$$ Since $\lambda_{2, 3}(3, 3) < \rho_{2, 3}(3, 3)$, we get in Line 13 that $$\begin{aligned} \mu_{\langle 2, 2, 2 \rangle}^{(\infty)} = \mu_{900}^{(\infty)} = \beta_{2, 3}(3, 3) + \lambda_{2, 3}(3, 3) - \xi = \frac{ 7 }{ 75 } .\end{aligned}$$ Resetting $\xi = (2/3) + (7/75) = 19/25$ and $t_{3} = 3$ as in Lines 14 and 15, respectively, i.e., ${\boldsymbol{t}} = (t_{1}, t_{2}, t_{3}) = (2, 2, 3)$, we go back to Line 4.
As $0 \le \xi = 19/25 < 1$, we continue the while loop in Lines 4–15 of Algorithm \[alg:main\]. Consider the eighth step of the while loop in Lines 4–15 of Algorithm \[alg:main\] with the following parameters: $\xi = 19/25$ and ${\boldsymbol{t}} = (t_{1}, t_{2}, t_{3}) = (2, 2, 3)$. Set $(i, j) = (1, 2)$ as in Line 5, and go to the while loop in Lines 6–12 of Algorithm \[alg:main\]. Note that ${\boldsymbol{t}} = {\boldsymbol{r}} = (r_{1}, r_{2}, r_{3}) = (2, 2, 3)$. Since $\lambda_{1, 2}(3, 3) = \rho_{1, 2}(3, 3)$, store $(k, l) = (2, 1)$ as in Line 8; reset $(i, j) = (2, 3)$ as in Line 9; and go back to Line 6. It can be verified that $$\begin{aligned} \lambda_{2, 3}(3, 4) & = \sum_{u_{1} = 0}^{2} \sum_{u_{2} = 3}^{2} \sum_{u_{3} = 0}^{3} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = 0 , \\ \rho_{2, 3}(3, 4) & = \sum_{u_{1} = 0}^{2} \sum_{u_{2} = 0}^{2} \sum_{u_{3} = 4}^{3} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = 0 .\end{aligned}$$ Since $\lambda_{2, 3}(3, 4) = \rho_{2, 3}(3, 4)$, store $(k, l) = (3, 2)$ as in Line 8; reset $(i, j) = (3, 4)$ as in Line 9; and go back to Line 6. As $j = 4 > 3 = m$, the while loop in Lines 6–12 of Algorithm \[alg:main\] is finished and we go to Line 13. It can be verified that $$\begin{aligned} \beta_{2, 3}(3, 4) & = \sum_{u_{1} = 0}^{2} \sum_{u_{2} = 0}^{2} \sum_{u_{3} = 0}^{3} \varepsilon_{\langle u_{1}, u_{2}, u_{3} \rangle} = \sum_{d|q} \varepsilon_{d} = 1 .\end{aligned}$$ Since $\lambda_{2, 3}(3, 4) = \rho_{2, 3}(3, 4)$, we get in Line 13 that $$\begin{aligned} \mu_{\langle 2, 2, 3 \rangle}^{(\infty)} = \mu_{4500}^{(\infty)} = \beta_{2, 3}(3, 4) + \rho_{2, 3}(3, 4) - \xi = \frac{ 6 }{ 25 } .\end{aligned}$$ Resetting $\xi = (19/25) + (6/25) = 1$ and $t_{3} = 4$ as in Lines 14 and 15, respectively, i.e., ${\boldsymbol{t}} = (t_{1}, t_{2}, t_{3}) = (2, 2, 4)$, we go back to Line 4. 
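The steps above can also be reproduced end to end. The following is a minimal sketch of Algorithm \[alg:main\] as we read it off this worked example (zero-based coordinate indices and helper names are ours; the pseudocode's line structure is only approximated, not transcribed verbatim):

```python
from itertools import product

def mu_infinity(r, eps):
    """Sketch of Algorithm [alg:main] as reconstructed from the worked
    example (eps maps exponent tuples (u_1, ..., u_m) to masses)."""
    m = len(r)

    def mass(ranges):
        # Sum of eps over a box of exponent vectors.
        return sum(eps.get(u, 0.0) for u in product(*ranges))

    def box(i, ri, j, rj):
        # Full ranges 0..r_h everywhere except coordinates i and j.
        rs = [range(0, r[h] + 1) for h in range(m)]
        rs[i], rs[j] = ri, rj
        return rs

    t = [0] * m
    xi = 0.0
    mu = {}
    while xi < 1.0 - 1e-12:
        i, j = 0, 1
        while j <= m - 1:  # Lines 6-12: pairwise tournament of lambda vs. rho
            lam = mass(box(i, range(t[i] + 1, r[i] + 1), j, range(0, t[j] + 1)))
            rho = mass(box(i, range(0, t[i] + 1), j, range(t[j] + 1, r[j] + 1)))
            if lam <= rho:          # Line 8
                k, l = j, i
                i, j = j, j + 1
            else:                   # Line 11
                k, l = i, i
                i, j = i, j + 1
        # Line 13: the last comparison was over the pair (l, m - 1).
        lam = mass(box(l, range(t[l] + 1, r[l] + 1), m - 1, range(0, t[m - 1] + 1)))
        rho = mass(box(l, range(0, t[l] + 1), m - 1, range(t[m - 1] + 1, r[m - 1] + 1)))
        bet = mass(box(l, range(0, t[l] + 1), m - 1, range(0, t[m - 1] + 1)))
        mu[tuple(t)] = bet + min(lam, rho) - xi
        xi += mu[tuple(t)]          # Line 14
        t[k] += 1                   # Line 15
    return mu
```

On a small hypothetical two-prime instance ($r = (1, 1)$), the returned masses sum to one and preserve the average level $\sum_{d|q} \varepsilon_{d} \log d$, in accordance with [Proposition \[prop:I(V)\]]{}.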
As $\xi = 1$, we finish the while loop in Lines 4–15 of Algorithm \[alg:main\], and the asymptotic distribution $( \mu_{d}^{(\infty)} )_{d|q}$ is thus obtained. [99]{} E. Abbe, J. Li, and M. Madiman, “Entropies of weighted sums in cyclic groups and an application to polar codes,” *Entropy*, vol. 19, no. 9, 19 pages, Sept. 2017. E. Abbe and E. Telatar, “Polar codes for the $m$-user multiple access channel,” *IEEE Trans. Inf. Theory*, vol. 58, no. 8, pp. 5437–5448, Aug. 2012. M. Alsan, “Extremal channels of Gallager’s $E_{0}$ under the basic polarization transformations,” *IEEE Trans. Inf. Theory*, vol. 60, no. 3, pp. 1582–1591, Mar. 2014. M. Alsan and E. Telatar, “Polarization improves $E_{0}$,” *IEEE Trans. Inf. Theory*, vol. 60, no. 5, pp. 2714–2719, May 2014. ———, “A simple proof of polarization and polarization for non-stationary memoryless channels,” *IEEE Trans. Inf. Theory*, vol. 62, no. 9, pp. 4873–4878, Sept. 2016. E. Ar[i]{}kan, “Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels,” *IEEE Trans. Inf. Theory*, vol. 55, no. 7, pp. 3051–3073, July 2009. ———, “Source polarization,” in *Proc. IEEE Int. Symp. Inf. Theory*, Austin, TX, USA, June 2010, pp. 899–903. R. G. Gallager, *Information Theory and Reliable Communication.* New York: Wiley, 1968. T. C. Gulcu, M. Ye, and A. Barg, “Construction of polar codes for arbitrary discrete memoryless channels,” *IEEE Trans. Inf. Theory*, vol. 64, no. 1, pp. 309–321, Jan. 2018. S.-W. Ho and S. Verdú, “Convexity/concavity of Rényi entropy and $\alpha$-mutual information,” in *Proc. IEEE Int. Symp. Inf. Theory* (ISIT), Hong Kong, June 2015, pp. 745–749. D. J. C. MacKay, *Information Theory, Inference, and Learning Algorithms.* Cambridge: Cambridge University Press, 2003. H. Mahdavifar, “Fast polarization for non-stationary channels,” in *Proc. IEEE Int. Symp. Inf. Theory*, Aachen, Germany, June 2017, pp. 849–853. R. Mori and T.
Tanaka, “Source and channel polarization over finite fields and Reed–Solomon matrices,” *IEEE Trans. Inf. Theory*, vol. 60, no. 5, pp. 2720–2736, May 2014. R. Nasser, “Ergodic theory meets polarization I: A foundation of polarization theory,” *IEEE Trans. Inf. Theory*, vol. 62, no. 12, pp. 6931–6952, Dec. 2016. ———, “Ergodic theory meets polarization II: A foundation of polarization theory for MACs,” *IEEE Trans. Inf. Theory*, vol. 63, no. 2, pp. 1063–1083, Feb. 2017. ———, “Fourier analysis of MAC polarization,” *IEEE Trans. Inf. Theory*, vol. 63, no. 6, pp. 3600–3620, June 2017. ———, *Polarization and channel ordering: Characterization and topological structures.* Ph.D. dissertation, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, 2017. R. Nasser and E. Telatar, “Polarization theorems for arbitrary DMCs and arbitrary MACs,” *IEEE Trans. Inf. Theory*, vol. 62, no. 6, pp. 2917–2936, June 2016. M. B. Parizi and E. Telatar, “On the correlation between polarized BECs,” in *Proc. IEEE Int. Symp. Inf. Theory*, Istanbul, Turkey, July 2013, pp. 784–788. W. Park and A. Barg, “The ordered Hamming metric and ordered symmetric channels,” in *Proc. IEEE Int. Symp. Inf. Theory* (ISIT), St. Petersburg, Russia, Aug. 2011, pp. 2283–2287. ———, “Polar codes for $q$-ary channels, $q = 2^{r}$,” *IEEE Trans. Inf. Theory*, vol. 59, no. 2, pp. 955–969, Feb. 2013. Y. Polyanskiy and S. Verdú, “Arimoto channel coding converse and Rényi divergence,” in *Proc. 48th Annual Allerton Conf. Commun. Control Comput.*, Oct. 2010, pp. 1327–1333. A. G. Sahebi and S. S. Pradhan, “Multilevel channel polarization for arbitrary discrete memoryless channels,” *IEEE Trans. Inf. Theory*, vol. 59, no. 12, pp. 7839–7857, Dec. 2013. Y. Sakai and K. Iwata, “A generalized erasure channel in the sense of polarization for binary erasure channels,” in *Proc. IEEE Inf. Theory Workshop* (ITW), Cambridge, UK, Sept. 2016, 5 pages. \[Online\].
An extended version is available at <https://arxiv.org/abs/1604.04413>. ———, “Extremality between symmetric capacity and Gallager’s reliability function $E_{0}$ for ternary-input discrete memoryless channels,” *IEEE Trans. Inf. Theory*, vol. 64, no. 1, pp. 163–191, Jan. 2018. Y. Sakai, K. Iwata, and H. Fujisaki, “Asymptotic distribution of multilevel channel polarization for a certain class of erasure channels,” to appear in *Proc. IEEE Int. Symp. Inf. Theory* (ISIT), Vail, CO, USA, June 2018, 5 pages. \[Online\]. An extended version is available at <https://arxiv.org/abs/1604.04413>. E. [Ş]{}a[ş]{}o[ğ]{}lu, “Polar codes for discrete alphabets,” in *Proc. IEEE Int. Symp. Inf. Theory* (ISIT), Cambridge, MA, USA, July 2012, pp. 2137–2141. E. [Ş]{}a[ş]{}o[ğ]{}lu, E. Telatar, and E. Ar[i]{}kan, “Polarization for arbitrary discrete memoryless channels,” in *Proc. IEEE Inf. Theory Workshop* (ITW), Sicily, Italy, Oct. 2009, pp. 144–148. I. Tal and A. Vardy, “How to construct polar codes,” *IEEE Trans. Inf. Theory*, vol. 59, no. 10, pp. 6562–6582, Oct. 2013. S. Verdú, “$\alpha$-mutual information,” in *Proc. Inf. Theory Appl. Workshop* (ITA), San Diego, CA, USA, Feb. 2015, pp. 1–6. [^1]: Note that the recursive formula [@sahebi_pradhan_it2013 Equation (4)] for the minus transform is valid, but the recursive formula [@sahebi_pradhan_it2013 Equation (3)] for the plus transform is incorrect. [Theorem \[th:recursive\_V\]]{} of [Section \[sect:recursive\]]{} will correct this error, especially in [Example \[ex:recursive\_6\]]{}. [^2]: A quasigroup is a pair $(\mathcal{Q}, \ast)$ of a nonempty set $\mathcal{Q}$ and a closed binary operation $\ast$ on $\mathcal{Q}$ satisfying divisibility: for any $a, b \in \mathcal{Q}$, there exist unique $c, d \in \mathcal{Q}$ such that $a = b \ast c$ and $a = d \ast b$. Note that quasigroups are weaker notions than groups. [^3]: The set $\mathbb{N}_{0} \coloneqq \mathbb{N} \cup \{ 0 \}$ consists of all nonnegative integers.
[^4]: For example, we observe that $w(+, -, +) = 2 \, w(+, -) + 1 = 2 \cdot 2 \, w(+) + 1 = 2 \cdot 2 \cdot 1 + 1 = 5$. As $w( \cdot )$ reads its argument as a binary expansion, with $(-, +)$ replaced by $(0, 1)$, it is clear that $w : \{ -, + \}^{n} \to \{ 0, 1, \dots, 2^{n}-1 \}$ is bijective. [^5]: Such an equivalence relation between two channels is formally introduced in [Section \[sect:output\_equiv\]]{}. [^6]: These phenomena are also called the *two-level channel polarization* (see, e.g., [@park_barg_it2013; @sahebi_pradhan_it2013]), based on these observations. [^7]: Şaşoğlu [@sasoglu_isit2012] called such a quasigroup operation *polarizing*. To avoid confusion, in this paper, its channel polarization is called a strong channel polarization. [^8]: Note that in terms of source polarization [@arikan_isit2010], Mori and Tanaka [@mori_tanaka_it2014] showed necessary and sufficient conditions for the strong polarization for more general polar transforms with $l \times l$ kernel, $l \ge 2$, over the finite field $\mathbb{F}_{q}$. When $l = 2$, their condition reduces to the following: if an operation $\ast$ is defined by $a \ast b = a + \gamma \cdot b$ under the field operations with $\gamma \in \mathbb{F}_{q}^{\times}$, then the strong polarization holds for every $q$-ary input DMC if and only if $\gamma$ is a primitive element of $\mathbb{F}_{q}$. [^9]: This fact comes from the conservation property $[I(W^{-}) + I(W^{+})]/2 = I(W)$ under an arbitrary finite quasigroup operation $\ast$ (cf. [@nasser_telatar_it2016]). Note that in [@nasser_it2016_ergodic1; @nasser_it2017_ergodic2], allowing weaker postulates on a closed binary operation $\ast$ than quasigroups, Nasser showed that the conservation property holds for every $q$-ary input DMC if and only if the map $(a, b) \mapsto (a \ast b, b)$ is bijective. Such a postulate is said to be *uniformity preserving*.
[^10]: The strong channel polarization is a special case of the multilevel channel polarization; hence the former is said to be strong in this paper. [^11]: In [@nasser_telatar_it2016 Theorem 6], the rate of polarization for Bhattacharyya parameters is also shown; but we omit it in the paper for simplicity. [^12]: No explicit proof of this fact is given in [@park_barg_it2013 Section III]; however, it is not too hard. The results presented in [Section \[sect:asymptotic\_distribution\_MAEC\]]{} formally prove this fact as more general assertions. [^13]: Note that the data-processing lemma [@polyanskiy_verdu_allerton2010 Theorem 5-2)] is stated for a Markov chain, which is a stronger notion than the stochastic degradedness as in [Definition \[def:output\_equiv\]]{}. Fortunately, its proof follows from the data-processing lemma for the Rényi divergence, which is stated on the stochastic degradedness. [^14]: Note that the group of order $2$ is unique up to isomorphism. [^15]: For a simple way to prove and by using [Corollary \[cor:bec\]]{} only, refer to Alsan and Telatar’s simple proof of the polarization theorem [@alsan_telatar_it2016]. [^16]: Generally, it holds that $a \mathbb{Z} + b \mathbb{Z} = \gcd(a, b) \mathbb{Z}$. [^17]: A real vector $(a_{i})_{i}$ is called a *probability vector* if $a_{i} \ge 0$ and $\sum_{i} a_{i} = 1$. [^18]: Note that in general, the alphabet of additive noise symbols is the same as the input alphabet. [^19]: Strictly speaking, in this paper, we say that a channel $W_{1} : \mathcal{X}_{1} \to \mathcal{Y}_{1}$ is *essentially the same as another channel* $W_{2} : \mathcal{X}_{2} \to \mathcal{Y}_{2}$ if there exists a pair of bijections $f : \mathcal{X}_{1} \to \mathcal{X}_{2}$ and $g : \mathcal{Y}_{1} \to \mathcal{Y}_{2}$ such that $W_{1}(y \mid x) = W_{2}(g(y) \mid f(x))$ for every $(x, y) \in \mathcal{X}_{1} \times \mathcal{Y}_{1}$. In this case, the difference between $W_{1}$ and $W_{2}$ lies only in the labeling of input and output symbols.
Clearly, this is an equivalence relation. [^20]: A *unit* $\gamma \in \mathbb{Z}/q\mathbb{Z}$ in the ring $(\mathbb{Z}/q\mathbb{Z}, +, \cdot)$ is an element having a multiplicative inverse element $\gamma^{-1} \in \mathbb{Z}/q\mathbb{Z}$ satisfying $\gamma \cdot \gamma^{-1} = \gamma^{-1} \cdot \gamma = 1 + q \mathbb{Z}$, where the multiplication $\cdot$ is defined as $(a + q\mathbb{Z}) \cdot (b + q\mathbb{Z}) = (a \cdot b) + q\mathbb{Z}$. [^21]: Note that $(u_{1}, u_{2}) \mapsto u_{1} + \gamma u_{2}$ forms a quasigroup on $\mathbb{Z}/q\mathbb{Z}$, provided that $\gamma \in \mathbb{Z}/q\mathbb{Z}$ is a unit of the ring. [^22]: Note that $b \mapsto a + \gamma b$ is also bijective for each $a \in \mathbb{Z}/q\mathbb{Z}$, provided that $\gamma \in \mathbb{Z}/q\mathbb{Z}$ is a unit, which implies that the binary operation $a \ast b = a + \gamma b$ forms a quasigroup for each unit $\gamma \in \mathbb{Z}/q\mathbb{Z}$. [^23]: Even if $q$ has only one prime factor $q = p_{1}^{r_{1}}$, in this subsection, we write $q = p_{1}^{r_{1}} p_{2}^{r_{2}} \cdots p_{m}^{r_{m}}$ for some $m \ge 2$ by setting $r_{2} = \cdots = r_{m} = 0$. Doing so, analyses of [Section \[sect:composite\]]{} can be reduced to the case where $q$ is a prime power. [^24]: Note that – are well-defined even if $0 \le j < i \le r$; and it holds that $\lambda_{i, j}(a, b) = \rho_{j, i}(b, a)$. [^25]: As will be shown in [Corollary \[cor:multilevel\]]{}, the probability vector $( \mu_{d}^{(\infty)} )_{d|q}$ is indeed the asymptotic distribution of multilevel channel polarization for $\mathrm{MAEC}_{q}( {\boldsymbol{\varepsilon}} )$. [^26]: Note that $\mathrm{O}( \cdot )$ is the Big-O notation, but $\omega( \cdot )$ and $\Omega( \cdot )$ are number theoretic notations, i.e., these are not the little-omega and Big-Omega notations, respectively, in this paper. [^27]: The elements $\varepsilon_{d}$ of $(\varepsilon_{d})_{d|q}$ are sorted in increasing order of indices $d$. 
[^28]: Note that this field $\mathcal{F}_{n}$ is a measure theoretic notion satisfying $A^{\complement} \in \mathcal{F}_{n}$ if $A \in \mathcal{F}_{n}$; and $A \cup B \in \mathcal{F}_{n}$ if $A, B \in \mathcal{F}_{n}$, where $A^{\complement}$ denotes the complement of a set $A$.
--- abstract: 'Spatially resolved measurements of the magnetization dynamics induced by an intense laser pump-pulse reveal that the frequencies of resulting spin wave modes depend strongly on the distance to the pump center. This can be attributed to a laser generated temperature profile. On a CoFeB thin film magnonic crystal, Damon-Eshbach modes are expected to propagate away from the point of excitation. The experiments show that this propagation is frustrated by the strong temperature gradient.' author: - Frederik Busse - Maria Mansurova - Benjamin Lenk - Marvin Walter - Markus Münzenberg title: Generation of magnonic spin wave traps --- The manipulation of spin wave frequency and propagation characteristics is of great interest for the design of switching devices such as logic gates in the field of spintronics, and the number of studies in this field grows rapidly [@Kruglyak2010JPD_Magnonics; @Lenk2011PR_Building]. The most promising techniques include (i) current-injected magnetic solitons in thin films with perpendicular anisotropy, which could transmit information directly or alternatively be used to selectively influence another spin wave’s propagation [@Mohseni2013Spin], and (ii) a change in the ferromagnet’s temperature and therewith its saturation magnetization. The latter can either be brought about by direct contact with e.g. a Peltier element, which has been demonstrated by Brillouin-Light-Scattering (BLS) experiments on YIG waveguides [@Obry2012_Spin], or it can be optically induced: The authors of a recent study [@Kolokoltsev2012JAP_Hot] were able to show that by locally heating the signal-conducting stripline in their network analyzer configuration by up to $\Delta T=70{\, \mathrm{K}}$ using a focused cw laser, the magnetostatic surface spin waves propagating along the stripline could be trapped in the resulting potential well.
In this letter, we address the generation of a spin wave trap on a magnonic crystal by means of a temperature gradient induced by intense laser pulses. In contrast to the experiments mentioned above, rich magnetization dynamics can be produced without any need for direct contact with the sample by using short optical pulses. One approach uses the inverse Faraday effect, which, in combination with a spatially shaped pump spot, can create propagating droplets of backward volume magnetostatic waves [@Satoh2012Directional]. The technique applied in this work, on the other hand, relies on a thermally induced anisotropy field pulse in the sample to induce magnetization oscillations. A common method to access these dynamics, described by the Landau-Lifshitz model of magnetization precession, makes use of the magneto-optical Kerr effect (MOKE) [@Lenk2010PRB_Spinwave]. Both temporal and spatial information can be obtained by applying time-resolved scanning Kerr microscopy (TRSKM). Using this technique, propagating spin wave modes have been observed by focusing pump pulses with a full width at half maximum (FWHM) of only $10{\, \mbox{\textmu}\mathrm{m}}$ on a thin Permalloy film [@Dvornik2013PRL_Direct]. The spin wave spectrum originating from such optical excitation is usually quite broad: Ultrafast demagnetization leads to a dense population of high energy excitations which then gradually decays into lower energy spin wave modes on a timescale of a few picoseconds. Energy transfer from high frequency to low frequency spin waves after excitation by short microwave pumping pulses has been systematically studied by Brillouin-Light-Scattering, and it was shown that this mechanism leads to the formation of Bose-Einstein condensates if the pumping is strong enough [@Demidov2008Observation]. The result is an overpopulation of the lowest energy states, which on a continuous film are given by the uniform precession or Kittel mode and by a series of perpendicular standing spin waves.
Using microstructured magnetic films, so-called magnonic crystals, energy is also transferred into a Damon-Eshbach type mode whose frequency can be tuned in a wide range by choosing appropriate lattice parameters [@Ulrichs2010APL_Magnonic]. In this work, we use CoFeB as the sample material due to its low Gilbert damping and high saturation magnetization. Ultrashort laser pulses from a regeneratively amplified Ti:Sapphire system are used to (i) excite the magnetization dynamics, (ii) probe the magnetic response of the magnonic crystal, and (iii) create a spin wave trap / resonator. The software package *COMSOL* has been used to calculate the thermal response of a thin metallic film to ultrafast laser excitation. The sample system for these calculations consisted of $3{\, \mathrm{nm}}$ of ruthenium capping a $50{\, \mathrm{nm}}$ cobalt-iron-boron (Co$_{20}$Fe$_{60}$B$_{20}$) magnetic film on a Si(100) substrate. The heat diffusion equation, $$\label{eq:heat-diffusion} \rho c_p \frac{\partial T}{\partial t} = \nabla (\kappa \nabla T)+Q,$$ is solved in rotational symmetry for insulating sample edges and a fixed temperature at the bottom of the substrate, using the material parameters listed in table \[tab:parameters\].

  Material                     $\rho$ (kg$\,$m$^{-3}$)         $c_p$ (J$\,$kg$^{-1}\,$K$^{-1}$)   $\kappa$ (W$\,$m$^{-1}\,$K$^{-1}$)   $R$
  ---------------------------- ------------------------------- ---------------------------------- ------------------------------------ ---------------------
  Ru                           12370 [@Walter2011NM_Seebeck]   238 [@Walter2011NM_Seebeck]        117 [@Walter2011NM_Seebeck]          0.70 [@Hass1981]
  Co$_{20}$Fe$_{60}$B$_{20}$   7700 [@OHandley1976]            440 [@Walter2011NM_Seebeck]        87 [@Walter2011NM_Seebeck]           0.72 [@Johnson1974]
  Si                           2330 [@Enghag2004]              712 [@Enghag2004]                  153 [@Enghag2004]                     -

  : Material parameters of the *COMSOL* simulation for the $3{\, \mathrm{nm}}$ Ru / $50{\, \mathrm{nm}}$ CoFeB / $50{\, \mbox{\textmu}\mathrm{m}}$ Si sample: density $\rho$, heat capacity $c_p$, thermal conductivity $\kappa$, and reflectivity $R$ at $\lambda=800{\, \mathrm{nm}}$.
The CoFeB values of $\rho$ and $c_p$ are averages of the values for Co and Fe; the CoFeB reflectivity is approximated by the value for Co.[]{data-label="tab:parameters"} Starting from equilibrium at room temperature, energy is deposited by an ultrashort laser pulse with a duration of $50{\, \mathrm{fs}}$. The optical penetration depth is $\Lambda=16.1{\, \mathrm{nm}}$, in accordance with the value for ruthenium [@Palik1985] as well as with the average value of cobalt and iron [@Johnson1974]. In the film plane, a Gaussian intensity profile is assumed with a FWHM of $60{\, \mbox{\textmu}\mathrm{m}}$. The energy carried by each pulse amounts to a total of $1.6{\, \mbox{\textmu}\mathrm{J}}$, as will be the case in the experiments presented below. The results of the simulation are shown in Fig. \[fig:Simulations\]: In the beginning, the laser pulse produces a sudden rise in temperature. After thermalization of the optically excited electrons and equilibration of the spin and phonon subsystems, known to take place on timescales of $\approx 100\,$fs and $\approx 1\,$ps, respectively, the modeling yields an effective sample temperature, i.e. the temperature of the magnetic system. During the first $\approx 100\,$ps the spatial as well as the temporal heat gradients are rather large, whereas at later times the temperature remains at a high mean value and only a negligible depth profile remains throughout most of the CoFeB film. While the temperature is largely homogeneous throughout the sample depth, it changes significantly across its plane, as shown in Fig. \[fig:Simulations\] (bottom). The Gaussian distribution of laser intensity in the pump spot produces a temperature profile that persists longer than the lifetime of the observed coherent spin wave modes. During this time (up to $1{\, \mathrm{ns}}$), no significant heat transport takes place on a micrometer scale and the FWHM of the lateral temperature distribution remains unchanged.
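The text's thermal modeling is a 2-D axisymmetric *COMSOL* computation including the Ru cap and the Si substrate; the following is only a minimal 1-D explicit finite-difference sketch of Eq. (\[eq:heat-diffusion\]) along the depth of a bare CoFeB film, using the parameters of table \[tab:parameters\] and an assumed absorbed fluence.

```python
import numpy as np

# 1-D explicit finite-difference sketch of rho*c_p*dT/dt = d/dz(kappa*dT/dz) + Q
# along the film depth. Illustrative only: a single 50 nm CoFeB layer with
# insulating boundaries; the absorbed fluence F_abs is an assumption.

rho, c_p, kappa = 7700.0, 440.0, 87.0       # CoFeB values from the table
Lambda = 16.1e-9                            # optical penetration depth (m)
d = 50e-9                                   # film thickness (m)
N = 100
dz = d / N
dt = 1e-15                                  # time step (s)
alpha = kappa / (rho * c_p)                 # thermal diffusivity (m^2/s)
assert alpha * dt / dz**2 < 0.5             # explicit-scheme stability condition

z = (np.arange(N) + 0.5) * dz
T = np.full(N, 300.0)                       # start at room temperature (K)

# Deposit the pulse energy with an exponential depth profile, then diffuse.
F_abs = 5.0                                 # assumed absorbed fluence (J/m^2)
T += F_abs * np.exp(-z / Lambda) / Lambda / (rho * c_p)
total0 = T.sum()                            # conserved with zero-flux boundaries

for _ in range(2000):                       # 2 ps of diffusion
    Tp = np.pad(T, 1, mode="edge")          # insulating (zero-flux) boundaries
    T = T + alpha * dt / dz**2 * (Tp[2:] - 2 * T + Tp[:-2])
```

After 2 ps the front of the film is still hotter than the back (the diffusion length $\sqrt{\alpha t}$ is only a few nanometers), while the total deposited energy is conserved, mirroring the early-time behavior described above.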
In accordance with the Curie-Weiss law, the temperature increase quenches the sample’s saturation magnetization, so that a potential well is formed which effectively prevents the escape of spin waves from this region. Magnetization dynamics experiments were conducted on amorphous $50{\, \mathrm{nm}}$-thick Co$_{40}$Fe$_{40}$B$_{20}$ films magnetron-sputtered onto a Si(100) substrate and capped with a $3{\, \mathrm{nm}}$ Ru layer to prevent oxidation. Ultrashort laser pulses (central wavelength $\lambda_c=800{\, \mathrm{nm}}$, pulse duration $50{\, \mathrm{fs}}$) amplified by a Coherent RegA 8040 regenerative amplifier were used to excite and detect the magnetization dynamics in a pump-probe experiment as described in ref. [@Djordjevic2006JAP_Intrinsic]. The experimental parameters are analogous to those in the presented COMSOL simulation. Additionally, an external magnetic field ${\ensuremath{\mu_0 H_\mathrm{ext}}}=\pm130{\, \mathrm{mT}}$ is applied at $20^\circ$ with respect to the sample plane. The experiments were performed by separating the pump and probe spots on the sample and measuring the magnetization dynamics as a function of pump-probe distance, thus allowing us to determine the shift in magnetization oscillation frequency along the temperature gradient (i.e. the spin wave well). Using a variable time delay $\Delta\tau$ between pump and probe pulses, the time-resolved magneto-optical Kerr effect (TRMOKE) reveals magnetization precession on timescales of up to $1{\, \mathrm{ns}}$, which changes phase by $\pi$ between positive and negative (i.e. reversed) field directions. For the quantitative analysis, the difference between both field directions is calculated. An incoherent background remains which originates from high frequency and high-$k$ magnons excited by the intense pump beam [@Lenk2010PRB_Spinwave]. After respective subtraction, a fast Fourier transform is performed and the resulting peaks in the frequency domain can be analyzed (see Fig. \[fig:DataAnalysis\]).
The dataset presented in Fig. \[fig:DataAnalysis\] has been obtained on a continuous CoFeB reference film of thickness $d=50{\, \mathrm{nm}}$. Two modes of magnetic precession are observed. Based on earlier results, these can be identified as the in-phase precession of all spins (uniform Kittel mode) at $12.6 {\, \mathrm{GHz}}$ and a first order (i.e. $n=1$) standing spin wave with wave vector $k=n \pi \, d^{-1}$ perpendicular to the sample plane (PSSW) at $18.2 {\, \mathrm{GHz}}$ [@Ulrichs2010APL_Magnonic; @Lenk2010PRB_Spinwave]. Both the Kittel and PSSW modes have no wave vector components in the lateral direction. In other words, they do not propagate on the sample but have a rather localized character at the spot of (optical) excitation. Consequently, spatially resolved measurements should show no significant precession outside of the pump laser spot. Fig. \[fig:Conti-Position-Dependence\] (top) shows the color-coded Fourier power $P_\mathrm{FFT}$ of the magnetization oscillation as a function of the spatial separation $\Delta x$ between the centers of pump and probe spot. In this dataset, pump displacement was performed parallel to the external field direction. For better comparison of different measurements, the Fourier power is normalized to the sum of all transformed data points, allowing one to see how much the signal stands out against the background noise. On the one hand, the precessional amplitude (represented by the color code) depends on the distance to the center of the pump pulse. This is due to the laser intensity profile and the localized character of the observed modes. On the other hand, the frequency is also strongly position dependent. The latter effect can be explained as a consequence of the increased disorder caused by the intense heating, which leads to a decrease in saturation magnetization and therefore to a change of the spin wave spectrum.
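The analysis chain described above (difference of the two field directions, subtraction of the slowly varying background, Fourier transform) can be sketched on synthetic data. The two mode frequencies below are the measured values quoted in the text; amplitudes, damping times, and the background shape are invented for illustration.

```python
import numpy as np

# (i) Difference of the +/- field TRMOKE traces: the precession flips phase
# by pi, a field-even background cancels. (ii) Remove the residual slow
# background with a low-order polynomial. (iii) FFT and read off the modes.

dt = 2e-12                                   # time step: 2 ps
t = np.arange(0, 1e-9, dt)                   # delays up to 1 ns
background = 0.5 * np.exp(-t / 400e-12)      # invented incoherent background

def trace(sign):
    kittel = 1.0 * np.cos(2*np.pi*12.6e9*t) * np.exp(-t/500e-12)
    pssw   = 0.4 * np.cos(2*np.pi*18.2e9*t) * np.exp(-t/300e-12)
    return sign * (kittel + pssw) + background

diff = (trace(+1) - trace(-1)) / 2           # precession survives the difference
# Subtract a residual slow background (fit on rescaled time for conditioning).
diff -= np.polyval(np.polyfit(t * 1e9, diff, 3), t * 1e9)

spec = np.abs(np.fft.rfft(diff))**2
freqs = np.fft.rfftfreq(len(t), dt)
peak = freqs[np.argmax(spec)]                # strongest line: the Kittel mode
```

With a 1 ns scan the frequency resolution is 1 GHz, so the 12.6 GHz and 18.2 GHz peaks are well separated, as in Fig. \[fig:DataAnalysis\].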
Using the theoretical dispersion of the Kittel mode in the Landau-Lifshitz formalism, $$\label{eq:kittel-dispersion} \omega^\mathrm{theo}_K = \gamma \mu_0 \sqrt{H_x \big(H_x + M_\mathrm{S} \big)},$$ the profile in observed frequency can be used to calculate the laser induced temperature increase: In Eq. (\[eq:kittel-dispersion\]) the saturation magnetization ${\ensuremath{\mu_0 M_\mathrm{S}}}$ is regarded as the only free parameter, such that $\omega^\mathrm{theo}_K = \omega^\mathrm{theo}_K ( {\ensuremath{M_\mathrm{S}}})$. Comparing this with the experimentally observed frequency profile $\omega^\mathrm{exp}_K ( \Delta x)$ (open squares in Fig. \[fig:Conti-Position-Dependence\] (top)), a corresponding profile in magnetization ${\ensuremath{M_\mathrm{S}}}( \Delta x)$ is calculated. The magnetization profile is then compared to the magnetization curve $M(T)$ obtained for a CoFeB sample of equal thickness and composition using a Vibrating Sample Magnetometer (VSM) (inset in Fig. \[fig:Conti-Position-Dependence\]). The resulting position dependent temperature profile is shown in Fig. \[fig:Conti-Position-Dependence\] (bottom). ![image](Fig4){width="98.00000%"} While we expect that the Kittel and PSSW modes do not propagate across the sample plane, in magnonic crystals composed of periodically arranged antidots the excitation of dipolar Damon-Eshbach surface waves (DE) of selective wave vector has been shown [@Ulrichs2010APL_Magnonic]. This provides a possibility for information transport via local excitation of propagating spin waves followed by a spatially separated detection. Magnetization dynamics measurements on a magnonic crystal and their analysis are presented in Fig. \[fig:PeakHeights\]. In these measurements, pump and probe beam were separated (a) parallel, (b) orthogonal and (c) at an angle of 45[$^\circ$]{} to the direction of the external magnetic field.
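The inversion of Eq. (\[eq:kittel-dispersion\]) for ${\ensuremath{M_\mathrm{S}}}$ can be sketched as follows; the gyromagnetic ratio and the use of the in-plane projection of the $130{\, \mathrm{mT}}$ field are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Invert the Kittel formula f = (gamma*mu0/2pi) * sqrt(H_x*(H_x + M_S))
# for mu0*M_S, given an observed precession frequency. All numbers are
# illustrative assumptions (free-electron gamma, in-plane field projection).

gamma_over_2pi = 28.024e9                   # gyromagnetic ratio / 2pi (Hz/T)
mu0_Hx = 0.130 * np.cos(np.deg2rad(20))     # assumed in-plane component (T)

def kittel_f(mu0_Ms):
    """Kittel frequency (Hz) for a given mu0*M_S (T)."""
    return gamma_over_2pi * np.sqrt(mu0_Hx * (mu0_Hx + mu0_Ms))

def invert_Ms(f):
    """Solve f^2 = (gamma*mu0/2pi)^2 * Hx * (Hx + Ms) for mu0*Ms."""
    return (f / gamma_over_2pi) ** 2 / mu0_Hx - mu0_Hx

f_obs = 12.6e9                              # observed Kittel frequency from the text
mu0_Ms = invert_Ms(f_obs)                   # inferred mu0*M_S, order of 1.5 T
```

Applying `invert_Ms` point by point to the measured frequency profile $\omega^\mathrm{exp}_K(\Delta x)/2\pi$ gives the magnetization profile ${\ensuremath{M_\mathrm{S}}}(\Delta x)$, which the VSM curve $M(T)$ then converts into a temperature profile.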
In contrast to the measurements on the continuous film, an additional (magnonic) Damon-Eshbach mode is visible (bottom images of Fig. \[fig:PeakHeights\]). As already observed above for the Kittel and PSSW modes, the precession frequency of the DE mode shows a Gaussian dependence on position with a minimal frequency at the position of maximal pump intensity, which is caused by the increased temperature. The measurements should reveal a second effect, though: Damon-Eshbach surface waves are known to propagate mainly orthogonal to the direction of the in-plane magnetic field. In magnonic antidot crystals, only a DE mode parallel to the direction of the smallest hole-to-hole distance is detected [@Ulrichs2010APL_Magnonic]. Since the propagation direction of a dipolar surface wave is rotated by 180[$^\circ$]{} when the external field is reversed, and because the data shows the difference between the signals measured for positive and negative field direction, a spatial widening of the DE mode is expected. From the damping time in the TRMOKE data of $300-800{\, \mathrm{ps}}$, the propagation length of spin waves in CoFeB can be estimated to be at least $100{\, \mbox{\textmu}\mathrm{m}}$, so that detection of the DE mode should be possible well outside of the pump spot. The magnetization oscillation’s Fourier power for each measurement is plotted in the top row of Fig. \[fig:PeakHeights\]. Solid lines represent Gaussian fits to the experimental points; the fitted widths amount to around $50{\, \mbox{\textmu}\mathrm{m}}$. By comparison of the (localized) Kittel and the (propagating) DE mode, the surface mode’s propagation characteristics can be analyzed. Since both modes show the same FWHM, it can be concluded that no propagation occurs into the direction of the external field (Fig. \[fig:PeakHeights\](a)), as could be presumed.
Peculiarly, for the orthogonal and 45[$^\circ$]{} configurations there is no significant broadening of the DE mode either, meaning that there is no propagation out of the excitation spot. Two damping mechanisms can be considered to explain the observed behavior: On the one hand, the pump pulse creates a magnon population of very high density, rendering the picture of ballistic spin wave propagation invalid. Instead, intense scattering takes place that results in a strong overall damping. On the other hand, a spin wave travelling away from the spot of excitation would propagate towards an increasing effective saturation magnetization due to the heat gradient imposed by the pump laser. As we have shown, this change in saturation magnetization drastically impacts the supported frequency, and consequently must result in repeated scattering of the DE magnons. The presented experiments and their analysis make two important points: Firstly, an effective spin-wave well is formed by the local absorption of the optical pump pulse. In analogy to [@Kolokoltsev2012JAP_Hot], we observe a magnetization profile ${\ensuremath{M_\mathrm{S}}}(x)$ that follows the intensity profile of the optical excitation and strongly influences the observed spin wave spectrum. Despite the ultrashort character of the excitation, the temperature profile remains in effect over the complete range of observed time delays, namely up to $1{\, \mathrm{ns}}$. In view of magnonics and its applications, a possible scenario is an optical lattice on a continuous magnetic film. Effectively, a dynamic magnonic crystal can be created in this way without the limitations of lithography. Secondly, we observed the absence of spin wave propagation away from the excitation spot, which is mainly caused by two distinct mechanisms: As discussed in references [@Djordjevic2007PRB_Connecting; @Lenk2010PRB_Spinwave; @Lenk2011PR_Building], optical spin wave excitation is highly non-equilibrium.
The resulting spin wave density is far above the ballistic limit, thus leading to a high probability for scattering between spin waves and a drastically reduced mean free path. A temperature gradient imposes additional scattering as spin waves are continuously reflected when entering a colder region with higher saturation magnetization. This effect might be used to trap spin waves or selectively block their propagation and must certainly be considered when optically exciting propagating surface waves. Maria Mansurova thanks Soham Manni for assistance and discussion during VSM measurements. We thank the German Research Foundation (DFG) for funding through MU 1780/6-1 Photo-Magnonics, SPP 1538 SpinCaT and SFB 1073.

[18]{}

<http://iopscience.iop.org/0022-3727/43/26/260301>
<http://www.sciencedirect.com/science/article/pii/S0370157311001694>
<https://www.sciencemag.org/content/339/6125/1295>
<http://scitation.aip.org/content/aip/journal/apl/101/19/10.1063/1.4767137>
<http://scitation.aip.org/content/aip/journal/jap/112/1/10.1063/1.4730927>
<http://www.nature.com/nphoton/journal/v6/n10/full/nphoton.2012.218.html>
<http://link.aps.org/doi/10.1103/PhysRevB.82.134443>
<http://link.aps.org/doi/10.1103/PhysRevLett.110.097201>
<http://link.aps.org/doi/10.1103/PhysRevLett.100.047205>
<http://scitation.aip.org/content/aip/journal/apl/97/9/10.1063/1.3483136>
<http://www.nature.com/nmat/journal/v10/n10/abs/nmat3076.html#supplementary-information>
<http://ao.osa.org/abstract.cfm?URI=ao-20-14-2334_1>
<http://scitation.aip.org/content/aip/journal/apl/29/6/10.1063/1.89085>
<http://link.aps.org/doi/10.1103/PhysRevB.9.5056>
<http://scitation.aip.org/content/aip/journal/jap/99/8/10.1063/1.2177141>
<http://link.aps.org/doi/10.1103/PhysRevB.75.012404>
\[defi\][Lemma]{} \[defi\][Theorem]{} \[defi\][Proposition]{} **Statistical Model with Measurement Degree of Freedom and Quantum Physics** $\mbox{Masahito Hayashi}^{1}$ [masahito@qci.jst.go.jp]{} $\mbox{Keiji Matsumoto}^{1, 2}$ [keiji@nii.ac.jp]{} Introduction ============ This is an English translation of the manuscript [@original] which appeared in Surikaiseki Kenkyusho Kokyuroku No. 1055 (1998). In the estimation of an unknown density operator from experimental data, the error can be reduced by improving the design of the experiment. Therefore, it is natural to ask what the limit of this improvement is. To answer the question, Helstrom [@Helstrom:1976] founded quantum estimation theory, in analogy with classical estimation theory (in this manuscript, we refer to the statistical estimation theory of probability distributions as ‘classical estimation theory’). Often, for simplicity, it is assumed that the state belongs to a family ${\cal M} =\{\rho_\theta|\theta\in \Theta\subset {\R}^m\}$ of states, which is called a [*model*]{}, and that the finite-dimensional parameter $\theta$ is to be estimated statistically. He considered the quantum analogue of the Cram[é]{}r-Rao inequality, which gives a lower bound on the mean square error of locally unbiased estimates. This bound, however, is not achievable in general when the number of data is finite. Let us assume that the number $n$ of data tends to infinity. Then, if some regularity conditions are assumed, it is concluded that if the estimate is consistent, [*i.e.*]{}, the estimate converges to the true value of the parameter, the first order asymptotic term of the mean square error satisfies the Cram[é]{}r-Rao inequality, and that the bound is achieved for all $\theta\in\Theta$. This kind of discussion is called [*first order asymptotic theory*]{}. The quantum version of first order asymptotic theory was initiated by H. Nagaoka [@Nagaoka:1989:2][@Nagb].
He defined, in our terminology, the quasi-quantum Cramér-Rao type bound, and pointed out that the bound is achieved asymptotically and globally. The proof of achievability, however, is only roughly sketched in his paper. In this manuscript, the proof of the achievability of the bound is fully written out, and the regularity conditions for the achievability are revealed. In addition, we define another bound, the quantum Cramér-Rao type bound, and show that the new bound is also achievable if the use of quantum correlation between samples is allowed. Preliminaries ============= An estimate $\hat\theta$ is obtained as a function $\hat\theta(\omega)$ of the data $\omega\in \Omega$ with values in ${\R}^m$. The purpose of the theory is to obtain the best estimate and its accuracy. The optimization is done by an appropriate choice of the measuring apparatus and of the function $\hat\theta(\omega)$ from the data to the estimate. Let $\sigma({\R}^m)$ be a $\sigma$-field in the space ${\R}^m$. Whatever apparatus is used, the probability that the data $\omega\in\Omega$ lie in a measurable subset $B$ of $\Omega$ writes $$\begin{aligned} {\rm Pr}\{ \omega \in B|\theta \} =\tr \rho( \theta )M(B), \Label{eqn:pdm}\end{aligned}$$ when the true value of the parameter is $\theta$. Here, $M$, which is called a [*positive operator valued measure (POM, in short)*]{}, is a mapping from subsets $B\subset \Omega$ to non-negative Hermitian operators in ${\cal H}$ such that $$\begin{aligned} &&M(\phi)=O, \quad M(\Omega)=I,\nonumber\\ &&M(\bigcup_{i=1}^{\infty} B_i) =\sum_{i=1}^{\infty}M(B_i) \;\;(B_i\cap B_j=\phi,\;i\neq j), \Label{eqn:maxiom}\end{aligned}$$ (see Ref. [@Helstrom:1976], p. 53 and Ref. [@Holevo:1982], p. 50). Conversely, some apparatus corresponds to any POM $M$ [@Steinspring:1955][@Ozawa:1984]. Therefore, we refer to the measurement which is controlled by the POM $M$ as ‘measurement $M$’. A pair $(\hat\theta,M)$ is called an [*estimator*]{}.
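As a concrete illustration of a POM and the induced outcome distribution $(\ref{eqn:pdm})$, the following sketch builds a two-outcome projective POM for a single qubit and checks the axioms $(\ref{eqn:maxiom})$ numerically. The state family, the measurement angle, and all function names are assumptions for illustration and are not taken from the manuscript.

```python
import numpy as np

def rho(theta):
    # Illustrative qubit family: rho_theta = (I + theta * sigma_z) / 2, |theta| < 1.
    sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return 0.5 * (np.eye(2) + theta * sigma_z)

def pom(a):
    # Two-outcome projective POM onto an orthonormal basis rotated by angle a.
    v0 = np.array([np.cos(a / 2.0), np.sin(a / 2.0)])
    v1 = np.array([-np.sin(a / 2.0), np.cos(a / 2.0)])
    return [np.outer(v0, v0), np.outer(v1, v1)]

def probs(theta, M):
    # Outcome distribution P^M_theta(i) = tr(rho_theta M_i).
    return np.array([np.trace(rho(theta) @ Mi).real for Mi in M])

M = pom(0.3)
p = probs(0.4, M)
```

Since each $M_i$ is non-negative and $M_0+M_1=I$, the vector `p` is a genuine probability distribution for every $\theta$.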
The classical Fisher information matrix $J^M_\theta$ of the POM $M$ is defined, as in classical estimation theory, by $$\begin{aligned} J^M_\theta:= \left[\int_{\omega\in\Omega} \partial_i\log\frac{{\rm d}{\rm P}^M_\theta}{{\rm d}\nu} \partial_j\log\frac{{\rm d}{\rm P}^M_\theta}{{\rm d}\nu} \,{\rm d}{\rm P}^M_\theta\right],\end{aligned}$$ where $\partial_i=\partial/\partial\theta^i$, ${\rm P}^M_\theta(B):=\tr\rho_\theta M(B)$, and $\nu$ is some underlying measure (in this manuscript, we assume that for any POM $M$ there is a measure $\nu$ on $\Omega$ such that ${\rm P}^M_\theta\prec \nu$ for all $\theta\in\Theta$). Denote the mean square error matrix of $(\hat\theta, M)$ by ${\rm V}_{\theta}[\hat\theta, M]$, and, as the measure of accuracy, let us take ${\rm Tr}G {\rm V}_{\theta}[\hat\theta, M]$, where $G$ is a nonnegative symmetric real matrix. If $G={\rm diag}(g_1,\cdots,g_m)$, then ${\rm Tr}G {\rm V}_{\theta}[\hat\theta, M]$ is the weighted sum of the mean square errors of the estimates $\hat\theta^i$ of the components $\theta^i$ of the parameter. An estimator $(\hat\theta,\,M)$ is said to be locally unbiased at $\theta$ if $$\begin{aligned} {\rm E}^j_{\theta}[\hat\theta,\,M]:= \int\hat\theta^j(\omega) \tr\rho_\theta\, M({\rm d}\omega) &=&\theta^j,\quad(j=1,\cdots,m),\\ \int\hat\theta^j(\omega) \tr\partial_k\rho_\theta\, M({\rm d}\omega) &=&\delta^j_k,\quad(j,k=1,\cdots,m).
\label{eqn:local_unbiasedness}\end{aligned}$$ Then, $J^M_\theta$ is characterized by $$\begin{aligned} (J^{M}_\theta)^{-1}=\inf\{V_\theta[\hat\theta,\,M]\,|\, \mbox{$\hat\theta:(\hat\theta,\,M)$ is locally unbiased}\},\end{aligned}$$ and the quasi-quantum [*Cramér-Rao type bound*]{} $C_\theta(G)$ is defined by $$\begin{aligned} C_\theta(G)&:=&\inf\{{\rm Tr}G{\rm V}_{\theta}[\hat\theta,\,M]\:|\: \mbox{$(\hat\theta,\,M)$ is locally unbiased}\}\nonumber\\ &=&\inf\{{\rm Tr}G (J^{M}_\theta)^{-1}\:|\: \mbox{$M$ is a POM in ${\cal H}$}\}.\end{aligned}$$ Nagaoka pointed out that the quasi-quantum Cramér-Rao type bound is achievable asymptotically for every $\theta\in\Theta$ [@Nagaoka:1989:2]. $C_\theta(G)$ has been calculated explicitly for several special cases [@Matu][@Haya]. Suppose $n$ i.i.d. copies $\rho_\theta^{\otimes n}$ of the unknown state $\rho_\theta$ are given. Then a sequence $\{(\hat\theta_n,\,M_n)\}$, where $M_n$ is a POM in ${\cal H}^{\otimes n}$, is said to be [*MSE consistent*]{} if the estimate $\hat\theta_n$ converges to the true value of the parameter in the mean, [i.e.]{}, $\lim_{n\to\infty} V_\theta[(\hat\theta_n,\,M_n)]=0$. The quasi-quantum Cramér-Rao type bound ======================================= The lower bound --------------- Let $M_{(1)},\ldots, M_{(n)}$ be a sequence of POMs in ${\cal H}$; apply the measurement $M_{(1)}$ to the first sample, the measurement $M_{(2)}$ to the second sample, and so on. The choice of $M_{(k)}$ may depend on the outcomes $\vec{\omega}_{k-1}=(\omega_{(1)},\ldots,\omega_{(k-1)})$ of $M_{(1)},\ldots, M_{(k-1)}$. To make the dependency of $M_{(k)}$ on $\vec{\omega}_{k-1}$ explicit, we write $M_{(k)}[\vec{\omega}_{k-1}]$.
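To make the quantity $J^M_\theta$ tangible, the following sketch evaluates the classical Fisher information of the outcome distribution of a $\sigma_z$ measurement on the illustrative qubit family $\rho_\theta=(I+\theta\sigma_z)/2$, for which the closed form $J^M_\theta = 1/(1-\theta^2)$ is easy to verify. The family and the finite-difference scheme are assumptions for illustration only.

```python
import numpy as np

def outcome_probs(theta):
    # A sigma_z measurement on rho_theta = (I + theta * sigma_z) / 2
    # gives P(+1) = (1 + theta) / 2 and P(-1) = (1 - theta) / 2.
    return np.array([(1.0 + theta) / 2.0, (1.0 - theta) / 2.0])

def fisher_information(theta, eps=1e-6):
    # J^M_theta = sum_i (d p_i / d theta)^2 / p_i, via central differences.
    p = outcome_probs(theta)
    dp = (outcome_probs(theta + eps) - outcome_probs(theta - eps)) / (2.0 * eps)
    return float(np.sum(dp ** 2 / p))

theta = 0.6
J = fisher_information(theta)   # analytic value: 1 / (1 - theta^2)
```

By the characterization above, $1/J$ bounds the variance of any locally unbiased estimate based on this particular measurement.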
Let us define the POM $M_n$ in ${\cal H}^{\otimes n}$ which takes values in $\Omega^n$ by $$\begin{aligned} M_n(B)= \int_{\vec{\omega}_n\in B}\bigotimes_{k=1}^n M_{(k)}[\vec{\omega}_{k-1}]({\rm d}\omega_{(k)}).\end{aligned}$$ Then the data $\vec{\omega}_n$ are distributed according to the probability distribution $P^{M_n}_\theta(B)=\tr\rho_\theta^{\otimes n}M_n(B)$. The estimator is said to be [*asymptotically unbiased*]{} if $$\begin{aligned} (B_n)^i=\left(B_{\theta}\left(\hat{\theta}_n,M_n\right)\right)^i &:= &\int_{\Omega} \left(\hat{\theta}^i_n (\omega) - \theta^i \right) {\rm P}^{M_n}_{\theta}(\,{\rm d} \omega)\to 0 ~\hbox{ as } n \to \infty, \Label{k3}\\ (A_n)^i_j= \left(A_{\theta}\left(\hat{\theta}_n,M_n\right)\right)^i_j &:=& \frac{\partial}{\partial \theta^j}E^i_{\theta}[\hat\theta_n,\,M_n] \to \delta^i_j ~\hbox{ as } n \to \infty.\Label{20}\end{aligned}$$ An MSE consistent estimator always satisfies $(\ref{k3})$. Therefore, if appropriate regularity conditions are assumed so that the differential, the integral and the trace commute with each other, then $(\ref{20})$ is also satisfied, and the estimator is asymptotically unbiased.
If $\{(\hat\theta_n,\, M_n)\}$ is MSE consistent, and $\vliminf_{n \to \infty} n {\rm V}_{\theta}[(\hat\theta_n,\, M_n)]$ exists, then $$\begin{aligned} \vliminf_{n \to \infty} n \Tr G {\rm V}_{\theta}[(\hat\theta_n,\,M_n)] \ge C_{\theta}(G). \Label{siki}\end{aligned}$$ In almost the same manner as in classical estimation theory, $(\ref{20})$ leads to $$\begin{aligned} n {\rm V}_{\theta}[\hat\theta_n,\,M_n] \ge n A_n \left( J_{\theta}^{M_n} \right)^{-1} ~^t A_n. \Label{21}\end{aligned}$$ An elementary calculation leads to $$\begin{aligned} \frac{1}{n} J_{\theta}^{M_n} = J_{\theta}^{M^n_{\theta}}, \Label{100}\end{aligned}$$ where $M^n_{\theta}$ is a POM in ${\cal H}$ taking values in $\Omega^n$, defined by $$\begin{aligned} M^n_{\theta}(\prod_{k=1}^n B_k)= \int_{\vec{\omega}_n} \frac{1}{n}\sum_{k=1}^n M_{(k)}[\vec{\omega}_{k-1}](B_k) {\rm P}_{\theta}^{M_{n}}(\,{\rm d}\vec{\omega}_{n}\,).\end{aligned}$$ $(\ref{21})$ and $(\ref{100})$ yield $$\begin{aligned} \Tr G \, n {\rm V}_{\theta}[\hat\theta_n,\,M_n] \ge \Tr G A_n \left( J_{\theta}^{M^n_{\theta}} \right)^{-1} ~^t A_n \ge C_{\theta}(~^t A_n G A_n). \Label{22}\end{aligned}$$ Passing both sides of $(\ref{22})$ to the limit $n\to\infty$, we have the theorem. Estimator which achieves the bound ---------------------------------- The estimator defined in the following achieves equality in the inequality $(\ref{siki})$ if the regularity conditions [(B.1-5)]{} are satisfied. The proof will be presented later in subsection \[subsec:k2\]. > First, apply a measurement $M_0$ to $\sqrt{n}$ samples of the unknown state $\rho_\theta$, and calculate $\check\theta_n$ which satisfies $(\ref{k1})$. Second, apply the measurement $M_{\check{\theta}_{n}}$ to the remaining $n-\sqrt{n}$ samples, where $M_{\theta}$ is a POM chosen so that $$\begin{aligned} > \Tr G \left(J_{\theta}^{M_{\theta}}\right)^{-1} > \leq C_{\theta}(G)+\epsilon' > \Label{k2}\end{aligned}$$ is satisfied.
Then, $\hat\theta_n$ is defined to be $\overline{\theta}_n(\check{\theta}_n)$, where $\overline{\theta}_n(\theta')$ is defined by $$\begin{aligned} > \overline{\theta}_n(\theta')= > \argmax_{\theta\in\Theta} > \sum_{k=\sqrt{n}+1}^n > \log\frac{{\rm dP}_{\theta}^{M_{\theta'}}}{{\rm d}\nu}(\omega_k).\end{aligned}$$ Regularity conditions --------------------- - There is a POM $M_0$ and an estimate $\check\theta_n$ which satisfy $$\begin{aligned} \lim_{n\rightarrow\infty} {\rm P}_{\theta}^{ M_n}\{\|\theta-\check{\theta}_n\| \,> \delta \} =0,\quad \forall \delta>0. \Label{k1}\end{aligned}$$ - $K:=\sup_{\theta\in\Theta}\|\theta\|$ is finite. - $\overline{\theta}_n(\theta')$ achieves the equality in the classical asymptotic Cramér-Rao inequality of the family $\{{\rm P}_{\theta}^{M_{\theta'}}\,|\,\theta\in\Theta\}$ of probability distributions. - The higher order term of the mean square error of $\overline{\theta}_n(\theta')$ is uniformly bounded when $\|\theta'-\theta\|<\delta_1$ for some $\delta_1>0$. In other words, for any $\epsilon \,> 0$ and $\theta \in \Theta$, there exist a positive real number $\delta_1 \,>0$ and a natural number $N$ such that $$\begin{aligned} \left| (n-\sqrt{n}) \Tr G {\rm V}_{\theta,n} - \Tr G \left( J_{\theta}^{M_{\check{\theta}}} \right)^{-1} \right| \,< \epsilon+\epsilon',\quad \forall n\ge N,\;\;\forall \check{\theta}\;\; s.t.\;\; \|\theta-\check{\theta}\| \le \delta_1, \Label{k4}\end{aligned}$$ where ${\rm V}_{\theta,n}$ is the conditional mean square error matrix of $\hat\theta_n$ given $\check\theta_n$. - For any $\epsilon \,>0$ and $\theta \in \Theta$, there exists $\delta_2 \,> 0$ such that $$\begin{aligned} \left| \Tr G \left( J_{\theta}^{M_{\check{\theta}}} \right)^{-1} - C_{\theta}(G) \right| \,< \epsilon+\epsilon' , \quad \forall \check{\theta}\;\; s.t.\;\; \| \theta- \check{\theta}\| \,< \delta_2. \Label{k5}\end{aligned}$$ [(B.1)]{} is satisfied almost always, and [(B.2)]{} is not restrictive.
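The two-stage procedure above (a rough estimate $\check\theta_n$ from $\sqrt{n}$ samples, then a measurement adapted to $\check\theta_n$ on the remaining samples) can be simulated for an illustrative one-parameter model in which a binary measurement with setting $a$ has outcome probability $\cos^2(\theta-a)$. The model, the adaptation rule and all names below are assumptions for illustration only, not part of the manuscript.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(theta, a, n):
    # Simulate n binary outcomes with success probability cos^2(theta - a).
    return rng.binomial(n, np.cos(theta - a) ** 2)

def two_stage_estimate(theta, n):
    # Stage 1: sqrt(n) samples with the fixed setting a = 0 give a rough estimate.
    n1 = int(np.sqrt(n))
    k1 = measure(theta, 0.0, n1)
    rough = np.arccos(np.clip(np.sqrt(k1 / n1), 0.0, 1.0))
    # Stage 2: adapt the setting so that theta - a is close to pi/4, where the
    # outcome probability is near 1/2 and locally invertible; then invert.
    a = rough - np.pi / 4.0
    n2 = n - n1
    k2 = measure(theta, a, n2)
    return a + np.arccos(np.clip(np.sqrt(k2 / n2), 0.0, 1.0))

theta_true = 0.3            # assumed to lie in (0, pi/2)
estimate = two_stage_estimate(theta_true, 100000)
```

Note that the stage-1 error cancels to first order in the final estimate, which is the mechanism that lets the adaptive scheme attain the bound asymptotically.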
Since $\overline{\theta}_n(\theta')$ is the maximum likelihood estimator of the family $\{{\rm P}^{M_{\theta'}}_\theta\}$ of probability distributions, [(B.3)]{} is satisfied in usual cases. The validity of [(B.4)]{}, however, is hard to verify. Therefore, in the future, this condition needs to be replaced by other conditions. Obviously, [(B.5)]{} reduces to the following [(B.5.1-2)]{}, both of which are natural. - The map $\theta \mapsto C_{\theta}(G)$ is continuous. - For any $\theta'$, the map $\theta\mapsto [J^{M_{\theta'}}_\theta]^{-1}$ is continuous. Proof of achievability --------------------- If the model ${\cal M}$ satisfies the conditions [*(B.1-5)*]{} above, then we have $$\begin{aligned} \lim_{n \to \infty} n \Tr G {\rm V}_{\theta}[\hat\theta_n,\,M_n] = C_{\theta}(G) ,\quad\forall \theta \in \Theta. \Label{48}\end{aligned}$$ Let us choose $\delta_1,\delta_2$ and $N$ so that $(\ref{k4}-\ref{k5})$ are satisfied, and define $\delta':= \min(\delta_1,\delta_2)$. Then, if $n\ge N$, we have $$\begin{aligned} &&\quad n \Tr G {\rm V}_{\theta}[\hat\theta_n,\,M_n]\\ &&= n \int \Tr G {\rm V}_{\theta}[\overline{\theta}_n(\check\theta_n),\,M_n ] {\rm P}_\theta^{M_n}( \,{\rm d} \omega) \\ &&\le n \int_{\|\theta-\check{\theta}_n\| \le \delta'} \Tr G {\rm V}_{\theta}[\overline{\theta}_n(\check\theta_n),\,M_n ] {\rm P}_\theta^{M_n}( \,{\rm d} \omega) + K^2 \Tr G \int_{\|\theta-\check{\theta}_n\|\,> \delta'} {\rm P}_\theta^{ M_n }( \,{\rm d} \omega) \\ &&\le \frac{n}{n-\sqrt{n}}\int_{\|\theta-\check{\theta}_n\| \le \delta'} \left( \Tr G \left( J_{\theta}^{M_{\check{\theta}_n}} \right)^{-1} + \epsilon +\epsilon'\right) {\rm P}_\theta^{ M_n }( \,{\rm d} \omega) + n K^2 \Tr G {\rm P}_\theta^{ M_n } \{\|\theta-\check{\theta}_n\| \,> \delta' \} \\ &&\le \frac{n}{n-\sqrt{n}}\int_{\|\theta-\check{\theta}_n\| \le \delta'} \left( C_{\theta}(G) + 2\epsilon +\epsilon'\right) {\rm P}_\theta^{ M_n }( \,{\rm d} \omega) + n K^2 \Tr G {\rm P}_\theta^{ M_n } \{\|\theta-\check{\theta}_n\|
\,> \delta' \} \\ &&\le \frac{n}{n-\sqrt{n}}(C_{\theta}(G) + 2\epsilon +\epsilon') + n K^2 \Tr G \, {\rm P}_\theta^{ M_n }\{\|\theta-\check{\theta}_n\| \,> \delta' \}.\end{aligned}$$ [(B.1)]{} implies that the second term of the last line tends to $0$ as $n\to\infty$. Therefore, we have, for every $\epsilon'>0$ and for every $\epsilon>0$, $$\begin{aligned} \limsup_{n \to \infty} n \Tr G {\rm V}_{\theta}[\hat\theta_n,\,M_n] \le C_{\theta}(G) + 2\epsilon+\epsilon',\end{aligned}$$ which leads to the theorem. Use of quantum correlation ========================== In this section, we consider the minimization of the asymptotic mean square error where $M_n$ runs over all POMs satisfying MSE consistency. Physically, this means that we allow the use of interactions between samples. So far, we have considered POMs taking values in $\Omega$, the totality of all possible data. Instead, in this section, we consider POMs with values in ${\R}^m$, for if $M$ is a POM with values in $\Omega$, then $M\circ\hat\theta^{-1}$ is a POM with values in ${\R}^m$. MSE consistency is defined in the same way as in the preceding sections. Let $C_{\theta}^n(G)$ denote the quasi-quantum Cramér-Rao type bound of the family $\{ \rho^{\otimes n}_{\theta} |\theta \in \Theta \}$ of density operators in ${\cal H}^{\otimes n}$. Then, the quantum Cramér-Rao type bound $C_{\theta}^Q(G)$ is defined by $$\begin{aligned} C_{\theta}^Q(G):= \vliminf_{n \to \infty} n C_{\theta}^n(G).\end{aligned}$$ Since $C_{\theta}(G) \ge n C_{\theta}^{n}(G)$ holds true, we have $$\begin{aligned} C_{\theta}(G) \ge C_{\theta}^Q(G). \end{aligned}$$ If the sequence $\{ M_n \}_{n=1}^{\infty}$ is MSE consistent, we have $$\begin{aligned} \vliminf_{n \to \infty} n \Tr G {\rm V}_{\theta}\left[M_n \right] \ge C^Q_{\theta}(G).
\Label{jen}\end{aligned}$$ In almost the same manner as in the proof of Theorem \[teiri\], we have $$\begin{aligned} {\rm V}_{\theta}\left[M_n\right] &\ge& A_{n} \left( J_{\theta}^{M_n} \right)^{-1} ~^t A_{n}, \\ n \Tr G {\rm V}_{\theta}[M_n] &\ge& n \Tr G A_{n} \left( J_{\theta}^{M_n} \right)^{-1} ~^t A_{n} \\ & \ge& n C_{\theta}^n\left (~^t A_{n} G A_{n} \right),\end{aligned}$$ which approaches $(\ref{jen})$ as $n\to \infty$. If the family $\{ \rho_{\theta}^{\otimes n} | \theta \in \Theta \}$ of density operators satisfies [(B.1-5)]{}, we have the following theorem. There is an MSE consistent sequence $\{ M_n\}$ of POMs such that $\lim_{n \to \infty} n \Tr G {\rm V}_{\theta}[M_n]\leq C_{\theta}^Q(G) + \epsilon$ is satisfied for every $\epsilon>0$ and for every $\theta\in\Theta$. Let us divide the $n$ samples into $n_2$ groups, each of which consists of $n_1$ samples, and let $M_{(1)}^{n_1},\ldots, M_{(n_2)}^{n_1}$ be a sequence of POMs in ${\cal H}^{\otimes n_1}$. Apply the measurement $M_{(1)}^{n_1}$ to the first group $\rho_\theta^{\otimes n_1}$ of samples, $M_{(2)}^{n_1}$ to the second group, and so on. The choice of $M_{(k)}^{n_1}$ may depend on the outcomes of the measurements $M_{(1)}^{n_1},\ldots, M_{(k-1)}^{n_1}$. With $n_1$ fixed, let $n_2$ tend to $\infty$. Then, Theorem \[te3\] implies the existence of an MSE consistent sequence $\{M_n\}$ of POMs which satisfies $$\begin{aligned} \lim_{n \to \infty} n \Tr G {\rm V}_{\theta}[M_n]= \lim_{n_2 \to \infty} n_1 n_2 \Tr G {\rm V}_{\theta}[M_n]= n_1 C_{\theta}^{n_1}(G).\end{aligned}$$ For any $\epsilon>0$, if $n_1$ is sufficiently large, $\lim_{n \to \infty} n \Tr G {\rm V}_{\theta} \left[M_n \right]\leq C_{\theta}^Q(G) + \epsilon$ is satisfied, and we have the theorem. [99]{} M. Hayashi and K. Matsumoto, “Statistical Model with Measurement Degree of Freedom and Quantum Physics,” Large deviation and statistical inference (Kyoto, 1998), Surikaiseki Kenkyusho Kokyuroku No. 1055 (1998) 96–110. M.
Hayashi, “A Linear Programming Approach to Attainable Cramér-Rao Type Bound,” in [*Quantum Communication, Computing, and Measurement*]{}, edited by O. Hirota, A. S. Holevo, and C. M. Caves (Plenum Publishing, New York, 1997). C. W. Helstrom, [*Quantum Detection and Estimation Theory*]{} (Academic Press, New York, 1976). A. S. Holevo, [*Probabilistic and Statistical Aspects of Quantum Theory*]{} (North-Holland, Amsterdam, 1982) (in Russian, 1980). H. Nagaoka, “On the Parameter Estimation Problem for Quantum Statistical Models,” SITA ’89, 577-582, Dec. (1989). H. Nagaoka, “On the relation between Kullback divergence and Fisher information – from classical systems to quantum systems –,” SITA ’92, 63-72 (1992) (in Japanese). K. Matsumoto, “A Geometrical Approach to Quantum Estimation Theory,” doctoral thesis, Graduate School of Mathematical Sciences, University of Tokyo (1997). E. L. Lehmann, [*Theory of Point Estimation*]{} (John Wiley, 1983). M. Ozawa, “Quantum measuring processes of continuous observables,” J. Math. Phys. [**25**]{}, 79-87 (1984). W. F. Stinespring, “Positive functions on $C^*$-algebras,” Proc. Am. Math. Soc. [**6**]{}, 211-216 (1955).
--- bibliography: - 'references.bib' title: Study of Silicon Photomultiplier Radiation Hardness with the JULIC Cyclotron --- Introduction {#sec:sec-intro} ============ Silicon photomultipliers (SiPMs) are multi-pixel semiconductor devices, with pixels (microcells) arranged on a common silicon substrate. Each microcell consists of a Geiger-mode avalanche photodiode (GM-APD), working above the breakdown voltage ($\it{U_\textrm{bd}}$), and a resistor for passive quenching of the breakdown. SiPMs are designed to have high gain (typically $\sim$ 10$^{6}$), high photon detection efficiency ([PDE]{}) [@SiPM_2018], excellent time resolution, and a wide spectral response range. They can be used to detect light signals at the single photon level. Compared with traditional photomultiplier tubes (PMTs), SiPMs are insensitive to external magnetic fields, more compact, and do not require high operating voltage. These features make SiPMs very attractive photosensors for experiments where excellent particle detection is a key parameter. The future $\overline{\textrm{P}}$ANDA detector at FAIR in Darmstadt, Germany [@PANDA], is one such application that intends to use SiPMs as photosensors to detect the fast scintillation light from the plastic scintillator tiles of the barrel time of flight (barrel-ToF) detector [@BarTofTDR]. The barrel-ToF concept is a modern version of a ToF wall in which large organic scintillator plates are segmented into small scintillator tiles read out by SiPMs. The barrel-ToF detector will be located at $\sim$  radial distance to the beam axis. This location exposes the barrel-ToF to high-energy, high-flux radiation. It is thus imperative to avoid severe degradation of the SiPMs’ performance due to this exposure.
The estimated average equivalent neutron dose on the barrel-ToF detector is on the order of $\sim$ 9.13$\times{10^{9}}$ $\it{n_{eq}}$()/cm$^{2}$ per year [@BarTofTDR] (the equivalent neutron dose can be calculated by multiplying the proton flux, **$\phi_{P}$**, by the hardness factor, $\kappa$, of silicon, which depends on the proton energy and can be deduced from [@Kfactor]). This estimated neutron equivalent flux is based on assuming fused silica material. Because the exact internal structure and doping concentrations of the SiPMs are not disclosed by vendors, it is difficult to accurately estimate or simulate a priori the effect of the radiation damage on the SiPM structure. Thus, the radiation dependence of such devices, as a function of energy and flux, needs to be measured experimentally. The expected damage from the radiation exposure can be categorized, depending on the energy loss process in the interaction between the impinging radiation and the SiPM tile [@GCasse], as follows: 1. Surface damage due to the Ionizing Energy Loss (IEL) process, which is usually caused by photons and light charged particles, e.g. electrons. The surface damage of the SiPM tiles can appear as: 1. Charge build-up on the surface-protection oxide layer of the SiPM. 2. Increase in the leakage current of the SiPM. 2. Bulk damage due to the Non-Ionizing Energy Loss (NIEL) process, which is usually caused by heavier particles, e.g. protons, neutrons and pions. The bulk damage of a SiPM can be observed as: 1. Crystal defects in the bulk of the Si lattice. These are usually generated by heavy particles that penetrate into the bulk of the SiPM die and cause single or multiple displacements of Si atoms (the Primary Knock-on Atoms (PKA)). 2. Change of the effective doping concentration by producing acceptor-like defects, which modifies the depletion (breakdown) voltage. 3. Increase of charge carrier trapping, which leads to a loss of charge (signal), and hence gain. 4.
Easier thermal excitation of electrons and holes, which causes an increase of the leakage current and hence of the dark count noise. Numerous experimental investigations have been conducted to study the influence of proton radiation exposure on the performance of SiPMs [@HEERING2008; @MUSIENKO2009; @BOHN2009; @MATSUMURA2009; @MUSIENKO2015; @HEERING2016; @Lacombe2019]. However, the results from the different studies are inconsistent with each other, especially concerning the operational conditions of the SiPMs after exposure to high and low fluxes. Table $\ref{tab:pre_work}$ summarizes the previous work on SiPM radiation hardness studies with protons, which were mostly conducted at low radiation energies, below [@HEERING2008; @MUSIENKO2009; @BOHN2009; @MATSUMURA2009; @MUSIENKO2015; @Lacombe2019]. There is only one measurement at very high energy, [@HEERING2016]. Comparing the different measurements in Table $\ref{tab:pre_work}$ requires non-trivial assumptions, and there is ambiguity in interpreting the data, which supports our claim for the importance of an experimental measurement of the specific devices of interest to the $\overline{\textrm{P}}$ANDA detector. A first step of these studies is the irradiation test at low energy, which is covered in this paper. These studies will be continued with higher proton momenta up to about 3 GeV/c, an energy region more relevant for the $\overline{\textrm{P}}$ANDA experiment, for which no previous studies of SiPM radiation hardness have been performed.
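The conversion mentioned above, $\phi_{n-eq}=\kappa(E_P)\cdot\phi_{P}$ with the energy-dependent silicon hardness factor $\kappa$, can be sketched as follows. The numerical pairs are taken from Table $\ref{tab:pre_work}$, and the resulting $\kappa$ values are simply the ratios implied by those studies; the helper names are illustrative.

```python
# phi_neq = kappa(E_P) * phi_p: 1 MeV neutron-equivalent fluence obtained from
# the proton fluence and the energy-dependent hardness factor kappa of silicon.
def hardness_factor(phi_p, phi_neq):
    return phi_neq / phi_p

# (E_P [MeV], phi_p [p/cm^2], phi_neq [n_eq/cm^2]) pairs from Table tab:pre_work.
studies = {
    "Musienko 2009":  (83.0,  1.00e10, 2.00e10),
    "Bohn 2009":      (212.0, 3.00e10, 2.40e10),
    "Matsumura 2009": (53.3,  2.80e10, 4.80e10),
}

implied_kappa = {name: hardness_factor(phi_p, phi_neq)
                 for name, (_, phi_p, phi_neq) in studies.items()}
```

For instance, the 83 MeV study implies $\kappa \approx 2$, while at 212 MeV $\kappa < 1$, reflecting the energy dependence of the NIEL damage.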
| **Reference** | **E$_{P}$** \[MeV\] | **$\phi_{P}$** \[P/cm$^{2}$\] | **$\phi_{n-eq}$** \[n$_{eq}$/cm$^{2}$\] | **Dose** \[Gy\] | **Main Results** |
|---|---|---|---|---|---|
| Heering (2008) [@HEERING2008] | 212 | 3.00x10$^{13}$ | 8.00x10$^{12}$ | 1.68x10$^{4}$ | Increase in leakage current; increase in dark count rate; decrease in gain; at max. $\phi_{P}$ SiPMs are not working |
| Musienko (2009) [@MUSIENKO2009] | 83 | 1.00x10$^{10}$ | 2.00x10$^{10}$ | 1.05x10$^{1}$ | Increase in leakage current; increase in dark count rate; no change in $\it{U_\textrm{bd}}$ and R$_{q}$; reduction in PDE and gain |
| Bohn (2009) [@BOHN2009] | 212 | 3.00x10$^{10}$ | 2.40x10$^{10}$ | 1.68x10$^{1}$ | Increase in leakage current; increase in dark noise; at max. $\phi_{P}$ SiPMs are working; reduction in PDE |
| Matsumura (2009) [@MATSUMURA2009] | 53.3 | 2.80x10$^{10}$ | 4.80x10$^{10}$ | 4.20x10$^{1}$ | Increase in leakage current; no significant change in gain; at 21 Gy no photon counting; pulse height reduced at 42 Gy |
| Heering (2014) [@HEERING2016] | 62 | 6.00x10$^{12}$ | 1.20x10$^{13}$ | 8.02x10$^{3}$ | Increase in dark current; shift in $\it{U_\textrm{bd}}$ |
| Musienko (2015) [@MUSIENKO2015] | 62 | 1.00x10$^{12}$ | 2.00x10$^{12}$ | 1.34x10$^{3}$ | Increase in leakage current; increase in dark noise; shift in $\it{U_\textrm{bd}}$; reduction in gain |
| Heering (2016) [@HEERING2016] | 23000 | 1.30x10$^{14}$ | 2.20x10$^{14}$ |  | Increase in dark noise; $\sim$  shift in $\it{U_\textrm{bd}}$; reduction in PDE; at max. $\phi_{P}$ SiPMs are working |
| Lacombe (2019) [@Lacombe2019] | 10 | 7x10$^{10}$ | 3.19x10$^{11}$ | 3.87x10$^{2}$ | Increase in dark current; no change in $\it{U_\textrm{bd}}$ |
|  | 49.7 | 5x10$^{10}$ | 8.42x10$^{10}$ | 3.19x10$^{1}$ |  |

: A summary of the previous studies on SiPM radiation hardness, with the main results of each study summarized. **E$_{P}$** stands for proton energy, **$\phi_{P}$** for proton fluence, **$\phi_{n-eq}$** for neutron equivalent fluence and **Dose** for absorption dose.[]{data-label="tab:pre_work"} According to our calculation, which relies on deducing the stopping power of protons of the stated energy in the Si lattice from the NIST database, and on the stated integrated flux, the dose value of this measurement should be $\sim$ 7.88x10$^{1}$ Gy. Instrumentation {#sec:Inst} =============== The Test Setup {#sec:test_station} -------------- Figure $\ref{fig:CylBeam}$ shows a picture (left) and a sketch (right) of the radiation test station located at the Institut für Kernphysik (IKP), Forschungszentrum Jülich, Germany. The proton beam produced by the JULIC cyclotron [@JULIC], with energy $\sim$ , was defocused to cover an area of about  diameter in order to reduce the particle rate. The SiPMs were mounted in a closed light-tight box on a circle of  diameter around the beam center in order to reduce the dose variation due to position uncertainty. The arrangement is sketched in Figure $\ref{fig:CylBeam}$ (right) for a situation with a centered Gaussian beam profile. For the dose measurement, 4 calibrated Farmer chamber dosimeters [@Dosimeters] were additionally installed on the same circle. Before installing the SiPMs, the beam was centered by adjusting the dose rates on the dosimeters to a comparable level, resulting in an equal dose seen by the SiPMs.
The sensitive volume of a dosimeter is $^{3}$ with a sensitivity of 20 nC/Gy. Every second the accumulated charge in each dosimeter is measured, by which the beam position and intensity are monitored during the irradiation. The area covered by a dosimeter was comparable to the area covered by a SiPM, resulting in a comparable mean dose rate. The absolute dose measurement error was below 10$\%$. The stability of the temperature inside the irradiation box was monitored by a DALLAS DS18B20 programmable-resolution 1-wire digital temperature sensor controlled with an “Arduino UNO” micro controller board [@arduino]. ![ Arrangement of the SiPMs in the irradiation box (left), positioned at a diameter of with four dosimeters as shown in the sketch on the right. In the photo on the left side the dosimeters were not mounted. []{data-label="fig:CylBeam"}](Fig_CylBeam.pdf){width="0.8\linewidth"} The Photo-Sensors {#sec:SiPMs} ----------------- In this study five SiPM devices of interest for the $\overline{\textrm{P}}$ANDA barrel-ToF detector were tested: from KETEK [@ketek] (PM3315-WB-BO and PM3325-WB-BO), Hamamatsu [@hamamatsu] (S13360-3050CS), SensL [@SensL] (MicroFC-30035-SMT) and AdvanSiD [@AdvanSiD] (ASD-NUV3S-P-40). Detailed information for each device is given in Table $\ref{tab:SiPM_prope}$.\ The contacting of the SiPMs was done by soldering pins to the anode and cathode, which were connected to coax cables with LEMO plugs. The operating bias of the SiPMs (and the preamplifier) was supplied by a TTI QL564T power supply [@powersupply].
| **Device** | **Dimensions** \[mm$^{2}$\] | **Fill factor** \[%\] | **Breakdown voltage ($\it{U_\textrm{bd}}$)** @ 22 $\degree$C | **Over voltage ($\it{U_\textrm{ov}}$)** | **Microcell size** \[$\micro$m\] |
|---|---|---|---|---|---|
| KETEK- | 3 x 3 | 65 | 27.5 V | 4.0 V | 15 |
| KETEK- | 3 x 3 | 65 | 26.5 V | 4.2 V | 25 |
| Hamamatsu | 3 x 3 | 74 | 54.8 V | 4.1 V | 50 |
| SensL | 3 x 3 | 64 | 26.0 V | 3.8 V | 35 |
| AdvanSiD | 3 x 3 | 60 | 28.1 V | 4.2 V | 40 |

: Characteristics of the SiPM devices used in this study.[]{data-label="tab:SiPM_prope"} The Data Acquisition (DAQ) System and Data Collection {#sec:DAQ} ----------------------------------------------------- The corresponding electronic circuit for the readout is shown in Figure $\ref{fig:SchDia}$. The SiPMs were operated at 21 $\pm$ 0.2  and a $\it{U_{ov}}$ of $\sim$ . The signals were passed to a KETEK PEVAL-KIT MCX with preamplifier [@KeteckPreamp], resulting in signal heights of   for a single photon. The preamplified signals from the SiPMs were digitized by a $4$-channel CAEN DT5720B digitizer unit [@caen] with 12 bit resolution,  dynamic range, and a maximum sampling rate of . The captured pulses were sent to a PC and recorded by the DAQ CAEN Multi-PArameter Spectroscopy Software (CoMPASS) [@CoMPASS] for further off-line processing with ROOT [@ROOT]. Before irradiation, each SiPM was separately tested for its performance. In order to reduce the noise level, the irradiation box and the preamplifier were placed in an aluminium box equipped with feedthroughs for power supplies and signals. In this configuration a noise level of the amplified signal of $\sim$  could be achieved, which was sufficiently low for a clear separation of the single photon signals from the background.
Therefore the signals from the SiPMs were acquired by setting a  mV threshold on the SiPM output signal amplitude in the discriminator node of the CoMPASS program. The digitizer acquisition time window was set to , with a sampling rate of 250 MS/s; the waveforms were integrated over a fixed time window of  after the trigger to obtain the collected charge of the event. ![Schematics of the readout electronics. For the measurements the SiPM and preamplifier were placed for shielding in an aluminium box (indicated by the dotted line).[]{data-label="fig:SchDia"}](Fig_SchDia.pdf){width="0.6\linewidth"} Data Analysis {#sec:Data analsis} ============= The analysis routine used in this experiment is similar to that explained in detail in [@SiPM_2018]. The offline analysis first discriminates light signals from noise using a pulse finding algorithm (PFA), and then calculates the total charge, $\it{Q}_{tot}$, collected by the SiPM for each event. The PFA selects signal pulses using a series of cuts. It first sets a lower limit on the pulse amplitude above the baseline. It then looks at the correlation between the pulse width and the corresponding integrated charge as a 2D histogram. A ROOT graphical cut is used in this 2D plot to exclude false signals and noise pulses with low charge and/or small width. The signal selection and noise rejection efficiencies of the cuts are measured from the individual spectrum of each cut. The total charge for each identified signal is calculated by integrating the ADC values in the pulse after baseline subtraction. The baseline is defined as the average of the waveform in the time window prior to the trigger. Figure $\ref{fig:Single_photon}$ shows an example of the output charge spectrum for the AdvanSiD SiPM (before radiation exposure), where each peak above zero corresponds to a quantized number of photoelectrons (p.e.). The multi p.e.
peaks are fitted with a sum of independent Gaussian functions to estimate the gain. ![An example of the output charge spectrum of the AdvanSiD SiPM, before exposure. The multi p.e. peaks are fitted with a sum of independent Gaussian functions (red line).[]{data-label="fig:Single_photon"}](Fig_QwFit_BIrr.pdf){width="0.7\linewidth"} For the radiation hardness measurements the irradiation box was placed at the beam line but outside of the aluminium box, which led to a drastic increase in the noise level. Therefore, it was planned to conduct the irradiation test in several steps, with measurements in between. In view of the expected radiation level at the $\overline{\textrm{P}}$ANDA experiment and with the information available about acceptable dose rates from existing measurements, we planned five irradiation steps with a total dose of 50 Gy (corresponding to an integrated proton flux of $\phi_{P}$=1.2x10$^{10}$ p/cm$^{2}$ and a neutron equivalent fluence of $\phi_{n-eq}$=2.45x10$^{10}$ n$_{eq}$/cm$^{2}$) in each step. However, already after the first irradiation step, all SiPMs seemed to be completely damaged. The noise increased to a level of about  compared to before the irradiation, as shown in Fig. \[fig:sipmsignal\]. The typical signals of the SiPMs before irradiation were well separated and far above background with a reasonable signal rate (Fig. \[fig:sipmsignal\], left). The signal structure after irradiation confirms the drastic increase of the dark current, with a drastically increased dark signal rate (Fig. \[fig:sipmsignal\], right). In view of a signal height of $\sim$  for a single photon, the devices are not usable for sensitive measurements at the level of a few photoelectrons. In view of these initial results, and in order to gain deeper insight into the device performance, we analyzed the critical parameters of the SiPMs. ![Typical signals of the SiPMs before (left side) and after (right side) irradiation with an integrated dose of 10 Gy.
The level as well as the width of the noise band is drastically increased.[]{data-label="fig:sipmsignal"}](Fig_sipmsignal.jpg){width="1.\linewidth"} I-V Curve Studies {#sec:IV_Plots} ----------------- Because a SiPM's I-V curve can reveal subtle changes in its characteristics, we measured the I-V behaviour of each device before and after irradiation. All the measurements were taken in the dark. In this measurement, each SiPM was connected to a picoammeter (Keithley 6485 [@picoammeter]) and its leakage current was measured as a function of the bias voltage. The bias voltage was incremented in steps of  up to U$_{bias}$ =, after which the step size was reduced to  up to $\sim$  above $\it{U_\textrm{bd}}$, to improve the accuracy of the breakdown voltage determination. The effective resolution of the system was dominated by noise pickup, which was on the order of . Figure $\ref{fig:IV_Curves}$ shows the I-V curves for the five types of SiPMs, measured in the dark. The onset of breakdown is clearly visible in all SiPMs before irradiation (black markers). After irradiation the behavior changed drastically. A kind of avalanche effect, i.e. a strong current increase at a certain voltage, is still visible, but the breakdown voltage is shifted to lower values, by $\sim$ , in all cases. Furthermore, the slope of the dark current increased by $\sim$ 2 orders of magnitude.
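One common way to extract $\it{U_\textrm{bd}}$ from such a dark I-V curve is to locate the maximum of the logarithmic derivative $d(\ln I)/dV$. The sketch below uses that estimator as an assumption; the paper does not state which method was actually used, and the function names are illustrative:

```python
import numpy as np

def breakdown_voltage(v_bias, i_dark):
    """Estimate the breakdown voltage as the bias at which the
    logarithmic derivative d(ln I)/dV of the dark current is maximal."""
    v = np.asarray(v_bias, dtype=float)
    i = np.asarray(i_dark, dtype=float)
    dlog_i = np.gradient(np.log(i), v)  # d(ln I)/dV at each bias point
    return v[np.argmax(dlog_i)]
```

On a measured curve, the post-irradiation shift of $\it{U_\textrm{bd}}$ would then simply be the difference of the two estimates taken before and after exposure.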
![I-V curves before (black symbols) and after (red symbols) irradiation for SensL (top-left), AdvanSiD (top-right), KETEK- (middle-left), KETEK- (middle-right) and Hamamatsu (bottom) SiPMs.[]{data-label="fig:IV_Curves"}](IV_SensL_NewBoard_ABIrr_log.pdf "fig:"){width="0.49\linewidth"}   ![I-V curves before (black symbols) and after (red symbols) irradiation for SensL (top-left), AdvanSiD (top-right), KETEK- (middle-left), KETEK- (middle-right) and Hamamatsu (bottom) SiPMs.[]{data-label="fig:IV_Curves"}](IV_AdvanSiD_NewBoard_ABIrr_log.pdf "fig:"){width="0.49\linewidth"} ![I-V curves before (black symbols) and after (red symbols) irradiation for SensL (top-left), AdvanSiD (top-right), KETEK- (middle-left), KETEK- (middle-right) and Hamamatsu (bottom) SiPMs.[]{data-label="fig:IV_Curves"}](IV_KETEK_25um_NewBoard_ABIrr_log.pdf "fig:"){width="0.49\linewidth"}   ![I-V curves before (black symbols) and after (red symbols) irradiation for SensL (top-left), AdvanSiD (top-right), KETEK- (middle-left), KETEK- (middle-right) and Hamamatsu (bottom) SiPMs.[]{data-label="fig:IV_Curves"}](IV_KETEK_15um_NewBoard_ABIrr_log.pdf "fig:"){width="0.49\linewidth"} ![I-V curves before (black symbols) and after (red symbols) irradiation for SensL (top-left), AdvanSiD (top-right), KETEK- (middle-left), KETEK- (middle-right) and Hamamatsu (bottom) SiPMs.[]{data-label="fig:IV_Curves"}](IV_Hamamatsu_NewBoard_ABIrr_log.pdf){width="0.49\linewidth"} SiPM Performance with Light --------------------------- A test of the sensitivity of the irradiated SiPMs at the photoelectron level was performed using light pulses from a laser diode. For this measurement, two SiPMs from AdvanSiD were used: an irradiated one (with an integrated dose of 1 Gy) and a new one of the same type. Both devices were illuminated by a pulsed blue laser diode and operated at the same operating voltage.
The measurements were done by placing the SiPMs alternately at the same position while keeping the light system unchanged, to achieve the same illumination. The output signals from both SiPMs were captured and the total charges were compared. Figure $\ref{fig:sipmled}$ shows the corresponding output charge spectrum for both devices. The irradiated SiPM (lower plot) remained sensitive to the light source at the high p.e. level. However, multi-p.e. signals are no longer separable. The large increase in the gain of the irradiated SiPM (lower plot), compared to the non-irradiated one (upper plot), is due to the fact that both devices were operated at the same operating voltage. Because $\it{U_\textrm{bd}}$ is reduced for the irradiated SiPM, as explained in section $\ref{sec:IV_Plots}$, the real applied $\it{U_\textrm{ov}}$ is increased, and hence the gain of this device increased. ![Comparison of a new “not-irradiated” AdvanSiD SiPM (upper plot) and an irradiated one, with 1 Gy integrated dose (lower plot). The SiPMs were illuminated by the same light pulses with a mean of about 12 photoelectrons. The blue curve results from self-triggering, while the signals of the red curves were triggered by the light pulse. After irradiation the background level is much higher and no single p.e. peaks are visible; however, an external trigger allows the light-induced signals to be separated from the background distribution.
The large increase in the gain in the irradiated device is due to the fact that both SiPMs were operated at the same voltage, but for the irradiated SiPM $\it{U_\textrm{bd}}$ is reduced, hence the real applied $\it{U_\textrm{ov}}$ was increased.[]{data-label="fig:sipmled"}](Fig_sipmled.pdf){width="0.8\linewidth"} \[sec:Gain\_dark\] SiPMs Visual Inspection {#sec:vis_ins} ----------------------- A visual inspection of the irradiated SiPMs was carried out at the end of the tests using an optical microscope, with magnification ranging from 20x to 128x. The surfaces of the devices were carefully inspected, and photographs of specific locations were taken before and after the irradiation tests. No visible evidence of damage was found on the outer surface of the SiPMs or at the microcell level. Data Taking During Irradiation {#sec:DTDI} ============================== In order to follow the radiation damage as a function of the dose, an additional measurement was performed with one SiPM (the AdvanSiD) during irradiation. The dark current rate, gain, prompt cross-talk and correlated noise probabilities were studied at different doses and compared to their values in the absence of radiation exposure. To reduce the noise level, the irradiation box that houses the SiPM was shielded by a layer of aluminium foil, while the preamplifier was placed at a distance of about  from the irradiation box, in a separate metal box. With these measures a noise level of about  was achieved. Furthermore, the cyclotron beam current was reduced to a dose rate of 0.001 Gy/s. The development of the radiation damage can be observed in signal charge spectra accumulated over short time intervals with increasing radiation dose. Figure \[fig:QvsD\] shows the signal charge distributions for the time intervals indicated in the integrated dose plot in Figure \[fig:Doses\]. The damage of the SiPM starts already at a rather low integrated dose of $\sim$0.2 Gy.
Since this study mostly involves relative measurements, systematic effects that are independent of the radiation exposure cancel out. For the gain measurements, the total uncertainty is dominated by the systematic uncertainty related to changes in electronics pickup. To estimate this uncertainty we measured the FWHM of the baseline variations at the different dose values. We found at most $\sim$  deviation (at the highest dose value) for widths measured at non-zero radiation dose compared to those in the absence of exposure. This baseline noise was then added to simulated signal pulses to estimate its effect on the total charge calculation. This Monte Carlo study indicates that the excess noise pickup can lead to systematic errors in the measurement of the gain by the PFA of up to . The statistical uncertainty of the gain measurement was found to be negligible. The correlated noise and the dark current rate, on the other hand, are limited by statistical uncertainties. ![Signal charge distribution for the time intervals indicated in the integrated dose plot in Figure \[fig:Doses\]. For low radiation levels (distributions (a),(b),(c)) the photoelectron peaks up to 4 p.e. are clearly separated. At higher integrated dose the p.e. peaks start to get smeared. In the distribution (f) 2 p.e. are hardly resolved.[]{data-label="fig:QvsD"}](Fig_QwFit_DIrr_1.pdf "fig:"){width="0.49\linewidth"}   ![Signal charge distribution for the time intervals indicated in the integrated dose plot in Figure \[fig:Doses\]. For low radiation levels (distributions (a),(b),(c)) the photoelectron peaks up to 4 p.e. are clearly separated. At higher integrated dose the p.e. peaks start to get smeared. In the distribution (f) 2 p.e. are hardly resolved.[]{data-label="fig:QvsD"}](Fig_QwFit_DIrr_3.pdf "fig:"){width="0.49\linewidth"} ![Signal charge distribution for the time intervals indicated in the integrated dose plot in Figure \[fig:Doses\].
For low radiation levels (distributions (a),(b),(c)) the photoelectron peaks up to 4 p.e. are clearly separated. At higher integrated dose the p.e. peaks start to get smeared. In the distribution (f) 2 p.e. are hardly resolved.[]{data-label="fig:QvsD"}](Fig_QwFit_DIrr_4.pdf "fig:"){width="0.49\linewidth"}   ![Signal charge distribution for the time intervals indicated in the integrated dose plot in Figure \[fig:Doses\]. For low radiation levels (distributions (a),(b),(c)) the photoelectron peaks up to 4 p.e. are clearly separated. At higher integrated dose the p.e. peaks start to get smeared. In the distribution (f) 2 p.e. are hardly resolved.[]{data-label="fig:QvsD"}](Fig_QwFit_DIrr_5.pdf "fig:"){width="0.49\linewidth"} ![Signal charge distribution for the time intervals indicated in the integrated dose plot in Figure \[fig:Doses\]. For low radiation levels (distributions (a),(b),(c)) the photoelectron peaks up to 4 p.e. are clearly separated. At higher integrated dose the p.e. peaks start to get smeared. In the distribution (f) 2 p.e. are hardly resolved.[]{data-label="fig:QvsD"}](Fig_QwFit_DIrr_6.pdf "fig:"){width="0.49\linewidth"}   ![Signal charge distribution for the time intervals indicated in the integrated dose plot in Figure \[fig:Doses\]. For low radiation levels (distributions (a),(b),(c)) the photoelectron peaks up to 4 p.e. are clearly separated. At higher integrated dose the p.e. peaks start to get smeared. In the distribution (f) 2 p.e. are hardly resolved.[]{data-label="fig:QvsD"}](Fig_QwFit_DIrr_7.pdf "fig:"){width="0.49\linewidth"} ![Integrated dose as a function of exposure time.[]{data-label="fig:Doses"}](Fig_Doses.pdf){width="0.6\linewidth"} Relative Gain {#sec:RGain} ------------- The gain of a SiPM can be defined as the mean number of output electrons in the single p.e. peak. 
We used the charge distribution of the prompt signal, shown in Figure \[fig:QvsD\], to study the relative stability of the gain of the SiPM at different integrated dose values $\it{D}$. The mean value of each individual fitted Gaussian is used to estimate the average charge of the corresponding number of photoelectrons, $\it{Q_{n~p.e.}(D)}$. The slope of the $\it{Q_{n~p.e.}(D)}$ values, when plotted against the number of photoelectrons $\it{n}$, then gives the average charge of the SiPM single p.e. response at a specific $\it{D}$, $\bar{Q}(D)$. Thus the stability of the SiPM gain at different radiation doses can be assessed by the ratio, $\eta_{Gain}$, of the charge amplitude $\bar{Q}(D)$ to that in the absence of exposure: $$\eta_{Gain} = \frac{\bar{Q}(D)}{\bar{Q}(D=0)}$$ Figure $\ref{fig:SiPMparVsD}$ (I) shows that the relative gain of the AdvanSiD SiPM stays constant as a function of the dose until it drops by $\sim$  at measurement (f), compared to the value before irradiation. Relative Dark Current Rate {#sec:RDCR} -------------------------- SiPM dark current signals are mainly caused by thermally generated free charge carriers inside the avalanche region. A dark-noise pulse is similar to one triggered by a photon event. The dark current rate ($\it{DCR}$) is defined as the rate of SiPM output pulses, in the dark, with an amplitude of $\geqslant$ 1 p.e., and can be calculated by the following relation: $$DCR = \frac{N_{\geqslant 1~ p.e.}}{t_{daq} \cdot A}$$ where $\it{N}_{\geqslant 1~ p.e.}$ is the number of prompt signals with a measured charge of at least the 1 p.e. amplitude level, $\it{t}_{daq}$ is the data acquisition time in seconds, and $\it{A}$ is the surface area of the SiPM. Figure $\ref{fig:SiPMparVsD}$ (II) shows $\eta_{DCR}$, the ratio of $\it{DCR}$($\it{D}$) to $\it{DCR}$($\it{D=0}$), as a function of the integrated dose.
$\eta_{DCR}$ shows a huge increase, by $\sim$ , already during the first exposure measurement compared to its value with no exposure. It then stays roughly constant over the irradiation period. Prompt Cross-Talk Probability {#sec:PCT} ----------------------------- Correlated signals are an important source of noise in SiPMs. They are composed of prompt optical cross-talk and delayed after-pulses. The delayed correlated noise probability is discussed in section \[sec:PCN\]. The origin of prompt cross-talk can be understood as follows: when undergoing an avalanche, carriers near the p-n junction emit photons, due to the scattering of the accelerated electrons. These photons tend to be at near-infrared wavelengths and can travel substantial distances through the device, including to neighboring microcells, where they may initiate secondary Geiger avalanches. As a consequence, a single primary photon may generate signals equivalent to two or more photoelectrons. The prompt cross-talk probability, $\it{P}_{CT}$, depends on the over-voltage, $\it{U_\textrm{ov}}$, which is the excess bias beyond the breakdown voltage, on device-dependent barriers for photons (trenches), and on the size of the microcells. The probability of prompt cross-talk can be calculated as: $$P_{CT}=\frac{N_{> 1~ p.e.}}{N_{total}}$$ where $\it{N}_{> 1~ p.e.}$ is the number of prompt signals with a measured charge of at least 1.5 p.e., and $\it{N}_{total}$ is the total number of prompt signals above noise. Figure $\ref{fig:SiPMparVsD}$ (III) shows $\eta_{P_{CT}}$, the ratio of $\it{P}_{CT}$($\it{D}$) to $\it{P}_{CT}$($\it{D=0}$), as a function of the integrated dose. $\eta_{P_{CT}}$ does not show a dependence on the dose for measurement (a), within the estimated uncertainties, while from measurements (b) to (f) it is reduced by $\sim$ , compared to the value in the absence of exposure.
Delayed Correlated Noise Probability {#sec:PCN} ------------------------------------ Both after-pulsing and delayed cross-talk events originate from an existing pulse. After-pulsing is due to carriers trapped in silicon defects during the avalanche multiplication and released later, during the recharge phase of the microcell. Delayed cross-talk is generated by a mechanism similar to prompt cross-talk. The difference is that the photons generated during the avalanche process are absorbed in the inactive regions of the neighboring cells instead. It takes some time for the minority charge carriers to diffuse into the active region, causing a delayed signal. In our measurement, we cannot separate after-pulsing from delayed cross-talk, so we count them together as delayed correlated noise. To estimate the delayed correlated noise probability, $\it{P}_{CN}$, we count the number, $N$, of clearly separated pulses occurring immediately after the primary pulse. The time window used for the pulse integration is limited by the acquisition window. The primary pulse time window is found to be $\sim$ . $\it{P}_{CN}$ is then estimated by normalizing $N$ to the total number of events that contain prompt signals, $N_{prompt}$: $$P_{CN}=\frac{N}{N_{prompt}}$$ Figure $\ref{fig:SiPMparVsD}$ (IV) shows $\eta_{P_{CN}}$ as a function of the integrated dose. It shows a constant response for the low-dose measurement (a), while it increased by $\sim$  to $\sim$  for measurements (b) to (f), compared to the value in the absence of exposure.
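The four quantities tracked in this analysis — the single-p.e. charge from the slope of $Q_{n~p.e.}$ versus $n$, the dark current rate, $P_{CT}$, and $P_{CN}$ — reduce to a few lines of analysis code. The sketch below follows the definitions given above; the function and parameter names are illustrative, not those of the actual analysis software:

```python
import numpy as np

def single_pe_charge(n_pe, q_means):
    """Qbar: slope of the fitted Gaussian mean charges Q_{n p.e.} vs n."""
    slope, _intercept = np.polyfit(n_pe, q_means, 1)
    return slope

def dark_count_rate(n_ge_1pe, t_daq_s, area):
    """DCR = N_{>=1 p.e.} / (t_daq * A)."""
    return n_ge_1pe / (t_daq_s * area)

def prompt_crosstalk_probability(prompt_charges_pe):
    """P_CT: fraction of prompt signals with a charge of at least 1.5 p.e."""
    q = np.asarray(prompt_charges_pe)
    return np.count_nonzero(q >= 1.5) / q.size

def delayed_noise_probability(n_delayed, n_prompt_events):
    """P_CN = N / N_prompt."""
    return n_delayed / n_prompt_events

# The relative (eta) quantities are simply ratios to the D = 0 values, e.g.
# eta_gain = single_pe_charge(n, q_at_D) / single_pe_charge(n, q_at_D0)
```

Because every $\eta$ is a ratio of the same estimator evaluated at two doses, dose-independent calibration factors (amplifier gain, integration constants) cancel, which is the point made at the start of the uncertainty discussion.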
![The relative gain (I), DCR (II), the prompt CT-probability (III) and the delayed correlated noise (IV) as a function of radiation dose (bottom x-axis) and proton flux (top x-axis).[]{data-label="fig:SiPMparVsD"}](Fig_GainvsDaPhi.pdf "fig:"){width="0.7\linewidth"} ![The relative gain (I), DCR (II), the prompt CT-probability (III) and the delayed correlated noise (IV) as a function of radiation dose (bottom x-axis) and proton flux (top x-axis).[]{data-label="fig:SiPMparVsD"}](Fig_DCRvsDaPhi.pdf "fig:"){width="0.7\linewidth"} ![The relative gain (I), DCR (II), the prompt CT-probability (III) and the delayed correlated noise (IV) as a function of radiation dose (bottom x-axis) and proton flux (top x-axis).[]{data-label="fig:SiPMparVsD"}](Fig_PCTvsDaPhi.pdf "fig:"){width="0.7\linewidth"} ![The relative gain (I), DCR (II), the prompt CT-probability (III) and the delayed correlated noise (IV) as a function of radiation dose (bottom x-axis) and proton flux (top x-axis).[]{data-label="fig:SiPMparVsD"}](Fig_PCNvsDaPhi.pdf "fig:"){width="0.7\linewidth"} Conclusions {#sec:conclusions} =========== The radiation hardness of photosensors is an important characteristic that needs to be studied carefully, especially for devices with long exposure times. Due to the increased use of SiPMs as photosensors, the effect of radiation exposure on their performance must be investigated. A large number of studies in this field are already available, with the general feature of an increased dark current. However, the usability of SiPMs as a function of radiation dose is not well defined, due to the manufacturer-dependent variation of the detailed structure and the large differences in the detected signals. Based on the available data, a rather high tolerable radiation level was expected, but drastic effects on the signal were observed already at a very low integrated dose.
A dose of only 0.2 Gy was sufficient to cause a drastic increase of the dark current and a complete dissolution of the separate photoelectron peaks. If a certain radiation level is expected, the use of SiPMs as standalone single-photon detectors is therefore nearly impossible due to the high dark count rate. But even exposure to a rather high integrated dose does not destroy the SiPMs completely. The functionality as a photosensor with avalanche behavior at higher light signals is maintained, and a triggered readout can separate the signal to be measured from the dark counts. In all irradiated SiPMs in our analysis we identified a clear reduction of the breakdown voltage, by up to about , resulting in much higher signals and dark count rates when operating at a fixed operating voltage. This effect can be compensated by lowering the bias voltage so as to operate the device at a fixed over-voltage rather than at a fixed operating voltage. A detailed understanding of the radiation-induced SiPM damage requires more careful studies with devices of well-known internal structure, to disentangle the influence of the relevant parameters. These irradiation studies are a first step of SiPM radiation hardness investigations and will be continued with higher proton energies close to the minimum-ionizing region, which is more relevant for the typical scintillator readout in particle detectors.
--- abstract: 'This paper develops the use of an inter-pulse frequency diversity scheme for a weather radar system. It establishes the performance of the frequency diversity technique by comparing it with other inter-pulse schemes for weather radar systems. Inter-pulse coding is widely used for second trip suppression or cross-polarization isolation. Here, a new inter-pulse scheme is discussed that takes advantage of frequency diverse waveforms. The simulations and performance evaluation are carried out keeping in mind the NASA dual-frequency, dual-polarization, Doppler radar (D3R). A new method is described to recover velocity and spectral width despite the incoherence between samples caused by the pulse-to-pulse change of frequency. This technique can recover the weather radar moments over a much higher dynamic range of the other-trip contamination as compared with the popular systematic phase code for second trip suppression and retrieval.' author: - 'V Chandrasekar, , Mohit Kumar, ' bibliography: - 'references\_arxiv1.bib' title: Inter Pulse Frequency Diversity System for Second Trip Suppression and Retrieval in a Weather Radar --- Inter-pulse Coding, Weather radar, Correlation, D3R. Introduction ============ The phase coding method tags the transmit waveform with a phase code, which can be demodulated to retrieve the pulse parameters (amidst interference from other pulses, such as second trips or cross-polar coupling) in a weather radar system. The phase code can be a single tag used for a pulse (as in the case of inter-pulse coding) to separate out multiple trips. This can essentially aid in suppressing the effects of other trips on the one intended, and also in recovering these multiple trips under certain restrictions. The suppression effect of the phase code is observed over the full cycle of integration, which is the coherent processing interval used to generate polarimetric moments from raw data.
In a pulsed Doppler weather radar, with pulses transmitted at a pulse repetition interval of $\tau$ sec, the maximum unambiguous range is given by $r_{unb} = c\tau/2$ and the maximum unambiguous velocity is given by $v_{unb} = \lambda/4\tau$, where $\lambda$ is the wavelength of operation. Hence $r_{unb}$ and $v_{unb}$ are inversely proportional to each other, so unambiguous range and velocity cannot be optimized simultaneously: if one is increased, the other is reduced. This is termed the range-Doppler dilemma [@Bharadwaj2007]. A high pulse repetition frequency (PRF) radar system can resolve the Doppler frequency over a wider interval of velocities, but the unambiguous range suffers, whereas a low-PRF system can detect weather up to a larger range, but the velocity is folded in the Doppler domain. For a medium-PRF system, it is difficult to resolve both range and velocity. This problem is more prominent for weather radar systems, as the scatterers are continuous and distributed throughout the beam illumination volume, with a wide dynamic range [@Bringi2001]. Different phase coding schemes exist in the literature to overcome this basic limitation and retrieve polarimetric moments from the second, third, and higher trip echoes (beyond the unambiguous range) for dual-polarimetric weather radar. These codes can be broadly classified into intra-pulse (phase changes on a sub-pulse basis, [@kumar2019intrapulse]) and inter-pulse (phase change on a pulse basis) codes. The random phase code and the systematic phase code are examples of inter-pulse phase codes, popular for retrieving the first and the overlaid second trip [@Zrnic1999]. Staggered and dual-PRF techniques are other methods [@Torres2017], [@Cho2005], generally used to improve upon $r_{unb}$ and $v_{unb}$. In particular, they are very effective in increasing $v_{unb}$.
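The trade-off can be made concrete with a short numeric sketch (the S-band numbers are illustrative, chosen to match the simulation parameters used later in the paper):

```python
C = 3.0e8  # speed of light (m/s)

def unambiguous_range(tau):
    """r_unb = c*tau/2 for a pulse repetition interval tau (s)."""
    return C * tau / 2.0

def unambiguous_velocity(tau, wavelength):
    """v_unb = lambda / (4*tau)."""
    return wavelength / (4.0 * tau)

# S-band example: wavelength ~ 0.10 m, PRF = 1.2 kHz
tau = 1.0 / 1200.0
r = unambiguous_range(tau)           # 125 km
v = unambiguous_velocity(tau, 0.10)  # 30 m/s
# the product r_unb * v_unb = c*lambda/8 is fixed by the wavelength alone,
# so improving one quantity necessarily degrades the other
```

Doubling the PRF here would double $v_{unb}$ to 60 m/s but halve $r_{unb}$ to 62.5 km, which is exactly the dilemma the coding and diversity schemes below try to escape.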
Typically, this is accomplished through the use of PRF diversity, by playing two PRFs one after the other or in batches, but the fundamental concept remains the same: observing the same volume with different PRFs. However, it consumes scan time, usually $N$ times the time required for a uniform PRF, $N$ being the number of staggered pulses. Hence it is expensive in radar time. Inter-pulse codes have been explored extensively in the weather radar community for second trip suppression [@Zrnic1999] and for orthogonal channel coding in [@Chandrasekar2009] for dual-polarization weather radars. The retrieval of moments for the first and subsequent trips is based upon spectral processing of weather echoes in batches of the coherent interval (which in turn depends upon the antenna rotation rate and the weather de-correlation time). The orthogonality of the code for different trips is achieved over the coherent interval, as the second trips are modulated by cyclic shifts of the phase code and the higher trips by multiple cyclic shifts. The orthogonality is achieved by having these cyclic shifts uncorrelated with each other. The systematic codes use derivatives of Chu codes, which have deep nulls in the correlation function for time-delayed versions of themselves. However, the rejection of echoes from the trips of non-interest is a function of spectral width and degrades in the case of multi-modal distributions, wider spectral widths and phase noise. In this paper, we introduce a novel frequency diverse inter-pulse system and its implementation on the NASA D3R weather radar. In this scheme, we change the IF frequency from pulse to pulse; for example, for second trip suppression, we use two frequencies, $\omega_{1}$ and $\omega_{2}$, on alternate pulses and re-synchronize with these on the digital down-converter to remove the second trip. As will be shown, this gives excellent performance if the frequencies are far apart.
We also discuss a novel method to retrieve velocity and spectrum width for the first and second trips. Since the two frequencies are far apart, the data from adjacent pulses (modulated with $\omega_{1}$ and $\omega_{2}$) are uncorrelated, and a method is highlighted which can recover moments from a batch of coherent processing time (128 pulses, in the case of D3R) under such circumstances. The improvement in the second trip recovery region, in terms of the range of $\{P_{1}/P_{2}\}$, where $P_{1}$ is the first trip power and $P_{2}$ is the second trip power, is immense. A block diagram of this scheme is shown in figure \[fig\_blockDia\]. The carrier generator outputs two different frequencies, $\omega_{1}$ and $\omega_{2}$, on odd and even pulses, respectively. If the sequence at the down-converter is $\omega_{1}$ and $\omega_{2}$ for odd and even pulses, we recover the first trip. However, if the sequence is $\omega_{2}$ and $\omega_{1}$, then we recover the second trip parameters. ![The system architecture which makes use of the frequency diversity scheme for multi-trip retrieval. The pulse-to-pulse frequency change is done at the down-converter.[]{data-label="fig_blockDia"}](BlockDia_FD.pdf){width="3in"} In contrast to the dual-pulse, dual-frequency technique for range and velocity ambiguity mitigation highlighted in [@Torres2010], our technique targets the suppression of second trip echoes. The authors in [@Torres2010] improve range and velocity measurements to a much higher range and higher velocities by making use of uncorrelated frequencies in two different channels. With this, they arrive at an algorithm to derive velocity which becomes independent of the base PRT. However, that work does not address the problem of the second trip, which may still be present if the weather event extends beyond the increased unambiguous range (due to the processing of dual channels).
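A minimal numeric sketch of why re-synchronizing the down-converter suppresses the mismatched trip: after mixing with the LO of the intended trip, the overlaid echo retains a residual carrier at the frequency difference and averages towards zero over the gate, while the matched trip coheres. The frequencies and gate length below are illustrative, not D3R's actual IF plan:

```python
import numpy as np

def ddc_gate(x, f_lo, fs):
    """Digitally down-convert a gate of IF samples with LO frequency f_lo
    and average over the gate (a crude low-pass) to one complex voltage."""
    n = np.arange(len(x))
    return np.mean(x * np.exp(-2j * np.pi * f_lo * n / fs))

fs = 100e6            # sampling rate (illustrative)
f1, f2 = 10e6, 30e6   # two well separated IF frequencies
n = np.arange(1024)
trip1 = np.exp(2j * np.pi * f1 * n / fs)  # echo modulated with this pulse's IF
trip2 = np.exp(2j * np.pi * f2 * n / fs)  # overlaid echo from the previous pulse

v1 = ddc_gate(trip1 + trip2, f1, fs)  # matched trip survives: |v1| ~ 1
leak = ddc_gate(trip2, f1, fs)        # mismatched trip residual: ~ -60 dB
```

The suppression grows with the gate length and the frequency separation, which is the sense in which the scheme works well only "if the frequencies are far apart".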
Many orthogonal polyphase coded systems have been proposed in the literature, as in [@Haohe2009], [@Song2016], [@Deng2004] and [@Griep1995]. But it is very difficult to obtain an isolation greater than $40\,dB$ between different polyphase codes. The frequency diversity scheme proposed in this paper is meant to give a higher level of isolation than is possible with polyphase coded systems. The limits on the peak auto-correlation and cross-correlation sidelobes of sequences have been brought out in [@Sarwate1979] and [@Welch1974]; therefore there is a need to explore other techniques. This frequency diversity scheme can also be applied, in general, to obtain simultaneous co-polar and cross-polar echoes in the case of dual-polarization weather radar, leading to instantaneous radar polarimetry [@Howard2007], [@Wang2010]. However, this is not the focus of this paper. Additionally, orthogonality requirements in the case of MIMO systems [@Stoica2008] can be met via this scheme, and the same logic holds for CDMA-based communication as well [@Liu1995]. The latest efforts to gain orthogonality between waveforms have been directed towards the concept of zero auto-correlation and cross-correlation sidelobes (perfect sequences) offered by Golay-type waveforms. However, they display perfect orthogonal behavior only over very limited Doppler. A lot of research is currently devoted to making these sequences more Doppler tolerant [@Tang2014], [@Pezeshki2008], [@Nguyen2016] and [@Yang2007]. This paper is organized as follows: a detailed description of the frequency diversity scheme and the Chu phase coded inter-pulse scheme is given in section \[section1\]. Section \[section2\] analyzes the effect of the frequency change at the IF level on the other dual-polarization moments. Section \[section3\] presents the D3R weather radar data on which this scheme was implemented and tested, followed by the conclusion in section \[section5\].
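The difficulty of exceeding $\sim$40 dB of isolation with polyphase codes can be illustrated with Zadoff-Chu (Chu-type) sequences: for a prime length $N$ and distinct roots, the periodic cross-correlation has magnitude $\sqrt{N}$ at every lag, i.e. the isolation is only $10\log_{10}N$ dB ($\approx$18 dB for $N=61$). A small sketch with hypothetical parameters, not tied to any particular radar waveform:

```python
import numpy as np

def zadoff_chu(root, N):
    """Zadoff-Chu sequence of odd length N with the given root."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * root * n * (n + 1) / N)

N = 61  # prime length
a, b = zadoff_chu(1, N), zadoff_chu(2, N)

# periodic (cyclic) correlations via the FFT
auto = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(a)))
cross = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b)))

# CAZAC property: the periodic autocorrelation sidelobes are ~ 0, but the
# normalized peak cross-correlation is ~ 1/sqrt(N) ~= 0.128, i.e. ~ -18 dB
peak_cross = np.max(np.abs(cross)) / N
```

Pushing the cross-code isolation to 40 dB this way would require sequence lengths of order $10^{4}$, far beyond a coherent processing interval of 128 pulses, which motivates seeking isolation elsewhere, e.g. in frequency.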
Inter-Pulse Chirp Waveforms {#section1} =========================== Generalized Chu codes for second trip mitigation and retrieval: --------------------------------------------------------------- The most popular inter-pulse codes are the systematic phase codes, commonly known as SZ codes, for the retrieval of the parameters of overlaid echoes. The SZ codes, which are derivatives of the Chu code with good correlation properties, have better estimation accuracies over a wide dynamic range of the overlay power and spectral width. In this paper, the performance of the Chu code is analyzed for second trip suppression and retrieval, and then compared with the frequency diversity scheme. Under wide spectral width, the spectrum spread by the systematic phase code, which relies on replicating the other trip's spectrum multiple times, looks more whitened. Another inter-pulse code, which introduces random phases on the pulses (known as the random phase code), attempts to whiten the weaker trip while it coheres the stronger trip signal, whereas the SZ code produces replicas of the weaker signal spectrum. If a notch filter is used with the random phase code, it also notches out part of the other trip's spectrum, which cannot be recovered later on, thus producing self noise [@Frush2002]. In the case of the SZ code, however, even after the stronger trip is notched out completely by using a wider filter width, as long as two replicas of the weaker trip are retained, we can still recover the velocity and spectrum width of the weaker trip. The phase difference between two replicas is sufficient to estimate the velocity. While notching out, we need to remember that the Gaussian spectrum of the weather is broadened by ground clutter, phase noise, etc. Thus, to notch out the stronger trip, we generally need a much wider notch filter than the Gaussian spectral width alone would require. Another source of self noise is the overlap of the spectral replicas.
If the spectrum of the signal is well contained within $M/8$ spectral coefficients (for SZ(8/64)), then there is little overlap between successive replicas and better estimation can be performed. But in the case of wider spectral width, the overlap region can constrain the recovery region of velocity and spectral width. In such a case, the SZ(4/64) code with $n=4$ will probably provide a better separation, but it then allows for a much smaller notch width, so as to retain a minimum of two spectral replicas. Additionally, phase noise leads to broadening of the spectrum, which can be linked to the phase noise of the coherent oscillator used to synchronize the various sub-systems of the radar. The SZ code simulated here is designed in accordance with equation \[eq\_1\] with $n=8$ and $M=64$: $$\label{eq_1} \begin{aligned} c_{k} &= \exp\left(-j \sum_{m=0}^{k} \frac{n \pi m^{2}}{M}\right)\\ & k = 0, 1, 2, ..., M-1 \end{aligned}$$ The cyclic version of this code (known as the modulation code) generates eight replicas of the cohered trip in the Nyquist interval, along with the original spectrum of the other trip. We carried out a simulation of weather echoes, where the first trip has the parameters $v_{1} = 10\,m/s$, $w_{1} = 1\,m/s$ and $\rho_{hv} = 0.995$, and the second trip has the same parameters except for the velocity, $v_{2} = -5\,m/s$. The simulation was carried out with the method highlighted in [@GALATI199517]. This was done to assess the effectiveness of the systematic phase code and later use it as a comparison for the frequency diversity scheme. The simulation was carried out at S-band with $PRF = 1.2\,kHz$. On receive, when the first trip is being retrieved, the spectrum consists of the first trip spectrum and the second trip spectrum replicated 8 times (in the case of SZ(8/64)), with the power of the second trip spread out over all these replicas, as shown in figure \[fig\_chu1\].
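A sketch of the SZ(8/64) switching code and its key property: the pulse-pair modulation phase is a Chu-type quadratic that repeats with period $M/n = 8$ pulses, which is what produces the eight spectral replicas. The cumulative-sum construction below follows the standard SZ definition; the paper's own implementation may differ in sign conventions:

```python
import numpy as np

def sz_code(n=8, M=64):
    """SZ(n/M) switching code: cumulative sum of Chu-type quadratic
    phases n*pi*m^2/M, one phase per transmitted pulse."""
    m = np.arange(M)
    psi = np.cumsum(n * np.pi * m ** 2 / M)
    return np.exp(-1j * psi)

code = sz_code(8, 64)
# modulation code seen by the second trip: pulse-pair phase differences,
# equal to exp(-j*n*pi*k^2/M), i.e. a Chu code with period M/n = 8; its
# DFT therefore has only 8 lines, spreading the overlaid trip's power
# into 8 replicas across the Nyquist interval
mod = code[1:] * np.conj(code[:-1])
```

Multiplying the received series by the conjugate of this code coheres the first trip while imposing `mod` (cyclically shifted) on the second trip, which is the replication seen in figure \[fig\_chu1\].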
![The modulation code spreads out the power of second trip to 8 replicas in the spectral domain.[]{data-label="fig_chu1"}](cod1_b.jpg){width="3in"}

After estimation of the first trip parameters, the spectrum is notch filtered to remove the power of the first trip, and the second trip parameters can then be estimated. If a rectangular window is used for truncating the signal (after 128 pulses), the stronger trip will contaminate the weaker trip spectrum through its spectral sidelobes, and the dynamic range, $P_{1}/P_{2}$, over which the weaker trip can be retrieved, will reduce. Reducing spectral sidelobe leakage is therefore important to gain on this dynamic range. However, use of other window functions leads to a loss of SNR, which in turn means a reduction in the number of independent samples. We used the Hann window for the simulation, which has an SNR loss of 4.2 dB, but the spectral dynamic range increases to 90 dB. After the notching process, at least two replicas need to be retained for the velocity and spectral width estimates. The spectrum after the notch process is shown in figure \[fig\_chu2\]. ![The retained spectrum after the application of notch filter with normalized width of 0.75.[]{data-label="fig_chu2"}](cod2_b.jpg){width="3in"} The remaining signal is re-cohered for the second trip, and the resulting signal has six symmetrical sidebands centered at the mean velocity of the second trip. This is shown in figure \[fig\_chu3\]. ![The remaining spectrum is re-cohered for the second trip with its six sidebands present in the spectrum.[]{data-label="fig_chu3"}](cod3_b.jpg){width="3in"}

Effect of Phase Noise on Chu Codes:
-----------------------------------

The phase noise of the system is a major limiting factor for the dynamic range of $P_{1}/P_{2}$ over which the parameters of both the strong and the weak trip can be recovered using an inter-pulse Chu code. Phase noise leads to broadening of the spectrum.
The overall phase noise is dominated by the phase noise of the basic oscillator, from which all the other clocks are derived. Single-sideband phase noise is usually measured in a 1 Hz bandwidth: it is defined as the ratio of the noise power in a 1 Hz bandwidth, at a given offset from the carrier, to the signal power at the center frequency. The equivalent noise jitter can be obtained by integrating the phase noise curve over the receiver bandwidth: $$\text{RMS Phase Jitter (in radians)} = \sqrt{2 \times 10^{A/10}}$$ where $A$ is the area under the phase noise curve (in dBc). It has been shown in [@Zrnic1999] that, if there is no phase noise and the Hann window function is used, the limit on retrieval of velocity is about 90 dB in the $P_{1}/P_{2}$ ratio, for spectral widths less than 4 m/s. But it drastically reduces to 60 dB if the rms jitter is of the order of 0.2 deg rms, and further reduces to 40 dB in case of 0.5 deg rms phase jitter. Our simulations also confirm that the accuracy of the velocity retrieval drastically reduces under phase noise conditions. The dynamic range of $P_{1}/P_{2}$ in which the weaker trip velocity can be recovered, without and with phase noise, is shown in figures \[fig\_WOPN\] and \[fig\_WPN\] respectively. ![The dynamic range of $P_{1}/P_{2}$, in which the weaker trip velocity can be recovered, without phase noise.[]{data-label="fig_WOPN"}](cod6_b.jpg){width="3in"} ![The dynamic range of $P_{1}/P_{2}$, in which the weaker trip velocity can be recovered, with phase noise.[]{data-label="fig_WPN"}](cod7_b.jpg){width="3in"}

Frequency Diverse Chirp Waveforms:
----------------------------------

Modern day systems, with higher computational power and embedded FPGAs (Field Programmable Gate Arrays), are capable of signal processing architectures of unprecedented complexity and speed.
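As an illustration of this conversion (a sketch only; the offset grid and the $-80$ dBc/Hz level are made-up example values, not measured oscillator data), the jitter follows from integrating the linear-power phase noise curve over the receiver bandwidth:

```python
import numpy as np

def rms_phase_jitter_deg(offsets_hz, ssb_dbc_per_hz):
    """RMS phase jitter (degrees) from a single-sideband phase noise profile.

    offsets_hz     : frequency offsets from the carrier [Hz]
    ssb_dbc_per_hz : L(f) at those offsets [dBc/Hz]
    """
    # Integrate the linear-power phase noise over the bandwidth to get area A.
    linear = 10.0 ** (np.asarray(ssb_dbc_per_hz) / 10.0)
    area = np.trapz(linear, offsets_hz)
    a_db = 10.0 * np.log10(area)
    jitter_rad = np.sqrt(2.0 * 10.0 ** (a_db / 10.0))
    return np.degrees(jitter_rad)

# Example: flat -80 dBc/Hz from 100 Hz to 10 kHz offset
offsets = np.linspace(100.0, 10e3, 1000)
print(round(rms_phase_jitter_deg(offsets, np.full(offsets.size, -80.0)), 2))  # 0.81
```

A flat floor of this level already yields jitter of the order the text quotes (a fraction of a degree rms), which is enough to cost tens of dB in $P_1/P_2$ dynamic range.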
The digital receiver system in D3R is capable of switching the IF (Intermediate Frequency) on a pulse by pulse basis ([@Kumar8517944]). This feature has been utilized here to obtain frequency diversity at the IF frequency. The main factor limiting the amount of second trip suppression possible is the IF filter, which is implemented digitally in the FPGA (through its stop band suppression and roll off in the frequency domain). That decides how the frequency planning should be done to obtain adequate removal of the undesired trip echo. This is explained with the help of figure \[fig\_freqDiv\]. ![The use of frequency diversity at IF frequency with NCO being switched from $\omega_{1}$ to $\omega_{2}$ and back, from pulse to pulse.[]{data-label="fig_freqDiv"}](Drawing_FD){width="3in"} Typically, when the transition of the filter from passband to stopband is steep, it requires a high number of digital MAC (multiply-accumulate) units. But with the advent of high processing power and FPGA nodes optimized for DSP applications, we can easily obtain a very sharp roll-off filter working in real time. The analog filter before the A/D converter must be wide enough to accommodate both $\omega_{1}$ and $\omega_{2}$, in addition to the transition band of the digital filter. That is why a sharper roll off is important, so that we save on bandwidth. Furthermore, it is difficult to obtain high-bandwidth analog stages, because spurious and inter-modulation products of the mixing process may appear in-band. This also leads to a reduction in spurious-free dynamic range (SFDR). In figure \[fig\_freqDiv\], the inter-pulse IF change of the transmit waveform is shown, where $\textbf{W}_{1}$ and $\textbf{W}_{2}$ can, in general, be an orthogonal pair. However, in our design, $\textbf{W}_{1} = \textbf{W}_{2}$, and is an LFM (Linear Frequency Modulated) waveform with pulse width = $20\mu$s, bandwidth = 1 MHz, and a pulse repetition frequency of 0.5 kHz.
The frequencies $\omega_{1}$ and $\omega_{2}$ are selected based on the digital filter characteristics and are dealt with in more detail later in this section. If $A_{1}$ corresponds to the first trip and $A_{2}$ denotes the amplitude of the second trip echo, then on receive, when pulse 1 goes through the final IF mixing stage, the output at the mixer output port is: $$\begin{gathered} O_{1} = A_{1} + A_{1}\exp(-j(2\omega_{1})t) + A_{2}\exp(-j(\omega_{2}-\omega_{1})t) \\ + A_{2}\exp(-j(\omega_{2}+\omega_{1})t)\end{gathered}$$ For first trip processing, the nearest component to be filtered out is $A_{2}\exp(-j(\omega_{2}-\omega_{1})t)$, while the first trip information in $A_{1}$ has to be retained. Similarly, for the other pulse as well, we retain $A_{1}$ and filter out the nearest second trip component, $A_{2}\exp(-j(\omega_{1}-\omega_{2})t)$. Later, we coherently integrate over many such pulse sets and retrieve the first trip information. The process for second trip retrieval is very similar. To demonstrate the frequency diversity scheme, we start with a simulation of weather echoes with parameters $v_{1} = 10\,m/s$, $w_{1} = 1\,m/s$, and $\rho_{hv} = 0.995$, where the second trip has the same spectral width and co-polar correlation coefficient, except for the velocity, $v_{2} = -5\,m/s$. This simulated time series is modulated with $\omega_{1}$ for the first pulse and $\omega_{2}$ for the next pulse in a frame. The frame is a basic unit of two pulses, which repeats. The pulse-wise returns are then filtered by a digital filter at baseband, and after pulse compression the power of the echo signal is calculated. Based on this, we obtain the dynamic range of values $P_{1}/P_{2}$ where the parameter retrievals are within acceptable range (based on measured bias and standard deviation). This process of time series simulation is shown in figure \[fig\_IFTime\].
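The composition of the mixer output above can be verified numerically. The sketch below uses arbitrary illustration frequencies (60 Hz and 70 Hz in place of the radar's MHz-range IFs, chosen to land on exact FFT bins): mixing the sum of the two trips with a real LO at $\omega_{1}$ produces exactly the components at DC, $\omega_{2}-\omega_{1}$, $2\omega_{1}$, and $\omega_{2}+\omega_{1}$:

```python
import numpy as np

fs, N = 1000, 1000                # 1 Hz FFT bins; integer tones -> no leakage
t = np.arange(N) / fs
A1, A2, f1, f2 = 1.0, 0.5, 60.0, 70.0   # illustration values only

rx = A1 * np.cos(2 * np.pi * f1 * t) + A2 * np.cos(2 * np.pi * f2 * t)
lo = np.cos(2 * np.pi * f1 * t)   # final IF mixing stage, LO at omega_1
out = rx * lo

spec = np.abs(np.fft.rfft(out))
peaks = {int(b) for b in np.flatnonzero(spec > 1e-6 * spec.max())}
print(sorted(peaks))  # [0, 10, 120, 130] -> DC, f2-f1, 2*f1, f2+f1
```

Only the $\omega_{2}-\omega_{1}$ term falls near the desired trip, which is why the stop-band suppression of the digital IF filter at that offset sets the achievable second trip rejection.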
![The time series simulation at IF Frequency.[]{data-label="fig_IFTime"}](Drawing_FD1){width="3.6in"} ![Both the first and the second trip echoes are generated with equal power such that $P_{1}/P_{2} = 0$dB and with parameters: $v_{1} = 10m/s$, $w_{1} = 1m/s$ and $\rho_{hv} = 0.995$, while the second trip has the same parameters, except velocity of $v_{2} = -5m/s$. []{data-label="fig_IFfig1"}](BothTripsCombined_b.jpg){width="3in"} Figure \[fig\_IFfig1\] shows both the first and second trips, generated with equal power such that $P_{1}/P_{2} = 0$dB, and both of them have a phase jitter of 0.5 deg rms. The waveforms are up-converted to IF frequency. On odd pulses, the first trip is up-converted to $\omega_{1}$ and combined with the second trip echo (up-converted to $\omega_{2}$). In the same way, on even pulses, the first trip is up-converted to $\omega_{2}$ and combined with the second trip echo (up-converted to $\omega_{1}$). The spectrum after this process, with $\omega_{1}$ = 60MHz and $\omega_{2}$ = 70MHz, is shown in figure \[fig\_IFfig2\]. Next, the time series is down-converted with these IF frequencies. In this whole process, if we have one down-converter and pulse compressor channel, we can retrieve the first trip by switching the Numerically Controlled Oscillator (NCO) through sequence 1: $\omega_{1}$, $\omega_{2}$ over a frame of two pulses, based on odd or even pulse. For retrieval of the second trip, we would instead switch through sequence 2: $\omega_{2}$, $\omega_{1}$ over the two-pulse frame. But if we have two channels of the down-converter and pulse compressor system, then we can do both in parallel, by programming the NCO to sequence 1 or 2, depending upon the trip to be retrieved (in the respective channels). This consumes double the resources in an FPGA based system, compared with a single channel configuration. Another approach could be to use alternate pulse-pair frames for the first trip, and the in-between frames for the second trip.
In such a case, the sequence of the NCO would be: $\omega_{1}$, $\omega_{2}$, $\omega_{2}$ and $\omega_{1}$ for a frame of four pulses. This scheme saves on resources and computational complexity. The down-converter has been set up here to use frequency conversion followed by filtering and decimation stages. These stages are composed of FIR and CIC filters. The overall frequency response of the down-converter stage is set to a passband ripple of 0.2dB and a stop-band attenuation of 80dB. The amplitude and phase response of the filtering stage is shown in figure \[fig\_IFfig3\]. ![The Spectrum of Up-Converted First and Second trip echoes with $\omega_{1} = $60MHz and $\omega_{2} = $70MHz. []{data-label="fig_IFfig2"}](UpSpec_b.jpg){width="3.6in"} ![image](ddcResp_b.jpg){width="7in"} The spectrum after the down-conversion and filtering stage, with the NCO sequence set for second trip retrieval, is shown in figure \[fig\_IFfig4\]. The bandwidth of the chirp is 0.5 MHz. After the down-converter, the data goes through the pulse compression stage. As a final computation, we reconstruct the velocity plot of the second trip, with a power ratio $P_{1}/P_{2} = 0$dB, over the number of integration pulses (N = 64 here). This is shown in figure \[fig\_IFfig5\]. A more detailed analysis of the velocity and spectral width reconstruction process is given in the next section. ![The Spectrum after down-converter stage (cohering to second trip). []{data-label="fig_IFfig4"}](downConv1_b.jpg){width="3.6in"} ![The Velocity Spectrum of the second trip, recovered after frequency planning of $\omega_{1}$ and $\omega_{2}$.[]{data-label="fig_IFfig5"}](SecTripV_b.jpg){width="3.6in"} The noise floor is dominated by the phase noise of the received echo signal, which negatively impacts the dynamic range of $P_{1}/P_{2}$ that can be reconstructed. This was also highlighted before, during the recovery of second trip echoes using Chu inter-pulse codes.
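The NCO scheduling described above can be captured in a few lines (a sketch; the function name and mode labels are ours, not the D3R firmware's API). Sequence 1 retrieves the first trip, sequence 2 the second, and the four-pulse frame interleaves both on a single channel:

```python
def nco_frequency(pulse_idx, w1, w2, mode="first"):
    """IF frequency the NCO should use for a given pulse index.

    mode='first'       -> sequence 1: w1, w2, w1, w2, ...  (first trip retrieval)
    mode='second'      -> sequence 2: w2, w1, w2, w1, ...  (second trip retrieval)
    mode='interleaved' -> 4-pulse frame w1, w2, w2, w1     (both trips, one channel)
    """
    if mode == "first":
        return w1 if pulse_idx % 2 == 0 else w2
    if mode == "second":
        return w2 if pulse_idx % 2 == 0 else w1
    if mode == "interleaved":
        return (w1, w2, w2, w1)[pulse_idx % 4]
    raise ValueError(mode)

seq = [nco_frequency(k, 60e6, 70e6, "interleaved") for k in range(4)]
print(seq)  # -> the w1, w2, w2, w1 pattern of the four-pulse frame
```

The interleaved mode is the resource-saving variant: each trip is still cohered on alternate frames, at the cost of halving the number of pulses integrated per trip.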
It was shown that the second trip could be recovered for power ratios ($P_{1}/P_{2}$) up to 40dB, under 0.5 deg rms phase jitter. Under a similar phase noise (jitter) condition, the frequency diversity scheme developed in this paper can recover the second trip for power ratios spanning 60dB, an improvement of 20dB over the Chu inter-pulse code. This is one of the major achievements of this work. It has been substantiated here with simulation, and the mean bias and standard deviation of the second trip velocity as a function of power ratio, for the frequency diverse scheme, are shown in figures \[fig\_IFfig6\] and \[fig\_IFfig7\] below. ![The Mean Bias in the second trip velocity, after frequency planning of $\omega_{1}$ and $\omega_{2}$. []{data-label="fig_IFfig6"}](FreqAgileMeanBias_b.jpg){width="3.6in"} ![The standard deviation of the second trip velocity retrieval, after frequency planning of $\omega_{1}$ and $\omega_{2}$. []{data-label="fig_IFfig7"}](FreqAgileStdDev_b.jpg){width="3.6in"}

Velocity and Spectral Width Retrieval:
--------------------------------------

This scheme gives immense benefit in terms of suppression of the undesired trip echoes, but it requires spectral processing to retrieve the velocity and spectral width information. This is because the two frequencies used make the data samples in adjacent pulses uncorrelated. If we retrieved the velocity and spectral width using alternate samples, the unambiguous velocity range would be halved. In this section, we highlight a new method with which we can still recover the original range of velocity, with some constraints. The uncorrelated data from the two frequencies manifests itself as a different gain and phase term in adjacent pulses. This phase term is in addition to the phase modulation term due to Doppler. Hence, even for a stationary target, the gain and phase would go through two states over the coherent integration time.
There would be a fixed amplitude and phase modulation with a cycle rate of $F_{PRF}$, and the spectrum would have another sideband at $V_{1\,or\,2} - \pi$. This is illustrated in Fig. \[fig\_GainPaseImb\]. Moreover, both the original and sideband spectra look identical, so another mechanism is needed to figure out the original velocity. For correct spectral width retrieval, the sideband needs to be filtered out, otherwise there would be over-estimation. We propose a method here to correctly estimate velocity and spectral width under this type of fixed phase modulation. It can be used under a narrow spectral width assumption, where we define a narrow spectral width signal to be one-tenth of the unambiguous velocity range. For S-band, this would be 5 m/s, and for Ku band it would be close to 2 m/s. ![Fixed gain and phase modulation due to uncorrelated frequencies in alternate pulses. []{data-label="fig_GainPaseImb"}](GainPhaseImb_b.pdf){width="3.8in"} The proposed method relies on spectral processing to retrieve velocity and spectral width, in combination with pulse-pair processing after sideband removal. Another way to see how the sideband is generated, under a fixed amplitude and phase imbalance over the integration pulses, is the following. Under these circumstances, the lag-1 auto-correlation will be zero and the lag-2 auto-correlation will be one. Hence the auto-correlation function at the various lags can be written as: $$R_{n}^{com} = R_{n}\,[1\; 0\; 1\; 0\; ...\; 0]$$ where $R_{n}$ is the single lag auto-correlation function. If we take the Fourier transform of $R_{n}^{com}$, we get: $$\begin{aligned} FT\{R_{n}^{com}\} &= FT\{R_{n}\,[1\; 0\; 1\; 0\; ...\; 0]\} \\ &= FT\{R_{n}\} \ast FT\{[1\; 0\; 1\; 0\; ...\; 0]\} \end{aligned}$$ where $\ast$ is the convolution operator. The term to the right of the convolution operator is the Fourier transform of a periodic pulse train.
For this, the Fourier transform is also periodic and the impulses are spaced by $2\pi/N$ ([@Oppenheim:2009:DSP:1795494]) with periodicity $N = 2$. Hence the Fourier transform of the overall auto-correlation function is the power spectral density of the weather echoes convolved with an impulse train spaced apart by $\pi$ radians. It is now easy to see that the spectrum of weather echoes, with odd and even pulses modulated at different frequencies, will have a sideband at $V_{1/2} - \pi$ within the Nyquist interval. We explain next how to get rid of this sideband.

### Method {#VelRet}

We start with the upper half of the spectrum of a range gate with $SNR > 10dB$, for a weather echo, within a ray. The assumption is that either the original or the sideband falls in this region of the spectrum. This is a fair assumption, because the original and sideband spectra are separated by $\pi$ radians. We run the pulse-pair auto-correlation algorithm on this half of the spectrum (setting the other half to zero), to get an initial crude estimate of velocity. As a next step, we use a notch filter with a normalized notch width equal to 0.5 on the original spectrum, with the passband centered around the velocity estimate from the last step. This notches out the other component. We run the pulse-pair auto-correlation algorithm once again to get an accurate estimate of velocity and spectral width. An example spectrum from a D3R ray is shown in figure \[fig\_d3rSpec\]. ![The Velocity Spectrum of one range cell at a certain radial, from D3R weather radar, after frequency planning of $\omega_{1}$ and $\omega_{2}$ in adjacent pulses. The number of pulses considered are 128.[]{data-label="fig_d3rSpec"}](Spec1_b.jpg){width="3.6in"} After the estimate of velocity has been obtained in one range cell, it can be propagated to the neighboring range cells below and above it, and also to the ones to its left and right, as an initial crude estimate.
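The two steps of the method can be sketched as follows. This is a simplified numerical sketch, not the D3R implementation: a single noiseless complex tone with an alternating-pulse gain/phase imbalance (our example values) stands in for the weather echo, and velocity is expressed as a normalized angular frequency:

```python
import numpy as np

N = 64
k = np.arange(N)
v_true = 2 * np.pi * 20 / N           # normalized Doppler frequency (upper half)

# Alternating gain/phase imbalance from the two IFs creates a sideband at v - pi.
imbalance = np.where(k % 2 == 0, 1.0, 0.8 * np.exp(1j * 0.3))
x = imbalance * np.exp(1j * v_true * k)

def pulse_pair_velocity(ts):
    """Lag-1 pulse-pair estimate of the (normalized) mean Doppler frequency."""
    return np.angle(np.sum(ts[1:] * np.conj(ts[:-1])))

# Step 1: crude estimate from the upper half of the spectrum only.
X = np.fft.fft(x)
upper = X.copy()
upper[N // 2:] = 0                    # keep frequencies in [0, pi)
v_crude = pulse_pair_velocity(np.fft.ifft(upper))

# Step 2: notch (normalized notch width 0.5) centered on the crude estimate,
# then re-run pulse pair on the sideband-free series.
center = int(round(v_crude / (2 * np.pi) * N)) % N
keep = np.minimum((k - center) % N, (center - k) % N) <= N // 4
v_final = pulse_pair_velocity(np.fft.ifft(np.where(keep, X, 0)))

print(round(v_final, 3), round(v_true, 3))  # both 1.963
```

The sideband at bin $20 - N/2$ biases a naive full-spectrum pulse-pair estimate; masking half the spectrum removes it for the crude estimate, and the notch centered on that estimate removes it before the final one.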
The assumption here is that of spatial contiguity of weather signals, which is fair for most weather events. With the crude estimate, the notch passband can be centered around the velocity spectrum in the adjoining range bins, and the sidebands can be notched out. In turn, the velocity information is again passed on to the neighboring bins, and we continue to get progressively better estimates of velocity and spectral width in the same and adjoining rays. However, we need to verify our original assumption of retaining the upper half of the spectrum in the very first range cell that we started from, since the velocity profile in the other range cells was constructed from this assumption alone. This verification can be accomplished by comparison with other radars, or with a different band of the same radar. If we observe that the velocities do not match, then we need to subtract $v_{unb}$ from our computed velocities. This constructs the velocity profile we would have obtained had we started with the other sideband in the first place. The method is summarized in figure \[fig\_method\]. ![Steps for retrieval of velocity and Spectral width with a frequency diversity scheme.[]{data-label="fig_method"}](figure_method_b.jpg){width="2.6in"} ![image](Suppression_FD.pdf){width="7in"} ![image](Suppression1_FD.pdf){width="7.5in"} ![image](Suppression2_FD.pdf){width="7.5in"} ![image](SecTrpRec.pdf){width="5in"}

Effect of Frequency Diversity scheme on other dual-polarization moments {#section2}
=======================================================================

It can be easily observed that the frequency diversity scheme improves the bias induced by the unwanted trip, by suppressing it to a level below noise. Practically, the accuracy of the dual-polarization moments is seen to depend upon the co-polar correlation between the horizontal and vertical polarization signals.
[@Bringi2001] describe in detail the effect of co-polar correlation on these moments under alternate and hybrid modes of operation. In this section, we analyze the effect of the frequency diverse scheme on the estimation of the co-polar correlation coefficient. In doing so, we also examine the effect of non-ideal conditions and mis-matched channels. Let the first trip, for all pulses, be denoted by $H_{1}$ and the second trip by $H_{2}$, and the baseband filter matrix by $\textbf{F}_{bb}$; then the equivalent signal model for H-pol and V-pol, for the first trip retrieval, can be written as: $$\begin{aligned} &\textbf{H}^{1} = \textbf{H}_{1} + \textbf{F}_{bbh}.\textbf{H}_{2} \\ &\textbf{V}^{1} = \textbf{V}_{1} + \textbf{F}_{bbv}.\textbf{V}_{2} \end{aligned}$$ The auto-correlation function for the first trip, for the hybrid mode of operation, can be written as: $$\begin{aligned} R^{1}_{vh}(0) &= \frac{1}{N}Tr\{\textbf{V}^{1}\textbf{H}^{1H}\} \\ &= \frac{1}{N}Tr\{(\textbf{V}_{1} + \textbf{F}_{bbv}.\textbf{V}_{2})(\textbf{H}_{1} + \textbf{F}_{bbh}.\textbf{H}_{2})^{H}\} \\ &= \frac{1}{N}Tr\{\textbf{V}_{1}\textbf{H}_{1}^{H} + \textbf{V}_{2}\textbf{H}_{2}^{H}\textbf{F}_{bbh}^{H}\textbf{F}_{bbv}\} \end{aligned}$$ If the characteristics of the H and V pol filters are the same, then $\textbf{F}_{bbh} = \textbf{F}_{bbv} = \textbf{F}$ and the above equation simplifies to: $$R^{1}_{vh}(0) = \frac{1}{N}Tr\{\textbf{V}_{1}\textbf{H}_{1}^{H}\} + \frac{1}{N}Tr\{\textbf{V}_{2}\textbf{H}_{2}^{H}\textbf{F}^{H}\textbf{F}\}$$ Similarly, for the second trip, we can model it as: $$\begin{aligned} &\textbf{H}^{2} = \textbf{F}_{bbh}.\textbf{H}_{1} + \textbf{H}_{2} \\ &\textbf{V}^{2} = \textbf{F}_{bbv}.\textbf{V}_{1} + \textbf{V}_{2} \end{aligned}$$ and the autocorrelation function: $$R^{2}_{vh}(0) = \frac{1}{N}Tr\{\textbf{V}_{2}\textbf{H}_{2}^{H}\} + \frac{1}{N}Tr\{\textbf{V}_{1}\textbf{H}_{1}^{H}\textbf{F}^{H}\textbf{F}\}$$ The corresponding correlation coefficients can be written as
[@Bringi2001]: $$\begin{aligned} &\rho^{1}_{vh}(0) = \dfrac{R^{1}_{vh}(0)}{\sqrt{P^{1h}_{co}P^{1v}_{co}}} \\ &\rho^{2}_{vh}(0) = \dfrac{R^{2}_{vh}(0)}{\sqrt{P^{2h}_{co}P^{2v}_{co}}} \end{aligned}$$ where $P^{1,2,h,v}_{co}$ is the co-polar power for the first or second trip echoes, for the H or V pol channels. The degree of dissimilarity between the autocorrelations of the H and V pol channels is a factor that impacts $\rho_{vh}(0)$ for the first and second trip echoes. This dissimilarity can arise due to slight differences in the filter characteristics on receive (the cumulative effects of the anti-aliasing or baseband CIC/FIR filters). Next, we analyze the effect of the filter on differential reflectivity ($Z_{dr}$), when the first trip is being retrieved: $$\begin{aligned} Z_{dr}^{1} &= \frac{P^{1h}_{co}}{P^{1v}_{co}} = \frac{R^{1}_{vv}(0)}{R^{1}_{hh}(0)} \\ &= \dfrac{Tr\{(\textbf{V}_{1} + \textbf{F}_{bbv}\textbf{V}_{2})(\textbf{V}_{1} + \textbf{F}_{bbv}\textbf{V}_{2})^{H}\} }{Tr\{(\textbf{H}_{1} + \textbf{F}_{bbh}\textbf{H}_{2})(\textbf{H}_{1} + \textbf{F}_{bbh}\textbf{H}_{2})^{H}\} } \\ &= \dfrac{Tr\{\textbf{V}_{1}\textbf{V}_{1}^{H} + \textbf{V}_{1}(\textbf{F}\textbf{V}_{2})^{H} + (\textbf{F}\textbf{V}_{2})\textbf{V}_{1}^{H} + \textbf{F}\textbf{V}_{2}(\textbf{F}\textbf{V}_{2})^{H}\}}{Tr\{\textbf{H}_{1}\textbf{H}_{1}^{H} + \textbf{H}_{1}(\textbf{F}\textbf{H}_{2})^{H} + (\textbf{F}\textbf{H}_{2})\textbf{H}_{1}^{H} + \textbf{F}\textbf{H}_{2}(\textbf{F}\textbf{H}_{2})^{H}\}} \end{aligned}$$ assuming $\textbf{F}_{bbh} = \textbf{F}_{bbv} = \textbf{F}$. It can be easily observed from the equation above that the major contribution towards the bias of $Z_{dr}$ is through the middle two terms in the numerator and denominator (which get multiplied by the first trip voltage). The second trip voltage, however, is always preceded by the filter and is definitely going to be low.
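For concreteness, the sample estimator behind these correlation coefficients can be sketched from H- and V-channel time series (a sketch with synthetic data; the target correlation of 0.9 is an example value of ours, not a D3R measurement):

```python
import numpy as np

def copolar_correlation(h, v):
    """Magnitude of the co-polar correlation coefficient rho_vh(0)."""
    num = np.abs(np.sum(v * np.conj(h)))
    den = np.sqrt(np.sum(np.abs(h) ** 2) * np.sum(np.abs(v) ** 2))
    return num / den

rng = np.random.default_rng(0)
n = 1 << 16
h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
g = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

rho_target = 0.9
v = rho_target * h + np.sqrt(1 - rho_target**2) * g   # correlated V channel

print(round(copolar_correlation(h, v), 2))  # ≈ 0.9
```

Any residual second trip power or H/V filter mismatch enters this estimator through the cross terms derived above, pulling the measured value away from the intrinsic $\rho_{hv}$ of the first trip.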
Additionally, the degree of dissimilarity between the filter responses on the H and V pol channels also contributes towards bias in $Z_{dr}$.

Performance test and validation on D3R {#section3}
======================================

With the D3R weather radar, we are able to make co-aligned Ku and Ka band observations of a precipitation event. It is a very useful ground validation tool for the Global Precipitation Measurement (GPM) mission satellite with its dual-frequency radar. D3R uses a combination of short and medium pulses with pulse durations of $1 \mu s$ and $20 \mu s$ respectively. The short pulse is used to provide adequate sensitivity for the duration of the medium pulse and to mitigate the blind range of the medium pulse. The radar has been in numerous field campaigns (see [@First5yrs], [@FiveYrsOp], [@olempex1]). Recently, the D3R radar was upgraded with a new version of the digital receiver hardware and firmware, which supports larger filter lengths, multiple phase coded waveforms, change of frequencies pulse to pulse, and also newer IF sub-systems ([@Kumar8517944], [@8128188] and [@AdapFilt], [@kumar2019receive]). With these newer sub-systems, D3R was deployed for observing snow at the Winter Olympics in the PyeongChang region of South Korea, 2018 ([@8899120], [@Icepop1]). With a $500 \mu s$ PRI, D3R's unambiguous range is 60 km, but the first $130 \mu s$ is used to inject noise for receiver calibration. Hence, the D3R first trip extends up to 40 km, with a dead range of 20 km. Beyond 60 km is the second trip range. In this section, we demonstrate the frequency diversity scheme, with a pulse by pulse change of frequency, on D3R Ku band, and also the retrieval of the second trip beyond 60 km of range. The first case, demonstrated in figure \[fig\_case1\], has only second trip echoes and no weather echoes in the first trip range. The normal transmission uses a chirp waveform for the medium pulse centered at 65 MHz and a short pulse at 55 MHz.
For the frequency diversity case, the frequencies used are 55 and 65 MHz for the short and medium pulses on the odd numbered pulses. For the even numbered pulses, the frequencies are 60 and 70 MHz for the short and medium pulse respectively. The suppression of the second trip can be observed in the south-east sector. The remaining echoes are the clutter points at near range. Another case is depicted in figure \[fig\_case2\]. Initially, we show normal chirp transmission, as reference, with information about velocity and spectral width. This case has the first trip in the south-west sector, with second trip power overlaid. Figure \[fig\_fifth\_case1\] shows the velocity profile along the 210 radial, and the second trip contamination can clearly be observed in the 5 to 15 km range. Also, figure \[fig\_sixth\_case1\] plots the spectrum at range 10 km and at radial $210 \degree$ azimuth, showing the second trip velocity. The frequency diversity case is shown in figure \[fig\_case3\], which was taken a couple of minutes later than the normal transmission. There were no second trip signatures in the 5 to 15 km range at the same radial, but instead there is a replica of the original velocity spectrum as a sideband, appearing at a distance of $\pi$ radians from the original. The processing to remove this sideband has been explained in section \[VelRet\]. After going through the steps listed there, we can reconstruct velocity and spectral width by filtering out the sideband power. The velocity and spectral width recovered after this process are shown in figures \[fig\_fourth\_case2\] and \[fig\_fifth\_case2\] respectively. Also, for this case, we have recovered the second trip, which is shown beyond the 60 km range in figure \[fig\_case4\]. For first trip retrieval, we programmed the sequence $\omega_{1}$, $\omega_{2}$ in a frame, while for second trip recovery, the sequence $\omega_{2}$, $\omega_{1}$ was used by the NCO.
The short pulse sub-channel was configured to recover the second trip and the medium pulse sub-channel the first trip. Hence, both trips were retrieved simultaneously. However, we no longer have the short pulse echoes for the first trip, which previously mitigated the blind range of the medium pulse. Due to this, there is a gap in the beginning (first $\sim 4$ km in the PPI), and a bigger gap can be observed between the first and second trip echoes (in addition to the dead range).

Conclusion {#section5}
==========

We developed an inter-pulse frequency diversity technique for weather radar systems and utilized the orthogonality between two frequencies, in the IF domain, to reject the undesired trip echoes. This technique shows improved performance in second trip suppression and retrieval under phase noise conditions, compared with Chu phase codes (SZ codes). Extensive time-series simulations were carried out to ascertain the performance of this technique. A comparison with a Chu phase code based inter-pulse system was presented, and it shows promising results, with recovery of the weaker trip echoes over a wider dynamic range of overlaid power contamination. However, it should be emphasized that, due to the uncorrelated data samples in adjacent pulses, velocity and spectral width need to be reconstructed from the coherent processing interval with the new method described here, which works under the assumption of narrow spectral width. Such an assumption also holds for SZ code based retrievals, which under wider spectral width tend to behave more like random phase codes.
--- abstract: 'In the study of $\mathcal{P}\mathcal{T}$-symmetric quantum systems with non-Hermitian perturbations, one of the most important questions is whether eigenvalues stay real or whether $\mathcal{P}\mathcal{T}$-symmetry is spontaneously broken when eigenvalues meet. A particularly interesting set of eigenstates is provided by the degenerate ground-state subspace of systems with topological order. In this paper, we present simple criteria that guarantee the protection of $\mathcal{P}\mathcal{T}$-symmetry and, thus, the reality of the eigenvalues in topological many-body systems. We formulate these criteria in both geometric and algebraic form, and demonstrate them using the toric code and several different fracton models as examples. Our analysis reveals that $\mathcal{P}\mathcal{T}$-symmetry is robust against a remarkably large class of non-Hermitian perturbations in these models; this is particularly striking in the case of fracton models due to the exponentially large number of degenerate states.' author: - Henry Shackleton - 'Mathias S. Scheurer' bibliography: - 'fracton.bib' title: | Protection of parity-time symmetry in topological many-body systems:\ non-Hermitian toric code and fracton models --- Isolated systems are governed by Hermitian Hamiltonians, with real energy eigenvalues and unitary time evolution. Nonetheless, non-Hermitian Hamiltonians [@Bender_2007; @Bender_2015; @Bender1997; @Rotter2009], for which eigenvalues may generally be complex, are also physically relevant as effective descriptions of a large variety of different systems. 
For instance, they have been studied in the context of biological [@Nelson1998; @Amir2016; @Murugan2017], mechanical [@MechanicalSystem], and photonic [@Ruter2010; @Brandstetter2014; @Peng2014; @peng2014a; @Chang2014; @Lin2011; @Feng2013; @Hodaei2014; @Feng2014; @Regensburger2012; @Peng2016; @Gao2015; @Xu2016; @Jing2015; @Zhang2016; @Sun2014; @Chong2011; @Ramezani2010; @Guo2009; @Klaiman2008; @Wu2020; @Weidemann311; @Naghiloo2019] systems, electrical circuits [@EC1; @EC2; @ExperimentRonny], cavities [@SchaeferMicrowave; @SchaeferMicrowave2; @Lee2014; @Choi2010], optical lattices [@Diehl2011; @Lee2014a], and superconductors [@SCs1; @SCs2]. On top of a complex spectrum, non-orthogonal eigenstates and exceptional points are unique features of non-Hermitian Hamiltonians, with crucial physical consequences [@kato1966perturbation; @2012JPhA...45R4016H; @2014JPhA...47c5305B]. In the past few years, there has been growing interest in the condensed matter community in studying non-Hermitian generalizations of quantum many-body systems. Most of these recent efforts were motivated by the question of how to generalize topological band theory to non-Hermitian systems [@ReviewCorrelated; @ReviewBands], uncovering a modified bulk-boundary correspondence [@Hu2011; @Esaki2011; @San-Jose2016; @PhysRevLett.116.133903; @Yao2018; @Kunst2018; @Xiong2018; @RobertTopolBands; @2020arXiv200401886K] and topological classification [@Gong2018; @Kawabata2018; @Lieu2018; @Leykam2017; @Shen; @ZhouLee; @Longhi2019; @Yuce2019], as well as exceptional nodal phases [@Kozi2017; @Zhou1009; @Moors2019; @Budich2019; @Okugawa2019; @Rui2019]. 
Furthermore, there has also been research on disordered systems [@Disorder2; @Disorder3; @NonHermitianDisorder; @Anderson4] and studies of non-Hermitian physics where many-body correlations play a crucial role, such as non-Hermitian fractional quantum Hall phases [@FQHYoshida], Kondo physics [@PhysRevLett.121.203001], critical points [@UedaCritical1; @Littlewood], and many more [@PiazzaQMB; @SkinSuperfluid; @UedaPhaseTransition; @Guo2019; @Guo2020; @FermionicChain; @Superfluidity1; @Superfluidity2; @BoseMottInsulator; @TopMottIns1D]. Among these models, a particularly important and commonly studied class of non-Hermitian Hamiltonians is provided by $\mathcal{P}\mathcal{T}$-symmetric Hamiltonians which are invariant under a combination of parity and time-reversal. Despite being non-Hermitian, these Hamiltonians can exhibit real spectra [@Bender1997; @Bender1998; @Bender2003; @Bender_2007; @Bender_2015]. Intuitively, this may be attributed to a balance of gain and loss between the system and its environment. Mathematically, the protection is related to the fact that $\mathcal{P}\mathcal{T}$ symmetry implies that eigenvalues come in complex-conjugate pairs such that isolated real eigenvalues cannot become complex immediately. When they “meet” with another eigenvalue, they can either stay on the real axis or form complex-conjugate partners; when the latter happens, $\mathcal{P}\mathcal{T}$ is said to be broken. Therefore, the analysis of $\mathcal{P}\mathcal{T}$-symmetry breaking is particularly subtle in systems with (approximate) degeneracies. For symmetry-imposed degeneracies, the reality of the eigenvalues can be simply protected by the symmetry itself and the fact that eigenvalues must come in complex conjugate partners. 
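As a minimal illustration of this dichotomy (our toy example, not a model discussed in the main text), consider the standard $\mathcal{P}\mathcal{T}$-symmetric two-level Hamiltonian $H = \begin{pmatrix} i\gamma & 1 \\ 1 & -i\gamma \end{pmatrix}$, with $\mathcal{P} = \sigma_x$ and $\mathcal{T}$ complex conjugation. Its eigenvalues $\pm\sqrt{1-\gamma^2}$ stay real for $\gamma < 1$ and split into a complex-conjugate pair beyond the exceptional point at $\gamma = 1$:

```python
import numpy as np

def eigenvalues(gamma):
    """Eigenvalues of the PT-symmetric two-level Hamiltonian [[i*g, 1], [1, -i*g]]."""
    H = np.array([[1j * gamma, 1.0], [1.0, -1j * gamma]])
    return np.linalg.eigvals(H)

unbroken = eigenvalues(0.5)   # gamma < 1: PT unbroken, spectrum +/- sqrt(0.75)
broken = eigenvalues(2.0)     # gamma > 1: PT broken, spectrum +/- i*sqrt(3)

print(np.allclose(unbroken.imag, 0))                   # real spectrum
print(np.allclose(np.sort(broken.imag), [-3**0.5, 3**0.5]))  # conjugate pair
```

At the exceptional point the two eigenvalues (and eigenvectors) coalesce; the criteria developed in this paper address precisely when such a collision is forced to keep the eigenvalues on the real axis.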
*A priori*, this is different for degeneracies related to intrinsic topological order [@WenFQH; @SubirReview]: for instance, the toric code model [@Kitaev1997] has four ground states on a torus, which are guaranteed to be (exponentially) close in energy, even if all unitary symmetries are broken; similar statements apply to other spin-liquid phases. An even more dramatic ground-state degeneracy (GSD), which scales exponentially with linear system size, is realized in fracton models—novel quantum states of matter that are characterized by excitations with restricted mobility [@NandkishoreHermele; @PretkoReview]. Similar to spin-liquids, the GSD of fracton phases is topological in the sense that the different ground states are locally indistinguishable. One might be tempted to conclude that turning on a non-Hermitian, $\mathcal{P}\mathcal{T}$-symmetric perturbation in such systems will immediately lead to complex ground-state energies. Contrary to these expectations, we demonstrate in this paper that the reality of the ground-state eigenvalues in these phases can be surprisingly robust against a large class of such perturbations, even if all unitary symmetries are broken and in the presence of exponentially many degenerate states. More specifically, we study under which conditions the eigenvalues of a given (almost degenerate) subspace of a Hermitian quantum system will stay real upon adiabatically turning on a non-Hermitian perturbation such that the total Hamiltonian commutes with a generalized $\mathcal{P}\mathcal{T}$ symmetry. Here, “adiabatically” refers to keeping the gap to all other states finite and “generalized $\mathcal{P}\mathcal{T}$” indicates that $\mathcal{P}$ does not have to be spatial inversion, but might be any unitary operator. We first discuss a general mathematical condition for the eigenvalues to stay real and, hence, $\mathcal{P}\mathcal{T}$ symmetry to be protected.
We then demonstrate that this condition has strong implications for the protection of $\mathcal{P}\mathcal{T}$ symmetry in the ground-state manifold of systems with topological GSDs, taking the toric code [@Kitaev1997], the X-cube model [@Vijay2016], the checkerboard models [@Vijay2016; @VijayHaahFu1], Haah’s 17 CSS cubic codes [@Haah2011], and the large class of quantum fractal liquids of Ref. [@Yoshida2013] as examples. It is found that $\mathcal{P}\mathcal{T}$ symmetry will be preserved on systems with even linear system sizes, $L_j$, (in some Haah codes, divisibility by $4$ is required) for a large class of perturbations, while it is generically fragile in systems with odd $L_j$. We emphasize that understanding the preservation or breaking of $\mathcal{P}\mathcal{T}$ symmetry is not only one of the central theoretical questions of $\mathcal{P}\mathcal{T}$-symmetric quantum mechanics, but also of practical relevance for experimental realizations and potential applications of effectively non-Hermitian systems. We hope that our framework for predicting the stability of the reality of eigenvalues and the presence or absence of exceptional points will provide greater control over the effects of non-Hermitian perturbations, which is, e.g., important for the observation of power-law oscillations [@Jorg2019; @Takasu2020; @Regensburger2012; @LeiPTQuantumDynamics] and the potential applications as topological lasers [@TopologicalLaser1; @TopologicalLaser2; @TopologicalLaser3] and sensing devices [@WiersigSensing; @LiuSensing]. The remainder of the paper is organized as follows. In Sec. \[GeneralFormOfPerturbations\], we define the type of non-Hermitian Hamiltonians we are interested in, $\mathcal{P}\mathcal{T}$ symmetry, and the more general condition of pseudo-Hermiticity. We also discuss the general, mathematical condition for colliding eigenvalues to stay real. It is first applied to the toric code, in Sec. 
\[ToricCode\], to the X-cube, checkerboard models, and Haah’s codes in Sec. \[FractonModels\], and finally to the fractal liquids of Ref. [@Yoshida2013] in Sec. \[FractalLiquids\]. A summary of our findings is provided in Sec. \[Conclusion\].

Pseudo-Hermitian Perturbations {#GeneralFormOfPerturbations}
==============================

We start with a general explanation of the class of non-Hermitian perturbations under consideration. To this end, let us first assume that our non-Hermitian Hamiltonian $H$ admits a complete biorthonormal eigenbasis $\{\ket{\psi_n}, \ket{\phi_n}\}$ [@2014JPhA...47c5305B], which means that $$\begin{aligned} \begin{split} H \ket{\psi_n} &= E_n \ket{\psi_n}\,, \\ H^\dagger \ket{\phi_n} &= E_n^* \ket{\phi_n}\,, \\ \braket{\phi_n}{\psi_m} &= \delta_{mn}\,. \label{BiorthogonalBasis}\end{split}\end{aligned}$$ This is equivalent to the statement that $H$ is diagonalizable, which is a very natural assumption for a generic (non-Hermitian) Hamiltonian of a physical system. Note, however, that it can be violated, most importantly at exceptional points [@kato1966perturbation; @2012JPhA...45R4016H], which we will discuss separately below. In the study of non-Hermitian perturbations to quantum systems, it is common to further assume that these Hamiltonians are $\mathcal{P}\mathcal{T}$-symmetric [@Bender1997; @Bender1998; @Bender2003; @Bender_2007; @Bender_2015], i.e., $[H,\mathcal{P}\mathcal{T}] = 0$, where $\mathcal{P}$ can be abstractly defined as any unitary operator that squares to $\mathbbm{1}$, and $\mathcal{T}$ is complex conjugation in a certain basis. Doing so imposes additional restrictions on the spectrum of $H$: eigenvalues must come in complex-conjugate pairs, as $H (\mathcal{P} \mathcal{T}) \ket{\psi_n} = E_n^* (\mathcal{P} \mathcal{T}) \ket{\psi_n}$.
Importantly, this means that if one starts with a Hermitian, $\mathcal{P} \mathcal{T}$-symmetric Hamiltonian and applies a $\mathcal{P}\mathcal{T}$-symmetric non-Hermitian perturbation, isolated eigenvalues cannot become complex on their own—they must merge with another eigenvalue on the real axis before becoming complex. This feature leads to the reality of energy spectra generally being robust to sufficiently small $\mathcal{P}\mathcal{T}$-symmetric perturbations, although degenerate subspaces are not necessarily protected from becoming complex. When $\mathcal{P} \mathcal{T} \ket{\psi_n} \propto \ket{\psi_n}$, $\mathcal{P} \mathcal{T}$ symmetry is said to be “unbroken” and the associated eigenvalues are real. Once eigenvalues meet and become complex, $\mathcal{P} \mathcal{T}$ symmetry is “broken” and $\ket{\psi_n}$ is not an eigenstate of $\mathcal{P} \mathcal{T}$ any more. In this work, however, we do not restrict ourselves to $\mathcal{P}\mathcal{T}$ symmetry, and instead impose a closely related but more general condition of *pseudo-Hermiticity* [@Mostafazadeh2001; @Mostafazadeh2001a; @Mostafazadeh2002]. A Hamiltonian $H$ is pseudo-Hermitian if there exists a linear operator $\eta$, which we will refer to as the *metric operator*, such that $$\eta H \eta^{-1} = H^\dagger. \label{PseudoHermiticity}$$ In this paper, we take $H$ to consist of a Hermitian component, $H_0$, and a non-Hermitian perturbation, $\epsilon V$, with magnitude that we control with $\epsilon \in \mathbbm{R}$: $$H=H_0 + \epsilon \, V, \quad H_0^\dagger = H_0. \label{GeneralFormOfHam}$$\[CompleteDesciptionOfHam\] For the Hermitian part $H_0$, [Eq. (\[PseudoHermiticity\])]{} implies $\comm{\eta}{H_0} = 0$, i.e., $\eta$ is a symmetry of the unperturbed Hamiltonian. We also take $\eta$ to be unitary, so that [Eq. (\[PseudoHermiticity\])]{} is equivalent to $\eta H \eta^\dagger = H^\dagger$. 
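The constraint that pseudo-Hermiticity places on the spectrum can be checked in a minimal numerical sketch (Python/NumPy; the $4\times 4$ matrix, the block sizes, and the random seed are arbitrary illustrative choices, not tied to any model discussed in the text). For $\eta=\mathrm{diag}(+1,+1,-1,-1)$, writing $H$ in the eigenbasis of $\eta$, the condition $\eta H \eta^{-1} = H^\dagger$ fixes a block structure, and the resulting spectrum is closed under complex conjugation:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

# In the eigenbasis of eta = diag(+1, +1, -1, -1), the condition
# eta H eta^{-1} = H^dagger forces H = [[A, B], [-B^dagger, D]]
# with A and D Hermitian and B arbitrary.
A = random_hermitian(2)
D = random_hermitian(2)
B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = np.block([[A, B], [-B.conj().T, D]])
eta = np.diag([1.0, 1.0, -1.0, -1.0])

# eta is its own inverse here, so this is exactly Eq. (PseudoHermiticity).
assert np.allclose(eta @ H @ eta, H.conj().T)

# Eigenvalues are real or come in complex-conjugate pairs:
# every eigenvalue has its conjugate somewhere in the spectrum.
E = np.linalg.eigvals(H)
for e in E:
    assert np.min(np.abs(E - e.conj())) < 1e-8
```

The block structure makes explicit that the Hermitian diagonal blocks live in the $\eta=\pm 1$ sectors, while only the off-diagonal coupling between sectors of opposite $\eta$ eigenvalue is allowed to be non-Hermitian.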
The purpose of this work is to derive and discuss general conditions under which (certain physically relevant parts of) the spectrum of $H$ in [Eq. (\[GeneralFormOfHam\])]{} can stay real upon adiabatically turning on $\epsilon$. This condition of pseudo-Hermiticity (\[PseudoHermiticity\]) is manifestly identical to $\mathcal{P}\mathcal{T}$ symmetry with $\mathcal{P} \equiv \eta^{-1}$ provided $H$ is symmetric, $H = H^T$. In fact, it was shown [@2018arXiv180101676Z] that any $\mathcal{P}\mathcal{T}$-symmetric, finite-dimensional Hamiltonian is also pseudo-Hermitian. For this reason and since it does not involve any anti-linear operators and, thus, does not require a choice of basis, we focus on pseudo-Hermiticity in this work. Moreover, pseudo-Hermiticity gives a more systematic way of constructing non-Hermitian perturbations $\epsilon\,V$ to Hermitian models: one can immediately obtain all the possible choices of $\eta$ as it has to be a symmetry of the unperturbed, Hermitian part, $H_0$, of the model, which then specifies the suitable non-Hermitian perturbations.

Protection of reality of energies
---------------------------------

If $H$ is pseudo-Hermitian, complex eigenvalues also must come in conjugate pairs, since the combination of Eqs. (\[BiorthogonalBasis\]) and (\[PseudoHermiticity\]) implies $H \eta^{-1} \ket{\phi_n} = E_n^* \eta^{-1} \ket{\phi_n}$. As for $\mathcal{P}\mathcal{T}$-symmetric perturbations, this means that the reality of isolated eigenvalues is stable to small pseudo-Hermitian perturbations. If a group of eigenvalues is degenerate (or almost degenerate) under $H_0$—as is common in models involving symmetries or topological superselection sectors—they are generally not stable to pseudo-Hermitian perturbations.
In these cases, we identify two main mechanisms by which these degenerate eigenvalues can stay real under pseudo-Hermitian perturbations: *(I)* The first method of ensuring degenerate eigenvalues stay real is simply to preserve the degeneracy under pseudo-Hermitian perturbations. Pseudo-Hermiticity implies that if degenerate eigenstates are going to become complex, they must acquire imaginary parts with opposite signs. If one forces the (in general complex) eigenvalues to remain degenerate, this can never be satisfied for a non-zero imaginary component, unless the eigenvalues meet with another set of symmetry-unrelated eigenvalues. The latter, however, requires a sufficiently large value of $\epsilon$, as symmetry-unrelated states are generically not degenerate for $\epsilon=0$. The symmetries enforcing the degeneracy can be unitary symmetries, fermionic time-reversal symmetry [@2020arXiv200401886K], or even bosonic time-reversal symmetries unique to pseudo-Hermitian systems [@Sato2011]. *(II)* The second mechanism is more subtle and our main focus in this work. If a pseudo-Hermitian term breaks all symmetries protecting the degeneracy, the eigenvalue splitting will generally be nonzero. This splitting can be either real or imaginary. However, one can show that *if all the eigenstates of $H_0$ of the degenerate (or almost degenerate) subspace of interest have the same eigenvalue under $\eta$, then this splitting will always be real.* This mathematical fact can be readily understood within the framework of $G$-Hamiltonian systems developed by Krein, Gel’fand and Lidskii [@krein1950generalization; @gel1955structure] in the 1950s for the case of Hermitian $\eta$. In Appendix \[ap:perturbation\], we provide a simple and physically insightful proof to all orders of perturbation theory that works for $\eta$ being Hermitian or unitary. 
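The role played by the $\eta$ eigenvalues in mechanism *(II)* can be illustrated with a minimal three-level sketch (Python/NumPy; the matrices and coupling strengths are hypothetical illustrative choices, not taken from the models studied later): a degenerate pair with identical $\eta$ eigenvalues splits along the real axis, while a pair with opposite $\eta$ eigenvalues immediately forms a complex-conjugate pair:

```python
import numpy as np

# Two degenerate "ground states" plus one gapped excited state.
H0 = np.diag([0.0, 0.0, 5.0])

def is_pseudo_hermitian(H, eta):
    return np.allclose(eta @ H @ np.linalg.inv(eta), H.conj().T)

# Case A: both degenerate states have eta-eigenvalue +1.
eta_A = np.diag([1.0, 1.0, -1.0])
V_A = np.array([[0.3, 0.1, 1.0j],
                [0.1, -0.2, 0.5j],
                [1.0j, 0.5j, 0.0]])
H_A = H0 + 0.3 * V_A
assert is_pseudo_hermitian(H_A, eta_A)
assert not np.allclose(V_A, V_A.conj().T)  # genuinely non-Hermitian
# The splitting of the degenerate pair is real; the full spectrum stays real.
assert np.abs(np.linalg.eigvals(H_A).imag).max() < 1e-10

# Case B: the degenerate states have opposite eta-eigenvalues.
eta_B = np.diag([1.0, -1.0, 1.0])
V_B = np.array([[0.0, 1.0, 0.0],
                [-1.0, 0.0, 0.0],
                [0.0, 0.0, 0.0]])
H_B = H0 + 0.3 * V_B
assert is_pseudo_hermitian(H_B, eta_B)
# The pair splits into a complex-conjugate pair immediately.
assert np.abs(np.linalg.eigvals(H_B).imag).max() > 0.1
```

In case A the perturbation restricted to the degenerate subspace is Hermitian, so the eigenvalues remain real for couplings small compared to the gap; in case B the restricted $2\times 2$ block has the anti-Hermitian off-diagonal structure of the two-level model discussed later, and the eigenvalues acquire imaginary parts at arbitrarily small $\epsilon$.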
Furthermore, our analysis shows that, *if the eigenvalues of $\eta$ are identical, the projections of the associated eigenstates to the (almost) degenerate subspace of $H_0$ will be orthogonal to first order in $\epsilon$ and to zeroth order in the limit of a large gap to the rest of the spectrum*; it also follows that, as long as the energetic separation of the subspace of interest from the rest of the spectrum is sufficiently large, they will be approximately orthogonal in the entire Hilbert space, even though the Hamiltonian is not Hermitian any more. This is very different when the eigenvalues of $\eta$ are not the same. In that case, there can be exceptional points [@kato1966perturbation; @2012JPhA...45R4016H], where the Hamiltonian is defective, eigenstates coalesce and become identical, irrespective of how large the gap to the other states of the system is. Intuitively, this is related to the fact that the Hamiltonian restricted to the degenerate subspace is Hermitian: denoting the degenerate eigenfunctions by $\ket{\psi_i}$ and writing $\eta \ket{\psi_i} = e^{i \delta} \ket{\psi_i}$, we have $$\hat{H}_{ij} \equiv \bra{\psi_i} H \ket{\psi_j} \stackrel{(\ref{PseudoHermiticity})}{=} \bra{\psi_i} \eta^{-1} H^\dagger \eta \ket{\psi_j} = \hat{H}^*_{ji}. \label{HermitianMatrixInEigenspace}$$ The complete argument of Appendix \[ap:perturbation\] involves constructing a full effective Hamiltonian for the degenerate subspace, and showing that it is Hermitian via similar reasoning.

Remarks on condition for reality
--------------------------------

Some comments should be made regarding the possibility of multiple degeneracies and multiple metric operators. In the case of a two-fold degeneracy, i.e., two eigenvalues being identical, there are only two possibilities for the eigenvalues of $\eta$—either they both have the same eigenvalue under $\eta$, or they are different. In the former case, the splitting is always real.
In the latter case, the energy splitting can be real or complex depending on the magnitudes of the matrix elements in the effective Hamiltonian. When there are more than two degenerate eigenstates, the full criterion becomes more complicated, as some eigenvalues may become complex while others stay real. For a concrete system, it should always be possible to determine the nature of the splitting through perturbation theory, using the methods described in Appendix \[ap:perturbation\]. However, we note that it is *always* the case that if all the unperturbed eigenstates have the same eigenvalue under $\eta$, their energies will stay real. One can also consider a case where there is a two-fold degeneracy, but multiple possible choices of metric operators. If there are two metric operators, $\eta_1$ and $\eta_2$, such that both eigenstates have the same eigenvalue under $\eta_1$ and different eigenvalues under $\eta_2$, the degeneracy can be protected as a consequence of the mechanism *(I)* above: $S=\eta^{-1}_1 \eta^{\phantom{-1}}_2$ is, by construction, a symmetry of $H$ and if $S \ket{\psi_1} = \ket{\psi_2}$, the eigenvalues will remain identical for $\epsilon \neq 0$. However, the pseudo-Hermiticity of $H$ with respect to $\eta_2$ can be broken without causing the eigenvalues to become complex. In this paper, we focus on the protection mechanism *(II)* for the reality of the eigenvalues, i.e., on cases where pseudo-Hermitian perturbations break all the relevant symmetries, eigenvalue degeneracies are not preserved and hence the interplay between the metric operator and the unperturbed eigenstates is important in deducing whether the energies stay real. The general procedure for utilizing this phenomenon goes as follows. First, specify a subspace of interest, whose energies are separated from the rest of the spectrum. Next, identify the unitary symmetries under which the subspace has a definite eigenvalue.
These symmetries will yield a class of non-Hermitian perturbations—namely, those that are pseudo-Hermitian with the symmetry as a metric operator—for which the degenerate eigenvalues will stay real. This notion of stability is useful in quantum systems when the subspace under consideration is well-separated from the rest of the spectrum. In the remainder of the paper we will be concerned with gapped many-body systems with several degenerate ground states and discuss under which conditions the ground-state energies can remain real, provided the perturbations do not close the gap between the ground and excited states.

Non-Hermitian Toric Codes {#ToricCode}
=========================

We begin with a study of non-Hermitian perturbations to the two-dimensional toric code [@Kitaev1997], focusing on the reality of the ground-state subspace. Non-Hermitian generalizations of the toric code [@UedaPhaseTransition; @Guo2020] or closely related models [@Guo2019] have recently been studied; these works, however, have a different focus and a systematic understanding of the stability of $\mathcal{P}\mathcal{T}$ symmetry or, more generally, of the reality of the spectrum in the ground-state subspace remains unexplored.

[Fig. \[fig:tcCovering\] about here: square lattices with spins on the edges, illustrating how vertex and plaquette operators can (or cannot) cover all sites of the lattice.]

The toric code is defined on a square lattice, with Pauli spins on every edge, see [Fig. \[fig:tcCovering\]]{}. We denote the number of sites along the $x$ and $y$ directions by $L_x$ and $L_y$ and focus on periodic boundary conditions.
The Hamiltonian is $$H^{TC} = - \alpha \sum_{c} A_c - \beta \sum_{p} B_p, \label{eq:toricCode}$$ where we have introduced the vertex operators, $A_c$, which cover the four spins adjacent to a vertex $c$, and the plaquette operators, $B_p$, which cover the four spins on a plaquette $p$, $$\begin{aligned} A_c = \prod_{i \in c} Z_i, \quad B_p = \prod_{i \in p} X_i. \end{aligned}$$ Unless stated otherwise, we will use $\alpha=\beta=1$. In accordance with quantum code terminology, we refer to $A_c$ and $B_p$ collectively as “stabilizers.” Each term in [Eq. (\[eq:toricCode\])]{} commutes with the rest of the Hamiltonian, so the ground states can be obtained by minimizing the energy of each operator independently. Any state $\ket{\psi}$ in the ground-state subspace satisfies $A_c \ket{\psi} = B_p \ket{\psi} = \ket{\psi}$. On a torus, one can define loops of $Z$ or $X$ operators that wind around either of the two cycles of the torus. These logical string operators, which cannot be deformed to the identity by applications of stabilizers, imply a fourfold degenerate ground state, with string operators acting irreducibly within that subspace.

Pseudo-Hermitian perturbations {#pseudo-hermitian-perturbations}
------------------------------

We are interested in pseudo-Hermitian perturbations to [Eq. (\[eq:toricCode\])]{} and how they affect the degenerate ground states. To this end, let us first focus on three possible choices of $\eta$, $$\eta = \prod_i X_i\,, \prod_i Y_i\,, \prod_i Z_i, \label{OneNaturalChoiceOfEta}$$ where the product involves all sites of the system, and postpone the discussion of other options to Sec. \[OtherMetricOperators\] below. One can easily check that $\comm{H^{TC}}{\eta} = 0$. In contrast with many other features of the toric code, which only depend on the topology of the manifold, the eigenvalues of the ground states under $\eta$ in [Eq. (\[OneNaturalChoiceOfEta\])]{} are highly sensitive to the system size.
On an even-by-even lattice, the entire ground-state subspace has the same eigenvalue under $\eta$. This is most easily seen from the fact that $\eta$ can be written as a product of plaquette and vertex operators, and must give eigenvalue $+1$ in the ground-state subspace as a result, see Fig. \[fig:tcCovering\]. This cannot be accomplished on any other lattice size for the following reason: a straight line drawn along the $x$ ($y$) direction going through the centers of the plaquettes will intersect $L_x$ ($L_y$) sites. If we attempt to cover the full lattice with plaquette operators, the placement of an additional operator will always change the number of covered sites on the line by an even amount. The same holds true for vertex operators and lines drawn through the vertices. Therefore, if either $L_x$ or $L_y$ is odd, the full lattice can never be assembled solely from stabilizers. The fact that $\eta$ cannot be written as a product of stabilizers is sufficient to show that not all ground states can have the same eigenvalue under $\eta$. To see this, suppose that all ground states have the same eigenvalue under $\eta$. If this holds, then we can add $\eta$ to the group of stabilizers of the toric code without modifying the GSD. If $\eta$ is independent from the rest of the stabilizers, we arrive at a contradiction, since increasing the number of independent stabilizers lowers the GSD. That all the ground states have the same sign under $\eta$ can also be seen by noting that $\eta$ commutes with all the logical string operators on an even-by-even lattice, which take the system between different ground states. On an odd-by-even or an odd-by-odd lattice, $\eta$ anti-commutes with at least one of the logical string operators, which in both cases leads to two ground states having eigenvalue $+1$ and the other two having eigenvalue $-1$.
What sort of perturbations, $\epsilon V$, can we add to our Hamiltonian for which $\eta V \eta^\dagger = V^\dagger$? Writing $V=i \mathcal{O}$, this requires $$\eta \mathcal{O} = - \mathcal{O}^\dagger \eta\,, \label{eq:ptSymmetryCondition}$$ which reduces to $\acomm{\eta}{\mathcal{O}} = 0$ for Hermitian $\mathcal{O}$. Taking $\eta$ to be the product of $Y$ operators for concreteness, this means that $\mathcal{O}$ can be a sum, $\mathcal{O} = \sum_t g_t \mathcal{O}_t$, $g_t\in \mathbbm{R}$, over terms $\mathcal{O}_t$ which are products of Pauli matrices, only constrained to contain an odd total number of $X_i$ and $Z_i$ factors. This includes a large class of perturbations such as random, planar fields, $V= i \sum_i (g_{i1} X_i + g_{i3} Z_i)$, $g_{i1},g_{i3}\in \mathbbm{R}$, and highly non-local terms, such as $i \sum_{i<j<k} g_{ijk} X_i X_j X_k$, or $i \sum_{i<j<k} g_{ijk} X_i Y_j Y_k$, $g_{ijk}\in\mathbbm{R}$. Since each term satisfies [Eq. (\[eq:ptSymmetryCondition\])]{} separately, no relation between the prefactors of the different terms is required and we can think of them as random, non-Hermitian disorder that in general breaks all symmetries of the system (other than $\mathcal{P}\mathcal{T}$). In combination with our results of Sec. \[GeneralFormOfPerturbations\], this implies that on an even-by-even lattice, the ground-state energies of the toric code remain real under the large class of pseudo-Hermitian perturbations that satisfy [Eq. (\[eq:ptSymmetryCondition\])]{} with $\eta$ given by [Eq. (\[OneNaturalChoiceOfEta\])]{}. As the eigenvalues must stay real for small perturbations, they never exhibit any square root singularities [@2012JPhA...45R4016H] and exceptional points are avoided. This is verified by exact diagonalization (ED) of the toric code spectrum in Fig. \[fig:tcED\](a,b), where it can be seen that the ground-state energies can only become complex when meeting with the excited states.
As such, the $\mathcal{P}\mathcal{T}$ symmetry of the ground-state manifold is protected by the gap to the excited states. In contrast, on a lattice that is not even-by-even, the ground-state energies generically become complex immediately upon applying the same non-Hermitian perturbations. This sensitivity of the ground state to the system size can be thought of as representative of the highly entangled nature of the toric code ground states. Even if one were to consider arbitrarily large system sizes, the toric code ground states are still able to “detect” whether the system size is even or odd. A similar interpretation of this phenomenon is that even for local perturbations, the order in perturbation theory in which the ground-state energy splitting will occur necessarily involves a non-local operator which winds around the torus and, as such, can be sensitive to (the parity of) the system size.

![Spectrum of the toric code with non-Hermitian random field perturbation, $i \epsilon \sum_i g_i X_i$, where $g_i$ was initialized randomly according to a Gaussian distribution with mean and variance $1$. In (a,b) and (c,d) we take the bare toric-code Hamiltonian (\[eq:toricCode\]) and the perturbed one, Eq. (\[eq:tcPerturbation\]) with Gaussian distributed $h_i$ (mean $0$ and standard deviation $0.4$), as starting point, respectively. In (a,c), the real part of the energy is shown with red and gray referring to real-valued ground and excited energy levels, whereas eigenvalues with a complex part (broken $\mathcal{P}\mathcal{T}$ symmetry) are indicated in blue. The corresponding imaginary parts can be found in (b,d) with red indicating the ground states, defined as those four states with the lowest $\text{Re}(E_i)$.
\[fig:tcED\]](ToricCodeSpectrum_v3.pdf){width="\linewidth"}

Starting with perturbed toric code
----------------------------------

One might wonder whether the remarkable protection of $\mathcal{P}\mathcal{T}$ symmetry and the reality of the ground-state energies is just a consequence of the highly fine-tuned and exactly solvable toric code Hamiltonian (\[eq:toricCode\]) or a more general property of the underlying topologically ordered phase. To investigate this, let us take as our base Hamiltonian the toric code with some small Hermitian perturbation, for example a field along the $Z$-direction with, in general, spatially varying amplitude, $$H_0 = H^{TC} + \sum_i h_i Z_i\,, \quad h_i \in \mathbbm{R}. \label{eq:tcPerturbation}$$ The perturbation in [Eq. (\[eq:tcPerturbation\])]{} forces us to choose $\eta = \prod_i Z_i$ in [Eq. (\[OneNaturalChoiceOfEta\])]{}, as it is the only one that commutes with the Hermitian Hamiltonian. Note that, of course, a completely random Hermitian field will break all symmetries and no $\eta$ is possible; we are, however, not interested in this case as the Hamiltonian would break $\mathcal{P}\mathcal{T}$ *explicitly* and the question of whether it is broken *spontaneously* would become ill defined. With the additional perturbation in [Eq. (\[eq:tcPerturbation\])]{}, we no longer have exactly degenerate ground states for $\epsilon=0$, but a finite energy splitting that is exponentially suppressed by the system size. The four low-energy states will still all be even under $\eta$ for an even-by-even lattice, since our perturbation respects the $\eta$ symmetry. Consequently, the ground-state energies will stay real, even when they “meet” each other at finite $\epsilon$, as long as the gap to excited states stays finite. The protection of $\mathcal{P}\mathcal{T}$ symmetry is, thus, a more general property of the underlying phase with topological order. We also demonstrate this with a concrete example in [Fig. \[fig:tcED\]]{}(c,d).
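The even-by-even protection can also be checked directly in a minimal exact-diagonalization sketch (Python/NumPy) of the toric code on a $2\times 2$ torus; the edge indexing convention, random seed, and field strength below are our own illustrative choices:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

N = 8  # 2x2 torus: 4 horizontal + 4 vertical edges

def op(pauli, sites):
    """Tensor product acting with `pauli` on the given edges, identity elsewhere."""
    return reduce(np.kron, [pauli if i in sites else I2 for i in range(N)])

h = lambda x, y: 2 * (y % 2) + (x % 2)      # horizontal edges 0..3
v = lambda x, y: 4 + 2 * (y % 2) + (x % 2)  # vertical edges 4..7

H_tc = np.zeros((2**N, 2**N), dtype=complex)
for x in range(2):
    for y in range(2):
        # vertex A_c: Z on the four edges meeting at vertex (x, y)
        H_tc -= op(Z, {h(x, y), h(x - 1, y), v(x, y), v(x, y - 1)})
        # plaquette B_p: X on the four edges bounding plaquette (x, y)
        H_tc -= op(X, {h(x, y), h(x, y + 1), v(x, y), v(x + 1, y)})

E0 = np.linalg.eigvalsh(H_tc)
assert np.allclose(E0[:4], -8.0) and E0[4] > E0[3] + 1  # fourfold GSD, gapped

# On this even-by-even lattice, eta = prod_i X_i is a product of two
# plaquette operators, so all four ground states have eta-eigenvalue +1.
eta = op(X, set(range(N)))
assert np.allclose(eta, op(X, {h(0, 0), h(0, 1), v(0, 0), v(1, 0)})
                        @ op(X, {h(1, 1), h(1, 0), v(1, 1), v(0, 1)}))

# Imaginary random Z field: pseudo-Hermitian with respect to eta.
rng = np.random.default_rng(1)
H = H_tc + 0.1 * 1j * sum(rng.normal() * op(Z, {i}) for i in range(N))
assert np.allclose(eta @ H @ eta, H.conj().T)

# The four lowest (by real part) energies stay real.
E = np.linalg.eigvals(H)
E = E[np.argsort(E.real)]
assert np.abs(E[:4].imag).max() < 1e-8
```

For larger lattices a sparse representation would be needed; the dense $256$-dimensional Hilbert space here is just enough to exhibit the fourfold degeneracy and the reality of the perturbed ground-state energies.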
Exceptional points
------------------

A surprising observation is that, while these Hermitian perturbations do not change whether the ground states become complex, they do change the nature of *how* they become complex. Non-Hermitian Hamiltonians can exhibit exceptional points [@kato1966perturbation; @2012JPhA...45R4016H]. Here, the eigenvalues coalesce, the matrix becomes defective, i.e., also the eigenvectors become degenerate, and the eigenvalues exhibit a square-root singularity in the tuning parameter, in our case $\epsilon$, in the sense that the difference of eigenvalues scales with $\sqrt{\epsilon_0-\epsilon}$. This has crucial consequences, e.g., for the Green’s function that exhibits a pole of second order in addition to the conventional first-order pole [@2012JPhA...45R4016H]. For pseudo-Hermitian or $\mathcal{P} \mathcal{T}$-symmetric Hamiltonians, exceptional points typically arise at the moment when two eigenvalues meet on the real line and become complex. If we start with an unperturbed toric code on an even-by-odd lattice and apply a non-Hermitian, pseudo-Hermitian perturbation $\epsilon V$, such as an imaginary transverse field, the degenerate ground states can immediately become complex. However, this degeneracy is not an exceptional point, since the degeneracy occurs in the Hermitian limit and must admit a complete basis of eigenvectors. In contrast, if one first applies a Hermitian perturbation, such as in [Eq. (\[eq:tcPerturbation\])]{}, and then $\epsilon V$, we have verified by ED on a $2 \times 3$ lattice that the ground states *will* form an exceptional point when they meet each other on the real line to become complex, and the corresponding eigenstates become identical. We emphasize that this is true for arbitrarily small Hermitian perturbations.
This is in stark contrast to systems with even $L_x$, $L_y$; here eigenvalues must stay real for small perturbations, they never exhibit any square root singularities, and exceptional points are avoided. To illustrate this subtle behavior of perturbed systems with odd system sizes, let us take a two-level system as an effective description of two ground states with opposite eigenvalue of $\eta$ meeting to become complex. Denoting Pauli matrices acting in this subspace by $\sigma_{x,y,z}$, we have $\eta=\sigma_z$ and the most general pseudo-Hermitian Hamiltonian has the form $$h = E_0 \mathbbm{1} + \Delta \sigma_z + i \epsilon \left( \cos \alpha \, \sigma_x + \sin \alpha \, \sigma_y \right), \label{ModelHamiltonian}$$ with the real-valued parameters $E_0$, $\Delta$, $\alpha$, and $\epsilon$; the latter parameterizes the strength of anti-Hermitian perturbations as before. Note that the model is $\mathcal{P}\mathcal{T}$ symmetric only if $2\alpha/\pi \in \mathbbm{Z}$. The right eigenvalues and eigenvectors of $h$ in [Eq. (\[ModelHamiltonian\])]{} are given by $E_{\pm} = E_0 \pm \sqrt{\Delta^2 - \epsilon^2}$ and $\psi_{\pm} \propto (\Delta \pm \sqrt{\Delta^2 - \epsilon^2}, i \epsilon \, e^{i\alpha})^T$. The eigenvalues meet when $\epsilon=\pm \Delta$ and become complex for $|\epsilon| > |\Delta|$. When $\Delta = 0$, however, this is not an exceptional point as $\psi_{\pm} \rightarrow (\pm 1,\text{sign}(\epsilon) e^{i\alpha})^T/\sqrt{2}$, forming an orthonormal basis, and $\Delta E = E_+ - E_- \rightarrow 2 |\epsilon|$ scaling linearly with $\epsilon$, for $\Delta \rightarrow 0$. For $\Delta \neq 0$, instead, we get $\psi_+ \rightarrow \psi_-$ when $\epsilon \rightarrow \pm \Delta$, showing that the matrix becomes defective, and the difference of eigenvalues scales as $\Delta E \sim 2\sqrt{2\epsilon_0}\sqrt{\epsilon_0 - \epsilon}$, for $\epsilon$ near $\epsilon_0 = \pm \Delta$. 
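The closed-form statements about the two-level model (\[ModelHamiltonian\]) are easily verified numerically; in this short sketch (Python/NumPy) the parameter values are arbitrary:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def h(E0, Delta, eps, alpha):
    """Two-level model of Eq. (ModelHamiltonian)."""
    return (E0 * id2 + Delta * sz
            + 1j * eps * (np.cos(alpha) * sx + np.sin(alpha) * sy))

E0, Delta, alpha = 0.3, 0.5, 0.7

# |eps| < |Delta|: real eigenvalues E0 +- sqrt(Delta^2 - eps^2).
E = np.sort(np.linalg.eigvals(h(E0, Delta, 0.3, alpha)).real)
assert np.allclose(E, [E0 - 0.4, E0 + 0.4])  # sqrt(0.25 - 0.09) = 0.4

# |eps| > |Delta|: complex-conjugate pair E0 +- i sqrt(eps^2 - Delta^2).
E = np.linalg.eigvals(h(E0, Delta, 0.8, alpha))
assert np.allclose(np.sort(E.imag), [-np.sqrt(0.39), np.sqrt(0.39)])
assert np.allclose(E.real, E0)

# eps = +-Delta (with Delta != 0): exceptional point. The traceless part
# is nonzero but nilpotent, i.e., the matrix is defective.
Npart = h(E0, Delta, Delta, alpha) - E0 * id2
assert not np.allclose(Npart, 0)
assert np.allclose(Npart @ Npart, 0)
```

The nilpotency check is a convenient stand-in for eigenvector coalescence: a nonzero nilpotent $2\times 2$ matrix has a single Jordan block and hence only one eigenvector.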
It is also readily verified that the overlap, $\braket{\phi_{\pm}}{\psi_{\pm}}$, with the corresponding left eigenvector is non-zero except for the exceptional points $\epsilon = \pm \Delta \neq 0$, where it vanishes; this “self-orthogonality” rules out the construction of a bi-orthogonal basis as in [Eq. (\[BiorthogonalBasis\])]{}. In summary, we should think of the special case of vanishing splitting, $\Delta=0$ or of the unperturbed toric code, as a fine-tuned limit where two lines of exceptional points, $\epsilon = \pm \Delta$, meet and give rise to a non-defective Hamiltonian, as required by Hermiticity. We finally point out that this behavior is also visible on an even-by-even lattice when taking into account the excited states: as can be seen in [Fig. \[fig:tcED\]]{}(b,d), the imaginary part of the excited states that become complex at infinitesimal $\epsilon$ scales linearly in $\epsilon$, whereas the $\mathcal{P}\mathcal{T}$ symmetry breaking at finite $\epsilon$ exhibits the aforementioned square-root singularity.

Other metric operators {#OtherMetricOperators}
----------------------

So far, we have focused on the three different choices of $\eta$ in [Eq. (\[OneNaturalChoiceOfEta\])]{}, but there are in principle many more possibilities for the bare toric code model (\[eq:toricCode\]), as it possesses many other symmetries. Here, we argue that our choices of $\eta$ are unique provided we assume our anti-Hermitian perturbations can be disordered and are not required to have a specific spatial structure. As a starting point, one might use spatial symmetries—lattice translations $T_{x,y}$, four-fold rotation $C_4$, and inversion $I$ and combinations thereof.
For instance, $\eta = I$ with $$I \mathcal{O}_i I^{-1} = \mathcal{O}_{-i}, \quad \mathcal{O}_i = X_i,\,Y_i,\,Z_i, \label{SpatialInversion}$$ is clearly a symmetry, $[H^{TC},I]=0$, and it is easy to see that all ground states have the same eigenvalue under it for any system size (the same holds for $T_{x,y}$ but not for $C_4$). However, it is not a natural choice for a generic system with spatially varying Hermitian or non-Hermitian perturbations, such as those discussed above. For example, for an imaginary field, $V=i \sum_i \sum_{\mu=1}^3 g_{i\mu} (X_i,Y_i,Z_i)_\mu$, it would require $g_{i\mu} = - g_{-i\mu}$ and, hence, fine-tuning between spatially distant sites. Not even a site-independent complex field is possible. Having established that choosing an $\eta$ which relates spatially distant sites requires fine tuning, we focus on $\eta$ that commute with all stabilizers separately. This requirement can alternatively be thought of as a restriction to symmetries that are preserved in the presence of spatial disorder in the couplings of the bare toric code, i.e., $\alpha \rightarrow \alpha_c>0$, $\beta\rightarrow \beta_p>0$ in [Eq. (\[eq:toricCode\])]{}. This leads to two distinct classes of possible $\eta$, schematically given by $$\eta =\prod (\text{stabilizers}) \label{FirstClassOfEta}$$ or $$\eta =\prod (\text{stabilizers})(\text{logical strings}) , \label{SecondClassOfEta}$$\[ClassesOfEtas\] where “logical strings” stands for strings of $X_i$ or $Z_i$ operators along a non-contractible loop of the torus connecting the different ground states [^1]. Clearly, the ground states will have the same eigenvalues under $\eta$ in [Eq. (\[FirstClassOfEta\])]{} and, thus, stay real. For even system sizes, $\eta$ in [Eq. (\[OneNaturalChoiceOfEta\])]{} are of this form and, as we have seen above, indeed admit a large class of non-Hermitian perturbations. This is different for $\eta$ of the form of [Eq. 
(\[SecondClassOfEta\])]{}: the ground states will have different eigenvalues under $\eta$ and $\mathcal{P}\mathcal{T}$ symmetry is in general fragile. However, since $\eta$ in [Eq. (\[OneNaturalChoiceOfEta\])]{} can be written in the form (\[FirstClassOfEta\]), it is clear that [Eq. (\[SecondClassOfEta\])]{} cannot be spatially homogeneous on an even-by-even lattice, but must be distinct on a non-contractible loop around the torus; the same must hold for the associated non-Hermitian perturbation, which requires, again, significant spatial fine-tuning. Let us illustrate this latter point using the concrete example of $\eta=\prod_i X_i \prod_{j\in P} Z_j$, where $P$ is a non-contractible closed path through the centers of the plaquettes. In that case, an imaginary field, $V=i \sum_i \sum_{\mu=1}^3 g_{i\mu} (X_i,Y_i,Z_i)_\mu$, must satisfy $g_{i1}=0$ for $i\notin P$ and $g_{i2}=0$, $g_{i1}\neq 0$ for $i\in P$ (note that $g_{i1}\neq 0$ on $P$ is required, as we otherwise can simply choose $\eta=\prod_i X_i $, which is of the form of [Eq. (\[FirstClassOfEta\])]{}, and all eigenvalues stay real). In other words, the perturbation must have vanishing $X$ components on all sites except for a non-contractible loop with non-zero $X$ components; again, not even a spatially homogeneous perturbation is possible. We conclude that, setting aside fine-tuned non-Hermitian perturbations with special spatial structure along non-contractible loops, suitable metric operators are of the form of [Eq. (\[FirstClassOfEta\])]{} for even-by-even lattices. As the ground states will always have eigenvalue $+1$ under any such $\eta$, the reality of their eigenvalues and, thus, $\mathcal{P}\mathcal{T}$ symmetry are protected. Arbitrary system sizes ---------------------- So far, we have focused our attention on even-by-even lattices since the homogeneous metric operators in [Eq. 
(\[OneNaturalChoiceOfEta\])]{} can be written as a product of stabilizers, while this is not possible on even-by-odd or odd-by-odd lattices; nevertheless, if one naively applies the covering shown in Fig. \[fig:tcCovering\] on these lattices, one can obtain a modified metric operator $\tilde{\eta}$, defined as the product of Pauli operators, $X_i$, $Y_i$, or $Z_i$, on all sites except for a single line (in the even-by-odd case) or two lines (in the odd-by-odd case) that wind around the odd lengths of the torus. In other words, $\eta$ in [Eq. (\[OneNaturalChoiceOfEta\])]{} is necessarily of the form of [Eq. (\[SecondClassOfEta\])]{} on a lattice with at least one of $L_{x}$, $L_{y}$ odd. Based on our previous discussion, this implies that the reality of the ground-state eigenvalues and $\mathcal{P}\mathcal{T}$ symmetry are generically fragile on even-by-odd and odd-by-odd lattices. We finally mention, for completeness, one less general but potentially useful immediate consequence. As follows from using $\tilde{\eta}$ as metric operator, any pseudo-Hermitian perturbation $\epsilon V$ with anti-Hermitian part, $\epsilon (V-V^\dagger)/2$, that has support only in a subregion of the system that is contractible around the odd lengths of the torus, will leave the ground-state eigenvalues real. Non-Hermitian Fracton Models {#FractonModels} ============================ Our analysis of the toric code carries over to many well-known fracton models in three dimensions. Fracton models [@Chamon2005; @Bravyi2011; @Haah2011; @VijayHaahFu1; @Vijay2016; @Slagle2017; @Shirley2017; @Ma2017; @NandkishoreHermele; @PretkoReview; @Yoshida2013] constitute a unique phase of matter, characterized by excitations with restricted mobility, either by being immobile or only mobile in certain directions. These systems are typically gapped and have GSDs exponential in linear system size. 
In this section, we analyze various models with fracton order—namely, the X-cube model, checkerboard model, and Haah’s codes—and show that, like the toric code, the full ground-state subspaces are stable against a large class of non-Hermitian perturbations provided the linear system sizes along all directions are even. Unless stated otherwise, we take $\eta$ to be defined in the same way as in [Eq. (\[OneNaturalChoiceOfEta\])]{}, i.e., as a product of $X$, $Y$, or $Z$ operators over all qubits in the system; as motivated in Sec. \[ToricCode\] above in the context of the toric code, these $\eta$ provide the largest class of allowed non-Hermitian perturbations by virtue of being spatially homogeneous. X-cube model ------------ The X-cube model [@Vijay2016] is defined on a cubic lattice, with qubits living on the edges of the lattice. It has a Hamiltonian composed of mutually commuting terms $$H^X = -\sum_c A_c \,\,- \hspace{-0.3em} \sum_{i = x,y,z} \sum_{v} B^i_v \label{eq:xcube}$$ where $A_c = \prod_{j \in \partial c} X_j$ is the product of $X$ operators on the 12 edges of the cube labelled by $c$, and $B^i_v$ is a vertex operator, composed of four $Z$ operators at vertex $v$ in the plane perpendicular to the $i$’th direction. On an even-by-even-by-even lattice, our $\eta$ operators in [Eq. (\[OneNaturalChoiceOfEta\])]{} can be assembled from these terms, thereby showing that the entire ground-state subspace has eigenvalue $+1$ under $\eta$, see Fig. \[fig:xcubeCovering\]. An identical argument as in the toric code case implies that $\eta$ cannot be assembled from stabilizers on a lattice with any odd length. In combination with the fact that it commutes with all stabilizers $A_c$, $B_v^i$ separately, it must be of the form of [Eq. (\[SecondClassOfEta\])]{} for odd system lengths, with “logical strings” here referring to the logical string-like operators of the X-cube model. 
By our analysis of the toric code, this immediately implies that the X-cube ground states on an even-by-even-by-even lattice stay real under the non-Hermitian perturbations permitted by $\eta$, which includes the application of imaginary transverse fields, non-local terms like the ones considered for the toric code, and many others. One can check that all other features of non-Hermitian toric code perturbations, such as their additional stability against real perturbations and the ability to add contractible perturbations on lattices with odd system sizes, also hold. However, these features are more striking for fracton models: instead of a four-dimensional code subspace being protected against these perturbations, fracton models have a GSD that grows exponentially with system size; for the X-cube model on a three-dimensional torus, the GSD obeys $$\begin{aligned} \log_2 \text{GSD} = 2 L_x + 2 L_y + 2 L_z - 3. \end{aligned}$$ The reality of the code subspace in the presence of pseudo-Hermitian perturbations holds for the X-cube model defined on general three-dimensional manifolds [@Shirley2017], provided the full space can be covered by plaquette or star operators. This sensitivity to system size may be surprising, since the X-cube model exhibits *foliated fracton order* [@Shirley2017]. This means that the length of any of the sides of the X-cube model can always be extended by attaching layers of toric code and applying a series of local unitary transformations. In Appendix \[ap:foliation\], we present a detailed study of how the metric operators $\eta$ behave under foliations. The end result is that, while the ground states can be extended by this foliation procedure, the foliation acts non-trivially on $\eta$, meaning that the interplay between $\eta$ and the X-cube ground states can change depending on the system size. 
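The quoted GSD formula lends itself to a direct numerical check. The sketch below is our own illustration (not part of the original analysis); the edge labeling and stabilizer conventions are one concrete choice consistent with Eq. (\[eq:xcube\]). For a CSS code, $\log_2 \text{GSD} = n_{\text{qubits}} - \mathrm{rank}_{\mathbb{F}_2}(S_X) - \mathrm{rank}_{\mathbb{F}_2}(S_Z)$, so the count reduces to two rank computations over $\mathbb{F}_2$.

```python
# Sketch: check log2 GSD = 2Lx + 2Ly + 2Lz - 3 for the X-cube model by
# computing F2 ranks of the stabilizer generators (CSS code counting).
# Qubits live on edges (vertex, direction); conventions are our own choice.
import itertools

def f2_rank(rows):
    """Rank over F2 of a list of 0/1 rows, by Gaussian elimination."""
    rows = [r[:] for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def xcube_log2_gsd(Lx, Ly, Lz):
    index = {}
    for v in itertools.product(range(Lx), range(Ly), range(Lz)):
        for d in range(3):
            index[(v, d)] = len(index)
    n = len(index)
    def edge(i, j, k, d):
        return index[((i % Lx, j % Ly, k % Lz), d)]
    SX, SZ = [], []
    for i, j, k in itertools.product(range(Lx), range(Ly), range(Lz)):
        # cube operator A_c: X on the 12 edges of the cube at (i, j, k)
        row = [0] * n
        for b, c in itertools.product(range(2), repeat=2):
            row[edge(i, j + b, k + c, 0)] ^= 1
            row[edge(i + b, j, k + c, 1)] ^= 1
            row[edge(i + b, j + c, k, 2)] ^= 1
        SX.append(row)
        # vertex operators B^d_v: Z on the 4 edges at v in the plane
        # perpendicular to direction d
        for d in range(3):
            row = [0] * n
            for dd in (x for x in range(3) if x != d):
                step = [0, 0, 0]
                step[dd] = 1
                row[edge(i, j, k, dd)] ^= 1
                row[edge(i - step[0], j - step[1], k - step[2], dd)] ^= 1
            SZ.append(row)
    return n - f2_rank(SX) - f2_rank(SZ)

for Lx, Ly, Lz in [(2, 2, 2), (2, 3, 2)]:
    assert xcube_log2_gsd(Lx, Ly, Lz) == 2 * Lx + 2 * Ly + 2 * Lz - 3
```

The rank computation automatically accounts for the constraints among the cube and vertex operators (e.g., products of cube operators over a slab of the torus give the identity), which is what produces the $-3$ in the formula.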
[Fig. \[fig:xcubeCovering\]: covering of the cubic lattice by X-cube stabilizer terms used to assemble $\eta$; the inline drawing code could not be recovered.] Checkerboard model ------------------ The checkerboard model [@Vijay2016] is another example of a system with fracton excitations. This model has spins defined on the vertices of a three-dimensional cubic lattice, as opposed to the edges. By separating the cubes of the lattice with alternating labels $A$ and $B$, each forming a three-dimensional checkerboard lattice, and denoting the cubic operators $\prod_{i \in \partial c} Z_i$ and $\prod_{i \in \partial c} X_i$ as $Z_c$ and $X_c$ respectively, the checkerboard model is given by the Hamiltonian $$H^{C} = -\sum_{c \in A} Z_c - \sum_{c \in A} X_c.
\label{eq:checkerboard}$$ The geometry of the checkerboard model requires it to be defined on an even-by-even-by-even lattice if periodic boundary conditions are imposed, since otherwise one cannot uniformly partition the cubes into $A$ and $B$ labels. On even-by-even-by-even lattices, the entire lattice can be covered by non-overlapping stabilizers, and therefore the ground-state subspace is even under any $\eta$ in [Eq. (\[OneNaturalChoiceOfEta\])]{}. Again, this implies that the ground-state energies of the checkerboard model always remain real under non-Hermitian perturbations that are pseudo-Hermitian under $\eta$. Although only small system sizes are accessible via ED, we have checked these predictions numerically for the checkerboard model on a $2 \cross 2\cross 2$ lattice. A Majorana version of the checkerboard model has also been studied [@VijayHaahFu1], which simply replaces the Pauli spins with Majorana fermions $\gamma_i$, i.e., the model has one Majorana fermion per site $i$ of the cubic lattice. By defining $\prod_{i \in c} \gamma_i = \gamma_c$, the Hamiltonian of the Majorana checkerboard model is $$\begin{aligned} H = -\sum_{c \in A} \gamma_c. \label{MajoranaCheckerboard} \end{aligned}$$ Because the entire lattice can be covered with $\gamma_{c \in A}$, all ground states of the system are even under the operator $$\begin{aligned} \eta = \prod_i \gamma_i\,, \end{aligned}$$ which can be interpreted as the total fermion parity, $\eta \propto \prod_{\alpha} (c^\dagger_\alpha c^{\phantom{\dagger}}_\alpha - 1/2)$, when combining pairs of Majorana fermions into auxiliary complex fermions $c_\alpha$. Therefore, the ground states remain real under perturbations of the form $i\epsilon\mathcal{O}$, where each term in $\mathcal{O}$ contains an odd number of Majorana operators, i.e., changes the total occupation of auxiliary complex fermions by an odd amount. 
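The parity mechanism behind this last statement can be made concrete with a minimal two-Majorana sketch (our own illustration, using the representation $\gamma_1 = X$, $\gamma_2 = Y$ for a single complex fermion): the product of the Majoranas is the fermion parity up to a phase, and $i\gamma_1$, which changes the occupation by an odd amount, is pseudo-Hermitian with respect to it.

```python
# Minimal sketch (single complex fermion built from two Majoranas,
# gamma1 = X, gamma2 = Y): the product of all Majoranas is the fermion
# parity up to a phase, and i * (odd number of Majoranas) is
# pseudo-Hermitian with respect to it without being Hermitian.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

eta = X @ Y / 1j                 # gamma1 gamma2 / i = Z = (-1)^n
assert np.allclose(eta, Z)

V = 1j * X                       # i * gamma1: parity-odd perturbation
assert np.allclose(eta @ V @ np.linalg.inv(eta), V.conj().T)  # pseudo-Hermitian
assert not np.allclose(V, V.conj().T)                         # not Hermitian
```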
Haah’s codes ------------ Finally, we consider Haah’s 17 CSS cubic codes [@Haah2011], all of which are defined on a cubic lattice with two qubits per site $i$. Each cube has two stabilizers: one is built up of tensor products of $Z$ and $\mathbbm{1}$ operators on each site $i$, such as $Z_i\otimes \mathbbm{1}_i$ or $Z_i\otimes Z_i$; the other one involves tensor products of $X$ and identity operators, e.g., $X_i\otimes \mathbbm{1}_i$. The exact form of the stabilizers differs from code to code, but all have a sub-extensive GSD. We defer a more detailed discussion of these codes to Appendix \[ap:haah\]—our conclusion is that, with the choice of $\eta$ analogous to [Eq. (\[OneNaturalChoiceOfEta\])]{}, $$\eta = \prod_i X_i \otimes X_i, \, \prod_i Y_i \otimes Y_i,\, \prod_i Z_i \otimes Z_i \,, \label{EtasForTheHaahCodes}$$ the behavior of the code subspace under pseudo-Hermitian perturbations is sensitive not only to whether the system lengths are even or odd, but also to whether they are divisible by $4$. Moreover, since not all cubic codes are symmetric under rotations, this behavior depends on which directions are even or odd, and which are divisible by $4$. This admits eight different classes of codes, based on the relation between their code-subspace stability under pseudo-Hermitian perturbations and their system sizes. These classes range from cubic code $7$, whose code subspace stays real on all system sizes other than odd-by-odd-by-odd, to cubic code $17$, where the code subspace only stays real if $L_x$ and $L_y$ are divisible by $4$. We refer to Appendix \[ap:haah\] for a complete characterization of this behavior.
Non-Hermitian Quantum Fractal Liquids {#FractalLiquids} ===================================== In this section, we will generalize the previous analysis to also include another class of fracton models dubbed “quantum fractal liquids” [@Yoshida2013] and reformulate the criterion of stability against non-Hermitian perturbations using a polynomial representation of Pauli operators. In this way, we will recover the stability criterion of the toric code algebraically and show that the reality of eigenvalues of the exponentially large number of ground states of quantum fractal liquids is protected against a wide range of non-Hermitian terms in the Hamiltonian. Polynomial representation of operators {#PolynomialRepresentation} -------------------------------------- To set up the notation, we will briefly introduce the polynomial representation of operators, a commonly used technique [@macwilliams1977theory]. To this end, consider a polynomial of three variables, $x$, $y$, $z$, $$\begin{aligned} f = \sum_{j, k, \ell \in \mathbbm{Z}} c_{jk\ell}\, x^j y^k z^\ell,\quad c_{jk\ell} = 0, 1, \end{aligned}$$ over $\mathbb{F}_2$, meaning that all coefficients are to be understood modulo $2$. This allows us to define a corresponding Pauli operator whose components lie on the vertices of a cubic lattice in three dimensions in the following way $$\begin{aligned} Z(f) := \prod_{jk\ell} Z_{jk\ell}^{c_{jk\ell}}, \quad X(f) := \prod_{jk\ell} X_{jk\ell}^{c_{jk\ell}}\,. \end{aligned}$$ Here $Z_{jk\ell}$ ($X_{jk\ell}$) is the $Z$ ($X$) operator acting at vertex $(j,k,\ell)$. For example, a stabilizer of the checkerboard model, given by the product of Pauli matrices on the eight vertices of a cube, corresponds to the polynomial $f = 1 + x + y + z + xy + yz + xz + xyz$. On a finite lattice, periodic boundary conditions are specified by imposing $x^{L_x} = y^{L_y} = z^{L_z} = 1$. We denote the dual of $f$, obtained by taking $x \rightarrow x^{-1}$, and likewise for $y$ and $z$, by $\bar{f}$.
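This algebra can be sketched in a few lines of Python (a minimal implementation of our own, not from the paper): a polynomial is stored as a set of exponent tuples, addition over $\mathbb{F}_2$ is the symmetric difference, and multiplication distributes monomial-wise with the periodic reduction $x^{L_x} = y^{L_y} = z^{L_z} = 1$. The lattice dimensions below are an assumed example.

```python
# Minimal sketch of the polynomial representation over F2.  A polynomial
# is a frozenset of exponent tuples (j, k, l); XOR implements mod-2
# addition of coefficients.  L is an assumed example lattice size.
L = (4, 4, 4)  # (Lx, Ly, Lz)

def reduce_term(t):
    return tuple(e % l for e, l in zip(t, L))

def add(f, g):           # f + g over F2
    return f ^ g

def mul(f, g):           # f * g over F2 with periodic reduction
    out = set()
    for a in f:
        for b in g:
            out ^= {reduce_term(tuple(x + y for x, y in zip(a, b)))}
    return frozenset(out)

def dual(f):             # f-bar: x -> x^{-1}, and likewise for y, z
    return frozenset(reduce_term(tuple(-e for e in t)) for t in f)

def poly(*terms):        # build a polynomial from exponent tuples
    return frozenset(terms)

one, x, y, z = poly((0, 0, 0)), poly((1, 0, 0)), poly((0, 1, 0)), poly((0, 0, 1))

# checkerboard cube stabilizer: f = (1+x)(1+y)(1+z) = 1+x+y+z+xy+yz+xz+xyz
cube = mul(add(one, x), mul(add(one, y), add(one, z)))
assert len(cube) == 8

# translating Z(f) by one site along x is multiplication by x
assert mul(x, cube) == frozenset(reduce_term((j + 1, k, l)) for (j, k, l) in cube)

# Z(f) and X(g) anticommute iff f * g-bar contains the constant monomial:
assert (0, 0, 0) in mul(one, dual(one))          # same-site Z and X
assert (0, 0, 0) not in mul(cube, dual(cube))    # cube Z_c, X_c overlap on 8 sites
```

The last two checks use the commutation rule encoded by the product $f\bar{g}$: a single-site $Z$ and $X$ anticommute, while the $Z$- and $X$-type cube stabilizers of the checkerboard model commute because they overlap on an even number of sites.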
Certain relations can be expressed more concisely with this polynomial representation. Translating an operator $Z(f)$ one lattice site along the $x$-direction is simply given by $Z(xf)$, and likewise for translations in the $y$ and $z$-direction. Additionally, the polynomials defined over $\mathbb{F}_2$ naturally encode the commutation relations of the Pauli operators. To see this, consider the *commutation polynomial*, defined as $f\bar{g}$ for two polynomials $f$ and $g$. Writing $f \bar{g}$ as $$\begin{aligned} f \bar{g} = \sum_{ijk} d_{ijk}\, x^i y^j z^k\,, \end{aligned}$$ $d_{ijk} = 1$ ($0$) implies that $Z(f)$ and $X(x^i y^j z^k g)$ anti-commute (commute). Quantum fractal liquids are defined on a cubic lattice with two spins on every vertex. The form of their stabilizers is given by [@Yoshida2013] $$\begin{aligned} \begin{split} Z(\alpha, \beta),& \quad X(\bar{\beta},\bar{\alpha}), \\ \alpha = 1-f(x) y,& \quad \beta =1-g(x) z, \label{eq:fractalStabilizers} \end{split}\end{aligned}$$ and translations thereof, where the two arguments of $Z$ and $X$ denote operators on the two distinct spins per site. Different choices of polynomials $f$ and $g$ define different models. Clearly, all stabilizers commute, as follows from the associated commutation polynomial, $\alpha\bar{\bar{\beta}} + \beta \bar{\bar{\alpha}} = 2\alpha\beta =0$. For codes defined by stabilizers of this type, the logical operators take the form $$\begin{split} \begin{aligned} &\ell_i^{(Z)} = Z(0, x^i \bm{f}(x,y))\,,\quad r_i^{(Z)} = Z(x^i \bm{g}(x, z), 0) \,, \\ &\ell_i^{(X)} = X(x^i \bar{\bm{f}}(x,y), 0) \,,\quad r_i^{(X)} = X(0, x^i \bar{\bm{g}}(x,z)), \end{aligned}\label{StringOperators}\end{split}$$ for integer $i=0,1,\dots, L_x-1$, where we define $$\bm{f} = \sum_{k=1}^{L_y} (fy)^{k-1}, \quad \bm{g} = \sum_{\ell=1}^{L_z} (gz)^{\ell-1}.$$ It is straightforward to verify that the operators in [Eq. 
(\[StringOperators\])]{} commute with the stabilizers and constitute logical operators if $$f^{L_y} = 1, \quad g^{L_z} = 1. \label{ConditionForStabilizers}$$ There are various ways to satisfy [Eq. (\[ConditionForStabilizers\])]{}: the “trivial” solution, that works for any set, $L_x$, $L_y$, $L_z$, of system sizes, is $f=g=1$. This corresponds to layers of toric code in the ($\hat{y}$,$\hat{z}$) plane, upon noting that the bond variables of the toric code, see [Fig. \[fig:tcCovering\]]{}, can be seen as two qubits per vertex. In this case, $\ell_i^{(Z,X)}$ and $r_i^{(Z,X)}$ in [Eq. (\[StringOperators\])]{} become Z-, X-type string operators in the $i$th layer along the $\hat{y}$ and $\hat{z}$ direction, respectively. Another way of satisfying [Eq. (\[ConditionForStabilizers\])]{} that works for arbitrary isotropic system sizes, $L_x=L_y=L_z=L$, is $f=x^{n_f}$, $g=x^{n_g}$. However, the largest class of possible polynomials $f$, $g$ and, thus, possible models is allowed in the isotropic case with $L=2^{n_L}$, since [Eq. (\[ConditionForStabilizers\])]{} will hold as long as $f(1)=g(1)=1$ [@Yoshida2013]. Here, we refer to the latter set of models as “quantum fractal liquids,” which have been shown to exhibit exponential scaling of the GSD, obeying $ \log_2 \text{GSD}(2L) = 2 \log_2 \text{GSD}(L)$ [@Yoshida2013]. Note, however, that the absence of string-like logical operators and mobile quasiparticles further requires that $f$ and $g$ are not algebraically related, i.e., that there are no integers $n_1$ and $n_2$ such that $f^{n_1} = g^{n_2}$ (neglecting periodic boundary conditions). An example of a model free of string-like logical operators is provided by $f=1+x+x^2$ and $g=1+x+x^3$. Pseudo-Hermitian perturbations {#pseudo-hermitian-perturbations-1} ------------------------------ As before, we are interested in adding pseudo-Hermitian perturbations to this class of models that will leave the ground-state subspace real. We take $\eta$ to be defined analogous to [Eq. 
(\[EtasForTheHaahCodes\])]{} or, in polynomial representation, $$\begin{aligned} \begin{split} \eta &= Z(h,h), \, X(h,h), \, iX(h,h)Z(h,h), \\ h & = \sum_{j=1}^{L_x}\sum_{k=1}^{L_y}\sum_{\ell=1}^{L_z} x^{j-1}y^{k-1}z^{\ell-1}. \label{EtaExpressedByPolynomials}\end{split}\end{aligned}$$ Any $\eta$ in [Eq. (\[EtaExpressedByPolynomials\])]{} will commute with all stabilizers (\[eq:fractalStabilizers\]). This readily follows from the associated commutation polynomial upon noting that $h=\bar{h}$ is invariant under multiplication by any monomial, physically related to the translation invariance of $\eta$, and that the number of monomials in both $f$ and $g$ must be odd. The latter is a consequence of [Eq. (\[ConditionForStabilizers\])]{} and of the observation that the parity of the number of terms of a polynomial $f$ over $\mathbb{F}_2$ is the same as that of any of its powers, $f^n$ with $n>0$. Based on our discussion of Sec. \[GeneralFormOfPerturbations\], we want to analyze under which conditions the ground-state subspace is even under these operators to guarantee that their eigenvalues stay real. Previously, we had verified this by attempting to assemble $\eta$ via the stabilizers of the model. In the set of models introduced above, the polynomial representation makes it easier to instead verify whether $\eta$ commutes with all the logical operators (\[StringOperators\]), which in turn implies that all ground states have the same eigenvalue of $\eta$ \[and that $\eta$ is of the form of [Eq. (\[FirstClassOfEta\])]{} rather than [Eq. (\[SecondClassOfEta\])]{}\]. The condition for $\eta$ to commute with all the logical string operators is given by $$\begin{aligned} & h \bm{g} = h \bm{f} = 0, \label{eq:fractalCommutation} \end{aligned}$$ for any of the three possible choices in [Eq. (\[EtaExpressedByPolynomials\])]{}.
This simple expression arises from the fact that $\bar{h}=h$ and that the logical operators come in exactly the form of operators relevant to the commutation polynomial, so one can verify that $\eta$ commutes with all the string operators with one equation. It would technically suffice for $h \bm{g}$ and $h \bm{f}$ to be only a function of $y$ and $z$, since one is only concerned with the commutations of operators like $Z(h)$ and $X(x^i \bm{f})$, but not those with relative shift along the $\hat{y}$ or $\hat{z}$ directions. However, recalling that $h$ is invariant under multiplication by any monomial, there is no way for $\bm{g}$ or $\bm{f}$ to conspire to cancel out only the terms independent of $y$ and $z$ in $h \bm{g}$ and $h \bm{f}$ without simply giving $0$. Another important consequence of $h$ being invariant under multiplication by any monomial is that [Eq. (\[eq:fractalCommutation\])]{} is satisfied if and only if $\bm{f}$ and $\bm{g}$ contain an even number of monomials. As argued above, [Eq. (\[ConditionForStabilizers\])]{} implies that $f$, $g$ and, therefore, also $f^n$, $g^n$ must contain an odd number of terms. Taken together, [Eq. (\[eq:fractalCommutation\])]{} is obeyed and, thus, *the reality of the eigenvalues of the ground states is protected against pseudo-Hermitian perturbations with $\eta$ given in [Eq. (\[EtaExpressedByPolynomials\])]{} if $L_y$ and $L_z$ are even.* Note that the $x$-direction is distinguished from the other two directions in this criterion, a reflection of the fact that the stabilizers given by [Eq. (\[eq:fractalStabilizers\])]{} also distinguish the $x$-direction. Let us illustrate this for the different special cases of $f$ and $g$ noted above. Taking $f=g=1$ corresponds to $L_x$ uncoupled layers of toric code and the above statement implies that the toric code is protected if and only if the number of sites in each in-plane direction is even, reproducing the result of Sec. \[ToricCode\].
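The parity argument above can be checked directly with a short script (our own sketch; $f = 1+x+x^2$ is taken from the text, the lattice sizes are assumed examples). Since $h$ is the all-ones polynomial, $h\,p = (\#\text{monomials of } p \bmod 2)\, h$, so $h\bm{f} = 0$ exactly when $\bm{f} = \sum_{k=1}^{L_y} (fy)^{k-1}$ has an even number of monomials, which happens iff $L_y$ is even.

```python
# Sketch: verify that h * bold-f = 0 iff Ly is even, for f = 1 + x + x^2.
# Polynomials in x, y over F2 are sets of exponent pairs (j, k), with
# periodic reduction x^Lx = y^Ly = 1; XOR implements mod-2 coefficients.
def mul(f, g, Lx, Ly):
    out = set()
    for (a, b) in f:
        for (c, d) in g:
            out ^= {((a + c) % Lx, (b + d) % Ly)}
    return out

def bold_f(f, Lx, Ly):
    """bold-f = sum_{k=1}^{Ly} (f y)^{k-1}."""
    out, power = set(), {(0, 0)}                 # (f y)^0 = 1
    for _ in range(Ly):
        out ^= frozenset(power)
        power = mul(power, mul(f, {(0, 1)}, Lx, Ly), Lx, Ly)
    return out

f = {(0, 0), (1, 0), (2, 0)}                     # f = 1 + x + x^2

for Lx, Ly in [(4, 4), (4, 3), (8, 8), (8, 5)]:
    bf = bold_f(f, Lx, Ly)
    h = {(j, k) for j in range(Lx) for k in range(Ly)}
    # h * p vanishes iff p has an even number of monomials ...
    assert (len(mul(h, bf, Lx, Ly)) == 0) == (len(bf) % 2 == 0)
    # ... and bold-f has an even number of monomials iff Ly is even
    assert (len(bf) % 2 == 0) == (Ly % 2 == 0)
```

Each power $(fy)^{k-1}$ carries a distinct power of $y$, so no cross-cancellation occurs between the $L_y$ summands, and each summand inherits the odd term count of $f$; the total parity is therefore $L_y \bmod 2$, as the assertions confirm.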
Our current formalism, however, captures many more cases. For instance, we immediately conclude that any model with $f=x^{n_f}$, $g=x^{n_g}$ and $L_x=L_y=L_z=L$ is protected only for even $L$. As the two polynomials are algebraically related, this two-parameter family of models is characterized by string-like logical operators and has excitations mobile along the direction $n_g \hat{y} - n_f \hat{z}$ [@Yoshida2013]. Finally, as already noted above, quantum fractal liquids with arbitrary $f$ and $g$, only constrained by $f(1)=g(1)=1$, are in general defined on lattices with an even number of sites and, as such, are always protected against pseudo-Hermitian perturbations with metric operator in [Eq. (\[EtaExpressedByPolynomials\])]{}. Summary and Conclusions {#Conclusion} ======================= In this work, we studied the behavior of the eigenvalues of quantum many-body Hamiltonians of the form of [Eq. (\[CompleteDesciptionOfHam\])]{}, i.e., starting from a Hermitian system, $H_0$, we turn on a non-Hermitian perturbation, $\epsilon V$, and demand that the entire Hamiltonian be pseudo-Hermitian. Using pseudo-Hermiticity rather than $\mathcal{P}\mathcal{T}$ symmetry is related to the fact that the former is more general than the latter [@2018arXiv180101676Z]; we note, however, that all of the explicit examples considered here are both $\mathcal{P}\mathcal{T}$ symmetric and pseudo-Hermitian. We analyzed whether the energies, $E_i$, of a given subspace of interest of $H_0$ will remain real as long as the gap to other states of the system is finite ($\mathcal{P}\mathcal{T}$ symmetry protected) or whether they can move into the complex plane without closing the gap ($\mathcal{P}\mathcal{T}$ symmetry fragile). 
While symmetries can enforce degeneracies ($E_i = E_{i'}$) and protect eigenvalues from becoming complex in conjunction with pseudo-Hermiticity ($E_i=E_{i'}^*$), we discussed that this is also possible in the absence of symmetries: if the eigenvalues of the metric operator $\eta$ are the same for all states in the subspace of interest, $E_i$ are guaranteed to stay real and $\mathcal{P}\mathcal{T}$ symmetry is protected. We demonstrated that this criterion can be readily applied to various paradigmatic many-body models with crucial implications. As a first example, we took the toric code model (\[eq:toricCode\]) as unperturbed Hamiltonian, $H_0$. On a torus, it exhibits four degenerate ground states and one would generically expect them to become complex when turning on $\epsilon V$. However, we have shown that $\eta$ of the form given in [Eq. (\[OneNaturalChoiceOfEta\])]{} allows for a large class of non-Hermitian perturbations; these are shown to leave the ground-state energies real on an even-by-even lattice, even if all symmetries are broken. They can only become complex and $\mathcal{P}\mathcal{T}$ can only be broken in the ground-state subspace, when the gap to the excited states closes. In fact, we have argued that any sufficiently generic non-Hermitian perturbation (see Sec. \[OtherMetricOperators\]) in a system with both linear system sizes even (at least one of them odd) will only allow for $\eta$ of the form of [Eq. (\[FirstClassOfEta\])]{} \[of the form of [Eq. (\[SecondClassOfEta\])]{}\] and the ground-state eigenvalues are protected (not protected) from becoming complex. This sensitivity to system size reflects the highly entangled nature of the toric-code ground states. We came to the same conclusions for the ground-state manifolds of the X-cube (\[eq:xcube\]), the spin (\[eq:checkerboard\]) and Majorana (\[MajoranaCheckerboard\]) checkerboard models, and for the fractal liquids of Ref. [@Yoshida2013]. 
In these cases, the stability of $\mathcal{P}\mathcal{T}$ symmetry is even more surprising due to the enormous GSD that grows exponentially with system size. For Haah’s 17 codes, the stabilizers have a slightly more complicated form and the minimal requirement for stability differs from code to code, although we observe several groups of codes which all obey the same requirements. This classification of Haah’s codes based on stability of $\mathcal{P}\mathcal{T}$ symmetry approximately follows previous classifications based on entanglement renormalization [@Dua2019]. On a more general level, our work illustrates that $\mathcal{P}\mathcal{T}$ symmetry and the reality of energies can be protected in the degenerate ground-state manifold of correlated many-body systems with different forms of topological order—even in the absence of any symmetries and although exceptional points are generically expected to be abundant [@PiazzaQMB]. By virtue of being exact and simple, our framework can be readily applied to a large class of systems and provides a systematic method for constructing pseudo-Hermitian perturbations that ensures the reality of the resulting eigenvalues. This is not only relevant for experimental studies [@Jorg2019; @Takasu2020; @Regensburger2012; @LeiPTQuantumDynamics] and potential applications [@TopologicalLaser1; @TopologicalLaser2; @TopologicalLaser3; @WiersigSensing; @LiuSensing], but might also help deepen our theoretical understanding of non-Hermitian systems hosting exotic phases of matter, e.g., by providing novel ways of classifying spin-liquid or fracton phases according to their sensitivity to such perturbations. Acknowledgments {#acknowledgments .unnumbered} =============== This research was supported by the National Science Foundation under Grant No. DMR-1664842. We thank Darshan Joshi, Subir Sachdev, Saranesh Prembabu, and Nathan Tantivasadakarn for helpful discussions.
Perturbative derivation of the condition for reality of eigenvalues {#ap:perturbation} =================================================================== Here we provide a formal derivation of the statement of Sec. \[GeneralFormOfPerturbations\] of the main text that the eigenvalues of any (almost) degenerate subspace of $H_0$ in [Eq. (\[GeneralFormOfHam\])]{} will remain real upon adiabatically turning on the non-Hermitian perturbation $\epsilon V$, if all states in the (almost) degenerate subspace have the same eigenvalue under the metric operator $\eta$. We will discuss two different perturbative expansions and prove that the above holds true to all orders. We will then discuss the approximate orthogonality of the associated eigenstates. To this end, we will consider a pseudo-Hermitian Hamiltonian of the form of [Eq. (\[GeneralFormOfHam\])]{}, $$H_{\epsilon} = H_0 + \epsilon \, V, \quad \epsilon\in\mathbbm{R}, \label{FullHamiltonian}$$ and a metric operator $\eta$, such that $\comm{\eta}{H_0} = 0$ for the Hermitian unperturbed part, $H_0^{{{\phantom{\dag}}}}=H_0^\dagger$, and $\eta V \eta^{-1} = V^\dagger$ for the perturbation. We are interested in the behavior of the eigenvalues of a subset of (orthonormal) eigenstates, $\{\ket{\psi_i},i=1,\dots,n\}$, of $H_0$, which can be arbitrarily close or identical in energy but are well separated from all other eigenvalues. We refer to the space spanned by $\{\ket{\psi_i},i=1,\dots,n\}$ as the almost degenerate subspace. To analyze how their eigenvalues, $E_i(\epsilon)$, $i=1,2,\dots, n$, evolve when turning on $\epsilon V$ in [Eq. (\[FullHamiltonian\])]{}, we define the projectors $P$ and $Q$, $$P = \sum_{i=1}^n \ket{\psi_i}\bra{\psi_i}, \quad Q = \mathbbm{1} - P\,,$$ to the almost degenerate subspace and its complement. 
We use that the exact eigenstates, $\ket{\Psi_i(\epsilon)}$, obeying $$H_\epsilon \ket{\Psi_i(\epsilon)} = E_i(\epsilon) \ket{\Psi_i(\epsilon)},$$ must also satisfy [@Buth2004] $$H^{\text{eff}}_\epsilon(E_i(\epsilon)) P \ket{\Psi_i(\epsilon)} = E_i(\epsilon) P \ket{\Psi_i(\epsilon)} \label{EffectiveSchrEq}$$ with the effective Hamiltonian $$\begin{aligned} H^{\text{eff}}_\epsilon(E) &= P H_{\epsilon} P + P H_{\epsilon} Q G_{\epsilon}(E) Q H_{\epsilon} P, \label{EffectiveHam} \\ G_{\epsilon}(E) &= \left[ E - Q H_{\epsilon} Q \right]^{-1}. \label{GreensFunction}\end{aligned}$$ As follows from [Eq. (\[EffectiveSchrEq\])]{}, the eigenvalues $E_i(\epsilon)$, $i=1,2,\dots, n$, can be obtained by diagonalizing the effective Hamiltonian $H^{\text{eff}}_\epsilon$ in the almost degenerate subspace. Of course, this is not straightforward to do as the effective Hamiltonian itself depends on these eigenvalues; however, the effective-Hamiltonian formulation is a good starting point to develop a perturbative expansion. Expansion in $\epsilon$ ----------------------- Since we view the non-Hermitian part $\epsilon V$ as a perturbation to $H_0$ in our discussion in the main text, it is very natural to expand in $\epsilon$. Note that $P H_0 Q = 0$, so the second term in the effective Hamiltonian (\[EffectiveHam\]) is $\order{\epsilon^2}$, $$H^{\text{eff}}_\epsilon(E) = P H_0 P + \epsilon P V P + \epsilon^2 P V Q G_{\epsilon}(E) Q V P. \label{SimplifiedEffHam}$$ Let us now assume that we can obtain $E_i(\epsilon)$ via perturbative expansion in $\epsilon$. To keep the notation compact, let us define the operator $T_N$ which performs a Taylor expansion on a function or operator up to and including order $N$, i.e., $T_N[f(x)] := \sum_{k=0}^N \frac{x^k}{k!} \frac{\textrm{d} f}{\textrm{d} x}(0)$. As follows from [Eq. 
(\[SimplifiedEffHam\])]{}, $T_1[E_i(\epsilon)]$, for any $i=1,2,\dots , n$, is obtained by diagonalization of $$h^{(1)}_{ij} := \bra{\psi_i}( H_0 + \epsilon V ) \ket{\psi_j}, \quad i,j=1,2,\dots , n. \label{LeadingOrderEffHam}$$ Since, by construction, all $\ket{\psi_i}$ have the same eigenvalue under $\eta$, we conclude from [Eq. (\[HermitianMatrixInEigenspace\])]{} that $h^{(1)}_{ij}$ is Hermitian and, thus, $T_1[E_i(\epsilon)] \in \mathbbm{R}$. Higher orders, $T_{N >1}[E_i(\epsilon)]$, are obtained by iteratively diagonalizing $$\begin{aligned} \begin{split} h^{(N)}_{ij} &:= \bra{\psi_i}( H_0 + \epsilon V \\ &\quad + \epsilon^2 T_{N-2}[ V Q G_{\epsilon}(T_{N-2}[E_i(\epsilon)]) Q V ] ) \ket{\psi_j}. \label{NthOrderDiagonalization} \end{split}\end{aligned}$$ First, note that $Q$ commutes with $\eta$ which implies that $G_{\epsilon}(E)$ and, thus, also $V Q G_{\epsilon}(E) Q V$ are pseudo-Hermitian if $E\in\mathbbm{R}$. Since this holds for a continuous set of values of $\epsilon$, this property holds for each order in the Taylor expansion separately. As such, it also applies to $T_{N-2}[V Q G_{\epsilon}(E) Q V]$ in [Eq. (\[NthOrderDiagonalization\])]{}. Due to $T_1[E_i(\epsilon)] \in \mathbbm{R}$, iterative diagonalization of [Eq. (\[NthOrderDiagonalization\])]{} will always yield real eigenvalues. Taken together we have shown that $E_i(\epsilon)$, $i=1,2,\dots, n$, stay real to any order in $\epsilon$. If the eigenstates are exactly degenerate for $\epsilon=0$, the leading non-zero contribution to the energy splitting will determine whether the eigenvalues of $H_{\epsilon}$ stay real or become complex. In most cases, the first order corrections, given by diagonalizing $P H_\epsilon P$, break the degeneracy. Since $P H_\epsilon P$ is clearly Hermitian, our result is simple if the first order energy splitting is non-zero. In fact, a mathematical proof to first order in perturbation theory has been provided in Ref. [@Caliceti2004]. 
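The criterion can be illustrated with a small numerical toy model (our own construction, not from the paper): a degenerate doublet whose $\eta$-eigenvalues agree stays real under a pseudo-Hermitian, non-Hermitian perturbation as long as the gap to the rest of the spectrum is large, while a doublet split across $\eta$-sectors acquires complex eigenvalues at infinitesimal $\epsilon$. All matrix entries below are arbitrary example values.

```python
# Toy illustration of the reality criterion for pseudo-Hermitian
# perturbations.  eta is diagonal with entries +/-1; for V = i*A with A
# real, symmetric, and purely inter-sector, eta V eta^{-1} = V-dagger.
import numpy as np

# protected case: degenerate doublet (energies 0, 0) with eta = +1,
# excited states (5, 6) with eta = -1
eta = np.diag([1.0, 1.0, -1.0, -1.0])
H0 = np.diag([0.0, 0.0, 5.0, 6.0])
A = np.zeros((4, 4))
A[:2, 2:] = [[0.7, -0.3], [0.4, 1.1]]   # arbitrary inter-sector coupling
A = A + A.T                              # symmetric => i*A pseudo-Hermitian
V = 1j * A
assert np.allclose(eta @ V @ np.linalg.inv(eta), V.conj().T)
assert not np.allclose(V, V.conj().T)    # genuinely non-Hermitian

eps = 0.2
ev = np.linalg.eigvals(H0 + eps * V)
assert np.max(np.abs(ev.imag)) < 1e-6    # spectrum stays real: gap is open

# fragile case: the two degenerate states carry opposite eta-eigenvalues;
# the analogous perturbation yields the complex pair +/- i*eps
eta2 = np.diag([1.0, -1.0])
V2 = 1j * np.array([[0.0, 1.0], [1.0, 0.0]])
assert np.allclose(eta2 @ V2 @ np.linalg.inv(eta2), V2.conj().T)
ev2 = np.linalg.eigvals(eps * V2)
assert np.max(np.abs(ev2.imag)) > 0.1    # complex at infinitesimal eps
```

In the protected case the second-order effective Hamiltonian in the doublet is Hermitian, consistent with the iterative argument above; in the fragile case the degeneracy is lifted directly into a complex-conjugate pair.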
However, topological degeneracies are often broken only at higher orders in perturbation theory, so a more general result is required. If, however, the degeneracy is already broken for $\epsilon=0$, our current perturbative approach cannot be used to understand whether the eigenvalues stay real or not: by construction, we assume that $E_{i}(\epsilon)$ is an analytic function of $\epsilon$ and therefore will never be able to reproduce the $\epsilon$ dependence of real eigenvalues meeting and moving into the complex plane. For this reason, we next present an alternative approach.

Expansion in energy separation
------------------------------

The problem noted above, which arises when the eigenstates of $H_0$ are not exactly degenerate, can be remedied by performing an expansion in the energy gap between the almost degenerate subspace and the rest of the spectrum. More formally, we generalize the effective Schrödinger equation (\[EffectiveSchrEq\]) by introducing a dimensionless parameter $\lambda$, $$H^{\text{eff}}_{\epsilon,\lambda}(E_{i\epsilon}(\lambda)) P \ket{\Psi_{i\epsilon}(\lambda)} = E_{i\epsilon}(\lambda) P \ket{\Psi_{i\epsilon}(\lambda)},$$ where $$H^{\text{eff}}_{\epsilon,\lambda}(E) = P H_{\epsilon} P + \lambda\, P H_{\epsilon} Q G_{\epsilon}(E) Q H_{\epsilon} P. \label{ModifiedEffectiveHam}$$ We assume that we can expand $E_{i\epsilon}(\lambda)$ in a power series of $\lambda$, but treat its $\epsilon$-dependence exactly, and show that it stays real to all orders in $\lambda$. Since $\lambda$ multiplies $G_{\epsilon}$ in [Eq. (\[ModifiedEffectiveHam\])]{}, this expansion is controlled by the gap to the other states of the spectrum being large (compared to $\epsilon V$). The argument proceeds similarly to the one above: the zeroth-order contribution, $T_0[E_{i\epsilon}(\lambda)] = E_{i\epsilon}(0)$, is obtained from diagonalization of [Eq. (\[LeadingOrderEffHam\])]{} and is therefore real. 
One can compute $T_N[E_{i\epsilon}(\lambda)]$ from $T_{N-1}[E_{i\epsilon}(\lambda)]$ by iterative diagonalization of $$\begin{aligned} \begin{split} &h^{(N)}_{ij} := \\ &\,\,\,\bra{\psi_i}( H_\epsilon + \lambda\, T_{N-1}[ H_\epsilon Q G_{\epsilon}(T_{N-1}[E_{i\epsilon}(\lambda)]) Q H_\epsilon ] ) \ket{\psi_j}. \end{split}\end{aligned}$$ With the same arguments as above, this implies that $T_N[E_{i\epsilon}(\lambda)]$ will stay real for any $N$. Of course, the perturbative approach is expected to break down when the gap between the almost degenerate subspace and another part of the spectrum with different eigenvalue under $\eta$ closes, since $G_{\epsilon}$ will develop a pole.

Approximate orthogonality
-------------------------

Above, we have argued that the effective Hamiltonians in Eqs. (\[EffectiveHam\]) and (\[ModifiedEffectiveHam\]) will be Hermitian if the eigenvalues of $\eta$ are identical in the almost degenerate subspace. This has consequences not only for the reality of the eigenvalues but also for their mutual orthogonality. To first order in $\epsilon$ and zeroth order in $\lambda$, i.e., to leading order in the limit of a large gap to the excited states, the effective Hamiltonian is also independent of $E$. Therefore, the projections $P\ket{\Psi_i(\epsilon)}$, $i=1,2,\dots, n$, are obtained as eigenstates of the same Hermitian Hamiltonian and, as such, orthogonal. Naturally, this does not mean that $\ket{\Psi_i(\epsilon)}$ are orthogonal in the full Hilbert space; however, the differences between the full and the projected states, $\ket{\Psi_i(\epsilon)}-P\ket{\Psi_i(\epsilon)} = Q\ket{\Psi_i(\epsilon)}$, are also suppressed in the limit of large energetic separation to the rest of the spectrum since [@Buth2004] $$Q\ket{\Psi_i(\epsilon)} = \epsilon G_{\epsilon}(E_{i}(\epsilon))QVP\ket{\Psi_i(\epsilon)},$$ as stated in the main text. 
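The reality statement above can be checked numerically on a minimal example. The following sketch (a toy $3\times3$ construction of our own choosing, not a model from the main text; it assumes NumPy) builds a pseudo-Hermitian $H_\epsilon = H_0 + \epsilon V$ whose degenerate subspace carries a uniform $\eta$ eigenvalue, and contrasts it with a two-level case where the $\eta$ eigenvalues differ:

```python
import numpy as np

def is_pseudo_hermitian(H, eta):
    # Pseudo-Hermiticity: eta H^dag eta^{-1} = H (this eta is Hermitian
    # and unitary, so eta^{-1} = eta).
    return np.allclose(eta @ H.conj().T @ eta, H)

eta = np.diag([1.0, 1.0, -1.0])    # metric operator
H0 = np.diag([0.0, 0.0, 2.0])      # states 1, 2 degenerate, same eta eigenvalue
V = np.array([[0.0,  1.0, 1.0],
              [1.0,  0.0, 1.0],
              [-1.0, -1.0, 0.0]])  # non-Hermitian, yet eta V^dag eta = V
eps = 0.1
H = H0 + eps * V
assert is_pseudo_hermitian(H, eta)
evals = np.linalg.eigvals(H)       # spectrum stays real for small eps

# Contrast: a degenerate pair with opposite eta eigenvalues.
eta2 = np.diag([1.0, -1.0])
V2 = np.array([[0.0, 1.0],
               [-1.0, 0.0]])       # pseudo-Hermitian w.r.t. eta2
assert is_pseudo_hermitian(eps * V2, eta2)
evals2 = np.linalg.eigvals(eps * V2)   # eigenvalues +/- i*eps: complex at once
```

In the first case the first-order splitting within the degenerate subspace is $\pm\epsilon$ and real; in the second, the degeneracy is lifted directly into the complex plane, as the argument predicts.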
Interplay between X-Cube foliation and metric operators {#ap:foliation} ======================================================= In the main paper, we noted that the ground states of the X-cube model all have the same eigenvalue under our choice of metric operator $\eta$ in [Eq. (\[OneNaturalChoiceOfEta\])]{}, provided all lengths are even. This is because $\eta$ can be assembled by a collection of stabilizers. While $\eta$ cannot be assembled by stabilizers on a system with odd lengths, it is known that the X-cube model exhibits *foliated fracton order* [@Shirley2017], which implies that an $L \cross L \cross L$ X-cube model ground state can be enlarged to a ground state of an $L \cross L \cross L+1$ model by the attachment of an $L \cross L$ toric code ground state and the application of local unitary operators. If $L$ is even, then the original X-cube ground states and the toric code ground states will all have the same eigenvalue under $\eta$. Because of this, one may suspect that the resulting $L \cross L \cross L + 1$ ground states may also have the same eigenvalue under the appropriately enlarged $\eta$. However, as we will show, the process of attaching the two states and applying local unitary operators yields an $L \cross L \cross L + 1$ state that is not an eigenstate of the enlarged $\eta$. We first describe the process of adding an extra layer to the X-cube model, illustrated in Fig. \[fig:foliation\]. We begin with an $L \cross L \cross L$ X-cube ground state, $\ket{\psi_X}$, an $L \cross L$ toric code ground state, $\ket{\psi_{TC}}$, and a collection of $L^2$ additional qubits initialized in the $\ket{0}$ state, $Z \ket{0} = \ket{0}$. 
The statement of foliated fracton order is that an $L \cross L \cross L + 1$ X-cube ground state, $\ket{\psi'_X}$, can be written as $$\ket{\psi'_X} = S \left( \ket{\psi_X} \otimes \ket{\psi_{TC}} \otimes \ket{0}^{L^2} \right)$$ where $S$ is a series of local unitary transformations, which in our case is given by a collection of CNOT gates [@Shirley2017]. This foliation allows us to deduce the behavior of $\eta$ in [Eq. (\[OneNaturalChoiceOfEta\])]{} applied to $\ket{\psi'_X}$ based on the action of $S^\dagger \eta S$ on the three constituent states, assuming $L$ is even. This behavior is dependent on the form of $\eta$. We first begin with an analysis of $\eta_Z \equiv \prod_i Z_i$. Carrying out the corresponding CNOT gate transformations, we see in Fig. \[fig:foliationMetricOperator\] that the action of $S^\dagger \eta_Z S$ on the original X-cube ground state is not simply the product of all $Z_i$ operators—some sites are missing in a way that cannot simply be compensated by a product of stabilizers; this means that $\ket{\psi_X} \otimes \ket{\psi_{TC}} \otimes \ket{0}^{L^2}$ will generally not be an eigenstate of $S^\dagger \eta_Z S$. Carrying this through with $\eta_X = \prod_i X_i$ and $\eta_Y = \prod_i Y_i$ yields a similar result. In accordance with the analysis of the main text, we conclude that not every ground state of an even-by-even-by-odd X-cube model will be an eigenstate of $\eta$, as the foliation process complicates the behavior of the metric operator. One can take this $L \cross L \cross L + 1$ model and attach additional toric code layers in either of the two remaining directions, and an identical analysis implies that even-by-odd-by-odd and odd-by-odd-by-odd ground states will not all have the same eigenvalue under $\eta$. Of course, one can add another toric code layer to give an $L \cross L \cross L + 2$ model, in which case the metric operator *does* decompose nicely into the metric operators on the constituent ground states. 
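The mechanism behind this obstruction is easy to track at the level of Pauli bookkeeping: under conjugation by $\mathrm{CNOT}(c,t)$, a $Z$ on the target acquires a $Z$ on the control ($Z_t \mapsto Z_c Z_t$, $Z_c \mapsto Z_c$), so the support of $\eta_Z = \prod_i Z_i$ is generally not preserved. A minimal sketch of this bookkeeping (the four-qubit circuit below is an illustrative toy of our own choosing, not the actual foliation circuit):

```python
# Track the Z-support of a product of Pauli-Z operators under conjugation
# by CNOT gates. Rule: CNOT(c, t) maps Z_c -> Z_c and Z_t -> Z_c Z_t, so
# conjugation XORs the target's z-bit onto the control's z-bit.
def conjugate_z_string(z_bits, cnots):
    z = list(z_bits)
    for c, t in cnots:
        z[c] ^= z[t]
    return z

n = 4
eta_z = [1] * n                         # Z on every qubit
toy_circuit = [(0, 1), (2, 3), (1, 2)]  # hypothetical CNOT layer
support = conjugate_z_string(eta_z, toy_circuit)
print(support)  # -> [0, 1, 0, 1]: no longer supported on every site
```

Already after one layer of CNOTs some sites drop out of the support, which is exactly the kind of "missing sites" seen for $S^\dagger \eta_Z S$ in the foliation circuit.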
[Figs. \[fig:foliation\] and \[fig:foliationMetricOperator\] about here: lattice diagrams illustrating the attachment of a toric code layer to the X-cube model and the action of $S^\dagger \eta_Z S$ on the constituent states.]

Haah’s Cubic Codes {#ap:haah}
==================

In this section, we provide a more detailed account of Haah’s 17 cubic codes and the behavior of their ground states under pseudo-Hermitian perturbations. Throughout, we assume periodic boundary conditions as before. Haah’s 17 CSS cubic codes [@Haah2011] are defined on a cubic lattice with two Pauli spins on each vertex $i$. There are two classes of stabilizers—one consisting solely of $Z$ operators, and the other of $X$ operators. The structure of these stabilizers is detailed in Fig. 
\[fig:cubicStabilizer\] and Table \[table:cubicCodes\]. In the polynomial representation used in Sec. \[PolynomialRepresentation\], the stabilizers take the general form $Z(f,g)$ and $X(\bar{g}, \bar{f})$ for polynomials $f$ and $g$. As stated in the main text, these codes admit a large set of possible pseudo-Hermitian perturbations that leave the code subspace real: in analogy to [Eq. (\[OneNaturalChoiceOfEta\])]{}, a very natural set of choices for the metric operator $\eta$ is given by [Eq. (\[EtasForTheHaahCodes\])]{}. Since all the stabilizers in Haah’s cubic codes are mutually commuting, all ground states have the same eigenvalue under $\eta$, provided $\eta$ can be assembled by stabilizers. For the toric code, the X-cube and checkerboard model discussed in the main text, it is straightforward both to find the combination of stabilizers that yield $\eta$ on a lattice with an even number of sites in all directions, and to show that $\eta$ cannot be made of stabilizers on any other lattice. For Haah’s codes, the more complex form of the stabilizers makes the analysis more demanding, but possible using the polynomial representation of stabilizers [@macwilliams1977theory]. Using the same conventions as in Sec. \[PolynomialRepresentation\], the metric operators in [Eq. (\[EtasForTheHaahCodes\])]{} can be written as $$\begin{aligned} \begin{split} \eta &= Z(h,h), \, X(h,h), \, iX(h,h)Z(h,h), \\ h & = \sum_{j=1}^{L_x}\sum_{k=1}^{L_y}\sum_{\ell=1}^{L_z} x^{j-1}y^{k-1}z^{\ell-1}. \label{ExpressionOfEtaAppendix} \end{split}\end{aligned}$$ We will first consider $\eta = Z(h,h)$. For stabilizers $Z(f,g)$, a choice of covering (i.e., a product of stabilizers at different points) can be specified by a *covering polynomial* $k$, with the covering given by $Z(kf, kg)$. For example, if $k = 1 + x$, then the covering $Z(kf, kg)$ would consist of the product of two stabilizers—one at the origin, and one at $(x,y,z) = (1,0,0)$. 
Therefore, the question of whether $\eta$ can be assembled from stabilizers is equivalent to the question of whether $h = kf = kg$ for some polynomial $k$. Mathematically, this factorization takes place in the quotient ring $P/I$, where $P$ is the ring of polynomials in three variables with coefficients in $\mathbb{F}_2$, and $I$ is the ideal generated by $x^{L_x} + 1$, $y^{L_y} + 1$, and $z^{L_z} + 1$. This quotienting procedure imposes the periodic boundary conditions of the model. We calculate this factorization with the computer algebra system SageMath. Generically, this factorization procedure will yield two different coverings, $h = k_f f = k_g g$. To determine whether these two coverings are compatible, we calculate whether $k_f + k_g$ can be written as a sum of two polynomials $d_f + d_g$ with $d_f \in (I:f)$ and $d_g \in (I:g)$, where $(I:f)$ is the *colon ideal*, $(I:f) = \{p \in P: pf \in I\}$. This is equivalent to checking whether $k_f + k_g$ belongs to the ideal generated by $(I:f) \cup (I:g)$. If such a separation exists, then $k_f + d_f = k_g + d_g \equiv k$, and $h = kf = kg$ in $P/I$. This covering may not be unique, as $k+d_{fg}$ also works as a covering, where $d_{fg} \in (I:f) \cap (I:g)$; however, for the purposes of understanding the behavior of non-Hermitian perturbations, we are only interested in the existence of such a covering. We note that this procedure always finds a covering $k$ when one exists, so the non-existence of a decomposition $k_f + k_g = d_f + d_g$ implies the non-existence of a covering. Once we have obtained the covering $k$ for $Z(h,h)$, we immediately know that $X(h,h)$ in [Eq. (\[ExpressionOfEtaAppendix\])]{} can be assembled from $X$-stabilizers with the covering $\bar{k}$, since $X(\bar{k}\bar{g},\bar{k}\bar{f}) = X(\bar{h},\bar{h}) = X(h,h)$. This calculation is done in SageMath (see supplementary files) for system sizes $L_x \times L_y \times L_z$ for $1 \leq L_x, L_y, L_z \leq 19$. 
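The role of the system size in this factorization problem can be illustrated in a one-dimensional toy analogue (not one of Haah's codes; the actual computation uses SageMath ideal arithmetic in three variables): take a single generator $f = 1 + x$ in $\mathbb{F}_2[x]/(x^L+1)$ and ask whether the all-ones polynomial $h = 1 + x + \dots + x^{L-1}$ factors as $h = kf$. A brute-force check:

```python
from itertools import product

def multiply_mod(p, q, L):
    # Multiply two polynomials over GF(2) in F_2[x] / (x^L + 1);
    # coefficients are stored as length-L bit lists.
    r = [0] * L
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(q):
                if b:
                    r[(i + j) % L] ^= 1
    return r

def has_covering(f, L):
    # Is there a covering polynomial k with k * f = 1 + x + ... + x^{L-1}?
    h = [1] * L
    return any(multiply_mod(list(k), f, L) == h
               for k in product([0, 1], repeat=L))

for L in range(2, 9):
    f = [1, 1] + [0] * (L - 2)    # f = 1 + x
    print(L, has_covering(f, L))  # a covering exists exactly when L is even
```

For even $L$ the covering is $k = 1 + x^2 + \dots + x^{L-2}$, while for odd $L$ evaluating at $x=1$ gives $0 \cdot k(1) = h(1) = 1$ in $\mathbb{F}_2$, an obstruction; this is the simplest instance of the even/odd dependence found for the cubic codes.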
Although the existence/non-existence of a covering follows no clear pattern for very small system sizes, we see regular behavior emerge once the system size is larger than $3 \times 3 \times 3$. Specifically, the existence/non-existence of a covering for a certain cubic code depends only on whether each length is even or odd and, if it is even, whether it is divisible by $4$. This admits $3^3 = 27$ different possible classes of system sizes—however, we find that some classes are equivalent in terms of which cubic codes have coverings on them. A full table of this behavior is shown in Table \[table:cubicCoverings\]. We note several trends. On an odd-by-odd-by-odd lattice, none of the 17 cubic codes have code subspaces that stay real under pseudo-Hermitian perturbations. If only some of the system lengths are odd, the reality of the code subspace depends on which dimensions have odd lengths, and whether the remaining lengths are divisible by 4. In contrast, if $L_x$ and $L_y$ are divisible by $4$ and $L_z$ is even, all the code subspaces stay real under pseudo-Hermitian perturbations. Overall, cubic code 17 is the most unstable to pseudo-Hermitian perturbations, in that its code subspaces will become complex for almost all system sizes. In contrast, cubic code 7 has the most stable code subspace. There are some groups of codes with the same sensitivity to system sizes. If we consider codes with the same behavior up to a lattice rotation, these groups are $(11, 12, 14, 15)$, $(5, 8, 10, 16)$, and $(2,3,6,9)$. It is interesting to note that, with the exception of cubic code 16, all codes within a group transform in the same way under entanglement renormalization [@Dua2019]. 
While our results are purely numerical, an analytic verification of these trends for all system sizes is likely possible if one were to follow manually the factorization processes carried out in SageMath and show that their conclusions are only sensitive to the system sizes’ evenness/oddness and whether they are divisible by $4$. We do not attempt this, as there are $459$ separate cases that must be checked ($27$ possible system-size classes for each of the $17$ codes), and instead analyze the numerical results, which show clear trends up to $19 \times 19 \times 19$ lattices.

[Fig. \[fig:cubicStabilizer\] about here: labeling of the eight cube vertices $A$, $B$, $C$, $D$ and $A'$, $B'$, $C'$, $D'$ used in Table \[table:cubicCodes\].]

       $A$    $B$    $C$    $D$    $A'$   $B'$   $C'$   $D'$
  ---- ------ ------ ------ ------ ------ ------ ------ ------
  1    $ZI$   $ZZ$   $IZ$   $ZI$   $IZ$   $II$   $ZI$   $IZ$
  2    $IZ$   $ZZ$   $ZI$   $ZI$   $ZI$   $ZZ$   $IZ$   $ZI$
  3    $IZ$   $ZZ$   $ZZ$   $ZI$   $ZZ$   $II$   $IZ$   $IZ$
  4    $IZ$   $ZZ$   $ZI$   $ZI$   $IZ$   $II$   $IZ$   $ZI$
  5    $ZI$   $ZZ$   $II$   $ZZ$   $ZI$   $II$   $IZ$   $IZ$
  6    $ZI$   $II$   $ZI$   $ZZ$   $IZ$   $ZZ$   $II$   $IZ$
  7    $ZI$   $ZZ$   $ZI$   $IZ$   $IZ$   $II$   $II$   $ZZ$
  8    $ZI$   $ZI$   $IZ$   $ZZ$   $IZ$   $II$   $IZ$   $ZI$
  9    $ZI$   $IZ$   $ZZ$   $ZZ$   $IZ$   $ZZ$   $II$   $IZ$
  10   $ZI$   $IZ$   $ZI$   $ZZ$   $IZ$   $ZZ$   $ZI$   $ZI$
  11   $ZI$   $ZZ$   $II$   $IZ$   $ZI$   $II$   $IZ$   $ZZ$
  12   $ZI$   $IZ$   $ZZ$   $ZZ$   $ZI$   $II$   $II$   $IZ$
  13   $ZI$   $ZZ$   $IZ$   $ZI$   $IZ$   $II$   $II$   $ZZ$
  14   $ZI$   $IZ$   $ZZ$   $ZZ$   $IZ$   $II$   $ZZ$   $IZ$
  15   $ZI$   $IZ$   $II$   $ZZ$   $IZ$   $ZZ$   $II$   $ZI$
  16   $ZI$   $ZI$   $II$   $IZ$   $IZ$   $ZZ$   $II$   $ZZ$
  17   $ZI$   $ZZ$   $IZ$   $ZI$   $IZ$   $ZI$   $ZI$   $ZZ$

: The $Z$ stabilizers for Haah’s 17 CSS cubic codes, defined on the eight vertices of a cube, with vertices 
labeled according to Fig. \[fig:cubicStabilizer\]. The $X$ stabilizers are obtained by exchanging $A \leftrightarrow A'$, and likewise for the other vertices, and by exchanging the two Pauli spins on each site.[]{data-label="table:cubicCodes"}

[Table \[table:cubicCoverings\] about here: existence of a covering as a function of the system-size class, with one row per class $s_x \times s_y \times s_z$, where each $s \in \{$E, e, o$\}$ distinguishes odd lengths from the two classes of even lengths (divisible by $4$ or not), and one column per group of cubic codes with identical behavior: $CC_1$; $CC_2, CC_3, CC_6, CC_9$; $CC_4$; $CC_5, CC_8, CC_{10}, CC_{16}$; $CC_7$; $CC_{11}, CC_{12}, CC_{14}, CC_{15}$; $CC_{13}$; $CC_{17}$.]

[^1]: Strictly speaking, $\eta$ can also be a sum of terms of the form in [Eq. (\[ClassesOfEtas\])]{}. While this is already excluded by our assumption of unitary $\eta$, extending to non-unitary $\eta$ would not alter the discussion here. To see this, consider $\eta= \eta_1 + \eta_2$, with $\eta_j$ of the form (\[ClassesOfEtas\]), and anti-Hermitian $V$. In order for this to admit a pseudo-Hermitian perturbation that would not have been allowed by $\eta_{1,2}$ separately, we must have $\acomm{\eta_1}{V}=-\acomm{\eta_2}{V}\neq 0$. To show that this is not possible, we first take $V$ to be, like $\eta_j$, just a product of Pauli operators. 
With this assumption, the anti-commutator will be proportional to the product of the Pauli operators that appear in one and only one of $\eta_j$ and $V$ (or just be zero, which we are not interested in). So if $\eta_1$ and $\eta_2$ are distinct, $\acomm{\eta_1}{V}$ and $\acomm{\eta_2}{V}$ must differ by more than a minus sign and, thus, cannot add up to zero. Of course, $V$ is not generally just a product of Pauli operators, but rather a sum of products. However, this still does not allow for $\acomm{\eta_1}{V}=-\acomm{\eta_2}{V}\neq 0$ either, since the sum of two Pauli operators never yields a different Pauli operator (i.e., if $A_i$ and $B_i$ are products of Pauli operators, then $\sum_i A_i \neq \sum_i B_i$ unless $A_i \propto B_i$ or some permutation thereof). In other words, there is no way for the different terms in $V$ to conspire together to give some non-trivial case.
--- abstract: 'We consider a countably infinite system of spiking neurons introduced by Ferrari et al. in [@ferrari]. In this model each neuron has a membrane potential which takes values in the non-negative integers. Each neuron is also associated with two point processes. The first one is a Poisson process of some parameter $\gamma$, representing the *leak times*, that is, the times at which the membrane potential of the neuron is spontaneously reset to $0$. The second point process, which represents the *spiking times*, has a non-constant rate which depends on the membrane potential of the neuron at time $t$. This model was previously proven to undergo a phase transition with respect to the parameter $\gamma$ (see [@ferrari]). It was also proven in [@andre] that the renormalized time of extinction of a finite version of the system converges in law toward an exponential random variable when the number of neurons goes to infinity, which indicates a metastable behavior. Here we prove a result which is in some sense the symmetric counterpart of this last result: we prove that when $\gamma > 1$ (super-critical) the renormalized time of extinction converges in probability to $1$.' author: - | Morgan André\ *Instituto de Matemática e Estatística,*\ *Universidade de São Paulo.* title: '**Asymptotically Deterministic Time of Extinction for a Stochastic System of Spiking Neurons**' --- [**MSC Classification**]{}: 60K35; 82C32; 82C22.\ [**Keywords**]{}: systems of spiking neurons; interacting particle systems; extinction time.\

Introduction
============

In the present paper we consider an infinite system of spiking neurons which is as follows. We have a countably infinite set of neurons indexed by $\Z$. Each neuron can be in two different states, $1$ or $0$, respectively called *active* and *quiescent*. To each neuron $i$ is associated a Poisson process $(N^{\dagger}_i(t))_{t \geq 0}$ of some parameter $\gamma$, representing the *leak times*. 
At any of these leak times the state of neuron $i$ is immediately reset to $0$. Another point process $(N_i(t))_{t \geq 0}$, representing the *spiking times*, is associated with each neuron, whose rate at time $t$ is equal to $1$ if the neuron is active and to $0$ otherwise. Whenever neuron $i$ spikes, its state is also reset to $0$ and the state of each of its neighbours in the one-dimensional lattice (i.e. neurons $i-1$ and $i+1$) immediately becomes $1$. We denote by ${\xi(t)}_i$ the state of neuron $i$ at time $t$. The resulting process $\big(\xi(t)\big)_{t \geq 0}$ is an interacting particle system, that is, a Markovian process taking values in $\{0,1\}^\Z$ (see [@ips]). This model is a specific instantiation of a model introduced by Ferrari et al. in [@ferrari], and we refer to section 2 of this same article for a more formal description. It can be seen as a continuous-time variant of the model introduced in [@galves]. Other continuous-time variants of this model have been studied since this first paper, see for example [@demasi], [@duarte] and [@fournier]. We refer to [@review] for a general review. It has been proven in [@ferrari] that this model is subject to a phase transition. More precisely, the following theorem was proven. \[thm:phasetransition\] Suppose that for any $i \in \Z$ we have $X_i(0) \geq 1$. There exists a critical value $\gamma_c$ for the parameter $\gamma$, with $0 < \gamma_c < \infty$, such that for any $i \in \Z$ $$\P \Big( N_i([0,\infty[) \text{ } < \infty \Big) = 1 \text{ if } \gamma > \gamma_c$$ and $$\P \Big( N_i([0,\infty[) \text{ } = \infty \Big) > 0 \text{ if } \gamma < \gamma_c.$$ In words, there exists a critical value $\gamma_c$ for the leaking rate such that the process dies almost surely above it, and survives with positive probability below it. Moreover this model was also proven to exhibit a metastable behavior in [@andre]. 
What we mean by this is that if you consider a finite version of the process $\big(\xi(t)\big)_{t \geq 0}$, where the neurons are indexed on $\Z \cap [-N,N]$ instead of $\Z$, and if you denote by $\tau_N$ the time of extinction of this finite process, then the following holds in the sub-critical region: $$\frac{\tau_N}{{\mathbb{E}}(\tau_N)} \overset{\mathcal{D}}{\underset{N \rightarrow \infty}{\longrightarrow}} \mathcal{E} (1),$$ where ${\mathbb{E}}$ denotes the expectation, $\mathcal{D}$ denotes a convergence in distribution and $\mathcal{E} (1)$ an exponential random variable of mean $1$. To be exact, for technical reasons related to the way the proofs were constructed in [@andre], this wasn’t proven for any $0 < \gamma < \gamma_c$ but only for $0 < \gamma < \gamma_c'$, where $\gamma_c'$ is some value satisfying $\gamma_c' \leq \gamma_c$. In this article we consider the super-critical case. We aim to prove that in the super-critical regime the following holds: $$\frac{\tau_N}{{\mathbb{E}}(\tau_N)} \overset{\P}{\underset{N \rightarrow \infty}{\longrightarrow}} 1,$$ where the $\P$ denotes a convergence in probability. This is the object of Theorem \[mainth\], which is our main result. This result is the symmetric counterpart of the result proven in [@andre]. Indeed, the latter tells us that the time of extinction in the sub-critical regime is asymptotically memory-less, which means that it is highly unpredictable: knowing that the process survived up to time $t$ gives no information about what happens after time $t$. What we prove here is that in the super-critical regime the time of extinction is asymptotically constant, so that it is highly predictable. We don’t prove this result for the whole super-critical region but only for $\gamma>1$ (it was shown in [@ferrari] that $\gamma_c < 1$). This allows us to use a coupling argument with a continuous-time branching process, which greatly simplifies the proof. 
The proof of the general case is not out of reach but we believe that it would require proving a large set of intermediary results which would go far beyond the scope of this paper. The paper is organized as follows. In Section \[notation\] we briefly introduce the notation used in this article. In Section \[result\] we prove our main result. Finally, we give some of the classical results used throughout the proof in Section \[annex\].

Notations and other formalities {#notation}
===============================

For any $\eta \in \{0,1\}^\Z$ we denote by $(\xi^\eta(t))_{t \geq 0}$ the process with initial state $\xi^\eta(0) = \eta$. By convention, when the initial state is the “all one state”, we omit the superscript, writing simply $(\xi(t))_{t \geq 0}$. In the rest of this article we will repeatedly identify the state space $\{0,1\}^\Z$ with $\mathcal{P}(\Z)$, the set of all subsets of $\Z$. Indeed, any state $\eta$ of the process, lying in $\{0,1\}^{\Z}$, can be seen as well as an element $A$ of $\mathcal{P}(\Z)$, writing $A = \{i \in \Z \text{ such that } \eta_i = 1\}$. For example we will write $(\xi^0(t))_{t \geq 0}$ to indicate the process which starts with neuron $0$ active and all other neurons quiescent. Notice that putting $0$ to indicate the initial state with only neuron $0$ active is an abuse of notation as we should write $\{0\}$ instead. We will also write such things as $\xi^0 (t) \neq \emptyset$ to indicate the event in which the process has not yet died at time $t$. The finite process, defined on the window $[-N,N] \cap \Z$, will be written $(\xi_N (t))_{t \geq 0}$. As for the infinite process we adopt the convention of omitting the superscript when the initial state is the whole window $[-N,N] \cap \Z$. Note that by elementary results on Markov processes the time of extinction $\tau_N$ of this process is almost surely finite, since the state space is finite and the state where all neurons are quiescent is an absorbing state. 
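The finite process just described is straightforward to simulate, which gives a concrete feel for $\tau_N$ before turning to the proof. The sketch below is a minimal Gillespie-type simulation under the stated dynamics; the parameter values ($\gamma = 2$, the window sizes, and the number of runs) are illustrative choices of ours:

```python
import random
import math

def extinction_time(N, gamma, rng):
    # Gillespie simulation of the finite process on {-N, ..., N}, started
    # from the all-one state: each active neuron leaks at rate gamma
    # (reset to quiescent) and spikes at rate 1 (reset to quiescent,
    # neighbours inside the window become active).
    active = set(range(-N, N + 1))
    t = 0.0
    while active:
        n = len(active)
        t += rng.expovariate(n * (gamma + 1.0))    # waiting time to next event
        i = rng.choice(list(active))               # neuron at which it occurs
        active.discard(i)                          # leak and spike both reset i
        if rng.random() >= gamma / (gamma + 1.0):  # the event is a spike
            for j in (i - 1, i + 1):
                if -N <= j <= N:
                    active.add(j)
    return t

rng = random.Random(1)
taus_small = [extinction_time(5, 2.0, rng) for _ in range(200)]
taus_large = [extinction_time(40, 2.0, rng) for _ in range(200)]
mean_small = sum(taus_small) / len(taus_small)
mean_large = sum(taus_large) / len(taus_large)
# Every run goes extinct, and the mean extinction time grows slowly with
# the window size, to be compared with log(2N+1).
print(mean_small, mean_large, math.log(11), math.log(81))
```

The slow growth of the empirical mean with $N$ is consistent with the logarithmic scaling established in the next section.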
Result
======

Our main result follows almost entirely from the following proposition, which says that the time of extinction in the super-critical regime is asymptotically logarithmic in $N$. A similar result was proven for the contact process in [@liu], which served as an inspiration for our own proof. \[convproba\] Suppose that $\gamma > 1$. Then there exists a constant $0<C<\infty$ depending on $\gamma$ such that the following convergence holds $$\frac{\tau_N}{\log (2N+1)} \overset{\P}{\underset{N \rightarrow \infty}{\longrightarrow}} C.$$ We define the following function $$t \mapsto f(t) = \log \Big( \P \left( \xi^0 (t) \neq \emptyset \right)\Big).$$ We also define the following constant $$C' = -\sup_{s>0} \frac{f(s)}{s}.$$ The first step is to show that the function $f$ is superadditive. For any $s,t \geq 0$ we have the following inequality $$\P\left(\xi^0 (t+s) \neq \emptyset \text{ } | \text{ } \xi^0(t) \neq \emptyset \right) \geq \P \left(\xi^0 (s) \neq \emptyset \right).$$ Indeed, saying that the process is still alive at time $t$ is the same as saying that it possesses at least one active neuron at time $t$, that is, at least as many active neurons as in the initial configuration $\{0\}$. The inequality then follows from the fact that having a higher number of active neurons in the initial configuration implies having a higher probability of being alive at any given time $s$. Moreover, this last inequality can be rewritten as follows $$\P\left(\xi^0 (t+s) \neq \emptyset \right) \geq \P \left(\xi^0 (t) \neq \emptyset \right) \P \left(\xi^0 (s) \neq \emptyset \right),$$ and taking the log gives the superadditivity we are looking for. 
Now from a well-known result about superadditive functions (Lemma \[fekete\] in the annex) we get the following convergence $$\label{superaddconv} \frac{f(t)}{t} {\underset{t \rightarrow \infty}{\longrightarrow}} - C'.$$ This implies that we have the following inequality $$\label{expbound} \P \left( \xi^0 (t) \neq \emptyset \right) \leq e^{-C't}.$$ Now notice that while it is clear that $C' < \infty$, it is not obvious that $C' > 0$. We show that it is the case using a coupling with a branching process. Let $(Z(t))_{t \geq 0}$ be the branching process whose dynamic is defined in the annex. The coupling is done as follows: at time $0$ the only active neuron in $\xi^0(0)$ is coupled with the only individual in $Z(0)$. By this we mean that if this neuron becomes quiescent then the individual dies, and if the neuron spikes, then the individual gives birth to another individual. When a spike occurs, there are three possibilities: two neurons are activated, one neuron is activated, or no neuron is activated. In the first case the individual which was coupled to the neuron that just spiked is coupled with one of the newly activated neurons, and the newly born individual is coupled with the other one. In the second and third cases, the supernumerary individuals are given their own independent exponential clocks and then evolve freely (as well as their possible offspring). At any time $t \geq 0$ we obviously have $|\xi^0_t| \leq Z_t$. Using Markov's inequality and Proposition \[prop:meanbranching\] from the annex it follows that $$\P \left( \xi^0(t) \neq \emptyset \right) \leq \P \left( Z_t \geq 1 \right) \leq {\mathbb{E}}(Z_t) = e^{-(\gamma - 1)t}.$$ Then we take the log and divide by $t$ in the previous inequality and we obtain in the limit that $C' \geq \gamma - 1$, and from the assumption that $\gamma > 1$ we get $C' > 0$. Let us break the suspense and already reveal that the constant $C$ we are looking for is actually simply the inverse of $C'$. 
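The mean used in the last display can be checked by direct simulation of the dominating branching process, in which each individual independently gives birth at rate $1$ and dies at rate $\gamma$. A minimal Monte Carlo sketch, with illustrative parameters $\gamma = 2$ and $t = 1$:

```python
import random
import math

def branching_population(t_max, gamma, rng):
    # Continuous-time binary branching started from one individual: with
    # n alive, the next event comes at total rate n*(1 + gamma) and is a
    # birth with probability 1/(1 + gamma), a death otherwise.
    n, t = 1, 0.0
    while n > 0:
        t += rng.expovariate(n * (1.0 + gamma))
        if t > t_max:
            break
        if rng.random() < 1.0 / (1.0 + gamma):
            n += 1   # birth
        else:
            n -= 1   # death
    return n

rng = random.Random(0)
gamma, t = 2.0, 1.0
runs = 20000
mean = sum(branching_population(t, gamma, rng) for _ in range(runs)) / runs
print(mean, math.exp(-(gamma - 1.0) * t))   # Monte Carlo mean vs e^{-(gamma-1)t}
```

The empirical mean agrees with ${\mathbb{E}}(Z_t) = e^{-(\gamma-1)t}$ up to Monte Carlo error.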
Therefore in order to prove our result we are going to prove that for any $\epsilon > 0$ we have the two following convergences $$\label{pluseps} \P \left( \frac{\tau_N}{\log(2N+1)} - \frac{1}{C'} > \epsilon \right) {\underset{N \rightarrow \infty}{\longrightarrow}} 0,$$ and $$\label{moinseps} \P \left( \frac{\tau_N}{\log(2N+1)} - \frac{1}{C'} < -\epsilon \right) {\underset{N \rightarrow \infty}{\longrightarrow}} 0.$$ Let us start with (\[pluseps\]), which is the easier part. Using inequality (\[expbound\]) and a union bound we get $$\label{expboundN} \P \left( \xi_N(t) \neq \emptyset \right) \leq (2N + 1) \P \left( \xi^0(t) \neq \emptyset \right) \leq (2N+1) e^{-C't}.$$ Now, for any $\epsilon > 0$, if we let $t = (\frac{1}{C'} + \epsilon)\log(2N+1)$ then the following holds $$\P \left( \frac{\tau_N}{\log(2N+1)} - \frac{1}{C'} > \epsilon \right) = \P \left( \xi_N(t) \neq \emptyset \right) \leq e^{-C'\epsilon \log(2N + 1)}.$$ Then the fact that $C' > 0$ ensures that the right-hand side of the inequality goes to $0$ as $N$ diverges, which proves (\[pluseps\]). It remains to prove (\[moinseps\]). If for some $N \in \N^*$ we take $t = \left(\frac{1}{C'} - \epsilon \right) \log (2N + 1)$, then we can write $$\P \left( \frac{\tau_N}{\log(2N+1)} - \frac{1}{C'} < -\epsilon \right) = \P \left( \xi_N(t) = \emptyset \right),$$ so that it suffices to show that the right-hand side converges to $0$ for this choice of $t$ as $N$ goes to $\infty$.
From (\[superaddconv\]) (and from the fact that $C'> 0$) we get that for any $\delta > 0$ and for big enough $t$ $$\frac{f(t)}{t} \geq - (1+\delta) C',$$ which can be written $$\P \left( \xi^0_t = \emptyset \right) \leq 1 - e^{- (1+\delta) C't}.$$ Assuming without loss of generality that $\epsilon < \min\left(C', \frac{1}{C'}\right)$, we can choose $\delta > 0$ small enough that $(1+\delta)(1 - C'\epsilon) \leq 1 - \epsilon^2$, so that taking $t = \left(\frac{1}{C'} - \epsilon \right) \log (2N + 1)$ we have for $N$ big enough $$\label{boundxi0} \P \left( \xi^0_t = \emptyset \right) \leq 1 - \frac{1}{(2N+1)^{1-\epsilon^2}}.$$ Now for any $k \in \Z$ we define $$F_k \text{ } {\stackrel{\mathclap{\normalfont\mbox{def}}}{=}}\text{ } \Z \cap \big[(2k-1)K\log(2N+1), (2k+1)K\log(2N+1)\big],$$ where $K$ is some constant depending on $N$ whose value will be chosen later in such a way that $K\log(2N+1)$ is an integer. We then consider a modification of the process $(\xi_N(t))_{t \geq 0}$ in which all neurons at the border of one of the sub-windows $F_k$ defined above (i.e. all neurons indexed by $(2k+1)K\log(2N+1)$ for some $k \in \Z$) are fixed in the quiescent state and therefore are never allowed to spike. This modified process is denoted $(\zeta_N(t))_{t \geq 0}$. We also define the following configuration $$A_N \text{ } {\stackrel{\mathclap{\normalfont\mbox{def}}}{=}}\text{ } \left\{ 2kK\log(2N+1) \text{ for } k \in \Z \cap \left[-\frac{2N+1}{2K\log(2N+1)}, \frac{2N+1}{2K\log(2N+1)}\right] \right\}.$$ Notice that the fact that the neurons at the borders of the windows $F_k$ are never allowed to spike makes the evolution of $(\zeta_N(t))_{t \geq 0}$ independent from one window to another. Moreover notice that the integers belonging to $A_N$ are all at the center of one of these windows. Now for any $t \geq 0$ we define $r_t {\stackrel{\mathclap{\normalfont\mbox{def}}}{=}}\max \xi^0_t$.
Considering the spiking process $(\xi_t)_{t \geq 0}$ with no leaking, it is easy to see that the right edge $r_t$ can be coupled with a homogeneous Poisson process of parameter $1$, which we denote $(M(t))_{t \geq 0}$, in such a way that for any $m \geq 0$ $$\P \left( \sup_{s \leq t} r_s \geq m \right) \leq \P \Big( M(t) \geq m \Big).$$ We have $${\mathbb{E}}\left( e^{M(t)} \right) = e^{t(e - 1)},$$ so taking the exponential, using Markov's inequality and taking $m=K't$ (where $K'$ is some constant that we are going to fix in a moment) we get $$\begin{aligned} \P \left( \sup_{s \leq t} r_s \geq K't\right) & \leq e^{t(e - 1 - K')}\\ & \leq e^{t(2 - K')},\end{aligned}$$ where in the last inequality we simply used the fact that $e - 1 < 2$. Now taking again $t = \left(\frac{1}{C'} - \epsilon \right) \log (2N + 1)$ and $K' = 2(1 + C')$, we get $$\P \left( \sup_{s \leq t} r_s \geq K't \right) \leq e^{-2(1-C'\epsilon) \log(2N+1)},$$ and assuming without loss of generality that $\epsilon < \frac{1}{2C'}$ we get $$\label{boundEt} \P \left( \sup_{s \leq t} r_s \geq K't \right) \leq \frac{1}{2N+1}.$$ It is now possible to fix the value of the constant $K$ we introduced earlier. We take $$K = \inf \left\{ x \in \R \text{ such that } x \geq \frac{K'}{C'} \text{ and } x \log(2N+1) \in \N \right\}.$$ In words, we take $K$ equal to $\frac{K'}{C'}$ and then enlarge it slightly in order for $K\log(2N+1)$ to be an integer.
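The exponential-moment (Chernoff-type) bound on the Poisson tail used above can be checked numerically. The sketch below (parameter values are illustrative only) compares the exact tail $\P(M(t) \geq m)$ with the bound ${\mathbb{E}}(e^{M(t)})e^{-m} = e^{t(e-1)-m}$:

```python
import math

def poisson_tail(t, m):
    """Exact P(M(t) >= m) for M(t) ~ Poisson(t), m a non-negative integer."""
    # P(M >= m) = 1 - sum_{k < m} e^{-t} t^k / k!
    return 1.0 - sum(math.exp(-t) * t**k / math.factorial(k) for k in range(m))

def chernoff_bound(t, m):
    """Markov's inequality applied to e^{M(t)}: P(M(t) >= m) <= e^{t(e-1) - m}."""
    return math.exp(t * (math.e - 1.0) - m)

t = 10.0
for m in (20, 30, 40):
    assert poisson_tail(t, m) <= chernoff_bound(t, m)
```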
We also define the following event $$E_t \text{ } {\stackrel{\mathclap{\normalfont\mbox{def}}}{=}}\text{ } \Big\{ \zeta^0_s \text{ doesn't escape from } \Z \cap [-K\log(2N+1), K\log(2N+1)] \text{ for any } s \leq t\Big\}.$$ Now taking $N$ large enough and $t = \left(\frac{1}{C'} - \epsilon \right) \log (2N + 1)$ we have $$\begin{aligned} \P \left( \xi_N(t) = \emptyset \right) &\leq \P \left( \zeta^{A_N}_N (t) = \emptyset \right)\\ & = \P \left( \zeta^0_N (t) = \emptyset \right)^{(2N+1) / (2K\log(2N+1))}\\ & \leq \Big( \P \left( \{\zeta^0_N (t) = \emptyset\} \cap E_t\right) + \P \left( E_t^c \right) \Big)^{(2N+1) / (2K\log(2N+1))}\\ & \leq \Big( \P \left( \xi^0_N (t) = \emptyset\right) + \P \left( E_t^c \right) \Big)^{(2N+1) / (2K\log(2N+1))}\\ & \leq \left( 1 - \left(\frac{1}{(2N+1)^{1 - \epsilon^2}} - \frac{2}{2N+1} \right) \right)^{(2N+1) / (2K\log(2N+1))}.\\\end{aligned}$$ To obtain the last inequality we used (\[boundxi0\]) and the fact that inside $E_t$ the process $(\zeta^0_s)_{s \in [0,t]}$ evolves just like $(\xi^0_s)_{s \in [0,t]}$, which allows us to bound $\P \left( \xi^0_N (t) = \emptyset\right)$, and we used (\[boundEt\]) to bound $\P \left( E_t^c \right)$ (the factor $2$ accounts for the left edge, which is treated symmetrically). Finally we write $$a_N = \frac{1}{(2N+1)^{1 - \epsilon^2}} - \frac{2}{2N+1},$$ and $$b_N = \frac{2N+1}{2K\log(2N+1)},$$ and we can easily verify that $(a_N)_{N \geq 0}$ and $(b_N)_{N \geq 0}$ satisfy the assumptions of Lemma \[convseq\] in the annex, so that $$(1 - a_N)^{b_N} \underset{N \rightarrow \infty}{\longrightarrow} 0,$$ which proves (\[moinseps\]) and finishes the proof. The next step consists in showing that the same convergence holds for the expectation, which is the object of the following proposition. \[convexpect\] Suppose that $\gamma > 1$. Then the following convergence holds $$\frac{{\mathbb{E}}\left(\tau_N\right)}{\log (2N+1)} \underset{N \rightarrow \infty}{\longrightarrow} C,$$ where $C$ is the same constant as in Proposition \[convproba\].
It is well known that the fact that a sequence of random variables $(X_n)_{n \in \N}$ converges in probability to some random variable $X$ doesn’t necessarily imply that ${\mathbb{E}}(X_n) \underset{N \rightarrow \infty}{\longrightarrow} {\mathbb{E}}(X)$. Nonetheless this implication holds true under the additional assumption that the sequence is uniformly integrable (see for example Theorem 5.5.2 in [@tande], page 259), i.e. if the following holds $$\lim_{M \rightarrow \infty} \left( \sup_{n \in \N} {\mathbb{E}}\Big( |X_n| \mathbbm{1}_{\{|X_n| > M\}} \Big)\right) = 0.$$ It is therefore sufficient to show that $\big(\tau_N/\log(2N+1)\big)_{N \in \N^*}$ is uniformly integrable, and the result will then follow from Proposition \[convproba\]. For some $M > 0$ and some $N \in \N^*$ it is easy to see that we have the following $${\mathbb{E}}\left( \frac{\tau_N}{\log (2N+1)} \mathbbm{1}_{\{\frac{\tau_N}{\log (2N+1)} > M\}} \right) = \int_0^\infty \P \left( \frac{\tau_N}{\log (2N+1)} > \max(t,M) \right) dt.$$ Now using inequality (\[expboundN\]) and the previously proven fact that $C'>0$ when $\gamma > 1$ we have the following $$\begin{aligned} &\int_0^\infty \P \left( \frac{\tau_N}{\log (2N+1)} > \max(t,M) \right) dt\\ &= \int_0^M \P \left( \frac{\tau_N}{\log (2N+1)} > M \right) dt + \int_M^\infty \P \left( \frac{\tau_N}{\log (2N+1)} > t \right) dt\\ &\leq (2N +1) \left[ \int_0^M e^{-C'\log (2N+1)M} dt + \int_M^\infty e^{-C'\log (2N+1)t} dt \right]\\ &= (2N +1) \left[ M (2N+1)^{-C'M} + \frac{(2N+1)^{-C'M}}{C'\log(2N+1)}\right]\\ &= (2N +1)^{1 - C'M} \left[ M + \frac{1}{C'\log(2N+1)}\right],\end{aligned}$$ where $C'$ is the same constant as in the previous proof.
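The behaviour of the final bound can be verified numerically; in the sketch below the value of $C'$ is purely illustrative (it is not derived from the model), and we check that for $M > 1/C'$ the bound $(2N+1)^{1-C'M}\big[M + \frac{1}{C'\log(2N+1)}\big]$ decreases in $N$ and vanishes as $M$ grows:

```python
import math

def tail_bound(N, M, Cp):
    """The bound (2N+1)^{1 - C'M} * [M + 1/(C' log(2N+1))] from the proof."""
    return (2 * N + 1) ** (1 - Cp * M) * (M + 1.0 / (Cp * math.log(2 * N + 1)))

Cp = 0.5   # illustrative value of C' only
M = 4.0    # chosen so that M > 1/C' = 2
vals = [tail_bound(N, M, Cp) for N in (1, 10, 100, 1000)]
assert all(a > b for a, b in zip(vals, vals[1:]))  # decreasing in N, as claimed
assert tail_bound(1, 40.0, Cp) < 1e-6              # the sup over N vanishes as M -> infinity
```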
Without loss of generality we assume that $M > \frac{1}{C'}$, so that the bound above is decreasing in $N$, from which we get $$\label{supbound} \sup_{N \in \N^*} {\mathbb{E}}\left( \frac{\tau_N}{\log (2N+1)} \mathbbm{1}_{\{\frac{\tau_N}{\log (2N+1)} > M\}}\right) \leq 3^{1 - C'M} \left[ M + \frac{1}{C'\log(3)}\right].$$ Finally the right-hand side of inequality (\[supbound\]) goes to $0$ when $M$ goes to $\infty$, which ends the proof. From these two propositions we get \[mainth\] Suppose that $\gamma > 1$. Then the following convergence holds $$\frac{\tau_N}{{\mathbb{E}}(\tau_N)} \overset{\P}{\underset{N \rightarrow \infty}{\longrightarrow}} 1.$$ This theorem is an immediate consequence of the two previous propositions. Annex ===== Continuous time branching process --------------------------------- We define a continuous-time branching process as follows. At time $0$ we have a single individual. Two independent exponential random clocks, of parameter $1$ and $\gamma$ respectively, are attached to this individual. If the rate $\gamma$ clock rings before the other one, then the individual dies. Otherwise the individual gives birth to another one, to which another couple of exponential clocks is attached, and so on. All individuals behave independently of each other. We denote by $(Z_t)_{t \geq 0}$ the process counting the number of individuals in the population over time. Note that by hypothesis we have $Z_0 = 1$. We have the following result, giving an explicit value for the expectation at time $t$. \[prop:meanbranching\] For any value of the parameter $\gamma$, and for any $t \geq 0$, we have $${\mathbb{E}}\left(Z_t \right) = e^{- (\gamma - 1)t}.$$ A proof can be found in [@schinazi] (chapter 8).
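Proposition \[prop:meanbranching\] is easy to check by Monte Carlo simulation. The sketch below (with an illustrative value of $\gamma$) simulates the linear birth-and-death process in which each individual gives birth at rate $1$ and dies at rate $\gamma$, and compares the empirical mean of $Z_t$ with $e^{-(\gamma-1)t}$:

```python
import math
import random

def branching_population(gamma, t_max, rng):
    """One run of the process: each individual gives birth at rate 1 and
    dies at rate gamma. Returns the population size Z at time t_max (Z_0 = 1)."""
    n, t = 1, 0.0
    while n > 0:
        t += rng.expovariate(n * (1.0 + gamma))  # time of the next event
        if t > t_max:
            break
        if rng.random() < 1.0 / (1.0 + gamma):
            n += 1  # birth
        else:
            n -= 1  # death
    return n

rng = random.Random(0)
gamma, t = 1.5, 1.0
runs = 100_000
empirical = sum(branching_population(gamma, t, rng) for _ in range(runs)) / runs
theoretical = math.exp(-(gamma - 1.0) * t)  # E(Z_t) from the proposition
print(empirical, theoretical)  # the two values should agree to within ~1e-2
```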
Lemmas ------ \[convseq\] Let $(a_n)_{n \in \N}$ and $(b_n)_{n \in \N}$ be two sequences of real numbers satisfying the following conditions: $$\lim_{n \rightarrow \infty} a_n = 0, \text{ }\lim_{n \rightarrow \infty} b_n = +\infty \text{ and } \lim_{n \rightarrow \infty} a_n b_n = +\infty.$$ Then we have the following convergence $$\lim_{n \rightarrow \infty} (1 - a_n)^{b_n} = 0.$$ For any $n \geq 0$ we can write $(1 - a_n)^{b_n} = e^{b_n \log (1 - a_n)}$. Now the well-known inequality $e^{-x} \geq 1-x$ can be written $\log(1 - x) \leq -x$, which gives us $$(1 - a_n)^{b_n} \leq e^{-b_n a_n},$$ and this last bound goes to $0$ when $n$ goes to $\infty$ by the hypotheses. The following lemma is a classical result in real analysis about superadditive functions, sometimes called Fekete's lemma. \[fekete\] Let $f: \R^+ \rightarrow \R$ be a locally bounded function such that for any $s,t \geq 0$ the following holds $$f(s+t) \geq f(s) + f(t).$$ Then we have the following $$\lim_{t \rightarrow \infty} \frac{f(t)}{t} = \sup_{t > 0} \frac{f(t)}{t}.$$ Let us fix some $s>0$. Then any $t\geq 0$ can be written $t = q(t)s + r(t)$, where $q(t)$ (the “quotient”) is a non-negative integer and $r(t)$ (the “remainder”) belongs to $[0,s[$.
Iterating the superadditivity property we have $f\big(q(t)s\big) \geq q(t) f\left(s\right)$ so that $$f(t) = f\big(q(t)s + r(t)\big) \geq q(t) f(s) + f\big(r(t)\big).$$ Now, using the assumption that $f$ is locally bounded, we have $$\frac{q(t) f(s) + f\big(r(t)\big)}{t} \underset{t \rightarrow \infty}{\longrightarrow} \frac{f(s)}{s},$$ from which it follows that $$\liminf_{t \rightarrow \infty} \frac{f(t)}{t} \geq \frac{f(s)}{s}.$$ The inequality above being true for any $s > 0$, we get $$\liminf_{t \rightarrow \infty} \frac{f(t)}{t} \geq \sup_{t>0} \frac{f(t)}{t}.$$ The result then follows from the inequality above together with the trivial inequality $$\sup_{t>0} \frac{f(t)}{t} \geq \limsup_{t \rightarrow \infty} \frac{f(t)}{t} \geq \liminf_{t \rightarrow \infty} \frac{f(t)}{t}.$$ Acknowledgements {#acknowledgements .unnumbered} ================ This work is part of my PhD thesis. I thank my PhD adviser Antonio Galves for introducing me to the model discussed in this paper. Many thanks also to Antonio Marcos, for the useful references he gave me and the fruitful discussions we had. This article was produced as part of the activities of the FAPESP Research, Innovation and Dissemination Center for Neuromathematics (grant number 2013/07699-0, São Paulo Research Foundation), and the author was supported by a FAPESP scholarship (grant number 2017/02035-7). R.B. SCHINAZI (2014). “Classical and Spatial Stochastic Processes With Applications to Biology”. *Birkhäuser Basel, 2nd ed.* R. DURRETT and X.F. LIU (1988). “The Contact Process on a Finite Set”. *The Annals of Probability, Vol. 16, No. 3, pp 1158-1173*. M. ANDRE (2019). “A Result of Metastability for an Infinite System of Spiking Neurons”. *arXiv:1905.07053*. A. DE MASI, A. GALVES, E. LÖCHERBACH and E. PRESUTTI (2015). “Hydrodynamic Limit for Interacting Neurons”. *Journal of Statistical Physics, Vol. 158, Issue 4, pp 866-902*. A. DUARTE and G. OST (2014). “A Model for Neural Activity in the Absence of External Stimuli”.
*Markov Processes And Related Fields, Vol. 22, pp 37-52*. N. FOURNIER and E. LÖCHERBACH (2016). “On a Toy Model of Interacting Neurons”. *Annales de l’Institut Henri Poincaré, Probabilités et Statistiques, Vol. 52, No. 4, pp 1844-1876*. R. DURRETT (2010). “Probability: Theory and Examples”. *Cambridge University Press, 4th ed.* P.A. FERRARI, A. GALVES, I. GRIGORESCU and E. LÖCHERBACH (2018). “Phase Transition for Infinite Systems of Spiking Neurons”. *Journal of Statistical Physics, Vol. 172, pp 1564-1575*. A. GALVES and E. LÖCHERBACH (2013). “Infinite Systems of Interacting Chains with Memory of Variable Length”. *Journal of Statistical Physics, Vol. 151, pp 896-921*. A. GALVES and E. LÖCHERBACH (2015). “Modeling Networks of Spiking Neurons as Interacting Processes with Memory of Variable Length”. *Journal de la Société Française de Statistique, Vol. 157, No. 1, pp 17-32*. T. M. LIGGETT (1985). “Interacting Particle Systems”. *Grundlehren der mathematischen Wissenschaften, 276*.
--- author: - 'A. Lazarian' title: Grain Alignment in Molecular Clouds --- One of the most informative techniques for studying magnetic fields in molecular clouds is based on the use of starlight polarization and polarized emission arising from aligned dust. How reliably such polarization maps can be interpreted in terms of magnetic fields is the issue that grain alignment theory addresses. I briefly review the basic physical processes involved in grain alignment. Why do we care? {#sec:1} =============== The fact that interstellar grains are aligned has been known for more than half a century. Nevertheless, it has always been a puzzle why grains get aligned in the interstellar medium. Very soon after the discovery of grain alignment by Hall (1949) and Hiltner (1949) it became clear that the alignment happens with respect to the magnetic field. Since that time grain alignment has ceased to be a matter of pure scientific curiosity and has become an important missing link connecting polarimetry observations with the all-important interstellar magnetic fields[^1]. The history of grain alignment ideas is exciting (see the review by Lazarian 2003), but we do not have space here to dwell upon it. Within this very short review we discuss the modern understanding of the grain alignment processes applicable to molecular clouds. The last decade has been marked by substantial progress in understanding the new physics associated with grain alignment. The theory has become predictive, which enables researchers to interpret observational data with more confidence. Recent reviews of grain alignment theory include Roberge (1996) and Lazarian (2000, 2003). Progress in testing the theory is covered in Hildebrand (2000), while particular aspects of grain dynamics are discussed in Lazarian & Yan (2003). The presentation in Lazarian (2003) goes beyond the molecular cloud environment and deals with the possibility of alignment in circumstellar regions, the interplanetary medium, comae of comets, etc.
The interested reader may use these reviews as a guide to the vast and exciting original literature on grain alignment. How do grains rotate? ===================== The dynamics of grains in molecular clouds is rather involved (see Fig. 1). First of all, grains rotate. The rotation can arise from chaotic gaseous bombardment of the grain surface and be Brownian, or it can arise from the systematic torques discovered by Purcell (1975, 1979). The most efficient among those are torques arising from H$_2$ formation over the grain surface. One can visualize those torques by imagining a grain with tiny rocket nozzles ejecting nascent high-velocity hydrogen molecules (see Fig. 1). Grains are known to enable atoms of hydrogen to form molecules. The reactions are believed to take place at particular catalytic sites on the grain surface. Those catalytic sites act as “Purcell rockets”. Even when the surroundings of dust grains are mostly molecular, grains can rotate suprathermally, i.e. with kinetic energies much larger than $kT_{gas}$, due to the variation of the accommodation coefficient. Indeed, if the temperatures of gas and dust are different, those variations allow different parts of the grain to bounce back impinging gaseous atoms with different efficiencies. It is easy to understand that this also results in systematic torques. In addition, Purcell (1979) identified electron ejection as yet another process that can drive grains to very large angular velocities. ![Grain alignment involves several alignment processes acting simultaneously and spanning many time scales (shown for a $10^{-5}$ cm grain in cold interstellar gas). The rotational dynamics of a grain is rather complex. The internal alignment introduced by Purcell (1979) was thought to be slower than precession until Lazarian & Draine (1999b, henceforth LD99b) showed that it happens $10^6$ times faster when relaxation induced by nuclear spins is accounted for (approximately $10^4$ s for $10^{-5}$ cm grains)](grain.eps){height="6cm"}
A very different process of grain spin-up can be found in a very important, but for a long time under-appreciated, work by Dolginov & Mytrophanov (1976). These authors considered differential scattering of photons of right and left circular polarization by an irregular dust grain. As the size of the irregularities gets comparable with the wavelength, it is natural that the interaction of a grain with photons will depend on the photon polarization. Unpolarized light can be represented as a superposition of equal numbers of left and right circularly polarized photons. Therefore it is clear that the interaction with photons of a particular polarization would deposit angular momentum onto the grain. The authors concluded that for typical diffuse ISM conditions this process should induce grain rotation at suprathermal velocities. However, while Purcell’s torques became textbook material, radiative torques had to wait 20 years before they were reintroduced to the field (Draine 1996, Draine & Weingartner 1996, 1997). The minimal rotational velocity of a grain is the velocity of its Brownian motion. This can be characterized by a temperature that is somewhere between that of the gas and the dust. In the case of PAHs or very small grains emitting copious microwave radiation as they rotate (Draine & Lazarian 1998) the effective rotational temperature may be subthermal (see discussion in Lazarian & Yan 2003). It was realized by Martin (1971) that rotating charged grains will develop a magnetic moment and the interaction of this moment with the interstellar magnetic field will result in grain precession. The characteristic time for the precession was found to be comparable with $t_{gas}$. However, soon a process that renders a much larger magnetic moment was discovered (Dolginov & Mytrophanov 1976). This process is the Barnett effect, which is the converse of the Einstein-de Haas effect.
If in the Einstein-de Haas effect a paramagnetic body starts rotating during remagnetizations as its flipping electrons transfer the angular momentum (associated with their spins) to the lattice, in the Barnett effect the rotating body shares its angular momentum with the electron subsystem, causing magnetization. The magnetization is directed along the grain angular velocity and the value of the Barnett-induced magnetic moment is $\mu\approx 10^{-19}\omega_{(5)}$ erg gauss$^{-1}$ (where $\omega_{(5)}\equiv \omega/10^5{\rm s}^{-1}$). Therefore the Larmor precession has a period $t_{Lar}\approx 3\times 10^6 B_{(5)}^{-1}$ s. Nevertheless, suprathermal, thermal and subthermal grain rotation are just components of the complex grain dynamics. As any solid body, an interstellar grain can rotate about 3 different body axes. As a result grains tumble while rotating. This effect attracted the attention of early researchers (see Jones & Spitzer 1967) till Purcell (1979) identified internal relaxation within grains as the process that can suppress grain rotation about all axes but the axis corresponding to the grain's maximal moment of inertia (henceforth the axis of maximal inertia). Indeed, consider a spheroidal grain, whose kinetic energy can be presented as (see Lazarian & Roberge 1997) $ E(\theta)=\frac{J^2}{2I_{max}}\left(1+\sin^2\theta (h-1)\right), $ where $\theta$ is the angle between the axis of maximal inertia and the grain angular momentum and $h$ is the ratio of the maximal to the perpendicular moment of inertia. In the absence of external torques the grain angular momentum is preserved. The minimum of grain energy corresponds therefore to $\theta=0$, i.e. to the grain rotating exactly about the axis of maximal inertia. As internal dissipation decreases kinetic energy, it is natural that $\theta=0$ is the expected state of a grain subjected to fast internal dissipation. Purcell (1979) introduced a new process of internal dissipation which he termed “Barnett relaxation”. This process may be easily understood.
We know that a freely rotating grain preserves the direction of ${\bf J}$, while the angular velocity precesses about ${\bf J}$ and in grain body axes. We learned earlier that the Barnett effect results in a magnetization vector parallel to $\vec \Omega$. As a result, the Barnett magnetization will precess in body axes and cause paramagnetic relaxation. The “Barnett equivalent magnetic field”, i.e. the equivalent external magnetic field that would cause the same magnetization of the grain material, is $H_{BE}=5.6 \times10^{-3} \omega_{(5)}$ G, which is much larger than the interstellar magnetic field. Therefore the Barnett relaxation happens on the time scale $t_{Bar}\approx 4\times 10^7 \omega_{(5)}^{-2}$ s, i.e. essentially instantly compared to the time that it takes to damp grain rotation for typical molecular cloud conditions. An even stronger relaxation process has been identified recently by Lazarian & Draine (1999a). They termed it “nuclear relaxation”. This is an analog of the Barnett relaxation effect that involves nuclei. Similarly to unpaired electrons, nuclei tend to get oriented in a rotating body. However the nuclear analog of the “Barnett equivalent” magnetic field is much larger, and Lazarian & Draine (1999a) concluded that the nuclear relaxation can be a million times faster than the Barnett relaxation. Why would the actual relaxation rate matter? The rate of internal relaxation couples grain rotational and vibrational degrees of freedom. LD99b showed that this results in grain “thermal flipping”. Such flipping averages out Purcell’s torques and results in the grain being “thermally trapped” in spite of the presence of uncompensated torques. Whether a grain gets “thermally trapped” depends on its size (with grains smaller than a critical size $a_c$ rotating thermally). While Barnett and inelastic relaxation (see also Lazarian & Efroimsky 1999) result in $a_c$ equal to or less than $10^{-5}$ cm, the nuclear internal relaxation provides $a_c\sim 10^{-4}$ cm.
This means that most grains rotate thermally in the presence of Purcell’s torques. The exception to this thermalization are radiative torques, which are not fixed in grain coordinates. Such torques can spin up dust in spite of thermal flipping. What aligns grains? ======================= [**Paramagnetic Alignment**]{} The Davis-Greenstein (1951) mechanism (henceforth D-G mechanism) is based on the paramagnetic dissipation experienced by a rotating grain. Paramagnetic materials contain unpaired electrons which get oriented by the interstellar magnetic field ${\bf B}$. The orientation of spins causes grain magnetization, and the latter varies as the vector of magnetization rotates in grain body coordinates. This causes paramagnetic losses at the expense of the grain rotational energy. Note that if the grain rotational velocity ${\vec \Omega}$ is parallel to ${\bf B}$, the grain magnetization does not change with time and therefore no dissipation takes place. Thus the paramagnetic dissipation acts to decrease the component of ${\vec \Omega}$ perpendicular to ${\bf B}$ and one may expect that eventually grains will tend to rotate with ${\vec \Omega}\| {\bf B}$, provided that the time of relaxation $t_{D-G}$ is much shorter than $t_{gas}$, the time of randomization through chaotic gaseous bombardment. In practice, the latter condition is difficult to satisfy. For $10^{-5}$ cm grains in the diffuse interstellar medium $t_{D-G}$ is of the order of $7\times 10^{13}a_{(-5)}^2 B^{-2}_{(5)}$ s, while $t_{gas}$ is $3\times 10^{12}n_{(20)}T^{-1/2}_{(2)} a_{(-5)}$ s (see table 2 in Lazarian & Draine 1997) if the magnetic field is $5\times 10^{-6}$ G and the temperature and density of gas are $100$ K and $20$ cm$^{-3}$, respectively. However, at the time when it was introduced, in view of the uncertainties in interstellar parameters, the D-G mechanism looked plausible.
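A quick way to see why the D-G mechanism is too weak is to compare the timescales quoted above at their fiducial normalizations ($a = 10^{-5}$ cm, $B = 5\times 10^{-6}$ G, $n = 20$ cm$^{-3}$, $T = 100$ K), where every normalization factor equals unity; the following sketch just restates that order-of-magnitude arithmetic (the prefactors are those quoted in the text, so treat the numbers as estimates only):

```python
# At the fiducial values all normalization factors in the quoted scalings are
# unity, so each timescale reduces to its prefactor (in seconds):
t_DG = 7e13    # Davis-Greenstein paramagnetic relaxation time
t_gas = 3e12   # randomization time through gaseous bombardment
t_Lar = 3e6    # Larmor precession period
t_Bar = 4e7    # Barnett internal relaxation time

# D-G alignment requires t_DG << t_gas, which fails here:
assert t_DG > t_gas
# while Larmor precession and Barnett relaxation are far faster than t_gas:
assert t_Lar < t_Bar < t_gas

print(t_DG / t_gas)  # ~23: D-G relaxation loses to gaseous randomization
```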
The first detailed analytical treatment of the problem of D-G alignment was given by Jones & Spitzer (1967), who described the alignment of ${\bf J}$ using a Fokker-Planck equation. This approach allowed them to account for magnetization fluctuations within the grain material and thus provided a more accurate picture of the ${\bf J}$ alignment. The first numerical treatment of D-G alignment was presented by Purcell (1969). By that time it became clear that the D-G mechanism is too weak to explain the observed grain alignment. However, Jones & Spitzer (1967) noticed that if interstellar grains contain superparamagnetic, ferro- or ferrimagnetic (henceforth SFM) inclusions[^2], the $t_{D-G}$ may be reduced by orders of magnitude. Since $10\%$ of the atoms in interstellar dust are iron, the formation of magnetic clusters in grains was not far-fetched (see Martin 1995). However, detailed calculations in Lazarian (1997) and Roberge & Lazarian (1999) showed that the achievable alignment cannot account for the observed polarization coming from molecular clouds provided that dust grains rotate thermally. This is the consequence of thermal fluctuations within the grain material. These internal magnetic fluctuations randomize grain orientation with respect to the magnetic field if the grain body temperature is close to the rotational temperature. Purcell (1979) pointed out that fast rotating grains are immune to both gaseous and internal magnetic randomization. Thermal trapping limits the range of grain sizes for which Purcell’s torques can be efficient (Lazarian & Draine 1999ab). Grains with sizes larger than the wavelength of the incident light can be spun up by the incoming starlight, however (see Draine & Weingartner 1996). [**Mechanical Alignment**]{} The Gold (1951) mechanism is a process of mechanical alignment of grains. Consider a needle-like grain interacting with a stream of atoms.
Assuming that collisions are inelastic, it is easy to see that every bombarding atom deposits angular momentum $\delta {\bf J}= m_{atom} {\bf r}\times {\bf v}_{atom}$ onto the grain, which is directed perpendicular to both the needle axis ${\bf r}$ and the velocity of atoms ${\bf v}_{atom}$. It is obvious that the resulting grain angular momenta will be in the plane perpendicular to the direction of the stream. It is also easy to see that this type of alignment will be efficient only if the flow is supersonic[^3]. Thus the main issue with the Gold mechanism is to provide a supersonic drift of gas and grains. Gold originally proposed collisions between clouds as the means of enabling this drift, but later papers (Davis 1955) showed that the process could only align grains over limited patches of interstellar space, and thus the process cannot account for the ubiquitous grain alignment in the diffuse medium. Suprathermal rotation, introduced in Purcell (1979), persuaded researchers that mechanical alignment is marginal. Indeed, fast rotation makes it difficult for gaseous bombardment to align grains. However, two new developments must be kept in mind. First of all, a number of papers proved that mechanical alignment of suprathermally rotating grains is possible (Lazarian 1995, Lazarian & Efroimsky 1996, Efroimsky 2002). Moreover, recent work on grain dynamics (Lazarian & Yan 2002, Yan & Lazarian 2003) proved that MHD turbulence can endow grains with supersonic velocities. While we do not believe that mechanical alignment is the dominant process, it should be kept in mind when analyzing observations (see Rao et al. 1998). [**Alignment via Radiative Torques**]{} Anisotropic starlight radiation can both spin up the grains and align them. This was first realized by Dolginov & Mytrophanov (1976), but this work came before its time. The researchers did not have reliable means to study the dynamics of grains and the impact of their work was marginal.
Before Bruce Draine realized that the torques can be treated with the versatile discrete dipole approximation (DDA) code (Draine & Flatau 1994), the radiative torque alignment was very speculative. For instance, earlier on, difficulties associated with the analytical approach to the problem were discussed in Lazarian (1995a). However, very soon after that Draine (1996) modified the DDA code to calculate the torques acting on grains of arbitrary shape. His work revolutionized the field! The magnitudes of the torques were found to be substantial and present for grains of various irregular shapes (Draine 1996, Draine & Weingartner 1996). After that it became impossible to ignore these torques. One of the problems of the earlier treatment was that in the presence of anisotropic radiation the torques will change as the grain aligns, and this may result in a spin-down. Moreover, an anisotropic flux of radiation will deposit angular momentum which is likely to overwhelm the rather weak paramagnetic torques. These sorts of questions were addressed by Draine & Weingartner (1997), and it was found that for most of the grain shapes tried the torques tend to align ${\bf J}$ along the magnetic field. The reason for that is as yet unclear, and some caution is needed as the existing treatment ignores the dynamics of crossovers, which is very important for the alignment of suprathermally rotating grains. A recent work by Weingartner & Draine (2003) treats flipping of grains in the presence of monochromatic radiation. What is the Future Work? -------------------- Observational testing of alignment is extremely important. Both the dependences of the polarization degree on wavelength that follow the Serkowski law (Serkowski 1973) and the studies of changes of the polarization degree with wavelength done in the Far Infrared (see Hildebrand 2000) are consistent with theoretical predictions (see discussion in Lazarian, Goodman & Myers 1997).
According to Lazarian (2003), the study of grain alignment at the diffuse/dense cloud interface by Whittet et al. (2001) is suggestive that grains there are being aligned by radiative torques. Radiative torques look like the most attractive mechanism to align grains in molecular clouds. However, more theoretical work is required to understand why grains subjected to anisotropic radiation get preferentially aligned with their long axes perpendicular to the magnetic field. [**Acknowledgments.**]{} I thank H. Yan for suggestions and help. The research is supported by the NASA grant 0830 300 N665 736. [12]{} Bradley, J.P. 1994, Science, 265, 925 Davis, L. 1955, Vistas in Astronomy, ed. A. Beer, 1, 336 Davis, L. & Greenstein, J.L. 1951, ApJ, 114, 206-240 Dolginov, A.Z. & Mytrophanov, I.G. 1976, Ap&SS, 43, 291-317 Draine, B.T. 1996, in Polarimetry of the Interstellar Medium, eds. Roberge, W.G. and Whittet, D.C.B., A.S.P. Vol. 97, 16-25 Draine, B.T., & Flatau, P.J. 1994, J.Opt.Soc.Am.A., 11, 1491 Draine, B.T. & Lazarian, A. 1998a, ApJ, 494, L19-L22 ------------------------------------------------------------------------ 1998b, ApJ, 508, 157-179 ------------------------------------------------------------------------ 1999, ApJ, 512, 740-754 Draine, B.T. & Weingartner, J.C. 1996, ApJ, 470, 551-565 ------------------------------------------------------------------------ 1997, ApJ, 480, 633-646 Efroimsky, M. 2002, ApJ, 575, 886-899 Gold, T. 1951, Nature, 169, 322-323 Hall, J.S. 1949, Science, 109, 166-168 Hildebrand, R.H. 2000, PASP, 112, 1215-1235 Hiltner, W.A. 1949, ApJ, 109, 471 Jones, R.V., & Spitzer, L., Jr 1967, ApJ, 147, 943-964 Lazarian, A.
1995, ApJ, , 453, 229-237 ------------------------------------------------------------------------ 1997a, ApJ, 483, 296-308 ------------------------------------------------------------------------ 1997b, MNRAS, 288, 609-617 ------------------------------------------------------------------------ 2000, in “Cosmic Evolution and Galaxy Formation”, ASP v. 215, eds. Jose Franco, Elena Terlevich, Omar Lopez-Cruz, p. 69-79, astro-ph/0003314 ------------------------------------------------------------------------ 2003, Journal of Quantitative Spectroscopy and Radiative Transfer, 79, 881 Lazarian, A., Goodman, A.A. & Myers, P.C. 1997, ApJ, 490, 273-280 Lazarian, A., & Efroimsky, M. 1996, ApJ, 466, 274-281 ------------------------------------------------------------------------ 1999, MNRAS, 303, 673-684 Lazarian, A., & Draine, B.T., 1997, ApJ, 487, 248-258 ------------------------------------------------------------------------ 1999a, ApJ, 520, L67-L70 ------------------------------------------------------------------------ 1999b, ApJ, 516, L37-L40 Lazarian, A., & Prunet, S. 2002, in proc. of the AIP conf. “Astrophysical Polarized Backgrouds”, eds S. Cecchini, S. Cortiglioni, R. Sault, and C. Sbarra, p.32-p.44 Lazarian, A., Yan, H. 2002, ApJ, 566, L105-L108 Lazarian, A. & Yan, H. 2003, to appear in the proceedings of Astrophysics Dust, astro-ph Lazarian, A., & Roberge, W.G., 1997a ApJ, 484, 230-237 ------------------------------------------------------------------------ 1997b, MNRAS, 287, 941-946 Martin, P.G. 1971, MNRAS, 153, 279-286 ------------------------------------------------------------------------ 1995, ApJ, 445, L63-L66 Purcell, E.M. 1969, On the Alignment of Interstellar Dust, Physica, 41, 100-127 ------------------------------------------------------------------------ 1975, in Dusty Universe, eds. G.B. Field & A.G.W. Cameron, New York, Neal Watson, p. 
155-165 ------------------------------------------------------------------------ 1979, ApJ, 231, 404-416 Purcell, E.M., & Spitzer, L., Jr 1971, ApJ, 167, 31-62 Rao, R, Crutcher, R.M., Plambeck, R.L., Wright, M.C.H. 1998, ApJ, 502, L75-L79 Roberge, W.G. 1996 in Polarimetry of the Interstellar Medium, eds, Roberge W.G. and Whittet, D.C.B., A.S.P. Vol. 97, p. 401-416 Roberge, W.G., & Lazarian, A. 1999, MNRAS, 305, 615-630 Serkowski, K. 1973, in IAU Symp. 52, Interstellar Dust and Related Topics, ed. J.M. Greenberg & H.C. van de Hulst (Dordrecht: Kluwer), 145-160 Whittet, D.C.B., Gerakines, P.A., Hough, J.H. & Snenoy 2001, ApJ, 547, 872-884 Weingartner, J.C., & Draine, B.T. 2003, ApJ, 589, 289 Yan, H. & Lazarian, A. 2003a, ApJ, 592, 33L [^1]: Additional interest to grain alignment arises from recent attempts to separate the polarized CMB radiation from the polarized foregrounds (see Lazarian & Prunet 2002 for a review). [^2]: The evidence for such inclusions was found much later through the study of interstellar dust particles captured in the atmosphere (Bradley 1994). [^3]: Otherwise grains will see atoms coming not from one direction, but from a wide cone of directions (see Lazarian 1997a) and the efficiency of alignment will decrease.
--- abstract: 'An energy functional for orbital based $O(N)$ calculations is proposed, which depends on a number of non orthogonal, localized orbitals larger than the number of occupied states in the system, and on a parameter, the electronic chemical potential, determining the number of electrons. We show that the minimization of the functional with respect to overlapping localized orbitals can be performed so as to attain directly the ground state energy, without being trapped at local minima. The present approach overcomes the multiple minima problem present within the original formulation of orbital based $O(N)$ methods; it therefore makes it possible to perform $O(N)$ calculations for an arbitrary system, without including any information about the system bonding properties in the construction of the input wavefunctions. Furthermore, while retaining the same computational cost as the original approach, our formulation allows one to improve the variational estimate of the ground state energy, and the energy conservation during a molecular dynamics run. Several numerical examples for surfaces, bulk systems and clusters are presented and discussed.' address: - 'Department of Physics, The Ohio State University, Columbus OH 43210.' - | Institut Romand de Recherche Numérique en Physique des Matériaux (IRRMA),\ IN-Ecublens, 1015 Lausanne, Switzerland. author: - Jeongnim Kim - 'Francesco Mauri[@FMAddress] and Giulia Galli' title: '**Total energy global optimizations using non orthogonal localized orbitals**' --- Introduction ============ Most electronic structure calculations performed nowadays in condensed matter physics are based on a single particle orbital formulation. Within this framework, the ground state energy ($E_{\rm 0}$) of a multi-atomic system is obtained by solving a set of eigenvalue equations. 
Until recently, this has been accomplished by directly searching for the eigenstates of the single particle Hamiltonian ($\hat H$), which in general are extended states, e.g. Bloch states in a periodic system [@rassegna92]. In the last few years, methods for electronic structure (ES) calculations have been introduced, which are based on a Wannier-like representation of the electronic wave functions [@GP92; @WT92; @MGC93; @MG94; @ODGM93; @ODGM94; @K93]. The main motivation for choosing such a representation was the search for methods for which the computational effort scales linearly with system size ($O(N)$ methods). Very recently, real-space Wannier-like formulations were also used to describe the response of an insulator to an external electric field [@NV94CM94]. Within these approaches, a suitably defined total energy functional ([**E**]{}) is minimized with respect to orbitals constrained to be localized in finite regions of real space, called localization regions. The minimization of the energy functional does not require the computation of either eigenvalues or eigenstates of $\hat H$. In the absence of localization constraints, one can prove [@MGC93] that the absolute minimum of [**E**]{} ($\tilde{E_{\rm 0}}$) coincides with $E_{\rm 0}$. In the presence of localization constraints, a variational approximation to the electronic wave functions is introduced and therefore $\tilde{E_{\rm 0}}$ lies above $E_{\rm 0}$. However, the difference between $\tilde{E_{\rm 0}}$ and $E_{\rm 0}$ can be reduced in a systematic way, by increasing the size of the localization regions. We note that localization constraints do not introduce any approximation when the resulting localized orbitals can be obtained by a unitary transformation of the occupied eigenstates.
Therefore the use of localized orbitals is well justified for, e.g., periodic insulators, for which exponentially localized Wannier functions can be constructed by a unitary transformation of occupied Bloch states [@K59K73]. The minimization of the functional [**E**]{} with respect to [*extended*]{} states can be easily performed so as to lead directly to the ground state energy $E_{\rm 0}$, without traps at local minima or metastable configurations [@MG94]. On the contrary, the minimization of [**E**]{} with respect to localized orbitals can lead to a variety of minima [@MG94; @ODGM94]. In order to attain the minimum representing the ground state, information about the bonding properties of the system has to be included in the input wavefunctions. This implies a knowledge of the system that may be available only in particular cases, and it constitutes the major drawback of the orbital based $O(N)$ method, which has otherwise been shown to be an effective framework for large scale quantum simulations [@GM94]. In this paper, we propose a functional for orbital based $O(N)$ calculations, whose minimization with respect to localized orbitals leads directly to a physical approximation of the ground state, without traps at local minima. This overcomes the multiple minima problem present within the original formulation [@MGC93; @MG94] and makes it possible to perform $O(N)$ calculations for an arbitrary system, with totally unknown bonding properties. The present formulation also has other advantages with respect to the original one. While retaining the same computational cost, it allows one to decrease the error in the variational estimate of $E_{\rm 0}$, for a given size of the localization regions, and to improve the energy conservation during a molecular dynamics run.
The novel functional depends on a number of electronic orbitals ($M$) larger than the number of occupied states ($N/2$) of the $N$-electron system, and contains a parameter $\eta$ determining the total charge. During the functional minimization $\eta$ is varied until the total charge of the system equals the total number of electrons; thus when convergence is achieved, i.e. the ground state is attained, the value of $\eta$ coincides with that of the electronic chemical potential $\mu$. Once the ground state is obtained for a given ionic configuration, the corresponding wave functions and ionic positions can be used as a starting point for molecular dynamics simulations, which are then performed at fixed chemical potential. This is at variance with conventional ES calculations based on orbital formulations, where $N$ is always fixed, e.g. by imposing orthonormality constraints. Similar to the present approach, $O(N)$ calculations based on a density matrix formulation [@LNV93D93] are performed at fixed chemical potential. Consistently, the functional describing the total energy does not have multiple minima in the subspace of localized density matrices. However, whereas a density matrix approach presupposes the use of all the occupied and unoccupied states (i.e. a number of states equal to $n_{\rm basis}$, where $n_{\rm basis}$ is the number of basis functions), in our formulation only a limited number of unoccupied states needs to be added to the set of occupied states, regardless of the basis set size. Therefore the present formulation can also be efficiently applied in computations where the number of basis functions is much larger than the number of occupied states in the system (e.g. first principles plane wave calculations).
The rest of the paper is organized as follows: in section II we present a generalization of the original formulation of orbital based $O(N)$ approaches; we first introduce an energy functional which depends on a number of orbitals larger than the number of occupied states, and we then discuss its properties and the role of the chemical potential. In section III we present the results of tight binding calculations based on the generalized $O(N)$ method, showing that the novel approach overcomes the multiple minima problem, and allows one to improve on variational estimates of the ground state properties and on the efficiency of molecular dynamics simulations. Conclusions are given in Section IV.

Electronic structure calculations at a given chemical potential
===============================================================

Definition of the functional
----------------------------

We consider the energy functional [**E**]{} defined in Ref. 5, which depends on $N/2$ occupied orbitals, for an $N$-electron system. We generalize [**E**]{} so as to depend on an arbitrary number $M$ of orbitals, which can be larger than the number of occupied states $N/2$. For simplicity, we consider a non self-consistent Hamiltonian; however the conclusions of this section can easily be extended to self-consistent Hamiltonians. The energy functional is written as: $${\bf E}[\{\phi\},\eta, M] = 2 \sum_{ij=1}^{M} Q_{ij} < \phi_j | \hat H -\eta | \phi_i > + \eta N. \label{deffunc}$$ Here $\{\phi\}$ is a set of $M$ overlapping orbitals, $\hat H$ the single particle Hamiltonian, $\eta$ a parameter and [**Q**]{} a ($M$ $\times$ $M$) matrix: $${\bf Q} = 2 {\bf I} -{\bf S}. \label{Q}$$ [**S**]{} is the overlap matrix: $S_{ij} = <\phi_i|\phi_j>$ and [**I**]{} is the identity matrix. This definition of the [**Q**]{} matrix corresponds to truncating the series expansion of the inverse of the overlap matrix at first order (${\cal N} = 1$, in the notation of Ref. 5).
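As a concrete illustration, the functional of Eqs. (1)-(2) can be evaluated with a few lines of linear algebra. The sketch below (not the code used in this work) assumes a real, non self-consistent Hamiltonian matrix and stores the $M$ orbitals as the columns of a coefficient matrix:

```python
import numpy as np

def energy_functional(C, H, eta, n_elec):
    """Eq. (1): E = 2 sum_ij Q_ij <phi_j|H - eta|phi_i> + eta N,
    with Q = 2I - S (Eq. (2)) and S_ij = <phi_i|phi_j>.
    C is (n_basis x M); column i holds the expansion of |phi_i>."""
    M = C.shape[1]
    S = C.T @ C                                     # overlap matrix
    Q = 2.0 * np.eye(M) - S                         # first-order inverse of S
    Hm = C.T @ (H - eta * np.eye(H.shape[0])) @ C   # <phi_j|H - eta|phi_i>
    return 2.0 * np.trace(Q @ Hm) + eta * n_elec
```

For the exact occupied eigenstates padded with zero-norm orbitals, the value reduces to twice the sum of the occupied eigenvalues, independently of $\eta$.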
The charge density is defined as $$\rho({\bf r}) = 2 \sum_{ij=1}^M <\phi_j|{\bf r}><{\bf r}|\phi_i> Q_{ij}. \label{cd}$$ For $M = N/2$, one recovers the original energy functional for $O(N)$ calculations. We note that the energy functional in Eq. (1) can be expressed in terms of a density matrix $\hat \sigma[\{\phi\}]$ : $${\bf E}[\{\phi\},\eta,M]= 2 Tr[ (\hat H-\eta) \hat \sigma] +\eta N \label{densmat}$$ Here the trace is computed over the $ n_{\rm basis}$ functions used for the expansion of the $\{\phi\}$, and $\hat \sigma[\{\phi\}]= \sum_{ij=1}^M|\phi_i>Q_{ij}<\phi_j|$. Before discussing the use of the functional of Eq. (1) within a localized orbital formulation, it is useful to assess some of its general properties. \(i) ${\bf E}[\{\phi\},\eta, M]$ [*is invariant under unitary transformations*]{} of the type $\phi^{\prime}_i = \sum_{j=1}^{M} U_{ij} \phi_j$, where [**U**]{} is a ($M\times M$) unitary matrix. \(ii) [*Orbitals with vanishing norms do not give any contribution to the energy functional ${\bf E}[\{\phi\},\eta, M]$.*]{} If the overlap matrix [**S**]{} entering Eq. (1) has ($M - M'$) eigenvalues equal to zero, then a unitary transformation [**U**]{} exists, such that $\{\phi^{\prime}\}$ satisfies the condition: $$<\phi^{\prime}_i | \phi^{\prime}_i> = 0,\;\;\; {\rm for} \;\;\; i=M'+1,...,M. \label{zero}$$ Under this condition: $${\bf E}[\{\phi\},\eta, M] = {\bf E}[\{\phi^{\prime}\},\eta, M']. \label {original}$$ We note that if [**Q**]{} is replaced by [**S**]{}$^{-1}$ in the definition of ${\bf E}[\{\phi\},\eta, M]$ (Eq. (\[deffunc\])), then orbitals with a vanishing norm give a non zero contribution to the total energy, since for $<\phi_i | \phi_i> \rightarrow 0$ the eigenvalues of [**S**]{}$^{-1}$ go to infinity. Therefore the functional ${\bf E}[\{\phi\},\eta, M]$, with [**Q**]{} replaced by [**S**]{}$^{-1}$, does not satisfy property (ii). (iii) [*The ground state energy $E_0$ is a stationary point of*]{} ${\bf E}[\{\phi\},\eta, M]$. 
In order to prove this statement, we consider the following set of orbitals $\{\phi^{0} \}$: $$\begin{aligned} |\phi^{0}_i> & = & |\chi_i> \;\;\; {\rm for} \;\;\; i=1,N/2 \nonumber \\ & & |0> \;\;\; {\rm for} \;\;\; i=N/2+1, M\end{aligned}$$ where $|\chi_k>$ are the $n_{\rm basis}$ eigenvectors of $ \hat H$ with eigenvalue $\epsilon_k$. Hereafter we assume that $<\chi_k|\chi_k>=1$ and $\epsilon_k \le \epsilon_{k+1}$. The set $\{\phi^{0} \}$ fulfills Eq.(\[zero\]), and therefore ${\bf E}[\{\phi^0\},\eta, M] = {\bf E}[\{\phi^0\},\eta, N/2] = E_0$. In addition, the set $\{\phi^{0}\}$ is a stationary point of ${\bf E}[\{\phi\},\eta,M]$, since $\delta {\bf E} / \delta \phi_k|_{\{\phi^{0}\}}=0$, where $${\delta {\bf E} \over \delta \phi_k}= 4 \sum_{j=1}^{M} [ (\hat H -\eta )| \phi_j > (Q_{jk}) - | \phi_j > < \phi_j| (\hat H - \eta) | \phi_k > ].$$ \(iv) [*The stationary point*]{} $E_0$ [*is a minimum of*]{} ${\bf E}[\{\phi\},\eta, M]$ [*if*]{} $\eta$ [*is equal to the electronic chemical potential*]{} $\mu$. We will only consider electrons at zero temperature, and therefore we choose $\mu$ such that $\epsilon_{N/2} < \mu < \epsilon_{N/2 + 1}$. This property will be proved in the next section. Role of the chemical potential ------------------------------ Before giving a proof of property (iv) stated in section II.A, we discuss a simple example which is useful to illustrate the role played by $\eta$ in the minimization of the energy functional [**E**]{}. For this purpose, we evaluate the functional ${\bf E}[\{\phi\},\eta, M]$ for a set of $M$ eigenstates of the Hamiltonian. In particular, we choose a set $\{ \phi \}$ such that $|\phi_i> = a_i |\chi_i>$, with arbitrary $a_i$. In this case the energy functional becomes: $${\bf E} [\{a\}, \eta, M] = 2 \sum_{i=1}^M (\epsilon_i -\eta) (2 -a_i^2)a_i^2 + \eta N \label{efi}$$ As illustrated in Fig. 
1, the function $(\epsilon_i -\eta) (2 -a_i^2)a_i^2$ has a minimum at $a_i=0$ if $\epsilon_i >\eta$, and a minimum at $a_i=1$ if $\epsilon_i <\eta$. Thus the functional ${\bf E} [\{a\}, \eta, M]$ has a minimum for a set $\{a^0\}$ such that $a^0_i=1$ if $\epsilon_i <\eta$, and $a^0_i=0$ if $\epsilon_i >\eta$. At the minimum, Eq. (\[efi\]) becomes $$E_{\rm min} = 2 \sum_{i=1}^{M'}\epsilon_i + \eta (N-2M'),$$ where $\epsilon_{M'} < \eta< \epsilon_{M'+1}$ and the total charge of the system is $\rho_{\it tot} = 2 \sum_{i=1}^{M} (2-a_i^2)a_i^2 = 2 M'$. We can now choose $\eta$ so that $\rho_{\it tot}$ is equal to the actual number of electrons in the system. This is accomplished by setting $\epsilon_{N/2}<\eta <\epsilon_{N/2+1}$, i.e. by choosing $\eta$ equal to the electronic chemical potential $\mu$. We then have $\rho_{\it tot} = 2M'=N$ and $E_{\rm min}= E_0$. In order to give a general proof of property (iv) (section II.A), we show that the Hessian matrix ($h$) of the functional ${\bf E}[\{\phi\},\eta, M]$ at the ground state is positive definite, if $\eta = \mu$. The computation of the eigenvalues of $h$ follows closely the procedure used in Ref.  to calculate the Hessian matrix of ${\bf E}[\{\phi\},\eta,N/2]$ at the ground state. Since the functional ${\bf E}[\{\phi\},\eta,M]$ is invariant under unitary rotations of the $\{\phi\}$, we can write a generic variation of the wave function with respect to the ground state as $$\begin{aligned} |\phi^{0}_i> & = & |\chi_i> +| \Delta_i >\;\;\; {\rm for} \;\;\; i=1,N/2 \nonumber \\ & & |0> +| \Delta_i >\;\;\; {\rm for} \;\;\; i=N/2+1, M \label{displac}\end{aligned}$$ where $$| \Delta_i > = \sum_{l=1}^{n_{\rm basis}} c^i_l | \chi^0_l >. \label{delta}$$ By inserting Eq. (\[displac\]) into Eq. (1), it is straightforward to show that the first order term in the $\{c\}$ coefficients vanishes for any value of the parameter $\eta$, consistently with property (iii) stated in section II.A.
The remaining second order term can be written as follows: $$\begin{aligned} E^{(2)} & = & \sum_{i=1}^{N/2} \sum_{m=N/2+1}^{n_{\rm basis}} 2 [\epsilon_m - \epsilon_i] (c^i_m)^2 + \sum_{ij=1}^{N/2} 8 [ \eta - { {(\epsilon_i+\epsilon_j)}\over {2} } ] [{1\over \sqrt{2}} (c^i_j + c^j_i)]^2 + \nonumber \\ & & \sum_{i=N/2+1}^{M} \sum_{m=N/2+1}^{n_{\rm basis}} 4[\epsilon_m - \eta] (c^i_m)^2. \label{eigenmodes}\end{aligned}$$ The eigenvalues $ 2 [\epsilon_m - \epsilon_i]$ are independent of $\eta$ and always positive, whereas the eigenvalues $8[ \eta - (\epsilon_i+\epsilon_j)/2]$ and $4[\epsilon_m - \eta]$ are all positive if and only if $\epsilon_{N/2} < \eta < \epsilon_{N/2+1}$, i.e. if $\eta$ coincides with the chemical potential $\mu$. This proves property (iv) of section II.A.

$O(N)$ calculations with overlapping localized orbitals
=======================================================

Localization of orbitals and practical implementation
------------------------------------------------------

We now turn to the discussion of the functional defined in section II.A within a localized orbital formulation. The use of localized orbitals is a key feature in achieving linear system-size scaling [@MG94]. Orbitals are constrained to be localized in appropriate regions of space, called localization regions, i.e. they have non zero components only inside a given localization region and vanish outside it. The choice of the number of localization regions and of their centers is arbitrary. In the calculations that will be discussed in the next sections, we chose a number of localization regions equal to the number of atoms, each centered at an atomic site ($I$). We then associated the same number of localized orbitals (n$_s$) with each localization region, e.g. two and three localized orbitals for $M = N/2$ and $M = 3N/2$, respectively. We will present electronic structure calculations and molecular dynamics simulations of various carbon systems, carried out within a tight binding approach.
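Returning briefly to the filling example of section II.B: the single-orbital contribution $(\epsilon_i -\eta)(2 -a_i^2)a_i^2$ of Eq. (8) switches its minimum from $a_i=0$ to $a_i=1$ as $\epsilon_i$ crosses $\eta$, which is easy to confirm numerically (an illustrative check, not part of the method itself):

```python
import numpy as np

def orbital_term(a, eps, eta):
    """Single-orbital contribution (eps - eta)(2 - a^2) a^2 to Eq. (8)."""
    return (eps - eta) * (2.0 - a**2) * a**2

a = np.linspace(-0.2, 1.2, 1401)
# eps > eta: the minimum sits at a = 0, i.e. the orbital empties
a_empty = a[np.argmin(orbital_term(a, eps=1.0, eta=0.0))]
# eps < eta: the minimum sits at a = 1, i.e. the orbital fills
a_fill = a[np.argmin(orbital_term(a, eps=0.0, eta=1.0))]
```

Summing such terms over a spectrum reproduces the filled/empty pattern that fixes the total charge at $2M'$.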
We adopted the TB Hamiltonian proposed by Xu et al. [@xwch92; @ftn1], which includes non zero hopping terms only between the first nearest neighbors. In a tight-binding picture, a localization region centered on the atomic site $I$ can be identified with the set $\{LR_I\}$ of atoms belonging to the localization region. Atoms are included in $\{LR_I\}$ if they belong to the first N$_h$ nearest-neighbor shells of the center atom. Then, the localized orbital $|\phi_i^L>$, whose center is the $I$th atom, is expressed as $$|\phi_i^L> = \sum_{J\in \{LR_I\}} \sum_{l} C^i_{Jl} |\alpha_{Jl}>,$$ where $ |\alpha_{Jl}>$’s are the atomic basis functions of the atom $J$ and the index $l$ indicates the atomic components ($s, p_x, p_y$ or $p_z$). In our computations, the generalized energy functional was minimized with respect to the localized orbitals $\{\phi^L\}$ by performing a conjugate gradient (CG) procedure, both for structural optimizations and molecular dynamics simulations. For some calculations it was necessary to use a non zero Hubbard-like term [@xwch92] to prevent unphysical charge transfers. In this case the line minimization required in a CG procedure reduces to the minimization of an eighth-degree polynomial in the variation of the wavefunction along the conjugate direction. We performed an exact line minimization by evaluating the coefficients of the polynomial, and by solving iteratively for the polynomial roots.

The multiple minima problem
-----------------------------

As mentioned in the introduction, the major drawback of the original formulation of orbital based $O(N)$ calculations is the so-called multiple minima problem. Experience has shown that the minimization of ${\bf E}[\{\phi\},\eta, N/2]$ with respect to localized orbitals usually leads to a variety of minima [@MG94; @ODGM94], and that the physical properties of the minimum reached during a functional minimization depend upon the choice of the input wave functions.
If the input wave functions are constructed by taking advantage of bonding information about the ground state, then a minimum representing a physical approximation to the ground state may be reached, after an iterative minimization. On the contrary, if no information on the ground state is included in the localized orbitals from the start, the functional minimization usually leads to a local minimum, which is characterized by an unphysical charge density distribution. This is illustrated for a particular case in Table I and Fig. 2, where we present the results of a series of tight binding (TB) calculations using localized orbitals, for a 256 carbon atom slab. The slab, consisting of 16 layers, represents bulk diamond terminated by a C(111)-2 $\times$ 1 Pandey reconstructed surface on each side. We considered localization regions (LRs) extending up to second neighbors (N$_h$=$2$). We performed conjugate gradient minimizations of the electronic structure using two localized orbitals per LR (n$_s$=$2$), which corresponds to the case $M = N/2$ in Eq. (1), i.e. to the original formulation of $O(N)$ calculations. These minimizations were carried out by starting from different wave function inputs. The only calculation which led to a physical minimum was the one started with orbitals containing symmetry information about the system, as shown by comparing the results of Fig. 2C with those of direct diagonalization, reported in Fig. 3B. The other calculations led to unphysical minima: when starting with a totally random input (Fig. 2A), we found a local minimum with charged sites, located predominantly in the surface layers and in the middle of the slab. When starting from an atom-by-atom input (Fig. 2B) we obtained a local minimum corresponding to two differently charged surfaces, one positively and the other negatively charged.
The local minima problem present in the original $O(N)$ formulation can be illustrated with a simple one dimensional model.[@sardegna] We consider a linear chain with $N_{\rm site}$ sites and 2 $N_{\rm site}$ electrons in a uniform electric field of magnitude $F$, with Hamiltonian: $$\hat H= \sum_{K=1}^{N_{\rm site}} [E_{gap}|e_K><e_K|-FK(|e_K><e_K|+|g_K><g_K|)].$$ Here $|e_K>$ and $|g_K>$ are the highest and the lowest level of the isolated site $K$, respectively, and $E_{gap}$ is the splitting between these two levels. Since the hopping terms between different sites are set at zero, $|e_K>$ and $|g_K>$ are also eigenfunctions of the linear chain Hamiltonian. We now study the ground state of the system as a function of the electric field $F$. If $0<F<E_{gap}/(N_{\rm site}-1)$, the total energy of the system is minimized by the set of orbitals $|\phi^{0}_i>$ given by: $$|\phi^{0}_i> = |g_i> \;\;\; {\rm for} \;\;\; i=1,N_{\rm site}.\label{F0}$$ If $E_{gap}/(N_{\rm site}-1)<F<E_{gap}/(N_{\rm site}-2)$, the eigenvalue of $|g_1>$ is higher than that of $|e_{N_{\rm site}}>$, and therefore the total energy of the system is minimized by the following set of orbitals $|\phi^{0}_i>$: $$\begin{aligned} |\phi^{0}_i> & = & |g_{i+1}> \;\;\; {\rm for} \;\;\; i=1,N_{\rm site}-1, \nonumber \\ & & |e_{i}>\phantom{_{+1}} \;\;\; {\rm for} \;\;\; i=N_{\rm site}.\label{Fn0}\end{aligned}$$ In both cases, the total energy of the linear chain system can be obtained exactly within a localized orbital picture, by considering $N_{\rm site}$ LRs centered on atomic sites, which extend up to the first neighbors of a given site. We first describe the total energy of the system with the functional ${\bf E}[\{\phi\},\eta,N/2]$. Within this framework, the set $|\phi^{0}_i>$ which minimizes ${\bf E}[\{\phi\},\eta,N/2]$ in the presence of a small field, i.e. when $0<F<E_{gap}/(N_{\rm site}-1)$, is also a local minimum of ${\bf E}[\{\phi\},\eta,N/2]$ in the presence of a large field, i.e. 
when $E_{gap}/(N_{\rm site}-1)<F<E_{gap}/(N_{\rm site}-2)$. This can be easily seen from the second order expansion ($E^{(2)}$) of ${\bf E}[\{\phi\},\eta,N/2]$ around the set of orbitals defined in Eq. (\[F0\]): $$E^{(2)}= \sum_{i=1}^{N_{\rm site}} \sum_{m\in \{LR_i\}} 2 [E_{gap}-F(m-i)] (e^i_{m})^2 + \sum_{i=1}^{N_{\rm site}} \sum_{j\in \{LR_i\}} 8 [ \eta + F{i+j\over 2 } ] [{1\over \sqrt{2}} (g^i_{j} + g^j_{i})]^2,$$ where $g^i_{K}$ and $e^i_{K}$ are the projections of the vector $|\phi_i>-|\phi_i^0>$ on the states $|g_K>$ and $|e_K>$, respectively. If the orbitals are extended, the difference $(m-i)$ can be as large as ($N_{\rm site}-1$) and the eigenvalues $[E_{gap}-F(m-i)]$ can be negative when $E_{gap}/(N_{\rm site}-1)<F<E_{gap}/(N_{\rm site}-2)$. However, if the orbitals are localized the difference $(m-i)$ is smaller than ($N_{\rm site}-1$) and the eigenvalues $[E_{gap}-F(m-i)]$ remain positive even for $E_{gap}/(N_{\rm site}-1)<F<E_{gap}/(N_{\rm site}-2)$. We now turn to a description of the total energy of the linear chain system with the functional ${\bf E}[\{\phi\},\mu,M]$, where $M$ is larger than the number of occupied states $N/2$, e.g. $M=2N_{\rm site}$. It is straightforward to show that contrary to a description with ${\bf E}[\{\phi\},\eta,N/2]$, when using ${\bf E}[\{\phi\},\mu,M]$ the set of orbitals of Eq. (\[F0\]) is not a local minimum of the system in the presence of a large field. Indeed, according to Eq. (\[eigenmodes\]), the second order expansion $E^{(2)}$ is now given by: $$\begin{aligned} E^{(2)}&=& \sum_{i=1}^{N_{\rm site}} \sum_{m\in \{LR_i\}} 2 [E_{gap}-F(m-i)] (e^i_{m})^2 + \sum_{i=1}^{N_{\rm site}} \sum_{j\in \{LR_i\}} 8 [ \mu + F{i+j\over 2 } ] [{1\over \sqrt{2}} (g^i_{j} + g^j_{i})]^2+\nonumber \\ & & \sum_{i=N_{\rm site}+1}^{2N_{\rm site}} \sum_{m\in \{LR_i\}} 4[E_{gap} -Fm - \mu] (e^i_{m})^2.\end{aligned}$$ Here the LOs with indices $i$ and $i+N_{\rm site} $ are assigned to the localization region $\{LR_i\}$.
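The two ground-state configurations of Eqs. (15) and (16) amount to filling the $N_{\rm site}$ lowest of the $2N_{\rm site}$ site levels; a short script (illustrative, with hypothetical names) reproduces the level crossing at $F = E_{gap}/(N_{\rm site}-1)$:

```python
import numpy as np

def occupied(n_site, e_gap, F):
    """Labels of the n_site lowest levels of the chain (2 n_site electrons,
    spin degeneracy 2): g_K has energy -F K, e_K has energy E_gap - F K."""
    levels = [(-F * k, ('g', k)) for k in range(1, n_site + 1)]
    levels += [(e_gap - F * k, ('e', k)) for k in range(1, n_site + 1)]
    levels.sort(key=lambda t: t[0])
    return {lab for _, lab in levels[:n_site]}

# small field, F < E_gap/(N_site - 1): all g_K occupied (Eq. (15))
small = occupied(n_site=10, e_gap=1.0, F=0.05)
# large field, E_gap/(N_site - 1) < F < E_gap/(N_site - 2):
# g_1 empties in favor of e_{N_site} (Eq. (16))
large = occupied(n_site=10, e_gap=1.0, F=0.12)
```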
Both within an extended [*and*]{} a localized orbital picture, the eigenvalue $4[E_{gap}-FN_{\rm site} - \mu]$ is negative when $E_{gap}/(N_{\rm site}-1)<F<E_{gap}/(N_{\rm site}-2)$. This simple model shows that the extremum properties of the functionals ${\bf E}[\{\phi\},\eta,N/2]$ and ${\bf E}[\{\phi\},\mu,M]$ are in general different, and in particular that local minima of ${\bf E}[\{\phi\},\eta,N/2]$ are not necessarily so for ${\bf E}[\{\phi\},\mu,M]$. This suggests that the use of the functional ${\bf E}[\{\phi\},\mu,M]$ can overcome the multiple minima problem encountered within a formulation based on ${\bf E}[\{\phi\},\eta,N/2]$. This simple model also suggests the reason why the multiple minima problem should be overcome: the presence of the [*global*]{} variable $\mu$, together with the augmented variational freedom of extra orbitals added to the definition of the functional, can account for global changes taking place in the system.

Overcoming the multiple minima problem
----------------------------------------

We now present a series of numerical examples, showing that the minimization of the generalized functional ${\bf E}[\{\phi\},\eta, M]$ (Eq. (1)) with respect to localized orbitals can be performed without traps at local minima, as indicated by the simple model discussed in the previous section. We performed calculations for various carbon systems (bulk solids, surfaces, clusters and liquids), by again using LRs extending up to second neighbors (N$_h$=$2$). We considered three LOs per site (n$_s$=$3$), i.e. $M = 3 N/2$ in Eq. (1). In all cases, using n$_s$=$3$ was sufficient to overcome the multiple minima problem present in the original formulation. We note that the generalized functional, although it includes a number of localized orbitals larger than the number of occupied states, still allows one to carry out electronic minimizations and molecular dynamics simulations with a computational effort scaling linearly with system size. In Fig.
4, we show the energy and the charge per atom during a conjugate gradient minimization of ${\bf E}[\{\phi\},\eta, M]$, for a 256 carbon atom slab, starting from a totally random input. The system is the same as the one studied in the previous section with n$_s$=$2$. The minimization was started with $\eta = 20$ eV; the parameter was then decreased every 20 iterations, and finally set at 3.1 eV, which corresponds to the value of the chemical potential. As discussed in section II.B, for a given $\eta$ the integral of the charge density converges to a value which corresponds to filling all the orbitals with energies smaller than $\eta$. For example, for $\eta = 20$ eV the total charge per atom is equal to 6, i.e. all the $3 N/2$ orbitals are filled. Eventually, when $\eta= \mu$ the total charge becomes equal to the number of electrons in the system. The way $\eta$ is varied during a minimization is not unique; however the final value of $\eta$ must always be adjusted so as to obtain the correct charge in the system. It is seen in Table I that all the minimizations with n$_s$=$3$ converge to the same value, irrespective of the input chosen for the wave function. This value corresponds to a physical minimum, as shown in Fig. 3 where we compare the charge density distribution with that obtained by direct diagonalization.

Improvement on variational estimates of the ground state properties
--------------------------------------------------------------------

The use of the generalized functional and LOs not only overcomes the problem of multiple minima, but also improves the variational estimate of $E_0$, for a given size of the LRs. This is shown in Tables II and III, where we compare the results of calculations using the same LRs but different numbers of orbitals (n$_s$=2 and 3), for various carbon systems. The improvement is particularly impressive in the case of C$_{\rm 60}$, where we also performed an optimization of the ionic structure.
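The $\eta$ schedule used for the slab minimization above can be sketched schematically. In this toy version the converged charge at fixed $\eta$ is read off from known eigenvalues; in the actual $O(N)$ scheme no eigenvalues are available and the charge is monitored through the density of Eq. (3):

```python
import numpy as np

def converged_charge(eigs, eta):
    """Charge obtained once E[{phi}, eta, M] is minimized at fixed eta:
    all levels below eta end up doubly filled (section II.B)."""
    return 2 * int(np.count_nonzero(np.asarray(eigs) < eta))

def ramp_eta(eigs, n_elec, eta_start, step):
    """Lower eta stepwise (cf. the 20 eV -> 3.1 eV schedule) until the
    converged charge no longer exceeds the actual electron number."""
    eta = eta_start
    while converged_charge(eigs, eta) > n_elec:
        eta -= step
    return eta
```

With a toy spectrum of six levels and $N = 6$ electrons, ramping $\eta$ down from a large starting value leaves exactly $N/2 = 3$ levels filled.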
The error in the cohesive energy is decreased from 3 to 1.5 $\%$ by increasing n$_s$ from $2$ to $3$. Most importantly, the optimized ionic structure obtained with n$_s$=3 is in excellent agreement with that obtained with an extended orbital calculation. We note that localization constraints introduce a symmetry breaking in the system, i.e. LOs do not satisfy all the symmetry properties of the Hamiltonian eigenstates. In C$_{\rm 60}$ the symmetry breaking is large when using n$_s$=$2$; the deviations of the double and single bond lengths with respect to their average values are 3.5 and 6.3 $\%$, respectively. On the contrary, in the optimized geometry obtained with n$_s$=$3$ the symmetry breaking is very small (0.1 and 0.5 $\%$, for the double and single bonds, respectively), compared to the icosahedral structure. When using n$_s$=$2$, the ground state LOs are nearly orthonormal[@MG94], whereas minimizations with n$_s$=$3$ yield overlapping LOs. Indeed when using n$_s$=$3$, at the minimum the overlap matrix [**S**]{} has 2n$_s$ eigenvalues close to 1 and n$_s$ eigenvalues close to 0, and this condition can be satisfied with a non diagonal [**S**]{} matrix. We define a quantity measuring the orthogonality of the orbitals as $\Delta^2 = (\sum_{ij=1}^M (\delta_{ij} - S_{ij})^2)/M$. In the case of C$_{\rm 60}$, $\Delta^2$ is 2.5 $\times$ 10$^{-3}$ and 0.17 for n$_s$=$2$ and n$_s$=$3$, respectively. We also note that for various systems, the centers of the LOs $<{\bf r}^L>$, defined as $$<{\bf r}^L> = { \sum_K \sum_l <\phi^L|\alpha_{K,l}>({\bf r}_K)<\alpha_{K,l}|\phi^L> \over <\phi^L|\phi^L>},$$ were always found to be located at distances shorter than one bond length from the center of their own LRs, when using n$_s$=$3$. In the case of n$_s$=$2$, we instead found cases, e.g. the C$_{\rm 60}$ molecule, where some orbitals were centered far from their atomic sites and close to the border of their LRs.
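The orthonormality measure $\Delta^2$ quoted above for C$_{\rm 60}$ is straightforward to evaluate; the helper below (illustrative, not the authors' code) assumes the $M$ orbitals are stored as the columns of a coefficient matrix:

```python
import numpy as np

def ortho_deviation(C):
    """Delta^2 = sum_ij (delta_ij - S_ij)^2 / M for the M orbitals stored
    as the columns of C; zero for an orthonormal set."""
    M = C.shape[1]
    S = C.T @ C
    return float(np.sum((np.eye(M) - S) ** 2) / M)
```

A value near zero signals nearly orthonormal LOs (as found for n$_s$=2), while values of order $10^{-1}$ indicate genuinely overlapping LOs (as for n$_s$=3).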
Molecular dynamics simulations
------------------------------

In order to investigate the performance of the generalized functional (Eq. 1) in molecular dynamics (MD) simulations, we carried out MD runs for liquid carbon at low density (2 g cm$^{-3}$) and at 5000 K. We used a 64 atom cell with simple cubic periodic boundary conditions and only the $\Gamma$ point to sample the BZ. We used a cutoff radius of 2.45 Å for the hopping parameters entering the TB Hamiltonian and for the two body repulsive potential [@xwch92], and $U=8$ eV. In the case of l-C it was necessary to add a Hubbard-like term to the Hamiltonian, in order to prevent unphysical charge transfers during the simulations. Equilibration of the system was performed in the canonical ensemble by using a Nosé thermostat [@N84W85]. Within the original $O(N)$ approach, MD runs for l-C were found to be particularly demanding from the computational point of view, since they required many iterations (N$_{\rm iter}$) per ionic move (e.g. N$_{\rm iter}$=$300$ for $\Delta t$=30 a.u.) in order to minimize the energy functional [@MG94]. Most importantly, during the simulation the system could be trapped at a local minimum, evolve adiabatically from that minimum for some time, and suddenly jump to another minimum lower in energy. This shows up as a spike in the constant of motion of the system (E$_{\rm const}$), as can be seen in line (c) of Fig. 5, which displays E$_{\rm const}$ for a run performed with n$_s$=$2$. Because of local minima, a perfect conservation of energy could never be achieved with n$_s$=$2$, even by increasing N$_{\rm iter}$ to a very large number. When MD runs are performed with n$_s$=$3$, the problem of local minima is overcome; furthermore, a significant improvement in the conservation of energy can be achieved at the same computational cost as simulations with n$_s$=$2$. This is seen in Fig. 5 by comparing lines (b) and (c).
When the generalized functional is used, the accuracy of the energy conservation during an MD run is related only to the convergence of the electronic minimization scheme: a good conservation of energy can be obtained simply by increasing N$_{\rm iter}$. This is shown by line (a) in Fig. 5. We note that the behavior of E$_{\rm const}$ observed in all the simulations was not affected by the presence of the thermostat. This was checked by repeating all MD runs with three different masses (Q$_s$) for the Nosé thermostat (Q$_s$=1, 4, 100 in the same units). The structural properties of l-C computed from the MD runs with n$_s$=$3$ showed very good agreement with those previously obtained with n$_s$=$2$.

Conclusions
===========

We have presented a generalization of orbital based $O(N)$ approaches, which relies upon a novel functional depending on a number of localized states larger than the number of occupied states, and on a parameter which determines the total number of electrons in the system. We have shown that the minimization of this functional with respect to localized orbitals can be carried out without traps at local minima, irrespective of the input chosen for the wave functions. In this way, the multiple minima problem present in the original formulation is overcome, and $O(N)$ computations can be performed for an arbitrary system, without any prior knowledge of the bonding properties of the system being needed for the calculation input. We have also presented a series of tight binding calculations for various carbon systems, showing that the generalized $O(N)$ approach allows one to decrease the error in the variational estimate of the ground state properties, and to improve energy conservation, i.e. efficiency, during a molecular dynamics run. This can be accomplished at the same computational cost as within the original formulation.
In contrast to $O(N)$ density matrix approaches, our formulation requires that only a limited number of unoccupied states be included in the energy functional, regardless of the basis set size. Therefore the present formulation can also be applied efficiently in computations where the number of basis functions is much larger than the number of occupied states in the system (e.g. first principles plane wave calculations). **Acknowledgements** It is a pleasure to thank A. Canning, A. Dal Corso, M. Steiner and J. W. Wilkins for useful discussions and a critical reading of the manuscript. This work was partly supported by DOE (JK) and by the Swiss National Science Foundation under grant No 20-39528.93 (GG and FM). [10]{} Present address: Department of Physics, University of California at Berkeley, CA 94720. For a review see, e.g., G. Galli and A. Pasquarello, in Computer Simulation in Chemical Physics, edited by M. P. Allen and D. J. Tildesley, p. 261, Kluwer, Dordrecht (1993), and M. C. Payne, M. P. Teter, D. C. Allan, T. A. Arias and J. D. Joannopoulos, Rev. Mod. Phys. [**64**]{}, 1045 (1992). G. Galli and M. Parrinello, Phys. Rev. Lett. [**69**]{}, 3547 (1992). W.-L. Wang and M. Teter, Phys. Rev. B [**46**]{}, 12798 (1992). F. Mauri, G. Galli, and R. Car, Phys. Rev. B [**47**]{}, 9973 (1993). F. Mauri and G. Galli, Phys. Rev. B [**50**]{}, 4316 (1994). P. Ordejón, D. Drabold, M. Grumbach, and R. Martin, Phys. Rev. B [**48**]{}, 14646 (1993). P. Ordejón, D. Drabold, M. Grumbach, and R. Martin, Phys. Rev. B (to be published, Jan. 15, 1995). W. Kohn, Chem. Phys. Lett. [**208**]{}, 167 (1993). R. W. Nunes and D. Vanderbilt, Phys. Rev. Lett. [**73**]{}, 712 (1994); A. Dal Corso and F. Mauri, Phys. Rev. B [**50**]{}, 5756 (1994). W. Kohn, Phys. Rev. [**115**]{}, 809 (1959) and Phys. Rev. B [**7**]{}, 4388 (1973). G. Galli and F. Mauri, Phys. Rev. Lett. (in press). X.-P. Li, R. Nunes, and D. Vanderbilt, Phys. Rev. B [**47**]{}, 10891 (1993); M. S. Daw, Phys. Rev.
B [**47**]{}, 10895 (1993). C. Xu, C. Wang, C. Chan, and K. Ho, J. Phys.: Condens. Matter [**4**]{}, 6047 (1992). We used a cutoff radius of 2.30 Å for the hopping parameters of ${\hat H}$ and for the two body repulsive potential, and U=0 eV for the Hubbard-like term (see Ref. (13)). A similar model was used by D. Vanderbilt to show numerically the presence of local minima in the functional ${\bf E}[\{\phi^0\},\eta, N/2]$ (private communication). S. Nosé, Mol. Phys. [**52**]{}, 255 (1984); W. Hoover, Phys. Rev. A [**31**]{}, 1695 (1985).

  ---------------------- -------------------------- --------------------------
  Wave Function input     [ E]{}$_c$ \[n$_s$=$2$\]   [ E]{}$_c$ \[n$_s$=$3$\]
  [*Totally random*]{}    6.837                      6.978
  [*Atom by atom*]{}      6.721                      6.978
  [*Layer by layer*]{}    6.930                      6.978
  ---------------------- -------------------------- --------------------------

  : Cohesive energy E$_c$ (eV) of a 256 carbon atom slab. The slab, consisting of 16 layers, represents bulk diamond terminated by a C(111)-2 $\times$ 1 Pandey reconstructed surface on each side. E$_c$ was obtained by performing localized orbital calculations with two and three states (n$_s$) per atom (see text), and with three different inputs for the starting wave functions. [*Totally random*]{} input: the wave function expansion coefficients ($C^i_{Jl}$, see Eq. (14)) on each site of a localization region (LR) are random numbers, and orbitals belonging to the same LR are orthonormalized at the beginning of the calculations. [*Atom by atom*]{} input: each orbital has a non-zero $C^i_{Jl}$ only on the atomic site to which it is associated, and for each atomic site this coefficient is chosen to be the same. [*Layer by layer*]{} input: each orbital has a non-zero $C^i_{Jl}$ only on the atomic site to which it is associated, and the value of this coefficient is chosen to be the same for each equivalent atom in a layer.
In the case of [*atom by atom*]{} and [*layer by layer*]{} inputs, the initial wave functions are an orthonormal set. The calculations were performed with $\eta$=$7.5$ eV and $\eta$=$3.1$ eV for n$_s$=$2$ and n$_s$=$3$, respectively, and with LRs extending up to second neighbors (N$_h$=$2$, amounting at most to 17 atoms per LR). The value for E$_c$ obtained by direct diagonalization is 7.04 eV. (See also Fig. 1). The highest occupied and lowest unoccupied eigenvalues are 2.85 and 3.42 eV, respectively. In all calculations the Hubbard-like term was set to zero. []{data-label="tab1"}

  -------------------------------- ---------------------- ---------------------- ----------------------
  Physical properties              Cohesive Energy/atom   Double-bond distance   Single-bond distance
  LO\[N$_{\rm h}$=2, n$_s$=$2$\]   6.69 (6.89)            1.358-1.407            1.420-1.512
  LO\[N$_{\rm h}$=2, n$_s$=$3$\]   6.81 (6.91)            1.386-1.388            1.445-1.453
  Extended Orbitals                6.91                   1.393                  1.440
  -------------------------------- ---------------------- ---------------------- ----------------------

  : Cohesive energy (eV) and length (Å) of the double and single bonds of C$_{60}$, as obtained from structural optimizations using localized (LO) and extended orbitals. In all calculations the Hubbard-like term was set to zero. For comparison, cohesive energies obtained by direct diagonalization are given in parentheses. Computations with LO were performed by including two shells in a localization region (N$_h$=$2$, amounting to 10 atoms per localization region), and by considering two and three orbitals (n$_s$) per atom (see text).
[]{data-label="tab2"}

  --------------------------------- -------------------------- ------------------------------ ---------------------------
  Crystal structure                 Diamond (r$_0$ = 1.54 Å)   2D-Graphite (r$_0$ = 1.42 Å)   1D-Chain (r$_0$ = 1.25 Å)
  E$_c$ \[N$_h$=$2$,  n$_s$=$2$\]   7.16                       7.09                           5.62
  E$_c$ \[N$_h$=$2$,  n$_s$=$3$\]   7.19                       7.12                           5.67
  E$_c$ \[N$_h$=$ \infty $\]        7.26                       7.28                           5.93
  --------------------------------- -------------------------- ------------------------------ ---------------------------

  : Cohesive energy E$_c$ (eV) of different forms of solid carbon computed at a given bond length $r_0$. The calculations were performed with supercells containing 216, 128 and 100 atoms for diamond, two-dimensional graphite and the linear chain, respectively. In calculations with localized orbitals we used 2 and 3 orbitals per atom (n$_s$, see text). The LRs included two shells of neighbors (N$_h$=$2$), amounting to 17, 10 and 5 atoms per LR in the case of diamond, two-dimensional graphite and the linear chain, respectively.
--- address: ' Physics Department, University of South Africa, P.O.Box 392, Pretoria 0001, South Africa' author: - 'S. A. Sofianos and S. A. Rakityansky' title: 'On the possibility of $\eta$–mesic nucleus formation[^1]' --- Although the $\eta$–meson was discovered 40 years ago, only recently have particle and nuclear physicists focused their attention on it. In many respects the $\eta$–meson is similar to the $\pi^0$–meson, despite being four times heavier. Both are neutral, spinless, and have almost the same lifetime, $\sim 10^{-18}$ s. The kinship between the two mesons manifests itself very clearly in their decay modes. They are the only mesons which have a high probability of pure radiative decay, i.e., their quarks can annihilate into on-shell photons. The pion decays almost entirely into the radiative channel $\pi^0\to\gamma+\gamma$ (98.798%). For the $\eta$ the purely radiative decay is also the most probable mode [@PDG], $$\eta\rightarrow\left\{ \begin{array}{ll} \gamma+\gamma & (38.8 \%)\\ \pi^0+\pi^0+\pi^0 & (31.9 \%)\\ \pi^{+}+\pi^{-}+\pi^0 & (23.6 \%)\\ \pi^{+}+\pi^{-}+\gamma & (\phantom{0}4.9 \%)\\ {\rm other \ decays}&(\phantom{0}0.8 \%)\ \ .\\ \end{array} \right.$$ Therefore, when $\pi^0$ and $\eta$ are viewed as elementary particles, they look quite similar. However, when one considers their interaction with nucleons, their difference is clearly manifested. Firstly, one expects a manifestation of the large $\eta\pi^0$–mass difference in the meson–nucleon dynamics and, at low energies, this is indeed observed.
For example, the $S_{11}$–resonance $N^*(1535)$ is formed in both $\pi N$ and $\eta N$ systems, but at different collision energies, $$\begin{array}{lcccrr} E^{res}_{\pi N}(S_{11}) &=& 1535\ {\rm MeV}\ - m_{N} -m_{\pi} &\approx& 458\ {\rm MeV}&\\ \phantom{-}&&&&&\\ E^{res}_{\eta N}(S_{11}) &=& 1535\ {\rm MeV}\ - m_{N} -m_{\eta} &\approx& 49\ {\rm MeV}\,.&\\ \end{array}$$ Note that due to the large mass of the $\eta$–meson (547.45 MeV), this resonance is very close to the $\eta N$–threshold. Furthermore, it is very broad, with $\Gamma\approx 150\,{\rm MeV}$, covering the whole low energy region of the $\eta N$ interaction. As a result, the interaction of nucleons with $\eta$–mesons in this region, where the $S$–wave interaction dominates, is much stronger than with pions. Another consequence of the $S_{11}$ dominance is that the interaction of the $\eta$–meson with a nucleon can be considered as a series of formations and decays of this resonance, as shown in Fig. 1.
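The two collision energies above follow from simple mass arithmetic (a quick check; $m_N$ and $m_\pi$ are rounded, charge-averaged values assumed here, so the results shift by a few MeV with the choice of charge states):

```python
# S11 resonance position measured from the pi-N and eta-N thresholds.
# All masses in MeV; m_N and m_pi are rounded averages (assumed values).
m_N, m_pi, m_eta = 939.0, 138.0, 547.45
m_star = 1535.0                     # N*(1535) mass

E_piN = m_star - m_N - m_pi         # ~458 MeV above the pi-N threshold
E_etaN = m_star - m_N - m_eta       # ~49 MeV: barely above the eta-N threshold
print(E_piN, E_etaN)
```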
[Fig. 1: the $\eta N$ interaction pictured as a series of formations and decays of the $N^*$ resonance.]

As with any resonant state, the $N^*(1535)$–resonance has decay branching ratios that do not depend on the formation channel; after its creation it decays into the $\eta N$ and $\pi N$ channels with equally high probabilities [@PDG] $$\label{N*decay} N^*(1535)\rightarrow\left\{ \begin{array}{ll} N+\eta & (35 - 55\ \%)\\ N + \pi & (35 - 55\ \%)\\ {\rm other\ \ decays}&(\le10\ \%)\ \ \ \ .\\ \end{array} \right.$$ Therefore, the series depicted in Fig. 1 must also include terms describing real and virtual transitions into the $\pi N$–channel (see Fig. 2).
[Fig. 2: the same series of diagrams, including real and virtual transitions into the $\pi N$–channel.]

Thus, in the energy region covered by the $S_{11}$–resonance, the $\eta N$ and $\pi N$ interactions should be treated as a coupled-channel problem. When such an analysis was performed, it was found that the near–threshold $\eta N$ interaction is attractive [@bhal]. This raises the question of whether this attraction is strong enough for an $\eta$–mesic nucleus to be formed. Since $\eta$–mesons decay very fast, it is impossible to produce beams of them, and they can therefore only be observed in the final states of certain nuclear reactions with other particles. This makes investigations of $\eta$–meson dynamics quite complicated. The possibility of sustaining an $\eta$–meson inside a nucleus would therefore be an exciting one, as the meson would then be exposed for a relatively long period to a series of successive interactions with nucleons; i.e., inside the nucleus it would undergo a series of absorptions and emissions through formations and decays of the $N^*(1535)$–resonance, as depicted in Fig. 3.
[Fig. 3: an $\eta$–meson inside a nucleus, propagating through repeated absorption and emission via the $N^*(1535)$–resonance.]

The lifetime of such an $\eta$–mesic nucleus would not be limited by the lifetime of the $\eta$–meson itself, because after each creation of the $S_{11}$–resonance the $\eta$–meson is generated anew. However, such an $\eta$–nucleus state cannot be stable, since eventually the $N^*(1535)$–resonance will produce a pion instead of an $\eta$, their creation probabilities being, according to (\[N\*decay\]), equally high. Of course, such a pion can generate an $N^*(1535)$–resonance again, which in turn may revive the $\eta$, but the probability of this is rather low since the pion acquires, through the decay of the resonance, a kinetic energy of $\sim 400$ MeV and can thus easily escape. It is therefore clear that if an $\eta$–meson is bound inside a nucleus, it can only be in a quasi–bound state with nonzero width. A first estimate, obtained in the framework of optical potential theory, put a lower bound on the number of nucleons $A$ necessary to bind the $\eta$–meson, namely $A\ge 12$ [@haider]. Thereafter, other theoretical investigations were devoted to this problem. All of them predicted $\eta$–nucleus bound states obeying this constraint. However, the search for narrow $\eta$-nuclear bound states in an experiment with lithium, carbon, oxygen, and aluminum by Chrien et al. [@chrien] produced negative results. The conclusion of this experimental work, however, did not discourage theoreticians from examining the possibility of $\eta$–nucleus formation.
The relatively large scattering lengths obtained for $\eta {}^3$He and $\eta {}^4$He systems using a zero–range $\eta N$–interaction [@Wilk] cast doubt on the $ A\ge 12$ constraint. Speculations of this kind are based on the argument that in the vicinity of the origin of the complex momentum plane the amplitude $f$ can be replaced by the scattering length $a$, and therefore the $S$–matrix in this region can be written as $$S=1+2ikf\approx1+2ika\approx \frac{1+ika}{1-ika}\ .$$ This expression is valid only for small $k$ and can have a pole in this region only if $a$ is large. If $a$ is negative, the pole lies on the positive imaginary axis (a bound state). Thus, a large negative scattering length indicates that a weakly bound state exists. Decreasing the interaction strength transforms the bound state into a resonance, and vice versa, implying a change of the scattering length from $-\infty$ to $+\infty$. Such simple reasoning, however, is valid only when the interaction is described by a real potential. In the case of an $\eta$–nucleus system the inelastic $\eta A\to \pi A$ channel is always open, giving rise to a significant imaginary part in the $\eta$–nucleus potential. The resonance and quasi–bound state poles of the $S$–matrix generated by a complex potential have quite different distributions in the complex $k$–plane. In Ref. [@cassing] it was shown that starting from a purely real potential and gradually increasing an imaginary part results in the $S$–matrix pole behaviour shown in Fig. 4.
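The pole argument can be checked directly: the denominator of $S(k)=(1+ika)/(1-ika)$ vanishes at $k=-i/a$, so a large negative real $a$ places the pole just above the origin on the positive imaginary axis (a sketch with an arbitrary illustrative value of $a$):

```python
# Pole of the low-energy S-matrix S(k) = (1 + i*k*a)/(1 - i*k*a).
# The denominator vanishes at k = -i/a; for real negative a this lies
# on the positive imaginary axis, i.e. a bound state, and |a| large
# means the pole sits close to k = 0 (weak binding).
a = -10.0                   # illustrative scattering length, arbitrary units
k_pole = -1j / a            # = 0.1j, on the positive imaginary axis
E_pole = k_pole ** 2        # proportional to the energy: real and negative
print(k_pole, E_pole.real)
```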
[Fig. 4: distribution of bound–state and resonance poles of the $S$–matrix in the complex $k$–plane for a complex potential; the diagonal of the second quadrant separates quasi–bound states from resonances.]

[Fig. 5: Jacobi coordinates $\vec x_1$, $\vec x_2$ (and $\vec x_3$) for the two–, three– and four–nucleon systems.]

Therefore, in the case of a complex potential, both resonance and quasi–bound state poles are situated in the second quadrant of the complex momentum plane, under and above its diagonal, respectively. The diagonal separates them because the energy $E_0=k_0^2/2\mu$ corresponding to a pole at $k=k_0$, $$E_0=\frac{1}{2\mu}\left[({\rm Re\,}k_0)^2-({\rm Im\,}k_0)^2+ 2i({\rm Re\,}k_0)({\rm Im\,}k_0)\right]\ ,$$ has a positive (negative) real part when $k_0$ is under (above) it. Thus, the transition from resonances to quasi–bound states is a crossing of the diagonal. Since this can take place rather far from the point $k=0$, we should not expect, in contrast to the real potential case, a large scattering length even if the binding energy, $|{\rm Re\,}E_0|$, is small. Moreover, crossing the diagonal is not associated with dramatic changes of $a$. In short, scattering length calculations cannot provide a definite answer, and a more rigorous approach must be employed. The most appropriate way to solve this problem is to locate the poles of the $S$-matrix in the second quadrant of the $k$-plane.
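The role of the diagonal is easily verified from $E_0=k_0^2/2\mu$ (a sketch; the two pole positions and $\mu=1$ are arbitrary illustrative values):

```python
# E0 = k0^2/(2*mu) for poles in the second quadrant (Re k0 < 0, Im k0 > 0).
# Below the diagonal (|Re k0| > Im k0): Re E0 > 0, a resonance.
# Above the diagonal (|Re k0| < Im k0): Re E0 < 0, a quasi-bound state.
mu = 1.0                                 # arbitrary units

def pole_energy(k0):
    return k0 ** 2 / (2 * mu)

E_res = pole_energy(-0.6 + 0.5j)         # pole below the diagonal
E_qb = pole_energy(-0.5 + 0.6j)          # pole above the diagonal
# In both cases Im E0 < 0, i.e. a positive width Gamma = -2*Im(E0).
print(E_res, E_qb)
```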
In Refs. [@ours; @Raki4] we developed a microscopic method that enabled us to calculate the elastic scattering amplitude for any complex value of $k$ and thereby to locate its poles. The influence of inelastic channels is taken into account via a complex $\eta$N potential. In what follows this method is described in somewhat more detail. Consider the scattering of an $\eta$-meson from a nucleus consisting of $A$ nucleons. The Hamiltonian of the system is given by $$H = H_0 + V_{\eta A} + H_A \label{H1}$$ where $H_0$ is the free Hamiltonian corresponding to the $\eta$–nucleus relative motion, $ V_{\eta A} = V_1 + V_2 +\cdots+ V_A $ is the sum of $\eta N$ potentials, $V_i\equiv V_{\eta N}(|\vec R-\vec r_i|)$, where $\vec R$ and $\vec r_i$ are the coordinates of the $\eta$ and the $i$-th nucleon with respect to the $c.m.$ of the nucleus, and $H_A$ is the total Hamiltonian of the nucleus, $$H_A=-\frac{\hbar^2}{2m_N}\sum_{i=1}^A\nabla^2_{\vec r_i}+\sum_{i \ne j} V_{NN}(|\vec r_i-\vec r_j|)\,, \label{HA}$$ with $m_N$ the nucleon mass. The elastic scattering amplitude $f(\vec k',\vec k;z)$ describing the transition from the initial, $|\vec{k},\psi_0\rangle$, to the final, $|\vec{k}',\psi_0\rangle$, asymptotic state, where $|\psi_0\rangle$ is the nuclear ground state and $\vec k$ the $\eta$-nucleus relative momentum, can be expressed in terms of the T–matrix elements $$f(\vec{k}',\vec{k}; z)=-\frac{\mu}{2\pi}\ <\vec{k}',\psi_0|T(z)|\vec{k},\psi_0>\ . \label{ampli}$$ The operator $T$ is related to the Green function $G_A(z)=(z-H_0-H_A)^{-1}$ by $$T(z)=V+VG_A(z)T(z)\ . \label{tmat}$$ The task of solving Eq. (\[tmat\]) is a formidable one, and thus one must resort to approximations. One such approximation is the so–called Finite-Rank Approximation (FRA) of the Hamiltonian. It has been proposed in Refs. [@Bel1; @Bel2] as an alternative to the multiple scattering and optical potential theories.
In this method the auxiliary operator $$T^0(z)=V+VG_0(z)T^0(z)\,, \label{tmat0}$$ where $ G_0(z)=(z-H_0)^{-1}$ is the free Green function, is introduced. Using the identity $ A^{-1}-B^{-1}=B^{-1}(B-A)A^{-1} $ with $A=z-H_0-H_A$ and $B=z-H_0$, one gets the resolvent equation $$G_A(z)=G_0(z)+G_0(z)H_A G_A(z)\,, \label{GA}$$ and thus $$T(z)=T^0(z)+T^0(z) G_0(z)H_A G_A(z)T(z)\,. \label{THA}$$ The latter equation has the advantage that the spectral decomposition of the nuclear Hamiltonian, $$H_A=\sum_n {\cal E}_n |\psi_n><\psi_n| + \int dE\,E|\psi_E><\psi_E|\,, \label{Hspace}$$ can be employed to bring it into a manageable form. The FRA method is based on the approximation $$H_A\approx {\cal E}_0|\psi_0><\psi_0| \label{Happrox},$$ which means that during the scattering of the $\eta$-meson the nucleus remains in its ground state $|\psi_0>$. Such an approximation is widely used in multiple scattering and optical potential theories, where it is known as the coherent approximation. Using (\[Happrox\]) we get $$T(z)=T^0(z)+{\cal E}_0T^0(z)|\psi_0> G_0(z)G_0(z-{\cal E}_0) <\psi_0|T(z)\,. \label{tmata}$$ The matrix elements $T({\vec k}',{\vec k};z) \equiv <\vec{k}',\psi_0|T(z)|\vec{k},\psi_0>$ are thus given by $$\begin{aligned} \nonumber T(\vec{k}',\vec{k};z) &=& <\vec{k}',\psi_0|T^0(z)|\vec{k},\psi_0>\\ &+& {\cal E}_0\int \frac{d\vec{k}''}{(2\pi)^3} \frac{ <\vec{k}',\psi_0|T^0(z)|\vec{k}'',\psi_0>}{ (z-{k''}^2/2\mu)(z- {\cal E}_0-{k''}^2/2\mu)}\,T(\vec{k}'',\vec{k};z)\,. \label{tm3}\end{aligned}$$ The auxiliary operator $T^0$ describes the scattering of the $\eta$-meson from nucleons fixed in their spatial positions within the nucleus. This is clear since Eq. (\[tmat0\]) does not contain any operator acting on the internal nuclear Jacobi coordinates, denoted by $\{\vec{r}\}\equiv\{\vec x_1,\vec x_2,\cdots,\vec x_{A-1}\}$. Therefore all operators in Eq.
(\[tmat0\]) are diagonal in the configuration subspace {$\vec{r}$} and thus $$T^0(\vec{k}',\vec{k};\vec{r};z)=V(\vec{k}',\vec{k};\vec{r}) + \int \,\frac{d\vec{k}''}{(2\pi)^3} \frac{V(\vec{k}',\vec{k}'';\vec{r})} {z-k''^2/2\mu}\,T^0(\vec{k}'';\vec{k};\vec{r};z) \label{t0m}$$ where $$<\vec{k}',\vec{r}\,'|T^0(z)|\vec{k},\vec{r}> = \delta(\vec{r}\,'-\vec{r})\, T^0(\vec{k}',\vec{k};\vec{r};z)\,,\quad <\vec{k}',\vec{r}\,'|V|\vec{k},\vec{r}> = \delta(\vec{r}\, '-\vec{r})\,V(\vec{k}',\vec{k}; \vec{r})\,.$$ It is clear that $T^0(\vec{k}',\vec{k};\vec{r};z)$ depends parametrically on $\{\vec{r}\}$. Therefore the matrix elements $<\vec{k}^\prime, \psi_0 | T^0(z)|\vec{k}, \psi_0>$ can be obtained by integrating over the Jacobi coordinates, $$<\vec{k}^\prime, \psi_0 | T^0(z) | \vec{k}, \psi _0 > = \int d\vec{r} | \psi_0(\vec{r})|^2 T^0(\vec{k}^\prime,\vec{k};\vec{r};z)\,. \label{aver}$$ Thus the solution of the scattering problem can be obtained by first solving Eq. (\[t0m\]), then averaging as in Eq. (\[aver\]), and finally calculating $T$ from Eq. (\[tm3\]). We must emphasize that the above scheme is not the same as the first order optical potential approach used in the traditional pion-nucleus multiple scattering theory [@Land]. Indeed, the latter is based on three approximations: i) the Impulse Approximation; ii) the omission of higher-order rescattering terms in constructing the optical potential; and iii) the coherent approximation. In contrast, in the scheme considered here, the Impulse Approximation to obtain the $\eta$N amplitude in the nuclear medium is not needed and no rescattering terms are omitted. The parameter $z$ in the above equations corresponds to the total $\eta$-nucleus energy, $ z = E - |{\cal E}_0| + i0$, where $E$ is the energy associated with the $\eta$-nucleus relative motion. On the energy shell we have $E = k^2/2\mu$.
Therefore, even the auxiliary $T^0$-matrix differs from the conventional fixed-scatterer amplitude in that it is always taken off the energy shell. In the case of scattering length calculations, we have $E=0$ and thus $z = -|{\cal E}_0|$. This makes Eqs. (\[tm3\]) and (\[t0m\]) nonsingular and easy to handle. For practical calculations we rewrite Eq. (\[t0m\]) using the Faddeev-type decomposition $$\begin{aligned} \nonumber T^0(\vec{k}',\vec{k};\vec{r};z) &=& \sum^A_{i=1} T^0_i (\vec{k}',\vec{k};\vec{r};z),\\ T^0_i (\vec{k}',\vec{k};\vec{r};z) &=& t_i(\vec{k}',\vec{k};\vec{r};z) \ + \int \frac {d\vec{k}''}{(2\pi)^3} \frac {t_i(\vec{k}',\vec{k}'';\vec{r};z)} {z - {k''}^2/2\mu} \sum_{j\neq i} T^0_j(\vec{k}'',\vec{k};\vec{r};z)\ , \label{t0i}\end{aligned}$$ where $t_i$ is the t-matrix for the $\eta$-meson scattered by nucleon $i$, expressed in terms of the two-body $t_{\eta N}$-matrix via $$t_i(\vec k',\vec k;\vec r;z)=t_{\eta N}(\vec k',\vec k;z) \exp\left[{{\displaystyle i(\vec k-\vec k')\cdot\vec r_i}}\right]\,. \label{telem}$$ Expanding the $|\vec{k}, \vec{r} >$ basis in partial waves and using the fact that, at the low energies considered here, the $\eta$N interaction is dominated by the S$_{11}$–resonance, we may retain the S-wave only. The total orbital angular momentum is zero, and since the $\eta$-meson is a spinless particle the nuclear spin can be ignored. Therefore, when Eq. (\[t0i\]) is projected on the S-wave basis $|k, r >$, it reduces to $$T^0_i(k', k;r; z) = t_i(k', k;r;z) + \frac {1}{2\pi^2} \int^{\infty}_0\,dk'' \frac {k''^2\,t_i(k',k'';r;z)}{z-k''^2/2\mu}\,\sum_{j\neq i}\ T^0_j (k'',k;r;z) \label{A4}$$ where $$< k', r' | T^0_i (z) |k, r > = \frac {\delta (r'-r)}{4\pi r^2}\ T^0_i (k',k;r; z)$$ and similarly for $< k',r' | t_i (z) | k,r >$. The above formulae are given for the general case of $A$ nucleons. In what follows we restrict ourselves to $A=$2, 3, and 4. The relevant Jacobi vectors are shown in Fig. 5. According to Eq.
(\[telem\]), $t_i$ depends on the space configuration of the nucleons because $k$ and $k'$ are the $\eta$-meson momenta with respect to the nuclear centre of mass while the nucleon $i$ is shifted from it by the vector $ \vec{r}_i = a_i\vec{x_1} + b_i\vec{x_2} + c_i\vec{x_3}\,, $ where $a_1 = \frac {1}{2}, a_2 = - \frac {1}{2}, b_1 = b_2 = c_1 = c_2 = 0$ for the deuteron case; $a_1 = \frac {1}{2},\, a_2 = - \frac {1}{2},\, a_3 = 0,\, b_1 = b_2 = \frac {1}{3},\, b_3 = - \frac {2}{3},\, c_1 = c_2 = c_3 = 0$ for the three-nucleon case, and $a_1 = \frac {1}{2},\, a_2 = - \frac {1}{2},\, a_3 = a_4 = 0,\, b_1 = b_2 = \frac {1}{2},\, b_3 = b_4 = - \frac {1}{2},\, c_1 = c_2 = 0, c_3 = \frac {1}{2},\, c_4 = - \frac {1}{2}$ for the four-nucleon case. The $S$-wave projection of Eq. (\[telem\]) gives $$\begin{aligned} \langle k',r' &|& t_i(z)|k,r \rangle= \int \, \frac {d\vec{k}_i'd\vec{k}_i}{(2\pi )^6} d\vec{r}\,''d\vec{r}\,'''\langle k',r' | \vec{k}_i',\vec{r}\,''\rangle \langle\vec{k}_i',\vec{r}\,''| t_i(z)| \vec{k}_i,\vec{r}\,'''\rangle \langle\vec{k}_i,\vec{r}\,'''|k,r\rangle \\ % & =& \frac{\delta(r'-r)}{4\pi r^2}\,j_0(a_ik'x_1) j_0(b_ik'x_2) j_0(c_ik'x_3) \,t_{\eta N}(k',k;z)\,j_0(a_ikx_1)j_0(b_ikx_2)j_0(c_ikx_3)\,,\end{aligned}$$ where $j_0$ is the spherical Bessel function. The $\eta N$ interaction can be described by the $t$-matrix $$t_{\eta N}(k',k;z) = \frac {\lambda}{(k'^2+ \alpha^2)(z - E_0 + i\Gamma/2)(k^2+\alpha^2)}\,. \label{tnN}$$ This ansatz is motivated by the $S_{11}$–resonance dominance. The vertex function for $\eta$N$\leftrightarrow$N$^*$ is chosen as $1/(k^2+\alpha^2)$ which in configuration space has a Yukawa-type behaviour. The propagator is taken to be of a simple Breit-Wigner form. 
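Equations (\[t0i\]) have the structure of a fixed-centres (Foldy–Lax-type) multiple-scattering system: each single-nucleon amplitude is driven by the waves rescattered from all the others. The sketch below solves the analogous point-scatterer problem, with all form factors collapsed to zero range and made-up values for the single-site amplitude and geometry; it illustrates only the structure of the coupled equations, not the paper's actual kernels:

```python
import numpy as np

def fixed_centres(tau, k, positions):
    """Solve A_i = tau + tau * sum_{j != i} G(r_ij) A_j with the free
    propagator G(r) = exp(i*k*r)/r, and return the summed amplitude."""
    n = len(positions)
    M = np.eye(n, dtype=complex)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = np.linalg.norm(positions[i] - positions[j])
                M[i, j] = -tau * np.exp(1j * k * r) / r
    A = np.linalg.solve(M, tau * np.ones(n, dtype=complex))
    return A.sum()

tau, k = 0.2 + 0.05j, 1.0                     # made-up single-site amplitude
one = fixed_centres(tau, k, [np.zeros(3)])
two = fixed_centres(tau, k, [np.zeros(3), np.array([2.0, 0.0, 0.0])])
print(one)    # a single scatterer returns tau itself
print(two)    # two scatterers: 2*tau plus rescattering corrections
```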
With this choice of vertex function and propagator, $t_i$ has the separable form $$\label{set1} t_i(k',k,r;z)= H_i(k';r)\,\tau (z) \,H_i(k,r)\\$$ where $$\label{set2} \tau (z) = \frac{\lambda}{z-E_0+i\Gamma /2}\,,\qquad H_i(k,r)=\frac{j_0(a_ikx_1)j_0(b_ikx_2)j_0(c_ikx_3)} {(k^2+\alpha^2)}\,.$$ Therefore, $$T^0(k',k,r;z) = \sum_{i,j=1}^A\,H_i(k';r)\, \Lambda_{ij}(z)\,H_j(k,r)$$ where $$(\Lambda^{-1})_{i,j} = \frac{\delta_{ij}}{\tau(z)} -(1-\delta_{ij})\Gamma_{ij}(r,z)\,,\qquad \Gamma_{ij}(r,z)=\frac{1}{2\pi^2}\int_0^\infty\, dk\frac{k^2}{z-k^2/2\mu} H_i(k;r)H_j(k,r)\,.$$ Formally, the last integral involves products of six Bessel functions. However, several of the coefficients $a_i$, $b_i$, and $c_i$ are always zero, and therefore only products of at most four Bessel functions can appear in the expression for $\Gamma_{ij}(r,z)$, with the required integrals having the general form $$\gamma(p,u,v,w)=\int_0^\infty dk\frac{k^2j_0(ku) j_0(kv)[j_0(kw)]^2}{(k^2+\alpha^2)^2(k^2-p^2-i0)}\,.$$ To calculate the latter integral we introduce the auxiliary integral $$\hat{ \gamma}(p,u,v,w,\delta )=\int_0^\infty dk \frac {k^2j_0(ku)j_0(kv)[\sin(kw)]^2} {(k^2+\alpha^2)^2(k^2-p^2-i0)(k^2 +\delta^2)w^2}$$ and thereafter evaluate the limit $$\gamma (p,u,v,w)= \lim_{\delta \rightarrow 0} \hat{\gamma } (p,u,v,w,\delta )\,.$$ The result thus obtained is $$\begin{aligned} \nonumber \gamma(p,u,v,w)&=&\frac{1}{16uvw^2} [g(u+v+2w)-2g(u+v)+g(u+v-2w)\\ & &-g(u-v+2w) + 2g(u-v)-g(u-v-2w)]\,,\end{aligned}$$ where $$\begin{aligned} \nonumber g(s) &=& \frac {i\pi }{ (p^2 + \alpha^2 )^2} \bigg \{{\rm sign} ({\rm Im}\ p) \,\frac{1}{p^3} \exp{ [ip|s| {\rm sign} ({\rm Im}\ p)]}\\ & & -\frac{i\exp{ (-\alpha |s|)}}{2 \alpha^5}[2\alpha^2 +(3+\alpha |s|)(p^2+\alpha^2)]- \frac {i|s|(p^2+\alpha^2)^2}{\alpha^4 p^2}\bigg\}\end{aligned}$$ with $${\rm sign}(\alpha)=\cases {+1, & for $\alpha \geq 0$ \cr -1,& for $\alpha <$ 0\ .
\cr}$$ To obtain the necessary nuclear wave functions $\psi_0$, we employed the Malfliet–Tjon $NN$–potential [@mt] and the integro–differential equation approach (IDEA) [@idea1; @idea2] which, for $S$–wave projected potentials, is equivalent to the exact Faddeev equations. Using the above formalism, the position and movement of the poles of the $\eta$–meson–light-nuclei elastic scattering amplitude in the complex $k$-plane are studied. The two-body $t$-matrix is assumed to be of the form (\[tnN\]) with $E_0 = 1535$ MeV$- \,(m_N + m_{\eta})$ and $\Gamma = 150$ MeV [@PDG]. The range parameter used, $\alpha =2.357$ fm$^{-1}$, was obtained in a two–channel fit to the $\pi N\rightarrow\pi N$ and $\pi N\rightarrow\eta N$ experimental data [@bhal]. The parameter $\lambda$ is chosen to provide the correct zero-energy on-shell limit, i.e., to reproduce the $\eta N$ scattering length $a_{\eta N}$, $$t_{\eta N}(0,0;0) = - \frac {2\pi}{\mu_{\eta N}}a_{\eta N}\,.$$ The scattering length $a_{\eta N}$, however, is not accurately known. Different analyses [@batinic] provided values for the real part in the range ${\rm Re\,}a_{\eta N}\in [0.27,0.98]\, {\rm fm}$ and for the imaginary part ${\rm Im\,}a_{\eta N}\in [0.19,0.37]\,{\rm fm}$. Therefore, in a search for bound states one must use values of $a_{\eta N}$ within these ranges. To achieve this, we used $a_{\eta N}=(0.55\,\zeta+i0.30)$ fm and varied $\zeta$ until a bound state appeared. The poles found with $a_{\eta N}=(0.55+i0.30)$ fm are shown in Fig. 6. The corresponding energies and widths are given in Table 1. When ${\rm Re\,}a_{\eta N}$ increases, all the poles move up and to the right, and when a resonance pole crosses the diagonal it becomes a quasi–bound pole. The minimal values of ${\rm Re\,}a_{\eta N}$ which generate ‘zero–binding’ (the poles just on the diagonal) are given in Table 2. All these values are within the uncertainty interval ${\rm Re\,}a_{\eta N}\in [0.27,0.98]$ fm.
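The zero-energy normalization condition above fixes $\lambda$ in closed form. A minimal numerical sketch (natural units $\hbar=c=1$; the mass values below are illustrative assumptions, not values quoted from the text):

```python
import math

hbarc = 197.327                              # MeV·fm
m_N, m_eta = 938.9, 547.5                    # MeV (assumed values)
mu = (m_N * m_eta / (m_N + m_eta)) / hbarc   # reduced mass μ_ηN in fm^-1

alpha = 2.357                                # fm^-1 (range parameter from the text)
E0 = (1535.0 - (m_N + m_eta)) / hbarc        # S11 position above threshold, fm^-1
Gamma = 150.0 / hbarc                        # width, fm^-1
a = 0.55 + 0.30j                             # fm, scattering length used in the text

# t(0,0;0) = λ / (α^4 (-E0 + iΓ/2)) = -(2π/μ) a  =>  λ = (2π/μ) a α^4 (E0 - iΓ/2)
lam = (2 * math.pi / mu) * a * alpha**4 * (E0 - 1j * Gamma / 2)

# consistency check: the normalized t-matrix reproduces the scattering length
t00 = lam / (alpha**4 * (-E0 + 1j * Gamma / 2))
```

With these conventions, `t00` equals $-(2\pi/\mu_{\eta N})\,a_{\eta N}$ by construction; varying `a` traces out the family of strengths $\lambda$ scanned in the pole search.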
Thus even the possibility of an $\eta$d binding cannot at present be excluded. Most recent estimates of ${\rm Re\,}a_{\eta N}$ [@newa] are concentrated around the value ${\rm Re\,}a_{\eta N}\approx 0.7$ fm, which enhances our belief that at least the $\alpha$–particle can entrap an $\eta$–meson.

[Fig. 6: Positions of the poles of the $\eta$–nucleus elastic scattering amplitude in the complex $p$-plane (${\rm Re\,}p$ versus ${\rm Im\,}p$, in fm$^{-1}$) for $^2$H, $^3$H, $^3$He, and $^4$He, obtained with $a_{\eta N}=(0.55+i0.30)$ fm.]

  system                 $E$ (MeV)   $\Gamma$ (MeV)
  ---------------------- ----------- ----------------
  $\eta\ {}^2{\rm H}$    31.46       59.38
  $\eta\ {}^3{\rm H}$    10.91       22.68
  $\eta\ {}^3{\rm He}$   10.14       22.70
  $\eta\ {}^4{\rm He}$   -2.05       7.48

  : Table 1

For each of the four nuclei considered, the scattering lengths were calculated with eight values of the strength parameter $\lambda$ corresponding to ${\rm Re\,}a_{\eta N}\in\{(0.2+0.1n)\ {\rm fm};\ n=1,\dots,8\}$, which extends over the uncertainty interval. The ${\rm Im\,}a_{\eta N}$ was fixed to the value 0.3 fm. An increase of ${\rm Re\,}a_{\eta N}$ moves the points along the trajectories shown in Figs. 7 and 8 anti-clockwise. When ${\rm Re\,}a_{\eta N}$ exceeds the critical values given in Table 2, the $\eta N$ interaction becomes strong enough to generate a quasi–bound state. The corresponding $\eta$–nucleus scattering lengths are shown by filled circles (the trajectories for $^3$He and $^3$H are practically the same).
[Figs. 7 and 8: Trajectories of the $\eta$–nucleus scattering lengths in the complex plane (${\rm Re\,}a$ versus ${\rm Im\,}a$, in fm) as ${\rm Re\,}a_{\eta N}$ is increased; Fig. 7 shows $^2$H and $^4$He, Fig. 8 shows $^3$H and $^3$He. Filled circles mark the values for which a quasi–bound state exists.]

  system                 min$\{{\rm Re\,}a_{\eta N}\}$ (fm)
  ---------------------- ------------------------------------
  $\eta\ {}^2{\rm H}$    0.91
  $\eta\ {}^3{\rm H}$    0.75
  $\eta\ {}^3{\rm He}$   0.73
  $\eta\ {}^4{\rm He}$   0.47

  : Table 2

Finally, we would like to emphasize that the spectral properties of Hermitian and non-Hermitian Hamiltonians are quite different. Locating quasi–bound states is a delicate problem which can be treated only by rigorous methods. As we have shown in Ref. [@Raki4], the $\eta$A scattering length tells us nothing about the existence or not of an $\eta$A quasi–bound state. This is clearly seen in Figs. 7 and 8, where the trajectories go smoothly from open to filled circles without any drastic changes or extreme values.
In summary, it is shown that, within the existing uncertainties of the elementary $\eta$N interaction, all the light nuclei considered can support a quasi–bound state, resulting in an $\eta$-mesic nucleus analogous to a hypernucleus. Due to the specific quantum numbers of the $\eta$–meson (I=0, S=0), such states, if they do exist, can be used to access new nuclear states inaccessible by other mesons such as pions and kaons. Furthermore, they can be used to elucidate the role played by the $\eta$ meson in Charge Symmetry Breaking reactions and in the violation of the Okubo-Zweig-Iizuka (OZI) rule. Particle Data Group, Phys. Rev. D [**50(3)**]{}, 1319 (1994). R. S. Bhalerao, L. C. Liu, Phys. Rev. Lett. [**54**]{}, 865 (1985). Q. Haider, L. C. Liu, Phys. Lett. [**172B**]{}, 257 (1986). R. E. Chrien et al., Phys. Rev. Lett. [**60**]{}, 2595 (1988). C. Wilkin, Phys. Rev. C [**47**]{}, R938 (1993). W. Cassing, M. Stingl, and A. Weiguny, Phys. Rev. C [**26**]{}, 22 (1982). S. A. Rakityansky, S. A. Sofianos, W. Sandhas, and V. B. Belyaev, Phys. Lett. [**B 359**]{}, 33 (1995); V. B. Belyaev, S. A. Rakityansky, S. A. Sofianos, W. Sandhas, and M. Braun, Few–Body Systems Suppl. [**8**]{}, 312 (1995); S. A. Rakityansky, S. A. Sofianos, V. B. Belyaev, and W. Sandhas, Few-Body Systems Suppl. [**9**]{}, 227 (1995); S. A. Rakityansky, V. B. Belyaev, S. A. Sofianos, M. Braun, W. Sandhas, Chinese J. Phys. [**34**]{}, 998 (1996). S. A. Rakityansky, S. A. Sofianos, M. Braun, V. B. Belyaev, W. Sandhas, Phys. Rev. C [**53**]{}, R2043 (1996). V. B. Belyaev, J. Wrzecionko, Sov. Journal of Nucl. Phys. [**28**]{}, 78 (1978). V. B. Belyaev, in [*Lectures on the theory of few-body systems*]{} (Springer-Verlag, Heidelberg, 1990). R. H. Landau, A. W. Thomas, Nucl. Phys. [**A302**]{}, 461 (1978). R. A. Malfliet and J. A. Tjon, Nucl. Phys. [**A127**]{}, 161 (1969); Ann. Phys. (N.Y.) [**61**]{}, 425 (1970). M. Fabre de la Ripelle, H. Fiedeldey, and S. A. Sofianos, Phys. Rev.
C [**38**]{}, 449 (1988). W. Oehm, S. A. Sofianos, H. Fiedeldey, and M. Fabre de la Ripelle, Phys. Rev. C [**44**]{}, 81 (1991). M. Batinic, I. Slaus, A. Svarc, Phys. Rev. C [**52**]{}, 2188 (1995). V. V. Abaev, B. M. K. Nefkens, Phys. Rev. C [**53**]{}, 385 (1996); M. Batinic, I. Dadic, I. Slaus, A. Svarc, nucl-th/9703023 (1997). [^1]: Talk given at the European Conference on ADVANCES IN NUCLEAR PHYSICS AND RELATED AREAS, Thessaloniki, Greece, 8–12 July 1997
--- abstract: 'For a dominant algebraically stable rational self-map of the complex projective plane of degree at least 2,  we will consider three different definitions of the Fatou set and show that they are equivalent. Consequently, it follows that all Fatou components are Stein. This is an improvement of an early result by Fornæss and Sibony (\[FS\]).' address: | Institute for the Promotion of Excellence in Higher Education\ Kyoto University\ Yoshida Nihonmatsu-cho, Sakyo-ku\ Kyoto, 606-8501\ Japan\ author: - Kazutoshi Maegawa title: '**Steinness of the Fatou set for a rational map of the complex projective plane**' --- [^1] Introduction ============ The study of complex dynamics in higher dimensions has been developed intensively, but some early papers such as \[FS\] are still helpful for recent studies. In those papers, the dynamics of rational self-maps of the complex projective spaces are studied, and several fundamental theorems concerning the Fatou set and the Julia set are established. In this paper, we will improve two theorems contained in \[FS\] (Theorem 5.2 and Theorem 5.7 in \[FS\]). When we compare rational maps of $\Bbb P^2$ with those of $\Bbb P^1$, one of the remarkable differences is that those of $\Bbb P^2$ may have indeterminacy points. If we want to generalize the Fatou-Julia theory to two dimensions,  we have to find a suitable way to define the Fatou set and the Julia set, paying attention to indeterminacy points. So, in this paper, for an algebraically stable rational self-map $f$ of $\Bbb P^2$ of degree at least 2, we will consider several different definitions of the Fatou set and show that they are equivalent. One of them is the standard one, based on local equicontinuity of the iterates of $f$,  i.e. Lyapunov stability. The others are more complex-analytic. They are based on notions of normal families of meromorphic maps.
Throughout this paper, we often use Ivashkovich’s theorems in \[I\] and Fornæss-Sibony’s theorems concerning Green currents in \[S\]. In particular, by using a result of Ivashkovich about domains of normality for families of meromorphic maps, we can show that all Fatou components for $f$ are Stein manifolds. Preliminaries ============= We denote by $\Bbb P^k$ the complex projective space of complex dimension $k \ge 1$, and we equip $\Bbb P^k$ with the Fubini-Study distance. By definition, there is a holomorphic map $$\pi : \Bbb C^{k+1} \setminus \{O\} \rightarrow \Bbb P^k$$ which is a $\Bbb C^{*}$-bundle over $\Bbb P^k$ such that the fiber $\pi^{-1}(p)$ for $p \in \Bbb P^k$ is $L \setminus \{O\}$ where $L$ is a complex line in $\Bbb C^{k+1}$ through the origin $O$. By definition, a rational self-map $f$ of $\Bbb P^k$ is lifted by $\pi$ to a polynomial self-map $F$ of $\Bbb C^{k+1}$ which is of the form $$F=(P_0,\cdots,P_{k})$$ where $P_i$, $0 \le i \le k$, are homogeneous polynomials which have the same degree and have no common factors. The degree ${\rm deg}(f)$ of $f$ is defined to be the degree of $P_i$. A point $p \in \Bbb P^k$ such that $F(\pi^{-1}(p))=O$ is an indeterminacy point for $f$. We denote by $I=I(f)$ the set of indeterminacy points for $f$. The dimension of $I$ is at most $k-2$ if $k \ge 2$. (If $k=1$, then $I$ is empty.) We obtain the lift of the $n$-th iterate $f^{n}=f \circ \cdots \circ f$ ($n$ times) by the cancellation of common factors of the component functions of $F^n$. (\[S\])  We say that $f$ is [*algebraically stable (AS)*]{} if $f^n$ maps no complex hypersurface in $\Bbb P^k$ to $I(f)$, for all $n\ge1$. This is equivalent to ${\rm deg}(f^n)=({\rm deg}(f))^{n}$ for all $n\ge1$. Let $\omega$ be the normalized Fubini-Study $(1,1)$ form in $\Bbb P^k$ and let $f$ be an AS rational self-map of $\Bbb P^k$ of degree $d \ge 2$ which is dominant, i.e. $f(\Bbb P^k)=\Bbb P^k$.
Then,  $$\frac{1}{d^{n}} (f^n)^{*}\omega \rightarrow T$$ as $n \rightarrow \infty$ in the sense of currents, where $T$ is a positive closed (1,1) current such that $f^{*}(T)=dT$ (\[S\]). We call $T$ the [*Green (1,1) current*]{} for $f$. The current $T$ is of the form $$T=\omega+{\rm dd^{c}}v$$ where $v$ is an integrable function in $\Bbb P^k$. It is obvious that for any $n \ge 1$, the map $f^n$ is holomorphic in the complement of the closure of $\bigcup_{n\ge 1} I(f^n)$. We define the [*Fatou set*]{} ${\cal F}$ to be the maximal open subset of $\Bbb P^k \setminus \overline{\bigcup_{n\ge 1} I(f^n)}$ in which $\{f^n\}$ is locally equicontinuous. A connected component of ${\cal F}$ is called a [*Fatou component*]{}. The complement ${\cal J}$ of ${\cal F}$ is called the [*Julia set*]{} for $f$. The support of $T$ is closely related to ${\cal J}$. In particular, the following equality is known. (\[FS\], \[U\])    In the case when $f$ is holomorphic, ${\cal J}={\rm supp}(T)$. Results ======= We will show that for any dominant AS rational self-map $f$ of $\Bbb P^2$ of degree at least $2$,  each Fatou component for $f$ is Stein. Let us begin by recalling some basic notions in the function theory of several complex variables. We set two subsets $\Delta$ and $H$ in $\Bbb C^k$ as $$\Delta:=\{|z_1|,\cdots,|z_k|<1 \},$$ $$H:=\{|z_1|<r_1,\cdots,|z_{k-1}| <r_1,\ |z_k|<1 \}$$ $$\cup \{|z_1|<1,\cdots,\ |z_{k-1}| <1,\ r_2<|z_k|<1\}$$ where $0<r_1,r_2<1$. Let $U$ be a domain in $\Bbb P^k$. We say that $U$ is [*pseudoconvex*]{} if, whenever $h:\Delta \rightarrow \Bbb P^k$ is an injective holomorphic map with $h(H) \subset U$, it follows that $h(\Delta) \subset U$. Let $U$ be a domain in $\Bbb P^k$. We say that $U$ is [*Stein*]{} if $U$ is holomorphically convex and holomorphically separable. Above, ’holomorphically convex’ means that for any compact set in $U$, its hull with respect to holomorphic functions in $U$ is again compact.
’Holomorphically separable’ means that for any two points $p,\ q \in U$, there exists a holomorphic function $h$ in $U$ such that $h(p) \neq h(q)$. By the Hartogs extension theorem for holomorphic functions,  it can be shown that Stein domains are pseudoconvex. The converse is known as the Levi problem (or the Hartogs converse problem). The following theorem, due to Takeuchi, is the solution of this problem. (Concerning the original work in the case of domains in $\Bbb C^k$, by Oka, see \[O\].) \[Leviprob\] (\[T\])   Any pseudoconvex domain in $\Bbb P^k$ that is not $\Bbb P^k$ itself is Stein. It is known that any Stein domain is biholomorphic to a closed complex submanifold in $\Bbb C^N$ for some $N$. Although pseudoconvexity is just a condition on the shape of a domain, the theorem above gives plenty of complex-analytic information about the domain. Concerning whether Fatou components are Stein or not, some partial answers are known. In the case of holomorphic maps, the affirmative answer was obtained in \[FS\] and \[U\]. In the case of rational maps with indeterminacy points, Fornæss-Sibony obtained the affirmative answer for some special maps. We need the following definition to explain this. Let $f$ be a dominant AS rational self-map of $\Bbb P^k$ of degree at least 2. We say that a point $p \in \Bbb P^k$ is [*regular*]{} for $f$ if there exist a neighborhood $V$ of $p$ and a neighborhood $W$ of $I(f)$ such that the orbit $\{f^{n}(V)\}_{n \ge 0}$ is disjoint from $W$. A regular point is the same as a normal point in the sense of Sibony \[S\]. In this paper, we use ’regular’ to avoid confusion with normal points in the sense of Montel. Fornæss-Sibony showed that if all points in $\Bbb P^k \setminus \overline{\bigcup_{n\ge 1} I(f^n)}$ are regular for $f$,  then all Fatou components for $f$ are Stein (\[FS\]). More recently, this result was improved. In \[M\], I established a dichotomy of Fatou components from the viewpoint of their dynamical relation with the indeterminacy set.
(\[M\])    Let $f$ be a dominant AS rational self-map of $\Bbb P^k$ of degree at least 2. Let $U$ be any Fatou component for $f$. Then, either all points in $U$ are regular or no points in $U$ are regular. We say that a Fatou component $U$ is [*regular*]{} if all points in $U$ are regular. To some extent, regular Fatou components are similar to Fatou components for holomorphic maps. So, the following result can be obtained without much effort. (\[M\])    All regular Fatou components are Stein. Here we will show our main theorem which is a complete answer to the question. \[main2\] Let $f$ be a dominant AS rational self-map of $\Bbb P^2$ of degree at least 2. Then, any Fatou component for $f$ is Stein. Let us prepare some tools. (\[I\]) Let $X,Y$ be complex manifolds. Let $\{g_n\}_{n \ge 1}$ be a sequence of meromorphic maps from $X$ to $Y$. Let $\Gamma_n \subset X\times Y$ denote the graph of $g_n$. Let $g: X \rightarrow Y$ be a meromorphic map and $\Gamma\subset X \times Y$ be the graph of $g$. - We say that $\{g_n\}_{n \ge 1}$ [*strongly converges*]{} to $g$ in $X$ if for any compact set $K \subset X$ $$\lim_{n \rightarrow \infty} \Gamma_n \cap (K \times Y)=\Gamma \cap (K \times Y)$$ with respect to the Hausdorff metric. - We say that $\{g_n\}_{n \ge 1}$ [*weakly converges*]{} to $g$ in $X$ if there is an analytic subset $A \subset X$ of ${\rm codim}_{\Bbb C} A \ge 2$ such that $\{g_n\}_{n \ge 1}$ strongly converges to $g$ in $X \setminus A$. The definition of strong convergence above is slightly modified from the original one in \[I\]. It is just a change in naming. By using these two notions of convergence,  we can introduce notions of normality for a sequence of meromorphic maps in strong and weak senses, that is, we say that a sequence $\{f_n\}$ of meromorphic maps from $X$ to $Y$ is [*strongly (resp. weakly) normal*]{} if for any subsequence $\{f_{n_j}\}$ of $\{f_n\}$, there is a subsequence of $\{f_{n_j}\}$ which converges in $X$ in strong (resp. 
weak) sense. Thus, for the iterates of a rational self-map of $\Bbb P^2$,  we can define the [*strong (resp. weak) Fatou set*]{} ${\cal F}_s$ (resp. ${\cal F}_w$) as the maximal open subset of $\Bbb P^2$ in which the iterates are strongly (resp. weakly) normal. The following two theorems are contained in Ivashkovich’s paper \[I\]. (\[I\]) \[Iv\] Let $f$ be a rational self-map of $\Bbb P^2$. Then, the following (i) and (ii) hold. - ${\cal F}_{w}$ is pseudoconvex; - If ${\cal F}_{s} \neq {\cal F}_{w}$, then ${\cal F}_{w} \supset \Bbb P^2 \setminus C$, where $C$ is a rational curve in $\Bbb P^2$. In \[I\], the assertion (i) is shown in a more general situation. That is, it holds for any sequence of meromorphic maps from any complex manifold to any compact Kähler manifold. (\[I\]) (Rouché Principle) \[RP\]    Let $X,Y$ be complex manifolds. Let $f_n :X \rightarrow Y,\ n\ge1,\ $ be meromorphic maps which strongly converge in $X$ to a meromorphic map $f:X \rightarrow Y$ as $n \rightarrow \infty$. Then,  -  If $f$ is holomorphic, then for any relatively compact open subset $D$ in $X$,  the restrictions $f_{n}|D$ for sufficiently large $n$ are holomorphic in $D$, and $\{f_n\}$ converges to $f$ uniformly on compact sets in $X$. -  If $\{f_n\}$ are holomorphic, then $f$ is also holomorphic, and $\{f_n\}$ converges uniformly on compact sets in $X$. We also need the following two theorems, which come from the study of dynamics using pluripotential theory. In particular, Theorem \[charge\] plays a crucial role. [(\[S\])]{} \[Green\] Let $T$ be the Green (1,1) current for a dominant AS rational self-map $f$ of $\Bbb P^k$ of degree at least 2. Then, ${\cal F} \subset \Bbb P^k \setminus {\rm supp}(T)$. \[M\] We can easily obtain a slightly modified version of this theorem. That is, if a subsequence $\{f^{n_j}\}_{j\ge1}$ converges uniformly in an open set $U$ in $\Bbb P^k$, then $U \subset \Bbb P^k \setminus {\rm supp}(T)$.
Let $K$ be a closed set in $\Bbb P^k$ and $S$ be a positive current in $\Bbb P^k \setminus K$ with locally bounded mass near $K$. The trivial extension of $S$ to $\Bbb P^k$ is obtained by setting the coefficients of $S$ to be $0$ on $K$. (Note that the coefficients of $S$ are complex measures.) [(\[S\])]{} \[charge\] Let $T$ be the Green (1,1) current for a dominant AS rational self-map $f$ of $\Bbb P^k$ of degree at least 2. Then, $T$ does not charge any complex hypersurface $V$, i.e. $T$ is equal to the trivial extension of $T|_{\Bbb P^k \setminus V}$. [Proof of Theorem \[main2\]:]{} By definition, it is easily shown that $${\cal F} \subset {\cal F}_{s} \subset {\cal F}_{w}.$$ Let us show that ${\cal F}_s = {\cal F}_w$. Let $U_w$ be a connected component of ${\cal F}_{w}$. Suppose that a sequence $\{f^{n_k}\}_{k\ge1}$ weakly converges in $U_w$. Then, there is a discrete set $A \subset U_w$ such that $\{f^{n_k}\}_{k \ge 1}$ strongly converges to a meromorphic map $\psi$ in $U_w \setminus A$. Let ${I(\psi)}$ denote the indeterminacy set of $\psi$. By definition, $\psi$ is holomorphic in $U_w \setminus {I(\psi)}$. Suppose that there is $m \ge 0$ such that $f^m$ is not holomorphic at $p \in U_w \setminus ({I(\psi)} \cup A)$. Since $f$ is AS, it follows that $f^n$ is not holomorphic at $p$ for all $n \ge m$. By the Rouché principle (Theorem \[RP\]), $\psi$ is also not holomorphic at $p$. This is a contradiction. So, the $f^n$ are holomorphic in $U_w \setminus ({I(\psi)} \cup A)$ for all $n \ge 1$. Further, $\{f^{n_k}\}_{k \ge 1}$ converges locally uniformly to $\psi$ in $U_w \setminus ({I(\psi)} \cup A)$. By Remark \[M\],  it follows that $U_w \setminus ({I(\psi)} \cup A)$ is contained in $\Bbb P^2 \setminus {\rm supp}(T)$. Since ${I(\psi)} \cup A$ is discrete, the local potential function for $T$ must be pluriharmonic in $U_w$. Hence, it follows that $U_w \subset \Bbb P^2 \setminus {\rm supp}(T)$. Thus, ${\cal F}_{w} \subset \Bbb P^2 \setminus {\rm supp}(T)$.
Suppose that ${\cal F}_s \neq {\cal F}_w$. By (ii) in Theorem \[Iv\], ${\cal F}_w \supset \Bbb P^2 \setminus C$, where $C$ is a rational curve. Hence, $C \supset {\rm supp}(T)$. However, this is impossible because $T$ does not charge $C$ (Theorem \[charge\]). Thus, it follows that ${\cal F}_s = {\cal F}_w$. Let us show that ${\cal F} = {\cal F}_s$. Let $U_s$ be a connected component of ${\cal F}_s$. Let $\{f^{n_k}\}_{k \ge 1}$ be any subsequence of $\{f^n\}_{n \ge 1}$ which strongly converges in $U_s$. Denote the limit map by $\phi$ and let ${I(\phi)}$ be the indeterminacy set for $\phi$. From the Rouché principle and the fact that $f$ is AS, in the same way as above, it follows that the $f^{n_k}$, $k \ge 1$, are holomorphic in $U_s \setminus {I(\phi)}$ and the convergence is locally uniform. Hence, $U_s \setminus {I(\phi)} \subset \Bbb P^2 \setminus {\rm supp}(T)$. Since ${I(\phi)}$ is discrete, it follows that $U_s \subset \Bbb P^2 \setminus {\rm supp}(T)$. Suppose that ${I(\phi)}$ is nonempty and $q \in {I(\phi)}$. By the Rouché principle, there is an integer $m \ge 0$ such that $f^{m}(q) \in {I(f)}$. So, it follows that $q \in {\rm supp}(T)$, since the Lelong number of $T$ at $q$ must be strictly positive (this can be shown using the assumption that $f$ is AS). This contradicts $U_s \subset \Bbb P^2 \setminus {\rm supp}(T)$. Hence, ${I(\phi)} = \emptyset$. Again by the Rouché principle,  it follows that the $\{f^{n_k}\}_{k \ge 1}$ are holomorphic in $U_s$ and converge locally uniformly to $\phi$ in $U_s$. This shows that $U_s \subset {\cal F}$. Hence, ${\cal F}_s \subset {\cal F}$. Thus, $${\cal F} = {\cal F}_{s} = {\cal F}_{w}$$ is verified. By (i) in Theorem \[Iv\] and these equalities, ${\cal F}$ is pseudoconvex. So, by Theorem \[Leviprob\], it follows that ${\cal F}$ is Stein. Let us state an important fact which we have already shown in the proof above. \[Feq\] Let $f$ be a dominant AS rational self-map of $\Bbb P^2$ of degree at least 2.
Then,  $${\cal F} = {\cal F}_{s} = {\cal F}_{w}.$$ It is known that ${\cal F} \subset \Bbb P^2 \setminus {\rm supp}(T)$ (Theorem \[Green\]). However, there is an example for which ${\cal F} \neq \Bbb P^2 \setminus {\rm supp}(T)$. See Example 2 in \[RS\]. As a consequence of Theorem \[main2\], the connectivity of the Julia set follows. It is a common property of pseudoconcave sets in the complex projective space. Let $f$ be a dominant AS rational self-map of $\Bbb P^2$ of degree at least 2. Then, the Julia set ${\cal J}$ is connected. [Proof:]{}  The same argument as for Theorem 5.2 in \[FS\] is valid. Let us note that Theorem \[Feq\] does not necessarily hold in the case of non-AS maps. An example is the birational map $f([x:y:z])=[yz:zx:xy]$. When $m$ is even, $f^m$ is the identity map, and when $m$ is odd, $f^m=f$. Hence, it follows immediately that ${\cal F}_{s} = {\cal F}_{w}=\Bbb P^2$. However, since $f$ has the indeterminacy points $[1:0:0],[0:1:0],[0:0:1]$, the set ${\cal F}$ is smaller than $\Bbb P^2$. In fact, ${\cal F}=\Bbb P^2 \setminus \pi(\{xyz=0\})$. [90]{} , [Complex Dynamics in Higher Dimension II, ]{} [*Ann. Math. Studies, *]{} [**137**]{} [, 1994, 201-231.]{} , [On convergency properties of meromorphic functions and mappings (Russian), ]{} [*Complex analysis in modern mathematics, FAZIS, Moscow*]{} [, 2001, 133-151. ]{} [English manuscript avail.
at ArXiv math.CV/9804007.]{} , [Fatou sets for rational maps in $\Bbb P^k$, ]{} [*Michigan Math. J., *]{} [**52**]{} [, 2004, 3-11]{} ,  [Sur les fonctions analytiques de plusieurs variables, VI Domaines pseudoconvexes, ]{} [*T$\hat{o}$hoku Math. J., *]{} [**49**]{} [, 1942]{} ,  [Value distribution for sequences of rational mappings and complex dynamics, ]{} [*Indiana Math. J., *]{} [**46**]{} [, No.3, 1997, 897-932]{} ,  [Dynamique des applications rationnelles de $\Bbb P^k$, ]{} [*Panor. Syntheses, *]{} [**8**]{} [, 1999, 97-185]{} ,  [Domaines pseudoconvexes infinis et la metrique riemannienne dans un espace projectif, ]{} [*J. Math. Soc. Japan, *]{} [**16**]{} [, 1964, 159-181]{} [T. Ueda]{},  [Fatou sets in complex dynamics in projective spaces, ]{} [*J. Math. Soc. Japan, *]{} [**46**]{} [, 1994, 545-555]{} [km@math.h.kyoto-u.ac.jp]{} [^1]: 2000 *Mathematics Subject Classification*. Primary 32H50; Secondary 32Q28.
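As a quick numerical check of the non-AS example $f([x:y:z])=[yz:zx:xy]$ discussed above (a sketch, not part of the paper): applying $f$ twice multiplies each homogeneous coordinate by $xyz$, so $f^2$ is the identity as a projective map, which is why ${\rm deg}(f^2)=1\neq({\rm deg}(f))^2=4$.

```python
def f(p):
    # the birational map f([x:y:z]) = [yz:zx:xy], on representatives in C^3
    x, y, z = p
    return (y * z, z * x, x * y)

def projectively_equal(p, q, tol=1e-9):
    # [p] = [q] in P^2 iff all 2x2 minors of the 2x3 matrix (p; q) vanish
    (x1, y1, z1), (x2, y2, z2) = p, q
    return (abs(x1 * y2 - x2 * y1) < tol
            and abs(y1 * z2 - y2 * z1) < tol
            and abs(x1 * z2 - x2 * z1) < tol)

p = (1.1, 0.7, 1.9)
# f(f(p)) = (xyz) * p, so the two points agree projectively
assert projectively_equal(f(f(p)), p)
```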
--- abstract: 'Faced with saturation of Moore’s law and increasing size and dimension of data, system designers have increasingly resorted to parallel and distributed computing to reduce computation time of machine-learning algorithms. However, distributed computing is often bottlenecked by a small fraction of slow processors called “stragglers” that reduce the speed of computation because the fusion node has to wait for all processors to complete their processing. To combat the effect of stragglers, recent literature proposes introducing redundancy in computations across processors, e.g., using repetition-based strategies or erasure codes. The fusion node can exploit this redundancy by completing the computation using outputs from only a subset of the processors, ignoring the stragglers. In this paper, we propose a novel technique – that we call “Short-Dot” – to introduce redundant computations in a coding theory inspired fashion, for computing linear transforms of long vectors. Instead of computing long dot products as required in the original linear transform, we construct a larger number of redundant and short dot products that can be computed faster and more efficiently at individual processors. Relative to comparable schemes that introduce redundancy to tackle stragglers, Short-Dot reduces the cost of computation, storage and communication, since shorter portions are stored and computed at each processor, and shorter portions of the input are communicated to each processor. Further, only a subset of these short dot products are required at the fusion node to finish the computation successfully, thus enabling us to ignore stragglers. We demonstrate through probabilistic analysis as well as experiments on computing clusters that Short-Dot offers significant speed-up compared to existing techniques.
We also derive trade-offs between the length of the dot-products and the resilience to stragglers (number of processors to wait for), for any such strategy and compare it to that achieved by our strategy.' author: - | Sanghamitra Dutta\ Carnegie Mellon University\ `sanghamd@andrew.cmu.edu` Viveck Cadambe\ Pennsylvania State University\ `viveck@engr.psu.edu` Pulkit Grover\ Carnegie Mellon University\ `pgrover@andrew.cmu.edu` bibliography: - 'sample.bib' title: '“Short-Dot”: Computing Large Linear Transforms Distributedly Using Coded Short Dot Products' --- Introduction {#sec:introduction} ============ This work proposes a coding-theory inspired computation technique for speeding up computing linear transforms of high-dimensional data by distributing it across multiple processing units that compute shorter dot products. Our main focus is on addressing the “straggler effect,” *i.e.*, the problem of delays caused by a few slow processors that bottleneck the entire computation. To address this problem, we provide techniques (building on [@kananspeeding] [@gauristraggler] [@gauriefficient] [@gauri2014delay] [@huang2012codes]) that introduce redundancy in the computation by designing a novel error-correction mechanism that allows the size of individual dot products computed at each processor to be shorter than the length of the input. Shorter dot products offer advantages in computation, storage and communication in distributed linear transforms. The problem of computing linear transforms of high-dimensional vectors is “the" critical step [@dally2015] in several machine learning and signal processing applications. Dimensionality reduction techniques such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), taking random projections, require the computation of short and fat linear transforms on high-dimensional data. 
Linear transforms are the building blocks of solutions to various machine learning problems, e.g., regression, classification, etc., and are also used in acquiring and pre-processing the data through Fourier transforms, wavelet transforms, filtering, etc. Fast and reliable computation of linear transforms is thus a necessity for low-latency inference [@dally2015]. Due to the saturation of Moore’s law, increasing the speed of computing in a single processor is becoming difficult, forcing practitioners to adopt parallel processing to speed up computing for ever-increasing data dimensions and sizes. Classical approaches to computing linear transforms across parallel processors, e.g., Block-Striped Decomposition [@kumar1994introduction], Fox’s method [@fox1987matrix; @kumar1994introduction], and Cannon’s method [@kumar1994introduction], rely on dividing the computational task equally among all available processors[^1] without any redundant computation. The fusion node collects the outputs from each processor to complete the computation and thus has to wait for all the processors to finish. In almost all distributed systems, a few slow or faulty processors – called “stragglers”[@straggler_tail] – are observed to delay the entire computation. This unpredictable latency in distributed systems is attributed to factors such as network latency, shared resources, maintenance activities, and power limitations. In order to combat stragglers, cloud computing frameworks like Hadoop [@hadoop] employ various straggler detection techniques and usually reset the task allotted to stragglers. Forward error-correction techniques offer an alternative approach to deal with this “straggler effect” by introducing redundancy in the computational tasks across different processors. The fusion node now requires outputs from only a subset of all the processors to successfully finish.
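As a toy illustration of this kind of redundancy (a sketch with assumed parameters, in the spirit of the MDS-coded approaches cited below, not the Short-Dot construction itself): encode the $M$ rows of $\bm{A}$ with a $P\times M$ Vandermonde matrix, hand one encoded row to each of $P$ workers, and recover $\bm{A}\bm{x}$ from any $M$ responses.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, P = 3, 8, 5                 # M dot products of length N, P processors (toy sizes)
A = rng.standard_normal((M, N))
x = rng.standard_normal(N)

# Vandermonde encoding: any M of the P rows of G form an invertible matrix,
# because they are a Vandermonde matrix with distinct nodes.
nodes = np.arange(1.0, P + 1.0)
G = np.vander(nodes, M, increasing=True)   # P x M
F = G @ A                                  # row i of F is the task of processor i

y = F @ x                                  # each worker returns one dot product

# Fusion: any M responses suffice; suppose workers {0, 2, 4} finish first.
alive = [0, 2, 4]
recovered = np.linalg.solve(G[alive], y[alive])   # solves G_alive · (A x) = y_alive
assert np.allclose(recovered, A @ x)
```

Note that each encoded row here is still dense, so every worker computes a full $N$-length dot product; shortening those rows while preserving the any-$K$ recovery property is precisely what Short-Dot adds.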
In this context, the use of preliminary erasure codes dates back to the ideas of algorithmic fault tolerance [@ABFT1984] [@faultbook]. Recently, optimized repetition and Maximum Distance Separable (MDS) [@ryan2009channel] codes have been explored [@gauristraggler]  [@gauriefficient]  [@kananspeeding] [@mohammad2016] to speed up computations. We consider the problem of computing $\bm{A}\bm{x}$ where $\bm{A}_{(M \times N)}$ is a given matrix and $\bm{x}_{( N \times 1)}$ is a vector that is input to the computation $(M\ll N)$. In contrast with [@kananspeeding], which also uses codes to compute linear transforms in parallel, we allow the size of individual dot products computed at each processor to be smaller than $N$, the length of the input. **Why might one be interested in computing short dot products while performing an overall large linear transform?**\ One reason is straightforward: the computation time depends on the length of the dot-products computed. Processors are also inherently memory-limited, which limits the size of dot products that can be computed. In some distributed and cloud computing systems, the computation time is dominated by the time taken to communicate $\bm{x}$ to the processors. In systems where multi-casting $\bm{x}$ is not possible or is inefficient, it may be faster to communicate a subset of the co-ordinates of $\bm{x}$ to each processor.
In such systems, we anticipate that communicating shorter vectors, each formed by these subsets of coordinates of $\bm{x}$, is likely to result in substantial speedups over schemes that require the entire $\bm{x}$ vector (in particular when multi-casting is difficult).[^2] In Sections \[sec:analysis\] and \[sec:experiments\], we show both theoretically (under model assumptions inspired from [@kananspeeding] that admit a simplified expected-time analysis while being a crude approximation of experimental observations) and experimentally that the speed-up using Short-Dot can be increased beyond that obtained using the strategy proposed in [@kananspeeding], in straggler-prone environments. To summarize, our main contributions are: 1. To compute $\bm{A} \bm{x}$ for a given matrix $\bm{A}_{(M \times N)}$, we instead compute $\bm{F} \bm{x}$, where we construct $\bm{F}_{(P \times N)}$ (total no. of processors $P$ > required no. of dot-products $M$) such that each $N$-length row of $\bm{F}$ has at most $N(P-K+M)/P$ non-zero elements. Because the locations of zeros in a row of $\bm{F}$ are known by design, this reduces the complexity of computing dot-products of rows of $\bm{F}$ with $\bm{x}$. Here $K$ parameterizes the resilience to stragglers: any $K$ of the $P$ dot products of rows of $\bm{F}$ with $\bm{x}$ are sufficient to recover $\bm{A}\bm{x}$, *i.e.*, any $K$ rows of $\bm{F}$ can be linearly combined to generate the rows of $\bm{A}$.\ 2. We provide fundamental limits on the trade-off between the length of the dot-products and the straggler resilience (number of processors to wait for) for *any* such strategy in Section 3. This suggests a lower bound on the length of the task allotted per processor. Our limits show that Short-Dot is near-optimal.\ 3.
Assuming exponential tails of service-times at each server (used in [@kananspeeding]), we derive the expected computation time required by our strategy and compare it to uncoded parallel processing, the repetition strategy, and MDS-coding-based [@ryan2009channel] linear computation (see Fig. \[multiavp\]). We also explicitly show a regime ($M=\frac{P}{\log P}$) where Short-Dot outperforms all its competing strategies in expected computation time by a factor of $\frac{\log(P)}{\log(\log P)}$, which diverges to infinity for large $P$. In general, Short-Dot is found to be universally faster than all its competing strategies over the entire range of $M \leq P$. When $M$ is linear in $P$, Short-Dot offers a speed-up by a factor of $\Omega(\log(P))$ over uncoded parallel processing and repetition. When $M$ is sub-linear in $P$, Short-Dot outperforms repetition- or MDS-coding-based linear computations by a factor of $\Omega \left(\frac{P}{M\log(P/M)}\right)$.\ 4. We also provide experimental results showing that Short-Dot is faster than existing strategies. In concurrent work [@gradientcoding], Tandon et al. consider a coded computation problem similar to ours for the special case where $M$, the number of $N$-length dot-products to be computed, is $1$ and the given matrix $\bm{A}_{M \times N}$ (in this case just a single row vector) is $[1, 1, \dots , 1]_{1 \times N}$. We note that for $M=1$, the gain of using coded strategies over replication-based strategies is bounded even as $N$ and $P \to\infty$ for $s=\Theta(N/P)$. Our paper differs from [@gradientcoding] in that we consider the more general case $M\geq 1$, and observe that the gains over replication can be unbounded with this scaling in the regime $s=\Theta(MN/P)$. For $M>1$, the number of operations per processor using our strategy is lower than an application of [@gradientcoding] for the same worst-case straggler resilience.
To see this, note that a straightforward extension of the strategy proposed in [@gradientcoding] that encodes each row of $\bm{A}$ separately for $M (> 1)$ rows would require $M$ dot-products of length $\frac{N(P-K+1)}{P}$ at each processor, while using a “joint” encoding across rows, Short-Dot only requires a single dot-product of length $\frac{N(P-K+M)}{P}$ (note that $ \frac{N(P-K+M)}{P} < M \times \frac{N(P-K+1)}{P}$) at each processor, while still requiring the same number of processors (any $K$ out of $P$) to finish. Further, we also provide a tighter converse for $M > 1$ that proves that Short-Dot is near-optimal. It is worth noting that [@gradientcoding] also introduces the notion of partial stragglers, which is outside the scope of our paper. For the rest of the paper, we define the sparsity of a vector $\bm{u} \in \mathbb{R}^N$ as the number of nonzero elements in the vector, *i.e.*, $ \|\bm{u}\|_0 = \sum_{j=1}^N \mathcal{I}(u_j \neq 0) $. We also assume $N$ is quite large compared to $P$, so that it is reasonable to assume that $P$ divides $N$ ($P \ll N$).

Comparison with existing strategies:
------------------------------------

Consider the problem of computing a single dot product of an input vector $\bm{x} \in \mathbb{R}^N$ with a pre-specified vector $\bm{a} \in \mathbb{R}^N$. By an “uncoded” parallel processing strategy (which includes Block-Striped Decomposition [@kumar1994introduction]), we mean a strategy that does not use redundancy to overcome delays caused by stragglers. One uncoded strategy is to partition the dot product into $P$ smaller dot products, where $P$ is the number of available processors. E.g., $\bm{a}$ can be divided into $P$ parts – constructing $P$ short vectors of sparsity $N/P$ – with each vector stored in a different processor (as shown in Fig. \[single\_dot\] left). Only the nonzero values of each vector need to be stored, since the locations of the nonzero values are known a priori at every node.
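As a concrete illustration, this uncoded block-partition strategy can be sketched in a few lines. This is a toy sketch with NumPy; the sizes and variable names are our own illustrative choices, not taken from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 12, 4                       # illustrative sizes; we assume P divides N
a, x = rng.standard_normal(N), rng.standard_normal(N)

# Processor i stores only the N/P nonzero entries of its short vector
chunks = np.split(np.arange(N), P)
partials = [a[S] @ x[S] for S in chunks]   # computed in parallel, one per processor

# The fusion node must wait for ALL P partial sums -- no straggler tolerance
assert np.isclose(sum(partials), a @ x)
```

The final assertion highlights the weakness discussed next: the full dot product is only available once every one of the $P$ partial sums has arrived.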
One might expect the computation time for each processor to reduce by a factor of $P$. However, now the fusion node has to wait for all the $P$ processors to finish their computation, and the stragglers can delay the entire computation. Can we construct $P$ vectors such that dot products of a subset of them with $\bm{x}$ are sufficient to compute $\langle \bm{a},\bm{x} \rangle $? A simple coded strategy is repetition with block partitioning, *i.e.*, constructing $L$ vectors of sparsity $N/L$ by partitioning the vector of length $N$ into $L$ parts $(L < P)$, and repeating the $L$ vectors $P/L$ times so as to obtain $P$ vectors of sparsity $N/L$, as shown in Fig. \[single\_dot\] (right). For each of the $L$ parts of the vector, the fusion node only needs the output of one processor among all its repetitions. Instead of a single dot-product, if one requires the dot-product of $\bm{x}$ with $M$ vectors $\{\bm{a}_1,\ldots,\bm{a}_M\}$, one can simply repeat the aforementioned strategy $M$ times. For multiple dot-products, an alternative repetition-based strategy is to compute the $M$ dot products $P/M$ times in parallel at different processors. Now we only have to wait for at least one processor corresponding to each of the $M$ vectors to finish (see Fig. \[fig:1c\]). Improving upon repetition, it is shown in [@kananspeeding] that a $(P,M)$-MDS code allows constructing $P$ coded vectors such that any $M$ of the $P$ dot-products can be used to reconstruct all the $M$ original vectors (see Fig. \[fig:1d\]). This strategy is shown, both experimentally and theoretically, to perform better than repetition and uncoded strategies. \[fig:1b\] \[fig:1c\] \[fig:1d\] \[fig:1e\] *Can we go beyond MDS codes?* MDS-code-based strategies require $N$-length dot-products to be computed on each processor.
Short-Dot instead constructs $P$ vectors of sparsity $s$ (less than $N$), such that the dot product of $\bm{x}$ with any $K \ (\geq M)$ out of these $P$ short vectors is sufficient to compute the dot-products of $\bm{x}$ with all the $M$ given vectors (see Fig. \[fig:1e\]). Compared to MDS codes, Short-Dot is more flexible: it waits for more processors (since $K \geq M$), but each processor computes a shorter dot product. Short-Dot also effectively reduces the communication cost, since only a shorter portion of the input vector needs to be communicated to each processor. We also propose Short-MDS, an extension of the MDS-code-based strategy in [@kananspeeding] that creates short dot-products of length $s$ through block partitioning, and compare it with Short-Dot. In regimes where $\frac{N}{s}$ is an integer, Short-MDS may be viewed as a special case of Short-Dot. But when $\frac{N}{s}$ is not an integer, Short-MDS has to wait for more processors in the worst case than Short-Dot for the same sparsity $s$, as discussed in Remark 2 in Section \[sec:shortdot\].

Our coded parallelization strategy: Short-Dot {#sec:shortdot}
=============================================

In this section, we provide our strategy for computing the linear transform $\bm{A}\bm{x}$, where $\bm{x} \in \mathbb{R}^N$ is the input vector and $\bm{A}_{(M \times N)}=[\bm{a}_1,\bm{a}_2,\ldots,\bm{a}_M]^T$ is a given matrix. Short-Dot constructs a $P \times N$ matrix $\bm{F}=[\bm{f}_1,\bm{f}_2,\ldots,\bm{f}_P]^T$ such that $M$ predetermined linear combinations of *any* $K$ rows of $\bm{F}$ are sufficient to generate each of $\{ \bm{a}_1^T, \ldots, \bm{a}_M^T \}$, and any row of $\bm{F}$ has sparsity at most $s = \frac{N}{P}(P-K+M)$. Each sparse row of $\bm{F}$ (say $\bm{f}_i^T$) is sent to the $i$-th processor ($i=1,\ldots,P$), and dot-products of $\bm{x}$ with all sparse rows are computed in parallel. Let $S_i$ denote the support (set of non-zero indices) of $\bm{f}_i$.
Thus, for any unknown vector $\bm{x}$, short dot products of length $|S_i| \leq s = \frac{N}{P}(P-K+M)$ are computed on each processor. Since a linear combination of any $K$ rows of $\bm{F}$ can generate the rows of $\bm{A}$, *i.e.*, $\{ \bm{a}_1^T, \bm{a}_2^T, \ldots, \bm{a}_M^T \}$, the dot-products from the earliest $K$ out of $P$ processors can be linearly combined to obtain the linear transform $\bm{A}\bm{x}$. Before formally stating our algorithm, we first provide an insight into why such a matrix $\bm{F}$ exists in the following theorem, and develop an intuition for the construction strategy. Given row vectors $\{ \bm{a}_1^T, \bm{a}_2^T, \ldots, \bm{a}_M^T \}$, there exists a $P \times N$ matrix $\bm{F}$ such that a linear combination of *any* $K (>M)$ rows of the matrix is sufficient to generate the row vectors, and each row of $\bm{F}$ has sparsity at most $s = \frac{N}{P}(P-K+M) $, provided $P$ divides $N$. *Proof:* We may append $(K-M)$ rows to $\bm{A}=[\bm{a}_1,\bm{a}_2,\ldots,\bm{a}_M]^T $ to form a $K \times N$ matrix $\bm{\tilde{A}}=[\bm{a}_1,\bm{a}_2,\ldots,\bm{a}_M,\bm{z}_{1},\ldots,\bm{z}_{K-M}]^T$. The precise choice of these additional vectors will be made explicit later. Next, we choose $\bm{B}$, a $P \times K$ matrix such that *any square sub-matrix of $\bm{B}$ is invertible*[^3]. The following lemma shows that *any* $K$ rows of the matrix $\bm{B\tilde{A}}$ are sufficient to generate any row of $\bm{\tilde{A}}$, including $\{\bm{a}_1^T,\bm{a}_2^T,\ldots,\bm{a}_M^T\}$: Let $\bm{F}=\bm{B\tilde{A}}$, where $\bm{\tilde{A}}$ is a $K \times N$ matrix and $\bm{B}$ is any $(P\times K)$ matrix such that every square sub-matrix is invertible. Then, any $K$ rows of $\bm{F}$ can be linearly combined to generate any row of $\bm{\tilde{A}}$. *Proof:* Choose an arbitrary index set $\chi \subset \{1,2,\ldots, P\} $ such that $|\chi|=K$. Let $\bm{F}^{\chi}$ be the sub-matrix formed by the chosen $K$ rows of $\bm{F}$ indexed by $\chi$.
Then, $\bm{F}^{\chi}=\bm{B}^{\chi}\bm{\tilde{A}}$. Now, $\bm{B}^{\chi}$ is a $K \times K$ sub-matrix of $\bm{B}$, and is thus invertible. Thus, $ \bm{\tilde{A}}=(\bm{B}^{\chi})^{-1}\bm{F}^{\chi} $. The $i$-th row of $\bm{\tilde{A}}$ is $[i$-th row of $(\bm{B}^{\chi})^{-1}]\bm{F}^{\chi}$ for $i=1,2,\ldots,K$. Thus, each row of $\bm{\tilde{A}}$ is generated by the chosen $K$ rows of $\bm{F}$. $\blacksquare$ In the next lemma, we show how the row sparsity of $\bm{F}$ can be constrained to be at most $\frac{N}{P}(P-K+M)$ by appropriately choosing the appended vectors $\bm{z}_{1},\ldots,\bm{z}_{K-M}$. Given an $M \times N$ matrix $\bm{A}=[\bm{a}_1,\ldots,\bm{a}_M]^T$, let $\bm{\tilde{A}}=[\bm{a}_1,\ldots,\bm{a}_M,\bm{z}_{1},\ldots,\bm{z}_{K-M}]^T$ be a $K \times N$ matrix formed by appending $K-M$ row vectors to $\bm{A}$. Also let $\bm{B}$ be a $P \times K$ matrix such that every square sub-matrix is invertible. Then there exists a choice of the appended vectors $\bm{z}_{1},\ldots,\bm{z}_{K-M}$ such that each row of $\bm{F} = \bm{B}\bm{\tilde{A}}$ has sparsity at most $s =\frac{N}{P}(P-K+M)$. *Proof:* We select a sparsity pattern that we want to enforce on $\bm{F}$, and then show that there exists a choice of the appended vectors $\bm{z}_{1},\ldots,\bm{z}_{K-M}$ such that the pattern can be enforced.\ **Sparsity pattern enforced on $\bm{F}$:** This is illustrated in Fig. \[sparsity\_fig\]. First, we construct a $P \times P$ “unit block” with a cyclic structure of nonzero entries, where $(K-M)$ zeros in each row and column are arranged as shown in Fig. \[sparsity\_fig\]. Each row and column has at most $s_c= P-K+M$ non-zero entries. This unit block is replicated horizontally $N/P$ times to form a $P \times N$ matrix with at most $s_c$ non-zero entries in each column, and at most $s=Ns_c/P$ non-zero entries in each row. We now show how the choice of $\bm{z}_{1},\ldots,\bm{z}_{K-M}$ can enforce this pattern on $\bm{F}$.
From $\bm{F}=\bm{B}\bm{\tilde{A}} $, the $j$-th column of $\bm{F}$ can be written as $\bm{F}_j=\bm{B}\bm{\tilde{A}}_j $. Each column of $\bm{F}$ has at least $K-M$ zeros, at locations indexed by $U \subset \{1,2,\ldots, P\}$. Let $\bm{B}^U$ denote the $( (K-M) \times K )$ sub-matrix of $\bm{B}$ consisting of the rows of $\bm{B}$ indexed by $U$. Thus, $ \bm{B}^U \bm{\tilde{A}}_j= [\bm{0}]_{(K-M) \times 1} $. Divide $\bm{\tilde{A}}_j$ into two portions of lengths $M$ and $K-M$ as follows: $\bm{\tilde{A}}_j =[\bm{A}_j^T \ \ | \ \bm{z}^T]^T \ =[ a_1(j) \ a_2(j) \ldots a_M(j) \ \ \ z_1(j)\ \ldots \ z_{K-M}(j)]^T $. Here $\bm{A}_j= [ a_1(j) \ a_2(j) \ldots a_M(j)]^T $ is the $j$-th column of the given matrix $\bm{A}$, and $\bm{z} =[ z_{1}(j),\ \ldots \ z_{K-M}(j)]^T$ depends on the choice of the appended vectors. Thus, $$\begin{aligned} & \nonumber \bm{B}^U_{cols \ 1:M}\bm{A}_j + \bm{B}^U_{cols \ M+1:K}\ \bm{z}\ = [\bm{0}]_{(K-M) \times 1} \\ \Rightarrow & \bm{B}^U_{cols \ M+1:K}\ \bm{z}\ = - \bm{B}^U_{cols \ 1:M}[\bm{A}_j] \\ \Rightarrow & [\ \bm{z}\ ]= - (\bm{B}^U_{cols \ M+1:K})^{-1} \ \bm{B}^U_{cols \ 1:M}[\bm{A}_j]\end{aligned}$$ where the last step uses the fact that $[\bm{B}^U_{cols \ M+1:K}]$ is invertible, because it is a $(K-M) \times (K-M)$ square sub-matrix of $\bm{B}$. This explicitly provides the vector $\bm{z}$ which completes the $j$-th column of $\tilde{\bm{A}}$. The other columns of $\tilde{\bm{A}}$ can be completed similarly, proving the lemma. $\blacksquare$\ From Lemmas 1 and 2, for a given $M \times N$ matrix $\bm{A}$, there always exists a $P \times N$ matrix $\bm{F}$ such that a linear combination of *any* $K$ rows of $\bm{F}$ is sufficient to generate the given row vectors, and each row of $\bm{F}$ has sparsity at most $s = \frac{N}{P}(P-K+M) $. This proves the theorem. $\blacksquare$\ **Remark 1: Relaxed conditions on matrix $\bm{B}$.** It has been stated in *Lemmas* $1$ and $2$ that all square sub-matrices of $\bm{B}$ need to be invertible.
A matrix with *i.i.d.* Gaussian entries can be shown to satisfy this property with probability $1$. In fact, the condition on $\bm{B}$ in *Lemmas* $1$ and $2$ can be relaxed, as evident from the proof. For the matrix $\bm{B}_{P \times K}$ we only need two conditions: (1) all $K \times K$ square sub-matrices are invertible; (2) all $(K-M) \times (K-M)$ square sub-matrices in the last $K-M$ columns of $\bm{B}$ are invertible. A Vandermonde matrix satisfies both these properties and thus can be used for encoding in Short-Dot. With this insight in mind, we now formally state our computation strategy: **\[A\] Pre-Processing Step: Encode $\bm{F}$ (Performed Offline)** **Given:** $ \bm{A}_{M \times N}=[\bm{a}_1,\ldots,\bm{a}_M]^T=[\bm{A}_1, \bm{A}_2,\ldots, \bm{A}_N],\ \ parameter\ K , Matrix \ \bm{B}_{P \times K}$ **For** $ \ j = 1 \ to \ N \ $ **do** **Set** $ \ \ \ \ U \gets (\{ (j-1), \ldots, (j+K-M-1) \} \mod P) +1 $\ $\rhd$ The set of $(K-M)$ indices that are 0 for the $j$-th column of $\bm{F}$ **Set** $\ \ \ \bm{B}^U \gets \text{Rows of } \bm{B} \text{ indexed by $U$}$ **Set** $ \ \ \ \ \ [\ \bm{z}\ ]= - (\bm{B}^U_{cols \ M+1:K})^{-1} \ \bm{B}^U_{cols \ 1:M}[\bm{A}_j] $ $\rhd$ $\bm{z}_{(K-M) \times 1}$ is a column vector.
**Set** $ \ \ \ \ \ \bm{F}_{j} = \bm{B}[\bm{A}_j^T \ | \bm{z}^T \ ]^T $ $\rhd$ $\bm{F}_{j}$ is a column vector (the $j$-th column of $\bm{F}$) **Encoded Output:** $\bm{F}_{P \times N}=[\bm{f}_1 \bm{f}_2 \ldots \bm{f}_P]^T$ $\rhd$ Row representation of matrix $\bm{F}$ **For** $ \ i = 1 \ to \ P \ $ **do** **Store** $ \ \ \ S_i \gets Support(\bm{f}_i)$ $\rhd$ Indices of non-zero entries in the $i$-th row of $\bm{F}$ **Send** $ \ \ \ \bm{f}_i^{S_i}$ to $i$-th processor $\rhd$ $i$-th row of $\bm{F}$ sent to $i$-th processor **\[B\] Online computations** **External Input:** $ \bm{x}$ **Resources:** $ P$ parallel processors $(P >M)$ **\[B1\] Parallelization Strategy: Divide task among parallel processors:** **For** $ \ i = 1 \ to \ P \ $ **do** **Compute** $\langle \bm{f}_i^{S_i} , \bm{x}^{S_i} \rangle$ $\rhd$ $\bm{u}^S$ denotes only the rows of vector $\bm{u}$ indexed by $S$ **Output:** $\langle \bm{f}_i^{S_i} , \bm{x}^{S_i} \rangle$ from the $K$ earliest processors **\[B2\] Fusion Node: Decode the dot-products from the processor outputs:** **Set** $\ \ \ \ V \gets \text{Indices of the } K \text{ processors that finished first}$ **Set** $\ \ \ \ \bm{B}^V \gets \text{Rows of } \bm{B} \text{ indexed by $V$}$ **Set** $\ \ \ \ \bm{v}_{K \times 1} \gets [\langle \bm{f}_i^{S_i} , \bm{x}^{S_i} \rangle , \ \forall \ i \ \in V]$ $\rhd$ Column vector of outputs from the first $ K $ processors **Set** $\bm{Ax} = [\langle \bm{a}_1,\bm{x} \rangle, \ldots, \langle\bm{a}_M,\bm{x}\rangle]^T \gets [(\bm{B}^V)^{-1}]^{rows \ 1:M} \bm{v}$ **Output:** $\langle \bm{x} ,\bm{a}_1 \rangle, \ldots, \langle\bm{x}, \bm{a}_M\rangle $

  Strategy     Length   Parameter $K$
  ------------ -------- ---------------------------------------------------
  Repetition   $N$      $P- \left \lfloor{\frac{P}{M}}\right \rfloor +1 $
  MDS          $N$      $M$
  Short-Dot    $s$      $P-\left \lfloor{\frac{Ps}{N}}\right \rfloor+M$

  : Trade-off between the length of the dot-products and parameter $K$ for different strategies[]{data-label="results_1"}

  Strategy                          Length   Parameter $K$
  --------------------------------- -------- -------------------------------------------------------------------------------
  Repetition with block partition   $s$      $ P-\left \lfloor{\frac{P}{M\left \lceil{N/s}\right \rceil}}\right \rfloor+1$
  Short-MDS                         $s$      $ P-\left \lfloor{\frac{P}{\left \lceil{N/s}\right \rceil}}\right \rfloor+M$

  : Trade-off between the length of the dot-products and parameter $K$ for different strategies[]{data-label="results_1"}

**Remark 2: Short-MDS - a special case of Short-Dot.** An extension of the MDS-code-based strategy proposed in [@kananspeeding], which we call Short-MDS, can be designed to achieve row-sparsity $s$. First, block-partition the matrix of $N$ columns into $\left \lceil{N/s}\right \rceil$ sub-matrices of size $M \times s$, and also divide the total of $P$ processors equally into $\left \lceil{N/s}\right \rceil$ parts. Now, each sub-matrix can be encoded using a $(\frac{P}{\left \lceil{N/s}\right \rceil},M)$ MDS code. In the worst case, including all integer effects, this strategy requires $K=P-\left \lfloor{\frac{P}{\left \lceil{N/s}\right \rceil}}\right \rfloor+M$ processors to finish. In comparison, Short-Dot requires $K=P- \left \lfloor{\frac{Ps}{N}}\right \rfloor+M$ processors to finish. In the regime where $s$ exactly divides $N$, Short-MDS can be viewed as a special case of Short-Dot, as both expressions match. However, in the regime where $s$ does not exactly divide $N$, Short-MDS requires more processors to finish in the worst case than Short-Dot. Short-Dot is a generalized framework that can achieve a wider variety of pre-specified sparsity patterns, as required by the application. In Table \[results\_1\], we compare the lengths of the dot-products and the straggler resilience $K$, *i.e.*, the number of processors to wait for in the worst case, for different strategies.
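The offline encoding step \[A\] and the online decoding step \[B2\] above can be sketched end-to-end in NumPy. This is a minimal sketch with toy sizes of our own choosing (not from the paper's experiments); a Gaussian $\bm{B}$ is used, which satisfies the invertibility conditions with probability $1$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (our choice): P divides N, and sparsity s = N*(P-K+M)/P = 8
M, N, P, K = 2, 12, 6, 4
A = rng.standard_normal((M, N))    # given matrix
B = rng.standard_normal((P, K))    # Gaussian B: square sub-matrices invertible w.p. 1

# --- [A] Encoding: build F = B @ A_tilde, column by column ---
F = np.zeros((P, N))
for j in range(N):
    # Cyclically chosen K-M row indices forced to zero in column j
    U = [(j + t) % P for t in range(K - M)]
    BU = B[U, :]
    # Solve B^U_{cols M+1:K} z = -B^U_{cols 1:M} A_j for the appended entries z
    z = -np.linalg.solve(BU[:, M:], BU[:, :M] @ A[:, j])
    F[:, j] = B @ np.concatenate([A[:, j], z])

s = N * (P - K + M) // P
# Each row of F is short: at most s nonzeros (rounding clears ~1e-16 residues)
assert all(np.count_nonzero(np.round(F[i], 10)) <= s for i in range(P))

# --- [B1] Online: each processor computes one short dot product with x ---
x = rng.standard_normal(N)
outputs = F @ x                    # in practice done in parallel over supports S_i

# --- [B2] Decoding: any K of the P outputs recover A @ x ---
V = [5, 2, 0, 3]                   # indices of the K earliest processors (arbitrary)
Ax = np.linalg.inv(B[V, :])[:M, :] @ outputs[V]
assert np.allclose(Ax, A @ x)
```

The two assertions check exactly the two guarantees of the construction: every row of $\bm{F}$ has sparsity at most $s$, and the outputs of any $K$ processors suffice to recover $\bm{A}\bm{x}$.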
Limits on trade-off between the length of dot-products and parameter *K* {#sec:fundamental}
========================================================================

In this section, we derive fundamental trade-offs between the length of the dot-products computed at each individual processor and the number of processors to wait for, *i.e.*, $K$, which parametrizes the resilience to stragglers. First, we derive an information-theoretic limit in Theorem \[thm:fundamental1\] that holds for any matrix $\bm{A}$ such that each column has at least one non-zero entry[^4]. In Theorem \[thm:fundamental2\], we show how this bound can be tightened further, so that in the limit of a large number of columns of the matrix $\bm{A}$, Short-Dot is near-optimal. \[thm:fundamental1\] Let $\bm{A}_{M \times N}$ be any matrix such that each column has at least one non-zero element. For any matrix $\bm{F}_{P \times N}$ satisfying the property that the span of its *any* $K$ rows contains the span of the $M$ rows of $\bm{A}_{M \times N}$, the average sparsity $\bar{s}$ over the rows of $\bm{F}_{P \times N}$ must satisfy $\bar{s} \geq \frac{N}{P}\big(P-K + 1\big) $. *Proof:* We claim that $K$ is strictly greater than the maximum number of zeros that can occur in any column of the matrix $\bm{F}$. If not, suppose the $j$-th column of $\bm{F}$ has $K$ or more zeros. Then there exists a choice of $K$ rows of $\bm{F}$ such that any linear combination of these rows is always $0$ at the $j$-th column index. However, since the $j$-th column of $\bm{A}$ has at least one non-zero entry, say at row $i$, it is not possible to generate the $i$-th row of $\bm{A}$ by linearly combining these chosen $K$ rows of $\bm{F}$. Thus, $$\begin{aligned} K &\geq 1 + \text{max.\ no.\ of zeros in any column of } \bm{F} \label{eq:col-bound1}\\ & \geq 1+ \text{avg.\ no.\ of zeros over the columns of } \bm{F} \label{eq:col-bound2}\end{aligned}$$ Here, the last line follows since the maximum is at least the average.
Note that if $\bar{s}$ is the average sparsity over the rows of $\bm{F}_{P \times N}$, then the average number of zeros over the columns of $\bm{F}_{P \times N}$ can be written as $\frac{(N-\bar{s})P}{N}$. Thus, from \[eq:col-bound2\], $$K \ \geq 1+ \frac{(N-\bar{s})P}{N}.$$ A slight re-arrangement establishes the lower bound in Theorem \[thm:fundamental1\]. $\blacksquare$\ Recall that Short-Dot achieves a column sparsity of at most $(P-K+M)$, while the hard lower bound from this proof is $(P-K+1)$. The bound is tight for $M=1$. The bound on the average row-sparsity, $\bar{s} \geq \frac{N}{P}\big(P-K + 1\big) $, is likewise tight only for $M=1$ (implicitly assuming $P$ divides $N$, since $P \ll N$). We now tighten this bound further for $M>1$.

Tighter Fundamental Bounds
--------------------------

\[thm:fundamental2\] Let $M>1$. Then there exists a matrix $\bm{A}_{M \times N}$ such that any $\bm{F}_{P \times N}$, satisfying the property that any $K$ rows of $\bm{F}_{P \times N}$ can span all the rows of $\bm{A}_{M \times N}$, must also satisfy the following property: the average sparsity over the rows of $\bm{F}_{P \times N}$ is lower bounded as $$\bar{s} > \frac{N}{P}\big(P-K +M\big) - \frac{M^2}{P}\binom{P}{K-M+1} \label{eq:fundamental-nonasymptotic}$$ Moreover, if $N$ is sufficiently large such that $M^2 \binom{P}{K-M+1} =o(N)$, then the average sparsity over the rows of $\bm{F}_{P \times N}$ is lower bounded as $$\bar{s} > \frac{N}{P}(P-K +M) - o\Big(\frac{N}{P}\Big) \label{eq:fundamental-asymptotic}$$ Note that the second term in the lower bound in \[eq:fundamental-nonasymptotic\] does not depend on $N$. Thus, if $N$ is sufficiently larger than $P$ and $M$, the second term in the lower bound becomes negligible compared to the first term, and the first term is precisely what Short-Dot can achieve. Thus, from this lower bound, we can conclude that when $N$ is large, Short-Dot is near-optimal. Before proceeding with the proof, we give a basic intuition for the proof technique.
We divide the columns of $\bm{F}_{P \times N}$ into two groups: one with at most $(K-M)$ zeros, and the other with more than $(K-M)$ zeros. Then we show that there exist matrices $\bm{A}_{M \times N}$ such that the number of columns in the latter group, *i.e.*, with more than $(K-M)$ zeros, is bounded, and this in turn bounds the average sparsity. Now we formally prove the theorem. *Proof:* Let us denote the number of columns of $\bm{F}_{P \times N}$ with more than $(K-M)$ zeros by $\lambda$. We will show later in *Lemma 3* that $\lambda < M\binom{P}{K-M+1}$. Now, compute the average number of zeros over the columns of $\bm{F}$. The columns of $\bm{F}$ can be divided into two groups: $\lambda$ columns with more than $(K-M)$ zeros, and $(N-\lambda)$ columns with at most $(K-M)$ zeros. Recall from \[eq:col-bound1\] that if $\bm{A}$ is chosen such that every column has at least one non-zero entry, then the maximum number of zeros in any column of $\bm{F}$ is upper bounded by $(K-1)$. Thus, the group of $\lambda$ columns can have at most $K-1$ zeros each. Thus, $$\begin{aligned} \label{column_bound} &\text{Avg. no. of 0s per column} \leq \frac{ (K-1)\lambda + (K-M)(N-\lambda) }{N} \nonumber \\ & = K-M + \frac{\lambda(M-1)}{N} \overset{Lemma \ 3}{<} K-M + \frac{M^2 \binom{P}{K-M+1}}{N} \end{aligned}$$ If $\bar{s}$ is the average sparsity over the rows of $\bm{F}$, then the average number of zeros per column of $\bm{F}$ is given by $\frac{(N-\bar{s})P}{N}$. Thus, $$\frac{(N-\bar{s})P}{N}< (K-M)+ \frac{M^2}{N} \binom{P}{K-M+1}$$ After a slight re-arrangement, the average sparsity over the rows of $\bm{F}$ can be bounded as: $$\bar{s} > \frac{N}{P}\left(P-K +M\right ) - \frac{M^2}{P} \binom{P}{K-M+1}$$ This proves the first part of the theorem, *i.e.*, \[eq:fundamental-nonasymptotic\]. Using the condition that $M^2\binom{P}{K-M+1}=o(N)$ in \[column\_bound\], we also obtain $$\bar{s} > \frac{N}{P}\left(P-K +M\right ) - o\left(\frac{N}{P}\right)$$ which is \[eq:fundamental-asymptotic\]. Thus, the theorem is proved. $\blacksquare$ It now only remains to prove *Lemma 3*.
*Lemma 3:* Let $M>1$. Then there exists a matrix $\bm{A}_{M \times N}$ such that any $\bm{F}_{P \times N}$, satisfying the property that any $K$ rows of $\bm{F}_{P \times N}$ can span all the rows of $\bm{A}_{M \times N}$, must also satisfy the following property: the number of columns $(\lambda)$ with more than $K-M$ zeros is upper bounded as $\lambda < M\binom{P}{K-M+1}$. *Proof:* Assume $\lambda \geq M\binom{P}{K-M+1}$. Now, a column with more than $(K-M)$ zeros has at least $(K-M+1)$ zeros. There can be at most $\binom{P}{K-M+1}$ different patterns in which $(K-M+1)$ zeros can occur in a column of length $P$. Every column with more than $(K-M)$ zeros thus contains one of these $\binom{P}{K-M+1}$ column sparsity patterns, possibly with additional zeros. By a pigeon-hole argument, at least one of these sparsity patterns of $(K-M+1)$ zeros must occur in $ \frac{\lambda}{\binom{P}{K-M+1}}$ columns or more. Let us consider the sub-matrix of $\bm{F}$, of size $P \times \frac{\lambda}{\binom{P}{K-M+1}}$, consisting only of columns of $\bm{F}$ having $(K-M+1)$ zeros in the same locations, *i.e.*, with the same sparsity pattern. Any $K$ rows of this sub-matrix of $\bm{F}$ should generate all the rows of the corresponding $M \times \frac{\lambda}{\binom{P}{K-M+1}}$ sub-matrix of the given $\bm{A}$, consisting of the same columns of $\bm{A}$ as picked in this sub-matrix of $\bm{F}$. There always exists a fully dense matrix $\bm{A}$ such that any such $M \times \frac{\lambda}{\binom{P}{K-M+1}}$ sub-matrix of $\bm{A}$ is of full rank, since $\bm{A}$ can be arbitrary. This sub-matrix of $\bm{A}$ is of rank $\min\{M , \frac{\lambda}{\binom{P}{K-M+1}} \} = M $ (from the assumption). Any $K$ rows of the sub-matrix of $\bm{F}$ should generate the $M$ linearly independent rows of this sub-matrix of $\bm{A}$.
But since the sub-matrix of $\bm{F}$ has $(K-M+1)$ rows consisting of all zeros, there is a choice of $K$ rows such that all these zero rows are chosen, and we are then left with at most $M-1$ non-zero rows to generate $M$ linearly independent rows of $\bm{A}$. This is a contradiction. Thus, we must have $\lambda < M\binom{P}{K-M+1}$. $\blacksquare$

Analysis of expected computation time for exponential tail models {#sec:analysis}
=================================================================

We now provide a probabilistic analysis of the computation time required by Short-Dot and compare it with uncoded parallel processing, repetition, and the MDS-coding-based linear computation scheme, as shown in Fig. \[fig0\]. We follow the shifted-exponential computation time model described in [@kananspeeding]. Although the shifted exponential distribution may only be a crude approximation of the delay of real systems, we use it since it is analytically tractable and allows for a fair comparison with the strategy proposed in [@kananspeeding]. We assume that the time required by a processor to compute a single dot-product of length $N$ is distributed as: $$\label{eq:dist} \Pr(T_N \leq t) = \begin{cases} 1 - \exp \left(-\mu \ \left(\frac{t}{N}-1\right) \right) , & \forall \ t \geq N \\ 0, & \text{otherwise} \end{cases}$$ Here, $\mu (> 0)$ is the “straggling parameter” that determines the unpredictable latency in computation time. Intuitively, the shifted exponential model states that for a task of size $N$, there is a minimum time offset proportional to $N$ such that the probability of completing the task before that time is $0$. The probability density of task completion is maximized at the time offset and then decays with an exponential tail after that.
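The shifted-exponential model \[eq:dist\] is easy to simulate: a task of size $N$ takes time $T = N(1 + E/1)$ where $E$ is exponential with rate $\mu$. The following sketch (with illustrative parameters of our own choosing) checks a sampler against the stated CDF.

```python
import numpy as np

rng = np.random.default_rng(3)
N_task, mu = 10.0, 2.0             # illustrative task size and straggling parameter
# Sample T = N * (1 + Exp(rate mu)), matching Pr(T <= t) = 1 - exp(-mu (t/N - 1))
T = N_task * (1.0 + rng.exponential(scale=1.0 / mu, size=500000))

# Compare the empirical CDF with the model CDF at a few points t >= N
for t in [10.0, 12.0, 15.0, 20.0]:
    empirical = (T <= t).mean()
    model = 1.0 - np.exp(-mu * (t / N_task - 1.0))
    assert abs(empirical - model) < 0.01
```

Note that no sample falls below the offset $N$, reflecting the hard minimum completion time in the model.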
This nature of the model might be attributed to the fact that while a processor is most likely to finish a task of size $N$ in a time proportional to $N$, unpredictable latency due to queuing and various other factors causes an exponential tail. For an $s$-length dot product, we simply replace $N$ by $s$ in \[eq:dist\], as suggested in [@kananspeeding]. The analysis of expected computation time requires closed-form expressions for the $K$-th order statistic, which are simple for exponential tails. However, a more thorough empirical study is necessary to establish any chosen model for straggling in a particular environment. The expected computation time for Short-Dot is the expected value of the $K$-th order statistic of these $P$ *iid* shifted-exponential random variables, which is given by: $$\label{eq:expected} {{\rm I\kern-.3em E}}[T_{SD}] = s \left( 1 + \frac{\log(\frac{P}{P-K})}{\mu} \right) = \frac{(P-K+M)N}{P} \left( 1 + \frac{\log(\frac{P}{P-K})}{\mu} \right).$$ Here, \[eq:expected\] uses the fact that the expected value of the $K$-th order statistic of $P$ *iid* exponential random variables with parameter $1$ is $\sum_{i=1}^P \frac{1}{i} - \sum_{i=1}^{P-K} \frac{1}{i} \approx \log(P)-\log(P-K)$ [@kananspeeding]. The expected computation time on the RHS of \[eq:expected\] is minimized when $P-K = \Theta(M)$. This minimal expected time is $\mathcal{O}(\frac{MN}{P})$ for $M$ linear in $P$ and is $\mathcal{O}\left(\frac{MN\log(P/M)}{P }\right)$ for $M$ sub-linear in $P$. A detailed analysis of the expected computation time for the competing strategies, *i.e.*, the uncoded, repetition, and MDS coding strategies, is provided in the Appendix. Table \[time-table\] shows the order-sense expected computation time in the regimes where $M$ is linear and sub-linear in $P$.
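The expression \[eq:expected\] can be sanity-checked by Monte Carlo simulation of the $K$-th order statistic. This is a sketch with illustrative parameters of our own choosing; the exact harmonic-sum difference is used in place of the $\log(P/(P-K))$ approximation, which is only accurate up to $O(1/(P-K))$ terms.

```python
import numpy as np

rng = np.random.default_rng(2)
P, K, s, mu = 20, 15, 1.0, 1.0     # illustrative parameters (our choice)
trials = 100000

# Per-processor time for an s-length task: T = s * (1 + Exp(rate mu)),
# matching Pr(T <= t) = 1 - exp(-mu (t/s - 1)) for t >= s
T = s * (1.0 + rng.exponential(scale=1.0 / mu, size=(trials, P)))

# The fusion node finishes when the K-th fastest of the P processors finishes
T_K = np.sort(T, axis=1)[:, K - 1]
empirical = T_K.mean()

# E[K-th order statistic of P iid Exp(1)] = H_P - H_{P-K}; the paper
# approximates this harmonic-sum difference by log(P / (P - K))
H = lambda n: sum(1.0 / i for i in range(1, n + 1))
predicted = s * (1.0 + (H(P) - H(P - K)) / mu)

assert abs(empirical - predicted) < 0.02
```

With these toy parameters the empirical mean of the $K$-th order statistic matches the harmonic-sum prediction closely, illustrating why waiting for only $K < P$ processors caps the straggling penalty at $\log(P/(P-K))/\mu$ rather than $\log(P)/\mu$.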
  Strategy                       Expected Time                                                                        $M$ linear in $P$                            $M$ sub-linear in $P$
  ------------------------------ ------------------------------------------------------------------------------------ -------------------------------------------- ------------------------------------------------------------------------
  Only one Processor             $MN\left( 1 +\frac{1}{\mu}\right)$                                                    $\Theta \left(MN \right)$                    $\Theta \left(MN \right)$
  Uncoded ($M$ divides $P$)      $\frac{MN}{P} \left( 1 + \frac{\log(P)}{\mu} \right) $                                $\Theta \left(\frac{MN}{P}\log(P)\right) $   $\Theta \left( \frac{MN}{P}\log(P)\right) $
  Repetition ($M$ divides $P$)   $N\left( 1 + \frac{M\log(M)}{P\mu} \right) $                                          $\Theta \left(\frac{MN}{P} \log(P)\right) $  $\Theta \left(N\right)$
  MDS                            $N\Big( 1 + \frac{\log \big(\frac{P}{P-M}\big)}{\mu} \Big)$                           $\Theta(N)$                                  $\Theta(N)$
  Short-Dot                      $ \frac{N(P-K+M)}{P} \Big( 1 + \frac{\log \left(\frac{P}{P-K}\right)}{\mu} \Big) $    $\mathcal{O}(\frac{MN}{P}) $                 $ \mathcal{O}\left( \frac{MN}{P}\log \left(\frac{P}{M}\right) \right) $

  : Probabilistic Computation Times[]{data-label="time-table"}

Refer to the Appendix for a more accurate analysis taking integer effects into account. Note that in the regime where $M$ is linear in $P$, Short-Dot outperforms the uncoded strategy by a factor diverging to infinity for large $P$. Similarly, in the regime where $M$ is sub-linear in $P$, Short-Dot outperforms the MDS coding strategy by a factor that diverges to infinity for large $P$. Thus, Short-Dot universally outperforms all its competing strategies over the entire range of $M$. Now we explicitly provide a regime where the speed-up from Short-Dot diverges to infinity for large $P$, in comparison to all three competing strategies: MDS coding, repetition, and uncoded. Suppose $M$ scales as $\frac{P}{\log{P}}$. Then, Short-Dot with $K=P-\frac{M}{2}$ has an expected computation time (scaled by $N$) of $\frac{{{\rm I\kern-.3em E}}[T_{SD}]}{N}=O(\frac{\log(\log{P})}{\log{P}})$, which decays to $0$ as $P \to \infty$.
In contrast, the expected computation time (scaled by $N$) for the MDS coding, repetition and uncoded strategies scales as $\Omega(1)$ and thus does not decay to $0$ as $P \to \infty$. *Proof:* For the proof of this theorem, we simply substitute the values of $M$ and $K$ into the expressions for expected computation time as follows. We let $M=\frac{P}{\log P}$ for all the strategies. For the uncoded strategy, we thus obtain $$\frac{{{\rm I\kern-.3em E}}[T_{UC}]}{N} = \frac{M}{P} \left( 1 + \frac{\log(P)}{\mu} \right) = \frac{1}{\log{P}}\left( 1 + \frac{\log(P)}{\mu} \right) \geq \frac{1}{\mu} = \Omega (1).$$ For repetition, we obtain $$\frac{{{\rm I\kern-.3em E}}[T_{Rep}]}{N} = \Big( 1 + \frac{M\log(M)}{P\mu} \Big) \geq 1 = \Omega (1).$$ For MDS-coding-based linear computation, we obtain $$\frac{{{\rm I\kern-.3em E}}[T_{MDS}]}{N} = 1 + \frac{\log\left(\frac{P}{P-M}\right)}{\mu} = 1 + \frac{\log\left(\frac{\log{P}}{\log{P}-1}\right)}{\mu} \geq 1 = \Omega (1).$$ Now, we consider the Short-Dot strategy with $K=P-\frac{M}{2} = P- \frac{P}{2\log{P}}$. Note that the inequality $K > M$ is satisfied for $\log{P}>\frac{3}{2}$. Let us now calculate the expected computation time for Short-Dot: $$\frac{{{\rm I\kern-.3em E}}[T_{SD}]}{N} = \frac{(P-K+M)}{P} \Big( 1 + \frac{\log \big(\frac{P}{P-K}\big)}{\mu} \Big) = \frac{3}{2\log{P}}\left( 1 + \frac{\log(2\log{P})}{\mu} \right) = \mathcal{O}\big( \frac{\log(\log P)}{\log{P}}\big).$$ Thus, the speed-up offered by Short-Dot in this regime is $\frac{\log{P}}{\log(\log P)}$, which diverges to infinity for large $P$, as illustrated in Fig. \[fig:comp\_time\_scaling\].

Encoding and Decoding Complexity
================================

Encoding Complexity:
--------------------

Even though encoding is a pre-processing step (since $\bm{A}$ is assumed to be given in advance), we include a complexity analysis for the sake of completeness.
Recall from Section \[sec:shortdot\] that we first choose an appropriate matrix $\bm{B}$ of dimension $P \times K$, such that every $K \times K$ square sub-matrix is invertible and all $(K-M)\times(K-M)$ sub-matrices in the last $(K-M)$ columns are invertible. Now, for each of the $N$ columns of the given matrix $\bm{A}$, we perform the following.

- **Set** $ \ U \gets (\{ (j-1), \ldots, (j+K-M-1) \} \mod P) +1 $\
  $\rhd$ the set of $(K-M)$ indices that are $0$ for the $j$-th column of $\bm{F}$

- **Set** $ \ \bm{B}^U \gets \text{rows of } \bm{B} \text{ indexed by } U$

- **Solve for** $\bm{z}$: $ \ (\bm{B}^U_{cols \ M+1:K})[\ \bm{z}\ ]= - \ \bm{B}^U_{cols \ 1:M}[\bm{A}_j] $\
  $\rhd$ $\bm{z}_{(K-M) \times 1}$ is a column vector

- **Set** $ \ \bm{F}_{j} = \bm{B}[\bm{A}_j^T \ | \ \bm{z}^T \ ]^T $\
  $\rhd$ $\bm{F}_{j}$ is a column vector (the $j$-th column of $\bm{F}$)

For each of the $N$ columns, the encoding requires a matrix inversion of size $(K-M) \times (K-M)$ to solve a linear system of equations, a matrix-vector product of size $(K-M)\times M $ and another matrix-vector product of size $ P \times K$. The naive encoding complexity is therefore $\mathcal{O}(N((K-M)^3 + (K-M)M + PK))$. Note that there are effectively only $N/P$ different column sparsity patterns for the particular design discussed in this paper. Thus there are effectively $N/P$ unique matrices $\bm{B}^U$, and hence $N/P$ matrix inversions suffice for all $N$ columns, since the sparsity pattern repeats. The complexity can therefore be reduced to $\mathcal{O}(\frac{N}{P}(K-M)^3 + (K-M)MN + PKN)= \mathcal{O}(\frac{N}{P}(K-M)^3 +2PKN )$. This is higher than the $\mathcal{O}(NMP)$ encoding complexity of MDS-coding-based linear computation, but it is only a one-time cost that provides savings in the online steps (as discussed earlier in this section).

### Reduced Complexity using Vandermonde matrices:

The encoding complexity can be reduced further for special choices of the matrix $\bm{B}$.
Let us choose $\bm{B}$ to be a Vandermonde matrix, as given by $$\label{eq:vandermonde} \bm{B}=\begin{bmatrix} h_1^{K-1} & \dots & h_1 & 1\\ h_2^{K-1} & \dots & h_2 & 1\\ \vdots & \ddots & \vdots & \vdots\\ h_P^{K-1} & \dots & h_P & 1 \end{bmatrix}$$ Here, $ h_1, h_2, \dots, h_P \in \mathbb{R} $ are all distinct. This matrix $\bm{B}$ satisfies all the requirements of the encoding matrix: all $K \times K$ sub-matrices of $\bm{B}$ are invertible, and all $(K-M) \times (K-M)$ sub-matrices in the last $(K-M)$ columns are also invertible. Thus, this matrix can be used to encode the matrix $\bm{F}$. For each of the $N$ columns of $\bm{F}$, the encoding requires solving a linear system of equations for $\bm{z}$, as given by $$(\bm{B}^U_{cols \ M+1:K})[\ \bm{z}\ ]= - \ \bm{B}^U_{cols \ 1:M}[\bm{A}_j].$$ Here $U$ denotes a set of $(K-M)$ indices $\in \{1,2,\dots,P\}$. The matrix-vector product $\bm{B}^U_{cols \ 1:M}[\bm{A}_j]$ is equivalent to the evaluation of a polynomial of degree $(K-1)$, with the $K$ coefficients $[\bm{A}_j^T \ \bm{0}^T_{(K-M)\times 1} ]$, at the $(K-M)$ arbitrary points $\{h_l| l \in U\}$. Once this product is obtained, the linear system of equations reduces to the interpolation of the $(K-M)$ unknown coefficients of a polynomial of degree $(K-M-1)$ (which is $\bm{z}$) from its values at the $(K-M)$ arbitrary points $\{h_l| l \in U\}$. Once $\bm{z}$ is obtained, we perform the operation $$\bm{F}_{j} = \bm{B}[\bm{A}_j^T \ | \bm{z}^T \ ]^T.$$ This step is equivalent to the evaluation of a polynomial of degree $(K-1)$ at the $P$ points $\{h_l| l=1,2,\dots P \}$. Thus we decompose our encoding problem for each column of $\bm{A}$ into a collection of polynomial evaluation and interpolation problems, all of degree less than $P$. Now, from [@kung1973fast; @li2000arithmetic], we know that both the interpolation and the evaluation of a polynomial of degree less than $P$ at $P$ arbitrary points can be done in $\mathcal{O}(P \log^2(P))$ operations.
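This polynomial view of the per-column encoding can be sketched directly; a NumPy illustration of ours, using dense `polyval`/`polyfit` as stand-ins for the fast $\mathcal{O}(P\log^2 P)$ multipoint evaluation and interpolation algorithms of [@kung1973fast; @li2000arithmetic]:

```python
import numpy as np

def encode_column_poly(a_j, h, K, U):
    """Encode one column via polynomials: with B the Vandermonde matrix on
    nodes h, B [a_j | z]^T is the evaluation of a degree-(K-1) polynomial
    whose K coefficients are [a_j | z] (highest degree first)."""
    M = len(a_j)
    # 1. evaluate the polynomial with coefficients [a_j | 0...0] at the
    #    K - M nodes indexed by U  (this is B^U_{cols 1:M} a_j)
    vals = np.polyval(np.concatenate([a_j, np.zeros(K - M)]), h[U])
    # 2. interpolate z: the degree-(K-M-1) polynomial taking values -vals
    #    at those same nodes
    z = np.polyfit(h[U], -vals, K - M - 1)
    # 3. evaluate the full degree-(K-1) polynomial at all P nodes
    return np.polyval(np.concatenate([a_j, z]), h)
```

By construction the returned column vanishes at the $K-M$ indices in $U$; replacing the three steps by fast multipoint evaluation and interpolation yields the overall encoding bound stated in the text.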
Thus, the complexity of encoding is $\mathcal{O}(NP \log^2(P))$.

Decoding Complexity:
--------------------

During decoding, we obtain $K$ dot-products from the first $K$ of the $P$ processors to finish. We then perform the following operations.

- **Set** $ \ V \gets \text{indices of the } K \text{ processors that finished first}$

- **Set** $ \ \bm{B}^V \gets \text{rows of } \bm{B} \text{ indexed by } V$

- **Set** $ \ \bm{v}_{K \times 1} \gets [\langle \bm{f}_i^{S_i} , \bm{x}^{S_i} \rangle , \ \forall \ i \ \in V]$\
  $\rhd$ column vector of outputs from the first $ K $ processors

- **Solve for** $\bm{w}_{K \times 1}$: $ \ [(\bm{B}^V)]\bm{w}= \bm{v}$

- **Output:** $\bm{Ax}=[w_1, w_2, \dots w_M]^T $\
  $\rhd$ the first $M$ values of $\bm{w}$

We solve a system of $K$ linear equations in $K$ variables and use only $M$ values of the obtained solution vector. Thus, effectively we perform a single matrix inversion of size $K \times K$ followed by a matrix-vector product of size $K \times M$. The decoding complexity of Short-Dot is thus $\mathcal{O}(K^3 + KM)$, which does not depend on $N$ when $M,K \ll N$. This is nearly the same as the $\mathcal{O}(M^3 + M^2)$ complexity of MDS-coding-based linear computation.

### Reduced Complexity using Vandermonde matrices:

As with encoding, using Vandermonde matrices can reduce the decoding complexity further. As already discussed, we choose the encoding matrix $\bm{B}$ to be the Vandermonde matrix described above. The decoding problem consists of solving a system of $K$ linear equations in $K$ variables, $$[(\bm{B}^V)]\bm{w}= \bm{v}.$$ Here $V$ is a set of $K$ indices $\in \{1,2,\dots,P\}$. The problem of finding $\bm{w}$ is equivalent to the interpolation of the coefficients of a polynomial of degree $(K-1)$ from its values at the $K$ arbitrary points $\{h_l|l \in V\}$.
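The generic decoder above is a single $K \times K$ solve; a minimal NumPy sketch of ours:

```python
import numpy as np

def decode_short_dot(B, V, v, M):
    """Recover A x from the K dot-products v returned by the processors
    indexed by V: solve B^V w = v and keep the first M entries of w."""
    w = np.linalg.solve(B[list(V), :], np.asarray(v))
    return w[:M]
```

Any $K$ finished processors work, since every $K \times K$ sub-matrix of $\bm{B}$ is invertible; with a Vandermonde $\bm{B}$, the same solve is the polynomial interpolation just described.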
Again, from [@kung1973fast; @li2000arithmetic], the interpolation of a polynomial of degree $(K-1)$ at $K$ arbitrary points can be done in $\mathcal{O}(K \log^2(K))$, which thus becomes the decoding complexity.

Experimental Results {#sec:experiments}
====================

We perform experiments on computing clusters at CMU to test the computation time. We use HTCondor [@HTCondor] to schedule jobs simultaneously among the $P$ processors. We compare the time required to classify $10000$ handwritten digits of the MNIST [@lecun1998mnist] database, assuming we are given a trained $1$-layer neural network. We separately trained the neural network on training samples to form a matrix of weights, denoted by $\bm{A}_{10 \times 785}$. For testing, we consider the multiplication of this given $10 \times 785$ matrix with the test data matrix $\bm{X}_{785 \times 10000}$. The total number of processors was $20$. Assuming that $\bm{A}_{10 \times 785}$ is encoded into $\bm{F}_{20 \times 785}$ in a pre-processing step, we store the rows of $\bm{F}$ on the processors a priori. Portions of the data matrix $\bm{X}$ of size $s \times 10000$ are then sent to each of the $P$ parallel processors as input. We also send a C program that computes dot-products of length $s=\frac{N}{P}(P-K+M)$ with the appropriate rows of $\bm{F}$, submitted using the command *condor\_submit*. Each processor outputs the value of one dot-product per test column. The computation time reported in Fig. \[sim\_fig\] includes the total time required to communicate inputs to each processor, compute the dot-products in parallel, fetch the required outputs, decode, and classify all the $10000$ test-images, based on $35$ experimental runs.
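In this setup each processor's job is just a short dot-product per test column over the nonzero support of its stored row; an illustrative Python stand-in of ours for the C worker:

```python
import numpy as np

def worker_job(f_row, x_chunk):
    """What each processor does: dot its stored (short) row of F with its
    input columns, touching only the row's nonzero support of size s."""
    support = np.flatnonzero(f_row)          # the s retained coordinates
    return f_row[support] @ x_chunk[support, :]
```

Only the $s$ coordinates in the support of the stored row need to be communicated and multiplied, which is where the shortening saves both bandwidth and per-processor work.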
| Strategy | Parameter $K$ | Mean | STDEV | Minimum Time | Maximum Time |
|----------|---------------|------|-------|--------------|--------------|
| Uncoded | 20 | 11.8653 | 2.8427 | 9.5192 | 27.0818 |
| Short-Dot | 18 | 10.4306 | 0.9253 | 8.2145 | 11.8340 |
| MDS | 10 | 15.3411 | 0.8987 | 13.8232 | 17.5416 |

  : Experimental computation time of $10000$ dot products ($N=785 , M=10 , P=20 $) \[results\_2\]

**Key Observations:** (See Table \[results\_2\] for detailed results.) Computation time varies with the nature of straggling at the particular instant of the experimental run. Short-Dot outperforms both MDS and Uncoded in mean computation time. Uncoded is faster than MDS since the per-processor computation time for MDS is larger, which increases the straggling, even though MDS waits for only $10$ out of $20$ processors. However, note that Uncoded has more variability than both MDS and Short-Dot, and its maximum time observed during the experiment is much greater than that of both MDS and Short-Dot. The classification accuracy was $85.98 \%$ on the test data.

Discussion {#sec:discussion}
==========

Storage and Communication benefits of Shorter Dot Products:
-----------------------------------------------------------

Errors instead of erasures:
---------------------------

While we focus on the problem of erasures in this paper, Short-Dot can also be used to correct errors. Consider the scenario where, instead of straggling or failing, some processors in a distributed system return entirely faulty or garbage outputs, and we do not know which outputs are erroneous. We argue, using coding-theoretic arguments, that Short-Dot codes designed to tolerate $(P-K)$ stragglers can also correct $\lfloor \frac{(P-K)}{2}\rfloor $ errors. First observe that if the code can tolerate $(P-K)$ stragglers, then the Hamming distance between any two code-words must be at least $(P-K+1)$.
Hence, the number of errors that can be corrected is $\lfloor \frac{(\text{Hamming distance}-1)}{2}\rfloor $, which is $\lfloor \frac{(P-K)}{2}\rfloor $. The same result can also be derived by recasting the decoding problem as a sparse reconstruction problem and borrowing ideas from the standard compressive sensing literature [@candes2005decoding], which also yields a concrete decoding algorithm. The problem reduces to an $l_0$ minimization problem, which can be relaxed into an $l_1$ minimization, or solved using alternate sparse reconstruction techniques, under certain constraints on the encoding matrix $\bm{B}$.

More dot-products than processors
---------------------------------

While we have presented the case of $M<P$ here, Short-Dot easily generalizes to the case where $M \geq P$. The matrix can be divided horizontally into several chunks along the row dimension (shorter matrices) and Short-Dot can be applied to each of those chunks one after another. Moreover, if rows with the same sparsity pattern are grouped together and stored in the same processor initially, then the communication cost during the online computations is also significantly reduced, since only some elements of the unknown vector $\bm{x}$ are sent to a particular processor.

**Acknowledgments:** This work was supported in part by Systems on Nanoscale Information fabriCs (SONIC), one of the six SRC STARnet Centers, sponsored by MARCO and DARPA. We also acknowledge NSF Awards 1350314, 1464336 and 1553248. S. Dutta also received the Prabhu and Poonam Goel Graduate Fellowship.

Appendix
========

We now provide a probabilistic analysis of the computation time required by Short-Dot and compare it with uncoded parallel processing, repetition, and MDS-code-based linear computation, as shown in Fig. \[fig0\]. We assume that the time required by a processor to compute a single dot-product follows a (shifted) exponential distribution and is independent of the other parallel processors.
Assume that the time $T_N$ required to compute a single dot-product of length $N$ follows the distribution $$\Pr(T_N \leq t) = F(t) =\begin{cases} 1 - \exp \left(-\mu \ \left(\frac{t}{N}-1\right) \right) , & \forall \ t \geq N \\ 0, & \text{otherwise.} \end{cases}$$ Here, $\mu (>0)$ is a straggling parameter that determines the “unpredictable latency” in computation time. We also assume that if the length of the dot-product is $s$, where $s$ is the sparsity of the vector, the probability distribution of the computation time varies as $$\Pr(T_s \leq t) =F\left(\frac{Nt}{s}\right) =\begin{cases} 1 - \exp \left(-\mu \ \left(\frac{t}{s}-1\right) \right) , & \forall \ t \geq s \\ 0, & \text{otherwise.} \end{cases}$$ Now we derive the expected computation time under our proposed strategy and compare it with the existing strategies in the regimes where the number of dot-products $M$ is linear and sub-linear in $P$. Table \[time-table\] shows the order-sense expected computation time in these regimes.
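This shifted-exponential model is straightforward to simulate; a small Python sketch of ours that checks the order-statistic formula used throughout this Appendix against Monte Carlo:

```python
import math
import random

def sample_time(s, mu, rng):
    """Shifted-exponential model: Pr(T <= t) = 1 - exp(-mu (t/s - 1)) for
    t >= s, i.e. T = s (1 + Y) with Y exponential of rate mu."""
    return s * (1.0 + rng.expovariate(mu))

def mean_kth_finish(P, K, s, mu, trials=20000, seed=0):
    """Monte-Carlo estimate of the expected K-th order statistic of the
    P iid per-processor finishing times."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        times = sorted(sample_time(s, mu, rng) for _ in range(P))
        total += times[K - 1]
    return total / trials

def harmonic(n):
    return sum(1.0 / i for i in range(1, n + 1))
```

The exact expectation is $s\big(1 + (H_P - H_{P-K})/\mu\big)$, which the text approximates by $s\big(1+\frac{1}{\mu}\log\frac{P}{P-K}\big)$ for large $P$.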
Proposed Strategy – Short-Dot:
------------------------------

The computation time on each of the $P$ processors behaves as an *iid* exponential random variable following the distribution $$\Pr(T_s \leq t) =F\left(\frac{Nt}{s}\right) = 1 - \exp\left(-\mu \ \left(\frac{t}{s}-1\right)\right) \ \ \forall \ t \geq s \ .$$ Now, the expected computation time is the expected value of the $K$-th order statistic of these $P$ *iid* exponential random variables, which is given by $${{\rm I\kern-.3em E}}[T_{SD}] = s \left( 1 + \frac{\log(\frac{P}{P-K})}{\mu} \right) = \frac{(P-K+M)N}{P} \left( 1 + \frac{\log(\frac{P}{P-K})}{\mu} \right).$$ Here we use the result (from [@kananspeeding]) that the expected value of the $K$-th order statistic of $P$ *iid* standard exponential random variables is given by $$\sum_{i=1}^P \frac{1}{i} - \sum_{i=1}^{P-K} \frac{1}{i}.$$ For large $P$ and $K < P$, we can approximate this as $$\sum_{i=1}^P \frac{1}{i} - \sum_{i=1}^{P-K} \frac{1}{i} \approx \log(P)-\log(P-K).$$ Note that the expected computation time is minimized when $K =P- \Theta(M)$, and is given by $$\label{short_dot_time} {{\rm I\kern-.3em E}}[T_{SD}^{*}] = \mathcal{O}\left(\frac{MN}{P}\left(1 + \frac{\log(P/M)}{\mu} \right)\right).$$ If $M=\Theta(P)$, the expected time is $\mathcal{O}(\frac{MN}{P})$. If $M=o(P)$, the expected time is $\mathcal{O}\left(\frac{MN\log(P/M)}{P }\right)$. Note that $s=\frac{(P-K+M)N}{P}$ is actually an upper bound on the length of each dot-product achieved using Short-Dot. Thus the expression obtained in (\[short\_dot\_time\]) is an upper bound on the actual expected computation time, which is why we use $\mathcal{O}(\cdot)$ instead of $\Theta(\cdot)$.
Existing Strategies
-------------------

#### One Single Processor:

For a single processor to compute all $M$ dot-products of length $N$, the computation time is distributed as $$\Pr(T_{NM} \leq t) =F\big(t/M\big) = 1 - \exp \left(-\mu \ \left(\frac{t}{NM}-1\right) \right) \ \ \forall \ t \geq NM.$$ Thus, the expected computation time is easily derived to be $${{\rm I\kern-.3em E}}[T_{1P}]=MN\left( 1 +\frac{1}{\mu}\right).$$

#### Uncoded - Divide into $P$ parts and wait for all:

Now, consider an uncoded strategy where the computation is simply divided into $P$ dot-products and sent to $P$ processors. We assume that each processor is sent only one dot-product at a time, and we wait for all the processors to finish computation. Note that integer effects arise when $M$ does not exactly divide $P$: some rows are divided among $\left \lceil{\frac{P}{M}}\right \rceil $ processors, while the remaining rows are divided among $\left \lfloor{\frac{P}{M}}\right \rfloor $ processors. Let $m_1$ and $m_2$ denote the number of rows that get $\left \lceil{\frac{P}{M}}\right \rceil $ processors and $\left \lfloor{\frac{P}{M}}\right \rfloor $ processors respectively. Clearly these values can be obtained by solving $$\begin{bmatrix} 1 & 1\\ \left \lceil{\frac{P}{M}}\right \rceil & \left \lfloor{\frac{P}{M}}\right \rfloor \end{bmatrix}\begin{bmatrix} m_1 \\ m_2 \end{bmatrix} = \begin{bmatrix} M \\ P \end{bmatrix}.$$ Now, we have two groups of exponential random variables: one group consisting of $m_1\left \lceil{\frac{P}{M}}\right \rceil $ *iid* exponential random variables of task size $\frac{N}{\left \lceil{\frac{P}{M}}\right \rceil} $, and another group consisting of $m_2\left \lfloor{\frac{P}{M}}\right \rfloor $ *iid* exponential random variables of task size $\frac{N}{\left \lfloor{\frac{P}{M}}\right \rfloor} $. The two groups are independent of each other.
Note that we assume $N$ is large compared to $P$ and divisible by $P$, $\left \lceil{\frac{P}{M}}\right \rceil $ and $\left \lfloor{\frac{P}{M}}\right \rfloor $, so that integer effects with respect to $N$ do not appear and the plots can be scaled with respect to $N$ for ease of understanding. The expected computation time is thus given by the expectation of the maximum of all these $P=m_1\left \lceil{\frac{P}{M}}\right \rceil + m_2\left \lfloor{\frac{P}{M}}\right \rfloor $ exponential random variables. $$\begin{gathered} \Pr( T_{UC} \leq t)= \left( 1- \exp \left(-\mu \left( \frac{\left \lceil{\frac{P}{M}}\right \rceil t}{N} -1 \right) \right) \right)^{m_1 \left \lceil{\frac{P}{M}}\right \rceil} \times \\ \left( 1- \exp \left(-\mu \left( \frac{\left \lfloor{\frac{P}{M}}\right \rfloor t}{N} -1 \right) \right) \right)^{m_2 \left \lfloor{\frac{P}{M}}\right \rfloor } \forall \ t \geq \frac{N}{\left \lfloor{\frac{P}{M}}\right \rfloor}\end{gathered}$$ The expectation is thus obtained as $${{\rm I\kern-.3em E}}[T_{UC}]=\int_{0}^{\infty} \left(1- \Pr( T_{UC} \leq t) \right) dt.$$ This expression is numerically computed using MATLAB and plotted as the theoretical computation time in Fig. \[fig0\]. When $M$ divides $P$ exactly, the expressions are simpler. The computation time for each processor is distributed as $$\Pr(T_{UC} \leq t) =F\big(Pt/M\big) = 1 - \exp \left(-\mu \ \left(\frac{Pt}{MN}-1\right) \right) \ \ \forall \ t \geq NM/P.$$ The expected computation time is then the expected maximum of $P$ such independent and identically distributed random variables, given by $${{\rm I\kern-.3em E}}[T_{UC}]=\frac{MN}{P}\left( 1 +\frac{\log(P)}{\mu}\right).$$ The expected time for this uncoded strategy is $\Theta\left(\frac{MN\log(P)}{P }\right)$ regardless of whether $M$ is linear or sub-linear in $P$.
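The integer-effect bookkeeping and the numerical integration above fit in a few lines; a Python sketch of ours mirroring the MATLAB computation:

```python
import math

def uncoded_split(P, M):
    """m1 rows get ceil(P/M) processors, m2 rows get floor(P/M)."""
    c, f = math.ceil(P / M), math.floor(P / M)
    if c == f:                 # M divides P
        return M, 0
    m1 = P - M * f             # solves m1 + m2 = M and m1 c + m2 f = P
    m2 = M * c - P
    return m1, m2

def expected_uncoded(P, M, N, mu, t_max=100.0, steps=200000):
    """Numerically integrate E[T_UC] = integral of (1 - Pr(T_UC <= t))."""
    c, f = math.ceil(P / M), math.floor(P / M)
    m1, m2 = uncoded_split(P, M)

    def cdf(t):
        if t < N / f:          # the slowest group cannot have finished yet
            return 0.0
        p = (1.0 - math.exp(-mu * (c * t / N - 1.0))) ** (m1 * c)
        if m2:
            p *= (1.0 - math.exp(-mu * (f * t / N - 1.0))) ** (m2 * f)
        return p

    dt = t_max / steps
    return sum((1.0 - cdf(i * dt)) * dt for i in range(steps))
```

When $M$ divides $P$, the numerical integral reproduces the closed form $\frac{MN}{P}\big(1 + H_P/\mu\big)$, whose harmonic number $H_P$ the text approximates by $\log P$.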
Our strategy Short-Dot thus offers a speed-up of $\Omega(\log(P))$ in expected computation time when $M$ is linear in $P$, and hence outperforms the uncoded strategy by a factor that diverges to infinity for large $P$.

#### Repetition:

When a $(P,M)$ repetition strategy is used, we separate the matrix into $M$ rows and repeat each row $P/M$ times, so as to obtain a total of $P$ tasks. Note that integer effects arise when $M$ does not exactly divide $P$: some rows are repeated $\left \lceil{\frac{P}{M}}\right \rceil $ times, while the remaining rows are repeated $\left \lfloor{\frac{P}{M}}\right \rfloor $ times. Let $m_1$ and $m_2$ denote the number of rows that are repeated $\left \lceil{\frac{P}{M}}\right \rceil $ times and $\left \lfloor{\frac{P}{M}}\right \rfloor $ times respectively. Clearly these values can be obtained by solving $$\begin{bmatrix} 1 & 1\\ \left \lceil{\frac{P}{M}}\right \rceil & \left \lfloor{\frac{P}{M}}\right \rfloor \end{bmatrix}\begin{bmatrix} m_1 \\ m_2 \end{bmatrix} = \begin{bmatrix} M \\ P \end{bmatrix}.$$ Now, the minimum of $\left \lceil{\frac{P}{M}}\right \rceil $ (similarly, $\left \lfloor{\frac{P}{M}}\right \rfloor $) *iid* exponential random variables is also exponential, with the parameter scaled by $\left \lceil{\frac{P}{M}}\right \rceil $ (similarly, $\left \lfloor{\frac{P}{M}}\right \rfloor $). The expected computation time is thus given by the expectation of the maximum of $m_1$ independent exponential random variables with parameter scaled by $\left \lceil{\frac{P}{M}}\right \rceil $ and $m_2$ independent exponential random variables with parameter scaled by $\left \lfloor{\frac{P}{M}}\right \rfloor $.
$$\begin{gathered} \Pr( T_{Rep} \leq t)= \left( 1- \exp \left(-\mu \left \lceil{\frac{P}{M}}\right \rceil \left( \frac{t}{N} -1 \right) \right) \right)^{m_1} \times \\ \left( 1- \exp \left(-\mu \left \lfloor{\frac{P}{M}}\right \rfloor \left( \frac{t}{N} -1 \right) \right) \right)^{m_2} \ \forall \ t \geq N\end{gathered}$$ The expectation is thus obtained as $${{\rm I\kern-.3em E}}[T_{Rep}]=\int_{0}^{\infty} \left(1- \Pr( T_{Rep} \leq t) \right) dt.$$ This expression is numerically computed using MATLAB for the plot of theoretical expected computation time (Fig. \[fig0\]). When $M$ exactly divides $P$, the analysis is simpler, and the two types of exponential distributions become identical. Following an analysis similar to \[1\], the problem simplifies to the expectation of the maximum of $M$ *iid* exponential random variables, each of which is the minimum of $P/M$ *iid* exponential random variables: $${{\rm I\kern-.3em E}}[T_{Rep}]=N\left( 1 + \frac{M\log(M)}{P\mu} \right).$$ When $M$ is linear in $P$, the expected computation time is $\Theta(\frac{MN}{P} \log (P))$, while our strategy achieves $\mathcal{O}(N)$ in this regime. When $M$ is sub-linear in $P$, the expected computation time is $\Theta(N)$, while Short-Dot achieves $\mathcal{O}\left(\frac{MN\log(P/M)}{P }\right)$, offering a speed-up by a factor that diverges to infinity.

#### MDS codes-based strategy:

The matrix is separated into $M$ rows and coded into $P$ rows using a $(P,M)$ MDS code. Thus, each processor effectively computes a dot-product of length $N$, and we have to wait for any $M$ processors to finish. Assuming the computations of the processors are independent, and following an analysis similar to \[1\], we obtain $${{\rm I\kern-.3em E}}[T_{MDS}]=N\left( 1 + \frac{\log(P)}{\mu} - \frac{\log(P-M)}{\mu} \right).$$ When $M$ is linear in $P$, the expected computation time is $\Theta(N)$, as compared to our strategy that achieves $\mathcal{O}(MN/P) $.
However, in the regime where $M$ is sub-linear in $P$, the expected computation time is also $\Theta(N)$, while our strategy achieves $\mathcal{O}\left(\frac{MN\log(P/M)}{P}\right)$ and thus outperforms MDS codes by a factor that diverges to infinity for large $P$.

[^1]: Strassen’s algorithm [@strassen1969gaussian] and its generalizations offer a recursive approach to faster matrix multiplications over multiple processors, but they are often not preferred because of their high communication cost [@ballard2014communication].

[^2]: Another interesting example comes from recent work on designing processing units that exclusively compute dot-products using analog components [@analog_dot; @ericpop]. These devices are prone to errors and increased delays in convergence when designed for larger dot products.

[^3]: This condition is relaxed in Remark 1.

[^4]: Note that the choice of such a class of matrices $\bm{A}$ is reasonable, since if, say, the $j$-th column of $\bm{A}$ consists entirely of zeros, then the $j$-th column and its corresponding entry in the unknown vector $\bm{x}$ can simply be omitted from the problem.
---
abstract: 'We prove a stabilization theorem for algebras of $n$-operads in a monoidal model category $\Ee.$ It implies a version of the Baez-Dolan stabilization hypothesis for Rezk’s weak $n$-categories and some other stabilization results.'
author:
- 'M. A. Batanin'
date: 'November 16, 2015'
title: 'An operadic proof of Baez-Dolan Stabilization Hypothesis'
---

Introduction
============

Breen [@breen] and later Baez and Dolan [@BD] suggested the following [*stabilization hypothesis*]{} in higher category theory:

The category of $n$-tuply monoidal $k$-categories is equivalent to the category of $(n+1)$-tuply monoidal $k$-categories provided $n\ge k+2.$

Baez and Dolan define an $n$-tuply monoidal $k$-category as a weak $(n+k)$-category which has only one cell in each dimension smaller than $n.$ It is known that such a definition, if taken naively, is not completely satisfactory, because even if there is a unique cell in each lower dimension, the action of the higher coherence cells associated to it can be nontrivial (see [@Cheng] for a discussion). To get rid of this problem we have to work with the weakest possible morphisms of $k$-categories. Moreover, we want to be able to speak about monoidal structures on $k$-categories. One technically convenient way to do this is to choose a symmetric monoidal model category $(\Ee,\otimes,e)$ whose homotopy category is equivalent to the homotopy category of weak $k$-categories and weak $k$-functors. For example, for $k=1$ one can take the category of categories with cartesian product and the ‘folklore’ model structure, and for $k=2$ one can consider the category of $2$-categories with the Gray-product as tensor product and Lack’s model structure [@Lack]. For any $k\ge 0$ the category of $\Theta_k$ simplicial presheaves $\Theta_k Sp_k$ with the Rezk model structure satisfies this requirement [@BergnerRezk; @Rezk]. There is a widely accepted understanding that a monoidal $k$-category is just an $E_1$-algebra in such a monoidal model category $\Ee$ [@HA].
Now, an $n$-tuply monoidal weak $k$-category must have $n$ monoidal structures which interact coherently. It is another widely accepted idea that such an interaction of structures is equivalent to an action of an $E_n$-operad [@HA]. This is justified by an additivity theorem for $E_n$-operads ([@HA]\[Theorem 5.1.1.2\]): the tensor product of an $E_n$-operad and an $E_m$-operad is an $E_{n+m}$-operad. All operads here have to be understood as $\infty$-operads, and the tensor product is a ‘derived’ Boardman-Vogt tensor product. The statement and proof of this theorem are subtle because of the tricky homotopy behaviour of the Boardman-Vogt tensor product [@FV]. Lurie’s Additivity Theorem holds only in a fully homotopised world. As a consequence, the stabilisation result ([@HA]\[Example 5.1.2.3\]) gives an equivalence of $(\infty,1)$-categories rather than a Quillen equivalence of model categories of algebras[^1][^2]. In this paper we choose a different approach to the coherent interaction of monoidal structures, which comes closer to the original Baez-Dolan understanding of an $n$-tuply monoidal $k$-category as a degenerate $(n+k)$-category[^3] and does not require the Additivity Theorem (though we conjecture that the Additivity Theorem can be proved using our techniques). [*An $n$-tuply monoidal $k$-category for us is an algebra of a cofibrant contractible $n$-operad in $\Ee.$*]{} For $n=1$ this means that a monoidal $k$-category is an algebra of a cofibrant nonsymmetric contractible operad, which is well known to be homotopy equivalent to an $E_1$-algebra structure. This simple observation was extended in [@SymBat] to arbitrary dimension. It is shown there that the derived symmetrisation functor applied to a terminal $n$-operad yields an $E_n$-operad and, therefore, the homotopy category of algebras of cofibrant contractible $n$-operads is equivalent to the homotopy category of $E_n$-algebras. So, homotopically, both approaches are equivalent.
The difference is that our point of view allows us to avoid the Boardman-Vogt tensor product and $\infty$-operads. This has some advantages, as we can use classical operadic and model-theoretic methods, and the final result is formulated in terms of Quillen equivalences, which is a stronger statement. Our techniques also allow us to prove stabilisation not only for weakly unital $k$-categories but also for nonunital algebras. It is known that the Additivity Theorem fails in this case even for cofibrant operads [@FV]\[Section 3\]. Our proof is essentially the same in all cases; we just need to choose an appropriate category of $n$-operads.

Higher operads and symmetrization {#higheroperads}
=================================

$n$-ordinals and $n$-operads
----------------------------

Let $n \geq 0$. Recall [@SymBat Sec. II] that an [*$n$-ordinal*]{} is a finite set $T$ equipped with $n$ binary relations $<_0, \ldots, <_{n-1}$ such that

- $<_p$ is nonreflexive,

- for every pair $a,b$ of distinct elements of $T$ there exists exactly one $p$ such that $$a<_p b \ \ \mbox{or} \ \ b<_p a,$$

- if $a<_p b $ and $b<_q c$ then $a<_{\min(p,q)} c$.

A morphism of $n$-ordinals $\sigma: T \rightarrow S$ is a map of the underlying sets such that $i<_p j$ in $T$ implies that

- $\sigma(i) <_r \sigma(j)$ for some $r\ge p$, or

- $\sigma(i)= \sigma(j)$, or

- $\sigma(j) <_r \sigma(i)$ for $r>p$.

Let $Ord(n)$ be the skeletal category of $n$-ordinals and their morphisms. Each $n$-ordinal can be represented as a pruned planar rooted tree with $n$ levels (a *pruned $n$-tree* for short), cf. [@SymBat Theorem 2.1].
For example, the $2$-ordinal $$0 <_0 2,\ 0 <_0 3,\ 0 <_0 4,\ 1 <_0 2,\ 1<_0 3,\ 1 <_0 4,\ 0 <_1 1,\ 2 <_1 3,\ 2 <_1 4,\ 3 <_1 4,$$ is represented by the following pruned tree:

*(Figure: a pruned $2$-tree whose root carries two level-$1$ vertices, the first with the leaves $0,1$ and the second with the leaves $2,3,4$.)*

The initial $n$-ordinal $z^nU_0$ has empty underlying set and its representing pruned $n$-tree is degenerate: it has no edges but consists only of the root at level $0$. The terminal $n$-ordinal $U_n$ is represented by a linear tree with $n$ levels. We would also like to consider the limiting case of $\infty$-ordinals. Let $T$ be a finite set equipped with a sequence of binary antireflexive complementary relations $<_0,<_{-1}, \ldots, <_p , <_{p-1}, \ldots $ for all integers $p\le 0.$ The set $T$ is called an $\infty$-ordinal if these relations satisfy:

-   $a<_p b $  and  $b<_q c$  implies $a<_{min(p,q)} c .$

The definition of a morphism between $\infty$-ordinals coincides with the definition of a morphism between $n$-ordinals for finite $n.$ The category $Ord(\infty)$ denotes [*the skeletal category of $\infty$-ordinals*]{}. For an $n$-ordinal $R$ we consider its [*vertical suspension*]{} $S(R)$, which is the $(n+1)$-ordinal with underlying set $R$, with the order $<_m$ equal to the order $<_{m-1}$ on $R$ for $m\ge 1$, while $<_0$ is empty.
For example, a vertical suspension of the $2$-ordinal from Figure \[symbol\] is the $3$-ordinal $$\scalebox{0.4} { \begin{pspicture}(0,-2.795)(7.237695,2.81) \psline[linewidth=0.05](1.1882031,0.13)(3.188203,-1.2646877)(5.1882033,0.13)(5.088203,0.13) \psline[linewidth=0.05](0.18820313,1.5353125)(1.1882031,0.13531242)(2.188203,1.5353125) \psline[linewidth=0.05](3.5882032,1.5353125)(5.1882033,0.13531242)(6.7882032,1.5353125) \psline[linewidth=0.05](5.18,1.63)(5.18,0.11) \usefont{T1}{ptm}{m}{n} \rput(0.1566211,2.4353125){\Huge 0} \usefont{T1}{ptm}{m}{n} \rput(2.0829296,2.4353125){\Huge 1} \usefont{T1}{ptm}{m}{n} \rput(3.5604882,2.4353125){\Huge 2} \usefont{T1}{ptm}{m}{n} \rput(5.1338477,2.4353125){\Huge 3} \usefont{T1}{ptm}{m}{n} \rput(6.9860744,2.4353125){\Huge 4} \psline[linewidth=0.05](3.18,-1.25)(3.18,-2.77) \end{pspicture} }$$ More generally, one can consider a $p$-suspension $S_p$ where we trivialise the orders $<_p.$ So, the vertical suspension is $S = S_0.$ For example, the suspension $S_2$ of the $2$-ordinal from Figure \[symbol\] is $$\scalebox{0.4} { \begin{pspicture}(0,-2.6623437)(6.9376955,2.6773438) \psline[linewidth=0.05](1.1482031,-1.2426562)(3.1482031,-2.637344)(5.148203,-1.2426562)(5.048203,-1.2426562) \psline[linewidth=0.05](0.14820312,0.1626563)(1.1482031,-1.2373438)(2.1482031,0.1626563) \psline[linewidth=0.05](3.5482032,0.1626563)(5.148203,-1.2373438)(6.7482033,0.1626563) \psline[linewidth=0.05](5.14,0.2573438)(5.14,-1.2626562) \usefont{T1}{ptm}{m}{n} \rput(0.1566211,2.2826562){\Huge 0} \usefont{T1}{ptm}{m}{n} \rput(2.0429296,2.2826562){\Huge 1} \usefont{T1}{ptm}{m}{n} \rput(3.5804882,2.3026564){\Huge 2} \usefont{T1}{ptm}{m}{n} \rput(5.1138477,2.3026564){\Huge 3} \usefont{T1}{ptm}{m}{n} \rput(6.6860743,2.3026564){\Huge 4} \psline[linewidth=0.05](0.16,1.6773438)(0.16,0.1573438) \psline[linewidth=0.05](2.14,1.6773438)(2.14,0.1573438) \psline[linewidth=0.05](3.56,1.6773438)(3.56,0.1573438) \psline[linewidth=0.05](5.14,1.7573438)(5.14,0.2373438) 
\psline[linewidth=0.05](6.74,1.6773438)(6.74,0.1573438) \end{pspicture} }$$ Suspension operations give us a family of functors $$S_p:Ord(n)\rightarrow Ord(n+1), \ 0\le p \le n.$$ We also define an $\infty$-vertical suspension functor $Ord(n)\rightarrow Ord(\infty)$ as follows. For an $n$-ordinal $T$ its $\infty$-suspension is an $\infty$-ordinal $S^{\infty}T$ whose underlying set is the same as the underlying set of $T$ and $a<_p b$ in $S^{\infty}T$ if $a<_{n+p-1} b$ in $T.$ It is not hard to see that the sequence $$Ord(0)\stackrel{S}{\longrightarrow} Ord(1) \stackrel{S}{\longrightarrow} Ord(2) \longrightarrow \ldots \stackrel{S}{\longrightarrow} Ord(n) \longrightarrow \ldots \stackrel{S^{\infty}}{\longrightarrow} Ord(\infty),$$ exhibits $Ord(\infty)$ as a colimit of the categories $Ord(n).$ The categories $Ord(n), 0\le n\le \infty$ are operadic in the sense of Batanin and Markl, cf. [@duoid]. This means that $Ord(n)$ is equipped with cardinality and fiber functors. The cardinality functor $$\label{card}|-|: Ord(n) \to \FinSet$$ associates to an $n$-ordinal $T$ its underlying set. Here $\FinSet$ is a skeletal version of the category of finite sets [@duoid]. The fiber functor associates to each morphism of $n$-ordinals $\sigma: T \to S$ and $i \in |S|$ the preimage $\sigma^{-1}(i)$ with the induced structure of an $n$-ordinal. The category $\FinSet$ is another example of an operadic category; its fiber functor is again given by taking preimages, as above [@duoid]. Any operadic category $\mathcal{O}$ has an associated category of operads $Op_{\mathcal{O}}(\Ee)$ with values in an arbitrary symmetric monoidal category $(\Ee,\otimes,e)$ [@duoid].
The category $Op_{\FinSet}(\Ee) = SOp(\Ee)$ is the category of classical symmetric operads in $\Ee.$ The category $Op_n(\Ee)$ of $n$-operads in $\Ee$ is, by definition, the category $Op_{Ord(n)}(\Ee).$ Explicitly, an $n$-operad in $\Ee$ is a collection $\{A(T)\}_{T\in Ord(n)}$ of objects in $\Ee$ equipped with the following structure: - a morphism $\epsilon: e \rightarrow A(U_n)$ (unit); - a morphism $m_{\sigma}:A(S)\otimes A(T_0)\otimes \dots \otimes A(T_k) \rightarrow A(T)$ (multiplication) for each map of $n$-ordinals $\sigma:T \rightarrow S$, where $T_i = \sigma^{-1}(i), \ i\in\{0,\ldots,k\} = |S|.$ They must satisfy the following identities: - for any composite map of $n$-ordinals $$\begin{diagram} T&\rTo^{\sigma}&S&\rTo^{\rho}&R \end{diagram}$$ the associativity diagram $$\begin{diagram}[small,UO] A(R)\otimes A(S_{\bullet})\otimes A(T_0^{\bullet})\otimes\dots\otimes A(T_k^{\bullet}) & \rTo^{m_{\rho}\otimes 1} & A(S)\otimes A(T_0^{\bullet})\otimes\dots\otimes A(T_k^{\bullet}) \\ \dTo^{1\otimes m_{\sigma_0}\otimes\dots\otimes m_{\sigma_k}} & & \dTo_{m_{\sigma}} \\ A(R)\otimes A(T_{\bullet}) & \rTo^{m_{\rho\sigma}} & A(T) \end{diagram}$$ commutes (up to the evident regrouping of tensor factors). Here $S_i = \rho^{-1}(i)$ for $i\in\{0,\ldots,k\}=|R|,$ the set $T_i$ is the fiber of $\rho\sigma$ over $i,$ the map $\sigma_i: T_i\to S_i$ is induced by $\sigma$ and has fibers $T_i^0,\ldots,T_i^{m_i},$ and $$A(S_{\bullet})= A(S_0)\otimes \dots \otimes A(S_k),$$ $$A(T_{i}^{\bullet}) = A(T_i^0) \otimes \dots\otimes A(T_i^{m_i})$$ and $$A(T_{\bullet}) = A(T_0)\otimes \dots \otimes A(T_k);$$ - for the identity $\sigma = id : T\rightarrow T,$ all of whose fibers are equal to $U_n,$ the diagram $$\begin{diagram}[small,UO] A(T)\otimes e\otimes\dots\otimes e & & \rTo^{1\otimes\epsilon\otimes\dots\otimes\epsilon} & & A(T)\otimes A(U_n)\otimes\dots\otimes A(U_n) \\ &\rdTo(2,2) & & \ldTo(2,2)_{m_{id}} & \\ & & A(T) & & \end{diagram}$$ commutes, the left diagonal arrow being the canonical isomorphism; - for the unique morphism $T\rightarrow U_n$ the diagram $$\begin{diagram}[small,UO] e\otimes A(T) & & \rTo^{\epsilon\otimes 1} & & A(U_n)\otimes A(T) \\ &\rdTo(2,2) & & \ldTo(2,2)_{m} & \\ & & A(T) & & \end{diagram}$$ commutes, the left diagonal arrow again being the canonical isomorphism. Functors between operadic categories which preserve cardinalities and fibers are called [*operadic functors*]{} [@duoid]. The cardinality functor is always an operadic functor. An operadic functor between operadic categories $p:\mathcal{O}\to \mathcal{O}'$ induces a restriction functor $p^*: Op_{\mathcal{O}'}(\Ee)\to Op_{\mathcal{O}}( \Ee)$ [@duoid].
If $\Ee$ is a cocomplete symmetric monoidal category then the restriction functor has a left adjoint $p_!: Op_{\mathcal{O}}(\Ee)\to Op_{\mathcal{O}'}( \Ee).$ Any of the suspension functors is an operadic functor. In particular the following diagram commutes: $$\begin{gathered} \label{cardinality}\begin{diagram}[small,UO] Ord(n) & & \rTo^{S_p} & & Ord(n+1) \\ \quad |-| &\rdTo(2,2) & & \ldTo(2,2) &|-|\quad \\ & & \FinSet & & \end{diagram}\end{gathered}$$ Hence, it induces the following diagram of adjunctions (only the left adjoints are displayed): $$\begin{gathered} \label{dessym}\begin{diagram}[small,UO] Op_n(\Ee) & & \rTo^{(S_p)_!} & & Op_{n+1}(\Ee) \\ \quad sym_n &\rdTo(2,2) & & \ldTo(2,2) & sym_{n+1}\quad \\ & & SOp(\Ee) & & \end{diagram}\end{gathered}$$ In this diagram the right adjoint $des_n$ of $sym_n$ is the restriction functor along the cardinality functor, called the [*desymmetrisation functor*]{}, and its left adjoint $sym_n$ is called the [*symmetrisation functor*]{} [@EHBat]. Let $Ass_n\in Op_n(\Ee)$ be the $n$-operad given by $Ass_{n}(T) = e$ for all $T\in Ord(n).$ It is immediate from the definition of the suspension functors that $S_p^*(Ass_{n+1}) = Ass_n.$ On the other hand, $sym_1(Ass_1) = Ass$ is the classical symmetric operad for monoids, while for $n\ge 2$, $sym_n(Ass_n) = Com$ is the operad for commutative monoids. This is the classical Eckmann-Hilton argument in disguise, cf. [@EHBat]. Let now $\Ee$ be a closed symmetric monoidal category.
An object $X\in \Ee$ has an associated endomorphism symmetric operad $\End_X :$ $$\End_X(n) = \uEe(X^{\otimes n},X),$$ where $\uEe$ is the internal hom of $\Ee.$ An algebra of a symmetric operad $A\in SOp(\Ee)$ is an object $X\in \Ee$ equipped with a morphism of operads $A\to \End_X.$ An algebra of an $n$-operad $B\in Op_n(\Ee)$ is an object $X\in \Ee$ equipped with a morphism of operads $B\to des_n(\End_X).$ \[algebra\] Let $\Ee$ be a cocomplete closed symmetric monoidal category and let $B\in Op_n(\Ee).$ The following categories are equivalent: 1. the category of algebras of the $n$-operad $B;$ 2. the category of algebras of the $(n+1)$-operad $(S_p)_!(B);$ 3. the category of algebras of the symmetric operad $sym_n(B).$ If $\Ee$ is cocomplete the symmetrisation functor $sym_n$ exists and we use the adjunction $sym_n\dashv des_n$ to transform a $B$-algebra structure $B\to des_n(\End_X)$ to a $sym_n(B)$-algebra structure $sym_n(B)\to \End_X.$ The proof for the $(S_p)_!(B)$-algebra structure is similar. A symmetric operad ($n$-operad) $A\in SOp(\Ee)$ ($A\in Op_n(\Ee)$) is called constant-free if $A(0)$ ($A(z^n U_0)$) is an initial object in $\Ee.$ The category $CFSOp(\Ee)$ of constant-free symmetric operads is equivalent to the category $Op_{\FinSet_0}(\Ee)$ where $\FinSet_0$ is the operadic subcategory of $\FinSet$ of nonempty finite sets and surjective maps. Analogously, the category of constant-free $n$-operads $CFOp_n(\Ee)$ is equivalent to $Op_{Ord_0(n)}(\Ee)$ where $Ord_0(n)$ is the operadic category of nonempty $n$-ordinals and surjections. This observation allows us to reformulate all previous statements for symmetric operads and $n$-operads in the context of constant-free operads. So, the commutative triangle of adjunctions (\[dessym\]), as well as the analogue of Lemma \[algebra\], holds for constant-free operads too. Polynomial monads ----------------- Symmetric and $n$-operads are examples of algebras of polynomial monads.
A finitary polynomial $P$ is a diagram in $\Set$ of the form $$\begin{diagram} J&\lTo^s&E&\rTo^p&B&\rTo^t&I \end{diagram}$$ where $p^{-1}(b)$ is a finite set for any $b\in B.$ Each polynomial $P$ generates a functor, called a [*polynomial functor*]{}, between functor categories $$\underline{P}:\Set^J \to \Set^I$$ which is defined as the composite functor $$\begin{diagram} \Set^J&\rTo^{s^*}&\Set^E&\rTo^{p_*}&\Set^B&\rTo^{t_!}&\Set^I \end{diagram}$$ So, the functor $\underline{P}$ is given by the formula $$\label{PPP} \underline{P}(X)_i = \coprod_{b\in t^{-1}(i)} \prod_{e\in p^{-1}(b)} X_{s(e)},$$ which explains the name ‘polynomial’: it is a sum of products of formal variables. [*A cartesian morphism between polynomial functors*]{} is a natural transformation such that each naturality square is a pullback. Composition of finitary polynomial functors is again a finitary polynomial functor. Finitary polynomial functors and their cartesian morphisms form a $2$-category $\Poly_f.$ A finitary polynomial monad is a monad in the $2$-category $\Poly_f.$ A finitary polynomial functor preserves filtered colimits and pullbacks. A polynomial monad is cartesian, that is, its underlying functor preserves pullbacks and its unit and multiplication are cartesian natural transformations. One can consider more general polynomial functors of nonfinitary type. Since in this paper we don’t need these more general functors we call finitary polynomial monads simply polynomial monads. Let $\Ee$ be a cocomplete symmetric monoidal category and $P$ be a polynomial functor.
One can construct a functor $\underline{P}^{\mathcal{E}}:\Ee^I \to \Ee^I$ given by a formula similar to (\[PPP\]): $$\underline{P}^{\Ee}(X)_i = \coprod_{b\in t^{-1}(i)} \bigotimes_{e\in p^{-1}(b)} X_{s(e)}.$$ If $I = J$ and $P$ was given a structure of a polynomial monad then $\underline{P}^{\Ee}$ acquires a structure of a monad on $\Ee^I.$ The category of algebras of a polynomial monad $P$ in a cocomplete symmetric monoidal category $\Ee$ is the category of algebras of the monad $\underline{P}^{\Ee}.$ There is a polynomial monad $SO$ whose category of algebras is isomorphic to the category of symmetric operads. This monad is given by the polynomial $$\begin{diagram}\NN&\lTo^s&{\mbox{${\mathtt{OrderedRootedTrees}}$}}^*&\rTo^p&{\mbox{${\mathtt{OrderedRootedTrees}}$}}&\rTo^t&\NN \end{diagram}$$ in which $\NN$ is the set of isomorphism classes of objects in $\FinSet$ and\ ${\mbox{${\mathtt{OrderedRootedTrees}}$}}$ is the set of isomorphism classes of ordered rooted trees. The multiplication in $SO$ is induced by insertion of a tree into a vertex of another tree, cf. [@BB]\[Section 9.4\]. There is a polynomial monad $O(n)$ whose category of algebras is isomorphic to the category of $n$-operads. It is generated by the polynomial $$\begin{diagram}{\mathtt{Ord(n)}}&\lTo^s&{\mbox{${\mathtt{nPlanarRootedTrees}}$}}^*&\rTo^p&{\mbox{${\mathtt{nPlanarRootedTrees}}$}}&\rTo^t&{\mathtt{Ord(n)}} \end{diagram}$$ where $\tt Ord(n)$ is the set of isomorphism classes of $n$-ordinals and ${\mbox{${\mathtt{nPlanarRootedTrees}}$}}$ is the set of isomorphism classes of $n$-planar trees. The multiplication of the monad is induced by insertion of $n$-planar trees into vertices of $n$-planar trees, cf. [@BB]\[Proposition 12.15\]. The commutative triangle (\[cardinality\]) induces in an obvious way a commutative triangle of polynomial monads $$\begin{gathered} \label{poltr}\begin{diagram}[small,UO] O(n) & & \rTo & & O(n+1) \\ |-| \quad &\rdTo(2,2) & & \ldTo(2,2) & \quad |-| \\ & & SO & & \end{diagram}\end{gathered}$$ and the triangle of adjunctions (\[dessym\]) is also induced by (\[poltr\]).
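The defining formula (\[PPP\]) is directly computable for finite data. The following sketch is purely illustrative (the dictionary encoding and the name `evaluate_polynomial` are assumptions, not the paper's notation); it evaluates a one-coloured polynomial functor on a finite family of sets:

```python
from itertools import product

def evaluate_polynomial(s, p, t, X):
    """Evaluate P(X)_i = coproduct over b in t^-1(i) of the product
    over e in p^-1(b) of X_{s(e)}.

    s, p, t are dicts encoding the maps E -> J, E -> B, B -> I of a
    polynomial; X assigns to each j in J a finite set (as a list).
    Returns, for each i in the image of t, the list of pairs (b, choice)
    with t(b) = i and choice an element of the product over the fiber of b.
    """
    result = {i: [] for i in set(t.values())}
    for b, i in t.items():
        fiber = sorted(e for e, bb in p.items() if bb == b)  # p^-1(b), finite
        for choice in product(*(X[s[e]] for e in fiber)):
            result[i].append((b, choice))
    return result

# One-sorted example (I = J = {'*'}): a single operation b2 with a
# two-element fiber, i.e. the formal polynomial x^2, so P(X) = X x X.
s = {'e1': '*', 'e2': '*'}
p = {'e1': 'b2', 'e2': 'b2'}
t = {'b2': '*'}
out = evaluate_polynomial(s, p, t, {'*': [0, 1]})
# out['*'] has 2^2 = 4 elements, each tagged by the operation b2.
```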
There is a polynomial monad $CFSO$ such that its category of algebras is equivalent to the category of constant-free symmetric operads $CFSOp(\Ee),$ cf. [@BB]\[Section 9.4\]. The corresponding generating polynomial is $$\begin{diagram}\NN_0&\lTo^s&{\mbox{${\mathtt{OrderedRootedTrees}}$}}_{reg}^*&\rTo^p&{\mbox{${\mathtt{OrderedRootedTrees}}$}}_{reg}&\rTo^t&\NN_0\end{diagram}$$ where $\NN_0$ is the set of isomorphism classes of nonempty finite sets and\ ${\mbox{${\mathtt{OrderedRootedTrees}}$}}_{reg}$ is the set of isomorphism classes of [*regular*]{} ordered rooted trees. We call a tree regular if for any vertex of the tree the set of incoming edges at this vertex is not empty (so regular trees do not have stumps). Similarly there is a polynomial monad $CFO(n)$ whose category of algebras is equivalent to the category of constant-free $n$-operads $CFOp_n(\Ee),$ cf. [@BB]\[Proposition 12.19\]. It is generated by the polynomial $$\begin{diagram}{\mathtt{ROrd(n)}}&\lTo^s&{\mbox{${\mathtt{nPlanarRootedTrees}}$}}_{reg}^*&\rTo^p&{\mbox{${\mathtt{nPlanarRootedTrees}}$}}_{reg}&\rTo^t&{\mathtt{ROrd(n)}}\end{diagram}$$ where $\tt ROrd(n)$ is the set of isomorphism classes of nonempty $n$-ordinals and\ ${\mbox{${\mathtt{nPlanarRootedTrees}}$}}_{reg}$ is the set of isomorphism classes of regular $n$-planar trees. Classifiers for maps between polynomial monads ---------------------------------------------- For any cartesian morphism of cartesian monads $\phi:S\to T$ one can associate a category (in fact a strict categorical $T$-algebra) ${\mbox{${\bf T}^{\scriptstyle \tt S}$}}$ with a certain universal property [@EHBat; @BB; @W]. This category is called [*the classifier of internal $S$-algebras inside categorical $T$-algebras*]{}. Classifiers allow one to compute the left adjoint functor between categories of algebras induced by $\phi$ in terms of a colimit over ${\mbox{${\bf T}^{\scriptstyle \tt S}$}}.$ In particular, the symmetrisation functor $sym_n$ admits an explicit description as a colimit over the classifier ${\mbox{${\bf SO}^{\scriptstyle \tt O(n)}$}}$ of the map of polynomial monads $|-|:O(n)\to SO$.
(See [@EHBat].) The homotopy type of the nerve of the classifier ${\mbox{${\bf SO}^{\scriptstyle \tt O(n)}$}}$ was computed in [@SymBat]: $$N({\mbox{${\bf SO}^{\scriptstyle \tt O(n)}$}})= \coprod_{k}N({\mbox{${\bf SO}_k^{\scriptstyle \tt O(n)}$}})$$ where $N({\mbox{${\bf SO}_k^{\scriptstyle \tt O(n)}$}})$ has the homotopy type of the configuration space of $k$ points in $\RR^n.$ It follows that for $n\ge 2$ and $0\le i \le n-2$ the homotopy groups $\pi_i(N({\mbox{${\bf SO}_k^{\scriptstyle \tt O(n)}$}}),a)= 0.$ This can be reformulated by saying that the nerve of the unique map of categories $$\label{!} !: {\mbox{${\bf SO}_k^{\scriptstyle \tt O(n)}$}}\to 1$$ is an $(n-2)$-local weak equivalence of simplicial sets [@cis06]\[Corollary 9.2.15\]. For $n=\infty$ we also have a classifier ${\mbox{${\bf SO}^{\scriptstyle \tt O(\infty)}$}}.$ It is not hard to see that ${\mbox{${\bf SO}^{\scriptstyle \tt O(\infty)}$}}$ is the colimit of the sequence of classifiers ${\mbox{${\bf SO}^{\scriptstyle \tt O(n)}$}}$ induced by the vertical suspension functor, and so the nerve of ${\mbox{${\bf SO}^{\scriptstyle \tt O(\infty)}$}}$ is a contractible simplicial symmetric operad. Stabilization of algebras of $n$-operads {#stabilization} ======================================== Model categories of symmetric operads, $n$-operads and their algebras --------------------------------------------------------------------- Now we assume that our base symmetric monoidal category $(\Ee,\otimes, e)$ is a cofibrantly generated monoidal model category. Let $C$ be a set and $(T,\mu,\epsilon)$ be a monad on the category $\Ee^C.$ There is a product model structure on the category $\Ee^C$ and so one can try to induce a model structure on the category of $T$-algebras as follows. We define an algebra morphism $f:X\to Y$ to be a weak equivalence (fibration) if $U(f)$ is a weak equivalence (resp.
fibration) in $\Ee^C,$ where $U$ is the forgetful functor from $T$-algebras to $\Ee^C.$ More often than not, with this definition we will get only a semimodel structure [@Fresse; @WY], not a full model structure, on algebras, but this is sufficient for our purpose. If such a (semi)model structure exists we call it the transferred model structure. An algebra $X$ of $T$ is called [*relatively cofibrant*]{} if $U(X)$ is a cofibrant object in $\Ee^C.$ \[poly\] If $e \in \Ee$ is cofibrant then for any polynomial monad $T$ the category $Alg_T(\Ee)$ admits a transferred semimodel structure in which all cofibrant algebras are relatively cofibrant. The category of algebras of $T$ is isomorphic to the category of algebras of a coloured symmetric operad $O(T)$ [@BB] whose spaces of operations are of the form $e\otimes O(c_1,\ldots,c_m;c) = \sqcup_{O(c_1,\ldots,c_m;c)} e$ where $O(c_1,\ldots,c_m;c)$ is a set with a free action of the symmetric groups. If $e$ is cofibrant the underlying object of operations is a $\Sigma$-cofibrant object and so $O(T)$ is a $\Sigma$-cofibrant operad. The statement of the proposition now follows from [@WY]\[Theorem 6.3.1\]. \[homo\] If $e$ is cofibrant in $\Ee$ then 1. The categories $Op_n(\Ee), CFOp_n(\Ee), SOp(\Ee)$ and $CFSOp(\Ee)$ admit transferred semimodel structures; 2. Cofibrant symmetric and $n$-operads are relatively cofibrant; 3. The triangle (\[dessym\]) is a triangle of Quillen adjunctions; 4. The categories of algebras of cofibrant symmetric operads and of cofibrant $n$-operads admit transferred (semi)model structures; 5. For any weak equivalence between cofibrant operads $f:A\to B$ the induced adjunction $f_!\dashv f^*$ between categories of algebras is a Quillen equivalence. Symmetric operads (general or constant-free) as well as $n$-operads (general or constant-free) are algebras of polynomial monads. So we are in the conditions of Proposition \[poly\].
The existence of the transferred (semi)model structure on algebras of cofibrant symmetric operads is proven in [@Fresse]\[Proposition 4.4.3\] and [@WY]. The existence of the transferred model structure on algebras of cofibrant $n$-operads follows from this and Lemma \[algebra\]. Indeed, since $sym_n$ is a left Quillen functor, $sym_n(A)$ is a cofibrant symmetric operad for any cofibrant $n$-operad $A.$ The last point of the Proposition is proven in [@Fresse]\[Proposition 4.4.6\]. This semimodel structure on operads is often a full model structure [@BB; @WY] but not always. For example, the category of symmetric operads ($n$-operads for $n\ge 2$) in the category of chain complexes over a field of positive characteristic does not admit a full model structure [@BB], but there is a full model structure on the category of constant-free symmetric or $n$-operads for any compactly generated monoidal model category which satisfies the monoid axiom of Schwede and Shipley [@BB]. Stabilization of algebras ------------------------- In this section we prove stabilisation of homotopy categories of algebras of $n$-operads. The same proof works for constant-free $n$-operads so we do not mention them anymore. To simplify notation we fix a $p\ge 0$ and call the $p$-suspension of an $n$-ordinal simply a suspension, denoting it $S:Ord(n)\to Ord(n+1).$ We also denote by $S$ the map of polynomial monads induced by the suspension. The proof of our main result does not depend on $p.$ Let $\Ee$ satisfy all assumptions of Proposition \[homo\]. Let $G_{n}\in Op_n(\Ee)$ be a cofibrant replacement for $Ass_n.$ We will denote by $B_{n}(\Ee)$ the category of $G_{n}$-algebras in $\Ee.$ Let also $E_{\infty}(\Ee)$ be the model category of $E_{\infty}$-algebras in $\Ee,$ that is, the category of algebras of a cofibrant replacement $E$ of the symmetric operad $Com.$ The category $B_{n}(\Ee)$ is equivalent to the category of algebras of the symmetric operad $sym_n(G_{n})$ which is a cofibrant $E_n$-operad, cf. [@SymBat].
By Lemma \[algebra\] there is an isomorphism between the categories of algebras of the $n$-operad $G_n$ and of the $(n+1)$-operad $S_!(G_{n}).$ Also observe that $S_!$ is a left Quillen functor and, hence, preserves cofibrations. In particular, the operad $S_!(G_{n})$ is cofibrant. There is a map of $(n+1)$-operads $i: S_!(G_{n})\to G_{n+1}.$ Indeed, since $S^*(Ass_{n+1}) = Ass_n$, by adjunction we have a map $S_!(G_{n})\to Ass_{n+1}.$ We also have a trivial fibration $G_{n+1}\to Ass_{n+1}.$ Since $S_!(G_{n})$ is cofibrant there is a lifting $i: S_!(G_{n})\to G_{n+1}.$ Without loss of generality we can assume that $i$ is a cofibration, because if it is not, we can always factorise it as a cofibration followed by a trivial fibration and so replace $G_{n+1}$ by another cofibrant operad with a trivial fibration to $Ass_{n+1}.$ The morphism $i$ induces a Quillen adjunction between algebras of $S_!(G_{n})$ and algebras of $G_{n+1}$ and so between algebras of $G_{n}$ and $G_{n+1}.$ Slightly abusing notation we will denote this adjunction $i_!\dashv i^*.$ Recall that a *standard system of simplices* in a monoidal model category $\Ee$ is a cosimplicial object $\delta$ in $\Ee$ satisfying the following properties [@BergerMoerdijk0 Definition A.6]: - $\delta$ is cofibrant for the Reedy model structure on $\Ee^{\Delta}$, - $\delta^0$ is the unit object $I$ of $\Ee$ and the simplicial operators $[m]\to[n]$ act via weak equivalences $\delta^m\to\delta^n$ in $\Ee$, and - the simplicial realization functor $|-|_\delta=(-)\otimes_\Delta\delta:\Ee^{\Delta^{op}}\to \Ee$ is a symmetric monoidal functor whose structural maps $$|X|_\delta\otimes_V |Y|_\delta\to|X\otimes_V Y|_{\delta}$$ are weak equivalences for Reedy-cofibrant objects $X,Y \in \Ee^{\Delta^{op}}$.
Recall also that a model category $\Ee$ is called [*$k$-truncated*]{} if for all $X,Y\in \Ee$ $$\pi_i(\widetilde{\Ee}(X,Y), a) = 0 \ , \ i > k ,$$ for any choice of base point $a.$ Here $\widetilde{\Ee}(X,Y)$ is a homotopy function complex of $\Ee$ [@Hirschhorn]. \[stab\] Let $(\Ee,\otimes,e)$ be a cofibrantly generated monoidal model category whose unit $e\in \Ee$ is cofibrant. Then - for any $2\le n < \infty$ there is a commutative triangle of Quillen adjunctions (only the left adjoints are displayed): $$\begin{diagram}[small,UO] B_n(\Ee) & & \rTo^{i_!} & & B_{n+1}(\Ee) \\ &\rdTo(2,2) & & \ldTo(2,2) & \\ & & E_{\infty}(\Ee) & & \end{diagram}$$ - If $\Ee$ has a standard system of simplices then there is a Quillen equivalence between $B_{\infty}(\Ee)$ and $E_{\infty}(\Ee),$ where $B_{\infty}(\Ee)$ is the category of algebras of a cofibrant replacement of $Ass_{\infty}.$ - If, in addition, $\Ee$ is $k$-truncated then the triangle from (a) is a triangle of Quillen equivalences for any $ n \ge k+2 .$ Apply the symmetrisation functor $sym_{n}$ to the cofibrant replacement $G_{n}\to Ass_{n}.$ We have a morphism $P_{n}: sym_{n}(G_{n})\to sym_{n}(Ass_{n})= Com$ and, hence, a lifting of this morphism to a morphism of operads $sym_{n}(G_{n})\to E.$ By (\[dessym\]) we can replace it by a morphism $sym_{n+1}(S_!(G_{n}))\to E.$ Applying $sym_{n+1}$ to the cofibration $i: S_!(G_{n})\to G_{n+1}$ we have a composite $$sym_{n+1}(S_!(G_n))\to sym_{n+1}(G_{n+1}) \to sym_{n+1}(Ass_{n+1}) = Com$$ and since $G_n$ is cofibrant we have a lifting $$sym_{n+1}(S_!(G_n))\to E.$$ So, we have a commutative square of operads $$\begin{diagram}[small,UO] sym_{n+1}(S_!(G_n)) & \rTo^{sym_{n+1}(i)} & sym_{n+1}(G_{n+1}) \\ \dTo & & \dTo \\ E & \rTo & Com \end{diagram}$$ and, hence, since $sym_{n+1}(i)$ is a cofibration and $E\to Com$ is a trivial fibration, a lifting $sym_{n+1}(G_{n+1})\to E.$ The upper commutative triangle of operads
induces the triangle of Quillen adjunctions. This proves statement (a). Let us first prove statement (c) of the theorem, so we assume that $\Ee$ is $k$-truncated. By construction the composite of the left vertical morphism and the bottom horizontal morphism is $sym_{n+1}(S_!(P_n))$ which, by naturality of the isomorphism $sym_n \simeq sym_{n+1}(S_!),$ is isomorphic to $P_n.$ To finish the proof it will be enough to show that $P_{n}$ and $P_{n+1}$ are weak equivalences of operads, so that $sym_n(G_{n})$ and $sym_{n+1}(G_{n+1})$ are both cofibrant replacements of $Com$ in the category of symmetric operads in $\Ee.$ The morphism $sym_{n+1}(i)$ is then a weak equivalence by the two-out-of-three property. Since $G_{n}$ is cofibrant the operad $sym_n(G_{n})$ is weakly equivalent to the operad $\mathbb{L}sym_n(G_{n}),$ where $\mathbb{L}sym_n$ is the left derived symmetrisation functor. The underlying object of $G_{n}$ is cofibrant and $\Ee$ has a standard system of simplices, so we can apply Theorem 8.2 from [@BB]. This theorem states that $\mathbb{L}sym_n(G_{n})(T)$ is the homotopy colimit in $\Ee$ of a diagram $\widetilde{G_{n}}: {\mbox{${\bf SO}^{\scriptstyle \tt O(n)}$}}\to \Ee.$ The functor $\widetilde{G_{n}}$ representing the $n$-operad $G_{n}$ has value on an object $\tau \in {\mbox{${\bf SO}^{\scriptstyle \tt O(n)}$}}$ given by a certain tensor product of values of the operad $G_{n}$ and, hence, the functor $\widetilde{G_{n}}$ is equipped with a canonical pointwise weak equivalence $\widetilde{G_{n}}\to !^*(e),$ where $!^*(e)$ is the constant functor on ${\mbox{${\bf SO}^{\scriptstyle \tt O(n)}$}}$ whose value is the tensor unit $e.$ Since both functors $\widetilde{G_{n}}$ and $!^*(e)$ are pointwise cofibrant we have a weak equivalence of homotopy colimits. It remains to show that the canonical morphism $$\hocolim_{{\bf SO}^{\tt O(n)}}!^*(e)\to e$$ is a weak equivalence.
For this it is enough to prove that for any fibrant object $S\in \Ee$ the induced map of simplicial sets $$\widetilde{\Ee}(\hocolim_{{\bf SO}^{\tt O(n)}}!^*(e), S)\leftarrow \widetilde{\Ee}(e,S)$$ is a weak equivalence. Equivalently, we have to prove that for any fibrant $k$-truncated simplicial set $W$ the map $$\label{holim}\holim_{{\bf SO}^{\tt O(n)}}!^*(W) \leftarrow W$$ is a weak equivalence. Let $\Ss(-,-)$ be the internal hom in simplicial sets. We have $$\Ss(N({\mbox{${\bf SO}^{\scriptstyle \tt O(n)}$}}),W) \simeq \Ss(\hocolim_{{\bf SO}^{\tt O(n)}} !^*(1) , W) \simeq$$ $$\simeq\holim_{{\bf SO}^{\tt O(n)}}!^*(\Ss(1,W)) = \holim_{{\bf SO}^{\tt O(n)}} !^*(W),$$ and the map (\[holim\]) is induced by (\[!\]), so it is a weak equivalence since $N(!)$ is an $(n-2)$-equivalence and, hence, an $i$-equivalence for each $i\le n-2.$ So, we have proved point (c) of the theorem. The argument for (b) is identical, but we don’t need $\Ee$ to be truncated because the classifier of $\infty$-operads inside symmetric operads is contractible. \[wkg\] The suspension functor induces an equivalence between the homotopy categories of $n$-tuply monoidal weak $k$-groupoids and of $(n+1)$-tuply monoidal weak $k$-groupoids for $n\ge k+2.$ We apply Theorem \[stab\] to the category of homotopy $k$-types $Sp_k$ which is the $k$-truncation of the model category of simplicial sets $Sp =Set^{\Delta^{op}}$ with its Kan model structure [@cis06]. Weak $k$-groupoids are fibrant objects in this category. Corollary \[wkg\] implies the classical Freudenthal stabilisation theorem (cf. [@BD]). Recall that Rezk’s $(m+k,m)$-categories are fibrant objects in the model category $\Theta_m Sp_k, -2\le k\le \infty $ which is a truncation of the model category of Rezk’s complete $\Theta_m$-spaces $\Theta_m Sp_{\infty}.$ The category $\Theta_m Sp_{\infty}$ is itself a certain Bousfield localisation of the category of simplicial presheaves $Sp^{\Theta_m^{op}}$ with its injective model structure.
This is a cartesian closed model category which is $(m+k)$-truncated and satisfies all hypotheses of Theorem \[stab\] (see [@Rezk]). The category of Rezk’s $n$-tuply monoidal $(m+k,m)$-categories is the category of fibrant objects in the (semi)model category $B_n(\Theta_m Sp_k).$ We immediately have \[wkc\] The suspension functor induces an equivalence between the homotopy categories of Rezk’s $n$-tuply monoidal $(m+k,m)$-categories and of Rezk’s $(n+1)$-tuply monoidal $(m+k,m)$-categories for $n\ge m+k+2.$ If $m=0$ the category $\Theta_0 Sp_k$ is isomorphic (as a cartesian model category) to the category $Sp_k$ (cf. [@Rezk]) and so Corollary \[wkc\] is a particular case of Corollary \[wkg\]. If $k=0$ the fibrant objects of the category $\Theta_m Sp_0$ are weak $m$-categories, and so we have proved the classical Baez-Dolan Stabilization Hypothesis for Rezk $m$-categories. The choice of the suspension functor amounts to the choice of a multiplicative structure on an algebra from $B_{n+1}(\Ee)$ which we would like to ‘forget’. Theorem \[stab\] asserts that up to homotopy this choice is, in stable dimensions, not important. The argument of Theorem \[stab\] works equally well for the Swiss-Cheese type symmetric and $n$-operads [@SymBat]. The stabilisation result then amounts to the stable version of the Swiss-Cheese conjecture of Kontsevich, cf. [@Kon]. Another conclusion from the proof of Theorem \[stab\] is that some interesting results about equivalence of homotopy categories of algebras can be proved once we have a map of polynomial monads $\phi:S\to T$ such that the classifier ${\mbox{${\bf T}^{\scriptstyle \tt S}$}}$ is aspherical with respect to a fixed fundamental localiser $\mathcal W$ [@cis06]. We hope to make use of this observation in the future. [**Acknowledgements.**]{} I wish to express my gratitude to C. Berger, D.-C. Cisinski, E. Getzler, R. Haugseng, A. Joyal, S. Lack, M. Markl, R. Street, M. Weber, D. White for many useful discussions.
I am also grateful to the referee for many suggestions which allowed me to improve the presentation of the results. The author also gratefully acknowledges the financial support of the Scott Russell Johnson Memorial Foundation, the Max Planck Institut für Mathematik and the Australian Research Council (grants No. DP0558372, No. DP1095346). [99]{} Baez J., Dolan J.,[*Higher-dimensional algebra and topological quantum field theory,*]{} J. Math. Phys. **36** (1995), 6073–6105. Batanin M. A., [*The symmetrization of $n$-operads and compactification of real configuration spaces,*]{} Adv. Math. **211** (2007), 684–725. Batanin M. A., [*The Eckmann-Hilton argument and higher operads,*]{} Adv. Math. **217** (2008), 334–385. Batanin M. A., [*Locally constant $n$-operads as higher braided operads,*]{} [J. of Noncommutative Geometry]{} **4** (2010), 237–265. Batanin M.A., Berger C., [*Homotopy theory of algebras over polynomial monads,*]{} arXiv:1305.0086. Batanin M.A., Markl M., [*Operadic categories and duoidal Deligne’s conjecture,*]{} Adv. Math. **285** (2015), 1630–1687. Berger C., Moerdijk I., [*Axiomatic homotopy theory for operads,*]{} Comment. Math. Helv. **78** (2003), 805–831. Berger C., Moerdijk I., [*The Boardman-Vogt resolution of operads in monoidal model categories*]{}, Topology **45** (2006), 807–849. Bergner J., Rezk C., [*Comparison of models for $(\infty,n)$-categories, II*]{}, arXiv:1406.4182. Breen L., [*On the classification of $2$-gerbes and $2$-stacks*]{}, Astérisque [**225**]{}, Soc. Math. de France (1994). Cheng E., Gurski N., [*The periodic table of n-categories for low dimensions I: degenerate categories and degenerate bicategories*]{}, Contemp. Math. **431** (2007), 143–164. Cisinski D.-C., [*Les préfaisceaux comme modèles des types d’homotopie,*]{} [Astérisque, Soc. Math. France]{}, [**308**]{}, 2006. Fresse B., [*Modules over operads and functors,*]{} Lecture Notes in Mathematics [**1967**]{}, Springer Verlag, 2009, 318 pages.
Fiedorowicz Z., Vogt R., [*An Additivity Theorem for the Interchange of $E_n$-Structures,*]{} arXiv:1102.1311. Gepner D., Haugseng R., [*Enriched $\infty$-categories via nonsymmetric $\infty$-operads*]{}, [*Advances in Math.*]{}, **279**, (2015), pp. 575-716. Hirschhorn P., [*Model Categories and their Localizations*]{}, Math. Surveys Monogr., vol. **99**, Amer. Math. Soc., Providence, RI, 2003. Kontsevich M., [*Operads and motives in deformation quantisation,*]{} Lett. Math. Phys. **48** (1999), 35–72. Lack S., [*A Quillen model structure for $2$-categories,*]{} K-theory **26** (2002), 171–205. Lurie J., [*Higher algebra,*]{} available at [*http://www.math.harvard.edu/$\sim$lurie/*]{}, 2014. May J. P., [*The geometry of iterated loop spaces*]{}, Lect. Notes Math. **271**, Springer-Verlag, Berlin, 1972. Rezk, C., [*A cartesian presentation of weak $n$-categories.*]{} Geometry $\&$ Topology **14** (2010), 521–571. Simpson C., [*On the Breen-Baez-Dolan stabilization hypothesis for Tamsamani’s weak n-categories,*]{} arXiv:math/9810058. Spitzweck M., [*Operads, Algebras and Modules in Model Categories and Motives*]{}, PhD thesis, Bonn, 2001. Weber M., [*Internal algebra classifiers as codescent objects of crossed internal categories*]{}, [arXiv:1503.07585]{}. Weber M., [*Algebraic Kan extensions along morphisms of internal algebra classifiers*]{}, [arXiv:1511.04911v2]{}. White D., Yau D., [*Bousfield Localization and Algebras over Colored Operads*]{}, arXiv:1503.06720. [Macquarie University,\ North Ryde, 2109 Sydney, Australia.]{}\ *E-mail:* michael.batanin$@$mq.edu.au [^1]: Lurie formulated his argument in the context of $(n,1)$-categories but observed that it can be extended to a more general context of $(n,k)$-categories. This has been done by Gepner and Haugseng in [@GH]. [^2]: A weaker version of the stabilisation hypothesis was proved earlier by Simpson in [@Simpson].
[^3]: Gepner and Haugseng [@GH] show that such an interpretation is also possible using their weak enrichment approach.
--- abstract: | Recently it was shown (by the author) that every graph of size $q$ (the number of edges) and minimum degree $\delta$ is hamiltonian if $q\le\delta^2+\delta-1$ (arXiv:1107.2201v1). In this paper we present the exact analog of this result for dominating cycles: if $G$ is a 2-connected graph with $q\le8$ when $\delta=2$ and $q\le (3(\delta-1)(\delta+2)-1)/2$ when $\delta\ge3$, then each longest cycle in $G$ is a dominating cycle. The result is sharp in all respects.\ Key words: Dominating cycle, size, minimum degree. author: - 'Zh.G. Nikoghosyan' title: '**A Size Upper Bound for Dominating Cycles**' --- Introduction ============ Only finite undirected graphs without loops or multiple edges are considered. We reserve $n$, $q$, $\delta$ and $\kappa$ to denote the number of vertices (order), the number of edges (size), the minimum degree and the connectivity of a graph, respectively. A good reference for any undefined terms is [@[1]]. The earliest sufficient condition for a graph to be hamiltonian was given in 1952 by Dirac [@[2]] and is based on the natural idea that if a sufficient number of edges are present in a graph on $n$ vertices (by keeping the minimum degree at a fairly high level) then a Hamilton cycle will exist.\ **Theorem A [@[2]].** Every graph with $\delta\ge\frac{1}{2}n$ is hamiltonian.\ A direct link between the number of edges and Hamilton cycles was established in 1959 by Erdös and Gallai [@[3]]. **Theorem B [@[3]].** Every graph with $q\ge\frac{1}{2}(n^2-3n+5)$ is hamiltonian.\ Recently, a somewhat surprising and, in fact, contrary statement was proved, ensuring the existence of a Hamilton cycle when the number of edges is small, namely less than $\delta^2+\delta$.\ **Theorem C [@[5]].** Every graph with $q\le\delta^2+\delta-1$ is hamiltonian.\ In this paper we present the exact analog of Theorem C for dominating cycles.\ **Theorem 1.** Let $G$ be a 2-connected graph.
If $$q\le\left\{ \begin{array}{lll} 8 & \mbox{if} & \mbox{ }\delta=2, \\ \frac{3(\delta-1)(\delta+2)-1}{2} & \mbox{if} & \mbox{ }% \delta\ge3, \end{array} \right.$$ then each longest cycle in $G$ is a dominating cycle.\
To show that Theorem 1 is sharp, suppose first that $\delta=2$. The graph $K_1+2K_2$ shows that the connectivity condition $\kappa\ge2$ in Theorem 1 cannot be relaxed by replacing it with $\kappa\ge1$. The graph with vertex set $\{v_1,v_2,...,v_8\}$ and edge set $$\{v_1v_2,v_2v_3,v_3v_4,v_4v_5,v_5v_6,v_6v_1,v_1v_7,v_7v_8,v_8v_4\},$$ shows that the size bound $q\le8$ cannot be relaxed by replacing it with $q\le9$. Finally, the graph $K_2+3K_1$ shows that the conclusion “each longest cycle in $G$ is a dominating cycle” cannot be strengthened by replacing it with “$G$ is hamiltonian”.

Now let $\delta\ge3$. The graph $K_1+2K_\delta$ shows that the connectivity condition $\kappa\ge2$ in Theorem 1 cannot be relaxed by replacing it with $\kappa\ge1$. Further, the graph $K_2+3K_{\delta-1}$ shows that the size bound $q\le(3(\delta-1)(\delta+2)-1)/2$ cannot be relaxed by replacing it with $q\le3(\delta-1)(\delta+2)/2$. Finally, the graph $K_\delta+(\delta+1)K_1$ shows that the main conclusion “each longest cycle in $G$ is a dominating cycle” cannot be strengthened by replacing it with “$G$ is hamiltonian”. So, Theorem 1 is best possible in all respects.

The following theorems are useful.\
**Theorem D [@[2]].** Every 2-connected graph either has a Hamilton cycle or has a cycle of length at least $2\delta$.\
**Theorem E [@[4]].** Let $G$ be a graph, $C$ a longest cycle in $G$ and $P$ a longest path in $G\backslash C$ of length $\overline{p}$. Then $|C|\ge(\overline{p}+2)(\delta-\overline{p})$.\
**Theorem F [@[6]].** Let $G$ be a graph on $n$ vertices with $d(x)+d(y)\ge n$ for every pair of nonadjacent vertices $x,y$. Then $G$ is hamiltonian.
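For $\delta=2$, the sharpness example above (the 8-vertex, 9-edge graph) is small enough to verify exhaustively. The following is a minimal brute-force sketch in plain Python (the helper names are mine, not the paper's); it confirms that the graph has $\delta=2$ and $q=9$, that its longest cycles have length 6, and that none of them is dominating:

```python
from itertools import permutations

# The 8-vertex, 9-edge sharpness example for delta = 2.
edges = {frozenset(e) for e in
         [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1),
          (1, 7), (7, 8), (8, 4)]}
vertices = range(1, 9)

def cycles():
    """Yield every simple cycle of length >= 3 as a vertex tuple."""
    for k in range(3, 9):
        for perm in permutations(vertices, k):
            if perm[0] != min(perm):   # fix the rotation to avoid duplicates
                continue
            if all(frozenset((perm[i], perm[(i + 1) % k])) in edges
                   for i in range(k)):
                yield perm

def is_dominating(cycle):
    """A cycle is dominating if every edge has an endpoint on it."""
    on_cycle = set(cycle)
    return all(e & on_cycle for e in edges)

degrees = {v: sum(v in e for e in edges) for v in vertices}
assert min(degrees.values()) == 2 and len(edges) == 9
longest = max(len(c) for c in cycles())
assert longest == 6
# Each longest cycle leaves an entire edge (e.g. v_7v_8) off the cycle:
assert not any(is_dominating(c) for c in cycles() if len(c) == longest)
```

Since $q=9$ here while every longest cycle fails to dominate, the bound $q\le8$ in Theorem 1 is indeed sharp.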
Notation and preliminaries
==========================

The set of vertices of a graph $G$ is denoted by $V(G)$ and the set of edges by $E(G)$. For $S$ a subset of $V(G)$, we denote by $G\backslash S$ the maximum subgraph of $G$ with vertex set $V(G)\backslash S$. For a subgraph $H$ of $G$ we write $G\backslash H$ as shorthand for $G\backslash V(H)$. The neighborhood of a vertex $x\in V(G)$ will be denoted by $N(x)$. Set $d(x)=|N(x)|$. Furthermore, for a subgraph $H$ of $G$ and $x\in V(G)$, we define $N_H(x)=N(x)\cap V(H)$ and $d_H(x)=|N_H(x)|$. A simple cycle (or just a cycle) $C$ of length $t$ is a sequence $v_1v_2...v_tv_1$ of distinct vertices $v_1,...,v_t$ with $v_iv_{i+1}\in E(G)$ for each $i\in \{1,...,t\}$, where $v_{t+1}=v_1$. When $t=2$, the cycle $C=v_1v_2v_1$ on two vertices $v_1, v_2$ coincides with the edge $v_1v_2$, and when $t=1$, the cycle $C=v_1$ coincides with the vertex $v_1$. So, all vertices and edges in a graph can be considered as cycles of lengths 1 and 2, respectively. A graph $G$ is hamiltonian if $G$ contains a Hamilton cycle, i.e. a cycle of length $n$. Paths and cycles in a graph $G$ are considered as subgraphs of $G$. If $Q$ is a path or a cycle, then the length of $Q$, denoted by $|Q|$, is $|E(Q)|$. We denote by $\overrightarrow{Q}$ the path or cycle $Q$ with a given orientation. For $x,y\in V(Q)$, we denote by $x\overrightarrow{Q}y$ the subpath of $Q$ in the chosen direction from $x$ to $y$. For $x\in V(Q)$, we denote the $h$-th successor and the $h$-th predecessor of $x$ on $\overrightarrow{Q}$ by $x^{+h}$ and $x^{-h}$, respectively. We abbreviate $x^{+1}$ and $x^{-1}$ by $x^+$ and $x^-$, respectively.\
**Special definitions**. Let $G$ be a graph, $C$ a longest cycle in $G$ and $P=x\overrightarrow{P}y$ a longest path in $G\backslash C$ of length $\overline{p}\ge0$.
Let $\xi_1,\xi_2,...,\xi_s$ be the elements of $N_C(x)\cup N_C(y)$ occurring on $C$ in consecutive order and let $$I_i=\xi_i\overrightarrow{C}\xi_{i+1}, \ I_i^\ast=\xi_i^+\overrightarrow{C}\xi_{i+1}^- \ \ (i=1,2,...,s),$$ where $\xi_{s+1}=\xi_1$. $(\ast1)$ We call $I_1,I_2,...,I_s$ elementary segments on $C$ induced by $N_C(x)\cup N_C(y)$. $(\ast2)$ We call a path $L=z\overrightarrow{L}w$ an intermediate path between elementary segments $I_a$ and $I_b$ if $$z\in V(I_a^\ast), \ w\in V(I_b^\ast), \ V(L)\cap V(C\cup P)=\{z,w\}.$$ $(\ast3)$ Denote by $M(I_{i_1},I_{i_2},...,I_{i_t})$ the set of all intermediate paths between elementary segments $I_{i_1},I_{i_2},...,I_{i_t}$.\
**Lemma 1.** Let $G$ be a graph, $C$ a longest cycle in $G$ and $P=x\overrightarrow{P}y$ a longest path in $G\backslash C$ of length $\overline{p}\ge1$. If $|N_C(x)|\ge2$, $|N_C(y)|\ge2$ and $N_C(x)\not=N_C(y)$ then $$|C|\ge\left\{ \begin{array}{lll} 3\delta+\max\{\sigma_1, \sigma_2\}-1\ge3\delta & \mbox{if} & \mbox{ }\overline{p}=1, \\ \max\{2\overline{p}+8, 4\delta-2\overline{p}\} & \mbox{if} & \mbox{ }% \overline{p}\ge2, \end{array} \right.$$ where $\sigma_1=|N_C(x)\backslash N_C(y)|$ and $\sigma_2=|N_C(y)\backslash N_C(x)|$.\
**Lemma 2.** Let $G$ be a graph, $C$ a longest cycle in $G$ and $P=x\overrightarrow{P}y$ a longest path in $G\backslash C$ of length $\overline{p}\ge0$. If $N_C(x)=N_C(y)$ and $|N_C(x)|\ge2$ then for each pair of elementary segments $I_a$ and $I_b$ induced by $N_C(x)\cup N_C(y)$, (a1) if $L$ is an intermediate path between $I_a$ and $I_b$ then $$|I_a|+|I_b|\ge2\overline{p}+2|L|+4,$$ (a2) if $M(I_a,I_b)\subseteq E(G)$ and $|M(I_a,I_b)|=i$ $(i\in\{1,2,3\})$ then $$|I_a|+|I_b|\ge2\overline{p}+i+5.$$ **Lemma 3.** Let $G$ be a graph, $S$ a cut set in $G$ and $H$ a connected component of $G\backslash S$ of order $h$. Then $$q_H\ge\frac{h(2\delta-h+1)}{2},$$ where $q_H=|\{xy\in E(G) : \{x,y\}\cap V(H)\not=\emptyset\}|$.\
**Lemma 4.** Let $G$ be a 2-connected graph.
If $\delta\ge(n-2)/3$ then either $$q\ge\left\{ \begin{array}{lll} 9 & \mbox{if} & \mbox{ }\delta=2, \\ \frac{3(\delta-1)(\delta+2)}{2} & \mbox{if} & \mbox{ }% \delta\ge3, \end{array} \right.$$ or each longest cycle in $G$ is a dominating cycle.

Proofs
======

**Proof of Lemma 1**. Put $$A_1=N_C(x)\backslash N_C(y), \ A_2=N_C(y)\backslash N_C(x), \ M=N_C(x)\cap N_C(y).$$ By the hypothesis, $N_C(x)\not=N_C(y)$, implying that $$\max \{|A_1|,|A_2|\}\ge1.$$ Let $\xi_1,\xi_2,...,\xi_s$ be the elements of $N_C(x)\cup N_C(y)$ occurring on $C$ in consecutive order. Put $I_i=\xi_i\overrightarrow{C}\xi_{i+1}$ $(i=1,2,...,s)$, where $\xi_{s+1}=\xi_1$. Clearly, $s=|A_1|+|A_2|+|M|$. Since $C$ is extreme, $|I_i|\ge2$ $(i=1,2,...,s)$. Moreover, if $\{\xi_i,\xi_{i+1}\}\cap M\not=\emptyset$ for some $i\in\{1,2,...,s\}$ then $|I_i|\ge\overline{p}+2$. In addition, if either $\xi_i\in A_1$, $\xi_{i+1}\in A_2$ or $\xi_i\in A_2$, $\xi_{i+1}\in A_1$ then again $|I_i|\ge\overline{p}+2$.\
**Case 1**. $\overline{p}=1$. **Case 1.1**. $|A_i|\ge1$ $(i=1,2)$. It follows that among $I_1,I_2,...,I_s$ there are $|M|+2$ segments of length at least $\overline{p}+2$. Observing also that each of the remaining $s-(|M|+2)$ segments has length at least 2, we get $$|C|\ge(\overline{p}+2)(|M|+2)+2(s-|M|-2)$$ $$=3(|M|+2)+2(|A_1|+|A_2|-2)$$ $$=2|A_1|+2|A_2|+3|M|+2.$$ Since $|A_1|=d(x)-|M|-1$ and $|A_2|=d(y)-|M|-1$, we have $$|C|\ge2d(x)+2d(y)-|M|-2\ge3\delta+d(x)-|M|-2.$$ Recalling that $d(x)=|M|+|A_1|+1$, we get $$|C|\ge3\delta+|A_1|-1=3\delta+\sigma_1-1.$$ Analogously, $|C|\ge3\delta+\sigma_2-1$. So, $$|C|\ge3\delta+\max \{\sigma_1,\sigma_2\}-1\ge3\delta.$$ **Case 1.2**. Either $|A_1|\ge1, |A_2|=0$ or $|A_1|=0, |A_2|\ge1$. Assume w.l.o.g. that $|A_1|\ge1$ and $|A_2|=0$, i.e. $|N_C(y)|=|M|\ge2$ and $s=|A_1|+|M|$. Hence, among $I_1,I_2,...,I_s$ there are $|M|+1$ segments of length at least $\overline{p}+2=3$.
Taking into account that each of the remaining $s-(|M|+1)$ segments has length at least 2 and $|M|+1=d(y)$, we get $$|C|\ge 3(|M|+1)+2(s-|M|-1)=3d(y)+2(|A_1|-1)$$ $$\ge3\delta+|A_1|-1=3\delta+\max\{\sigma_1,\sigma_2\}-1\ge3\delta.$$ **Case 2**. $\overline{p}\ge2$. We first prove that $|C|\ge2\overline{p}+8$. Since $|N_C(x)|\ge2$ and $|N_C(y)|\ge2$, there are at least two segments among $I_1,I_2,...,I_s$ of length at least $\overline{p}+2$. If $|M|=0$ then clearly $s\ge4$ and $$|C|\ge2(\overline{p}+2)+2(s-2)\ge2\overline{p}+8.$$ Otherwise, since $\max\{|A_1|,|A_2|\}\ge1$, there are at least three elementary segments of length at least $\overline{p}+2$, i.e. $$|C|\ge3(\overline{p}+2)\ge2\overline{p}+8.$$ So, in any case, $|C|\ge 2\overline{p}+8$. To prove that $|C|\ge 4\delta-2\overline{p}$, we distinguish two main cases.\
**Case 2.1**. $|A_i|\ge1$ $(i=1,2)$. It follows that among $I_1,I_2,...,I_s$ there are $|M|+2$ segments of length at least $\overline{p}+2$. Further, since each of the remaining $s-(|M|+2)$ segments has length at least 2, we get $$|C|\ge (\overline{p}+2)(|M|+2)+2(s-|M|-2)$$ $$=(\overline{p}-2)|M|+(2\overline{p}+4|M|+4)+2(|A_1|+|A_2|-2)$$ $$\ge2|A_1|+2|A_2|+4|M|+2\overline{p}.$$ Observing also that $$|A_1|+|M|+\overline{p}\ge d(x), \quad |A_2|+|M|+\overline{p}\ge d(y),$$ we have $$2|A_1|+2|A_2|+4|M|+2\overline{p}$$ $$\ge 2d(x)+2d(y)-2\overline{p}\ge4\delta-2\overline{p},$$ implying that $|C|\ge4\delta-2\overline{p}$. **Case 2.2**. Either $|A_1|\ge1, |A_2|=0$ or $|A_1|=0, |A_2|\ge1$. Assume w.l.o.g. that $|A_1|\ge1$ and $|A_2|=0$, i.e. $|N_C(y)|=|M|\ge2$ and $s=|A_1|+|M|$. It follows that among $I_1,I_2,...,I_s$ there are $|M|+1$ segments of length at least $\overline{p}+2$. Observing also that $|M|+\overline{p}\ge d(y)\ge\delta$, i.e. $2\overline{p}+4|M|\ge 4\delta-2\overline{p}$, we get $$|C|\ge(\overline{p}+2)(|M|+1)\ge(\overline{p}-2)(|M|-1)+2\overline{p}+4|M|$$ $$\ge 2\overline{p}+4|M|\ge4\delta-2\overline{p}.
\quad \quad \rule{7pt}{6pt}$$ **Proof of Lemma 2**. Let $\xi_1,\xi_2,...,\xi_s$ be the elements of $N_C(x)$ occurring on $C$ in consecutive order. Put $I_i=\xi_i\overrightarrow{C}\xi_{i+1}$ $(i=1,2,...,s)$, where $\xi_{s+1}=\xi_1.$ To prove $(a1)$, let $L=z\overrightarrow{L}w$ be an intermediate path between elementary segments $I_a$ and $I_b$ with $z\in V(I_a^\ast)$ and $w\in V(I_b^\ast)$. Put $$|\xi_a\overrightarrow{C}z|=d_1, \ |z\overrightarrow{C}\xi_{a+1}|=d_2, \ |\xi_b\overrightarrow{C}w|=d_3, \ |w\overrightarrow{C}\xi_{b+1}|=d_4,$$ $$C^\prime=\xi_ax\overrightarrow{P}y\xi_b\overleftarrow{C}z\overrightarrow{L}w\overrightarrow{C}\xi_a.$$ Clearly, $$|C^\prime|=|C|-d_1-d_3+|L|+|P|+2.$$ Since $C$ is extreme, we have $|C|\ge|C^\prime|$, implying that $d_1+d_3\ge\overline{p}+|L|+2$. By a symmetric argument, $d_2+d_4\ge\overline{p}+|L|+2$. Hence $$|I_a|+|I_b|=\sum_{i=1}^4d_i\ge2\overline{p}+2|L|+4.$$ To prove $(a2)$, let $M(I_a,I_b)\subseteq E(G)$ and $|M(I_a,I_b)|=i$ $(i\in \{1,2,3\})$.\
**Case 1**. $i=1$. It follows that $M(I_a,I_b)$ consists of a single intermediate edge $L=zw$. By (a1), $$|I_a|+|I_b|\ge2\overline{p}+2|L|+4=2\overline{p}+6.$$ **Case 2**. $i=2$. It follows that $M(I_a,I_b)$ consists of two edges $e_1,e_2$. Put $e_1=z_1w_1$ and $e_2=z_2w_2$, where $\{z_1,z_2\}\subseteq V(I_a^\ast)$ and $\{w_1,w_2\}\subseteq V(I_b^\ast)$.\
**Case 2.1**. $z_1\not=z_2$ and $w_1\not=w_2$. Assume w.l.o.g. that $z_1$ and $z_2$ occur in this order on $I_a$.\
**Case 2.1.1**. $w_2$ and $w_1$ occur in this order on $I_b$.
Put $$|\xi_a\overrightarrow{C}z_1|=d_1, \ |z_1\overrightarrow{C}z_2|=d_2, \ |z_2\overrightarrow{C}\xi_{a+1}|=d_3,$$ $$|\xi_b\overrightarrow{C}w_2|=d_4, \ |w_2\overrightarrow{C}w_1|=d_5, \ |w_1\overrightarrow{C}\xi_{b+1}|=d_6,$$ $$C^{\prime}=\xi_a\overrightarrow{C}z_1w_1\overleftarrow{C}w_2z_2\overrightarrow{C}\xi_b x\overrightarrow{P}y\xi_{b+1}\overrightarrow{C}\xi_a.$$ Clearly, $$|C^{\prime}|=|C|-d_2-d_4-d_6+|\{e_1\}|+|\{e_2\}|+|P|+2$$ $$=|C|-d_2-d_4-d_6+\overline{p}+4.$$ Since $C$ is extreme, $|C|\ge |C^{\prime}|$, implying that $d_2+d_4+d_6\ge \overline{p}+4$. By a symmetric argument, $d_1+d_3+d_5\ge\overline{p}+4$. Hence $$|I_a|+|I_b|= \sum_{i=1}^6d_i\ge2\overline{p}+8.$$ **Case 2.1.2**. $w_1$ and $w_2$ occur in this order on $I_b$. Putting $$C^{\prime}=\xi_a\overrightarrow{C}z_1w_1\overrightarrow{C}w_2z_2\overrightarrow{C}\xi_b x\overrightarrow{P}y\xi_{b+1}\overrightarrow{C}\xi_a,$$ we can argue as in Case 2.1.1.\
**Case 2.2**. Either $z_1=z_2$, $w_1\not=w_2$ or $z_1\not=z_2$, $w_1=w_2$. Assume w.l.o.g. that $z_1\not=z_2$, $w_1=w_2$ and $z_1, z_2$ occur in this order on $I_a$. Put $$|\xi_a\overrightarrow{C}z_1|=d_1, \ |z_1\overrightarrow{C}z_2|=d_2, \ |z_2\overrightarrow{C}\xi_{a+1}|=d_3,$$ $$|\xi_b\overrightarrow{C}w_1|=d_4, \ |w_1\overrightarrow{C}\xi_{b+1}|=d_5,$$ $$C^{\prime}=\xi_a x\overrightarrow{P}y\xi_b\overleftarrow{C}z_1w_1\overrightarrow{C}\xi_a,$$ $$C^{\prime\prime}=\xi_a\overrightarrow{C}z_2w_1\overleftarrow{C}\xi_{a+1}x\overrightarrow{P}y\xi_{b+1}\overrightarrow{C}\xi_a.$$ Clearly, $$|C^{\prime}|=|C|-d_1-d_4+|\{e_1\}|+|P|+2=|C|-d_1-d_4+\overline{p}+3,$$ $$|C^{\prime\prime}|=|C|-d_3-d_5+|\{e_2\}|+|P|+2=|C|-d_3-d_5+\overline{p}+3.$$ Since $C$ is extreme, $|C|\ge |C^{\prime}|$ and $|C|\ge |C^{\prime\prime}|$, implying that $$d_1+d_4\ge \overline{p}+3, \ d_3+d_5\ge \overline{p}+3.$$ Hence, $$|I_a|+|I_b|= \sum_{i=1}^5d_i\ge d_1+d_3+d_4+d_5+1\ge2\overline{p}+7.$$ **Case 3**. $i=3$. It follows that $M(I_a,I_b)$ consists of three edges $e_1,e_2,e_3$.
Let $e_i=z_iw_i$ $(i=1,2,3)$, where $\{z_1,z_2,z_3\}\subseteq V(I_a^\ast)$ and $\{w_1,w_2,w_3\}\subseteq V(I_b^\ast)$. If there are two independent edges among $e_1,e_2,e_3$ then we can argue as in Case 2.1. Otherwise, we can assume w.l.o.g. that $w_1=w_2=w_3$ and $z_1,z_2,z_3$ occur in this order on $I_a$. Put $$|\xi_a\overrightarrow{C}z_1|=d_1, \ |z_1\overrightarrow{C}z_2|=d_2, \ |z_2\overrightarrow{C}z_3|=d_3,$$ $$|z_3\overrightarrow{C}\xi_{a+1}|=d_4, \ |\xi_b\overrightarrow{C}w_1|=d_5, \ |w_1\overrightarrow{C}\xi_{b+1}|=d_6,$$ $$C^{\prime}=\xi_a x\overrightarrow{P}y\xi_b\overleftarrow{C}z_1w_1\overrightarrow{C}\xi_a,$$ $$C^{\prime\prime}=\xi_a\overrightarrow{C}z_3w_1\overleftarrow{C}\xi_{a+1}x\overrightarrow{P}y\xi_{b+1}\overrightarrow{C}\xi_a.$$ Clearly, $$|C^{\prime}|=|C|-d_1-d_5+|\{e_1\}|+\overline{p}+2,$$ $$|C^{\prime\prime}|=|C|-d_4-d_6+|\{e_3\}|+\overline{p}+2.$$ Since $C$ is extreme, we have $|C|\ge |C^{\prime}|$ and $|C|\ge |C^{\prime\prime}|$, implying that $$d_1+d_5\ge \overline{p}+3, \ d_4+d_6\ge \overline{p}+3.$$ Hence, $$|I_a|+|I_b|= \sum_{i=1}^6d_i\ge d_1+d_4+d_5+d_6+2\ge2\overline{p}+8. \quad \quad \rule{7pt}{6pt}$$ **Proof of Lemma 3**. Put $$V(H)=\{v_1,...,v_h\}, \ |N(v_i)\cap S|=\beta_i \ \ (i=1,...,h).$$ Observing that $h\ge d(v_i)-\beta_i+1\ge \delta-\beta_i+1$ for each $i\in \{1,2,...,h\}$, we have $\beta_i\ge\delta-h+1$ $(i=1,2,...,h)$. Therefore, $$q_H=q(H)+\sum_{i=1}^h\beta_i=\frac{1}{2}\sum_{i=1}^hd_H(v_i)+\sum_{i=1}^h\beta_i$$ $$=\frac{1}{2}\sum_{i=1}^h(d_H(v_i)+\beta_i)+\frac{1}{2}\sum_{i=1}^h\beta_i =\frac{1}{2}\sum_{i=1}^hd(v_i)+\frac{1}{2}\sum_{i=1}^h(\delta-h+1)$$ $$\ge\frac{1}{2}h\delta+\frac{1}{2}h(\delta-h+1)=\frac{h(2\delta-h+1)}{2}. \quad \quad \rule{7pt}{6pt}$$ **Proof of Lemma 4**. Let $C$ be a longest cycle in $G$ and $P=x_1\overrightarrow{P}x_2$ a longest path in $G\backslash C$ of length $\overline{p}$. If $|V(P)|\le1$ then $C$ is a dominating cycle and we are done. Let $|V(P)|\ge2$, that is $\overline{p}\ge1$. 
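Before the case analysis, the counting in Lemma 3 (which is applied repeatedly below with small $h_i$) can be illustrated on a concrete graph. The following minimal sketch in plain Python (the vertex labels are mine, chosen for illustration) takes $G=K_2+3K_2$, so $\delta=3$, with $S$ the two join vertices and $H$ one $K_2$ component of $G\backslash S$; the bound of Lemma 3 then holds with equality:

```python
from itertools import chain

# G = K_2 + 3K_2: the join of K_2 with three disjoint copies of K_2,
# so delta = 3.  S is a cut set; H is one component of G \ S.
S = ['s1', 's2']
parts = [['a1', 'a2'], ['b1', 'b2'], ['c1', 'c2']]

edges = {frozenset(S)}                       # the edge inside the joined K_2
for part in parts:
    edges.add(frozenset(part))               # the edge inside each copy
    edges |= {frozenset((s, v)) for s in S for v in part}   # join edges

all_vertices = S + list(chain.from_iterable(parts))
delta = min(sum(v in e for e in edges) for v in all_vertices)

H = parts[0]                                 # one component of G \ S, order h = 2
h = len(H)
q_H = sum(bool(e & set(H)) for e in edges)   # edges with an endpoint in V(H)

assert delta == 3
assert q_H == h * (2 * delta - h + 1) // 2 == 5   # Lemma 3 with equality
```

With $\delta=3$ and $h=2$ the lemma gives $q_H\ge2\cdot5/2=5$, which is exactly the count used in Case 2.1 below.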
By the hypothesis, $$|C|+\overline{p}+1\le n\le3\delta+2. \eqno{(1)}$$ Let $\xi_1,\xi_2,...,\xi_s$ be the elements of $N_C(x_1)\cup N_C(x_2)$ occurring on $C$ in consecutive order. Put $$I_i=\xi_i\overrightarrow{C}\xi_{i+1}, \ I_i^\ast=\xi_i^+\overrightarrow{C}\xi_{i+1}^- \ \ (i=1,2,...,s),$$ where $\xi_{s+1}=\xi_1.$ Let $Q$ be a longest path in $G$ with $Q=\xi\overrightarrow{Q}\eta$ and $V(Q)\cap V(C)=\{\xi,\eta\}$. Since $C$ is extreme, we have $|\xi\overrightarrow{C}\eta|\ge|Q|$ and $|\eta\overrightarrow{C}\xi|\ge|Q|$, implying that $$|C|=|\xi\overrightarrow{C}\eta|+|\eta\overrightarrow{C}\xi|\ge2|Q|. \eqno{(2)}$$ **Case 1**. $\delta=2$. Since $\kappa\ge2$ and $\overline{p}\ge1$, we have $|Q|\ge3$. By (2), $$|C|=|\xi\overrightarrow{C}\eta|+|\eta\overrightarrow{C}\xi|\ge2|Q|\ge6,$$ implying that $q\ge|C|+|Q|\ge9$.\
**Case 2**. $\delta=3$. If $n\ge10$ then $$q\ge \frac{n\delta}{2}\ge15=\frac{3(\delta-1)(\delta+2)}{2}.$$ Let $$n\le9. \eqno{(3)}$$ **Case 2.1**. $\overline{p}=1$. By (1) and (3), $$|C|\le n-\overline{p}-1\le 7. \eqno{(4)}$$ Since $\overline{p}=1$ and $\delta=3$, we have $|N_C(x_i)|\ge2$ $(i=1,2)$. If $N_C(x_1)\not=N_C(x_2)$ then by Lemma 1, $|C|\ge 3\delta=9$, contradicting (4). Let $N_C(x_1)=N_C(x_2)$. Further, since $C$ is extreme and $\overline{p}=1$, we have $|I_i|\ge3$ $(i=1,2,...,s)$. If $s\ge3$ then $|C|=\sum_{i=1}^s|I_i|\ge3s\ge9$, contradicting (4). Let $s=2$. If $M(I_1,I_2)\not=\emptyset$ then by Lemma 2, $|C|=|I_1|+|I_2|\ge 2\overline{p}+6=8$, contradicting (4). Thus, $M(I_1,I_2)=\emptyset$, implying that $G\backslash \{\xi_1,\xi_2\}$ is disconnected. Let $H_1,H_2,...,H_t$ be the connected components of $G\backslash \{\xi_1,\xi_2\}$. Clearly, $t\ge3$. Put $$h_i=|V(H_i)|, \ q_i=|\{xy\in E(G) : \{x,y\}\cap V(H_i)\not=\emptyset\}| \ (i=1,2,...,t). \eqno{(5)}$$ Assume w.l.o.g. that $V(I_i^\ast)\subseteq V(H_i)$ $(i=1,2)$ and $V(H_3)=\{x_1,x_2\}$. It means that $h_i\ge2$ $(i=1,2,3)$.
If $h_i\ge4$ for some $i\in\{1,2\}$ then $$|C|\ge h_1+h_2+|\{\xi_1,\xi_2\}|\ge 8,$$ contradicting (4). Let $2\le h_i\le3$ $(i=1,2,3)$. By Lemma 3, $$q_i\ge\frac{h_i(2\delta-h_i+1)}{2}=\frac{h_i(7-h_i)}{2}\ge5 \quad (i=1,2,3).$$ Hence $$q\ge\sum_{i=1}^3q_i\ge15=\frac{3(\delta-1)(\delta+2)}{2}.$$ **Case 2.2**. $\overline{p}\ge2$. By (1) and (3), $|C|\le n-\overline{p}-1\le 6$.\ **Case 2.2.1**. There is a cycle in $G\backslash C$. Let $C^\prime$ be a cycle in $G\backslash C$. Since $\kappa\ge2$, there are two disjoint paths connecting $C^\prime$ and $C$, implying that $|Q|\ge4$. By (2), $|C|\ge 2|Q|\ge8$, contradicting (4).\ **Case 2.2.2**. $G\backslash C$ is acyclic. It follows that $$|N_C(x_i)|\ge |N(x_i)|-1\ge\delta-1=2 \ \ (i=1,2).$$ Hence $|Q|\ge\overline{p}+2\ge4$. By (2), $|C|\ge2|Q|\ge8$, contradicting (4).\ **Case 3**. $\delta=4$. If $n\ge14$ then $$q\ge \frac{n\delta}{2}\ge28>\frac{3(\delta-1)(\delta+2)}{2}.$$ Let $$n\le13. \eqno{(6)}$$ **Case 3.1**. $\overline{p}=1$. By (1) and (6), $$|C|\le n-\overline{p}-1\le11. \eqno{(7)}$$ Since $\overline{p}=1$ and $\delta=4$, we have $|N_C(x_i)|\ge3$ $(i=1,2)$. If $N_C(x_1)\not=N_C(x_2)$ then by Lemma 1, $|C|\ge3\delta=12$, contradicting (7). Let $N_C(x_1)=N_C(x_2)$. Further, since $C$ is extreme and $\overline{p}=1$, we have $|I_i|\ge3$ $(i=1,...,s)$. If $s\ge4$ then $|C|\ge3s\ge12$, contradicting (7). Thus $s=3$.\ **Case 3.1.1**. $M(I_1,I_2,I_3)=\emptyset$. It follows that $G\backslash \{\xi_1,\xi_2,\xi_3\}$ is disconnected. Let $H_1,H_2,...,H_t$ be the connected components of $G\backslash \{\xi_1,\xi_2,\xi_3\}$. Clearly, $t\ge4$. Assume w.l.o.g. that $V(I_i^\ast)\subseteq V(H_i)$ $(i=1,2,3)$ and $V(H_4)=\{x_1,x_2\}$. Using notation (5), we have $h_i\ge2$ $(i=1,2,3)$ and $h_4=2$. If $h_i\ge5$ for some $i\in \{1,2,3\}$ then clearly $$|C|\ge\sum_{i=1}^3h_i+|\{\xi_1,\xi_2,\xi_3\}|\ge12,$$ contradicting (7). Let $2\le h_i\le4$ $(i=1,2,3)$. 
By Lemma 3, $$q_i\ge \frac{h_i(2\delta-h_i+1)}{2}=\frac{h_i(9-h_i)}{2}\ge7 \quad (i=1,2,3,4).$$ So, $$q\ge\sum_{i=1}^4q_i\ge 28>\frac{3(\delta-1)(\delta+2)}{2}.$$ **Case 3.1.2**. $M(I_1,I_2,I_3)\not=\emptyset$. Assume w.l.o.g. that $M(I_1,I_2)\not=\emptyset$, i.e. there is an intermediate path $L$ between $I_1$ and $I_2$. By Lemma 2, $$|I_1|+|I_2|\ge2\overline{p}+2|L|+4=2|L|+6.$$ If $|L|\ge2$ then $|I_1|+|I_2|\ge10$ and hence $|C|=|I_1|+|I_2|+|I_3|\ge13$, contradicting (7). Otherwise, $|L|=1$, implying that $M(I_1,I_2,I_3)\subseteq E(G)$. If $|M(I_1,I_2)|\ge2$ then by Lemma 2, $|I_1|+|I_2|\ge2\overline{p}+7=9$ and $|C|=\sum_{i=1}^3|I_i|\ge12$, contradicting (7). So, $|M(I_1,I_2)|=1$. By Lemma 2, $|I_1|+|I_2|\ge2\overline{p}+6=8$. Since $|I_3|\ge3$, we have $|C|=\sum_{i=1}^3|I_i|\ge11$. By (1), $n\ge|C|+\overline{p}+1\ge13$. Combining $n\ge13$ and $|C|\ge11$ with (6) and (7), we get $$n=13, \ |C|=11, \ |I_1|+|I_2|=8, \ |I_3|=3, \ V(G)=V(C\cup P). \eqno{(8)}$$ Since $|I_1|+|I_2|=8$ and $|I_i|\ge3$ $(i=1,2)$, we can assume w.l.o.g. that either $|I_1|=|I_2|=4$ or $|I_1|=3$, $|I_2|=5$. If $|I_1|=|I_2|=4$ then by Lemma 2, $M(I_1,I_3)=M(I_2,I_3)=\emptyset$, implying that $|M(I_1,I_2,I_3)|=1$. Further, if $|I_1|=3$ and $|I_2|=5$ then by Lemma 2, $M(I_1,I_3)=\emptyset$ and $|M(I_2,I_3)|\le1$, implying that $|M(I_1,I_2,I_3)|\le2$. So, in any case, $$1\le |M(I_1,I_2,I_3)|\le2. \eqno{(9)}$$ Let $e\in M(I_1,I_2,I_3)$ and $e=zw$. Put $G^\prime=G\backslash e$. Form a graph $G^{\prime\prime}$ in the following way. If $d(z)\ge\delta=4$ and $d(w)\ge\delta=4$ in $G^\prime$ then we take $G^{\prime\prime}=G^\prime$. Next, we let $d(z)=\delta-1=3$ and $d(w)\ge\delta=4$ in $G^\prime$. If $\{\xi_1,\xi_2,\xi_3\}\subseteq N(z)$ then clearly $d(z)\ge4$ in $G^\prime$, contradicting the hypothesis. Otherwise, $zv\not\in E(G^\prime)$ for some $v\in \{\xi_1,\xi_2,\xi_3\}$ and we take $G^{\prime\prime}=G^\prime+ \{zv\}$. 
Finally, if $d(z)=d(w)=3$ then as above, $zv\not\in E(G^\prime)$ and $wu\not\in E(G^\prime)$ for some $v,u\in \{\xi_1,\xi_2,\xi_3\}$ and we take $G^{\prime\prime}=G^\prime+ \{zv,wu\}$. Clearly, $\delta(G^{\prime\prime})=\delta(G)=4$ and $q=q(G)\ge q(G^{\prime\prime})-1$. Furthermore, deleting step by step all edges from $M(I_1,I_2,I_3)$ and adding at most two appropriate new edges for each deleted edge, we can form a graph $G^\ast$ with $\delta(G^\ast)=\delta(G)=4$ and $q(G)\ge q(G^\ast)-|M(I_1,I_2,I_3)|$. By (9), $q(G)\ge q(G^\ast)-2$. In fact, $G^\ast= (G\backslash M(I_1,I_2,I_3))+ E^\ast$, where $E^\ast$ consists of at most $2|M(I_1,I_2,I_3)|$ appropriate new edges having exactly one end in common with $\{\xi_1,\xi_2,\xi_3\}$, implying that $G^\ast\backslash \{\xi_1,\xi_2,\xi_3\}$ is disconnected. Let $H_1,H_2,H_3,H_4$ be the connected components of $G^\ast\backslash \{\xi_1,\xi_2,\xi_3\}$ with $V(H_i)=V(I_i^\ast)$ $(i=1,2,3)$ and $V(H_4)=\{x_1,x_2\}$. Put $$h_i=|V(H_i)|, \ q_i=|\{xy\in E(G^\ast) : \{x,y\}\cap V(H_i)\not=\emptyset\}| \ \ (i=1,2,3,4).$$ Since $|I_1|+|I_2|=8$, we have either $|I_1|\ge4$ or $|I_2|\ge4$. Assume w.l.o.g. that $|I_1|\ge4$, that is $h_1\ge3$. As in Case 3.1.1, $2\le h_i\le4$ $(i=1,2,3,4)$. By Lemma 3, $$q_1(G^\ast)\ge\frac{h_1(2\delta-h_1+1)}{2}=\frac{h_1(9-h_1)}{2}\ge9,$$ $$q_i(G^\ast)\ge\frac{h_i(2\delta-h_i+1)}{2}=\frac{h_i(9-h_i)}{2}\ge7 \quad (i=2,3,4).$$ Hence $q(G^\ast)\ge \sum_{i=1}^4q_i(G^\ast)\ge30$, implying that $$q(G)\ge q(G^\ast)-2\ge28>\frac{3(\delta-1)(\delta+2)}{2}.$$ **Case 3.2**. $\overline{p}=2$. Put $P=x_1x_3x_2$. By (1) and (6), $$|C|\le n-\overline{p}-1\le10. \eqno{(10)}$$ Since $\delta=4$ and $\overline{p}=2$, we have $|N_C(x_i)|\ge2$ $(i=1,2)$. If $N_C(x_1)\not=N_C(x_2)$ then by Lemma 1, $|C|\ge4\delta-2\overline{p}=12$, contradicting (10). Let $N_C(x_1)=N_C(x_2)$. Recalling that $C$ is extreme and $\overline{p}=2$, we conclude that $|I_i|\ge4$ $(i=1,2,...,s)$.
If $s\ge3$ then $|C|\ge4s\ge12$, contradicting (10). Let $s=2$, implying that $x_1x_2\in E(G)$. By symmetric arguments, we can state that $N_C(x_1)=N_C(x_2)=N_C(x_3)$.\ **Case 3.2.1**. $M(I_1,I_2)=\emptyset$. Let $H_1,H_2,...,H_t$ be the connected components of $G\backslash \{\xi_1,\xi_2\}$. Assume w.l.o.g. that $V(I_i^\ast)\subseteq V(H_i)$ $(i=1,2)$ and $V(H_3)=\{x_1,x_2,x_3\}$. Using notation (5), we have $h_i\ge3$ $(i=1,2,3)$. If $h_i\ge6$ for some $i\in \{1,2\}$, then $|C|\ge h_1+h_2+|\{\xi_1,\xi_2\}|\ge11$, contradicting (10). Let $3\le h_i\le5$ $(i=1,2,3)$. By Lemma 3, $$q_i\ge\frac{h_i(2\delta-h_i+1)}{2}=\frac{h_i(9-h_i)}{2}\ge9 \quad (i=1,2,3).$$ Hence $$q\ge \sum_{i=1}^3q_i\ge 27=\frac{3(\delta-1)(\delta+2)}{2}.$$ **Case 3.2.2**. $M(I_1,I_2)\not=\emptyset$. By the definition, there is an intermediate path $L$ between $I_1$ and $I_2$. By Lemma 2, $$|I_1|+|I_2|\ge2\overline{p}+2|L|+4=2|L|+8.$$ If $|L|\ge2$ then $|C|=|I_1|+|I_2|\ge12$, contradicting (10). Otherwise, $|L|=1$ and therefore, $M(I_1,I_2)\subseteq E(G)$. If $|M(I_1,I_2)|\ge2$ then by Lemma 2, $$|C|=|I_1|+|I_2|\ge 2\overline{p}+7=11,$$ contradicting (10). Now let $|M(I_1,I_2)|=1$, i.e. $M(I_1,I_2)$ consists of a single edge $e$. By Lemma 2, $$|C|=|I_1|+|I_2|\ge 2\overline{p}+6=10,$$ and by (1), $n\ge |C|+\overline{p}+1\ge13$. Combining $n\ge13$ and $|C|\ge10$ with (6) and (10), we get $$|C|=|I_1|+|I_2|=10, \ n=13, \ V(G)=V(C\cup P). \eqno{(11)}$$ Put $G^\prime =G\backslash e$ and let $H_1,H_2,H_3$ be the connected components of $G^\prime\backslash \{\xi_1,\xi_2\}$ with $V(H_i)=V(I_i^\ast)$ $(i=1,2)$ and $V(H_3)=V(P)$. Since $|I_1|+|I_2|=10$, we can assume w.l.o.g. that $|I_1|\ge5$. Using notation (5) for $G^\prime$, we have $h_1\ge|I_1|-1\ge4$ and $h_i\ge3$ $(i=2,3)$. If $h_i\ge6$ for some $i\in \{1,2\}$ then $|C|\ge h_1+h_2+|\{\xi_1,\xi_2\}|\ge11$, contradicting (11). Let $4\le h_1\le5$ and $3\le h_i\le5$ $(i=2,3)$. If $\delta(G^\prime)=\delta(G)$ then we can argue as in Case 3.2.1. 
Otherwise, as in Case 3.1.2, we can form a graph $G^\ast$ by adding at most two new edges in $G^\prime$ such that $\delta(G^\ast)=\delta(G)$ and $G^\ast\backslash \{\xi_1,\xi_2\}$ has exactly three connected components. Recalling that $4\le h_1\le 5$ and using Lemma 3, we get $$q_1(G^\ast)\ge\frac{h_1(2\delta-h_1+1)}{2}=\frac{h_1(9-h_1)}{2}=10,$$ $$q_i(G^\ast)\ge\frac{h_i(2\delta-h_i+1)}{2}=\frac{h_i(9-h_i)}{2}\ge9 \quad (i=2,3).$$ So, $q(G^\ast)\ge\sum_{i=1}^3q_i(G^\ast)\ge28$, implying that $$q(G)\ge q(G^\ast)-1\ge27=\frac{3(\delta-1)(\delta+2)}{2}.$$ **Case 3.3**. $\overline{p}=3$. By (1) and (6), $$|C|\le n-\overline{p}-1\le9. \eqno{(12)}$$ Since $\delta=4$ and $\overline{p}=3$, we have $|N_C(x_i)|\ge1$ $(i=1,2)$. If $|N_C(x_i)|\ge2$ for some $i\in \{1,2\}$ then $|Q|\ge \overline{p}+2=5$ and by (2), $|C|\ge2|Q|\ge10$, contradicting (12). Let $|N_C(x_i)|=1$ $(i=1,2)$. If $N_C(x_1)\not=N_C(x_2)$ then again $|Q|\ge5$ and $|C|\ge10$, contradicting (12). Thus, $N_C(x_1)=N_C(x_2)$. It follows that $G[V(P)]$ is complete. Since $\kappa\ge2$, there are two disjoint paths connecting $G[V(P)]$ and $C$, implying that $|Q|\ge5$ and $|C|\ge10$, contradicting (12).\
**Case 3.4**. $\overline{p}=4$. Put $P=x_1x_3x_4x_5x_2$. By (1) and (6), $$|C|\le n-\overline{p}-1\le8. \eqno{(13)}$$ **Case 3.4.1**. $x_1x_2\in E(G)$. Put $C^\prime =x_1x_2x_5x_4x_3x_1$. Since $\kappa\ge2$, there are two disjoint paths connecting $C^\prime$ and $C$. Since $|C^\prime|=5$, we have $|Q|\ge5$ and by (2), $|C|\ge2|Q|\ge10$, contradicting (13).\
**Case 3.4.2**. $x_1x_2\not\in E(G)$. As in Case 3.3, it can be shown that $$N_C(x_1)=N_C(x_2), \ \ |N_C(x_1)|=|N_C(x_2)|=1.$$ Since $\delta=4$, we have $\{x_1x_4,x_1x_5,x_2x_3,x_2x_4\}\subset E(G)$. Hence, $x_1x_4x_5x_2x_3x_1$ is a Hamilton cycle in $G[V(P)]$ and we can argue as in Case 3.4.1.\
**Case 3.5**. $\overline{p}\ge5$. If $|C|=n$ then $C$ is a dominating cycle. Otherwise, by Theorem D, $|C|\ge2\delta=8$.
On the other hand, by (1) and (6), $|C|\le n-\overline{p}-1\le7$, a contradiction.\
**Case 4**. $\delta\ge5$. If $C$ is a Hamilton cycle then we are done. Otherwise, by Theorem D, $$|C|\ge2\delta. \eqno{(14)}$$ By (1), $\overline{p}\le 3\delta-|C|+1\le\delta+1$. So, $$1\le\overline{p}\le\delta+1. \eqno{(15)}$$ We distinguish two main cases, namely $1\le\overline{p}\le \delta-3$ and $\delta-2\le\overline{p}\le\delta+1$.\
**Case 4.1**. $1\le\overline{p}\le \delta-3$. It follows that $$(\overline{p}+2)(\delta-\overline{p})=(\overline{p}-1)(\delta-\overline{p}-3)+3\delta-3\ge3\delta-3.$$ By Theorem E, $$|C|\ge3\delta-3. \eqno{(16)}$$ By (1) and (16), $\overline{p}\le 3\delta-|C|+1\le4$. So, $$1\le\overline{p}\le4. \eqno{(17)}$$ **Case 4.1.1**. $\overline{p}=1$. It follows that $|N_C(x_i)|\ge\delta-1>2$ $(i=1,2)$. By (1), $|C|\le 3\delta+1-\overline{p}=3\delta$. Combining this with (16), we have $$3\delta-3\le |C|\le3\delta.$$ **Case 4.1.1.1**. $3\delta-3\le|C|\le3\delta-1$. If $N_C(x_1)\not=N_C(x_2)$ then by Lemma 1, $|C|\ge 3\delta$, contradicting the hypothesis. Let $N_C(x_1)=N_C(x_2)$. Since $C$ is extreme and $\overline{p}=1$, we have $|I_i|\ge3$ $(i=1,...,s)$. If $s\ge \delta$ then $|C|\ge3s\ge3\delta$, again contradicting the hypothesis. Let $s\le\delta-1$. On the other hand, $s=|N_C(x_1)|=d(x_1)-1\ge\delta-1$, implying that $s=\delta-1$.\
**Claim 1**. $M(I_1,I_2,...,I_s)\subseteq E(G)$ and $|M(I_1,I_2,...,I_s)|\le\delta-2$. Assume first that $3\delta-3\le|C|\le3\delta-2$. If $M(I_a,I_b)\not=\emptyset$ for some two elementary segments $I_a$ and $I_b$ then by Lemma 2, $|I_a|+|I_b|\ge2\overline{p}+6=8$, implying that $|C|\ge 3\delta-1$, a contradiction. Otherwise, $|M(I_1,I_2,...,I_s)|=0<\delta-2$. Now let $|C|=3\delta-1$. If $M(I_1,I_2,...,I_s)=\emptyset$ then we are done. Let $M(I_1,I_2,...,I_s)\not=\emptyset$, i.e. $M(I_a,I_b)\not=\emptyset$ for some elementary segments $I_a$ and $I_b$. By the definition, there is an intermediate path $L$ between $I_a$ and $I_b$.
If $|L|\ge2$ then by Lemma 2, $|I_a|+|I_b|\ge2\overline{p}+2|L|+4=10$, implying that $|C|\ge 3\delta$, a contradiction. Otherwise, $M(I_1,I_2,...,I_s)\subseteq E(G)$ and by Lemma 2, $|I_a|+|I_b|\ge2\overline{p}+6=8$, i.e. $|C|\ge3\delta-1$. Recalling that $|C|=3\delta-1$, we can state that $$|I_a|+|I_b|=8 \ \mbox{and} \ |I_i|=3 \ \mbox{for each} \ i\in \{1,2,...,s\}\backslash \{a,b\}.$$ If $|I_a|=|I_b|=4$ then by Lemma 2, $M(I_i,I_j)=\emptyset$ if $\{i,j\}\not=\{a,b\}$, i.e. $|M(I_1,I_2,...,I_s)|=1<\delta-2$. Otherwise, assume w.l.o.g. that $|I_a|=5$ and $|I_b|=3$, i.e. $|I_a|=5$ and $|I_i|=3$ for each $i\in \{1,2,...,s\}\backslash \{a\}$. As above, $|M(I_a,I_i)|\le1$ for each $i\in \{1,2,...,s\}\backslash \{a\}$. Observing also that $M(I_i,I_j)=\emptyset$ for each distinct $i,j$ if $a\not\in \{i,j\}$, we conclude that $|M(I_1,I_2,...,I_s)|\le s-1= \delta-2$. Claim 1 is proved. $\Delta$\ Put $G^\prime=G\backslash M(I_1,I_2,...,I_s)$. As in Case 3.1.2, we can form a graph $G^\ast$ by adding at most $2|M(I_1,I_2,...,I_s)|$ new edges in $G^\prime$ such that $\delta(G^\ast)=\delta(G)$, $G^\ast\backslash \{\xi_1,\xi_2,...,\xi_s\}$ is disconnected and $q(G)\ge q(G^\ast)-|M(I_1,I_2,...,I_s)|$. By Claim 1, $$q(G)\ge q(G^\ast)-\delta+2. \eqno{(18)}$$ Let $H_1,H_2,...,H_{s+1}$ be the connected components of $G^\ast\backslash \{\xi_1,\xi_2,...,\xi_s\}$ with $V(I_i^\ast)\subseteq V(H_i)$ $(i=1,2,...,s)$ and $V(H_{s+1})=\{x_1,x_2\}$. Using notation (5) for $G^\ast$, we have $h_i\ge2$ $(i=1,2,...,s+1)$. If $h_i\ge6$ for some $i\in \{1,2,...,s\}$ then $n\ge3\delta+3$, contradicting (1). Let $2\le h_i\le 5<2\delta-1$ $(i=1,2,...,s+1)$. 
It follows that $(h_i-2)(2\delta-h_i-1)\ge0$ which is equivalent to $$\frac{h_i(2\delta-h_i+1)}{2}\ge2\delta-1 \quad (i=1,2,...,s+1).$$ By Lemma 3, $q_i(G^\ast)\ge2\delta-1$ $(i=1,2,...,s+1)$, implying that $$q(G^\ast)\ge\sum_{i=1}^{s+1}q_i(G^\ast)\ge(s+1)(2\delta-1)=\delta(2\delta-1).$$ By (18), $$q\ge q(G^\ast)-\delta+2\ge2(\delta^2-\delta+1)\ge\frac{3(\delta-1)(\delta+2)}{2}.$$ **Case 4.1.1.2**. $|C|=3\delta$. **Case 4.1.1.2.1**. $N_C(x_1)\not= N_C(x_2)$. It follows that $\max \{\sigma_1,\sigma_2\}\ge1$, where $$\sigma_1=|N_C(x_1)\backslash N_C(x_2)|, \quad \sigma_2=|N_C(x_2)\backslash N_C(x_1)|.$$ If $\max \{\sigma_1,\sigma_2\}\ge2$ then by Lemma 1, $|C|\ge3\delta+1$, contradicting the hypothesis. Let $\max \{\sigma_1,\sigma_2\}=1$. Clearly $s\ge\delta$ and $|I_i|\ge3$ $(i=1,2,...,s)$. If $s\ge \delta+1$ then $|C|\ge3s\ge3\delta+3$, a contradiction. Let $s=\delta$, implying that $|I_i|=3$ $(i=1,2,...,s)$. By Lemma 2, $M(I_1,I_2,...,I_s)=\emptyset$. Let $H_1,H_2,...,H_{s+1}$ be the connected components of $G\backslash \{\xi_1,\xi_2,...,\xi_s\}$ with $V(H_i)=V(I^\ast_i)$ $(i=1,2,...,s)$ and $V(H_{s+1})=\{x_1,x_2\}$. Using notation (5), we have $h_i=2$ $(i=1,2,...,s+1)$. By Lemma 3, $$q_i\ge\frac{h_i(2\delta-h_i+1)}{2}=2\delta-1 \quad (i=1,2,...,s+1),$$ implying that $$q\ge \sum^{s+1}_{i=1}q_i\ge (s+1)(2\delta-1)=(\delta+1)(2\delta-1)>\frac{3(\delta-1)(\delta+2)}{2}.$$ **Case 4.1.1.2.2**. $N_C(x_1)= N_C(x_2)$. Clearly, $s\ge\delta-1$. If $s\ge\delta$ then we can argue as in Case 4.1.1.2.1. Let $s=\delta-1$. If $|I_i|+|I_j|\ge10$ for some distinct $i,j\in \{1,2,...,s\}$ then $|C|\ge 10+3(s-2)=3\delta+1$, contradicting the hypothesis. Hence $$|I_i|+|I_j|\le9 \quad \mbox{for each distinct} \quad i,j\in \{1,2,...,s\}. \eqno{(19)}$$ **Claim 2**.
$M(I_1,I_2,...,I_s)\subseteq E(G)$ and $(\ast1)$ if $\max_i|I_i|\le4$ then $|M(I_1,I_2,...,I_s)|\le3$, $(\ast2)$ if $\max_i|I_i|=5$ then $|M(I_1,I_2,...,I_s)|\le\delta-1$, $(\ast3)$ if $\max_i|I_i|=6$ then $|M(I_1,I_2,...,I_s)|\le2(\delta-2)$. **Proof**. If $M(I_1,I_2,...,I_s)=\emptyset$ then we are done. Otherwise, $M(I_a,I_b)\not=\emptyset$ for some distinct $a,b\in \{1,2,...,s\}$. By the definition, there is an intermediate path $L$ between $I_a$ and $I_b$. If $|L|\ge2$ then by Lemma 2, $$|I_a|+|I_b|\ge2\overline{p}+2|L|+4\ge10,$$ contradicting (19). Otherwise, $|L|=1$ and $M(I_1,I_2,...,I_s)\subseteq E(G)$. By Lemma 2, $|I_a|+|I_b|\ge2\overline{p}+6=8$. Combining this with (19), we have $$8\le |I_a|+|I_b|\le9. \eqno{(20)}$$ Furthermore, if $|M(I_a,I_b)|\ge3$ then by Lemma 2, $|I_a|+|I_b|\ge2\overline{p}+8=10$, contradicting (20). So, $$|M(I_i,I_j)|\le2 \ \mbox{for each distinct} \ i,j\in \{1,2,...,s\}.$$ Put $r=|\{i\mid |I_i|\ge4\}|$. If $r\ge4$ then $|C|\ge3(s-4)+16=3\delta+1$, contradicting the hypothesis. Further, if $r=0$ then by Lemma 2, $M(I_1,I_2,...,I_s)=\emptyset$. Let $1\le r\le3$.\ **Case a1**. $r=3$. It follows that $|I_{a_i}|\ge4$ $(i=1,2,3)$ for some distinct $a_1,a_2,a_3\in\{1,2,...,s\}$ and $|I_i|=3$ for each $i\in \{1,2,...,s\}\backslash \{a_1,a_2,a_3\}$. Since $s=\delta-1$ and $|C|=3\delta$, we have $|I_{a_1}|=|I_{a_2}|=|I_{a_3}|=4$, i.e. $\max_i|I_i|=4$. By Lemma 2, $|M(I_{a_i},I_{a_j})|\le1$ for each distinct $i,j\in\{1,2,3\}$. Moreover, we have $|M(I_i,I_j)|=0$ if either $i\not\in \{a_1,a_2,a_3\}$ or $j\not\in \{a_1,a_2,a_3\}$. So, $|M(I_1,I_2,...,I_s)|\le3$.\ **Case a2**. $r=2$. It follows that $|I_a|\ge4$ and $|I_b|\ge4$ for some $a,b\in \{1,2,...,s\}$ and $|I_i|=3$ for each $i\in \{1,2,...,s\}\backslash \{a,b\}$. By (20), we can assume w.l.o.g. that either $|I_a|=|I_b|=4$ or $|I_a|=5$, $|I_b|=4$.\ **Case a2.1**. $|I_a|=|I_b|=4$. It follows that $\max_i |I_i|=4$. 
By Lemma 2, $|M(I_a,I_b)|\le1$ and $M(I_i,I_j)=\emptyset$ if $\{i,j\}\not=\{a,b\}$, implying that $|M(I_1,I_2,...,I_s)|\le1$.\ **Case a2.2**. $|I_a|=5$, $|I_b|=4$. It follows that $\max_i |I_i|=5$. By Lemma 2, we have $|M(I_a,I_b)|\le2$ and $|M(I_a,I_i)|\le1$ for each $i\in \{1,2,...,s\}\backslash \{a,b\}$ and $M(I_i,I_j)=\emptyset$ if $a\not\in\{i,j\}$. Thus, $|M(I_1,I_2,...,I_s)|\le\delta-1$.\ **Case a3**. $r=1$. It follows that $|I_a|\ge4$ for some $a\in \{1,2,...,s\}$ and $|I_i|=3$ for each $i\in \{1,2,...,s\}\backslash \{a\}$. By (20), $4\le|I_a|\le6$.\ **Case a3.1**. $|I_a|=4$. It follows that $\max_i |I_i|=4$. By Lemma 2, $M(I_a,I_i)=\emptyset$ for each $i\in \{1,2,...,s\}\backslash \{a\}$, implying that $|M(I_1,I_2,...,I_s)|=0$.\ **Case a3.2**. $|I_a|=5$. It follows that $\max_i |I_i|=5$. By Lemma 2, $|M(I_a,I_i)|\le1$ for each $i\in \{1,2,...,s\}\backslash \{a\}$ and $M(I_i,I_j)=\emptyset$ if $a\not\in\{i,j\}$, that is $|M(I_1,I_2,...,I_s)|\le\delta-2$.\ **Case a3.3**. $|I_a|=6$. It follows that $\max_i |I_i|=6$. By Lemma 2, $|M(I_a,I_i)|\le2$ for each $i\in \{1,2,...,s\}\backslash \{a\}$ and $M(I_i,I_j)=\emptyset$ if $a\not\in\{i,j\}$, that is $|M(I_1,I_2,...,I_s)|\le2(\delta-2)$. Claim 2 is proved. $\Delta$\ Put $G^\prime=G\backslash M(I_1,I_2,...,I_s)$. As in Case 3.1.2, we can form a graph $G^\ast$ by adding in $G^\prime$ at most $2|M(I_1,I_2,...,I_s)|$ new edges such that $\delta(G^\ast)=\delta(G)$, $G^\ast\backslash \{\xi_1,\xi_2,...,\xi_s\}$ is disconnected and $$q(G)\ge q(G^\ast)-|M(I_1,I_2,...,I_s)|. \eqno{(21)}$$ Let $H_1,H_2,...,H_{s+1}$ be the connected components of $G^\ast\backslash \{\xi_1,\xi_2,...,\xi_s\}$ with $V(I_i^\ast)\subseteq V(H_i)$ $(i=1,2,...,s)$ and $V(H_{s+1})=\{x_1,x_2\}$. Using notation (5) for $G^\ast$, we have $h_i\ge2$ $(i=1,2,...,s+1)$. If $h_i\ge6$ for some $i\in \{1,2,...,s\}$ then $n\ge3\delta+3$, contradicting (1). Let $2\le h_i\le 5<2\delta-1$ $(i=1,2,...,s+1)$. 
It follows that $(h_i-2)(2\delta-h_i-1)\ge0$ which is equivalent to $$\frac{h_i(2\delta-h_i+1)}{2}\ge2\delta-1 \quad (i=1,2,...,s+1). \eqno{(22)}$$ **Case 4.1.1.2.2.1**. $\max_i |I_i|\le4$. By (22) and Lemma 3, $q_i(G^\ast)\ge2\delta-1$ $(i=1,2,...,s+1)$. Hence $$q(G^\ast)\ge\sum_{i=1}^{s+1}q_i(G^\ast)\ge(s+1)(2\delta-1)=\delta(2\delta-1).$$ Using (21) and Claim 2, we have $$q\ge q(G^\ast)-3\ge\delta(2\delta-1)-3\ge\frac{3(\delta-1)(\delta+2)}{2}.$$ **Case 4.1.1.2.2.2**. $\max_i |I_i|=5$. Assume w.l.o.g. that $\max_i |I_i|=|I_1|=5$, i.e. $4\le h_1\le5$. By (22) and Lemma 3, $q_i(G^\ast)\ge2\delta-1$ $(i=2,...,s+1)$ and $$q_1(G^\ast)\ge\frac{h_1(2\delta-h_1+1)}{2}\ge2(2\delta-3).$$ Hence $$q(G^\ast)\ge s(2\delta-1)+2(2\delta-3)=2\delta^2+\delta-5.$$ By (21) and Claim 2, $$q\ge q(G^\ast)-(\delta-1)\ge2\delta^2-4>\frac{3(\delta-1)(\delta+2)}{2}.$$ **Case 4.1.1.2.2.3**. $\max_i|I_i|=6$. Assume w.l.o.g. that $\max_i|I_i|=|I_1|=6$, that is $h_1=5$. By (22) and Lemma 3, $q_i(G^\ast)\ge 2\delta-1$ $(i=2,...,s+1)$ and $$q_1(G^\ast)\ge\frac{h_1(2\delta-h_1+1)}{2}=5(\delta-2).$$ Hence $$q(G^\ast)\ge s(2\delta-1)+5(\delta-2)=2\delta^2+2\delta-9.$$ By (21) and Claim 2, $$q\ge q(G^\ast)-2(\delta-2)\ge2\delta^2-5>\frac{3(\delta-1)(\delta+2)}{2}.$$ **Case 4.1.2**. $\overline{p}=2$. Put $P=x_1x_3x_2$. It follows that $|N_C(x_i)|\ge \delta-2>2$ $(i=1,2)$. By (1), $|C|\le3\delta+1-\overline{p}=3\delta-1$. Combining this with (16), we have $$3\delta-3\le |C|\le 3\delta-1. \eqno{(23)}$$ If $N_C(x_1)\not=N_C(x_2)$ then by Lemma 1, $|C|\ge4\delta-2\overline{p}=4\delta-4$. By (23), $4\delta-4\le |C|\le 3\delta-1$, a contradiction. Let $N_C(x_1)=N_C(x_2)$. Since $C$ is extreme, we have $|I_i|\ge4$ $(i=1,2,...,s)$. If $s\ge \delta-1$ then $|C|\ge 4s\ge 4\delta-4\ge3\delta$, contradicting (23). Hence $s\le \delta-2$. Recalling also that $s=|N_C(x_1)|\ge \delta-2$, we get $s=\delta-2$. It follows that $x_1x_2\in E(G)$. By a symmetric argument, $N_C(x_i)=N_C(x_1)$ $(i=2,3)$.\ **Claim 3**. 
$|M(I_1,I_2,...,I_s)|\le 1$. **Proof**. If $M(I_1,I_2,...,I_s)=\emptyset$ then we are done. Otherwise, $|M(I_a,I_b)|\ge1$ for some distinct $a,b\in\{1,2,...,s\}$, i.e. there is an intermediate path $L$ between $I_a$ and $I_b$. If $|L|\ge2$ then by Lemma 2, $$|I_a|+|I_b|\ge2\overline{p}+2|L|+4\ge12.$$ This yields $$|C|\ge12+4(\delta-4)=4\delta-4\ge3\delta+1,$$ contradicting (23). Otherwise, $|L|=1$ and $M(I_1,I_2,...,I_s)\subseteq E(G)$. By Lemma 2, $|I_a|+|I_b|\ge2\overline{p}+6=10$, implying that $|C|\ge10+4(\delta-4)=4\delta-6$. Combining this with (23), we get $4\delta-6\le |C|\le3\delta-1$, i.e. $\delta\le5$. Since $\delta\ge5$, we have $$\delta=5, \ s=3, \ |C|=3\delta-1=14, \ |I_a|+|I_b|=10,$$ $$|I_i|=4 \ \mbox{for each} \ i\in\{1,2,...,s\}\backslash \{a,b\}.$$ Assume w.l.o.g. that $a=1$ and $b=2$. By Lemma 2, $|M(I_1,I_2)|=1$ and $M(I_1,I_3)=M(I_2,I_3)=\emptyset$, i.e. $|M(I_1,I_2,...,I_s)|=1$. Claim 3 is proved. $\Delta$\ Put $G^\prime=G\backslash M(I_1,I_2,...,I_s)$. As in Case 3.1.2, form a graph $G^\ast$ by adding in $G^\prime$ at most $2|M(I_1,I_2,...,I_s)|$ appropriate new edges such that $\delta(G^\ast)=\delta(G)$, $G^\ast\backslash \{\xi_1,\xi_2,...,\xi_s\}$ is disconnected and $$q(G)\ge q(G^\ast)-|M(I_1,I_2,...,I_s)|.$$ Let $H_1,H_2,...,H_{s+1}$ be the connected components of $G^\ast\backslash \{\xi_1,\xi_2,...,\xi_s\}$ with $V(I_i^\ast)\subseteq V(H_i)$ $(i=1,2,...,s)$ and $V(H_{s+1})=\{x_1,x_2,x_3\}$. Using notation (5) for $G^\ast$, we have $h_i\ge3$ $(i=1,2,...,s+1)$. If $h_i\ge6$ for some $i\in \{1,2,...,s\}$ then $$|C|\ge6+3(s-1)+|\{\xi_1,\xi_2,...,\xi_s\}|=4\delta-5\ge3\delta,$$ contradicting (23). Let $3\le h_i\le 5$ $(i=1,2,...,s+1)$. By Lemma 3, $$q_i(G^\ast)\ge\frac{h_i(2\delta-h_i+1)}{2}\ge 3(\delta-1) \ \ (i=1,2,...,s+1),$$ implying that $$q(G^\ast)\ge\sum_{i=1}^{s+1}q_i(G^\ast)\ge3(s+1)(\delta-1)=3(\delta-1)^2.$$ By Claim 3, $$q\ge q(G^\ast)-1\ge3(\delta-1)^2-1>\frac{3(\delta-1)(\delta+2)}{2}.$$ **Case 4.1.3**. $\overline{p}=3$. 
Put $P=x_1x_4x_3x_2$. It follows that $|N_C(x_i)|\ge\delta-3\ge2$ $(i=1,2)$. By (1), $|C|\le3\delta+1-\overline{p}=3\delta-2$. Combining this with (16), we have $$3\delta-3\le |C|\le3\delta-2. \eqno{(24)}$$ If $N_C(x_1)\not= N_C(x_2)$ then by Lemma 1, $$|C|\ge4\delta-2\overline{p}=4\delta-6\ge3\delta-1,$$ contradicting (24). Let $N_C(x_1)= N_C(x_2)$. Clearly, $|I_i|\ge5$ $(i=1,2,...,s)$. If $s\ge \delta-2$ then $|C|\ge5s\ge5\delta-10>3\delta-1$, contradicting (24). Hence $s\le \delta-3$. Observing also that $s=|N_C(x_1)|\ge\delta-3$, we get $s=\delta-3$. It follows that $x_1x_2\in E(G)$. By symmetric arguments, $N_C(x_i)=N_C(x_1)$ $(i=2,3,4)$.\ **Claim 4**. $M(I_1,I_2,...,I_s)=\emptyset$. **Proof**. Assume to the contrary, i.e. $M(I_1,I_2,...,I_s)\not=\emptyset$. It means that $M(I_a,I_b)\not=\emptyset$ for some distinct $a,b\in\{1,2,...,s\}$. By Lemma 2, $$|I_a|+|I_b|\ge4\delta-2\overline{p}=4\delta-6,$$ implying that $|C|\ge(4\delta-6)+5(s-2)=9\delta-31$. Combining this with (24), we get $9\delta-31\le|C|\le 3\delta-2$, a contradiction. Claim 4 is proved. $\Delta$\ By Claim 4, $G\backslash \{\xi_1,\xi_2,...,\xi_s\}$ is disconnected. Let $H_1,H_2,...,H_{s+1}$ be the connected components of $G\backslash \{\xi_1,\xi_2,...,\xi_s\}$ with $V(I_i^\ast)\subseteq V(H_i)$ $(i=1,2,...,s)$ and $V(H_{s+1})=V(P)$. By notation (5), we have $h_i\ge4$ $(i=1,2,...,s+1)$. If $h_i\ge8$ for some $i\in\{1,2,...,s\}$ then $$|C|\ge 8+4(\delta-4)+s=5\delta-11\ge3\delta-1,$$ contradicting (24). Let $4\le h_i\le7$ $(i=1,2,...,s+1)$. By Lemma 3, $$q_i\ge\frac{h_i(2\delta-h_i+1)}{2}\ge2(2\delta-3) \quad (i=1,2,...,s+1).$$ Hence $$q\ge\sum_{i=1}^{s+1}q_i\ge2(s+1)(2\delta-3)$$ $$=2(\delta-2)(2\delta-3)\ge\frac{3(\delta-1)(\delta+2)}{2}.$$ **Case 4.1.4**. $\overline{p}=4$. Put $P=x_1x_5x_4x_3x_2$. By (1), $|C|\le3\delta+1-\overline{p}=3\delta-3$, and by (16), $|C|\ge3\delta-3$. It follows that $$|C|=3\delta-3, \ n=3\delta+2, \ V(G)=V(C\cup P). 
\eqno{(25)}$$ If $\delta\le6$ then $$q\ge\frac{n\delta}{2}=\frac{(3\delta+2)\delta}{2}\ge\frac{3(\delta-1)(\delta+2)}{2}.$$ Let $\delta\ge7$, implying that $|N_C(x_i)|>2$ $(i=1,2)$. If $N_C(x_1)\not= N_C(x_2)$ then by Lemma 1, $$|C|\ge4\delta-2\overline{p}=4\delta-8\ge3\delta-2,$$ contradicting (25). Let $N_C(x_1)= N_C(x_2)$. Clearly, $|I_i|\ge6$ $(i=1,2,...,s)$. If $s\ge \delta-3$ then $$|C|\ge(\overline{p}+2)s\ge6(\delta-3)\ge3\delta-2,$$ contradicting (25). Let $s\le\delta-4$. On the other hand, $s\ge|N(x_1)|-\overline{p}\ge\delta-4$, implying that $s=\delta-4$. It follows that $x_1x_2\in E(G)$. By symmetric arguments, $N_C(x_i)=N_C(x_1)$ $(i=2,3,4,5)$. If $M(I_a,I_b)\not=\emptyset$ for some distinct elementary segments $I_a,I_b$, then by Lemma 2, $$|I_a|+|I_b|\ge4\delta-2\overline{p}=4\delta-8.$$ Hence $$|C|\ge4\delta-8+6(s-2)=10\delta-44\ge3\delta-2,$$ contradicting (25). Otherwise, $M(I_1,I_2,...,I_s)=\emptyset$, i.e. $G\backslash \{\xi_1,\xi_2,...,\xi_s\}$ is disconnected. Let $H_1,H_2,...,H_{s+1}$ be the connected components of $G\backslash \{\xi_1,\xi_2,...,\xi_s\}$ with $V(I_i^\ast)\subseteq V(H_i)$ $(i=1,2,...,s)$ and $V(H_{s+1})=V(P)$. By notation (5), $h_i\ge5$ $(i=1,2,...,s+1)$. If $h_i\ge6$ for some $i\in \{1,2,...,s\}$ then $$|C|\ge6+5(s-1)+s=6\delta-23\ge3\delta-2,$$ contradicting (25). So, $h_i=5$ $(i=1,2,...,s+1)$. By Lemma 3, $$q_i\ge\frac{h_i(2\delta-h_i+1)}{2}=5(\delta-2) \ \ (i=1,2,...,s+1),$$ implying that $$q\ge\sum_{i=1}^{s+1}q_i\ge5(s+1)(\delta-2)=5(\delta-3)(\delta-2)>\frac{3(\delta-1)(\delta+2)}{2}.$$ **Case 4.2**. $\delta-2\le\overline{p}\le\delta+1$. **Case 4.2.1**. $\overline{p}=\delta-2$. It follows that $|N_C(x_i)|\ge2$ $(i=1,2)$. By (1), $$|C|\le 3\delta+1-\overline{p}=2\delta+3. \eqno{(26)}$$ If $N_C(x_1)\not=N_C(x_2)$ then by Lemma 1, $|C|\ge4\delta-2\overline{p}=2\delta+4$, contradicting (26). Let $N_C(x_1)=N_C(x_2)$. If $s\ge3$ then $|C|\ge s(\overline{p}+2)\ge3\delta\ge2\delta+4$, again contradicting (26). Let $s=2$. 
It follows that $x_1x_2\in E(G)$. By symmetric arguments, $N_C(y)=N_C(x_1)=\{\xi_1,\xi_2\}$ for each $y\in V(P)$. Clearly, $|I_i|\ge\overline{p}+2=\delta$ $(i=1,2)$.\ **Case 4.2.1.1**. $M(I_1,I_2)=\emptyset$. It follows that $G\backslash \{\xi_1,\xi_2\}$ is disconnected. Let $H_1,H_2,...,H_t$ be the connected components of $G\backslash \{\xi_1,\xi_2\}$ with $V(I_i^\ast)\subseteq V(H_i)$ $(i=1,2)$ and $V(P)\subset V(H_3)$. Since $G[V(P)]$ is hamiltonian, we have $V(H_3)=V(P)$. By notation (5), $h_i\ge|I_i|-1\ge\delta-1$ $(i=1,2)$ and $h_3=\delta-1$. If $h_i\ge\delta+3$ for some $i\in \{1,2\}$ then $$|C|\ge(\delta+3)+(\delta-1)+|\{\xi_1,\xi_2\}|=2\delta+4,$$ contradicting (26). So, $\delta-1\le h_i\le \delta+2$ $(i=1,2,3)$. By Lemma 3, $$q_i\ge\frac{h_i(2\delta-h_i+1)}{2}\ge \frac{(\delta-1)(\delta+2)}{2} \quad (i=1,2,3).$$ Hence, $$q\ge \sum_{i=1}^3q_i\ge \frac{3(\delta-1)(\delta+2)}{2}.$$ **Case 4.2.1.2**. $M(I_1,I_2)\not=\emptyset$. By the definition, there is an intermediate path $L$ between $I_1$ and $I_2$. If $|L|\ge2$ then by Lemma 2, $$|C|=|I_1|+|I_2|\ge2\overline{p}+2|L|+4\ge2\delta+4,$$ contradicting (26). Otherwise, $M(I_1,I_2)\subseteq E(G)$. Further, if $|M(I_1,I_2)|\ge3$ then by Lemma 2, $$|C|=|I_1|+|I_2|\ge2\overline{p}+8=2\delta+4,$$ contradicting (26). Thus $|M(I_1,I_2)|\le2$.\ **Case 4.2.1.2.1**. $|M(I_1,I_2)|=1$. Put $G^\prime=G\backslash M(I_1,I_2)$. As in Case 3.1.2, form a graph $G^\ast$ by adding at most two new edges in $G^\prime$ such that $\delta(G^\ast)=\delta(G)$, $G^\ast\backslash \{\xi_1,\xi_2\}$ is disconnected and $q(G)\ge q(G^\ast)-1$. Let $H_1,H_2,...,H_t$ be the connected components of $G^\ast\backslash \{\xi_1,\xi_2\}$ with $V(I_i^\ast)\subseteq V(H_i)$ $(i=1,2)$ and $V(P)= V(H_3)$. Using notation (5) for $G^\ast$, as in Case 4.2.1.1, we have $\delta-1\le h_i\le \delta+2$ $(i=1,2,3)$. By Lemma 2, $|I_1|+|I_2|\ge2\overline{p}+6=2\delta+2$. It means that $\max_i |I_i|\ge \delta+1$, i.e. $\max_i h_i\ge\delta$. Assume w.l.o.g. 
that $h_1\ge\delta$. By Lemma 3, $$q_1(G^\ast)\ge\frac{h_1(2\delta-h_1+1)}{2}\ge \frac{\delta(\delta+1)}{2},$$ $$q_i(G^\ast)\ge\frac{h_i(2\delta-h_i+1)}{2}\ge \frac{(\delta-1)(\delta+2)}{2} \quad (i=2,3),$$ implying that $$q(G^\ast)\ge\frac{\delta(\delta+1)}{2}+(\delta-1)(\delta+2).$$ Hence, $$q\ge q(G^\ast)-1\ge \frac{\delta(\delta+1)}{2}+(\delta-1)(\delta+2)-1\ge\frac{3(\delta-1)(\delta+2)}{2}.$$ **Case 4.2.1.2.2**. $|M(I_1,I_2)|=2$. By Lemma 2, $$|C|= |I_1|+|I_2|\ge2\overline{p}+7=2\delta+3.$$ By (26), $|C|=2\delta+3$ and $V(G)=V(P\cup C)$. Put $G^\prime=G\backslash M(I_1,I_2)$. As in Case 3.1.2, form a graph $G^\ast$ by adding at most four new edges in $G^\prime$ such that $\delta(G^\ast)=\delta(G)$, $G^\ast\backslash \{\xi_1,\xi_2\}$ is disconnected and $q(G)\ge q(G^\ast)-2$. Let $H_1,H_2,H_3$ be the connected components of $G^\ast\backslash \{\xi_1,\xi_2\}$ with $V(H_i)=V(I_i^\ast)$ $(i=1,2)$ and $V(H_3)=V(P)$. Using notation (5) for $G^\ast$, we have as in Case 4.2.1.1, $\delta-1\le h_i \le \delta+2$ $(i=1,2,3)$. Since $|I_i|\ge\delta$ and $|C|=|I_1|+|I_2|=2\delta+3$, we can assume w.l.o.g. that either $|I_1|=\delta+2$, $|I_2|=\delta+1$ or $|I_1|=\delta+3$, $|I_2|=\delta$.\ **Case 4.2.1.2.2.1**. $|I_1|=\delta+2$, $|I_2|=\delta+1$. It follows that $h_1=\delta+1$, $h_2=\delta$ and $h_3=\delta-1$. By Lemma 3, $$q_i(G^\ast)\ge\frac{h_i(2\delta-h_i+1)}{2}= \frac{\delta(\delta+1)}{2} \quad (i=1,2),$$ $$q_3(G^\ast)\ge\frac{h_3(2\delta-h_3+1)}{2}=\frac{(\delta-1)(\delta+2)}{2}.$$ Hence $$q\ge \sum_{i=1}^3q_i(G^\ast)-2\ge \frac{3(\delta-1)(\delta+2)}{2}.$$ **Case 4.2.1.2.2.2**. $|I_1|=\delta+3$, $|I_2|=\delta$. Let $M(I_1,I_2)=\{e_1,e_2\}$, where $$e_1=y_1z_1, \ e_2=y_2z_2, \ \{y_1,y_2\}\subseteq V(I_1^\ast), \ \{z_1,z_2\}\subseteq V(I_2^\ast).$$ If $y_1\not=y_2$ and $z_1\not=z_2$ then by Lemma 2, $$|I_1|+|I_2|\ge2\overline{p}+8=2(\delta-2)+8=2\delta+4,$$ contradicting (26). Let either $y_1\not=y_2$ and $z_1=z_2$ or $y_1=y_2$ and $z_1\not=z_2$.\ **Case 4.2.1.2.2.2.1**. 
$y_1\not=y_2$ and $z_1=z_2$. Assume w.l.o.g. that $y_1,y_2$ occur on $I_1$ in this order. If $y_2=y_1^+$ then $$|C|\ge |\xi_1\overrightarrow{C}y_1z_1y_2\overrightarrow{C}\xi_2x_2\overleftarrow{P}x_1\xi_1|=2\delta+4,$$ contradicting (26). Let $y_2\not=y_1^+$, i.e. $|y_1\overrightarrow{C}y_2|\ge2$. Put $$C^\prime=\xi_1\overrightarrow{C}y_2z_1\overleftarrow{C}\xi_2x_2\overleftarrow{P}x_1\xi_1,$$ $$C^{\prime\prime}=\xi_1\overleftarrow{C}z_1y_1\overrightarrow{C}\xi_2x_2\overleftarrow{P}x_1\xi_1.$$ Clearly, $$|C|\ge|C^\prime|=|\xi_1\overrightarrow{C}y_1|+|y_1\overrightarrow{C}y_2|+1+|\xi_2\overrightarrow{C}z_1|+\overline{p}+2,$$ $$|C|\ge|C^{\prime\prime}|=|\xi_1\overleftarrow{C}z_1|+|y_1\overrightarrow{C}y_2|+|y_2\overrightarrow{C}\xi_2|+1+\overline{p}+2.$$ By summing, we get $$2|C|\ge(|\xi_1\overrightarrow{C}y_1|+|y_1\overrightarrow{C}y_2|+|y_2\overrightarrow{C}\xi_2|+|\xi_2\overrightarrow{C}z_1|+|z_1\overrightarrow{C}\xi_1|)+|y_1\overrightarrow{C}y_2|+2+2\delta$$ $$=|C|+|y_1\overrightarrow{C}y_2|+2\delta+2\ge|C|+2\delta+4.$$ Hence $|C|\ge2\delta+4$, contradicting (26).\ **Case 4.2.1.2.2.2.2**. $y_1=y_2$ and $z_1\not=z_2$. Assume w.l.o.g. that $z_2, z_1$ occur on $I_2$ in this order.\ **Case 4.2.1.2.2.2.2.1**. $\delta\ge6$. If $|\xi_1\overrightarrow{C}y_1|\ge\delta-1$ and $|y_1\overrightarrow{C}\xi_2|\ge\delta-1$ then $|I_1|\ge2\delta-2\ge\delta+4$, contradicting the hypothesis. Thus, we can assume w.l.o.g. that $|\xi_1\overrightarrow{C}y_1|\le\delta-2$. If $y_1^-=\xi_1$ then $$|\xi_1\overleftarrow{C}z_2y_1\overrightarrow{C}\xi_2x_2\overleftarrow{P}x_1\xi_1|\ge2\delta+5,$$ contradicting (26). Let $y_1^-\not=\xi_1$, that is $y_1^-\in V(I_1^\ast)$. Since $|M(I_1,I_2)|=2$, we have $N(y_1^-)\subset V(I_1)$. If $N(y_1^-)\cap V(y_1^+\overrightarrow{C}\xi_2^-)=\emptyset$ then $|N(y_1^-)|\le\delta-1$, a contradiction. Otherwise, $y_1^-w\in E(G)$ for some $w\in V(y_1^+\overrightarrow{C}\xi_2^-)$. 
Put $$R=\xi_1\overrightarrow{C}y_1^-w\overleftarrow{C}y_1$$ $$C^\prime=\xi_1\overrightarrow{R}y_1z_1\overleftarrow{C}\xi_2x_2\overleftarrow{P}x_1\xi_1,$$ $$C^{\prime\prime}=\xi_1\overleftarrow{C}z_2y_1\overrightarrow{C}\xi_2x_2\overleftarrow{P}x_1\xi_1.$$ Clearly, $|R|\ge|\xi_1\overrightarrow{C}y_1|+1$ and $$|C|\ge|C^\prime|=|R|+1+|z_1\overleftarrow{C}\xi_2|+(\overline{p}+2)\ge|\xi_1\overrightarrow{C}y_1|+|z_1\overleftarrow{C}\xi_2|+\delta+2,$$ $$|C|\ge|C^{\prime\prime}|=|\xi_1\overleftarrow{C}z_1|+2+|y_1\overrightarrow{C}\xi_2|+(\overline{p}+2).$$ By summing, we get $$2|C|\ge(|\xi_1\overrightarrow{C}y_1|+|y_1\overrightarrow{C}\xi_2|+|\xi_2\overrightarrow{C}z_1|+|z_1\overrightarrow{C}\xi_1|)+2\delta+4=|C|+2\delta+4.$$ Hence $|C|\ge2\delta+4$, contradicting (26).\ **Case 4.2.1.2.2.2.2.2**. $\delta=5$. It follows that $$|I_1|=\delta+3=8, \ |I_2|=\delta=5, \ |C|=2\delta+3=13.$$ If either $|\xi_1\overrightarrow{C}y_1|\le\delta-2=3$ or $|y_1\overrightarrow{C}\xi_2|\le\delta-2=3$ then we can argue as in Case 4.2.1.2.2.2.2.1. Otherwise, $|\xi_1\overrightarrow{C}y_1|=|y_1\overrightarrow{C}\xi_2|=4$. If $|z_1\overleftarrow{C}\xi_2|\ge4$ then $$|\xi_1\overrightarrow{C}y_1z_1\overleftarrow{C}\xi_2x_2\overleftarrow{P}x_1\xi_1|\ge14>|C|,$$ a contradiction. Let $|z_1\overleftarrow{C}\xi_2|\le3$. Analogously, $|\xi_1\overleftarrow{C}z_2|\le3$, implying that $I_2=\xi_1z_1^+z_1z_2\xi_2^+\xi_2$. If $z_1^+z_2\in E(G)$ then $$|\xi_1\overrightarrow{C}y_1z_1z_1^+z_2\overleftarrow{C}\xi_2x_2\overleftarrow{P}x_1\xi_1|=14>|C|,$$ a contradiction. So, $N(z_1^+)\subseteq \{\xi_1,\xi_2,z_1,\xi_2^+\}$, again a contradiction, since $|N(z_1^+)|\ge\delta=5$.\ **Case 4.2.2**. $\overline{p}=\delta-1$. By (1), $$|C|\le3\delta+1-\overline{p}=2\delta+2. \eqno{(27)}$$ It follows that $|N_C(x_i)|\ge1$ $(i=1,2)$.\ **Case 4.2.2.1**. $|N_C(x_i)|\ge2$ $(i=1,2)$. If $N_C(x_1)\not=N_C(x_2)$ then by Lemma 1, $|C|\ge2\overline{p}+8=2\delta+6$, contradicting (27). Let $N_C(x_1)=N_C(x_2)$. 
If $s\ge3$ then $$|C|\ge s(\overline{p}+2)\ge3(\delta+1)>2\delta+2,$$ contradicting (27). Let $s=2$. It follows that $$|C|=2\delta+2, \ |I_1|=|I_2|=\delta+1, \ V(G)=V(C\cup P).$$ Assume that $yz\in E(G)$ for some $y\in V(P)$ and $z\in V(C)\backslash \{\xi_1,\xi_2\}$. Assume w.l.o.g. that $z\in V(I_1^\ast)$. Since $\overline{p}=\delta-1\ge4$, we can assume w.l.o.g. that $|x_1\overrightarrow{P}y|\ge2$. If $x_2w\in E(G)$ for some $w\in \{y^-,y^{-2}\}$ then $$|\xi_1\overrightarrow{C}z|\ge|\xi_1x_1\overrightarrow{P}wx_2\overleftarrow{P}yz|\ge\delta.$$ Observing also that $|z\overrightarrow{C}\xi_2|\ge2$, we have $|I_1|\ge\delta+2$, a contradiction. Otherwise, $d(x_2)\le\delta-1$, a contradiction. So, $N_C(y)\subseteq \{\xi_1,\xi_2\}$ for each $y\in V(P)$. On the other hand, by Lemma 2, $M(I_1,I_2)=\emptyset$ and hence $G\backslash \{\xi_1,\xi_2\}$ is disconnected. Let $H_1,H_2,H_3$ be the connected components of $G\backslash \{\xi_1,\xi_2\}$ with $V(H_i)=V(I_i^\ast)$ $(i=1,2)$ and $V(H_3)=V(P)$. By notation (5), $h_i=\delta$ $(i=1,2,3)$. By Lemma 3, $$q_i\ge\frac{h_i(2\delta-h_i+1)}{2}=\frac{\delta(\delta+1)}{2} \quad (i=1,2,3),$$ implying that $$q\ge\sum_{i=1}^3q_i\ge\frac{3(\delta^2+\delta)}{2}>\frac{3(\delta-1)(\delta+2)}{2}.$$ **Case 4.2.2.2**. Either $|N_C(x_1)|=1$ or $|N_C(x_2)|=1$. Assume w.l.o.g. that $|N_C(x_1)|=1$. It follows that $V(P)\backslash \{x_1\}\subset N(x_1)$. Put $N_C(x_1)=\{y_1\}$.\ **Case 4.2.2.2.1**. $N_C(x_2)\not=N_C(x_1)$. It follows that $x_2y_2\in E(G)$ for some $y_2\in V(C)\backslash \{y_1\}$. Clearly, $|y_1\overrightarrow{C}y_2|\ge\delta+1$ and $|y_2\overrightarrow{C}y_1|\ge\delta+1$. Hence $$|y_1\overrightarrow{C}y_2|=|y_2\overrightarrow{C}y_1|=\delta+1, \ |C|=2\delta+2, \ V(G)=V(C\cup P). \eqno{(28)}$$ If $s\ge3$ then there are at least two elementary segments on $C$ of length at least $\delta+1$. It means that $|C|>2\delta+2$, contradicting (28). Let $s=2$, i.e. $N_C(x_1)\cup N_C(x_2)=\{y_1,y_2\}=\{\xi_1,\xi_2\}$. 
Assume that $zw\in E(G)$ for some $z\in V(P)$ and $w\in V(C)\backslash \{y_1,y_2\}$, and assume w.l.o.g. that $w\in y_1\overrightarrow{C}y_2$. Since $V(P)\backslash \{x_1\}\subset N(x_1)$, we have $x_1z^+\in E(G)$. Further, since $C$ is extreme, $$|w\overrightarrow{C}y_2|\ge|wz\overleftarrow{P}x_1z^+\overrightarrow{P}x_2y_2|\ge\delta+1.$$ Hence, $|C|>2\delta+2$, contradicting (28). Thus, $N_C(z)\subseteq \{y_1,y_2\}$ for each $z\in V(P)$. On the other hand, by Lemma 2, $M(I_1,I_2)=\emptyset$. Further, we can argue as in Case 4.2.2.1.\ **Case 4.2.2.2.2**. $N_C(x_2)=N_C(x_1)=\{y_1\}$. It follows that $$N(x_i)=(V(P)\backslash \{x_i\})\cup \{y_1\} \ \ (i=1,2).$$ Since $\kappa\ge2$, there is a path $R=z\overrightarrow{R}w$ such that $z\in V(P)$ and $w\in V(C)\backslash\{y_1\}$. Since $N_C(x_1)=N_C(x_2)=\{y_1\}$, we have $z\not\in \{x_1,x_2\}$. Then $$|y_1x_1\overrightarrow{P}z^-x_2\overleftarrow{P}zw|=\delta+1,$$ and we can argue as in Case 4.2.2.2.1.\ **Case 4.2.3**. $\overline{p}=\delta$. By (1), $|C|\le 3\delta+1-\overline{p}=2\delta+1$. If $|Q|\ge\delta+1$ then by (2), $|C|\ge2|Q|\ge2\delta+2$, a contradiction. Let $$|Q|\le\delta. \eqno{(29)}$$ **Case 4.2.3.1**. $x_1x_2\not\in E(G)$. It follows that $|N_C(x_i)|\ge1$ $(i=1,2)$. If $|N_C(x_i)|\ge2$ for some $i\in\{1,2\}$ then clearly $|Q|\ge\overline{p}+2=\delta+2$, contradicting (29). Let $|N_C(x_1)|=|N_C(x_2)|=1$. Further, if $N_C(x_1)\not=N_C(x_2)$ then again $|Q|\ge\delta+2$, contradicting (29). Let $N_C(x_1)=N_C(x_2)=\{z_1\}$ for some $z_1\in V(C)$. Since $\kappa\ge2$, there is a path $L=yz_2$ connecting $P$ and $C$ such that $y\in V(P)$ and $z_2\in V(C)\backslash \{z_1\}$. Clearly, $y\not\in \{x_1,x_2\}$. If $x_2y^-\in E(G)$ then $$|Q|\ge|z_1x_1\overrightarrow{P}y^-x_2\overleftarrow{P}yz_2|=\delta+2,$$ contradicting (29). Let $x_2y^-\not\in E(G)$. Further, if $y^-\not=x_1$ then recalling that $x_2x_1\not\in E(G)$, we conclude that $|N_C(x_2)|\ge2$, a contradiction. 
Otherwise, $y^-=x_1$ and $|Q|\ge|z_1x_2\overleftarrow{P}yz_2|=\delta+1$, contradicting (29).\ **Case 4.2.3.2**. $x_1x_2\in E(G)$. Put $C^\prime=x_1\overrightarrow{P}x_2x_1$. Since $\kappa\ge2$, there are two disjoint paths $L_1,L_2$ connecting $C^\prime$ and $C$. Further, since $P$ is extreme, $|L_1|=|L_2|=1$. Let $L_1=y_1z_1$ and $L_2=y_2z_2$, where $y_1,y_2\in V(C^\prime)$ and $z_1,z_2\in V(C)$. Since $C^\prime$ is a Hamilton cycle in $G[V(P)]$ and $|C^\prime|\ge\delta+1\ge6$, we can assume that $P$ is chosen such that $x_1=y_1$ and $|x_1\overrightarrow{P}y_2|\ge3$. If $x_2v\in E(G)$ for some $v\in \{y_2^{-1},y_2^{-2}\}$ then $$|Q|\ge|z_1x_1\overrightarrow{P}vx_2\overleftarrow{P}y_2z_2|\ge\delta+1,$$ contradicting (29). Otherwise, $|N_C(x_2)|\ge2$, implying that $x_2z_3\in E(G)$ for some $z_3\in V(C)\backslash \{z_1\}$. Then $$|Q|\ge|z_1x_1\overrightarrow{P}x_2z_3|\ge\delta+2,$$ again contradicting (29).\ **Case 4.2.4**. $\overline{p}=\delta+1$. By (1), $|C|\le 3\delta+1-\overline{p}=2\delta$. Recalling (14), we get $|C|=2\delta$ and $V(G)=V(C\cup P)$. If $|Q|\ge\delta+1$ then by (2), $|C|\ge2|Q|\ge2\delta+2$, a contradiction. Let $$|Q|\le\delta. \eqno{(30)}$$ **Case 4.2.4.1**. $x_1x_2\in E(G)$. Put $C^\prime=x_1\overrightarrow{P}x_2x_1$. Since $\kappa\ge2$, there are two disjoint edges $z_1w_1$ and $z_2w_2$ connecting $C^\prime$ and $C$ such that $z_1,z_2\in V(C^\prime)$ and $w_1,w_2\in V(C)$. Since $C^\prime$ is a Hamilton cycle in $G[V(P)]$ and $|C^\prime|\ge\delta+2\ge7$, we can assume w.l.o.g. that $P$ is chosen such that $z_1=x_1$ and $|x_1\overrightarrow{P}z_2|\ge4$. If $x_2v\in E(G)$ for some $v\in \{z_2^{-1},z_2^{-2},z_2^{-3}\}$ then $$|Q|\ge|w_1x_1\overrightarrow{P}vx_2\overleftarrow{P}z_2w_2|\ge\delta+1,$$ contradicting (30). Now let $x_2v\not\in E(G)$ for each $v\in \{z_2^{-1},z_2^{-2},z_2^{-3}\}$. It follows that $|N_C(x_2)|\ge2$, i.e. $x_2w_3\in E(G)$ for some $w_3\in V(C)\backslash \{w_1\}$. 
But then $|Q|\ge|w_1x_1\overrightarrow{P}x_2w_3|=\delta+3$, contradicting (30).\ **Case 4.2.4.2**. $x_1x_2\not\in E(G)$. If $d_P(x_1)+d_P(x_2)\ge|V(P)|=\overline{p}+1$ then by Theorem F, $G[V(P)]$ is hamiltonian and we can argue as in Case 4.2.4.1. Otherwise, $$d_C(x_1)+d_C(x_2)\ge \delta-1\ge4. \eqno{(31)}$$ Assume w.l.o.g. that $d_C(x_1)\ge d_C(x_2)$.\ **Case 4.2.4.2.1**. $d_C(x_2)=0$. It follows that $N(x_2)=V(P)\backslash \{x_2\}$. By (31), $d_C(x_1)\ge4$. Put $C^\prime=x_1^+\overrightarrow{P}x_2x_1^+$. Since $\kappa\ge2$, there is a path $L=z\overrightarrow{L}w$ connecting $C^\prime$ and $C$ such that $z\in V(C^\prime)\backslash \{x_1^+\}$ and $w\in V(C)$. If $x_1\in V(L)$, i.e. $x_1z\in E(G)$, then $x_1\overrightarrow{P}z^-x_2\overleftarrow{P}zx_1$ is a Hamilton cycle in $G[V(P)]$ and we can argue as in Case 4.2.4.1. Let $x_1\not\in V(L)$. Since $V(G)=V(C\cup P)$, we have $L=zw$. Further, since $d_C(x_1)\ge4$, we have $x_1w_1\in E(G)$ for some $w_1\in V(C)\backslash \{w\}$. Hence, $$|Q|\ge|w_1x_1\overrightarrow{P}z^-x_2\overleftarrow{P}zw|=\delta+3,$$ contradicting (30).\ **Case 4.2.4.2.2**. $d_C(x_2)=1$. Let $N_C(x_2)=\{w_1\}$. By (31), $d_C(x_1)\ge3$, i.e. $x_1w\in E(G)$ for some $w\in V(C)\backslash \{w_1\}$. Hence $$|Q|\ge|wx_1\overrightarrow{P}x_2w_1|=\delta+3,$$ contradicting (30).\ **Case 4.2.4.2.3**. $d_C(x_2)\ge2$. Since $d_C(x_1)\ge d_C(x_2)$, we have $d_C(x_1)\ge2$. Hence $|Q|\ge\overline{p}+2=\delta+3$, contradicting (30). ------------------------------------------------------------------------ \ **Proof of Theorem 1**. Let $G$ be a 2-connected graph, $C$ a longest cycle in $G$ and $P=x_1\overrightarrow{P}x_2$ a longest path in $G\backslash C$ of length $\overline{p}$. If $\overline{p}=0$ then $C$ is a dominating cycle and we are done. Let $\overline{p}\ge1$.\ **Case 1**. $\delta=2$ and $q\le8$. Since $\kappa\ge2$ and $\overline{p}\ge1$, there exists a path $Q=\xi\overrightarrow{Q}\eta$ such that $|Q|\ge3$ and $V(Q)\cap V(C)=\{\xi,\eta\}$. 
Further, since $C$ is extreme, we have $|C|=|\xi\overrightarrow{C}\eta|+|\eta\overrightarrow{C}\xi|\ge2|Q|\ge6$ and therefore, $q\ge|C|+|Q|\ge9$, contradicting the hypothesis.\ **Case 2**. $\delta\ge3$ and $q\le(3(\delta-1)(\delta+2)-1)/2$. Since $$q=\frac{1}{2}\sum_{u\in V(G)}d(u)\ge \frac{\delta n}{2},$$ we have $\delta n/2 \le (3(\delta-1)(\delta+2)-1)/2$, which is equivalent to $$\delta\ge \frac{n-2}{3}-\frac{1}{3}+\frac{7}{3\delta}.$$ If $n=3t$ for some integer $t$, then $$\delta\ge \frac{3t-2}{3}-\frac{1}{3}+\frac{7}{3\delta}=t-1+\frac{7}{3\delta},$$ implying that $\delta\ge t=n/3>(n-2)/3$. If $n=3t+1$ for some integer $t$, then $$\delta\ge \frac{3t-1}{3}-\frac{1}{3}+\frac{7}{3\delta}=t-\frac{2}{3}+\frac{7}{3\delta},$$ implying that $\delta\ge t=(n-1)/3>(n-2)/3$. Finally, if $n=3t+2$ for some integer $t$, then $$\delta\ge \frac{3t}{3}-\frac{1}{3}+\frac{7}{3\delta}=t-\frac{1}{3}+\frac{7}{3\delta},$$ implying that $\delta\ge t=(n-2)/3$. So, $\delta\ge(n-2)/3$, in any case. By Lemma 4, each longest cycle in $G$ is a dominating cycle. ------------------------------------------------------------------------ [10]{} J.A. Bondy and U.S.R. Murty, Graph Theory with Applications. Macmillan, London and Elsevier, New York (1976). G.A. Dirac, Some theorems on abstract graphs, Proc. London Math. Soc. 2 (1952) 69–81. P. Erdös and T. Gallai, On maximal paths and circuits of graphs, Acta Math. Acad. Sci. Hungar. 10 (1959) 337–356. Zh.G. Nikoghosyan, Path-Extensions and Long Cycles in Graphs, Transactions of the Institute for Informatics and Automation Problems of the NAS of RA and of the Yerevan State University, Mathematical Problems of Computer Science 19 (1998) 25–31. Zh.G. Nikoghosyan, A Size Bound for Hamilton Cycles, preprint, arXiv:1107.2201v1. O. Ore, Note on Hamilton circuits, Amer. Math. Monthly 67 (1960) 55. Institute for Informatics and Automation Problems\ National Academy of Sciences\ P. Sevak 1, Yerevan 0014, Armenia\ E-mail: zhora@ipia.sci.am
---
abstract: 'This is mainly a survey of recent work on algebraic ways to “measure” moduli spaces of connecting trajectories in Morse and Floer theories as well as related applications to symplectic topology. The paper also contains some new results. In particular, we show that the methods of [@BaCo] continue to work in general symplectic manifolds (without any connectivity conditions) but under the bubbling threshold.'
address:
- 'J-F.B.: UFR de Mathématiques, Université de Lille 1, 59655 Villeneuve d’Ascq, France'
- 'O.C.: University of Montréal, Department of Mathematics and Statistics, CP 6128 Succ. Centre Ville, Montréal, QC H3C 3J7, Canada'
author:
- 'Jean-François Barraud and Octav Cornea'
date: 'May 20, 2005.'
title: 'Homotopical dynamics in symplectic topology.'
---

Introduction
============

The main purpose of this paper is to survey a number of Morse-theoretic results which show how to estimate algebraically the high-dimensional moduli spaces of Morse flow lines and to describe some of their recent applications to symplectic topology. We also deduce some new applications. The paper starts with a brief discussion of the various proofs showing that the differential in the Morse complex is indeed a differential. Along the way we introduce the main concepts in Morse theory and, in particular, the notion of connecting manifold (or, equivalently, the moduli space of flow lines connecting two critical points) which is the main object of interest in our further constructions. Moreover, an extension of one of these proofs leads naturally to an important result of John Franks [@Fr] which describes the framed cobordism class of connecting manifolds between consecutive critical points as a certain relative attaching map. After describing Franks’ result, we proceed to a stronger result initially proved in [@Co1] which computes a framed bordism class naturally associated to the same connecting manifolds in terms of certain Hopf invariants. 
While these results only apply to consecutive critical points, we then describe a recent method to estimate general connecting manifolds by means of the Serre spectral sequence of the path-loop fibration having as base the ambient manifold [@BaCo]. Some interesting topological consequences of these results are briefly mentioned as well as some other methods used in the study of these problems. The third section discusses a number of symplectic applications. We start with some results which first appeared in [@Co2]. These use the non-vanishing of certain Hopf invariants to deduce the existence of bounded orbits of hamiltonian flows (obviously, inside non-compact manifolds). This is a very “soft” type of result even if it is difficult to prove. We then continue in §\[subsec:strips\] by describing how to use the Serre spectral sequence result to detect pseudo-holomorphic strips as well as some consequences of the existence of the strips. Most of the results of this part first appeared in [@BaCo] but there are some that are new: we discuss explicitly the detection of pseudoholomorphic strips passing through some submanifold and we present a way to construct in a coherent fashion our theory for lagrangians in general symplectic manifolds as long as we remain [*under a bubbling threshold*]{}. Notice that even the analogue of the classical Floer theory (which is a very particular case of our construction) has not been made explicit in the literature in the Lagrangian case even if all the necessary ideas are present in some form - see [@Sch1] for the hamiltonian case. The paper contains a number of open problems and ends with a conjecture which is supported by the results in §\[subsec:strips\] as well as by recent joint results of the second author with François Lalonde.

Elements of Morse theory {#sec:Morse}
========================

Assume that $M$ is a compact, smooth manifold without boundary of dimension $n$. 
Let $f:M\to {\mathbb{R}}$ be a smooth Morse function and let $\gamma: M\times {\mathbb{R}}\to M$ be a negative gradient Morse-Smale flow associated to $f$. In particular, $f$ is strictly decreasing along any non-constant flow line of $\gamma$ and the stable manifolds $$W^{s}(P)=\{x\in M : \lim_{t\to \infty}\gamma_{t}(x)=P\}$$ and the unstable manifolds $$W^{u}(Q)=\{x\in M : \lim_{t\to -\infty}\gamma_{t}(x)=Q\}$$ of any pair of critical points $P$ and $Q$ of $f$ intersect transversally. One of the most useful and simple tools that can be defined in this context is the Morse complex $$C(\gamma)=({\mathbb{Z}}/2<Crit(f)>,d)~.~$$ Here ${\mathbb{Z}}/2< S >$ is the ${\mathbb{Z}}/2$-vector space generated by the set $S$, the vector space ${\mathbb{Z}}/2<Crit(f)>$ has a natural grading given by $|P|=ind_{f}(P), \forall P\in Crit(f)$ and $d$ is the differential of the complex which is defined by $$dx=\sum_{|y|=|x|-1}a^{x}_{y}y$$ where the coefficients are $a^{x}_{y}=\#((W^{u}(x)\cap W^{s}(y))/{\mathbb{R}})$. This definition makes sense because the set $W^{u}(P)\cap W^{s}(Q)$, which consists of all the points situated on some flow line joining $P$ to $Q$, is invariant under the ${\mathbb{R}}$-action given by the flow. Moreover, $W^{u}(P)$ and $W^{s}(P)$ are homeomorphic to open disks which implies that the set $M^{P}_{Q}=(W^{u}(P)\cap W^{s}(Q))/{\mathbb{R}}$ has the structure of a smooth (in general, non-compact) manifold of dimension $|P|-|Q|-1$. We call this space the moduli space of flow lines joining $P$ to $Q$. It is not difficult to understand the reasons for the non-compactness of $M^{P}_{Q}$ when $M$ is compact as in our setting: this is simply due to the fact that a family of flow lines joining $P$ to $Q$ may approach a third, intermediate, critical point $R$. For this to happen it is necessary (and sufficient - see Smale [@Sm] or Franks [@Fr]) to have some flow line which joins $P$ to $R$ and some other joining $R$ to $Q$.
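As a concrete toy illustration (added here, not taken from the paper), the following Python sketch encodes the ${\mathbb{Z}}/2$ Morse complex of the standard height function on the torus: a minimum $m$, two saddles $a,b$, a maximum $M$, with each saddle joined to each extremum by two flow lines. The flow-line counts are the standard picture for this example and are an assumption of the sketch.

```python
# Z/2 Morse complex of the height function on the torus (a toy model).
# Generators are the critical points, graded by Morse index; the
# differential counts flow lines between consecutive critical points mod 2.

from itertools import product

# Morse index of each critical point: minimum m, saddles a, b, maximum M
index = {"m": 0, "a": 1, "b": 1, "M": 2}

# number of negative-gradient flow lines between consecutive critical
# points (each saddle is joined to each extremum by two flow lines)
flow_lines = {("M", "a"): 2, ("M", "b"): 2, ("a", "m"): 2, ("b", "m"): 2}

def d(x):
    """dx = sum over y with |y| = |x| - 1 of (a^x_y mod 2) * y."""
    return {y: flow_lines.get((x, y), 0) % 2
            for y in index if index[y] == index[x] - 1}

def d_squared_vanishes():
    """Check sum_y a^x_y * a^y_z = 0 (mod 2) whenever |x| - |z| = 2."""
    for x, z in product(index, index):
        if index[x] - index[z] == 2:
            s = sum(d(x)[y] * d(y)[z] for y in index
                    if index[y] == index[x] - 1) % 2
            if s != 0:
                return False
    return True

# Here every coefficient is 0 mod 2, so d = 0 and the homology in each
# degree is spanned by the generators themselves: Betti numbers 1, 2, 1.
betti = [sum(1 for x in index if index[x] == k and
             all(v == 0 for v in d(x).values())) for k in range(3)]
```

Since every coefficient vanishes mod 2 in this example, the complex recovers $H_{\ast}(T^{2};{\mathbb{Z}}/2)$ directly; the Betti-number count in the last line is valid only because $d=0$ here.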
This implies that when $|P|=|Q|+1$ the set $M^{P}_{Q}$ is compact and thus the sum above is finite. For further use let us also define the unstable sphere of a critical point $P$ as $S^{u}_{a}(P)=W^{u}(P)\cap f^{-1}(a)$ as well as the stable sphere $S^{s}_{a}(P)=W^{s}(P)\cap f^{-1}(a)$ where $a$ is a regular value of $f$. It should be noted that these names are slightly abusive as these two sets are spheres, in general, only if $a$ is sufficiently close to $f(P)$. In that case $S^{u}_{a}(P)$ is homeomorphic to a sphere of dimension $|P|-1$ and $S^{s}_{a}(P)$ is homeomorphic to a sphere of dimension $n-|P|-1$. With these notations the moduli space $M^{P}_{Q}$ is homeomorphic to $S^{u}_{a}(P)\cap S^{s}_{a}(Q)$ for any $a\in (f(Q),f(P))$ which is a regular value of $f$. The main properties of the object defined above are that: $$d^{2}=0 \ {\rm and}\ H_{\ast}(C(\gamma))\approx H_{\ast}(M;{\mathbb{Z}}/2)~.~$$ We will sometimes denote this complex by $C(f)$ and will call it the (classical) Morse complex of $f$. The flow $\gamma$ may in fact even be a pseudo- (negative) gradient flow of $f$. There also exists a version of this complex over ${\mathbb{Z}}$ in which the counting of the elements in $M^{x}_{y}$ takes into account orientations. There are essentially four methods to prove these properties:

-   (i) Deducing the equation $\sum_{y}a^{x}_{y}a^{y}_{z}=0$ (which is equivalent to $d^{2}=0$) from the properties of the moduli spaces $M^{x}_{z}$ with $|x|-|z|=2$.

-   (ii) Comparing $a^{x}_{y}$ with a certain relative attaching map.

-   (iii) Expressing $a^{x}_{y}$ in terms of a connection map in Conley’s index theory.

-   (iv) A method based on a deformation of the de Rham complex (clearly, in this case the coefficients are required to be in ${\mathbb{R}}$).

For the rest of this paper the two approaches that are of the most interest are (i) and (ii). We shall therefore first say a few words about the other two methods and then describe the first two in more detail.
Method (iii) consists in regarding two critical points $x$, $y$ so that $|x|=|y|+1$ as an attractor-repellor pair and in applying the general Conley theory of Morse decompositions to this situation [@Sal1]. Method (iv) has been introduced by Witten in [@Wit] and is based on a deformation of the differential of the de Rham complex which provides a new differential with respect to which the harmonic forms are in bijection with the critical points of $f$. Method (i) has probably been folklore for a long time but it first appeared explicitly in Witten’s paper. It is based on noticing that the moduli space $M^{P}_{Q}$ admits a compactification $\overline{M}^{P}_{Q}$ which is a compact, topological manifold with boundary so that the boundary verifies the formula: $$\label{eq:first_bdry} \partial\overline{M}^{P}_{Q}=\bigcup_{R}\overline{M}^{P}_{R}\times\overline{M}^{R}_{Q}~.~$$ There are two main ways to prove this formula. One is analytic: it regards a flow line from $P$ to $Q$ as a solution of a differential equation $\dot{x}=-\nabla f(x)$ and studies the properties of such solutions (this method is presented in the book of Schwarz [@Sch]). A second approach is more topological/dynamical in nature and is described in detail by Weber [@We]. Clearly, from formula (\[eq:first\_bdry\]) we immediately deduce $\sum_{y}a^{x}_{y}a^{y}_{z}=0$ and hence $d^{2}=0$. Just a little more work is needed to deduce from here the second property. Method (ii) was the one best known classically and it is essentially implicit in Milnor’s $h$-cobordism book [@Mil]. It is based on the observation that $a^{x}_{y}$ can be viewed as follows. First, to simplify the argument slightly, assume that the only critical points in $f^{-1}([f(y),f(x)])$ are $x$ and $y$.
It is well known that for $a\in (f(y),f(x))$, there exists a deformation retract $$r:M(a)=f^{-1}(-\infty,a]\to M(f(y)-\epsilon)\cup_{\phi_{y}} D^{|y|}=M'$$ where the attaching map $$\label{eq:attaching} \phi_{y}: S^{u}_{f(y)-\epsilon}(y)\to M(f(y)-\epsilon)$$ is just the inclusion and $\epsilon$ is small. This deformation retract follows the flow until it reaches $U(W^{u}(y))\cup M(f(y)-\epsilon)$, where $U(W^{u}(y))$ is a tubular neighbourhood of $W^{u}(y)$ so that the flow is transverse to its boundary, and then collapses this neighbourhood to $W^{u}(y)$ by the canonical projection. Clearly, applying this remark to each critical point of $f$ provides a $CW$-complex of the same homotopy type as that of $M$ and with one cell $\bar{x}$ for each element $x\in Crit(f)$. To this cellular decomposition we may associate a cellular complex $(C'(f), d')$ with the property that $d'\bar{x}=\sum k^{\bar{x}}_{\bar{y}}\bar{y}$ where $k^{\bar{x}}_{\bar{y}}$ is, by definition, the degree of the composition: $$\label{eq:rel_attaching} \psi^{x}_{y}:S^{u}_{a}(x)\stackrel{\phi_{x}}{\longrightarrow} M(a)\stackrel{r}{\longrightarrow}M' \stackrel{u}{\longrightarrow} M'/M(f(y)-\epsilon)$$ with $\phi_{x}:S^{u}_{a}(x)\to M(a)$ again the inclusion and where the last map, $u$, is the projection onto the respective topological quotient space (which is homeomorphic to the sphere $S^{|y|}$). Notice now that $M^{x}_{y}\subset S^{u}_{a}(x)$ is a finite union of points, say $P_{1},\ldots, P_{k}$. Imagine a small disk $D_{i}\subset S^{u}_{a}(x)$ around $P_{i}$. The key (but geometrically clear) remark is that the composition of the flow $\gamma$ together with the retraction $r$ transports $D_{i}$ (if it is chosen sufficiently small) homeomorphically onto a neighbourhood of $y$ inside $W^{u}(y)$. Therefore $deg(\psi^{x}_{y})=a^{x}_{y}$ and thus $d=d'$, which shows that $d$ is a differential and that the homology it computes agrees with the homology of $M$.
As we shall see further, the points of view reflected in approaches (i) and (ii) lead to interesting applications which go well beyond “classical" Morse theory. Method (iv), while striking and inspiring, appears for now not to have been exploited efficiently. Connecting Manifolds -------------------- One way to look at the Morse complex is to view the coefficients $a^{x}_{y}$ of the differential as a measure of the $0$-dimensional manifold $M^{x}_{y}$. The question we discuss here is in what way we can measure algebraically the similar higher-dimensional moduli spaces. This is clearly a significant issue because, obviously, only a very superficial part of the dynamics of the negative gradient flow of $f$ is encoded in the $0$-dimensional moduli spaces of connecting flow lines. As a matter of terminology, the space $M^{P}_{Q}$ when viewed inside the unstable sphere $S^{u}_{a}(P)$ (with $f(P)-a$ positive and very small) is also called [@Fr] the connecting manifold of $P$ and $Q$. It was mentioned above that, in general, a connecting manifold $M^{P}_{Q}$ is not closed. However, if the critical points $P$ and $Q$ are [*consecutive*]{} in the sense that there does not exist a critical point $R$ so that $M^{P}_{R}\times M^{R}_{Q}\not=\emptyset$, then $M^{P}_{Q}$ is closed. ### Framed Cobordism Classes An important remark of John Franks [@Fr] is that connecting manifolds are canonically framed. First recall that a framed manifold $V$ is a submanifold $V\hookrightarrow S^{n}$ which has a trivial normal bundle together with a trivialization of this bundle. Two such trivializations are equivalent (and will generally be identified) if they are restrictions of a trivialization of the normal bundle of $V\times [0,1]$ inside $S^{n}\times [0,1]$. We also recall the Thom-Pontryagin construction in this context [@Mil2].
Assuming $V\hookrightarrow S^{n}$ is framed we define a map $$\phi_{V}:S^{n}\to S^{codim(V)}$$ as follows: consider a tubular neighbourhood $U(V)$ of $V$, use the framing to define a homeomorphism $\psi:U(V)\to D^{k}\times V$ where $D^{k}$ is the closed disk of dimension $k=codim(V)$, consider the composition $\psi':U(V)\stackrel{\psi}{\longrightarrow}D^{k}\times V\stackrel{p_{1}}{\longrightarrow} D^{k}\to D^{k}/S^{k-1}=S^{k}$ and define $\phi_{V}$ by extending $\psi'$ outside $U(V)$ by sending each $x\in S^{n}\backslash U(V)$ to the base point in $D^{k}/S^{k-1}=S^{k}$. The homotopy class of this map is the same for equivalent framings. It is easy to see that two framed manifolds (of the same dimension) are cobordant iff their associated Thom maps are homotopic. We return now to Franks’ remark and notice that the manifolds $M^{P}_{Q}$ are framed. First, we make the convention to view $M^{P}_{Q}$ as a submanifold of the unstable sphere of $P$, $S^{u}_{a}(P)$ (the other choice would have been to use $S^{s}(Q)$ as ambient manifold). Notice that we have $$M^{P}_{Q}=S^{u}_{a}(P)\cap W^{s}(Q)$$ and this intersection is transversal. Clearly, as $W^{s}(Q)$ is homeomorphic to a disk, its normal bundle in $M$ is trivial and any two trivializations of this bundle are equivalent. This implies that the normal bundle of $M^{P}_{Q}\hookrightarrow S^{u}_{a}(P)$ is also trivial and a trivialization of the normal bundle of $W^{s}(Q)$ provides a trivialization of this bundle which is unique up to equivalence. Recall that if $P$ and $Q$ are consecutive critical points, then $M^{P}_{Q}$ is closed. As we have seen that it is also framed, we may associate to it a framed cobordism class $$\widetilde{M^{P}_{Q}}\in \pi_{|P|-1}(S^{|Q|})~.~$$ Moreover, it is easy to see that the function $f$ may be perturbed without modifying the dynamics of the negative gradient flow so that the cell attachments corresponding to the critical points $Q$ and $P$ are in succession.
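In the lowest-dimensional case the Thom-Pontryagin construction is completely explicit, and we record it in a small sketch (an illustration added here, not from the paper): a framed $0$-dimensional submanifold of $S^{n}$ is a finite set of points, each carrying a framing sign $\pm 1$, and its framed cobordism class in $\pi_{n}(S^{n})\cong {\mathbb{Z}}$ is the sum of these signs, i.e. the degree of the Thom map. Over ${\mathbb{Z}}$ this is exactly how the Morse coefficient $a^{P}_{Q}$ is counted when $|P|=|Q|+1$.

```python
# Framed cobordism in dimension 0: points of opposite framing sign can be
# joined by an arc in S^n x [0,1] and cancel, so the cobordism class in
# pi_n(S^n) = Z is simply the signed count of points.
def framed_cobordism_class(framing_signs):
    """framing_signs: list of +1/-1, one per point of the framed 0-manifold.
    Returns the class in pi_n(S^n) = Z, i.e. the degree of the Thom map."""
    assert all(s in (+1, -1) for s in framing_signs)
    return sum(framing_signs)
```

For instance two points with opposite signs are framed-cobordant to the empty manifold, reflecting the cancellation of flow lines counted with signs in the ${\mathbb{Z}}$-version of the Morse complex.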
Therefore the map $$\psi^{P}_{Q}:S^{|P|-1}\to S^{|Q|}$$ of formula (\[eq:rel\_attaching\]) is still defined. The main result of Franks in [@Fr] is: \[theo:fr\][@Fr] Assume $P$ and $Q$ are consecutive critical points of $f$. Up to sign $\widetilde{M^{P}_{Q}}$ coincides with the homotopy class of $\psi^{P}_{Q}$. The idea of the proof of this result is quite simple. All that is required is to make even more precise the constructions used in approach (ii) to show $d^{2}=0$ for the Morse complex. For this we fix for $W^{s}(Q)$ a normal framing $o$ which is invariant by translation along the flow $\gamma$ and which at $Q\in W^{s}(Q)$ is given by a basis $e$ of $T_{Q}W^{u}(Q)$ (this is possible because $W^{u}(Q)$ and $W^{s}(Q)$ intersect transversally at $Q$). We also fix the tubular neighbourhood $U(W^{u}(Q))$ so that the projection $r'':U(W^{u}(Q))\to W^{u}(Q)$ has the property that $(r'')^{-1}(Q)=W^{s}(Q)\cap U(W^{u}(Q))$ and, for any point $y\in (r'')^{-1}(Q)$, we have $(r'')_{\ast}(o_{y})=e$. Moreover, we may assume that the normal bundle of $M^{P}_{Q}$ in $S^{u}_{a}(P)$ is just the restriction of the normal bundle of $W^{s}(Q)$ (in fact, the two are, in general, only isomorphic and not equal but this is just a minor issue). Now, follow what happens with the framing of $M^{P}_{Q}$ along the composition $u\circ r$. For this we write $r=r''\circ r'$ where $r'$ follows the flow till reaching $U(W^{u}(Q))$. Now pick a point $x\in M^{P}_{Q}$ together with its normal frame $o_{x}$ at $x$. After applying $r'$, the pair $(x,o_{x})$ is taken to a pair $(x',o_{x'})$ with $x'\in\partial ((r'')^{-1}(Q))$. Applying now $r''$, the image of $(x',o_{x'})$ is $(Q,e)$. Now take a tubular neighbourhood $V$ of $M^{P}_{Q}\hookrightarrow S^{u}_{a}(P)$ together with an identification $V\approx D^{|Q|}\times M^{P}_{Q}$ which is provided by the framing $o$.
The argument above implies that if the constant $\epsilon$ used to construct the map $u: M(f(Q)-\epsilon)\cup W^{u}(Q)=M'\to M'/M(f(Q)-\epsilon)$ is very small, then the composition $u\circ r''\circ r'$ coincides with the relevant Thom-Pontryagin map. ### Framed Bordism Classes and Hopf invariants It is natural to wonder whether, besides their framing, there are some other properties of the connecting manifolds which can be detected algebraically. A useful point of view in this respect turns out to be the following: imagine the elements of $M^{P}_{Q}$ as paths or loops on $M$. The fact that they are paths is obvious (we parametrize them by the value of $-f$; the negative sign gives the flow lines the orientation coherent with the negative gradient) but they can be transformed into loops very easily. Indeed, fix a simple path in $M$ which joins all the critical points of $f$ and contract it to a point, thus obtaining a quotient space $\widehat{M}$ which has the same homotopy type as $M$. Let $q:M\to \widehat{M}$ be the quotient map. We denote by $\Omega M$ the space of based loops on $M$ and keep the notation $q$ for the induced map $\Omega M\to \Omega\widehat{M}$. This discussion shows that there are continuous maps $$j^{P}_{Q}:M^{P}_{Q}\to \Omega\widehat{M}~.~$$ These maps were first defined and used in [@Co1] and they have some interesting properties. For example, given such a map $j^{P}_{Q}$ and assuming that $P$ and $Q$ are consecutive, it is natural to ask whether the homology class $[M^{P}_{Q}]\in H_{|P|-|Q|-1}(\Omega M; {\mathbb{Z}})$ is computable (here $[M^{P}_{Q}]$ is the image under $j^{P}_{Q}$ of the fundamental class of $M^{P}_{Q}$). We shall see that quite a bit more is indeed possible: the full framed bordism class associated to $j^{P}_{Q}$ and to the canonical framing on $M^{P}_{Q}$ can be expressed as a relative Hopf invariant.
To explain this result first recall that if $V\hookrightarrow S^{n}$ is framed and $l:V\to X$ is a continuous map with $V$ a closed manifold, we may construct a richer Thom-Pontryagin map as follows. We again consider a tubular neighbourhood $U(V)$ of $V$ in $S^{n}$ together with an identification $U(V)\approx D^{k}\times V$ where $k=codim(V)$ which is provided by the framing. We now define a map $$\bar{\phi}_{V}:U(V)\stackrel{j}{\longrightarrow} D^{k}\times V \stackrel{id\times l}{\longrightarrow} D^{k}\times X\to (D^{k}\times X)/(S^{k-1}\times X)$$ where $j:U(V)\to D^{k}\times V$ is this identification and the last map is just the quotient map (which identifies $S^{k-1}\times X$ to the base point). Notice that $\bar{\phi}_{V}(\partial U(V))\subset S^{k-1}\times X$. Therefore, we may extend the definition above to a map $$\bar{\phi}_{V}:S^{n}\to (D^{k}\times X)/(S^{k-1}\times X)$$ by sending all the points in the complement of $U(V)$ to the base point. It is well-known (and a simple exercise in elementary homotopy theory) that there exists a (canonical) homotopy equivalence $(D^{k}\times X)/(S^{k-1}\times X)\simeq \Sigma^{k}(X^{+})$ where $\Sigma X$ is the (reduced) suspension of $X$, $\Sigma^{i}$ is the suspension iterated $i$ times and $X^{+}$ is the space $X$ with an added disjoint point (notice also that $\Sigma^{k}(X^{+})=\Sigma^{k}X\vee S^{k}$ where $\vee$ denotes the [*wedge*]{} or the one point union of spaces). This allows us to view the map $\bar{\phi}_{V}$ as a map with values in $\Sigma^{k}(X^{+})$. The framed bordism class of $V$ is simply the homotopy class $[\bar{\phi}_{V}]\in \pi_{n}(\Sigma^{k}(X^{+}))$. This is independent of the various choices made in the construction. Two pairs of data (framings included) $(V,l)$ and $(V',l')$ admit an extension to a manifold $W\subset S^{n}\times [0,1]$ with $\partial W=V\times\{0\}\cup V'\times\{1\}$ iff $\bar{\phi}_{V}\simeq \bar{\phi}_{V'}$.
Notice also that to an element $\alpha\in \pi_{n}(\Sigma^{k} X^{+})$ we may associate a homology class $[\alpha]\in H_{n-k}(X)$ obtained by applying the Hurewicz homomorphism, desuspending $k$ times and projecting on the $H_{\ast}(X)$ term in $H_{\ast}(X^{+})$. Returning now to our connecting manifolds $M^{P}_{Q}$, we again focus on the case when $P$ and $Q$ are consecutive. The map $j^{P}_{Q}$ together with the canonical framing provide a homotopy class $\{M^{P}_{Q}\}\in \pi_{|P|-1}(\Sigma^{|Q|}(\Omega M ^{+}))$ (to simplify notation we have replaced $\widehat{M}$ with $M$ here - the two are homotopy equivalent). As indicated above, it turns out that this class can be computed in terms of a relative Hopf invariant. We shall now discuss how this invariant is defined. Assume that $S^{q-1}\stackrel{\alpha}{\to} X_{0}\to X'$ and $S^{p-1}\stackrel{\beta}{\to} X'\to X''$ are two successive cell attachments and that $X''$ is a subspace of some larger space $X$. In particular, $X'= X_{0}\cup_{\alpha} D^{q}$, $X''= X'\cup_{\beta} D^{p}$. Let $S\subset D^{q}$ be the $(q-1)$-sphere of radius $1/2$. There is an important map called the coaction associated to $\alpha$ which is defined by the composition $$\nabla: X'\to X'/S \approx S^{q}\vee X'$$ where the first map identifies all the points of $S$ to a single one and the second is a homeomorphism (in practice it is convenient to also assume that all the maps and spaces involved are pointed and in that case we view $D^{q}$ as the reduced cone over $S^{q-1}$). We consider the composition $$\psi(\beta,\alpha):S^{p-1}\stackrel{\beta}{\longrightarrow} X'\stackrel{\nabla}{\longrightarrow}S^{q}\vee X'\stackrel{id\vee j}{\longrightarrow} S^{q}\vee X''\hookrightarrow S^{q}\vee X$$ and notice that if $p_{2}:S^{q}\vee X\to X$ is the projection on the second factor, then the composition $p_{2}\circ \psi(\beta,\alpha)$ is null-homotopic.
This is due to the fact that this composition is homotopic to $S^{p-1}\stackrel{\beta}{\longrightarrow}X'\to X''\hookrightarrow X$ which is clearly null-homotopic. We now consider the map $p_{2}:S^{q}\vee X\to X$. It is well-known in homotopy theory that any map may be transformed into a fibration. In our case this comes down to considering the free path fibration $$t: \widetilde{P}X\to X$$ where $\widetilde{P}X$ is the set of all continuous paths in $X$ parametrized by $[0,1]$, $t(\gamma)=\gamma(0)$. We take the pull-back of this fibration over $p_{2}$. The total space $\widetilde{E}$ of the resulting fibration has the same homotopy type as $S^{q}\vee X$ and it is endowed with a canonical projection $$\widetilde{p}:\widetilde{E}\to X$$ which replaces $p_{2}$, $\widetilde{p}(z,\gamma)=\gamma(1)$. It is an exercise in homotopy theory to see that the fibre of the fibration $\widetilde{p}$ is homotopy equivalent to $\Sigma^{q}((\Omega X)^{+})$ and that, moreover, the inclusion of this fibre in the total space is injective in homotopy. As the composition $p_{2}\circ\psi(\beta,\alpha)$ is homotopically trivial, the homotopy exact sequence of the fibration $\widetilde{E}\to X$ implies that $\psi(\beta,\alpha)$ admits a lift $\bar{\psi}(\beta,\alpha):S^{p-1}\to \Sigma^{q}((\Omega X)^{+})$ whose homotopy class does not depend on the choice of lift. We let $H(\beta,\alpha)\in \pi_{p-1}(\Sigma^{q}((\Omega X)^{+}))$ be equal to this homotopy class and we call it the relative Hopf invariant associated to the attaching maps $\alpha$ and $\beta$ (for a discussion of the relations between these Hopf invariants and other variants see Chapters 6 and 7 in [@CLOT]).
To return to Morse theory, recall from (\[eq:attaching\]) that passing through the two consecutive critical points $Q$ and $P$ leads to two successive attaching maps $\phi_{Q}:S^{|Q|-1}\to M(f(Q)-\epsilon)$ and $\phi_{P}:S^{|P|-1}\to M(f(P)-\epsilon)$ (we assume again - as we may - that the set $f^{-1}([f(Q),f(P)])$ does not contain any other critical points besides $P$ and $Q$). Moreover, as we know, the inclusion $M'=M(f(Q)-\epsilon)\cup W^{u}(Q)\hookrightarrow M(f(P)-\epsilon)$ is a homotopy equivalence. Therefore, the construction above can be applied to $\phi_{Q}$ and $\phi_{P}$ and it leads to a relative Hopf invariant $$H(P,Q)\in \pi_{|P|-1}(\Sigma ^{|Q|}(\Omega M^{+}))~.~$$ With these constructions our statement is: \[theo:bordism\][@Co1][@Co2] The homotopy class $H(P,Q)$ coincides (up to sign) with the bordism class $\{M^{P}_{Q}\}$. In particular, the homology class $[M^{P}_{Q}]$ equals (up to sign and desuspension) the Hurewicz image of $H(P,Q)$. The proof of this result can be found in [@Co2] (a variant proved by a slightly different method appears in [@Co1]). The proof is considerably more complicated than the proof of Theorem \[theo:fr\], so we will only present a rough justification here. To simplify notation we let $M_{0}=M(f(Q)-\epsilon)$. Let $M_{1}=M(f(Q)-\epsilon)\cup U(W^{u}(Q))$. Recall that the inclusions $M'\hookrightarrow M_{1} \hookrightarrow M(f(P)-\epsilon)$ are homotopy equivalences. Let $\mathcal{P}: \Omega M\to PM\to M$ be the path-loop fibration (whose total space is the space of [*based*]{} paths on $M$ and whose fibre is the space of based loops on $M$). We denote by $E_{0}$ the total space of the pull-back of the fibration $\mathcal{P}$ over the inclusion $M_{0}\subset M$. Similarly, we let $E_{1}$ be the total space of the pull-back of $\mathcal{P}$ over the inclusion $M_{1}\to M$. The key remark is that the attaching map $\phi_{P}: S^{u}(P)\to M_{1}$ admits a natural lift to a map $\widetilde{\phi_{P}}:S^{u}(P)\to E_{1}$.
Indeed, we assume here that all the critical points are identified to the base point. The space $E_{1}$ consists of the based paths in $M$ that end at points in $M_{1}$. But each element of the image of $\phi_{P}$ corresponds to precisely such a path which is explicitly given by the corresponding flow line (we need to use here Moore paths and loops, which are paths parametrized by arbitrary intervals $[0,a]$ and not only the interval $[0,1]$). Consider the inclusion $E_{0}\hookrightarrow E_{1}$. It is not difficult to see that the quotient topological space $E_{1}/E_{0}$ admits a canonical homotopy equivalence $\eta: E_{1}/E_{0}\to \Sigma^{|Q|}(\Omega M ^{+})$. Therefore, we may consider the composition $\eta'=\eta\circ\widetilde{\phi_{P}}$. It is possible to show that this map $\eta'$ is homotopic to $H(P,Q)$. At the same time, we see that the restriction of $\widetilde{\phi_{P}}$ to $M^{P}_{Q}$ coincides with $j^{P}_{Q}$. Moreover, by making $\eta$ explicit it is also possible to see that $\widetilde{\phi_{P}}$ is homotopic to the Thom-Pontryagin map associated to $M^{P}_{Q}$. ### Some topological applications {#subsubsec:top_appl} We now describe a couple of topological applications of Theorem \[theo:bordism\]. The idea behind both of them is quite simple: the function $-f$ is also a Morse function and the critical points $Q$, $P$ are consecutive critical points for $-f$. Therefore, the connecting manifold $M^{Q}_{P}$ is well defined, as well as its associated bordism homotopy class $\{M^{Q}_{P}\}$. Clearly, the underlying space for both $M^{P}_{Q}$ and $M^{Q}_{P}$ is the same. The map $j^{Q}_{P}$ differs from $j^{P}_{Q}$ only by reversing the direction of the loops. The two relevant framings may also be different. The relation between them is somewhat less straightforward but it still may be understood by considering $M$ embedded inside a high dimensional euclidean space and taking into account the twisting induced by the stable normal bundle.
In all cases, this establishes a relationship between the two Hopf invariants $H(P,Q)$ and $H(Q,P)$. A. The first application [@Co1] concerns the construction of examples of non-smoothable, simply-connected, Poincaré duality spaces. The idea is as follows: we construct Poincaré duality spaces which have a simple $CW$-decomposition and with the property that for a certain pair of successively attached cells $e,f$ the resulting Hopf invariant $H$ and the Hopf invariant $H'$ associated to the dual cells $f',e'$ are not related in the way described above. If the respective Poincaré duality space is smoothable, then the given cell decomposition can be viewed as associated to an appropriate Morse function and this leads to a contradiction. The obstructions to smoothability constructed in this way are obstructions to the lifting of the Spivak normal bundle to $BO$. This is an obstruction theory problem but one which can be very difficult to solve directly in the presence of many cells. Thus, this approach is quite efficient for constructing examples. B. The second application [@Co3] concerns the detection of obstructions to the embedding of $CW$-complexes in euclidean spaces in low codimension. The argument in this case goes roughly as follows. If the $CW$-complex $X$ embeds in $S^{n}$, then we may consider a neighbourhood $U(X)$ of $X$ which is a smooth manifold with boundary. We consider a smooth Morse function $f:U(X)\to {\mathbb{R}}$ which is constant, maximal and regular on the boundary of $U(X)$. If $P$ and $Q$ are two consecutive critical points for this function we obtain that $\Sigma^{k}H(P,Q)=\Sigma^{k'}H(Q,P)$ for certain values of $k$ and $k'$ which can be estimated explicitly - the main reason for this equality is that the Morse function in question is defined on the sphere so all the questions involving the stable normal bundle become irrelevant.
If $X$ admits some reasonably explicit cell-decomposition it is possible to express $H(P,Q)$ as the Hopf invariant $H$ of some successive attachment of two cells $e,f$ and $H(Q,P)$ as $\Sigma^{k''}H'$ where $H'$ is another similar Hopf invariant. The obstructions to embedding appear because the low codimension condition translates to the fact that $k'+k''> k$. This can be viewed as an obstruction because it means that after $k$ suspensions the homotopy class of $H$ has to de-suspend more than $k$ times. [The applications in A and B are purely homotopical in nature. It is natural to expect that the Morse theoretical arguments that were used to establish these statements can be replaced by purely homotopical ones, but this has not been done until now.]{} ### The Serre spectral sequence {#subsubsec:serre_ss} Theorem \[theo:bordism\] provides considerable information on connecting manifolds for pairs of [*consecutive*]{} critical points. However, it does not shed any light on the case of non-consecutive ones. Clearly, if the critical points are not consecutive the respective connecting manifold is not closed and thus no bordism or cobordism class can be directly associated to it. However, after compactification, the boundary of this connecting manifold has a special structure reflected by equation (\[eq:first\_bdry\]). As we shall see following [@BaCo], this structure is sufficient to construct an algebraic invariant which provides an efficient “measure" of all connecting manifolds. This construction is based on the fact that the maps $$j^{P}_{Q}:M^{P}_{Q}\to \Omega M$$ are compatible with compactification and with the formula (\[eq:first\_bdry\]) in the following sense. Recall that here $\Omega M$ denotes the based Moore loops on $M$ (these are loops parametrized by intervals $[0,a]$), the critical points of $f$ have been identified to a single point and, moreover, in the definition of $j^{P}_{Q}$ we use the parametrization of the flow lines by the values of $-f$.
Recall that we have a product given by the concatenation of loops $$\mu:\Omega M\times \Omega M\to \Omega M~.~$$ With these notations it is easy to see that we have the following formula: $$\label{eq:product} j^{P}_{Q}(u,v)=\mu(j^{P}_{R}(u),j^{R}_{Q}(v))$$ where $(u,v)\in \overline{M}^{P}_{R}\times \overline{M}^{R}_{Q}\subset\partial\overline{M}^{P}_{Q}$. We proceed with our construction. Let $C_{\ast}(X)$ be the (reduced) cubical complex of $X$ with coefficients in ${\mathbb{Z}}/2$. Notice that there is a natural map $$C_{k}(X)\otimes C_{k'}(Y)\to C_{k+k'}(X\times Y)~.~$$ A family of cubical chains $s^{x}_{y}\in C_{|x|-|y|-1}(\overline{M}^{x}_{y})$, $x,y\in Crit(f)$ is called a [*representing chain system*]{} for the moduli spaces $M^{x}_{y}$ if for each pair of critical points $x,z$ we have:

-   $$d s^{x}_{z}=\sum_{y} s^{x}_{y}\otimes s^{y}_{z}$$

-   $s^{x}_{z}$ represents the fundamental class in $H_{|x|-|z|-1}(M^{x}_{z},\partial M^{x}_{z})$.

It is easy to show by induction on the index difference $|x|-|z|$ that such representing chain systems exist. We now fix such a representing chain system $\{s^{x}_{y}\}$ and we define $a^{x}_{y}\in C_{|x|-|y|-1}(\Omega M)$ by $$a^{x}_{y}=(j^{x}_{y})_{\ast}(s^{x}_{y})~.~$$ Notice that this definition extends the definition of these coefficients in the usual Morse case when $|x|-|y|-1=0$. We have a product map $$\cdot:C_{k}(\Omega M)\otimes C_{k'}(\Omega M)\longrightarrow C_{k+k'}(\Omega M\times \Omega M)\stackrel{C_{\ast}(\mu)}{\longrightarrow}C_{k+k'}(\Omega M)$$ which makes $C_{\ast}(\Omega M)$ into a differential ring.
The discussion above shows that inside this ring we have the formula $$da^{x}_{z}=\sum_{y}a^{x}_{y}\cdot a^{y}_{z}~.~$$ An elegant way to rephrase this formula is to group these coefficients in a matrix $A=(a^{x}_{y})$ and then we have $$\label{eq:diff_matrix} dA=A^{2}~.~$$ We now define a new chain complex $\mathcal{C}(f)$ associated to $f$ by $$\label{eq:ext_cplx}\mathcal{C}(f)=(C_{\ast}(\Omega M)\otimes {\mathbb{Z}}/2<Crit(f)>\ ,d)\ \ , \ \ dx=\sum_{y}a^{x}_{y}\otimes y~.~$$ We shall call this complex [*the extended Morse complex of $f$*]{}. Here, $C_{\ast}(\Omega M)\otimes {\mathbb{Z}}/2<Crit(f)>$ is viewed as a graded $C_{\ast}(\Omega M)$-module and $d$ respects this structure in the sense that it verifies $d(a\otimes x)=(da)\otimes x + a\cdot (dx)$ (the grading on $Crit(f)$ is given, as before, by the Morse index). Choosing orientations on all the stable manifolds of all the critical points induces a co-orientation on all the unstable manifolds, and hence an orientation on the intersections $W^{u}(P)\cap W^{s}(Q)$ and finally on all the moduli spaces $M^{P}_{Q}$: we may then use ${\mathbb{Z}}$-coefficients for this complex as well as, of course, for the classical Morse complex. In this case appropriate signs appear in the formulae above. Clearly, $d^{2}=0$ due to (\[eq:diff\_matrix\]). By definition, the coefficients $a^{x}_{y}$ represent the moduli spaces $M^{x}_{y}$. However, these coefficients are not invariant with respect to the choices made in their construction. Therefore, it is remarkable that there is a natural construction which extracts from this complex a useful algebraic invariant which is not just the homology of the complex - as it happens, this homology is not too interesting as it coincides with that of a point.
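For the reader's convenience we spell out (a verification added here, implicit in [@BaCo]) why $d^{2}=0$ follows from (\[eq:diff\_matrix\]) together with the Leibniz rule over ${\mathbb{Z}}/2$-coefficients:

```latex
% d^2 = 0 on the extended complex, from dA = A^2 and the Leibniz rule
% d(a \otimes y) = (da) \otimes y + a \cdot dy :
\begin{aligned}
d^{2}x &= \sum_{y}(da^{x}_{y})\otimes y \;+\; \sum_{y}a^{x}_{y}\cdot dy\\
       &= \sum_{y}\sum_{z}\big(a^{x}_{z}\cdot a^{z}_{y}\big)\otimes y
        \;+\; \sum_{y}\sum_{z}\big(a^{x}_{y}\cdot a^{y}_{z}\big)\otimes z\\
       &= 0 \pmod{2}.
\end{aligned}
```

Indeed, relabelling $y\leftrightarrow z$ in the second double sum shows that the two double sums are identical, so they cancel mod $2$ (over ${\mathbb{Z}}$ the appropriate signs produce the same cancellation).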
Consider the obvious differential filtration which is defined on this complex by $$F^{k}\mathcal{C}(f)=C_{\ast}(\Omega M)\otimes {\mathbb{Z}}/2< x\in Crit(f) :\ ind_{f}(x)\leq k>~.~$$ Denote the associated spectral sequence by $$E(f)=(E^{r}_{p,q}(f),d^{r})~.~$$ \[theo:serre\_ss\][@BaCo] When $M$ is simply connected, for $r\geq 2$ the spectral sequence $E(f)$ coincides with the Serre spectral sequence of the path-loop fibration $$\mathcal{P}:\Omega M\to PM\to M~.~$$ \[rem:serre\_ss\] a\. A similar result can be established even in the absence of the simple-connectivity condition, which has been assumed here to avoid some technical complications. b\. The Serre spectral sequence of the path-loop fibration of a space $X$ contains considerable information on the homotopy type of the space. In particular, there are spaces with the same cohomology and cup-product but which may be distinguished by their respective Serre spectral sequences. To outline the proof of the theorem we start by recalling the construction of the Serre spectral sequence in the form which will be of use here. We shall assume here that the Morse function $f$ is self-indexed (in the sense that for each critical point $x$ we have $ind_{f}(x)=k \Rightarrow f(x)=k$) and that it has a single minimum denoted by $m$. Let $M_{k}=f^{-1}((-\infty, k+\epsilon])$. We have $$M_{k}=M_{k-1}\bigcup_{\phi_{y}} D^{k}_{y}$$ where the union is taken over all the critical points $y\in Crit_{k}(f)$ and $\phi_{y}:S^{u}(y)\to M_{k-1}$ are the respective attaching maps. Denote by $E_{k}$ the total space of the fibration induced by pull-back over the inclusion $M_{k}\hookrightarrow M$ from the fibration $\mathcal{P}$. Consider the filtration of $C_{\ast}(PM)$ given by $F^{k}P=Im(C_{\ast}(E_{k})\longrightarrow C_{\ast}(PM))$.
The spectral sequence induced by this filtration is invariant after the second page and is precisely the Serre spectral sequence (this spectral sequence may be constructed as above but by using an arbitrary skeletal filtration $\{X_{k}\}$ of a space $X$ which has the same homotopy type as that of $M$; in our case the particular filtration given by the sets $M_{k}$ is a natural choice). For further use, we also notice that there is an obvious action of $\Omega M$ on $PM$ and this action induces one on each $E_{k}$. Therefore, we may view $C_{\ast}(E_{k})$ as a $C_{\ast}(\Omega M)$-module. The first step in proving the theorem is to consider a certain compactification of the unstable manifolds of the critical points of $f$. Recall that $f$ is self-indexed and that $m$ is the unique minimum critical point of $f$. Fix $x\in Crit(f)$ and define the following equivalence relation on the set $\overline{M}^{x}_{m}\times [0,f(x)]$: $$(a,t)\sim (a',t') \ {\rm iff}\ t=t' \ {\rm and}\ a(-\tau)=a'(-\tau)\ \forall \tau\geq t~.~$$ Here the elements of $\overline{M}^{x}_{m}$ are viewed as paths in $M$ parametrized by the value of $-f$ (so that $f(a(-\tau))=\tau$). Denote by $\widehat{W}(x)$ the resulting quotient topological space. Notice that, if $y\in \overline{W^{u}(x)}$, then there exists some $a\in \overline{M}^{x}_{m}$ so that $y$ is on the (possibly broken) flow line represented by $a$. Or, in other words, so that $a(-f(y))=y$. This path $a$ might not be unique. Indeed, inside $\widehat{W}(x)$ there is precisely one equivalence class $[a,f(y)]$ (with $a(-f(y))=y$) for each (possibly) broken flow line joining $x$ to $y$. Clearly, if $y\in W^{u}(x)$, then there is just one such flow line and so the natural surjection $$\pi: \widehat{W}(x)\longrightarrow \overline{W^{u}(x)},\ \pi([a,t])=a(-t)$$ is a homeomorphism when restricted to $\pi^{-1}(W^{u}(x))$. 
Thus we may view $\widehat{W}(x)$ as a special compactification of $W^{u}(x)$ or as a desingularization of $\overline{W^{u}(x)}$. It is not difficult to believe (but harder to show, and it will not be proven here; see [@BaCo]) that $\widehat{W}(x)$ is a topological manifold with boundary and, moreover, $$\label{eq:bdry_cpct} \partial \widehat{W}(x)=\bigcup_{y}M^{x}_{y}\times \widehat{W}(y)~.~$$ We continue the proof of Theorem \[theo:serre\_ss\] with the remark that there are obvious maps $$h_{x}: \widehat{W}(x)\to PM$$ which associate to $[a,t]$ the path in $M$ which follows $a$ from $x$ to $a(-t)$. These maps and the maps $j^{x}_{y}$ are compatible with formula (\[eq:bdry\_cpct\]) in the sense that $$\label{eq:compat} h_{x}(a',[a'',t])=j^{x}_{y}(a')\cdot h_{y}([a'',t])$$ where $(a',[a'',t])\in M^{x}_{y}\times \widehat{W}(y)$ and $\cdot$ represents the action of $\Omega M$ on $PM$. Now, we may obviously rewrite (\[eq:bdry\_cpct\]) as: $$\partial \widehat{W}(x)=\bigcup_{y} \overline{M}^{x}_{y}\times \widehat{W}(y)~.~$$ Given the representing chain system $\{s^{x}_{y}\}$ it is easy to construct an associated representing chain system for $\widehat{W}(x)$. This is a system of chains $v(x)\in C_{|x|}(\widehat{W}(x))$ so that $v(x)$ represents the fundamental class of $C_{|x|}(\widehat{W}(x),\partial\widehat{W}(x))$ and we have the formula $$dv(x)=\sum_{y}s^{x}_{y}\otimes v(y)~.~$$ Finally, we define a $C_{\ast}(\Omega M)$-module chain map $$\alpha : \mathcal{C}(f)\to C_{\ast}(PM)$$ by $$\alpha(x)=(h_{x})_{\ast}(v(x))~.~$$ The formulas above show that we have $$d[(h_{x})_{\ast}(v(x))]=\sum_{y}a^{x}_{y}\cdot (h_{y})_{\ast}(v(y))$$ and so $\alpha$ is a chain map. It is clear that the map $\alpha$ is filtration preserving and it is not difficult to see that it induces an isomorphism at the $E^{2}$ level of the induced spectral sequences and this concludes the proof of Theorem \[theo:serre\_ss\]. a\. 
Another important but immediate property of $\widehat{W}(x)$ is that it is a contractible space. Indeed, all the points in $\overline{M}^{x}_{m}\times \{f(x)\}$ are in the same equivalence class. Moreover, each point $[a,t]\in\widehat{W}(x)$ has the property that it is related by the path $[a,\tau], \ \tau\in [t,f(x)]$ to $\ast=[a,f(x)]$. The contraction of $\widehat{W}(x)$ to $\ast$ is obtained by deforming $\widehat{W}(x)$ along these paths. Given that $\widehat{W}(x)$ is a contractible topological manifold with boundary, it is natural to suspect that $\widehat{W}(x)$ is homeomorphic to a disk. This is indeed the case, as is shown in [@BaCo], and is an interesting fact in itself because it implies that the union of the unstable manifolds of a self-indexed Morse-Smale function gives a $CW$-decomposition of $M$. The attaching map of the cell $\widehat{W}(x)$ is simply the restriction of $\pi$ to $\partial \widehat{W}(x)$. b\. The Serre spectral sequence result above and the bordism result in Theorem \[theo:bordism\] are obviously related via the central role of the maps $j^{P}_{Q}$. There is also a more explicit relation. Indeed, (a stable version of) the Hopf invariants appearing in Theorem \[theo:bordism\] can be interpreted as differentials in the Atiyah-Hirzebruch-Serre spectral sequence of the path-loop fibration with coefficients in the stable homotopy of $\Omega M$. Moreover, the relation (\[eq:first\_bdry\]) can be understood as also keeping track of the framings. This leads to a type of extended Morse complex in which the coefficients of the differential are stable Hopf invariants [@Co1]. All of this strongly suggests that the construction of the complex $\mathcal{C}(f)$ can be enriched so as to include the framings of the connecting manifolds and, by the same method as above, that the whole Atiyah-Hirzebruch-Serre spectral sequence should be recovered from this construction. c\. 
Another interesting question, open even for consecutive critical points $P$, $Q$, is whether there are some additional constraints on the topology of the connecting manifolds $M^{P}_{Q}$ besides those imposed by Theorem \[theo:bordism\]. d\. Yet another open question is how this machinery can be adapted to the Morse-Bott situation and how it can be extended to general Morse-Smale flows (not only gradient-like ones). e\. It is natural to wonder what is the richest level of information that one can extract out of the moduli spaces of Morse flow lines. At a naive level, the union of all the points situated on the flow lines of $f$ is precisely the whole underlying manifold $M$ so we expect that there should exist some assembly process producing the manifold $M$ out of these moduli spaces. Such a machine has been constructed by Cohen, Jones and Segal [@CoJoSe1][@CoJoSe2]. They show that one can form a category out of the moduli spaces of connecting trajectories and that the classifying space of this category is of the homeomorphism type of the underlying manifold. In their construction an essential point is that the glueing of flow lines is associative. This approach is quite different from the techniques above and does not imply the results concerning the extended Morse complex or the Hopf invariants that we have presented. The two points of view are, essentially, complementary. To end this section it is useful to make explicit a relation between Theorems \[theo:serre\_ss\] and \[theo:bordism\] (we assume as above that $M$ is simply-connected). 
\[prop:hlgy\] Assume that there are $q,p\in {\mathbb{N}}$ so that in the Serre spectral sequence of the path-loop fibration of $M$ we have $E^{2}_{k,s}=0$ for $q<k<p$ and there is an element $a\in E^{2}_{p,0}$ so that $d^{p-q}a\not=0$. Then any Morse-Smale function on $M$ has a pair of consecutive critical points $P$, $Q$ of indices at least $q$ and at most $p$ so that the homology class $[M^{P}_{Q}]\in H_{|P|-|Q|-1}(\Omega M)$ is non-zero. Clearly, Theorem \[theo:serre\_ss\] directly implies that, even without any restriction on $E^{2}$, if we have $d^{r}a\not=0$ with $a\in E^{r}_{p,0}$, then for any Morse-Smale function $f$ there are critical points $P$ and $Q$ with $|P|=p$ and $|Q|=p-r$ so that $M^{P}_{Q}\not=\emptyset$. Indeed, if this were not the case, then all the coefficients $a^{x}_{y}$ in the extended Morse complex of $f$ are null whenever $|x|=p$, $|y|=p-r$. By the construction of the associated spectral sequence, this leads to a contradiction. However, the pair $P$, $Q$ resulting from this argument might have a connecting manifold which is not closed so that its homology class is not even defined and, thus, Proposition \[prop:hlgy\] provides a stronger conclusion. The proof of the Proposition is as follows. Recall that $E^{2}_{s,r}\approx H_{s}(M)\otimes H_{r}(\Omega M)$ and so $H_{k}(M)=0$ for $q<k<p$. If there are some points $P, Q\in Crit(f)$ with $q\leq |Q|,|P|\leq p$ so that the differential of $P$ in the classical Morse complex contains $Q$ with a non-trivial coefficient, then this pair $P$, $Q$ may be taken as the one we are looking for. If all such differentials in the classical Morse complex are trivial, it follows that the critical points of index $p$ and $q$ are consecutive. 
In this case, the geometric arguments used in the proofs of either Theorem \[theo:bordism\] or \[theo:serre\_ss\] imply that if for all pairs $P$, $Q$ with $|P|=p$, $|Q|=q$ we would have $[M^{P}_{Q}]=0$, then the differential $d^{p-q}$ would vanish on $E^{p-q}_{p,0}$. \[rem:consec\] [Notice that the pair of critical points $P$ and $Q$ constructed in the proposition verify the property that $|P|$ and $|Q|$ are consecutive inside the set $\{ ind_{f}(x): x\in Crit(f)\}$.]{} Operations ---------- We discuss here a different and, probably, more familiar approach to understanding connecting manifolds as well as other related Morse theoretic moduli spaces. This point of view has been used extensively by many authors - Fukaya [@Fu], Betz and Cohen [@BeCo] being just a few of them. For this reason we shall review this technique very briefly. Given two consecutive critical points $x$, $y$ notice that the set $T^{x}_{y}=W^{u}(x)\cap W^{s}(y)$ is homeomorphic to the un-reduced suspension of $M^{x}_{y}$. Therefore, we may see this as an obvious inclusion $$i^{x}_{y}:\Sigma M^{x}_{y}\to M$$ and we may consider the homology class $[T^{x}_{y}]=(i^{x}_{y})_{\ast}(s[M^{x}_{y}])$ where $s$ is suspension and $[M^{x}_{y}]$ is the fundamental class. There exists an obvious evaluation map $$e:\Sigma \Omega M\to M$$ which is induced by $\Omega M\times [0,1]\to M,\ (\beta,t)\to \beta(t)$ (the loops here are parametrized by the interval $[0,1]$ but this is a minor technical difficulty). It is easy to see, by the definition of this evaluation map, that $[T^{x}_{y}]=e_{\ast}((j^{x}_{y})_{\ast}([M^{x}_{y}]))$. In general the map $e_{\ast}$ is not injective in homology. Clearly, the full bordism class $\{M^{x}_{y}\}$ carries much more information than the homology class $[T^{x}_{y}]$. Still, there is a direct way to determine $[T^{x}_{y}]$ without passing through a calculation of $\{M^{x}_{y}\}$ and we will now describe it. 
Consider a second Morse-Smale function $g:M\to {\mathbb{R}}$ so that its associated unstable and stable manifolds $W^{u}_{g}(-)$, $W^{s}_{g}(-)$ intersect transversally the stable and unstable manifolds of $f$ and, except if they are of top dimension, they avoid the critical points of $f$. Fix $x,y\in Crit(f)$ and $s\in Crit(g)$ so that $|x|-|y|-ind_{g}(s)=0$. We may define $k(x,y;s)=\#(T^{x}_{y}\cap W^{s}_{g}(s))$ (where the counting takes into account the relevant orientations if we work over ${\mathbb{Z}}$). We now put $$\bar{k}^{x}_{y}=\sum_{s} k(x,y;s)s\in C(g)~.~$$ The essentially obvious claim is that: \[prop:hlgy\_class\] The chain $\bar{k}^{x}_{y}$ is a cycle whose homology class is $[T^{x}_{y}]$. Indeed, we have $\sum_{s}k(x,y;s)h^{s}_{z}=0$ where $ind_{g}(z)=ind_{g}(s)-1$ and $h^{s}_{z}$ are the coefficients in the classical Morse complex of $g$. This equality is valid because we may consider the $1$-dimensional space $T^{x}_{y}\cap W^{s}_{g}(z)$. This is an open $1$-dimensional manifold whose compactification is a compact $1$-manifold whose boundary points are counted precisely by the sum $\sum_{s}k(x,y;s)h^{s}_{z}$; as a compact $1$-manifold has an even number of boundary points, the formula follows. By basic intersection theory it is immediate to see that the homology class represented by this cycle is $[T^{x}_{y}]$. While this construction does not shed a lot of light on the properties of $M^{x}_{y}$, its role is important once we use it to recover the various homological operations of $M$. 
To see how this is done from our perspective, notice that the intersection $$T^{x}_{y}\cap W^{s}_{g}(s)=W^{u}_{f}(x)\cap W^{s}_{f}(y)\cap W^{s}_{g}(s)$$ can be viewed as a particular case of the following situation: assume that $f_{1}$, $f_{2}$, $f_{3}$ are three Morse-Smale functions in general position and define $$T^{x,y}_{z}=(W^{u}_{f_{1}}(x)\cap W^{u}_{f_{2}}(y)\cap W^{s}_{f_{3}}(z))~.~$$ If we assume that $|x|+|y|-|z|-n=0$, we may again count the points in $T^{x,y}_{z}$ with appropriate signs and we may define coefficients $t^{x,y}_{z}=\#T^{x,y}_{z}$. This leads to an operation [@BeCo][@Fu] $$C(f_{1})\otimes C(f_{2})\to C(f_{3})$$ given as a linear extension of $$x\otimes y\to \sum_{z} t^{x,y}_{z}z~.~$$ It is easy to see that this operation descends to homology and that it is in fact the dual of the $\cup$-product. Moreover, the space $T^{x,y}_{z}$ may be viewed as obtained by considering a graph formed by three oriented edges meeting at a point, with the first two entering the point and the third one exiting it, and considering all the configurations obtained by mapping this graph into $M$ so that the first edge is sent to a flow line of $f_{1}$ which exits $x$, the second edge to a flow line of $f_{2}$ which exits $y$, and the third to a flow line of $f_{3}$ which enters $z$. Clearly, this idea may be pushed further by considering other, more complicated graphs and understanding which operations they correspond to, as was done by Betz and Cohen [@BeCo]. Applications to Symplectic topology =================================== We start with some applications that are rather “soft” even if difficult to prove, and we shall continue in the main part of the section, §\[subsec:strips\], with some others that go deeper. Bounded orbits -------------- We fix a symplectic manifold $(M,\omega)$ which is not compact. Assume that $H:M\to {\mathbb{R}}$ is a smooth hamiltonian whose associated hamiltonian vector field is denoted by $X_{H}$. 
One of the main questions in hamiltonian dynamics is whether a given regular hypersurface $A=H^{-1}(a)$ of $H$ has any closed characteristics, or equivalently, whether the hamiltonian flow of $H$ has any periodic orbits in $A$. As $M$ is not compact, from the point of view of dynamical systems, the first natural question is whether $X_{H}$ has any [*bounded*]{} orbits in $A$. Moreover, there is a remarkable result of Pugh and Robinson [@PuRo], the $C^{1}$-closing lemma, which shows that, for a generic choice of $H$, the presence of bounded orbits ensures the existence of some periodic orbits. Therefore, we shall focus in this subsection on the detection of bounded orbits. It should be noted, however, that the detection of periodic orbits in this way is not very effective because the periods of the orbits found cannot be estimated. Moreover, there is no reasonable test to decide whether a given hamiltonian belongs to the generic family to which the $C^{1}$-closing lemma applies. Finally, it will be clear from the methods of proof described below that these results are also soft in the sense that they are not truly specific to Hamiltonian flows but rather apply to many other flows. An example of a bounded orbit result is the following statement [@Co2]. \[theo:bounded\] Assume that $H$ is a Morse-Smale function with respect to a riemannian metric $g$ on $M$ so that $M$ is metrically complete and there exist an $\epsilon>0$ and a compact set $K\subset M$ so that $||\nabla_{g} H(x)||\geq \epsilon$ for $x\not\in K$. Suppose that $P$ and $Q$ are two critical points of $H$ so that $|P|$ and $|Q|$ are successive in the set $\{ind_{H}(x) : x\in Crit(H)\}$. If the stabilization $[H(P,Q)]\in \pi^{S}_{|P|-|Q|-1}(\Omega M)$ of the Hopf invariant $H(P,Q)$ is not trivial, then there are regular values $v\in (H(Q),H(P))$ so that $H^{-1}(v)$ contains bounded orbits of $X_{H}$. Before describing the proof of this result let us notice that the theorem is not difficult to apply. 
Indeed, one simple way to verify that there are pairs $P$, $Q$ as required is to use Proposition \[prop:hlgy\] together with Remark \[rem:consec\] with a minor adaptation required in a non-compact setting. This adaptation consists of replacing the Serre spectral sequence of the path-loop fibration with the Serre spectral sequence of a relative fibration $\Omega M\to (E_{1},E_{0})\to (N_{1},N_{0})$ where $N_{1}$ is an isolating neighbourhood for the gradient flow of $H$ which contains $K$ and $N_{0}$ is a (regular) exit set for this neighbourhood (for the precise definitions of these Conley index theoretic notions see [@Sal1]). The fibration is induced by pull-back from the path-loop fibration $\Omega M\to PM\to M$ over the inclusion $(N_{1},N_{0})\hookrightarrow (M,M)$. In short, because the gradient of $H$ is away from $0$ outside of a compact set, pairs $(N_{1},N_{0})$ as above are easy to produce and if the pair $(N_{1},N_{0})$ has some interesting topology it is easy to deduce the existence of non-constant bounded orbits. Here is a concrete example. \[cor:bounded\_orb\] Assume that $M$ is the cotangent bundle of some closed, simply-connected manifold $N$ of dimension $k\geq 2$ and $\omega$ is an arbitrary symplectic form. Assume that $H:M\to {\mathbb{R}}$ is Morse and that outside of some compact set containing the $0$ section, the restriction of $H$ to each fibre of the bundle is a non-degenerate quadratic form. Then, $X_{H}$ has bounded, non-constant orbits. Of course, this result is only interesting when there are no compact level hypersurfaces of $H$. This does happen if the quadratic form in question has an index which is neither $0$ nor $k$. The proof of this result comes down to the fact that, as $N$ is closed and not a point, there exists a lowest-dimensional homology class $u\in H_{t}(N)$ which is transgressive in the Serre spectral sequence (this means $d^{t}u\not=0$). 
Using the fact that $H$ is quadratic at infinity, it is easy to construct a pair $(N_{1},N_{0})$ where $N_{1}$ is homotopy equivalent to a disk bundle over $N$ and $N_{0}$ is the associated sphere bundle. The spectral sequence associated to this pair can be related by the Thom isomorphism to the Serre spectral sequence of the path-loop fibration over $N$, and the element $\bar{u}\in H_{\ast}(N_{1},N_{0})$ which corresponds to $u$ by the Thom isomorphism will have a non-vanishing differential. This means that Proposition \[prop:hlgy\] may be used to show the non-triviality of a homology class $[M^{P}_{Q}]$ for $P$ and $Q$ as in Theorem \[theo:bounded\]. By Theorem \[theo:bordism\], $[M^{P}_{Q}]$ is the same up to sign as the homology class of the Hopf invariant $H(P,Q)$, so Theorem \[theo:bounded\] is applicable to detect bounded orbits. We now describe the proof of the theorem. The basic idea of the proof is simpler to present in the particular case when $H^{-1}(H(Q),H(P))$ does not contain any critical point. In this case, let $A=H^{-1}(a)$ where $a\in (H(Q),H(P))$. We intend to show that $A$ contains some bounded orbits of $X_{H}$. To do this, notice that the two sets $S_{1}=W^{u}(P)\cap A$, $S_{2}=W^{s}(Q)\cap A$ are both diffeomorphic to spheres. We now assume that no bounded orbits exist and we consider a compact neighbourhood $U$ of $S_{1}\cup S_{2}$. Assume that we let $S_{2}$ move along the flow of $X_{H}$. As this flow has no bounded orbits, each point of $S_{2}$ will leave $U$ at some moment. Suppose that we are able to perturb the flow induced by $X_{H}$ to a new deformation $\eta : M\times {\mathbb{R}}\to M$ so that for some finite time $T$ [*all*]{} the points in $S_{2}$ are taken simultaneously outside $U$ (in other words $\eta_{T}(S_{2})\cap U=\emptyset$) and so that $\eta$ leaves $Q$ fixed. It is easy to see that this implies that $S_{1}\cap S_{2}$ is bordant to the empty set which, by Theorem \[theo:bordism\], is impossible because $H(P,Q)\not=0$. 
This perturbation $\eta$ is in fact not hard to construct by using some elements of Conley’s index theory and the fact that the maximal invariant set of $X_{H}$ inside $U$ is the empty set (the main step here is to possibly also modify $U$ so that it admits a regular exit region $U_{0}\subset U$; we then construct $\eta$ so that it follows the flow lines of $X_{H}$ but stops when reaching $U_{0}$, and this eliminates the problem of “bouncing” points which first exit $U$ but later re-enter it). The case when there are critical points in $H^{-1}(H(Q),H(P))$ follows the same idea but is considerably more difficult. The main difference comes from the fact that the sets $S_{1}$ and $S_{2}$ might not be closed manifolds. Even their closures $\bar{S_{1}}$ and $\bar{S_{2}}$ are not closed manifolds in general but might be singular sets. To be able to proceed in this case we first replace $P$ and $Q$ with a pair of critical points of the same index so that for any critical point $Q'\in H^{-1}(H(Q),H(P))$ with $ind_{H}(Q')=ind_{H}(Q)$ we have $[H(P,Q')]=0$. We then take $a$ very close to $H(Q)$ so that at least $S_{2}$ is diffeomorphic to a sphere. We first study the stratification of $\bar{S_{1}}$: there is a top stratum of dimension $|P|-1$ which is $S_{1}$ and a singular stratum $S'$ of dimension $|Q|-1$ which is the union of the sets $W^{u}(Q')\cap A$ for all $Q'$ so that $M^{P}_{Q'}\not=\emptyset$ and $|Q'|=|Q|$. Notice that the way to construct the null-bordism of $S_{1}\cap S_{2}$ is to consider in $A\times [0,T]$ the submanifold $W=\{(\eta_{t}(z),t) : z\in \bar{S_{2}},\ t\in [0,T]\}$ and intersect it with $W'=\bar{S_{1}}\times [0,T]$ - we assume here $\eta_{T}(S_{2})\cap S_{1}=\emptyset$. Clearly, we need this intersection to be transverse and this can be easily achieved by a perturbation of $\eta$. The main technical difficulty is that $W$ might intersect the singular part, $S'\times [0,T]$. 
Indeed, $dim(W)=n-q$, $dim(S')=q-1$, $dim(A)=n-1$ and so generically the intersection $I$ between $W$ and $S'\times [0,T]$ is $0$-dimensional and not necessarily void. By studying the geometry around each of the points of $I$ it can be seen that $S_{1}\cap S_{2}$ is bordant to the union of the $M^{P}_{Q'}$’s where $Q'\in H^{-1}(H(Q),H(P))$ (roughly, this follows by eliminating from the singular bordism $W\cap W'$ a small closed, cone-like neighbourhood around each singular point and showing that the boundary of this cone-like neighbourhood is homeomorphic to a $M^{P}_{Q'}$). We now use the fact that all the stable bordism classes of the $M^{P}_{Q'}$’s vanish (because $[H(P,Q')]=0$) and this leads us to a contradiction. Notice also that, at this point, we need to use stable Hopf invariants (or bordism classes) $\in \pi^{S}(\Omega M)$ because, by contrast to the stable case, the unstable Thom-Pontryagin map associated to a disjoint union is not necessarily equal to the sum of the Thom-Pontryagin maps of the terms in the union and hence, unstably, even if we know $H(P,Q')=0, \forall Q'$ we still can not deduce $H(P,Q)=0$. [It would be interesting to see whether, under some additional assumptions, a condition of the type $[H(P,Q)]\not=0$ implies the existence of periodic orbits and not only bounded ones.]{} Detection of pseudoholomorphic strips and Hofer’s norm {#subsec:strips} ------------------------------------------------------ In this subsection we shall again use the Morse theoretic techniques described in §\[sec:Morse\] and, in particular, Theorem \[theo:serre\_ss\] to study some symplectic phenomena by showing that Floer’s complex can be enriched in a way similar to the passage from the classical Morse complex to the extended one. ### Elements of Floer’s theory. We start by recalling very briefly some elements from Floer’s construction (for a more complete exposition see, for example, [@Sal2]). 
We shall assume from now on that $(M,\omega)$ is a symplectic manifold - possibly non-compact but in that case convex at infinity - of dimension $m=2n$. We also assume that $L,L'$ are closed (no boundary, compact) Lagrangian submanifolds of $M$ which intersect transversally. To start the description of our applications, it is simplest to assume for now that $L,L'$ are simply-connected and that $\omega|_{\pi_{2}(M)}=c_{1}|_{\pi_{2}(M)}=0$. Cotangent bundles of simply-connected manifolds offer immediate examples of manifolds verifying these conditions. We fix a path $\eta\in \mathcal{P}(L,L')=\{\gamma\in C^{\infty}([0,1], M) : \gamma(0)\in L$, $\gamma(1)\in L'\}$ and let $\mathcal{P}_{\eta}(L,L')$ be the path-component of $\mathcal{P}(L,L')$ containing $\eta$. This path will be homotopically trivial in most cases, in particular, if $L$ is hamiltonian isotopic to $L'$. We also fix an almost complex structure $J$ on $M$ compatible with $\omega$ in the sense that the bilinear form $X,Y\to \omega(X,JY)=\alpha (X,Y)$ is a Riemannian metric. The set of all the almost complex structures on $M$ compatible with $\omega$ will be denoted by $\mathcal{J}_{\omega}$. Moreover, we also consider a smooth $1$-periodic Hamiltonian $H:[0,1]\times M\to {\mathbb{R}}$ which is constant outside a compact set and its associated $1$-periodic family of hamiltonian vector fields $X_{H}$ determined by the equation $$\omega(X^{t}_{H},Y)=-dH_{t}(Y) \ , \ \forall Y~.~$$ We denote by $\phi_{t}^{H}$ the associated Hamiltonian isotopy. We also assume that $\phi^{H}_{1}(L)$ intersects $L'$ transversally. In our setting, the action functional below is well-defined: $$\label{eq:action} \mathcal{A}_{L,L',H}:\mathcal{P}_{\eta}(L,L')\to {\mathbb{R}}\ , \ x\to -\int \overline{x}^{\ast}\omega +\int_{0}^{1}H(t,x(t))dt$$ where $\overline{x}(s,t):[0,1]\times [0,1]\to M$ is such that $\overline{x}(0,t)=\eta(t)$, $\overline{x}(1,t)=x(t)$, $\forall t\in [0,1]$, $\overline{x}([0,1],0)\subset L$, $\overline{x}([0,1],1)\subset L'$. 
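The critical points of this functional can be identified by the standard first-variation computation (sketched here; the overall sign depends on the conventions fixed above): for a vector field $\xi$ along $x$ with $\xi(0)\in T_{x(0)}L$ and $\xi(1)\in T_{x(1)}L'$, $$d\mathcal{A}_{L,L',H}(x)(\xi)=\int_{0}^{1}\omega\big(\dot{x}(t)-X^{t}_{H}(x(t)),\xi(t)\big)\,dt~,~$$ the boundary terms vanishing because $L$ and $L'$ are Lagrangian. Since $\omega$ is non-degenerate, the vanishing of this expression for all such $\xi$ forces $\dot{x}(t)=X^{t}_{H}(x(t))$. 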
The critical points of $\mathcal{A}$ are the orbits of $X_{H}$ that start on $L$, end on $L'$ and which belong to $\mathcal{P}_{\eta}(L,L')$. These orbits are in bijection with a subset of $\phi_{1}^{H}(L)\cap L'$, so they are finite in number. If $H$ is constant, these orbits coincide with the intersection points of $L$ and $L'$ which are in the class of $\eta$. We denote the set of these orbits by $I(L,L'; \eta, H)$ or, for short, $I(L,L')$ when $\eta$ and $H$ are clear from the context. We now consider the solutions $u$ of Floer’s equation: $$\label{eq:floer} \frac{\partial u}{\partial s}+J(u)\frac{\partial u}{\partial t}+\nabla H(t,u)=0$$ with $$u(s,t):{\mathbb{R}}\times [0,1]\to M \ , u({\mathbb{R}},0) \subset L \ , \ u({\mathbb{R}},1)\subset L' ~.~$$ When $H$ is constant, these solutions are called *pseudo-holomorphic strips*. For any strip $u\in\mathcal{S}(L,L')=\{u\in C^{\infty}({\mathbb{R}}\times [0,1],M) : u({\mathbb{R}},0)\subset L \ , \ u({\mathbb{R}},1)\subset L' \}$ consider the energy $$\label{eq:energy} E_{L,L',H}(u)=\frac{1}{2}\int_{{\mathbb{R}}\times [0,1]} ||\frac{\partial u}{\partial s}||^{2}+ ||\frac{\partial u}{\partial t}-X^{t}_{H}(u)||^{2}\ ds\ dt ~.~$$ For a generic choice of $J$, the solutions $u$ of (\[eq:floer\]) which [*are of finite energy*]{}, $E_{L,L',H}(u)<\infty$, behave like negative gradient flow lines of $\mathcal{A}$. In particular, $\mathcal{A}$ decreases along such solutions. We consider the moduli space $$\label{eq:param_moduli} \mathcal{M}'=\{u\in\mathcal{S}(L,L') : u \ {\rm verifies (\ref{eq:floer}) } \ , \ E_{L,L',H}(u)<\infty \}~.~$$ The translation $u(s,t)\to u(s+k,t)$ obviously induces an ${\mathbb{R}}$-action on $\mathcal{M}'$ and we let $\mathcal{M}$ be the quotient space. 
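The assertion that finite-energy solutions behave like negative gradient lines of $\mathcal{A}$ is quantified by the standard energy identity (a sketch, with the sign conventions above; it follows by substituting (\[eq:floer\]) into the integrand of (\[eq:energy\])): if $u$ is such a solution with uniform limits $x(t)$ as $s\to-\infty$ and $y(t)$ as $s\to+\infty$, then $$E_{L,L',H}(u)=\mathcal{A}_{L,L',H}(x)-\mathcal{A}_{L,L',H}(y)~.~$$ In particular, $E_{L,L',H}(u)\geq 0$ with equality only for stationary solutions, so $\mathcal{A}$ strictly decreases from $x$ to $y$ along non-constant solutions. 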
For each $u\in \mathcal{M}'$ there exist $x,y\in I(L,L';\eta, H)$ such that the (uniform) limits verify $$\label{eq:ends} \lim_{s\to-\infty}u(s,t)=x(t) \ , \ \lim_{s\to+\infty}u(s,t)=y(t)~.~$$ We let $\mathcal{M}'(x,y)=\{u\in \mathcal{M}' : u \ {\rm verifies} \ (\ref{eq:ends})\}$ and $\mathcal{M}(x,y)=\mathcal{M}'(x,y)/{\mathbb{R}}$ so that $\mathcal{M}=\bigcup_{x,y} \mathcal{M}(x,y)$. If needed, to indicate the pair of Lagrangians, the Hamiltonian and the almost complex structure to which these moduli spaces are associated, we shall add $L$ and $L'$, $H$, $J$ as subscripts (for example, we may write $\mathcal{M}_{L,L',H,J}(x,y)$). For $x,y\in I(L,L';\eta,H)$ we let $$\begin{matrix}\mathcal{S}(x,y)=\{ u\in C^{\infty}([0,1]\times [0,1], M) \ :& u([0,1],0)\subset L, u([0,1],1)\subset L', \\ \hspace{55pt} u(0,t)=x(t),\ u(1,t)=y(t)\}~.~ & \end{matrix}$$ To each $u\in \mathcal{S}(x,y)$ we may associate its Maslov index $\mu(u)\in{\mathbb{Z}}$ [@Viterbo] and it can be seen that, in our setting, this number only depends on the points $x,y$. Thus, we let $\mu(x,y)=\mu(u)$. Moreover, we have the formula $$\label{eq:transit} \mu(x,z)=\mu(x,y)+\mu(y,z)~.~$$ According to these relations, the choice of an arbitrary intersection point $x_{0}$ and the normalization $|x_{0}|=0$ defines a grading $|.|$ such that: $$\mu(x,y)=|x|-|y|.$$ There is a notion of regularity for the pairs $(H,J)$ so that, when regularity is assumed, the spaces $\mathcal{M}'(x,y)$ are smooth manifolds (generally non-compact) of dimension $\mu(x,y)$ and in this case $\mathcal{M}(x,y)$ is also a smooth manifold of dimension $\mu(x,y)-1$. Regular pairs $(H,J)$ are generic and, in fact, they are so even if $L$ and $L'$ are not transversal (but in that case $H$ cannot be assumed to be constant), for example, when $L=L'$. Floer’s construction is natural in the following sense. Let $L''=(\phi_{1}^{H})^{-1}(L')$. 
Consider the map $b_{H}: \mathcal{P}(L,L'')\to \mathcal{P}(L,L')$ defined by $(b_{H}(x))(t)=\phi_{t}^{H}(x(t))$. Let $\eta'\in\mathcal{P}(L,L'')$ be such that $\eta=b_{H}(\eta')$. Clearly, $b_{H}$ restricts to a map between $\mathcal{P}_{\eta'}(L,L'')$ and $\mathcal{P}_{\eta}(L,L')$ and it restricts to a bijection $I(L,L'';\eta',0)\to I(L,L';\eta, H)$. It is also easy to check $$\mathcal{A}_{L,L',H}(b_{H}(x))=\mathcal{A}_{L,L'',0}(x)$$ and that the map $b_{H}$ identifies the geometry of the two action functionals. Indeed, for $u:{\mathbb{R}}\times [0,1]\to M$ with $u({\mathbb{R}},0)\subset L$, $u({\mathbb{R}},1)\subset L''$, $\tilde{u}(s,t)=\phi_{t}(u(s,t))$, $\tilde{J}=\phi^{\ast}J$ we have $$\phi_{\ast}(\frac{\partial u}{\partial s}+ \tilde{J} \frac{\partial u}{\partial t}) = \frac{\partial \tilde{u}}{\partial s}+J (\frac{\partial \tilde{u}}{\partial t}-X_{H})~.~$$ Therefore, the map $b_{H}$ induces diffeomorphisms: $$b_{H}:\mathcal{M}_{L,L'',\tilde{J},0}(x,y)\to \mathcal{M}_{L,L',J,H}(x,y)$$ where we have identified $x,y\in L\cap L''$ with their orbits $\phi^{H}_{t}(x)$ and $\phi^{H}_{t}(y)$. Finally, the non-compactness of $\mathcal{M}(x,y)$ for $x,y\in I(L,L';\eta,H)$ is due to the fact that, as in the Morse-Smale case, a sequence of strips $u_{n}\in \mathcal{M}(x,y)$ might “converge” (in the sense of Gromov) to a broken strip. There are natural compactifications of the moduli spaces $\mathcal{M}(x,y)$ called Gromov compactifications and denoted by $\overline{\mathcal{M}}(x,y)$ so that each of the spaces $\overline{\mathcal{M}}(x,y)$ is a topological manifold with corners whose boundary verifies: $$\label{eq:Grom_comp} \partial\overline{\mathcal{M}}(x,y)=\bigcup_{z\in I(L,L';\eta,H)}\ \overline{\mathcal{M}}(x,z) \times \overline{\mathcal{M}}(z,y)~.~$$ A complete proof of this fact can be found in [@BaCo] (when $dim(\mathcal{M}(x,y))=1$ the proof is due to Floer and is now classical). 
### Pseudoholomorphic strips and the Serre spectral sequence We will now construct a complex $\mathcal{C}(L,L';H,J)$ by a method that mirrors the construction of $\mathcal{C}(f)$ in §\[subsubsec:serre\_ss\]. This complex, called the [*extended Floer complex*]{} associated to $L,L',H,J$, has the form: $$\mathcal{C}(L,L';H,J)=(C_{\ast}(\Omega L)\otimes {\mathbb{Z}}/2<I(L,L';\eta, H)>, D)$$ where the cubical chains $C_{\ast}(\Omega L)$ have, as before, ${\mathbb{Z}}/2$-coefficients. If needed, the moduli spaces $\mathcal{M}(x,y)$ can be endowed with orientations which are compatible with formula (\[eq:Grom\_comp\]), and so we could as well use ${\mathbb{Z}}$-coefficients. To define the differential, we first fix a simple path $w$ in $L$ which joins all the points $\gamma(0)$, $\gamma\in I(L,L';H)$, and we identify all these points by collapsing this path to a single point. We shall continue to denote the resulting space by $L$ to simplify notation. For each moduli space $\mathcal{M}(x,y)$ there is a continuous map $$l^{x}_{y}:\mathcal{M}(x,y)\to \Omega L$$ which is defined by associating to $u\in \mathcal{M}(x,y)$ the path $u({\mathbb{R}},0)$ parametrized by the (negative) values of the action functional $\mathcal{A}$. This map is seen to be compatible with formula (\[eq:Grom\_comp\]) in the same sense as in (\[eq:compat\]). We pick a representing chain system $\{k^{x}_{y}\}$ for the moduli spaces $\mathcal{M}(x,y)$ and we let $$m^{x}_{y}=(l^{x}_{y})_{\ast}(k^{x}_{y})\in C_{\ast}(\Omega L)$$ and $$\label{eq:differential_coeff} Dx=\sum_{y}m^{x}_{y}\otimes y~.~$$ As in the case of the extended Morse complex, the fact that $D^{2}=0$ is an immediate consequence of formula (\[eq:Grom\_comp\]). \[rem:coeff\] a\. There is an apparent asymmetry between the roles of $L$ and $L'$ in the definition of this extended Floer complex. 
In fact, the coefficients of this complex belong naturally to an even bigger and more symmetric ring than $C_{\ast}(\Omega L)$. Indeed, consider the space $T(L,L')$ which is the homotopy-pullback of the two inclusions $L\hookrightarrow M$, $L'\hookrightarrow M$. This space is homotopy equivalent to the space of all the continuous paths $\gamma : [0,1]\to M$ so that $\gamma(0)\in L$, $\gamma(1)\in L'$. By replacing both $L$ and $M$ by the respective spaces obtained by contracting the path $w$ to a point, we see that there are continuous maps $\mathcal{M}(x,y)\to \Omega (T(L,L'))$. We may then use these maps to construct a complex with coefficients in $C_{\ast}(\Omega (T(L,L')))$. Clearly, there is an obvious map $T(L,L')\to L$ and it is precisely this map which, after looping, changes the coefficients of this complex into those of the extended Floer complex. b\. At this point it is worth mentioning why using representing chain systems is useful in our constructions. Indeed, for the extended Morse complex representing chain systems are not really essential: the moduli spaces $M^{x}_{y}$ are triangulable in a way compatible with the boundary formula and so, to represent this moduli space inside the loop space $\Omega M$, we could use instead of the chain $a^{x}_{y}$ a chain given by the sum of the top-dimensional simplexes in such a triangulation. This is obviously a simpler and more natural approach but it has the disadvantage that it does not extend directly to the Floer case. The reason is that it is not known whether the Floer moduli spaces $\mathcal{M}(x,y)$ admit coherent triangulations (even if this is likely to be the case). The chain complex $\mathcal{C}(L,L';H,J)$ admits a natural degree filtration which is given by $$\label{eq:degree_filtr}F^{k}\mathcal{C}(L,L';H,J)= C_{\ast}(\Omega L)\otimes {\mathbb{Z}}/2< x\in I(L,L';\eta,H) : |x|\leq k>~.~$$ It is clear that this filtration is differential.
Therefore, there is an induced spectral sequence which will be denoted by $\mathcal{E}(L,L'; H,J)=(\mathcal{E}^{r}_{p,q}, D^{r})$. We write $\mathcal{E}(L,L';J)=\mathcal{E}(L,L';0,J)$. For convenience we have omitted $\eta$ from this notation (the relevant components of the path spaces $\mathcal{P}(L,L')$ will be clear below). Here is the main result concerning this spectral sequence. \[theo:strips\_ss\] For any two regular pairs $(H,J),(H',J')$, the spectral sequences $\mathcal{E}(L,L';H, J)$ and $\mathcal{E}(L,L'; H',J')$ are isomorphic up to translation for $r\geq 2$. Moreover, if $\phi$ is a hamiltonian diffeomorphism, then $\mathcal{E}(L,L';J)$ and $\mathcal{E}(L,\phi(L');J')$ are also isomorphic up to translation for $r\geq 2$ (whenever defined). The second term of this spectral sequence is $\mathcal{E}^{2}(L,L';H,J)\approx H_{\ast}(\Omega L)\otimes HF_{\ast}(L,L')$ where $HF_{\ast}(-,-)$ is the Floer homology. Finally, if $L$ and $L'$ are hamiltonian isotopic, then $\mathcal{E}(L,L';J)$ is isomorphic up to translation to the Serre spectral sequence of the path-loop fibration $\Omega L\to PL\to L$. Isomorphism up to translation of two spectral sequences $E^{r}_{p,q}$, $F^{r}_{p,q}$ means that there exist a $k\in {\mathbb{Z}}$ and chain isomorphisms $\phi^{r}: E^{r}_{p,q}\to F^{r}_{p+k,q}$. This notion appears naturally here because the choice of the element $x_{0}\in I(L,L';H)$ with $|x_{0}|=0$ is arbitrary. A different choice will simply lead to a translated spectral sequence. As follows from the discussion in §\[susubsec:appli\] B, it is possible to replace this choice of grading with one that only depends on the path $\eta$. However, this might make the absolute degrees fractional and, as the choice of $\eta$ is not canonical, the resulting spectral sequence will still be invariant only up to translation. The outline of the proof of this theorem is as follows (see [@BaCo] for details).
First, in view of the naturality properties of Floer’s construction, it is easy to see that the second invariance claim in the statement is implied by the first one. Now, we consider a homotopy $G$ between $H$ and $H'$ as well as a one-parameter family of almost complex structures $\bar{J}$ relating $J$ to $J'$. For $x\in I(L,L';H)$ and $y\in I(L,L';H')$ we define moduli spaces $\mathcal{N}(x,y)$ which consist of solutions of an equation similar to (\[eq:floer\]) but with $H$ replaced by $G$ and $J$ by $\bar{J}$ (taking into account the additional parameter - this is a standard construction in Floer theory). These moduli spaces have properties similar to the $\mathcal{M}(x,y)$’s. In particular they admit compactifications which are manifolds with boundary so that the following formula is valid $$\partial\overline{\mathcal{N}}(x,y)=\bigcup_{z\in I(L,L';H)}\overline{\mathcal{M}} (x,z)\times \overline{\mathcal{N}}(z,y)\cup \bigcup_{z'\in I(L,L';H')}\overline{\mathcal{N}}(x,z')\times\overline{\mathcal{M}}(z',y)~.~$$ The representing chain idea can again be used in this context and it leads to coefficients $n^{x}_{y}\in C_{\ast}(\Omega L)$. If we group these coefficients in a matrix $\mathcal{B}$ and we group the coefficients of the differential of $\mathcal{C}(L,L';H,J)$ in a matrix $\mathcal{A}$ and the coefficients of $\mathcal{C}(L,L';H',J')$ in a matrix $\mathcal{A}'$, then the relation above implies that we have: $$\label{eq:morph} \partial \mathcal{B}=\mathcal{A}\cdot\mathcal{B}+\mathcal{B}\cdot\mathcal{A}'~.~$$ It follows that the module morphism $$\phi_{G,\bar{J}}:\mathcal{C}(L,L';H,J)\to \mathcal{C}(L,L';H',J')$$ which is the unique extension of $$\phi_{G,\bar{J}}(x)=\sum_{y} n^{x}_{y}\otimes y,\ \forall x\in I(L,L';H)$$ is a chain morphism.
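In this matrix notation, (\[eq:morph\]) is exactly the chain-morphism identity; a sketch of the (mod $2$) verification on generators, writing $D$, $D'$ for the differentials whose coefficient matrices are $\mathcal{A}$, $\mathcal{A}'$ and $\mathcal{B}$ for the matrix of $\phi_{G,\bar{J}}$:

```latex
% Compare the two compositions on a generator x:
\phi_{G,\bar{J}}(Dx)=(\mathcal{A}\cdot\mathcal{B})\,x\ ,
\qquad
D'\big(\phi_{G,\bar{J}}(x)\big)=(\partial\mathcal{B})\,x+(\mathcal{B}\cdot\mathcal{A}')\,x~.~
```

Hence $\phi_{G,\bar{J}}$ commutes with the differentials precisely when $\partial\mathcal{B}=\mathcal{A}\cdot\mathcal{B}+\mathcal{B}\cdot\mathcal{A}'$ holds mod $2$, which is (\[eq:morph\]).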
Moreover, the chain morphism constructed above preserves filtrations (of course, for this it is required that the choices for the point $x_{0}$ with $|x_{0}|=0$ for our two sets of data be coherent - this is why the isomorphisms are “up to translation"). After verifying that $\mathcal{E}^{2}\approx H_{\ast}(\Omega L)\otimes HF_{\ast}(L,L')$ for both spectral sequences it is not difficult to see that $\phi_{G,\bar{J}}$ induces an isomorphism at the $\mathcal{E}^{2}$-level of these spectral sequences [@BaCo]. Hence it also induces an isomorphism for $r >2$. For the last point of the theorem we use Floer’s reduction of the moduli spaces $\mathcal{M}_{J,L,L'}(x,y)$ of pseudoholomorphic strips to moduli spaces of Morse flow lines $M^{x}_{y}(f)$. In short, this shows [@Fl1] [@Fl2] that for certain choices of $J$, $f$ and $L'$ which is hamiltonian isotopic to $L$ we have homeomorphisms $\psi_{x,y}:\mathcal{M}(x,y)\to M^{x}_{y}$ which are compatible with the compactification and with the boundary formulae. This means that with these choices we have an isomorphism $\mathcal{C}(L,L')\to \mathcal{C}(f)$ and it is easy to see that this preserves the filtrations of these two chain complexes. By Theorem \[theo:serre\_ss\] this completes the outline of the proof. [It is shown in [@BaCo] that the $\mathcal{E}^{1}$-term of this spectral sequence also has some interesting invariance properties. ]{} ### Applications. {#susubsec:appli} We will discuss here a number of direct corollaries of Theorem \[theo:strips\_ss\] most (but not all) of which appear in [@BaCo]. An immediate adaptation of Theorem \[theo:serre\_ss\] provides a statement which is much more flexible. This is a “change of coefficients" or “localization" phenomenon that we now describe. Assume that $f:L\to X$ is a smooth map.
Then we can consider the induced map $\Omega f :\Omega L\to \Omega X$ and we may use this map to change the coefficients of $\mathcal{C}(L,L';H,J)$ thus getting a new complex $$\mathcal{C}_{X}(L,L';H,J)=(C_{\ast}(\Omega X)\otimes {\mathbb{Z}}/2<I(L,L';H)>, D_{X})$$ so that $D_{X}(x)=\sum_{y}(\Omega f)_{\ast}(m^{x}_{y})\otimes y$ where $m^{x}_{y}$ are the coefficients in the differential $D$ of $\mathcal{C}(L,L';H,J)$ (compare with (\[eq:differential\_coeff\])). This complex behaves very much like the one studied in Theorem \[theo:strips\_ss\]. In particular, this complex admits a similar filtration and the resulting spectral sequence, $\mathcal{E}_{X}(L,L')$, has the same invariance properties as those in the theorem and, moreover, for $L$, $L'$ hamiltonian isotopic this spectral sequence coincides with the Serre spectral sequence of the fibration $\Omega X\to E\to L$ which is obtained from the path-loop fibration $\Omega X\to PX\to X$ by pull-back over the map $f$. In particular, the homology of this complex coincides with the singular homology of $E$. If $X$ is just one point, $\odot$, it is easy to see that the complex $\mathcal{C}_{\odot}(L,L';H,J)$ coincides with the Floer complex. The complex $\mathcal{C}_{X}(L,L';H,J)$ may also be viewed as a sort of localization in the following sense. Assume that we are interested in seeing which pseudoholomorphic strips pass through a region $A\subset L$. Then we may consider the closure $C$ of the complement of this region and the space $L/C$ obtained by contracting $C$ to a point. There is the obvious projection map $L\to L/C$ which can be used in place of $f$ above. Now, if some non-vanishing differentials appear in $\mathcal{E}_{L/C}(L,L')$ for $r\geq 2$, then it means that there are some coefficients $m^{x}_{y}$ so that $|(m^{x}_{y})|>0$ and $(\Omega f)_{\ast}(m^{x}_{y})\not=0$.
This means that the map $$\mathcal{M}(x,y)\stackrel{l^{x}_{y}} {\longrightarrow}\Omega L\stackrel{\Omega f}{\longrightarrow}\Omega (L/C)$$ carries the representing chain of $\mathcal{M}(x,y)$ to a nonvanishing chain in $C_{\ast}(\Omega (L/C))$. But this means that the intersection $\mathcal{M}'(x,y)\cap A$ is of dimension equal to $\mu(x,y)$. The typical choice of region $A$ is a tubular neighbourhood of some submanifold $V\hookrightarrow L$. In that case $L/C$ is the associated Thom space. Let $$\nabla(L,L')=\inf_{H,\ \phi^{H}_{1}(L)=L'} (\max_{x,t}H(x,t)-\min_{x,t}H(x,t))$$ be the Hofer distance between Lagrangians. It has been shown to be non-degenerate by Chekanov [@Chek] for symplectic manifolds with geometry bounded at infinity. \[cor:strips\_Maslov\] Let $a\in H_{k}(L)$ be a non-trivial homology class. If a closed submanifold $V\hookrightarrow L$ represents the class $a$, then for any generic $J$ and any $L'$ hamiltonian isotopic to $L$, there exists a pseudoholomorphic strip $u$ of Maslov index at most $n-k$ which passes through $V$ and which verifies: $$\int u^{\ast}\omega \leq \nabla (L,L')~.~$$ In view of the discussion above, the proof is simple (we are using here a variant of that used in [@BaCo]). We start with a simple topological remark. Take $A$ to be a tubular neighbourhood of $V$. Then $L/C=TV$ is the associated Thom space. In the Serre spectral sequence of $\Omega (TV)\to P(TV)\to TV$ we have that $d^{n-k}(\tau)\not=0$ where $\tau\in H_{n-k}(TV)$ is the Thom class of $V$. By Poincaré duality, there is a class $b\in H_{n-k}(L)$ which is taken to $\tau$ by the projection map $L\to TV$ ($b$ corresponds to the Poincaré dual $a^{\ast}$ of $a$ via the isomorphism $H^{n-k}(L)\approx H_{k}(L)$). This means that $D^{n-k}(b)$ is not zero in $\mathcal{E}_{TV}(L,L';H)$ (for any Hamiltonian $H$).
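The non-vanishing $d^{n-k}(\tau)\not=0$ can be justified as follows (a sketch, with ${\mathbb{Z}}/2$-coefficients, assuming $n-k\geq 2$ so that $TV$ is simply-connected): $TV$ is the Thom space of the rank $n-k$ normal bundle of $V$, hence $(n-k-1)$-connected, so $\tau$ lies in the lowest non-trivial degree and the only differential that can involve it is the transgression; since the total space $P(TV)$ is contractible, $\tau$ cannot survive to $\mathcal{E}^{\infty}$:

```latex
d^{n-k}\colon \mathcal{E}^{n-k}_{n-k,0}\longrightarrow \mathcal{E}^{n-k}_{0,n-k-1}\ ,
\qquad
d^{n-k}(\tau)\not=0\ \ {\rm in}\ \ H_{n-k-1}(\Omega (TV);{\mathbb{Z}}/2)~.~
```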
To proceed with the proof notice that, by the naturality properties of the Floer moduli spaces, it is sufficient to show that for any Hamiltonian $H$ (and any generic $J$) so that $\phi^{H}_{1}(L)=L'$ there exists an element $u\in \mathcal{M}'_{L,L,J,H}$ which is of Maslov index $(n-k)$ and so that $u({\mathbb{R}},0)\cap A\not=\emptyset$ and $E_{L,L,H}(u)\leq ||H||$ where $||H||=\max_{x,t}H(x,t)-\min_{x,t}H(x,t)$. We may assume that $\min_{x,t}H(x,t)=0$ and we let $K=\max_{x,t}H(x,t)$. We consider a Morse function $f:L\to {\mathbb{R}}$ which is very small in $C^{2}$ norm and we extend it to a function (also denoted by $f$) which is defined on $M$ and remains $C^{2}$ small. In particular, we suppose $\min_{x}f(x)=0$ and $\max_{x} f(x)<\epsilon$. We denote $\underline{f}=f-\epsilon$ and $\overline{f}=f+K$. It follows that we may construct monotone homotopies $G:\overline{f}\simeq H$ and $G':H\simeq\underline{f}$. Consider the following action filtration of $\mathcal{C}_{TV}(L,L;H)$ $$F_{v}\mathcal{C}_{TV}(L,L;J,H)=C_{\ast}(\Omega TV)\otimes {\mathbb{Z}}/2<x\in I(L,L;H): \mathcal{A}_{L,L,H}(x)\leq v>$$ and similarly on the complexes $\mathcal{C}_{TV}(L,L;\overline{f})$ and $\mathcal{C}_{TV}(L,L;\underline{f})$. It is obvious that this is a differential filtration and, if the choice of path $\eta$ (used to define the action functional, see (\[eq:action\])) is the same for all the three hamiltonians involved, these monotone homotopies preserve these filtrations. 
We now denote $$\mathcal{C}=F_{K+\epsilon}\mathcal{C}_{TV} (L,L;\overline{f})/F_{-\epsilon}\mathcal{C}_{TV}(L,L;\overline{f}),$$ $$\mathcal{C}'=F_{K+\epsilon}\mathcal{C}_{TV}(L,L;H) /F_{-\epsilon}\mathcal{C}_{TV}(L,L;H),$$ $$\mathcal{C}''=F_{K+\epsilon}\mathcal{C}_{TV}(L,L;\underline{f}) /F_{-\epsilon}\mathcal{C}_{TV}(L,L;\underline{f})~.~$$ These three complexes inherit degree filtrations and there are associated spectral sequences $\mathcal{E}(\mathcal{C})$, $\mathcal{E}(\mathcal{C}')$, $\mathcal{E}(\mathcal{C}'')$. We have induced morphisms $\phi_{G}:\mathcal{C}\to \mathcal{C}'$ and $\phi_{G'}:\mathcal{C}'\to \mathcal{C}''$ which also induce morphisms among these spectral sequences. Moreover, as $\mathcal{C}$ and $\mathcal{C}''$ both coincide with $\mathcal{C}_{TV}(f)$ (because $f$ is very $C^{2}$-small and $0\leq f(x)<\epsilon$), the composition $\phi_{G'}\circ \phi_{G}$ induces an isomorphism of spectral sequences for $r\geq 2$ (here $\mathcal{C}_{TV}(f)$ is the extended Morse complex obtained from $\mathcal{C}(f)$ by changing the coefficients by the map $L\to TV$). But, as the class $b$ has the property that its $D^{n-k}$ differential is not trivial in $\mathcal{E}(\mathcal{C})$, this implies that $D^{n-k}(\phi_{G}(b))\not=0$ which is seen to immediately imply that there is some moduli space $\mathcal{M}'(x,y)$ of dimension $n-k$ with $\mathcal{A}_{L,L;H}(x),\mathcal{A}_{L,L;H}(y)\in [-\epsilon,K+\epsilon]$ and $\mathcal{M}'(x,y)\cap A\not=\emptyset$. Therefore there are $J$-strips passing through $A$ which have Maslov index $n-k$ and area less than $||H||+2\epsilon$. By letting $A$ tend to $V$ and $\epsilon\to 0$, $||H||\to \nabla(L,L')$, these strips converge to strips with the desired properties. We may apply this even to $1\in H_{0}(L)$ and Corollary \[cor:strips\_Maslov\] shows in this case that through each point of $L$ passes a strip of Maslov index at most $n$ (again, for $J$ generic) and of area at most $\nabla(L,L')$.
The case $V=pt$ was discussed explicitly in [@BaCo]. \[rem:area\_esti\] a\. It is clear that the strips detected in this corollary actually have a symplectic area which is no larger than $c(b;H)-c(1;H)$ where $c(x;H)$ is the spectral value of the homology class $x$ relative to $H$, $$c(x;H)=\inf\{v\in{\mathbb{R}}: x\in Im(H_{\ast}(FC_{\leq v}(L,L;H))\to FH_{\ast}(L,L;H))\}$$ where $FC_{\leq v}(L,L;H)$ is the Floer complex of $L,L,H$ generated by all the elements of $I(L,L;\eta, H)$ of action less than or equal to $v$; $FH_{\ast}(L,L;H)$ is the Floer homology. Under our assumptions we have a canonical isomorphism (up to translation) between $FH_{\ast}(L,L;H)$ and $H_{\ast}(L)$ so we may view $b\in FH_{\ast}(L,L;H)$. b\. Clearly, in view of Gromov compactness our result also implies that for any $J$ (even non-regular) and for any $L'$ hamiltonian isotopic to $L$ and for any $x\in L\backslash L'$ there exists a $J$-holomorphic strip passing through $x$ which has area less than $\nabla(L,L')$. This result (without the area estimate) also follows from independent work of Floer [@Fl] and Hofer [@Ho]. Another method has been mentioned to us by Dietmar Salamon. It is based on starting with disks with their boundary on $L$ and which are very close to being constant maps. Therefore, an appropriate evaluation defined on the moduli space of these disks is of degree $1$. Each of these disks is made out of two semi-disks which are pseudo-holomorphic and which are joined by a short semi-tube verifying the non-homogeneous Floer equation for some given Hamiltonian $H$. This middle region is then allowed to expand till, at some point, it will necessarily produce a semi-tube belonging to some $\mathcal{M}'_{H}(x,y)$. It is also possible to use the pair of pants product to produce Floer orbits joining the “top and bottom classes” [@Sch2]. Still, having simultaneous area and Maslov index estimates appears to be more difficult by methods different from ours.
Of course, detecting strips of lower Maslov index so that they meet some fixed submanifold is harder yet. Corollary \[cor:strips\_Maslov\] has a nice geometric application. \[cor:balls\] Assume that, as before, $L$ and $L'$ are hamiltonian isotopic. For any symplectic embedding $e:(B(r),\omega_{0})\to M$ so that $e^{-1}(L)={\mathbb{R}}^{n}\cap B(r)$ and $e(B(r))\cap L'=\emptyset$ we have $\pi r^{2}/2\leq \nabla(L,L')$. This is proven (see [@BaCo]) by using a variant of the standard isoperimetric inequality: a $J_{0}$-pseudoholomorphic surface in the standard ball $(B(r),\omega_{0})$ of radius $r$ whose boundary is on $\partial B(r)\cup {\mathbb{R}}^{n}$ has area at least $\pi r^{2}/2$. Clearly, this implies the non-degeneracy result of Chekanov that was mentioned before under the connectivity conditions that we have always assumed till this point. We have worked till now under the assumption that $$\label{eq:connect} L, L' \ {\rm are\ simply-connected\ and\ } \omega|_{\pi_{2}(M)}=c_{1}|_{\pi_{2}(M)}=0~.~$$ These requirements were used in a few important places: in the definition of the action functional, the definition of the Maslov index, the boundary product formula (because they forbid bubbling). Of these, only the bubbling issue is in fact essential: the boundary formula is precisely the reason why $D^{2}=0$ as well as the cause of the invariance of the resulting homology. We proceed below to extend the corollaries and techniques discussed above to the case when all the connectivity conditions are dropped but we assume that $L$ and $L'$ are hamiltonian isotopic and only work below the minimal energy that could produce some bubbling (this is similar to the last section of [@BaCo] but goes beyond the cases treated there).
First, for a time-dependent almost complex structure $J_{t}$, $t\in [0,1]$, we define $\delta_{L,L'}(J)$ as the infimum of the symplectic areas of the following three types of objects: - the $J_{t}$-pseudoholomorphic spheres in $M$ (for $t\in [0,1]$). - the $J_{0}$-pseudoholomorphic disks with their boundary on $L$. - the $J_{1}$-pseudoholomorphic disks with their boundary on $L'$. By Gromov compactness this number is strictly positive. We will proceed with the construction in the case when $L=L'$ and in the presence of a hamiltonian $H$. We shall assume that the pair $(H,J)$ is regular in the sense that the moduli spaces of strips defined below, $\mathcal{M}(x,y)$, are regular. We take the fixed reference path $\eta$ to be a constant point in $L$ (see (\[eq:action\])). Denote $\mathcal{P}_{0}(L,L)=\mathcal{P}_{\eta}(L,L)$ and consider in this space the base point given by $\eta$. Notice that there is a morphism $\omega : \pi_{1}\mathcal{P}_{0}(L,L)\to {\mathbb{R}}$ obtained by integrating $\omega$ over the disk represented by the element $z\in\pi_{1}\mathcal{P}_{0}(L,L)$ (such an element can be viewed as a disk with boundary in $L$). Similarly, let $\mu:\pi_{1}\mathcal{P}_{0}(L,L)\to {\mathbb{Z}}$ be the Maslov morphism. Let $\mathcal{K}$ be the kernel of the morphism $$\omega\times\mu:\pi_{1}\mathcal{P}_{0}(L,L)\to {\mathbb{R}}\times {\mathbb{Z}}~.~$$ The group $$\pi=\pi_{1}(\mathcal{P}_{0}(L,L))/\mathcal{K}$$ is an abelian group (as it is isomorphic to a subgroup of ${\mathbb{R}}\times {\mathbb{Z}}$) and is of finite rank. Let’s also notice that this group is the quotient of $\pi_{2}(M,L)$ by the equivalence relation $a\sim b$ iff $\omega(a)=\omega(b), \mu(a)=\mu(b)$ (with this definition this group is also known as the Novikov group). This is a simple homotopical result. First $\mathcal{P}(L,L)$ is the homotopy pull-back of the map $L\to M$ over the map $L\to M$.
But this means that we have a fibre sequence $F\to \mathcal{P}(L,L)\to L$ with $F$ the homotopy fibre of $L\to M$ and that this fibre sequence admits a canonical section. This implies that $$\pi_{1}\mathcal{P}_{0}(L,L)\approx\pi_{1}(F)\times\pi_{1}(L)~.~$$ But $\pi_{1}(F)=\pi_{2}(M,L)$. As both $\omega$ and $\mu$ are trivial on $\pi_{1}(L)$ the claim follows. It might not be clear at first sight why $\mu$ is null on $\pi_{1}(L)$ here. The reason is that the term $\pi_{1}(L)$ in the product above is the image of the map induced in homotopy by $$j_{L}:L\to \mathcal{P}_{0}(L,L)~.~$$ This map associates to a point in $L$ the constant path. Consider a loop $\gamma(s)$ in $L$. Then $j_{L}\circ\gamma$ is a loop in $\mathcal{P}_{0}(L,L)$ which at each moment $s$ is a constant path. We now need to view this loop as the image of a disk and $\mu([j_{L}(\gamma)])$ is the Maslov index of this disk. But this disk is null-homotopic so $\mu([j_{L}(\gamma)])=0$. Consider the regular covering $p:\mathcal{P}'_{0}(L,L)\to \mathcal{P}_{0}(L,L)$ which is associated to the group $\mathcal{K}$. We fix an element $\eta_{0}\in\mathcal{P}'_{0}(L,L)$ so that $p(\eta_{0})=\eta$. Clearly, the action functional $$\mathcal{A}'_{L,L,H}:\mathcal{P}'_{0}(L,L)\to {\mathbb{R}}$$ may be defined by essentially the same formula as in (\[eq:action\]): $$\mathcal{A}'_{L,L,H}(x)= -\int (p\circ u)^{\ast}\omega +\int_{0}^{1}H(t,(p\circ x)(t))dt$$ where $u:[0,1]\to \mathcal{P}'_{0}(L,L)$ is such that $u(0)=\eta_{0}$, $u(1)=x$; this is now well-defined. Let $I'(L,L,H)=p^{-1}(I(L,L,H))$. For $x,y\in I'(L,L,H)$ we may define $\mu(x,y)=\mu(p\circ u)$ where $u:[0,1]\to \mathcal{P}'_{0}(L,L)$ is a path that joins $x$ to $y$. This is again well defined. For each $x\in I'(L,L,H)$ we consider a path $v_{x}:[0,1]\to \mathcal{P}'_{0}(L,L)$ so that $v_{x}(0)=\eta_{0}$ and $v_{x}(1)=x$. The composition $p\circ v_{x}$ can be viewed as a “semi-disk" whose boundary is resting on the orbit $p(x)$ and on $L$.
Therefore, we may associate to it a Maslov index $\mu(v_{x})$ [@RoSa] and it is easy to see that this only depends on $x$. Thus we define $\mu(x)=\mu(v_{x})$ and we have $\mu(x,y)=\mu(x)-\mu(y)$ for all $x,y\in I'(L,L,H)$. To summarize what has been done till now: once the choices of $\eta$ and $\eta_{0}$ are made, both the action functional $\mathcal{A}':\mathcal{P}'_{0}(L,L)\to {\mathbb{R}}$ and the “absolute" Maslov index $\mu(-):I'(L,L,H)\to {\mathbb{Z}}$ are well-defined. Fix an almost complex structure $J$. Consider two elements $x,y\in I'(L,L,H)$. We may consider the moduli space which consists of all paths $u:{\mathbb{R}}\to \mathcal{P}'_{0}(L,L)$ which join $x$ to $y$ and are so that $p\circ u$ satisfies Floer’s equation (\[eq:floer\]) modulo the ${\mathbb{R}}$-action. If regularity is achieved, the dimension of this moduli space is precisely $\mu(x,y)-1$. The action functional $\mathcal{A}'$ decreases along such a solution $u$ and the energy of $u$ (which is defined as the energy of $p\circ u$) verifies, as in the standard case, $E(u)=\mathcal{A}'(x)-\mathcal{A}'(y)$. Bubbling might of course be present in the compactification of these moduli spaces. As we only intend to work below the minimal bubbling energy $\delta_{L,L}(J)$ we [*artificially*]{} put: $$\mathcal{M}(x,y)=\emptyset\ {\rm if}\ \mathcal{A}'(x)-\mathcal{A}'(y)\geq \delta_{L,L}(J)$$ and, of course, for $\mathcal{A}'(x)-\mathcal{A}'(y)<\delta_{L,L}(J)$, $\mathcal{M}(x,y)$ consists of the elements $u$ mentioned above. We only require these moduli spaces to be regular. With this convention, for all $x,y\in I'(L,L,H)$ so that $\mathcal{M}(x,y)$ is not void we have the usual boundary formula (\[eq:Grom\_comp\]). Notice at the same time that this formula is false for general pairs $x,y$ (and so there is no way to define a Floer type complex at this stage).
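As a consistency check, note what this construction gives back under the connectivity assumptions (\[eq:connect\]) used in the previous sections: if $L$ is simply-connected and $\omega|_{\pi_{2}(M)}=c_{1}|_{\pi_{2}(M)}=0$, then $\pi_{2}(M)\to\pi_{2}(M,L)$ is onto and $\mu=2c_{1}$ on spherical classes, so both $\omega$ and $\mu$ vanish on $\pi_{1}\mathcal{P}_{0}(L,L)$:

```latex
\mathcal{K}=\pi_{1}\mathcal{P}_{0}(L,L)\ ,\qquad
\pi=\pi_{1}(\mathcal{P}_{0}(L,L))/\mathcal{K}=0~.~
```

In this case the covering $p$ is trivial, $\mathcal{A}'$ and $\mu(-)$ descend to $\mathcal{P}_{0}(L,L)$, and one recovers the earlier definitions.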
Now consider a map $f:L\to X$ so that $X$ is simply-connected (the only reason to require this is to ensure that the Serre spectral sequence does not require local coefficients). We consider the group: $$\mathcal{C}(L,L,H;X)=C_{\ast}(\Omega X)\otimes {\mathbb{Z}}/2<I'(L,L,H)>~.~$$ For $w\geq v\in {\mathbb{R}}$, we denote $I'_{v,w}=\{x\in I'(L,L,H)\ :\ w\geq\mathcal{A}'(x)\geq v\}$ and we define the subgroup $$\mathcal{C}_{v,w}(L,L,H;X)=C_{\ast}(\Omega X)\otimes {\mathbb{Z}}/2<I'_{v,w}>~.~$$ Suppose that $w-v \leq \delta_{L,L}(J)-\epsilon$ for some $\epsilon>0$. We claim that in this case we may define a differential on $\mathcal{C}_{v,w}(L,L,H;X)$ by the usual procedure. Consider representing chain systems for all the moduli spaces $\mathcal{M}(x,y)$ and let the image of these chains inside $C_{\ast}(\Omega X)$ be respectively $\bar{m}^{x}_{y}$. Let $D$ be the linear extension of the map given by $$Dx=\sum_{y\in I'_{v,w}}\bar{m}^{x}_{y}\otimes y~.~$$ \[prop:extension\] The linear map $D$ is a differential. A generic monotone homotopy $G$ between two hamiltonians $H$ and $H'$ induces a chain morphism $$\phi_{G}^{X}:\mathcal{C}_{v,w}(L,L,H, J;X) \to\mathcal{C}_{v,w}(L,L,H',J;X)~.~$$ A monotone homotopy between monotone homotopies $G$ and $G'$ induces a chain homotopy between $\phi_{G}^{X}$ and $\phi_{G'}^{X}$ so that $H_{\ast}(\phi_{G}^{X})= H_{\ast}(\phi_{G'}^{X})$. Now, $D^{2}x=\sum_{z}(\sum_{y}\bar{m}^{x}_{y}\cdot \bar{m}^{y}_{z}+\partial \bar{m}^{x}_{z})\otimes z~.~$ In this formula we have $\mathcal{A}'(x)-\mathcal{A}'(z)\leq \delta_{L,L}(J)$ and, because the usual boundary formula (\[eq:Grom\_comp\]) is valid in this range, all the terms vanish so that $D^{2}(x)=0$. The same idea may be applied to a monotone homotopy as well as to a monotone homotopy between monotone homotopies and it implies the claim. \[rem:point\] a\.
If we take for the space $X$ a single point $\ast$ we get a chain complex whose differential only takes into account the $0$-dimensional moduli spaces and which is a truncated version of the Floer complex. b\. The complex $\mathcal{C}_{v,w}(L,L,H,J;X)$ admits a degree filtration which is perfectly similar to the one given by (\[eq:degree\_filtr\]). Let $\mathcal{E}\mathcal{C}_{v,w}(L,L,H, J;X)$ be the resulting spectral sequence. Then, under the restrictions in Proposition \[prop:extension\], a monotone homotopy $G$ induces a morphism of spectral sequences $\mathcal{E}_{X}(\phi_{G})$ and two such homotopies $G$, $G'$ which are monotonically homotopic have the property that they induce the same morphism $\mathcal{E}^{r}_{X}(\phi_{G})=\mathcal{E}^{r}_{X}(\phi_{G'})$ for $r\geq 2$. This last fact follows from Proposition \[prop:extension\] by computing $\mathcal{E}^{2}_{X}(\phi_{G})=id_{H_{\ast}(\Omega X)}\otimes H_{\ast}(\phi_{G}^{\ast})=\mathcal{E}^{2}_{X}(\phi_{G'})$ where $\phi_{G}^{\ast}:\mathcal{C}_{v,w}(L,L,H;\ast)\to \mathcal{C}_{v,w}(L,L,H';\ast)$. Naturally, the next step is to compare our construction with its Morse theoretical analogue. Consider the map $j_{L}:L\to \mathcal{P}_{0}(L,L)$ and let $p:\tilde{L}\to L$ be the regular covering obtained by pull-back from $\mathcal{P}'_{0}(L,L)\to \mathcal{P}_{0}(L,L)$. Notice that, because both compositions $\omega\circ\pi_{1}(j_{L})$ and $\mu\circ\pi_{1}(j_{L})$ are trivial, it follows that the covering $\tilde{L}\to L$ is trivial. Let $\bar{f}:\tilde{L}\to {\mathbb{R}}$ be defined by $\bar{f}=f\circ p$ and consider $\mathcal{C}(\bar{f};X)$ the extended Morse complex of $\bar{f}$ with coefficients changed by the map $\Omega \tilde{L}\to \Omega X$.
Notice that, in general, the group $\pi$ acts on $I'(L,L,H)$ and we have the formula: $$\mathcal{A}'(gy)=\omega(g)+\mathcal{A}'(y),\ \mu(gy)=\mu(g)+\mu(y), \forall y\in I'(L,L,H), \forall g\in\pi~.~$$ In our particular case, when $H=f$, we have $I'(L,L,H)=Crit(\bar{f})$. For each point $x\in Crit(f)$ let $\bar{x}\in Crit(\bar{f})$ be the element of $p^{-1}(x)$ which belongs to the component of $\tilde{L}$ which also contains $\eta_{0}$. We then have $\mathcal{A}'(\bar{x})=f(x)$ and $\mu(\bar{x})=ind_{\bar{f}}(\bar{x})$. The extended Morse complex $\mathcal{C}(\bar{f};X)$ is therefore isomorphic to $\mathcal{C}(f;X)\otimes {\mathbb{Z}}[\pi]$ and the action filtration is determined by writing $\mathcal{A}'(x\otimes g)=f(x)+\omega(g)$. The degree filtration induces, as usual, a spectral sequence which will be denoted by $\mathcal{E}\mathcal{C}(f;X)$. The remarks above together with Theorem \[theo:strips\_ss\] show that this spectral sequence consists of copies of the Serre spectral sequence of $\Omega X\to E\to L$: one copy for each connected component of $\tilde{L}$. We denote by $\mathcal{C}_{0}(f;X)$ and $\mathcal{E}\mathcal{C}_{0}(f;X)$ the copies of the extended complex and of the spectral sequence that correspond to the connected component $L_{0}$ of $\tilde{L}$ which contains $\eta_{0}$. \[prop:comp\_morse\] Suppose $||H||< \delta_{L,L}(J)$. There exist chain morphisms $\phi:\mathcal{C}_{0}(f;X)\to \mathcal{C}_{0, ||H||}(L,L,H,J;X)$ and $\psi :\mathcal{C}_{0,||H||}(L,L,H,J;X) \to \mathcal{C}_{0}(f;X)$ which preserve the respective degree filtrations and so that $\psi\circ\phi$ induces an isomorphism at the $E^{2}$ level of the respective spectral sequences. To prove this proposition we shall use a different method than the one used in Corollary \[cor:strips\_Maslov\]. The comparison maps $\phi$, $\psi$ will be constructed by the method introduced in [@PiSaSc] and later used in [@Sch1] and [@Sch2].
Compared to Proposition \[prop:extension\] this is particularly efficient because it avoids the need to control the bubbling threshold along deformations of $J$. We fix as before the Morse function $f$ as well as the pair $H,J$. To simplify the notation we shall assume that $\inf H(x,t)=0$. The construction of $\phi$ is based on defining certain moduli spaces $\mathcal{W}(x,y)$ with $x\in Crit(\bar{f})$ and $y\in I'(L,L,H)$. They consist of pairs $(u,\gamma)$ where $u:{\mathbb{R}}\to \mathcal{P}'_{0}(L,L)$, $\gamma:(-\infty,0]\to \tilde{L}$ and if we put $u'=p(u)$, $\gamma'=p(\gamma)$ ($p:\mathcal{P}'_{0}(L,L)\to \mathcal{P}_{0}(L,L)$ is the covering projection) then we have: $$u'({\mathbb{R}}\times\{0,1\})\subset L\ , \ \partial_{s}(u')+ J(u')\partial_{t}(u')+\beta(s)\nabla H(u',t)=0\ , u(+\infty)=y$$ and $$\frac{d\gamma'}{dt}=-\nabla_{g}f(\gamma')\ ,\ \gamma(-\infty)=x \ , \ \gamma(0)=u(-\infty)~.~$$ Here $g$ is a Riemannian metric so that $(f,g)$ is Morse-Smale and $\beta$ is a smooth cut-off function which is increasing and vanishes for $s\leq 1/2$ and equals $1$ for $s\geq 1$. It is useful to view an element $(u,\gamma)$ as before as a semi-tube connecting $x$ to $y$. Under usual regularity assumptions these moduli spaces are manifolds of dimension $\mu(x)-\mu(y)$. The energy of such an element $(u,\gamma)$ is defined in the obvious way by $E(u,\gamma)=\int ||\partial_{s}u'||^{2}ds dt$. A simple computation shows that: $$E(u,\gamma)=I(u) + \int_{{\mathbb{R}}\times [0,1]} (u')^{\ast}\omega-\int_{0}^{1}H(y(t))dt$$ where $I(u)=\int_{{\mathbb{R}}\times [0,1]} \beta'(s)H(u'(s),t)dsdt$.
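Since $\inf H=0$, $\beta'\geq 0$, and $\int_{{\mathbb{R}}}\beta'(s)\,ds=1$ (as $\beta$ increases from $0$ to $1$), the term $I(u)$ admits the bound:

```latex
I(u)=\int_{{\mathbb{R}}\times [0,1]} \beta'(s)\,H(u'(s),t)\,ds\,dt
\ \leq\ \big(\sup H\big)\int_{{\mathbb{R}}}\beta'(s)\,ds
\ =\ \sup H~.~
```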
If $x\in L_{0}$, then the energy verifies $$E(u,\gamma)=I(u) -\mathcal{A}'(y)\leq \sup (H) -\mathcal{A}'(y)~.~$$ As before, we only want to work here under the bubbling threshold and we are only interested in the critical points $x\in L_{0}$ so we put $\mathcal{W}(x,y)=\emptyset$ for all those pairs $(x,y)$ with either $y\in I'(L,L,H)$ so that $\mathcal{A}'(y)\not\in [0,||H||]$ or with $x\not\in L_{0}$. This means that there is no bubbling in our moduli spaces. Thus we may apply the usual procedure: compactification, representing chain systems, representation in the loop space (for this step we need to choose a convenient way to parametrize the paths represented by the elements $(u,\gamma)$). Notice that the boundary of $\overline{\mathcal{W}}(x,y)$ is the union of two types of pieces: $\overline{M}^{x}_{z}\times \overline{\mathcal{W}}(z,y)$ and $\overline{\mathcal{W}}(x,z)\times \overline{\mathcal{M}}(z,y)$. We then define $\phi(x)=\sum w^{x}_{y}\otimes y$ where $w^{x}_{y}$ is a cubical chain representing the moduli space $\mathcal{W}(x,y)$. This is a chain map as desired. We now proceed to construct the map $\psi$. The construction is perfectly similar: we define moduli spaces $\mathcal{W}'(y,x)$, $y\in I'(L,L,H)$, $x\in Crit(\bar{f})$ except that the pairs $(u,\gamma)$ considered here start as semi-tubes and end as flow lines of $f$. The equation verified by $u$ is similar to the one before but instead of the cut-off function $\beta(s)$ we use the cut-off function $\beta(-s)$. For $x\in L_{0}$ the energy estimate in this case gives $E(u,\gamma)\leq \mathcal{A}'(y)$. By the same method as above we define $\mathcal{W}'(y,x)$ to be void whenever $x\not\in L_{0}$ or $\mathcal{A}'(y)>||H||$ and we define $\psi(y)=\sum \bar{w}^{y}_{x}\otimes x$ where $\bar{w}^{y}_{x}$ is a cubical chain representing the moduli space $\mathcal{W}'(y,x)$.
Notice that because $E(u,\gamma)\leq \mathcal{A}'(y)$ this map $\psi$ does in fact vanish on $\mathcal{C}_{0}(L,L,H,J;X)$ and so it induces a chain map (also denoted by $\psi$) as desired. The next step is to notice that the composition $\psi\circ\phi$ induces an isomorphism at $E^{2}$. This is equivalent to showing that $H_{\ast}(\psi\circ\phi)$ is an isomorphism for $X=\ast$. In turn, this fact follows by now-standard deformation arguments as in [@PiSaSc]. \[cor:general\_strips\] Assume that $L$ and $L'$ are hamiltonian isotopic and suppose that $J$ is generic. If $\nabla(L,L')< \delta_{L,L'}(J)$, then the statement of Corollary \[cor:strips\_Maslov\] remains true (for $J$) without the connectivity assumption (\[eq:connect\]). Notice that if $H$ is a hamiltonian so that $\phi^{H}_{1}(L)=L'$ and $J_{H}=(\phi^{H})_{\ast}(J)$ then, by the naturality described in §\[subsec:strips\], we have: $$\delta_{L,L}(J_{H})=\delta_{L,L'}(J)~.~$$ This implies, again by this same naturality argument, that the problem reduces to finding appropriate semi-tubes whose detection comes down to showing the non-vanishing of certain differentials in $\mathcal{E}\mathcal{C}_{0,w}(L,L,H;TV)$ for some well-chosen $w<\delta_{L,L}(J_{H})$. But this immediately follows from Proposition \[prop:comp\_morse\] by the same topological argument as the one used in the proof of Corollary \[cor:strips\_Maslov\]. We formulate the geometric consequence which corresponds to Corollary \[cor:balls\]. For two lagrangians $L$ and $L'$ the following number has been introduced in [@BaCo]: $B(L,L')$ is the supremum of the numbers $r\geq 0$ so that there exists a symplectic embedding $$e: (B(r),\omega_{0})\to (M,\omega)$$ so that $e^{-1}(L)= {\mathbb{R}}^{n}\cap B(r)$ and $Im(e)\cap L'=\emptyset$.
\[cor:ball\_bubble\] There exists an almost complex structure $J$ so that we have the inequality: $$\nabla(L,L')\geq \min\{\delta_{L,L'}(J)\ ,\ \frac{\pi}{2} B(L,L')^{2}\}~.~$$ Clearly, this implies that $\nabla(-,-)$ is non-degenerate in full generality (and recovers, in particular, the fact that the usual Hofer norm for Hamiltonians is non-degenerate). It is useful to also notice as in [@BaCo] that this result is a Lagrangian version of the usual capacity-displacement energy inequality [@LaMc]. Indeed, this inequality (with the factor $\frac{1}{2}$) is implied by the following statement which has been conjectured to hold for any two compact lagrangians in a symplectic manifold [@BaCo]: $$\label{eq:conj} \nabla(L,L')\geq \frac{\pi}{2}B(L,L')^{2}~.~$$ This remains open. An even stronger conjecture is the following: \[conj:main\] For any two hamiltonian isotopic closed lagrangians $L,L'\subset (M,\omega)$, for any almost complex structure $J$ which is compatible with $\omega$ and any point $x\in L\backslash L'$ there exists a pseudoholomorphic curve $u$ which is either a strip resting on $L$ and on $L'$ or a pseudoholomorphic disk with boundary in $L$ so that $x\in Im(u)$ and $\int u^{\ast}\omega\leq \nabla(L,L')$. By the isoperimetric inequality used earlier in this paper, it follows that this statement implies (\[eq:conj\]). There is a substantial amount of evidence in favor of this conjecture:

- as explained in this paper, in the absence of any pseudoholomorphic disks (that is, when $\omega|_{\pi_{2}(M,L)}=0$) it was proven in [@BaCo].

- the statement in Corollary \[cor:ball\_bubble\] shows that the area estimate is not unreasonable.

- one striking consequence of Conjecture \[conj:main\] is that if the disjunction energy of the lagrangian $L$ is equal to $E_{0}<\infty$, then, for any $J$ as in the statement and any $x\in L$ there is a pseudoholomorphic disk of area at most $E_{0}$ which passes through $x$.
When $L$ is relatively spin, this is indeed true and follows from recent joint work of the second author with François Lalonde. By the same geometric argument as above we deduce a nice consequence. Define the relative (or *real*) Gromov radius of $L$, $Gr(L)$, to be the supremum of the positive numbers $r$ so that there exists a symplectic embedding $e:(B(r),\omega_{0})\to (M,\omega)$ with the property that $e^{-1}(L)= {\mathbb{R}}^{n}\cap B(r)$. Then $\pi (Gr(L))^{2}/2\leq E_{0}$ (where $E_{0}$, as before, is the disjunction energy of $L$). It is also useful to note that if $L$ is the zero section of a cotangent bundle, then $Gr(L)=\infty$. There are numerous other interesting consequences of Conjecture \[conj:main\] besides (\[eq:conj\]). To conclude, Conjecture \[conj:main\] appears to be a statement worth investigating. J.-F. Barraud, O. Cornea, *Lagrangian Intersections and the Serre spectral sequence*, Preprint, November 2003. M. Betz, R. Cohen, *Graph moduli spaces and cohomology operations*, Turkish J. Math. 18 (1994), no. 1, 23–41. Y. Chekanov, *Invariant Finsler metrics on the space of Lagrangian embeddings*, Math. Z. 234 (2000), 605–619. R.L. Cohen, J.D.S. Jones, G.B. Segal, *Morse theory and classifying spaces*, Preprint. R.L. Cohen, J.D.S. Jones, G.B. Segal, *Floer’s infinite dimensional Morse theory and homotopy theory*, The Floer Memorial Volume, Birkhäuser (1995). O. Cornea, *Homotopical Dynamics II: Hopf invariants, smoothings and the Morse complex*, Ann. Scient. Ec. Norm. Sup. 35 (2002), 549–573. O. Cornea, *Homotopical Dynamics IV: Hopf invariants and Hamiltonian flows*, Communications on Pure and Applied Math. 55 (2002), 1033–1088. O. Cornea, *New obstructions to the thickening of CW-complexes*, Proc. Amer. Math. Soc. 132 (2004), no. 9, 2769–2781. O. Cornea, G. Lupton, J. Oprea, D. Tanré, *Lusternik-Schnirelmann category*, AMS Monographs in Mathematics, 2003. A. Floer, *Cuplength estimates on Lagrangian intersections*, Comm. Pure Appl. Math. 42 (1989), no. 4, 335–356. A.
Floer, *Morse theory for Lagrangian intersections*, J. Differential Geom. 28 (1988), no. 3, 513–547. A. Floer, *Witten’s complex and infinite-dimensional Morse theory*, J. Differential Geom. 30 (1989), no. 1, 207–221. J. Franks, *Morse-Smale flows and homotopy theory*, Topology 18 (1979), 199–215. K. Fukaya, *Morse homotopy, $A^{\infty}$-category, and Floer homologies*, Proceedings of GARC Workshop on Geometry and Topology ’93 (Seoul, 1993), 1–102, Lecture Notes Ser., 18, Seoul Nat. Univ., Seoul, 1993. H. Hofer, *Lusternik-Schnirelman-theory for Lagrangian intersections*, Ann. Inst. H. Poincaré Anal. Non Linéaire 5 (1988), no. 4, 465–499. F. Lalonde, D. McDuff, *The Geometry of Symplectic Energy*, Ann. of Math. (2) 141 (1995), no. 2, 349–371. J. Milnor, *Lectures on the $h$-cobordism theorem*, Notes by L. Siebenmann and J. Sondow, Princeton University Press, Princeton, N.J., 1965. J. Milnor, *Topology from the differentiable viewpoint*, Based on notes by David W. Weaver, The University Press of Virginia, Charlottesville, Va., 1965. S. Piunikhin, D. Salamon, M. Schwarz, *Symplectic Floer-Donaldson theory and quantum cohomology*, Contact and symplectic geometry (Cambridge, 1994), 171–200, Publ. Newton Inst., 8, Cambridge Univ. Press, Cambridge, 1996. C. Pugh, C. Robinson, *The $C^{1}$ closing lemma, including Hamiltonians*, Erg. Theory & Dyn. Sys. 3 (1983), 261–313. J. Robbin, D. Salamon, *Asymptotic behaviour of holomorphic strips*, Ann. Inst. H. Poincaré Anal. Non Linéaire 18 (2001), no. 5, 573–612. D. Salamon, *Connected simple systems and the Conley index of isolated invariant sets*, Trans. Amer. Math. Soc. 291 (1985), 1–41. D. Salamon, *Lectures on Floer Homology*, Symplectic Geometry and Topology, edited by Y. Eliashberg and L. Traynor, IAS/Park City Mathematics series (1999), 143–230. M. Schwarz, *Morse homology*, Progress in Mathematics, 111, Birkhäuser Verlag, Basel, 1993. M. Schwarz, *A quantum cup-length estimate for symplectic fixed points*, Invent. Math. 133 (1998), no. 2, 353–397. M.
Schwarz, *On the action spectrum for closed symplectically aspherical manifolds*, Pacific J. Math. 193 (2000), no. 2, 419–461. S. Smale, *Differentiable dynamical systems*, Bull. Amer. Math. Soc. 73 (1967), 747–817. C. Viterbo, *Intersection de sous-variétés lagrangiennes, fonctionnelles d’action et indice des systèmes hamiltoniens*, Bull. Soc. Math. France 115 (1987), no. 3, 361–390. J. Weber, *The Morse-Witten complex via dynamical systems*, Preprint, Winter 2004. E. Witten, *Supersymmetry and Morse theory*, J. Differential Geom. 17 (1982), 661–692.
--- abstract: 'We consider the two-dimensional uniformly frustrated $XY$ model in the limit of small frustration, which is equivalent to an $XY$ system, for instance a Josephson junction array, in a weak uniform magnetic field applied along a direction orthogonal to the lattice. We show that the uniform frustration (equivalently, the magnetic field) destabilizes the line of fixed points which characterize the critical behaviour of the $XY$ model for $T \le T_{KT}$, where $T_{KT}$ is the Kosterlitz-Thouless transition temperature: the system is paramagnetic at any temperature for sufficiently small frustration. We predict the critical behaviour of the correlation length and of gauge-invariant magnetic susceptibilities as the frustration goes to zero. These predictions are fully confirmed by the numerical simulations.' address: - '$^1$ Scuola Normale Superiore and INFN, I-56126 Pisa, Italy' - '$^2$ Dipartimento di Fisica dell’Università di Roma “La Sapienza" and INFN, Sezione di Roma I, I-00185 Roma, Italy' - '$^3$ Dipartimento di Fisica dell’Università di Pisa and INFN, Sezione di Pisa, I-56127 Pisa, Italy' author: - 'Vincenzo Alba$^1$, Andrea Pelissetto$^2$ and Ettore Vicari$^3$' title: ' The uniformly frustrated two-dimensional $XY$ model in the limit of weak frustration ' --- Introduction ============ The uniformly frustrated two-dimensional (2D) $XY$ model is defined by the lattice Hamiltonian $${\cal H} = - \sum_{\langle xy \rangle } {\rm Re} \,\psi_x U_{xy} \psi_y^* = - \sum_{\langle xy \rangle} {\rm cos}(\theta_x - \theta_y+A_{xy}), \label{xymodB}$$ where $\psi_x\equiv e^{i\theta_x}$ and $U_{xy}\equiv e^{i A_{xy}}$. 2D arrays of coupled Josephson junctions in a magnetic field are interesting physical realizations of this model [@FZ-01]. 
In this case, the sum $C(P_{nm})$ of the variables $A_{xy}$ along the links of an elementary plaquette $P_{nm}$, $$\begin{aligned} C(P_{nm}) & \equiv & A_{(n,m),(n+1,m)} + A_{(n+1,m),(n+1,m+1)} \nonumber \\ && - A_{(n,m+1),(n+1,m+1)} - A_{(n,m),(n,m+1)},\end{aligned}$$ is related to the flux of an external magnetic field applied along an orthogonal direction: $C(P_{nm}) = {a^2 B/\Phi_0}$, where $a$ is the lattice spacing, $B$ is the magnetic field and $2\Phi_0=hc/e$. Hamiltonian (\[xymodB\]) depends on $A_{xy}$ through the phases $U_{xy}$ and thus the relevant physical quantity is the product of the phases around a plaquette, i.e., $U(P) \equiv \exp[i C(P)]$. If $U(P)$ is not 1, ${\cal H}$ is frustrated. In this paper we assume $U(P)$ to be independent of the chosen plaquette, i.e., that $$U(P) = e^{2\pi i f}, \label{deff}$$ with $0\le f \le 1$, independent of $P$. Using the invariance of the Hamiltonian under the transformation $\psi_x\to\psi^*_x$, it is not restrictive to take $f$ in the interval $0\le f \le 1/2$. We will work in a finite lattice of size $L^2$ with periodic boundary conditions. Therefore, we have $$\prod_P U(P) = 1,$$ where the product is extended over all lattice plaquettes. This implies that $f L^2$ must be an integer. Hamiltonian (\[xymodB\]) is invariant under the local gauge transformations $$\psi_x \rightarrow V_x \psi_x, \qquad U_{xy} \rightarrow V_x^* U_{xy} V_y , \label{ginv}$$ where $V_x$ is a phase, $|V_x| = 1$. Physical observables must be gauge invariant. For such observables, the choice of the fields $A_{xy}$ is irrelevant: only the value of $f$ is relevant. In a finite volume, this statement is strictly true only if free boundary conditions are taken. If one considers periodic boundary conditions, one must also specify the value of $\exp (i \sum A_{xy})$ along two non-trivial lattice paths that wind around the lattice (they are sometimes called Polyakov loops). 
For instance, one must also fix $P_1(m) = \exp(i\sum_n A_{(n,m),(n+1,m)})$ and $P_2(m) = \exp(i\sum_n A_{(m,n),(m,n+1)})$ for some fixed value of $m$. If we require the absence of magnetic circulation along these non-trivial paths, we must have $P_1(m) = P_2(m) = 1$ for any $m$. On a finite lattice of size $L^2$, this condition can be satisfied only if $f L$ is an integer, a condition that will always be satisfied in the numerical simulations that we shall present. The critical behaviour of uniformly frustrated $XY$ models changes dramatically with $f$. For $f=0$ the model corresponds to the standard $XY$ model, which is not frustrated. It shows a Kosterlitz-Thouless transition at $T_{KT}$ \[on a square lattice [@Has-05] $T_{KT}=0.89294(8)$\], where the correlation length $\xi$ diverges as $\ln \xi\sim (T-T_{KT})^{-1/2}$ for $T\gtrsim T_{KT}$; the low-temperature phase, $T<T_{KT}$, is characterized by quasi long-range order—correlation functions decay algebraically—associated with a line of fixed points. In the case of maximal frustration, i.e., for $f=1/2$, the system undergoes two very close continuous transitions (their critical temperature is $T\approx 0.45$ on the square lattice), respectively in the Ising and Kosterlitz-Thouless universality classes, see, e.g., [@HPV-05; @Korshunov-06] and references therein. The critical behaviour for other values of $f$ is even more complex, see, e.g., [@CD-85; @KVB-95; @FT-95; @HW-95; @LL-95; @SMK-97; @CS-85; @PCKJC-00], and [@Ling-etal-96] for experiments. There may be several transitions, whose nature is not clear in most of the cases. Even the structure of the ground state is only partially understood [@TJ-83; @SB-93; @LLK-02]. For $f=1/n$, where $n$ is an integer, if $T_c$ is the critical temperature where the paramagnetic phase ends, $T_c$ decreases with increasing $n$; for example, [@LL-95] $T_c\lesssim 0.22$ for $f=1/3$ and [@HW-95] $T_c\lesssim 0.05,\,0.03$ for $n=30$ and 56, respectively.
These studies suggest that $T_c$ vanishes [@HW-95; @FT-95] as $T_c\sim 1/n$ when $n\to \infty$. The critical behaviour for irrational values of $f$ is even less clear, see, e.g., [@CS-85; @PCKJC-00]. In this case, there are some indications that the system is paramagnetic for any $T$ and that a glassy transition occurs at zero temperature [@PCKJC-00]. The above-mentioned works studied the critical behaviour as a function of the temperature $T$, while keeping the uniform frustration $f$ fixed. In this paper we investigate a different critical limit, i.e., we consider the limit $f\to 0$ at fixed $T$ in the region $T\le T_{KT}$. In other words, we investigate the effect of a small uniform frustration on the low-temperature $XY$ critical behaviour. We show that a uniform frustration is a relevant perturbation at the fixed points that occur in the $XY$ model for $T \le T_{KT}$. As soon as $f$ is non-vanishing, the correlation length becomes finite and the system is paramagnetic. The critical behaviour for small values of $f$ can be understood within the Coulomb-gas picture [@FHS-78]. If one considers the Villain Hamiltonian corresponding to (\[xymodB\]), one can write the partition function as $$Z_{\rm Villain} = \int \prod_{x} d\theta_x\, e^{-\beta {\cal H}} = Z_{SW} \sum_{\{n_x\}} \exp\left(2 \pi \beta {\cal H}_{\rm CG}\right), \label{Villain}$$ where [@FHS-78] $Z_{SW}$ is the spin-wave contribution and ${\cal H}_{\rm CG}$ is the Coulomb-gas Hamiltonian: $${\cal H}_{\rm CG} = {1\over 2} \sum_{ij} (n_i-f) V({\ensuremath{{{r}}}}_i-{\ensuremath{{{r}}}}_j) (n_j-f), \label{cgas}$$ where $n_i$ is an integer (vorticity) defined at the site $i$ of the dual lattice and $V({\ensuremath{{{r}}}})$ is the lattice Coulomb potential. In (\[Villain\]) the sum over $n_x$ is restricted to configurations satisfying the neutrality condition [@FHS-78] $\sum_i (n_i-f)=0$. For $f = 0$ and $T< T_{KT}$ this representation allows one to show that correlation functions decay algebraically.
The two-point correlation function is the product of a spin-wave contribution, which decays algebraically, and of a vortex contribution. For $T< T_{KT}$ charged vortices are strictly bound to form dipoles and the corresponding correlation function also decays algebraically [@JKKN-77]. For $f>0$ the picture changes. For small $f$, in the temperature interval $f T_{KT} < T < T_{KT}$, there are unbounded particles with $n = 0$ and charge $-f$, which screen the Coulomb interaction among the vortices of charge $n - f\approx n$, $n\not=0$. The Debye screening length can be easily computed. Consider a vortex of charge 1, surrounded by particles of charge $-f$. Since there is one charge $-f$ for each lattice site, complete screening is achieved when these charges occupy a circle of area $A$, such that $Af = 1$. Thus, the screening length $\xi$ should be proportional to $f^{-1/2}$. In this picture, for $f\to 0$, the system is equivalent to a dilute gas (the density is proportional to $f^{1/2}$) of neutral particles interacting by means of a screened Coulomb potential $V_{sc}(r)$. We can thus perform a standard virial expansion to predict that the vortex-vortex correlation function is proportional to $V_{sc}(r)$, hence decays exponentially with a rate controlled by the Debye screening length. This argument indicates that, for sufficiently small $f$ and any $T<T_{KT}$, the system is paramagnetic with a correlation length that scales as $$\xi \sim f^{-1/2}, \label{xif}$$ for $f\to 0$. Equation (\[xif\]) can also be predicted by simple dimensional arguments. For a given value of $f$ and $T$, consider a real-space renormalisation-group (RG) transformation. Eliminate lattice sites obtaining a lattice with a link length that is twice that of the original lattice. In lattice units we have $\xi' = \xi/2$, where we use a prime for quantities that refer to the decimated lattice. Analogously, we obtain $f' = 4 f$ for the frustration parameter. 
It follows $\xi' {f'}^{1/2} = \xi {f}^{1/2}$. This quantity is therefore constant under RG transformations, i.e. $\xi {f}^{1/2} = c$. Under the RG transformation, the Hamiltonian parameters also change. In particular, the transformation induces a temperature change $T\to T'$. However, for small $f$, one is close to the $XY$ line of fixed points and thus we expect $T'\approx T$. Thus, the condition $\xi f^{1/2} = c$ holds at (approximately) fixed temperature and $f\to 0$. Therefore, it implies (\[xif\]). In this paper we wish to verify numerically (\[xif\]) and study the critical behaviour of gauge-invariant susceptibilities (they will be defined in the next section). Note that, in a sense, at fixed $T\le T_{KT}$, the magnetic flux $f$ plays the role of the reduced temperature, with an associated correlation-length exponent $\nu=1/2$. The paper is organised as follows. In Sec. \[sec2\] we define gauge-invariant correlation functions, the associated susceptibilities and correlation lengths, and discuss the expected critical behaviour. In Sec. \[sec3\] we present some Monte Carlo (MC) results that fully confirm the theoretical predictions. Definitions and general scaling properties {#sec2} ========================================== In order to check prediction (\[xif\]), we consider two different gauge-invariant correlation functions: $$\begin{aligned} G_{sq}({\ensuremath{{{x}}}};{\ensuremath{{{y}}}}) &\equiv & |\langle \psi_{{\ensuremath{{{x}}}}} \psi_{{\ensuremath{{{y}}}}}^* \rangle|^2, \nonumber \\ G_\Gamma({\ensuremath{{{x}}}};{\ensuremath{{{y}}}}) &\equiv& \langle {\rm Re} \, \psi_{{\ensuremath{{{x}}}}} U[\Gamma_{{\ensuremath{{{x}}}};{\ensuremath{{{y}}}}}] \psi_{{\ensuremath{{{y}}}}}^* \rangle. 
\label{gxdef}\end{aligned}$$ Here $\Gamma_{{\ensuremath{{{x}}}};{\ensuremath{{{y}}}}}$ is a path that connects sites ${\ensuremath{{{x}}}}$ and ${\ensuremath{{{y}}}}$ and $U[\Gamma_{{\ensuremath{{{x}}}};{\ensuremath{{{y}}}}}]$ is a product of phases associated with the links that belong to $\Gamma_{{\ensuremath{{{x}}}};{\ensuremath{{{y}}}}}$. More precisely, if a link $\langle wz\rangle$ belongs to the path, $w$ and $z$ have coordinates $w = (w_1,w_2)$ and $z = (z_1,z_2)$, such that $z_1 - w_1 \ge 0$ and $z_2 - w_2 \ge 0$, we define $R_{wz} = U_{wz}$ if point $w$ occurs before point $z$ while moving along the path; otherwise, we set $R_{wz} = U_{wz}^*$. The phase $U[\Gamma_{{\ensuremath{{{x}}}};{\ensuremath{{{y}}}}}]$ is the product of all the phases $R_{wz}$ associated with the links belonging to the path. The definition (\[gxdef\]) of $G_\Gamma({\ensuremath{{{x}}}};{\ensuremath{{{y}}}})$ depends on a family of paths $\Gamma = \{\Gamma_{x;y}\}$. We assume this family to be translationally invariant: the path $\Gamma_{x;y}$ is obtained by rigidly translating the path $\Gamma_{0;y-x}$ that connects the origin to $y-x$. In this case, the correlation function $G_\Gamma({\ensuremath{{{x}}}};{\ensuremath{{{y}}}})$ is uniquely defined by specifying the paths from the origin to any point $x$. Because of the presence of the gauge field, the Hamiltonian is not translationally invariant, nor is it symmetric under the symmetry transformations of the lattice. Nonetheless, there are generalized symmetries of the Hamiltonian that also involve gauge transformations. For instance, if $Lf$ is an integer, the Hamiltonian is invariant under the generalized translations $$\begin{aligned} \psi'_{(n,m)} = \psi_{(n+1,m)} U_{(n,m),(n+1,m)}^* e^{-2 \pi i m f}, \nonumber \\ \psi'_{(n,m)} = \psi_{(n,m+1)} U_{(n,m),(n,m+1)}^* e^{2 \pi i n f}. \label{transl}\end{aligned}$$ Gauge-invariant correlation functions are invariant under these transformations. 
This implies that they do not depend on $x$ and $y$ separately, but only on the difference $y - x$. This invariance can be understood intuitively if one notes that gauge-invariant quantities should only depend on the value of the flux through a plaquette, i.e., $U(P)$, and of the Polyakov correlations $P_1(m)$ and $P_2(m)$. In our model $U(P)$ is independent of $P$ and, if $Lf$ is an integer, $P_1(m)$ and $P_2(m)$ do not depend on $m$: hence, translation invariance holds. Analogously, the Hamiltonian is invariant under generalized transformations that involve lattice symmetries and gauge transformations. For instance, in infinite volume the Hamiltonian is invariant under the generalized reflection transformations $$\begin{aligned} \psi'_{(n,m)} &=& \psi_{(-n,m)}^* K_m^* \prod_{k=0}^{|n|-1} [U_{(k,m),(k+1,m)} U^*_{(-k-1,m),(-k,m)}], \end{aligned}$$ where $$K_m = \cases{\displaystyle{ \prod_{k=0}^{m-1} U^2_{(0,k),(0,k+1)}} & for $m \ge 1$, \cr 1 & for $m=0$, \cr \displaystyle{ \prod_{k=0}^{-m-1} U^{*2}_{(0,k+m),(0,k+m+1)}} & for $m \le -1$. }$$ Under these symmetries $G_{sq}(x;y)$ transforms covariantly. If $T$ is a lattice symmetry, $G_{sq}(x;y) = G_{sq}(Tx;Ty)$. These relations do not hold in general for $G_\Gamma(x;y)$ since a lattice symmetry also changes the path family. Given $G_\Gamma(x;y)$ and $G_{sq}(x;y)$, we define the corresponding susceptibilities $$\begin{aligned} \chi_\Gamma \equiv \sum_y G_\Gamma(x;y), \qquad\qquad \chi_{sq} \equiv \sum_y G_{sq}(x;y), \label{chi-def}\end{aligned}$$ where the sums are extended over all lattice points $y$. Because of translational invariance, $\chi_{sq}$ and $\chi_\Gamma$ do not depend on the point $x$. Of course, $\chi_\Gamma$ depends on the family of paths $\Gamma = \{\Gamma_{x;y}\}$. 
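As a concrete illustration of the phases $R_{wz}$ and of $U[\Gamma]$, the following minimal sketch (the function names are ours) computes the oriented product of link phases along a lattice path. For concreteness we use the Landau-type gauge $A=0$ on horizontal links and $A=2\pi f x_1$ on vertical links, i.e. the gauge adopted later in the simulations:

```python
import numpy as np

def link_phase(w, z, f):
    """Phase U_{wz} of the canonically oriented link from w to z, in the
    Landau-type gauge: A = 0 on horizontal links, A = 2*pi*f*w1 on the
    vertical link from (w1, w2) to (w1, w2 + 1)."""
    if z == (w[0] + 1, w[1]):        # horizontal link
        return 1.0 + 0.0j
    if z == (w[0], w[1] + 1):        # vertical link
        return np.exp(2j * np.pi * f * w[0])
    raise ValueError("not a canonically oriented lattice link")

def path_phase(path, f):
    """Oriented product of the phases R_{wz} along a path (list of sites):
    R_{wz} = U_{wz} when the link is traversed along its canonical
    orientation, and U_{wz}^* otherwise."""
    U = 1.0 + 0.0j
    for w, z in zip(path, path[1:]):
        if z[0] > w[0] or z[1] > w[1]:          # canonical direction
            U *= link_phase(w, z, f)
        else:                                    # reversed: conjugate phase
            U *= np.conj(link_phase(z, w, f))
    return U

# circulating once around a plaquette picks up the flux U(P) = e^{2*pi*i*f}
f = 1.0 / 8
plaquette = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
flux = path_phase(plaquette, f)
```

Circulating once around an elementary plaquette, the product of the $R_{wz}$ reduces to $U(P)=e^{2\pi i f}$, as required by (\[deff\]).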
Then, for any gauge-invariant correlation function $G(x;y)$ we define on a finite lattice of size $L^2$ $$\begin{aligned} F \equiv \sum_{y\equiv (y_1,y_2)} \cos [q_{\rm min} (y_1-x_1)] G(x;y) \label{def-Fx}\end{aligned}$$ where $x \equiv (x_1,x_2) $ and $q_{\rm min} \equiv 2\pi/L$. The correlation length is defined by $$\xi^2 \equiv {1\over 4 \sin^2(q_{\rm min}/2)} {\chi - F\over F}.$$ Note that an equally good definition of $F$ is $$\begin{aligned} F \equiv \sum_{y\equiv (y_1,y_2)} \cos [q_{\rm min} (y_2-x_2)] G(x;y).\end{aligned}$$ For the correlation function $G_{sq}(x;y)$, one can show that these two definitions of $F$ are equivalent, but this is not generically the case for $G_\Gamma(x;y)$, since this quantity is not symmetric under lattice transformations. In the following we use definition (\[def-Fx\]) for $F$. In the introduction we derived a prediction for the correlation length, $\xi \sim f^{-1/2}$. We now wish to obtain a similar result for the susceptibilities. In order to predict their scaling behaviour, let us note that, for $f = 0$ and $T\le T_{KT}$, $\langle \psi_0\psi_x^*\rangle$ decays algebraically, i.e., $\langle \psi_0\psi_x^*\rangle \sim x^{-\eta(T)}$. The critical exponent $\eta(T)$ depends on $T$ and varies between $\eta(0) = 0$ and $\eta(T_{KT})=1/4$. For $f\not = 0$, it is natural to assume that $$\begin{aligned} \chi_\Gamma \sim \int_{x < \xi} d^2x\, x^{-\eta(T)} \sim \xi^{2-\eta(T)} \sim f^{-1+\eta(T)/2}, \nonumber \\ \chi_{sq} \sim \int_{x < \xi} d^2x\, x^{-2\eta(T)} \sim \xi^{2-2\eta(T)} \sim f^{-1+\eta(T)}. \label{chibeh}\end{aligned}$$ In particular, these equations predict $\chi_\Gamma \sim f^{-7/8}$ and $\chi_{sq} \sim f^{-3/4}$ at $T = T_{KT}$. The check of the previous prediction for $\chi_{sq}$ does not present conceptual difficulties. Instead, when considering $\chi_\Gamma$, one should keep in mind that this quantity depends on a path family. Thus, there is a natural question that should be considered first.
Given a path family $\Gamma^{(f_1)}$ for a given value $f=f_1$ of the frustration parameter, we must specify which path family $\Gamma^{(f_2)}$ must be considered for $f = f_2\not=f_1$. Only if $\Gamma^{(f_2)}$ is chosen appropriately does the relation $${\chi_{\Gamma^{(f_1)}} \over \chi_{\Gamma^{(f_2)}} } \approx \left( {f_1\over f_2} \right)^{-1+\eta(T)/2}$$ hold for $f_1,f_2\to 0$. A naive choice would be $\Gamma^{(f_1)} = \Gamma^{(f_2)}$. As we now discuss, this choice is not correct: different path families should be chosen for different values of $f$. To clarify this issue, let us imagine we are working in the continuum. For each $f$, let us consider a family of paths $\Gamma^{(f)} = \{\Gamma^{(f)}_{x;y}\}$. Because of translation invariance, we can limit ourselves to paths going from the origin to any point $y$. These paths can be parametrised in terms of a function $X^{(f)}(t;y)$ such that $X^{(f)}(0;y) = 0$ for all $y$ and $X^{(f)}(1;y) = y$. The path from the origin to $y$ is given by $$x = X^{(f)}(t;y) \qquad t\in [0,1]. \label{path-X}$$ To determine the relation between $\Gamma^{(f_1)}$ and $\Gamma^{(f_2)}$, one should remember that $x/\xi$ should be kept fixed in the critical limit. Thus, we expect the path family to be invariant only if all lengths are expressed in terms of $\xi$. In other words, set $\bar{x} = x/\xi_f$, $\bar{y} = y/\xi_f$, and rewrite (\[path-X\]) as $$\bar{x} = {1\over \xi_f} X^{(f)}(t;\bar{y}\xi_f) \qquad t\in [0,1], \label{path-X2}$$ where $\xi_f$ is the correlation length for the system with frustration parameter $f$. The natural requirement is therefore that the right hand side be independent of $f$, that is $${1\over \xi_{f_2}} X^{(f_2)}(t;\bar{y}\xi_{f_2}) = {1\over \xi_{f_1}} X^{(f_1)}(t;\bar{y}\xi_{f_1}) \; .$$ Since we expect $\xi_f\sim f^{-1/2}$, we obtain the relation $$X^{(f_2)}(t;r y) = r X^{(f_1)}(t;y), \qquad r = \left({f_1\over f_2}\right)^{1/2}. \label{scal-rel-Gamma}$$ In Fig.
\[fig-Xf\] we report an example corresponding to $f_1 = 4 f_2$. The paths from the origin to $y_1$ and $y_2$ which belong to $\Gamma^{(f_1)}$ completely fix the paths to $2y_1$ and $2y_2$ belonging to $\Gamma^{(f_2)}$. Of course, on the lattice it is impossible to ensure (\[scal-rel-Gamma\]) exactly. However, note that the relevant scale is fixed by the correlation length and thus, violations at the level of the lattice spacing are irrelevant in the critical limit. In the following we shall consider the path families $\Gamma_n \equiv \{\Gamma_{n;0;x}\}$, which are specified by a non-negative integer $n$. They are defined as follows (see Fig. \[fig-Gamman\]). The path $\Gamma_{n;0;x}$ connecting the origin to the point $x \equiv (x_1,x_2)$ consists of three segments: the first one connects the origin to $(-n,0)$; the second one goes from $(-n,0)$ to $(-n,x_2)$; the last one is horizontal, from $(-n,x_2)$ to point $x$. We indicate with $\chi_n(f)$ the corresponding susceptibilities and with $\xi_{n}(f)$ the corresponding correlation lengths. These families of paths behave simply under the transformation (\[scal-rel-Gamma\]). If we consider the path $\Gamma_{n;0;x}$ for $f = f_1$, the mapping (\[scal-rel-Gamma\]) implies that, for $f = f_2$, one should consider the path $\Gamma_{rn;0;rx}$ between the origin and the point $rx$. This implies that, if we take the path family $\Gamma_n$ for $f = f_1$, we must consider $\Gamma_{rn}$ for $f = f_2$. As a consequence, $\chi_n$ and $\xi_n$ scale correctly only if we consider the limit $n\to\infty$, $f\to 0$ at fixed $n f^{1/2}$. Thus, we predict the scaling behaviours $$\begin{aligned} \chi_n &=& f^{-1+\eta(T)/2} F_\chi (n f^{1/2}), \nonumber \\ \xi_{n} &=& f^{-1/2} F_{\xi} (n f^{1/2}), \label{scal-with-n}\end{aligned}$$ where $F_\chi(x)$ and $F_\xi(x)$ are appropriate scaling functions. In the next Section, we verify these predictions. 
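For concreteness, the three-segment paths $\Gamma_{n;0;x}$ can be enumerated as follows (a minimal sketch; the function name is ours):

```python
def gamma_path(n, x):
    """Lattice sites of Gamma_{n;0;x}: origin -> (-n, 0) -> (-n, x2) -> x,
    one unit step at a time, with x = (x1, x2) and n a non-negative integer."""
    x1, x2 = x
    path = [(0, 0)]
    def walk(target):
        # append unit steps (first along the x-axis, then the y-axis)
        # until the target corner of the current segment is reached
        while path[-1] != target:
            cx, cy = path[-1]
            if cx != target[0]:
                cx += 1 if target[0] > cx else -1
            else:
                cy += 1 if target[1] > cy else -1
            path.append((cx, cy))
    for corner in [(-n, 0), (-n, x2), (x1, x2)]:
        walk(corner)
    return path

p = gamma_path(2, (3, 2))   # e.g. the path Gamma_{2;0;(3,2)}
```

Note that replacing $(n,x)$ by $(2n,2x)$ produces exactly the path prescribed by (\[scal-rel-Gamma\]) with $r=2$, i.e. when comparing the frustrations $f$ and $f/4$.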
Numerical results {#sec3} ================= We perform simulations for various values of $f=1/m$, $m$ integer, and $T$ in the interval $T\le T_{KT}$, where $T_{KT}$ is the critical temperature of the $XY$ model, $T_{KT}=0.89294(8)$ [@Has-05]. We consider finite lattices of size $L^2$, where $L$ is a multiple of $1/f$, and periodic boundary conditions for the spins. Since we perform MC simulations in a gapped phase, boundary conditions are expected to be irrelevant in the thermodynamic limit. Cluster algorithms cannot be used in the presence of frustration and thus we use an overrelaxed algorithm, which consists in performing microcanonical and Metropolis updates. Predictions (\[xif\]) and (\[chibeh\]) hold in the thermodynamic limit, i.e. for sufficiently large values of the ratio $L/\xi$, where finite-size effects are negligible. We find numerically that size effects are much smaller than our statistical errors for $Lf\gtrsim 3$. In the simulations we choose the gauge $$\begin{aligned} &A_{{\ensuremath{{{x}}}}{\ensuremath{{{y}}}}}= 0 \quad & {\rm if} \quad {\ensuremath{{{y}}}}= {\ensuremath{{{x}}}}+\hat{1}, \\ &A_{{\ensuremath{{{x}}}}{\ensuremath{{{y}}}}}= {2 \pi f x_1} \quad & {\rm if} \quad {\ensuremath{{{y}}}}= {\ensuremath{{{x}}}}+\hat{2}, \nonumber \label{lgau}\end{aligned}$$ which is consistent with (\[deff\]) and with $P_1(m) = P_2(m) = 1$, as long as $L$ is an integer multiple of $1/f$. With this gauge choice the computation of the susceptibilities $\chi_n$ and of the corresponding correlation lengths $\xi_n$ is quite simple. Indeed, $U[\Gamma_{n;x;y}] = 1$ for any $y$ if the first component of $x$ is $n$, i.e., if $x = (n,m)$, $m$ arbitrary. Thus, if we choose $x = (n,m)$ in definition (\[chi-def\]), we can compute $\chi_n$ without taking into account the phases $U_{xy}$. In practice, we have determined $\chi_n$ by using $$\chi_n = {1\over L } \sum_{m}\sum_y \langle {\rm Re} \, \psi_{{\ensuremath{{{(n,m)}}}}} \psi_{{\ensuremath{{{y}}}}}^* \rangle. 
\label{chinMC}$$ An analogous expression holds for the correlation lengths. In Figs. \[fig:xiscal\] and \[fig:chiscal\] we plot the correlation lengths $\xi_n$ and the susceptibilities $\chi_n$ at $T=T_{KT}$ for several values of $f$ and $n$. In this case $\eta(T)=1/4$ so that $\chi_n$ should scale as $f^{-7/8}$. It is easy to show that $$\chi_n = \chi_{n+1/f}, \qquad\qquad \chi_n = \chi_{1/f - n}, \label{chi-symm}$$ so that in (\[scal-with-n\]) one must restrict oneself to data satisfying $0\le n\le 1/(2 f)$. The results reported in the figures show the scaling behaviour (\[scal-with-n\]) quite precisely, confirming the theoretical arguments. Note that the scaling function $F_\chi(x)$ apparently goes to zero as $x$ increases. This behaviour will be confirmed below by the analysis of a non-gauge-invariant correlation function. Good agreement is also found at $T<T_{KT}$. We check the behaviour of $\chi_{n=0}$ (in this case, the same path family can be used for all values of $f$) down to $T=0.2$. At $T=0.2,0.3,0.4,0.5,0.8$, a fit of $\chi_0$ to $a f^{-1+\eta(T)/2}$ gives $\eta=0.042(8),\,0.050(6),\,0.079(6),\,0.098(7),\,0.171(3)$. These results are in substantial agreement with the leading spin-wave contribution $\eta=T/(2\pi)$ and the MC estimates [@etaest] $\eta=0.036(3),\,0.052(5),\,0.074(6),\,0.100(8),\,0.19(2)$. For example, in Fig. \[chilt\] we show the MC results for $\chi_0$ at $T=0.4$, together with the result of the fit. The data show a clear power-law behaviour in perfect agreement with (\[chibeh\]). We also investigated the critical behaviour of $\chi_{sq}$, which is expected to scale as $f^{-3/4}$. For $1/f=40$, 60, 80, we obtain $\chi_{sq} = 9.933(7)$, 13.630(23), 17.06(4), respectively. These results are fully consistent with the theoretical prediction.
Indeed, the product $f^{3/4} \chi_{sq}$ clearly converges to a constant as $f\to 0$ (corrections are expected to be proportional to $1/\ln(1/f)$, as in the $XY$ model at $T_{KT}$): we have $f^{3/4} \chi_{sq} = 0.6245(5)$, 0.6322(11), 0.6378(15) for the same values of $f$. Finally, we mention that correlation functions which are not gauge invariant show a different behaviour. For example, one may consider the susceptibility $\chi_w$ associated with the two-point function $\langle {\rm Re} \, \psi_{{\ensuremath{{{x}}}}} \psi_{{\ensuremath{{{y}}}}}^* \rangle$ in the gauge (\[lgau\]): $$\chi_w = {1\over L^2} \sum_{x,y} \langle {\rm Re} \, \psi_{{\ensuremath{{{x}}}}} \psi_{{\ensuremath{{{y}}}}}^* \rangle.$$ At $T_{KT}$ it shows a power-law behaviour $\chi_w\sim f^{-\varepsilon}$ as well, but with a power $\varepsilon\approx 0.39$, definitely different from the value $0.875$ of the gauge-invariant definition. This result can be derived analytically. Indeed, we can rewrite $$\chi_w = {1\over L} \sum_{n=0}^{L-1} \chi_n, \label{eq27}$$ where $\chi_n$ is defined in (\[chinMC\]). Using the properties (\[chi-symm\]) of the susceptibilities $\chi_n$, (\[eq27\]) can be rewritten as $$\chi_w \approx {2f} \sum_{n=0}^{1/(2f)} \chi_n.$$ In this range of values of $n$, as is clear from Fig. \[fig:chiscal\], we can use the scaling behaviour (\[scal-with-n\]) and write $$\begin{aligned} \chi_w &\sim& f\times f^{-7/8} \int_0^{1/(2f)} dn\, F(nf^{1/2}) \nonumber \\ &\sim& f^{-3/8} \int_0^{1/(2f^{1/2})} dx\, F(x)\sim f^{-3/8} \int_0^\infty dx\, F(x).\end{aligned}$$ Thus, provided that $F(x)$ is integrable (we already noted that the MC data for $\chi_n$ are consistent with $F(x)\to0$ as $x\to \infty$), we predict $\chi_w\sim f^{-3/8}=f^{-0.375}$, which is consistent with the MC data (see Fig. \[chi-nongauge\]). Note that the critical behaviour of $\chi_w$ depends on the chosen gauge. 
If we use the gauge $$\begin{aligned} &A_{{\ensuremath{{{x}}}}{\ensuremath{{{y}}}}}= {-\pi f x_2} \quad & {\rm if} \quad {\ensuremath{{{y}}}}= {\ensuremath{{{x}}}}+\hat{1}, \\ &A_{{\ensuremath{{{x}}}}{\ensuremath{{{y}}}}}= {\pi f x_1} \quad & {\rm if} \quad {\ensuremath{{{y}}}}= {\ensuremath{{{x}}}}+\hat{2}, \nonumber \label{lgau2}\end{aligned}$$ the susceptibility $\chi_w$ does not diverge and approaches a constant as $f\to 0$. In conclusion, we have shown that a small amount of uniform frustration (equivalently, a small uniform magnetic field) destabilizes the line of fixed points that occur in the $XY$ model for $T \le T_{KT}$. As soon as $f$ is different from zero, the system becomes paramagnetic. The critical behaviour $\xi\sim f^{-1/2}$ can be predicted by simple Coulomb-gas and scaling arguments. Our numerical simulations fully confirm this prediction. Also the scaling behaviour (\[chibeh\]) for the magnetic susceptibilities is fully consistent with the numerical results. References {#references .unnumbered} ========== [99]{} Fazio R and van der Zant H 2001 [*Phys. Rep.*]{} [**355**]{} 235 Hasenbusch M 2005 5869\ Hasenbusch M and Pinn K 1997 63 Hasenbusch M, Pelissetto A and Vicari E 2005 [*J. Stat. Mech.: Theory Exp.*]{} P12002\ Hasenbusch M, Pelissetto A and Vicari E 2005 B [**72**]{} 184502 Korshunov S E 2006 [*Usp. Fiz. 
Nauk*]{} [**176**]{} 233\ Korshunov S E 2006 [*Physics Uspekhi*]{} [**49**]{} 225 (translation) Choi M Y and Doniach S 1985 B [**31**]{} 4516 Korshunov S E, Vallat A and Beck H 1995 B [**51**]{} 3071 Franz M and Teitel S 1995 B [**51**]{} 6551 Hattel S A and Wheatley J M 1995 B [**51**]{} 11951 Lee S and Lee K-C 1995 B [**52**]{} 6706 Straley J P, Morozov A Y and Kolomeisky E B 1997 2534 Choi M Y and Stroud D 1985 B [**32**]{} 7532\ Choi M Y and Stroud D 1987 B [**35**]{} 7109 Park S Y, Choi M Y, Kim B J, Jeon G S and Chung J S 2000 3484 Ling X S, Lezec H J, Higgins M J, Tsai J S, Fujita J, Numata H, Nakamura Y, Ochiai Y, Chao Tang, Chaikin P M and Bhattacharya S 1996 2989\ (errata) 1996 410 Teitel S and Jayaprakash C 1983 1999 Straley J P and Barnett G M 1993 B [**48**]{} 3309 Lee S J, Lee J R and Kim B 2002 025701 Fradkin E, Huberman B A and Shenker S H, 1978 B [**18**]{} 4789 José J V, Kadanoff L P, Kirkpatrick S and Nelson D R 1977 B [**16**]{} 1217 Berche B 2003 586
--- abstract: 'We develop a new approach for controllable single-photon transport between two remote one-dimensional coupled-cavity arrays, used as quantum registers, mediated by an additional one-dimensional coupled-cavity array, acting as a quantum channel. A single two-level atom located inside one cavity of the intermediate channel is used to control the long-range coherent quantum coupling between two remote registers, thereby functioning as a quantum switch. With a time-independent perturbative treatment, we find that the leakage of quantum information can in principle be made arbitrarily small. Furthermore, our method can be extended to realize a quantum router in multi-register quantum networks, where single-photons can be either stored in one of the registers or transported to another on demand. These results are confirmed by numerical simulations.' address: | $^1$ CEMS, RIKEN, Wako-shi, Saitama 351-0198, Japan\ $^2$ School of Physics, Beijing Institute of Technology, Beijing 100081, China\ $^3$ Department of Physics, University of Michigan, Ann Arbor, Michigan 48109-1040, USA author: - 'Wei Qin$^{1,2}$' - 'Franco Nori$^{1,3}$' title: 'Controllable single-photon transport between remote coupled-cavity arrays' --- introduction {#se:section1} ============ Quantum networks are fundamental for quantum information science [@net1; @net2]. An elementary quantum network is composed of spatially-separated quantum nodes for quantum information manipulation and storage, with these nodes connected by quantum channels for quantum information distribution [@net3]. Thus, the implementation of such a quantum network relies upon the ability to realize the reliable transport of quantum states through these quantum channels. To this end, in the form of flying qubits, photons serve as an optimal choice for carrying information for long-distance quantum communications [@photon1; @photon2; @photon3; @photon4; @photon5]. 
Another approach to connect distant qubits is to utilize solid-state systems [@solid]. Such solid-state devices include electron spins of nitrogen-vacancy (NV) colour centers in diamond [@NV1; @NV2; @NV3; @NV4; @NV5; @NV6], nuclear spins in nuclear magnetic resonance (NMR) [@nuclear1; @nuclear2], flux qubits in superconductors [@flux1; @flux2; @flux3; @flux4], cold atoms in optical lattices [@cold1; @cold2; @cold3], and even magnons in ferromagnets [@mag1; @mag2; @mag3; @mag4]. Moreover, coupled-cavity arrays (CCAs) are currently being explored, for example, in superconducting transmission line resonators [@STLR1; @STLR2; @STLR3; @STLR4; @STLR5], photonic crystal resonators [@PCR1; @PCR2; @PCR3] and toroidal microresonators [@TM1; @TM2; @TM3]. CCAs offer an inherent advantage because each cavity can be individually addressed. Indeed, both coherent optical information storage and transport can be achieved in such arrays, and at the same time the need for an external interface between the quantum register and the quantum channel is eliminated, because both use the same fundamental hardware. In addition to simulating quantum many-body phenomena [@MB1; @MB2; @MB3], these CCAs also demonstrate promising applications in controlling coherent photon transport by using single controllable two-level or three-level atoms [@STLR3; @CCA_switch1; @CCA_switch2; @CCA_switch3; @CCA_switch4; @CCA_switch5; @CCA_switch6; @CCA_switch7]. Photons are transmitted or reflected by tuning the photon-atom scattering; in this case, the atom behaves as a quantum switch. Although extensively studied [@ext1; @ext2; @ext3; @ext4; @ext5; @ext6; @ext7], prior work on the coherent transport of photons has typically focused on nearby CCAs coupled via photon-atom scattering. However, in order to carry out quantum network operations, information needs to be controllably transported between distant quantum registers.
Thus, a detailed understanding of controllable quantum channels which could connect these distant registers is of both fundamental and practical importance. Here we theoretically introduce a novel method for the controllable coherent transport of single-photons that uses a CCA, one cavity of which contains a two-level atom, as a quantum channel to connect two remote CCAs acting as quantum registers. The key element underlying our method is that the atom is harnessed to control the long-range coherent interaction between the two boundary registers. Specifically, within the weak-coupling regime, the registers are resonantly coupled by a specific collective eigenmode of the bare channel, yielding an effective photon transport channel, such that time evolution results in a swap operation between the two registers. However, when this eigenmode is coupled to the atom, it is dressed and split into two dressed modes. If the two dressed modes are significantly detuned from the registers, photons are reflected back and, as a result, the time evolution functions as an identity operation. Furthermore, we directly extend this approach to the case of multi-register quantum networks, where a single-photon can be redirected to different registers at will, in an analogous manner to a quantum router. As opposed to previous work, the proposed model is applicable to controlling the coherent transport of a single-photon in an arbitrary quantum state between two remote quantum registers over an arbitrarily long range. physical model and controllable transport of single-photons =========================================================== ![(color online) (a) A 1D CCA with $N$ cavities and a two-level atom is employed as a quantum channel to connect two distant quantum registers composed of two identical 1D CCAs, each containing $n$ cavities.
In the limit of $\left\{g_{0},g_{I},J_{I}\right\}\ll g_{c}$, the full dynamics can be reduced to an effective model, only involving the two boundary registers, the atom and the zero-energy mode of the bare channel. (b) Effective coupling configuration in the no-atom case of $J_{I}=0$. By ensuring $\{g_{0},g_{I}\}\ll g_{c}$, the boundary registers are resonant with a single boson mode ($k=z$) while the large detunings are eliminated, so that unitary evolution will result in a swap operation between the two registers. (c) Effective coupling configuration in the single-atom case of $\{g_{0},g_{I}\}\ll J_{I}\ll g_{c}$. Owing to the large detunings between the registers and the dressed states, such registers are decoupled from the intermediate channel. The incoming photon is thus reflected off this channel, and the quantum state of the photon will remain unchanged after time evolution.[]{data-label="fig1"}](fig1.eps){width="8.5cm"} The basic idea is to use two identical one-dimensional (1D) CCAs to enact quantum registers connected by a quantum channel consisting of an additional CCA and a two-level atom, shown schematically in Fig. \[fig1\](a). Let $\hat{c}_{i}^{\dag}$ ($i=1,\cdots,N$) be the creation operator of the $i$th cavity of the channel, and $\hat{c}_{l_{j}/r_{j}}^{\dag}$ ($j=1,\cdots,n$) be that of the $j$th cavity of the left or right register, assuming that all cavities have a common frequency $\omega$. The atom, characterized by a ground state $|g\rangle$ and an excited state $|e\rangle$, is embedded in the $m$th cavity of the channel and is resonantly coupled to the mode of this cavity with strength $J_{I}$. We assume that the intrachannel coupling $g_{c}$ is fixed, and the intraregister coupling $$g_{j}=g_{0}\sqrt{j\left(2n+1-j\right)}/2,$$ with $g_{0}$ being a constant, is non-uniform [@nonuniform1; @nonuniform2], which reveals that each register supports a linear spectrum of $$\lambda_{q}=g_{0}\left(2q-n-1\right),$$ where $q=1,\cdots,n$. 
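The linear spectrum can be verified directly by diagonalizing the tridiagonal single-excitation Hamiltonian of one register. The following sketch (ours, for illustration; the values of $n$ and $g_{0}$ are arbitrary) confirms that the eigenvalues are exactly $\lambda_{q}=g_{0}\left(2q-n-1\right)$:

```python
import numpy as np

g0, n = 1.0, 5                            # illustrative values
j = np.arange(1, n)                       # j = 1, ..., n-1
g = g0 * np.sqrt(j * (2*n + 1 - j)) / 2   # non-uniform couplings g_j

# tridiagonal register Hamiltonian in the single-excitation subspace
H = np.diag(g, 1) + np.diag(g, -1)

# predicted equally spaced spectrum lambda_q = g0 (2q - n - 1)
q = np.arange(1, n + 1)
lam = g0 * (2*q - n - 1)

print(np.allclose(np.sort(np.linalg.eigvalsh(H)), np.sort(lam)))  # True
```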
In a frame rotating at $\omega$, the Hamiltonian governing the total system is $$\begin{aligned} \label{T_Hami} \hat{H}_{T}&=&\sum_{d=l,r}\sum_{j=1}^{n-1}g_{j}\left(\hat{c}_{d_{j}}^{\dag}\hat{c}_{d_{j+1}}+\hat{c}_{d_{j+1}}^{\dag}\hat{c}_{d_{j}}\right)\nonumber\\ &&+\sum_{i=1}^{N-1}g_{c}\left(\hat{c}_{i}^{\dag}\hat{c}_{i+1}+\hat{c}_{i+1}^{\dag}\hat{c}_{i}\right)+\hat{V}_{1}+\hat{V}_{2},\end{aligned}$$ with $$\hat{V}_{1}=g_{I}\left(\hat{c}_{l_{n}}^{\dag}\hat{c}_{1}+\hat{c}_{r_{n}}^{\dag}\hat{c}_{N}+\text{H.c.}\right)$$ and $$\hat{V}_{2}=J_{I}\left(|e\rangle\langle g|\hat{c}_{m}+\text{H.c.}\right),$$ where $g_{I}$ represents the register-channel coupling. Hereafter $d$ stands for $\left\{l,r\right\}$. The CCAs are initially prepared in their vacuum states, containing no atom excitation. Then, a single-photon is injected into the left register to have, for example, an arbitrary input state $$|\phi\rangle_{l}=\sum_{j=1}^{n}\alpha_{j}\hat{c}_{l_{j}}^{\dag}|\text{vac}\rangle_{l},$$ where $|\text{vac}\rangle_{l}$ is the vacuum state of the left register. This implies that the dynamics of the system is confined in a single-excitation subspace spanned by the basis vectors $\left\{|\boldsymbol{d}_{\boldsymbol{j}}\rangle,|\boldsymbol{i}\rangle,|\boldsymbol{e}\rangle\right\}$, where we define $$\begin{aligned} |\boldsymbol{d}_{\boldsymbol{j}}\rangle=\hat{c}_{d_{j}}^{\dag}|\text{vac}\rangle|g\rangle,\quad |\boldsymbol{i}\rangle=\hat{c}_{i}^{\dag}|\text{vac}\rangle|g\rangle,\quad |\boldsymbol{e}\rangle=|\text{vac}\rangle|e\rangle,\nonumber\end{aligned}$$ and $|\text{vac}\rangle$ is the vacuum state of the three CCAs. 
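In the single-excitation subspace, $\hat{H}_{T}$ reduces to a $(2n+N+1)\times(2n+N+1)$ real symmetric matrix. A minimal construction sketch (the basis ordering and the helper name `build_HT` are our own conventions, not taken from the paper):

```python
import numpy as np

def build_HT(n, N, m, g0, gc, gI, JI):
    """H_T in the single-excitation subspace.

    Basis ordering: [l_1..l_n, c_1..c_N, r_n..r_1, atom], so the photonic
    part forms one open chain l_1 - ... - l_n - c_1 - ... - c_N - r_n - ... - r_1.
    """
    dim = 2*n + N + 1
    H = np.zeros((dim, dim))
    j = np.arange(1, n)
    g = g0 * np.sqrt(j * (2*n + 1 - j)) / 2   # intraregister couplings g_j
    # left register: l_j - l_{j+1} with strength g_j
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = g[i]
    # channel: c_i - c_{i+1} with uniform strength g_c
    for i in range(n, n + N - 1):
        H[i, i + 1] = H[i + 1, i] = gc
    # right register (stored in reversed order): r_{j+1} - r_j with g_j
    for i in range(n + N, 2*n + N - 1):
        H[i, i + 1] = H[i + 1, i] = g[n - 2 - (i - n - N)]
    # V1: register-channel couplings, and V2: atom coupled to cavity c_m
    H[n - 1, n] = H[n, n - 1] = gI                  # l_n - c_1
    H[n + N - 1, n + N] = H[n + N, n + N - 1] = gI  # c_N - r_n
    a = 2*n + N                                     # atom index
    H[n - 1 + m, a] = H[a, n - 1 + m] = JI          # c_m - |e><g|
    return H

H = build_HT(n=2, N=7, m=3, g0=0.01, gc=1.0, gI=0.02, JI=0.05)
print(H.shape)  # (12, 12)
```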
The unitary evolution under $H_{T}$ results in $$|\varphi\left(t\right)\rangle=\sum_{j=1}^{n}\alpha_{j}\left[f_{d_{j},l_{j}}\left(t\right)|\boldsymbol{d}_{\boldsymbol{j}}\rangle+\sqrt{\epsilon^{d}_{j}}|\boldsymbol{\epsilon}_{\boldsymbol{j}} ^{\boldsymbol{d}}\rangle\right],$$ where $$f_{d_{j'},l_{j}}\left(t\right)=\langle \boldsymbol{d}_{\boldsymbol{j'}}|e^{-i\hat{H}_{T}t}|\boldsymbol{l}_{\boldsymbol{j}}\rangle$$ is the transition amplitude of an excitation between the cavities $l_{j}$ and $d_{j'}$, $\epsilon^{d}_{j}=1-|f_{d_{j},l_{j}}\left(t\right)|^{2}$, and $|\boldsymbol{\epsilon}_{\boldsymbol{j}}^{\boldsymbol{d}}\rangle$ is a normalized linear combination of all the basis vectors apart from $|\boldsymbol{d}_{\boldsymbol{j}}\rangle$. We consider the limit $$\left\{g_{0},g_{I},J_{I}\right\}\ll g_{c},$$ and work perturbatively in $\hat{V}_{1}$ and $\hat{V}_{2}$. Through an orthogonal transformation $\hat{c}_{i}=\sum_{k=1}^{N}\psi_{i,k}\hat{f}_{k}$ with $$\psi_{i,k}=\sqrt{\frac{2}{N+1}}\sin\left(\frac{ik\pi}{N+1}\right),$$ one can find that the bare channel possesses a bosonic spectrum of $\Lambda_{k}=2g_{c}\cos\left[k\pi/\left(N+1\right)\right]$ [@nonuniform1; @ut]. Consequently, $\hat{V}_{1}$ and $\hat{V}_{2}$ are transformed to $$\hat{V}_{1}=g_{I}\sum_{k=1}^{N}\psi_{1,k}\left[\hat{c}_{l_{n}}^{\dag}\hat{f}_{k}+\left(-1\right)^{k-1}\hat{c}_{r_{n}}^{\dag}\hat{f}_{k}+\text{H.c.}\right]$$ and $$\hat{V}_{2}=J_{I}\sum_{k=1}^{N}\psi_{m,k}\left(|e\rangle\langle g|\hat{f}_{k}+\text{H.c.}\right),$$ respectively. To control the coherent transport of a single-photon, we restrict our attention to odd $N$, which yields the existence of a single zero-energy mode in the bare channel corresponding to $k=z\equiv\left(N+1\right)/2$. Thus the registers and the atom are resonantly coupled to this mode. 
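The transformation and the zero mode can be checked numerically as well. A sketch (ours; $N$ and $g_{c}$ are illustrative) verifying that the columns of $\psi_{i,k}$ diagonalize the bare channel with eigenvalues $\Lambda_{k}$, and that odd $N$ gives a single zero-energy mode at $k=z=\left(N+1\right)/2$:

```python
import numpy as np

N, gc = 7, 1.0                            # odd N, illustrative values
Hc = gc * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))

k = np.arange(1, N + 1)
Lam = 2 * gc * np.cos(k * np.pi / (N + 1))        # Lambda_k
i = np.arange(1, N + 1)
psi = np.sqrt(2 / (N + 1)) * np.sin(np.outer(i, k) * np.pi / (N + 1))

print(np.allclose(Hc @ psi, psi * Lam))           # True: eigenpairs match
z = (N + 1) // 2
print(abs(Lam[z - 1]) < 1e-12)                    # True: zero-energy mode
```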
In the limit of $g_{0}\ll g_{c}$, the width of the energy band of each register, $$\Omega_{0}=|\lambda_{1}-\lambda_{n}|,$$ would be much smaller than the energy gap between the $z$th and ($z\pm1$)th modes of the bare channel, $$\Omega_{1}=|\Lambda_{z\pm1}-\Lambda_{z}|;$$ that is, $\Omega_{0}\ll\Omega_{1}$. In combination with $$\left\{g_{I}\psi_{1,z},J_{I}|\psi_{m,z}|\right\}\ll\Omega_{1},$$ the registers and the atom are significantly detuned from the nonzero-energy modes of the bare channel, so that neglecting these off-resonant couplings leads to an effective Hamiltonian $$\begin{aligned} \label{eff_Ham} \hat{H}_{\text{eff}}&=&\sum_{d=l,r}\sum_{j=1}^{n-1}g_{j}\left(\hat{c}_{d_{j}}^{\dag}\hat{c}_{d_{j+1}}+\hat{c}_{d_{j+1}}^{\dag}\hat{c}_{d_{j}}\right)\nonumber\\ &&+g_{I}\psi_{1,z}\left[\hat{c}_{l_{n}}^{\dag}\hat{f}_{z}+\left(-1\right)^{z-1}\hat{c}_{r_{n}}^{\dag}\hat{f}_{z}+\text{H.c.}\right]\nonumber\\ &&+J_{I}\psi_{m,z}\left(|e\rangle\langle g|\hat{f}_{z}+\text{H.c.}\right)\end{aligned}$$ as also shown in Fig. \[fig1\](a). This dynamics can be used to make a single-photon switch based upon the dressing of the zero-energy mode by the atom. ![(color online) Numerical simulation results of the transmission infidelity $\xi_{r}$ in the uncoupled case ($J_{I}=0$) for either $N=7$, $n=2$ (a) or $N=101$, $n=10$ (b). The analytic upper bound is shown by the dashed red curve.[]{data-label="fig2"}](fig2.eps){width="7.6cm"} If the atom is uncoupled from the cavity ($J_{I}=0$) [@atom], the two spatially-separated registers are coherently coupled by means of the bare channel.
It follows, on choosing $g_{I}\psi_{1,z}=g_{n}$ [@nonuniform1], that $$\hat{c}_{l_{j}}^\dag\left(\tau\right)=\left(-1\right)^{n+z-1}\hat{c}_{r_{j}}^\dag$$ for a specific time $\tau=\pi/g_{0}$, which leads to $$f_{r_{j},l_{j}}\left(\tau\right)=\left(-1\right)^{n+z-1}.$$ We therefore have $|\varphi\left(\tau\right)\rangle=\sum_{j=1}^{n}\alpha_{j}|\boldsymbol{r_{j}}\rangle$, implying that the photon is transported from the left register to the right register, and the time evolution is referred to as a swap gate between the two registers \[see Fig. \[fig1\](b)\]. However, the $z$th mode of the bare channel can, in the case when the atom is in the coupled state, be split into a doublet of dressed-states separated by $$\Omega_{2}=2J_{I}|\psi_{m,z}|.$$ Under the assumption that $g_{0}\ll J_{I}$, the two boundary registers are significantly detuned from the two dressed-states if $m$ is odd, and hence, the photon is reflected off the channel, from which the left register is decoupled. In this case, the effective Hamiltonian of Eq. (\[eff\_Ham\]) is reduced to $$\begin{aligned} &\hat{H}_{\text{eff}}&=\sum_{j=1}^{n-1}g_{j}\left(\hat{c}_{l_{j}}^{\dag}\hat{c}_{l_{j+1}}+\hat{c}_{l_{j+1}}^{\dag}\hat{c}_{l_{j}}\right)\nonumber\\ &&=\sum_{j,j'=1}^{n}\hat{A}_{j,j'}\hat{c}_{l_{j}}^{\dag}\hat{c}_{l_{j'}},\end{aligned}$$ where $\hat{A}$ is an $n\times n$ coupling matrix. Furthermore, applying the Heisenberg equations of motion for the operators gives $$\hat{c}_{l_{j}}^{\dag}\left(t\right)=\sum_{j'=1}^{n}\left[\exp{\left(i\hat{A}t\right)}\right]_{j,j'}\hat{c}_{l_{j'}}^{\dag}.$$ Owing to $$\left(\lambda_{q+1}-\lambda_{q}\right)/2g_{0}=1,$$ we find that $$\hat{A}=2g_{0}\hat{P}^{-1}\hat{S}_{x}\hat{P},$$ where $\hat{S}_{x}$ is the $x$ component of a pseudo angular momentum $S=\left(n-1\right)/2$, and $\hat{P}$ is a similarity transformation matrix. 
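Because the eigenvalues of $\hat{A}$ are the equally spaced $\lambda_{q}$, the matrix exponential at $\tau=\pi/g_{0}$ can be evaluated numerically; the sketch below (ours, with illustrative parameters) shows that it reduces to $(-1)^{n-1}$ times the identity:

```python
import numpy as np
from scipy.linalg import expm

g0, n = 1.0, 4                            # illustrative values
j = np.arange(1, n)
g = g0 * np.sqrt(j * (2*n + 1 - j)) / 2
A = np.diag(g, 1) + np.diag(g, -1)        # coupling matrix A of one register

tau = np.pi / g0
U = expm(1j * A * tau)                    # evolution operator exp(i A tau)
print(np.allclose(U, (-1)**(n - 1) * np.eye(n)))  # True
```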
In the Schwinger picture [@Sch_pic1; @Sch_pic2], $\hat{S}_{x}$ can be expressed in terms of two bosons $\hat{\gamma}_{1}$, $\hat{\gamma}_{2}$, and therefore be thought of as a fictitious Hamiltonian, $$\hat{S}_{x}=\left(\hat{\gamma}_{1}^{\dag}\hat{\gamma}_{2}+\hat{\gamma}_{2}^{\dag}\hat{\gamma}_{1}\right)/2.$$ By calculating the time-evolution operator under this fictitious Hamiltonian, we straightforwardly obtain $$\exp\left(i2g_{0}\hat{S}_{x}\tau\right)\hat{\gamma}_{1}^{\dag}\exp\left(-i2g_{0}\hat{S}_{x}\tau\right)=-\hat{\gamma}_{1}^{\dag},$$ and $$\exp\left(i2g_{0}\hat{S}_{x}\tau\right)\hat{\gamma}_{2}^{\dag}\exp\left(-i2g_{0}\hat{S}_{x}\tau\right)=-\hat{\gamma}_{2}^{\dag},$$ such that $$\exp{\left(i\hat{A}\tau\right)}=(-1)^{n-1}\hat{I},$$ yielding $$\hat{c}_{l_{j}}^{\dag}\left(\tau\right)=\left(-1\right)^{n-1}\hat{c}_{l_{j}}^{\dag}.$$ The resulting transition amplitude is $$f_{l_{j},l_{j}}\left(\tau\right)=(-1)^{n-1}.$$ The final state of the system then becomes $|\varphi\left(\tau\right)\rangle=|\varphi\left(0\right)\rangle$, and thus the quantum state of the input photon remains unchanged after a time evolution that functions as an identity operation \[see Fig. \[fig1\](c)\]. Leakage of quantum information ============================== ![(color online) Numerical simulation results of the reflection infidelity $\xi_{l}$ in the coupled case for either $N=7$, $n=2$, $J_{I}/g_{c}=0.05$ (a) or $N=101$, $n=10$, $J_{I}/g_{c}=0.1$ (b). The analytic upper bound is shown by the dashed red curve. Here, we choose $m=3$.[]{data-label="fig3"}](fig3.eps){width="7.6cm"} Having explicitly demonstrated eigenmode-mediated single-photon transport, we now calculate the leakage of quantum information using perturbation theory. Such a leakage arises only from the off-resonant couplings between the registers and the channel.
Upon performing a first-order perturbative treatment, we find that the leakage of quantum information results from the two cavities coupled directly to the intermediate channel. Specifically, in the uncoupled case, the full dynamics can be mapped onto an effective photon transport channel perturbatively coupled to a finite bosonic environment, whose Hamiltonian is $$\hat{H}_{z}=\sum_{k\neq z}\Lambda_{k}\hat{f}_{k}^{\dag}\hat{f}_{k}.$$ The interaction between them is $$\hat{V}_{z}=g_{I}\sum_{k\neq z}\psi_{1,k}\left[\hat{c}_{l_{n}}^{\dag}\hat{f}_{k}+\left(-1\right)^{k-1}\hat{c}_{r_{n}}^{\dag}\hat{f}_{k}+\text{H.c.}\right].$$ Up to second order, $\epsilon_{j}^{r}$ is modified as $$\epsilon_{j}^{r}\simeq4\Delta_{r}\delta_{n,j},$$ where $$\label{eq:deltar} \Delta_{r}=\sum_{k<z}\Delta_{k}^{r}\left[1-\left(-1\right)^{n+k+z}\cos\left(\Lambda_{k}\tau\right)\right]$$ and $$\Delta^{r}_{k}=\left(g_{I}\psi_{1,k}/\Lambda_{k}\right)^{2}.$$ In the coupled case, the zero-energy mode of the bare channel is the only state that is dressed by the atom owing to $J_{I}\ll g_{c}$. The boundary registers are thus coupled to the two dressed-states in addition to the bosonic environment; however, the coupling to this environment can be neglected so long as $g_{0}\ll J_{I}$. With a similar perturbative treatment as before, $\epsilon_{j}^{l}$ is given by $$\epsilon_{j}^{l}\simeq4\Delta_{l}\delta_{n,j},$$ where $$\label{eq:deltal} \Delta_{l}=\Delta_{z}^{l}\left[1-\left(-1\right)^{n-1}\cos\left(J_{I}\psi_{m,z}\tau\right)\right]$$ and $$2\Delta_{z}^{l}=\left(g_{I}\psi_{1,z}/J_{I}\psi_{m,z}\right)^{2}.$$ These expressions show that encoding quantum information in the cavities between $d_{1}$ and $d_{n-1}$ would be more efficient. In order to quantify quantum information leaking into the off-resonant modes of the intermediate channel, we need to employ two average fidelities, the reflection fidelity $$F_{l}=\int\!\!
d\phi\;\langle\phi|\hat{\rho}_{l}\left(\tau\right)|\phi\rangle$$ and the transmission fidelity $$F_{r}=\int\!\! d\phi\;\langle\phi|\hat{\rho}_{r}\left(\tau\right)|\phi\rangle.$$ Here, $\hat{\rho}_{l/r}\left(\tau\right)$ is the output reduced density matrix of the left or right register, the integration is over all input pure states and $\int\!\! d\phi$ is normalized to unity. The fidelity $F_{d}$ (where $d=l,r$) can, after a straightforward calculation, be expressed in terms of the transition amplitudes, $$\label{eq:avera_fidel} F_{d}=\frac{1}{n\left(n+1\right)}\sum_{j,j'=1}^{n}\left[|f_{d_{j'},l_{j}}\left(\tau\right)|^{2}+f_{d_{j},l_{j}}\left(\tau\right)f_{d_{j'},l_{j'}}^{*}\left(\tau\right)\right].$$ To demonstrate our theoretical results, we numerically simulate the transmission infidelity $$\xi_{r}=1-F_{r}$$ (see Fig. \[fig2\]) and the reflection infidelity $$\xi_{l}=1-F_{l}$$ (see Fig. \[fig3\]) for the $N=7$, $n=2$ and $N=101$, $n=10$ cases, as two examples. Specifically, in finite channels of fixed length, the infidelity $\xi_{d}$ is plotted as a function of $g_{I}/g_{c}$ along with an analytic upper bound. Working within the weak-coupling limit, $\xi_{d}$ can be analytically expressed as $\xi_{d}\simeq2\Delta_{d}$, and has the upper bound $$\label{eq:upper_bounds} \xi_{d}\leq\frac{8}{n}\left(\Delta_{z}^{l}\delta_{l,d}+\sum_{k<z}\Delta_{k}^{r}\delta_{r,d}\right).$$ This upper bound is in excellent agreement with the numerical results, shown in Figs. \[fig2\] and \[fig3\] [@fig23]. In addition, we find that decreasing $g_{I}/g_{c}$ can suppress the leakage of quantum information, so $\xi_{d}$ can in principle be made arbitrarily small. ![(color online) (a) Schematic illustration of a network which is made up of five registers and eight channels. Each register is coupled to at least three channels. Depending upon the atomic state, a single-photon can be stored in one of the registers or transported between them as required.
(b) The average fidelities plotted as functions of the evolution time for a single-photon travelling along the network in (a). All the atoms are in the coupled states during the time intervals $[0,\tau]$ and $(5\tau,6\tau]$, with $J_{I}/g_{c}=0.05$; the atoms in the channels $C_{1}$, $C_{5}$, $C_{6}$ and $C_{7}$ are uncoupled during the time intervals $(\tau,2\tau]$, $(2\tau,3\tau]$, $(3\tau,4\tau]$ and $(4\tau,5\tau]$, respectively. The solid black curve in (b) corresponds to $F_{1}$, the dashed red curve to $F_{2}$, the short dashed blue curve to $F_{3}$, the dashed-dotted orange curve to $F_{4}$, and the dashed-double dotted violet curve to $F_{5}$. Here, $g_{I}/g_{c}=0.0001$, $N=7$, $n=2$ and $m=3$.[]{data-label="fig4"}](fig4_1.eps "fig:"){width="7.3cm"} ![](fig4_2.eps "fig:"){width="8.5cm"} Extensions ========== In direct analogy to a classical computer, the potential power of a quantum computer increases exponentially with the number of qubits, but arbitrarily increasing the number of qubits is not easy to achieve. One approach to addressing this challenge is to envision a quantum computer containing a number of quantum registers [@network], so the study of multi-register setups is important for building a powerful future quantum computer. While we have focused on the two-register case, the extension to multi-register networks is directly analogous. In such networks, the registers and the channels are the same as described above, except that a single register needs to be coupled to multiple channels. The couplings of the channels to the registers and to the atoms are also chosen as before. When all the atoms are in the coupled state, all the registers are decoupled from the channels, and quantum information is stored in the independent registers. However, when one of the atoms is uncoupled, the corresponding bare channel coherently couples two distant registers, which remain decoupled from the other channels, and therefore information transport will be reliably achieved between them. Together with individually addressable atoms, quantum information can be redirected from one register to another, in direct analogy to a quantum routing function. For simplicity, let us consider a specific network of five registers $R_{1},\cdots,R_{5}$ and eight channels $C_{1},\cdots,C_{8}$, and demonstrate a single-photon travelling along the path $R_{1}\rightarrow R_{2}\rightarrow R_{3}\rightarrow R_{4}\rightarrow R_{5}$, shown in Fig. \[fig4\](a). Suppose now that a single-photon is initially prepared in the register $R_{1}$ with an arbitrary input state.
To confirm this routing, we numerically simulate the average fidelity $F_{\theta}$ ($\theta=1,\cdots,5$) between the input state of the register $R_{1}$ and the output state of the register $R_{\theta}$ \[see Fig. \[fig4\](b)\]. These numerical results show that controllable single-photon transport in a network can be achieved with very high fidelity. Although we elucidate only one of the paths in a simple network, in principle our method can enable any arbitrary path and more complex networks. Conclusions =========== We have proposed and analyzed controllable single-photon transport, using a 1D CCA to coherently couple two identical spatially-separated 1D CCAs, and a two-level atom to control the transport of single-photons. We study the pure Hamiltonian evolution in this hybrid system. In the case when the atom is absent, a single-photon with an arbitrary unknown quantum state (for example, initially in the left CCA) will be transported to the right CCA, with a transmission fidelity arbitrarily close to unity. On the contrary, as a result of the coupling of the atom to the intermediate CCA, this single-photon will be reflected back into the left CCA with its quantum state unchanged, with a reflection fidelity also arbitrarily close to unity. The approach can also be directly generalized to multi-register quantum networks and thus, due to its scalability, applied to realize quantum information processing devices. It should be noted that, in the no-atom case, this method allows for arbitrary multi-photon state coherent transport through the intermediate CCA, even in a thermal equilibrium state. The proposed setup can be examined in the context of circuit quantum electrodynamics with superconducting circuits. For example, superconducting qubits act as two-level atoms and transmission line resonators behave as cavities. In this situation, two nearest-neighbor resonators can be straightforwardly connected via a capacitor.
The coupling strength could experimentally reach $2\pi\times 31$ MHz [@inter_resonator_coupling] and in fact this reachable strength can be markedly larger by increasing the capacitance. Moreover, the coupling between single superconducting qubits and transmission line resonators has also been implemented in the strong-coupling regime and even in the ultrastrong-coupling regime, with strengths of up to $2\pi\times 107$ MHz [@strong] and $12\%$ of the cavity frequency [@ustrong], respectively. Hence, our theoretical model seems to be experimentally accessible using current technologies. While we have chosen to focus on the special case of a CCA system, this framework can be employed to achieve controllable quantum state transfer in a wide range of systems, including, for example, coupled quantum spin chains. acknowledgments =============== WQ is supported by the Basic Research Fund of the Beijing Institute of Technology under Grant No. 20141842005. FN is supported by the RIKEN iTHES Project, MURI Center for Dynamic Magneto-Optics via the AFOSR Award No. FA9550-14-1-0040, the Impact Program of JST, a Grant-in-Aid for Scientific Research (A). [99]{} H. J. Kimble, The quantum internet, Nature (London) [**453**]{}, 1023 (2008). S. Perseguers, M. Lewenstein, A. Acin, and J. I. Cirac, Quantum random networks, Nat. Phys. [**6**]{}, 539 (2010). J. I. Cirac, P. Zoller, H. J. Kimble, and H. Mabuchi, Quantum State Transfer and Entanglement Distribution among Distant Nodes in a Quantum Network, Phys. Rev. Lett. [**78**]{}, 3221 (1997). D. Bouwmeester, J. W. Pan, K. Mattle, M. Eibl, H. Weinfurter, and A. Zeilinger, Experimental quantum teleportation, Nature (London) [**390**]{}, 575 (1997). D. S. Naik, C. G. Peterson, A. G. White, A. J. Berglund, and P. G. Kwiat, Entangled State Quantum Cryptography: Eavesdropping on the Ekert Protocol, Phys. Rev. Lett. [**84**]{}, 4733 (2000). W. Tittel, J. Brendel, H. Zbinden, and N. 
Gisin, Quantum Cryptography Using Entangled Photons in Energy-Time Bell States, Phys. Rev. Lett. [**84**]{}, 4737 (2000). L. M. Duan, M. D. Lukin, J. I. Cirac, and P. Zoller, Long-distance quantum communication with atomic ensembles and linear optics, Nature (London) [**414**]{}, 413 (2001). S. Ritter [*et al*]{}., An elementary quantum network of single atoms in optical cavities, Nature (London) [**484**]{}, 195 (2012). Z.-L. Xiang, S. Ashhab, J. Q. You, and F. Nori, Hybrid quantum circuits: Superconducting circuits interacting with other quantum systems, Rev. Mod. Phys. [**85**]{}, 623 (2013). L. Childress, M. V. G. Dutt, J. M. Taylor, A. S. Zibrov, F. Jelezko, J. Wrachtrup, P. R. Hemmer, and M. D. Lukin, Coherent Dynamics of Coupled Electron and Nuclear Spin Qubits in Diamond, Science [**314**]{}, 281 (2006). A. M. Zagoskin, J. R. Johansson, S. Ashhab, and F. Nori, Quantum information processing using frequency control of impurity spins in diamond, Phys. Rev. B [**76**]{}, 014122 (2007). R. Hanson and D. D. Awschalom, Coherent manipulation of single spins in semiconductors, Nature [**453**]{}, 1043 (2008). A. Bermudez, F. Jelezko, M. B. Plenio, and A. Retzker, Electron-Mediated Nuclear-Spin Interactions between Distant Nitrogen-Vacancy Centers, Phys. Rev. Lett. [**107**]{}, 150503 (2011). N. Y. Yao, L. Jiang, A. V. Gorshkov, P. C. Maurer, G. Giedke, J. I. Cirac, and M. D. Lukin, Scalable architecture for a room temperature solid-state quantum information processor, Nat. Commun. [**3**]{}, 800 (2012). M. W. Doherty, N. B. Manson, P. Delaney, F. Jelezko, J. Wrachtrup, and L. C. L. Hollenberg, The nitrogen-vacancy colour centre in diamond, Phys. Rep. [**528**]{}, 1 (2013). J. F. Zhang, G. L. Long, W. Zhang, Z. W. Deng, W. Z. Liu, and Z. H. Lu, Simulation of Heisenberg XY interactions and realization of a perfect state transfer in spin chains using liquid nuclear magnetic resonance, Phys. Rev. A [**72**]{}, 012331 (2005). G. R. Feng, G. F. Xu, and G. L.
Long, Experimental Realization of Nonadiabatic Holonomic Quantum Computation, Phys. Rev. Lett. [**110**]{}, 190501 (2013). Y. X. Liu, L. F. Wei, J. S. Tsai, and F. Nori, Controllable Coupling between Flux Qubits, Phys. Rev. Lett. [**96**]{}, 067003 (2006). A. O. Niskanen, K. Harrabi, F. Yoshihara, Y. Nakamura, S. Lloyd, and J. S. Tsai, Quantum Coherent Tunable Coupling of Superconducting Qubits, Science [**316**]{}, 723 (2007). S. Ashhab [*et al*]{}., Interqubit coupling mediated by a high-excitation-energy quantum object, Phys. Rev. B [**77**]{}, 014510 (2008). W. Xiong, D. Y. Jin, J. Jing, C. H. Lam, and J. Q. You, Controllable coupling between a nanomechanical resonator and a coplanar-waveguide resonator via a superconducting flux qubit, Phys. Rev. A [**92**]{}, 032318 (2015). M. Greiner, O. Mandel, T. Esslinger, T. W. Hänsch, and I. Bloch, Quantum phase transition from a superfluid to a Mott insulator in a gas of ultracold atoms, Nature [**415**]{}, 39 (2002). L.-M. Duan, E. Demler, and M. D. Lukin, Controlling Spin Exchange Interactions of Ultracold Atoms in Optical Lattices, Phys. Rev. Lett. [**91**]{}, 090402 (2003). L. Banchi, A. Bayat, P. Verrucchi, and S. Bose, Nonperturbative Entangling Gates between Distant Qubits Using Uniform Cold Atom Chains, Phys. Rev. Lett. [**106**]{}, 140501 (2011). H. Huebl, C. W. Zollitsch, J. Lotze, F. Hocke, M. Greifenstein, A. Marx, R. Gross, and S. T. B. Goennenwein, High Cooperativity in Coupled Microwave Resonator Ferrimagnetic Insulator Hybrids, Phys. Rev. Lett. [**111**]{}, 127003 (2013). X. Zhang, C. L. Zou, L. Jiang, and H. X. Tang, Strongly Coupled Magnons and Cavity Microwave Photons, Phys. Rev. Lett. [**113**]{}, 156401 (2014). Y. Tabuchi, S. Ishino, A. Noguchi, T. Ishikawa, R. Yamazaki, K. Usami, and Y. Nakamura, Coherent coupling between a ferromagnetic magnon and a superconducting qubit, Science [**349**]{}, 405 (2015). D. Zhang, X. M. Wang, T. F. Li, X. Q. Luo, W. Wu, F. Nori, and Y. Q.
You, Cavity quantum electrodynamics with ferromagnetic magnons in a small yttrium-iron-garnet sphere, npj Quantum Information [**1**]{}, 15014 (2015). M. A. Sillanpää, J. I. Park, and R. W. Simmonds, Coherent quantum state storage and transfer between two phase qubits via a resonant cavity, Nature (London) [**449**]{}, 438 (2007). J. Majer [*et al*]{}., Coupling superconducting qubits via a cavity bus, Nature (London) [**449**]{}, 443 (2007). L. Zhou, Z. R. Gong, Y. X. Liu, C. P. Sun, and F. Nori, Controllable Scattering of a Single Photon inside a One-Dimensional Resonator Waveguide, Phys. Rev. Lett. [**101**]{}, 100501 (2008). P. Nataf and C. Ciuti, Protected Quantum Computation with Multiple Resonators in Ultrastrong Coupling Circuit QED, Phys. Rev. Lett. [**107**]{}, 190402 (2011). C. P. Yang, Q. P. Su, and F. Nori, Entanglement generation and quantum information transfer between spatially-separated qubits in different cavities, New J. Phys. [**15**]{}, 115003 (2013). H. Altug and J. Vučković, Two-dimensional coupled photonic crystal resonator arrays, Appl. Phys. Lett. [**84**]{}, 161 (2004). A. D. Greentree, C. Tahan, J. H. Cole, and L. C. L. Hollenberg, Quantum phase transitions of light, Nat. Phys. [**2**]{}, 856 (2006). P. Lodahl, S. Mahmoodian, and S. Stobbe, Interfacing single photons and single quantum dots with photonic nanostructures, Rev. Mod. Phys. [**87**]{}, 347 (2015). D. K. Armani, T. J. Kippenberg, S. M. Spillane, and K. J. Vahala, Ultra-high-Q toroid microcavity on a chip, Nature (London) [**421**]{}, 925 (2003). B. Peng [*et al*]{}., Parity-time-symmetric whispering-gallery microcavities, Nat. Phys. [**10**]{}, 394 (2014). B. Peng [*et al*]{}., Loss-induced suppression and revival of lasing, Science [**346**]{}, 328 (2014). M. J. Hartmann, F. G. S. L. Brandão, and M. B. Plenio, Quantum many-body phenomena in coupled cavity arrays, Laser Photon. Rev. [**2**]{}, 527 (2008). I. M. Georgescu, S. Ashhab, and F. Nori, Quantum simulation, Rev. Mod. Phys.
[**86**]{}, 153 (2014). J. S. Douglas, H. Habibian, C.-L. Hung, A. V. Gorshkov, H. J. Kimble, and D. E. Chang, Quantum many-body models with cold atoms coupled to photonic crystals, Nat. Photon. [**9**]{}, 326 (2015). J. T. Shen and S. Fan, Strongly Correlated Two-Photon Transport in a One-Dimensional Waveguide Coupled to a Two-Level System, Phys. Rev. Lett. [**98**]{}, 153003 (2007). D. Witthaut and A. S. S[ø]{}rensen, Photon scattering by a three-level emitter in a one-dimensional waveguide, New J. Phys. [**12**]{}, 043052 (2010). Z. H. Wang, Y. Li, D. L. Zhou, C. P. Sun, and P. Zhang, Single-photon scattering on a strongly dressed atom, Phys. Rev. A [**86**]{}, 023824 (2012). L. Zhou, L. P. Yang, Y. Li, and C. P. Sun, Quantum Routing of Single Photons with a Cyclic Three-Level System, Phys. Rev. Lett. [**111**]{}, 103604 (2013). W. Zhu, Z. H. Wang, and D. L. Zhou, Multimode effects in cavity QED based on a one-dimensional cavity array, Phys. Rev. A [**90**]{}, 043828 (2014). F. Lombardo, F. Ciccarello, and G. M. Palma, Photon localization versus population trapping in a coupled-cavity array, Phys. Rev. A [**89**]{}, 053826 (2014). J. Lu, Z. H. Wang, and L. Zhou, T-shaped single-photon router, Opt. Express [**23**]{}, 22955 (2015). P. Longo, P. Schmitteckert, and K. Busch, Few-Photon Transport in Low-Dimensional Systems: Interaction-Induced Radiation Trapping, Phys. Rev. Lett. [**104**]{}, 023602 (2010); Few-photon transport in low-dimensional systems, Phys. Rev. A [**83**]{}, 063828 (2011). D. Roy, Two-Photon Scattering by a Driven Three-Level Emitter in a One-Dimensional Waveguide and Electromagnetically Induced Transparency, Phys. Rev. Lett. [**106**]{}, 053601 (2011). L. Zhou, Y. Chang, H. Dong, L. M. Kuang, and C. P. Sun, Inherent Mach-Zehnder interference with “which-way” detection for single-particle scattering in one dimension, Phys. Rev. A [**85**]{}, 013806 (2012). H. Zheng, D. J. Gauthier, and H. U.
Baranger, Strongly correlated photons generated by coupling a three- or four-level system to a waveguide, Phys. Rev. A [**85**]{}, 043832 (2012). C. Martens, P. Longo, and K. Busch, Photon transport in one-dimensional systems coupled to three-level quantum impurities, New J. Phys. [**15**]{}, 083019 (2013). J. Lu, L. Zhou, L. M. Kuang, and F. Nori, Single-photon router: Coherent control of multichannel scattering for single photons with quantum interferences, Phys. Rev. A [**89**]{}, 013805 (2014). W. B. Yan and H. Fan, Control of single-photon transport in a one-dimensional waveguide by a single photon, Phys. Rev. A [**90**]{}, 053807 (2014). M. Christandl, N. Datta, A. Ekert, and A. J. Landahl, Perfect State Transfer in Quantum Spin Networks, Phys. Rev. Lett. [**92**]{}, 187902 (2004). H. Ian, Y. X. Liu, and F. Nori, Excitation spectrum for an inhomogeneously dipole-field-coupled superconducting qubit chain, Phys. Rev. A [**85**]{}, 053833 (2012). E. Lieb, T. Schultz, and D. Mattis, Two Soluble Models of an Antiferromagnetic Chain, Ann. Phys. (NY) [**16**]{}, 407 (1961). In fact, we can introduce an auxiliary atomic state $|\tilde{g}\rangle$ in addition to the states $|g\rangle$ and $|e\rangle$. The states $|g\rangle$ and $|\tilde{g}\rangle$ belong to the ground state manifold. We assume that $|\tilde{g}\rangle$ is completely uncoupled to the cavity mode, so that the interaction $V_{2}$ does not play a role in Eq. (\[T\_Hami\]) if the atom is in $|\tilde{g}\rangle$, which is equivalent to having $J_{I}=0$. J. Schwinger, [*Quantum Theory of Angular Momentum*]{} (Academic Press, New York, 1965). W. Qin, C. Wang, Y. Cao, and G. L. Long, Multiphoton quantum communication in quantum networks, Phys. Rev. A [**89**]{}, 062314 (2014). In Figs. \[fig2\] and \[fig3\], the infidelities and the upper bounds are plotted according to Eqs. (\[eq:avera\_fidel\]) and (\[eq:upper\_bounds\]), respectively. The fidelity in Eq.
(\[eq:avera\_fidel\]) has been averaged over all possible input pure states of the left register, so the plotted infidelities and upper bounds are the input-state-averaged quantities. Furthermore, we plot these quantities at a specific evolution time such that the quantum state of either the transmitted or reflected single-photon remains unchanged. S. Bose, Quantum communication through spin chain dynamics: an introductory overview, Contemp. Phys. [**48**]{}, 13 (2007). D. L. Underwood, W. E. Shanks, J. Koch, and A. A. Houck, Low-disorder microwave cavity lattices for quantum simulation with photons, Phys. Rev. A [**86**]{}, 023837 (2012). A. A. Houck [*et al*]{}., Generating single microwave photons in a circuit, Nature [**449**]{}, 328 (2007). T. Niemczyk [*et al*]{}., Circuit quantum electrodynamics in the ultrastrong-coupling regime, Nat. Phys. [**6**]{}, 772 (2010).
--- abstract: 'In the recent literature on estimating heterogeneous treatment effects, each proposed method makes its own set of restrictive assumptions about the intervention’s effects and which subpopulations to explicitly estimate. Moreover, the majority of the literature provides no mechanism to identify which subpopulations are the most affected–beyond manual inspection–and provides little guarantee on the correctness of the identified subpopulations. Therefore, we propose Treatment Effect Subset Scan (TESS), a new method for discovering which subpopulation in a randomized experiment is most significantly affected by a treatment. We frame this challenge as a pattern detection problem where we efficiently maximize a nonparametric scan statistic over subpopulations. Furthermore, we identify the subpopulation which experiences the largest distributional change as a result of the intervention, while making minimal assumptions about the intervention’s effects or the underlying data generating process. In addition to the algorithm, we demonstrate that the asymptotic Type I and II error can be controlled, and provide sufficient conditions for detection consistency–i.e., exact identification of the affected subpopulation. Finally, we validate the efficacy of the method by discovering heterogeneous treatment effects in simulations and in real-world data from a well-known program evaluation study.' author: - | **Edward McFowland III**\ Information and Decision Sciences,\ Carlson School of Management,\ University of Minnesota\ \ **Sriram Somanchi**\ IT, Analytics, and Operations,\ Mendoza College of Business,\ University of Notre Dame\ \ **Daniel B. Neill**\ Event and Pattern Detection Laboratory,\ H.J. 
Heinz III College,\ Carnegie Mellon University bibliography: - 'tess.bib' title: Efficient Discovery of Heterogeneous Treatment Effects in Randomized Experiments via Anomalous Pattern Detection --- [*Keywords:*]{} causal inference, program evaluation, algorithms, distributional average treatment effect, treatment effect subset scan, heterogeneous treatment effects Introduction {#sec:Intro} ============ The randomized experiment is employed across many empirical scientific disciplines as an important tool for scientific discovery, by estimating the causal impact of a particular stimulus, treatment, or intervention. From bioinformatics to behavioral economics, large-scale experiments are being used for data-driven discovery of new biological phenomena [@adams-big_genetics-2015; @alyass-big_med-2015] and to inform policy in areas including poverty, education, health, microfinance, and governance [@duflo-field_development-2006]. Furthermore, web-facing organizations–e.g., Google, Microsoft, Amazon, Facebook, and eBay–conduct hundreds of large-scale online experiments daily to measure advertisement effectiveness, guide product development, expedite service adoption, and understand user behaviors [@kohavi-online_experiments-2013]. The increasing popularity of large-scale experiments has resulted in widespread interest in discovering more fine-grained truths about experimental units, most prominently in the form of heterogeneous treatment effects. Heterogeneous treatment effects (HTE) describe the variability in individuals’ responses to an intervention within the sampled population. A portion of this variability may result from systematic differences in the population, potentially captured in the observed covariates. Each combination of covariate values defines a characteristic profile, and a collection of profiles represents a subpopulation–e.g., the subpopulation gender = “male” or the more specific subpopulation gender = “female” & race = “A”.
Discovering heterogeneity can be challenging because there are exponentially many subpopulations–with respect to the number of observable covariates–to consider, potentially resulting in multiple hypothesis testing issues and raising questions of unprincipled post-hoc investigation: searching for a fortuitously statistically significant result [@assmann-subgroup-2000; @weisberg-subgroup-2015]. Although these challenges and concerns are valid, uncovering affected subpopulations can lead to important scientific progress. In a “step toward a new frontier of personalized medicine” [@saul-bidil-2005], the FDA approved the first race-specific drug, whose impact on African-American subjects was first discovered post-hoc from more general experiments [@cohn-bidil-1986; @cohn-bidil-1991]. Conversely, the Perry preschool experiment found highly significant educational and life outcomes for pre-school education [@barnett-perry-1985; @schweinhart-perry-1993; @angrist-mhe-2008], while a re-analysis focused on heterogeneity and multiple hypothesis testing concludes that only girls experience these benefits [@anderson-perry-2008]. The original Perry preschool results were fundamental to the creation of the Head Start pre-school program [@angrist-mhe-2008], a national social program that provides, among other services, early childhood education to low-income children. If large-scale medical and policy decisions are made as a result of such experiments, then it is clear that identifying whether there is heterogeneity in treatment effects should be an integral component of the analysis. In this work we propose a novel computationally efficient framework–Treatment Effect Subset Scanning (TESS)–for *discovering* which subpopulations in a randomized experiment are the most significantly affected by a treatment.
The contributions of this work can be summarized as follows: - Our TESS algorithm enables efficient discovery of subpopulations where the individuals affected by the treatment have observed outcome distributions that are unexpected given the distributions of their corresponding control groups. Unlike the standard approaches, TESS frames the challenge of discovering causal effects in subpopulations as one of *anomalous pattern detection*, and provides a computationally efficient approach for finding conditionally optimal subpopulations. Therefore, TESS is a novel contribution to the burgeoning literature on subset scanning, providing conditions under which the linear-time subset scanning property [@neill-ltss-2012] can be exploited in the context of high-dimensional tensors, and thus extending the applicability of this efficient optimization approach beyond the standard low-dimensional context [@neill-ltss-2012; @mcfowland-fgss-2013; @speakman-graphscan-2015]. - We formalize the objective of identifying subpopulations with significant treatment effects by proposing a new treatment effect estimand (§\[sec:causal\_framework\]). This estimand allows for identification of nuanced distributional treatment effects, as opposed to standard mean shifts, and contains the popular HTE estimands as special cases. - We provide theoretical results on the detection properties of TESS. When the maximum subpopulation score identified by TESS is used as a test statistic for the presence of HTEs, we demonstrate the conditions under which the Type I (Theorem \[thm:false\_posotive\]) and Type II (Theorem \[thm:power\]) errors can jointly be controlled asymptotically. Furthermore, we provide sufficient conditions on how “homogeneous” (Theorem \[thm:subset\_homo\]) and “strong” (Theorem \[thm:subset\_strength\]) the treatment effect must be across the affected subpopulation, such that TESS is guaranteed to detect precisely the correct subpopulation.
- In the process of developing theory for TESS, we prove results for the general nonparametric scan statistic (NPSS), which has been used in the scan statistics literature [@mcfowland-fgss-2013; @feng-npss_graph-2014]. We are the first to provide theoretical guarantees on the detection behavior of subset scanning algorithms. Furthermore, our theory is derived for the high dimensional (tensor) context, with nonparametric score functions, and our results directly hold for the lower-dimensional and parametric cases as well. - Our empirical results (§\[sec:star-eda\]) provide useful insights to practitioners, revealing a potentially affected subpopulation in the Tennessee STAR study of class size and educational outcomes, for a treatment condition (the use of a teacher’s aide in a class of regular size) that was generally considered ineffective. These contributions are enabled by structuring the question of causal inference as one of anomalous pattern detection (and effect maximization), rather than model fitting (and risk minimization). In some contexts, the standard approach of learning an overall good model of the treatment effect response surface is desirable; however, in many cases, the identification of affected subpopulations is the primary goal and model learning is simply a step toward this goal. For these cases it seems more prudent and efficient to circumvent this first step and solve the subpopulation identification problem by framing it as one of pattern or subset discovery. Such a framing has not previously been considered in the literature. The remainder of this work begins with a review of recent statistical learning methods to estimate heterogeneous treatment effects (HTE), and an outline of general gaps that exist in the literature (Section \[sec:related\]). In Section \[sec:causal\_framework\], we propose a new class of causal estimands which generalizes the common HTE estimands and helps to address their limitations. 
Section \[sec:tess\] presents our computationally efficient TESS algorithm, which can identify the subpopulation that experiences the largest distributional change as a result of the treatment, while disregarding provably sub-optimal subpopulations. Additionally, we demonstrate that the probability of committing Type I and II errors can be bounded asymptotically, and we provide sufficient conditions under which our framework will discover the exact subpopulation of interest (§\[sec:estimator-theory\]). In Section \[sec:results\] we demonstrate empirically that our framework exhibits significantly more power to detect subtle signals than current methods, while also providing more precise characterization of the affected subpopulation. We then use TESS to conduct an exploratory analysis of the well-known Tennessee STAR [@word-star-1990] study, discovering previously unidentified treatment effect heterogeneity. Section 6 concludes the paper. Related Work {#sec:related} ============ Heterogeneity in treatment effects is studied across many disciplines. The typical approach is to specify a model of the relationship between variables (usually, linear regression) based on theory or intuition, estimate parameters of the model from data, and test the statistical significance of these parameters. When there is interest in identifying treatment heterogeneity, the researcher is expected to pre-specify the model with the form of heterogeneity included. In the absence of sufficient prior knowledge to guide the precise model specification, it is common to attempt multiple specifications and tests, which can quickly devolve into an unprincipled search. In response, some medical and social science disciplines require pre-analysis plans, which can impede the knowledge discovery process. 
We argue that these challenges necessitate new data-driven tools that enable the *discovery* of unknown, and possibly subtle, treatment effects in subpopulations, while avoiding the pitfalls created by multiple testing and post-hoc analysis. There has been a growing literature using statistical learning methods to provide data-driven approaches for estimating heterogeneous treatment effects in randomized experiments, including both sparse (regularized) regression models and tree-based methods. Recent work has adapted regularization to the causal setting and specifically to treatment effect heterogeneity [@imai-hte_lasso-2013; @tian-hte_lasso-2014; @weisberg-subgroup-2015], proposing methods that frame the treatment effect estimation problem as one of L1-regularized (LASSO) model selection [@tibshirani-lasso-1996]. Although these regularized regression methods select and estimate the importance of covariates, they are still subject to the possibly restrictive assumptions and limitations of (linear) regression, including requiring the researcher to specify which covariate and treatment interactions to include, compromising their ability to *discover* unexpected treatment patterns in subpopulations. Other methods [@su-subgroup-2009; @athey-hte-2016] select subpopulations and estimate treatment effects using the well-known regression tree, which recursively partitions the data into homogeneous subpopulations that share a subset of covariate profile values and have similar outcomes. Although a regression tree can adaptively approximate even complex functions, its effectiveness can be severely compromised in many settings as a result of its greedy partitioning. 
Tree models can be unstable; they can provide extremely discontinuous approximations of an underlying smooth function, limiting overall accuracy; and they can struggle to estimate functions which exhibit specific properties, including when a small proportion of the covariates constitute the influential interactions [@friedman-mars-1991]. Subsequent improvements on the single tree model propose the use of ensemble methods for treatment effect estimation, including the use of Bayesian Additive Regression Trees [@green-hte_bart-2012] and Random Forests [@wager-causalforest-2017]. [@grimmer-hte_ensemble-2017] observes that the machine learning methods proposed for model selection each make implicit modeling assumptions whose validity will vary given the specific problem context. Therefore, the authors propose a general ensemble method that brings together various models, where the weights of their estimates are learned from cross-validation. Combining the predictions of multiple models provides more stable and smooth function estimates [@wager-causalforest-2017]; however, ensembles lose the interpretability of natural groupings (e.g., specific combinations of covariates or clearly defined leaves), which is important for identifying affected subpopulations. Addressing limitations of the prior literature {#sec:lit-limits} ---------------------------------------------- Our proposed methodology for Treatment Effect Subset Scanning (TESS) differs substantially from the prior literature in two main aspects: identifying **general changes in distribution** (or specific quantiles) rather than mean shifts, and a focus on **detecting the subpopulations most significantly affected by treatment** rather than estimating treatment effects for all individuals. First, the stated objective of the majority of methods in the current literature is estimating the average treatment effect (ATE) for the population or some subpopulations.
The ATE measures the difference between the means of the treatment and control outcome distributions but cannot identify other changes in distribution. Anscombe’s quartet [@anscombe-quartet-1973] is a classic example of datasets which have very different distributions but identical first and second moments. In such cases, the ATE will fail to identify an effect, leading to incorrect assumptions about the similarity of the distributions. In other cases, a treatment (such as a policy change which impacts only the very rich or the very poor) may substantially affect various quantile values of the distribution with only slight shifts in the mean. In such cases, estimating the ATE would have low power to identify these distributional changes. Second, the prior literature on heterogeneous treatment effects is primarily focused on estimating the treatment effect for each individual or for a small set of manually defined subpopulations (e.g., estimating separate effects for males vs. females). To the best of our knowledge, none of these approaches provide a mechanism to automatically detect which subpopulations exhibit the most significant treatment effects. Our TESS framework is explicitly designed for subpopulation discovery, with the twin goals of maximizing *1) detection power*, the ability to distinguish between experiments with a subtle heterogeneous treatment effect and those with no treatment effect, and *2) detection accuracy*, the ability to precisely identify the affected subpopulation. This allows us to provide theoretical guarantees on the results of discovery as well as improving both detection power and accuracy in practice. In contrast, the prior literature can be roughly divided into three groups. Methods such as [@wager-causalforest-2017] produce separate estimates of the treatment effect for each individual (or set of individuals who are identical on all observed covariates). 
While such methods can produce a list of treated individuals ranked by estimated treatment effects, this provides little continuity across individuals, with no principled way to identify affected subpopulations or to distinguish significant HTEs from noise. Manually grouping highly affected individuals can easily lead to false positives and incorrect generalizations, as well as low power to detect subtle effects across multiple covariate profiles. Regression-based methods such as [@imai-hte_lasso-2013; @tian-hte_lasso-2014] allow manual inspection of the coefficients for each covariate interacted with the treatment dummy. However, such approaches typically assume a small number of pre-specified interaction terms and cannot identify other affected subpopulations. The extreme alternative of adjusting for each subpopulation separately, including a term for every possible combination of covariates interacted with the treatment, would require exponentially many interaction terms, leading to computational intractability as well as statistical challenges (lack of power and multiple testing). Finally, methods such as causal trees [@athey-hte-2016] and interaction trees [@su-subgroup-2009] use a greedy top-down approach to create specific partitions of the covariate space (the leaves of the tree) that can be interpreted as subpopulations, enabling manual or automatic identification of those partitions with the largest treatment effects. However, when the affected subpopulation and effect size are small, we do not expect the resulting partitions to correspond well to the subpopulation of interest, since the approach optimizes a global objective function such as statistical risk (average loss) rather than focusing on the most significantly affected subpopulations. 
This difference in emphasis may allow the tree to precisely estimate treatment effects across the entire population (including effects which are near zero) but have larger errors for the small and significantly affected subpopulations we wish to detect. Poor choice of partitions could exclude the affected subpopulation from being considered or identified, and instead estimate an effect which is the average over this subpopulation of interest and others. These aspects lead to reduced detection power and accuracy in practice, as shown in our results below. Moreover, the instability of tree-based methods may call into question the relevance of the tree-selected subpopulations, while extensions to random forest-based approaches [@wager-causalforest-2017] sacrifice the ability to identify subpopulations for more stable and more accurate estimation of individual treatment effects. In summary, the current state of the heterogeneous treatment effects literature has many gaps: the only effects of interest are mean shifts, these effects are estimated under possibly restrictive modeling assumptions, only a subset of possible subpopulations are considered and represented, discovering the subpopulation with the largest effect requires manual inspection or an exhaustive search over all modeled subpopulations, and there is little guarantee of the optimality of the discovered subpopulations. In contrast, our proposed TESS approach directly searches for the most significantly affected subpopulations, where significance is measured based on the divergence between the empirical distributions of the treatment and control data, thus avoiding restrictive modeling assumptions. We derive statistical theory which provides performance guarantees, and demonstrate state-of-the-art empirical performance on both real and simulated data, as described below. 
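As a concrete illustration of the first point above, the following short sketch (our own simulation for illustration, not taken from the paper) generates a treatment that changes the spread of the outcome distribution without moving its mean: the difference-in-means (ATE) estimate is near zero, while the distance between the two empirical CDFs is clearly nonzero.

```python
# Illustrative only: a treatment that alters the outcome distribution's
# spread but not its mean. The difference-in-means (ATE) estimate is near
# zero, yet the Kolmogorov-Smirnov distance between the two empirical CDFs
# is large, so a distribution-based comparison detects the effect.
import numpy as np

rng = np.random.default_rng(0)
y_control = rng.normal(loc=0.0, scale=1.0, size=5000)
y_treated = rng.normal(loc=0.0, scale=2.0, size=5000)  # same mean, wider spread

def ks_statistic(a, b):
    """Max absolute difference between the empirical CDFs of samples a and b."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

ate_hat = y_treated.mean() - y_control.mean()
ks = ks_statistic(y_treated, y_control)
print(f"difference in means (ATE): {ate_hat:+.3f}")  # close to 0
print(f"KS distance between CDFs:  {ks:.3f}")        # clearly nonzero
```

A mean-based method applied to these data would report "no effect," whereas the divergence between empirical distributions, the quantity TESS scans over, is substantial.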
Framework for Distributional Causal Inference {#sec:causal_framework} ============================================= The Treatment Effect Subset Scan framework builds on the widely studied potential outcomes framework (Neyman-Rubin Causal Model), with random treatment assignment, enabling valid causal statements. More precisely, it begins with observing $\mathcal{N}$, a sample of $n$ independent and identically distributed units from a population of interest $\mathcal{P}$. The units are indexed by $i \in \{1, \ldots,n\}$, and for each unit there is a binary assignment indicator $W_i \in \{0,1\}$, where $W_i = 0$ indicates assignment to the control group (i.e., the group that did not receive the treatment), while $W_i = 1$ indicates assignment to the treatment group. Therefore, there exist two potential outcomes for each unit ($Y_i(0),Y_i(1) \in \mathbb{R}$), although only one of these two potential outcomes is observed for each unit. Additionally, each unit is described by $X_i$, a $d$-dimensional vector of covariates which are fixed, known, and unaffected by treatment assignment. Given this sample, we wish to perform causal inference for the (potentially infinite) population $\mathcal{P}$. In particular, there is interest in a causal population estimand that is a function of the potential outcome distributions and covariates: $\tau = \tau(F_{Y(1)},F_{Y(0)} \:|\: X)$, which can be approximated with estimators of the finite sample. In particular, we follow the literature and consider finite sample estimators that can be described as row-exchangeable functions of the potential outcomes, treatment assignments, and covariates, for all of the units in $\mathcal{N}$. 
More specifically, we consider $\tilde{\tau} = \tilde{\tau}(\bm{Y}(0), \bm{Y}(1), \bm{X}, \bm{W})$, where $\bm{Y}(0)$ and $\bm{Y}(1)$ are the $n$-dimensional column vectors of potential outcomes, $\bm{W}$ is the $n$-dimensional column vector of treatment assignments, and $\bm{X}$ is the $n \times d$ matrix of covariates, all of which are indexed by sample units $i$. In the following subsections we will describe causal estimands that are common in the literature and present the new causal estimands and estimators at the core of our TESS framework. Causal Estimands in the Literature ---------------------------------- In the heterogeneous treatment effect literature, the most flexible estimand considered thus far is the marginal conditional average treatment effect (MCATE) [@grimmer-hte_ensemble-2017], defined as $$ \label{eq:MCATE} \begin{split} \tau_{\text{MCATE}}(x^s) &= \int{ \left( \int{y~dF_{Y(1)|X^{^s}}(y|x^s)} - \int{y~dF_{Y(0)|X^{^s}}(y|x^s)}\right) dF_{X^{^{-s}} \mid X^{^{s}} = x^s}} \\ &= \int{\mathbb{E}\left[Y(1)-Y(0) \mid \left(X^1, X^2,\ldots,X^{s} = x^s, \ldots, X^d\right) \right] dF_{X^{^{-s}} \mid X^{^{s}} = x^s}} \\ &= \mathbb{E}\left[Y(1)-Y(0) \mid X^{s} = x^s\right]. \end{split}$$ The MCATE estimates the expected difference in potential outcomes for the specific subset $x^s$ of the covariate profile, marginalized over the remaining unfixed covariates. The MCATE generalizes the prevalent estimands in the literature: the average treatment effect (ATE), $\tau_{\text{ATE}} = \mathbb{E}[Y(1)-Y(0)]$, and *conditional* average treatment effect (CATE), $\tau_{\text{CATE}}(x) = \mathbb{E}[Y(1)-Y(0) | X = x]$. Essentially, the MCATE is a weighted average of the CATEs that include $X^{^s} = x^s$, weighted by the conditional distribution of the remaining unfixed covariates. 
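To make the relationship between these estimands concrete, the following minimal sketch (synthetic data; `ate` and `mcate` are hypothetical helper names, not from the paper) estimates the ATE and the MCATE for a single binary covariate $X^s$ using plain difference-of-means estimators:

```python
import numpy as np

def ate(y, w):
    """Difference-of-means estimate of the average treatment effect."""
    return y[w == 1].mean() - y[w == 0].mean()

def mcate(y, w, xs, value):
    """MCATE for X^s = value: the ATE within the subsample whose covariate
    X^s equals `value`, marginalizing over all remaining covariates."""
    mask = xs == value
    return ate(y[mask], w[mask])

rng = np.random.default_rng(0)
n = 10_000
xs = rng.integers(0, 2, n)        # one binary covariate X^s
w = rng.integers(0, 2, n)         # randomized treatment assignment
y = rng.normal(size=n) + w * xs   # treatment shifts outcomes only when X^s = 1

# mcate(y, w, xs, 1) is close to 1, mcate(y, w, xs, 0) is close to 0,
# and ate(y, w) averages over the X^s distribution (about 0.5 here).
```

Here the MCATE for $X^s = 1$ isolates the affected stratum, while the ATE dilutes the effect across the whole population, which is exactly the weighted-average relationship described above.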
A Distributional Average Treatment Effect Estimand Class -------------------------------------------------------- Although MCATE generalizes other estimands from the literature, it is limited to estimating the ATE for a particular covariate profile $X^{^s} = x^s$. In order to provide a population-level measurement that can capture more general changes in the outcome distribution resulting from treatment, we generalize MCATE to the distributional average treatment effect (DATE), a new class of treatment effect estimands. First, for a given covariate profile $X=x$, we define $\tau_{\text{DATE}}(x)$ as an arbitrary function of the cumulative distribution functions (cdfs) of the potential outcomes $Y(1)$ and $Y(0)$ given $X=x$: $\tau_{\text{DATE}}(x) \equiv \tau(F_{Y(1) \mid X=x}, F_{Y(0) \mid X=x})$ is a scalar which captures the individual-level treatment effect for a covariate profile $x$. For a set of covariate profiles $S$, we define $\tau_{\text{DATE}}(S)$ as a weighted average over the individual profiles: $$\begin{aligned} \label{eq:tauDATE(S)} \tau_{\text{DATE}}(S) = \int_{x \in S} \tau_{\text{DATE}}(x) P(X=x \mid X \in S) \; dx.\end{aligned}$$ We note that $\tau_{\text{CATE}}(x)$ and $\tau_{\text{MCATE}}(x^s)$ are special cases of $\tau_{\text{DATE}}(x)$ and $\tau_{\text{DATE}}(S)$ respectively, with function $\tau(F_1,F_0) = \int y dF_1(y) - \int y dF_0(y)$, the difference between the means of the two cdfs, and $S = \{X : X^{^s} = x^s\}$ consisting of those profiles with the given values for the subset of covariates $X^{^s}$. Although the DATE class includes CATE and MCATE, it provides more flexible estimation of HTEs by allowing other specifications of $\tau(F_1,F_0)$ that capture arbitrary comparisons between the potential outcome distributions. 
DATE also provides a flexible definition of subpopulations, considering an arbitrary set $S$ of covariate profiles, while MCATE only considers a single value $x$ for each covariate $X \in X^{^s}$ and all values for each covariate $X \in X^{^{-s}}$. Here we consider subsets $S$ representing *subspaces* of the attribute space, i.e., the Cartesian product of a subset of values for each attribute. This is important because a treatment of interest may affect multiple values, e.g., African-Americans *or* Hispanics who live in New York *or* Pennsylvania. While $\tau_{\text{DATE}}(S)$ is useful to estimate for a given subset $S$, our primary goal is to identify those subsets which have the most significant treatment effects. To do so, we need a model $H_0$ of how the data is generated under the null hypothesis of no treatment effect, i.e., $F_{Y(1) \mid X=x} = F_{Y(0) \mid X=x}$ for all $x$, and a general measure of divergence, $Div: \mathbb{R} \times \mathbb{R} \mapsto \mathbb{R}$, where $Div(u,v) \ge 0$ for all $u,v$ and $Div(u,u) = 0$ for all $u$. We then define $\mu_{\text{DATE}}(S)$ to represent the divergence between $\tau_{\text{DATE}}(S)$ and its expected value under $H_0$: $$\begin{aligned} \label{eqn:muDATE(S)} \mu_{\text{DATE}}(S) = Div(\tau_{\text{DATE}}(S), \mathbb{E}_{H_0}[\tau_{\text{DATE}}(S)]).\end{aligned}$$ For MCATE, we have $\mathbb{E}_{H_0}[\tau_{\text{MCATE}}(S)] = 0$, and thus $\mu_{\text{MCATE}}(S) = Div(\tau_{\text{MCATE}}(S),0)$. As described in §\[sec:div-stat\], the choice of divergence function depends on our assumptions about the data distribution under both null and alternative hypotheses. 
The Nonparametric Average Treatment Effect Estimand {#sec:nate}
---------------------------------------------------

The class of estimands defined by the DATE is large; therefore, we select an instance of this class–which we define as the nonparametric average treatment effect (NATE)–that utilizes the flexibility provided by the DATE to evaluate a general divergence between two potential outcome distributions. We first define $$ \begin{split} \label{eqn:tauNATE(x)} \tau_{\text{NATE}_{\alpha} }(x) &= \beta_x(\alpha) \\ &\coloneqq F_{Y(1) \mid X=x}\left(F_{Y(0) \mid X=x}^{-1}(\alpha)\right), \end{split}$$ which maps the quantile value $\alpha$ of the control potential outcome distribution into the corresponding quantile value $\beta$ of the treatment potential outcome distribution. The corresponding $\tau_{\text{NATE}_\alpha}(S)$ and $\mu_{\text{NATE}_\alpha}(S)$ are defined as in \[eq:tauDATE(S)\] and \[eqn:muDATE(S)\] respectively. Under $H_0$ the potential outcomes are equal, thus $\tau_{\text{NATE}_{\alpha} }(x) = F_{Y(0) \mid X=x}\left(F_{Y(0) \mid X=x}^{-1}(\alpha)\right)=\alpha$, and $$ \label{eqn:muNATE(x)} \mu_{\text{NATE}_{\alpha}}(S) = Div\left(\int_{x \in S} \beta_x(\alpha)P(X = x \mid X \in S) \; dx,\alpha\right).$$ Intuitively, $\mu_{\text{NATE}_{\alpha}}(S)$ is a comparison between potential outcome distribution functions, localized to a specific subpopulation $S$ and quantile value $\alpha$ of the null distribution. We also consider the quantity $\mu_{\text{NATE}}(S) = \max_\alpha \mu_{\text{NATE}_{\alpha}}(S)$, which maximizes the divergence between treatment and control potential outcome distributions over a desired range of quantile values $\alpha$. This estimand will identify arbitrary effects of a treatment, over general subpopulations, measured by the maximal divergence between potential outcome distributions. 
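The quantile map $\beta_x(\alpha)$ has a direct plug-in estimate: compute the $\alpha$-quantile of a control sample and evaluate the treatment sample's empirical cdf there. A minimal sketch on synthetic data (the function name is hypothetical):

```python
import numpy as np

def beta_hat(y_treat, y_control, alpha):
    """Plug-in estimate of beta_x(alpha) = F_T(F_C^{-1}(alpha)): the share of
    treatment outcomes at or below the alpha-quantile of the controls."""
    return float(np.mean(y_treat <= np.quantile(y_control, alpha)))

rng = np.random.default_rng(1)
y_c = rng.normal(0.0, 1.0, 50_000)
y_t = rng.normal(1.0, 1.0, 50_000)  # treatment shifts the mean up by 1

# Under H_0 (identical distributions), beta_hat(alpha) tracks alpha;
# a positive mean shift pulls beta_hat(alpha) below alpha.
```

With the mean shift above, $\beta_x(0.5) = \Phi(-1) \approx 0.16 < 0.5$, so the deviation of $\beta_x(\alpha)$ from $\alpha$ signals a treatment effect at that quantile.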
An additional, and critical, component of NATE is allowing the different covariate profiles $x \in S$ to have different reference distributions: the distribution $F_{Y(0)|X=x}$ serves as the expectation for the corresponding distribution $F_{Y(1)|X=x}$. We note that the alternative approach of using a single reference distribution, aggregated from all controls in $S$, fails when the different covariate profiles being aggregated have different outcome distributions. In this case, marginalization could obfuscate, or even reverse, the true effects that are occurring in these covariate profiles: this phenomenon is commonly known as Simpson’s Paradox. NATE avoids this paradox by evaluating the relationship between the outcome distributions for individual covariate profiles before aggregating across the subpopulation.

Causal Estimators {#sec:causal-estimators}
-----------------

Given that the estimands $\tau_{\text{DATE}}$ and $\mu_{\text{DATE}}$ are defined in terms of the cdfs $F_{Y(T) | X=x}$, $T \in \{0,1\}$, we consider the corresponding finite sample estimators, $\hat \tau_{\text{DATE}}$ and $\hat \mu_{\text{DATE}}$. We assume that each individual unit $Q_i$ is drawn i.i.d. from $\mathcal{P}$. A unit can be represented by a 4-tuple, $Q_i = (X_i,Y_i(0),Y_i(1),W_i)$, where $X_i$ are covariates, $Y_i(0)$ and $Y_i(1)$ represent that unit’s potential outcomes under control and treatment conditions respectively, and $W_i \in \{0,1\}$ is the treatment indicator. We note that $Y^{\text{obs}}_i = Y_i(W_i)$ is the unit’s observed outcome, while $Y_i(1-W_i)$ is unobserved. We define $$ \hat{F}_{Y^C | X=x}(y) = \frac{\sum_{Q_i:X_i = x} \mathbbm{1}\{W_i = 0\} \mathbbm{1}\{Y^{\text{obs}}_i \le y\}}{\sum_{Q_i:X_i=x} \mathbbm{1}\{W_i = 0\}},$$ and $\hat{F}_{Y^T \mid X=x}(y)$ similarly for units with $W_i = 1$. 
These represent the empirical cumulative distribution functions of $Y^{\text{obs}}_i$ for control individuals with $X_i = x$ and for treatment individuals with $X_i = x$ respectively. We can then define: $$\begin{aligned} \hat \tau_{\text{DATE}}(x) &= \tau(\hat F_{Y^T \mid X=x}, \hat F_{Y^C \mid X=x}),\\ \hat \tau_{\text{DATE}}(S) &= \frac{1}{N(S)} \sum_{Q_i: X_i \in U_{X}(S)} \hat \tau_{\text{DATE}}(X_i) = \sum_{x \in U_{X}(S)} \frac{N(x)}{N(S)} \hat \tau_{\text{DATE}}(x),\\ \hat \mu_{\text{DATE}}(S) &= Div(\hat \tau_{\text{DATE}}(S),\mathbb{E}_{H_0}[\hat \tau_{\text{DATE}}(S)]), \end{aligned}$$ where $U_{X}(S)$ is the set of unique covariate profiles in $S$, while $N(x)$ and $N(S)$ are the numbers of individuals $Q_i$ with $X_i = x$ and $X_i \in S$ respectively. To show that $\hat{F}_{Y^C}$ and $\hat{F}_{Y^T}$ are unbiased estimators of $F_{Y(0)}$ and $F_{Y(1)}$ requires additional assumptions about the mechanism by which units are assigned to the treatment or control group. Recall that $W_i$ determines which potential outcome is observed for unit $Q_i$. If $W_i$ is biased in which units it assigns to treatment, then subsequent inferences that do not account for this bias may be inaccurate. Thus we assume unconfoundedness: $Y_i(0),Y_i(1) {\protect\mathpalette{\protect\independenT}{\perp}}W_i \:|\: X_i$, i.e., potential outcomes are independent of treatment assignment conditional on the covariates. As a consequence, $\mathbb{E}\left[\hat{F}_{Y^C|X=x}(y)\right] = F_{Y(0) | X=x}(y)$, and similarly $\mathbb{E}\left[\hat{F}_{Y^T|X=x}(y)\right] = F_{Y(1) | X=x}(y)$ (Lemma \[lem:unconfound\] in Appendix \[sec:scoring\_functions\]). Therefore, given a randomized experiment, the empirical cumulative distribution of the control group is an unbiased and strongly consistent estimator of its population cumulative distribution function, and likewise for the treatment group. 
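As a sketch of these finite sample estimators, the following computes $\hat \tau_{\text{DATE}}(S)$ with the mean-difference choice of $\tau(F_1, F_0)$ (the CATE special case), weighting per-profile estimates by $N(x)/N(S)$ as above (all names are hypothetical, and the data is synthetic):

```python
import numpy as np
from collections import defaultdict

def tau_hat_profile(y, w):
    """Per-profile tau(F_T, F_C); here the mean difference (the CATE case)."""
    return y[w == 1].mean() - y[w == 0].mean()

def tau_hat_date(records, S):
    """Weighted average of per-profile estimates over the profiles in S,
    weighted by N(x)/N(S); `records` is a list of (x, y, w) tuples."""
    by_x = defaultdict(list)
    for x, y, w in records:
        if x in S:
            by_x[x].append((y, w))
    n_total = sum(len(v) for v in by_x.values())
    est = 0.0
    for pairs in by_x.values():
        y = np.array([p[0] for p in pairs])
        w = np.array([p[1] for p in pairs])
        est += len(pairs) / n_total * tau_hat_profile(y, w)
    return est

rng = np.random.default_rng(2)
records = []
for x, effect in (("a", 2.0), ("b", 0.0)):  # profile "a" is affected, "b" is not
    for _ in range(2000):
        w = int(rng.integers(0, 2))
        records.append((x, float(rng.normal() + w * effect), w))
```

Scanning over subsets then amounts to comparing, e.g., `tau_hat_date(records, {"a"})` (close to 2) against `tau_hat_date(records, {"a", "b"})` (diluted toward 1).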
Noting that $\frac{N(x)}{N(S)} {\xrightarrow{a.s.}}\ \mathbb{E}\left[\frac{N(x)}{N(S)}\right] \text{ and } \mathbb{E}\left[\frac{N(x)}{N(S)}\right] = P(X = x \mid X \in S)$, it follows that $\hat \tau_{\text{DATE}}(x)$, $\hat \tau_{\text{DATE}}(S)$, and $\hat \mu_{\text{DATE}}(S)$, defined in terms of $\hat{F}_{Y^T|X=x}$ and $\hat{F}_{Y^C|X=x}$ as above, are unbiased and strongly consistent finite sample estimators of their population estimands.

Choice of divergence function and test statistic {#sec:div-stat}
------------------------------------------------

Having defined the finite sample estimator $\hat \mu_{\text{DATE}}(S)$ in terms of the divergence $Div(\cdot,\cdot)$ between $\hat \tau_{\text{DATE}}(S)$ and its expectation under the null hypothesis, we now consider the choice of divergence function. For the nonparametric average treatment effect estimator, recall that $\hat \mu_{\text{NATE}}(S) = \max_\alpha \hat \mu_{\text{NATE}_{\alpha}}(S) = \max_\alpha Div(\hat \tau_{\text{NATE}_{\alpha}}(S),\alpha)$, where $\hat \tau_{\text{NATE}_{\alpha}}(x) = \hat F_{Y^T \mid X=x}\left(\hat F_{Y^C \mid X=x}^{-1}(\alpha)\right)$ maps the $\alpha$ quantile of the control observations with $X=x$ to a corresponding quantile $\beta_x(\alpha)$ of the treatment observations with $X=x$. Let $N_\alpha(x)$ be defined as the number of outcomes $y_i$ with covariate profile $X_i = x$ that are significant at level $\alpha$, i.e., that fall at or below the $\alpha$-quantile $\hat F_{Y^C \mid X=x}^{-1}(\alpha)$ of their reference distribution. We now consider two different models of the data generating process, based on the binomial distribution and a normal approximation to the binomial respectively. 
In each case, we compute the log-likelihood ratio statistic $F(S) = \log \left( \frac { P \left( \text{Data} | H_1(S) \right)}{P \left( \text{Data} | H_0 \right) } \right)$, which can be written as the product of the total number of $p$-values $N(S)$ in subset $S$ and a divergence $Div\left(\frac{N_\alpha(S)}{N(S)},\alpha\right)$ between the observed and expected proportions of p-values that are significant at level $\alpha$. For the binomial model, we have: $$\begin{aligned} H_0&: N_{\alpha}(x) \sim \text{Binomial}\left( N(x), \alpha \right) && \forall x \\ H_1(S)&: N_{\alpha}(x) \sim \text{Binomial}\left( N(x), \beta \right) && \forall x \in U_{X}(S), \text{ with } \beta > \alpha,\end{aligned}$$ with the following Berk-Jones (BJ) log-likelihood ratio statistic [@berk-bj-1979]: $$\begin{aligned} F_{\alpha}^{BJ}(S) &= \log \left [ \frac { P \left( \text{Data} | H_1(S) \right)}{P \left( \text{Data} | H_0 \right) } \right ] \\ &= N_{\alpha}(S) \log \left(\frac {\beta} {\alpha} \right) + \left( N(S)-N_{\alpha}(S) \right) \log \left( \frac{1-\beta}{1-\alpha} \right) \\ &= N(S)Div_{KL} \left(\frac{N_{\alpha}(S)}{N(S)}, \alpha \right),\end{aligned}$$ where we have used the maximum likelihood estimate $\beta = \beta_{\text{mle}}(S) = \frac{N_{\alpha}(S)}{N(S)}$, and $Div_{KL}(\cdot,\cdot)$ is the Kullback-Leibler divergence, $Div_{KL}(x,y) = x \log \frac{x}{y} + (1-x) \log \frac{1-x}{1-y}$. 
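The statistic $F_{\alpha}^{BJ}(S)$ reduces to arithmetic on the counts $N_\alpha(S)$ and $N(S)$. A small sketch (clipping the score to zero when the observed proportion does not exceed $\alpha$ reflects the one-sided alternative $\beta > \alpha$; treating that case as score zero is our convention here, not a detail stated in the text):

```python
from math import log

def kl_div(x, y):
    """Bernoulli KL divergence Div_KL(x, y), with 0*log(0) taken as 0."""
    t = lambda p, q: 0.0 if p == 0 else p * log(p / q)
    return t(x, y) + t(1 - x, 1 - y)

def f_bj(n_alpha, n, alpha):
    """Berk-Jones score F_alpha^BJ(S) = N(S) * Div_KL(N_alpha(S)/N(S), alpha),
    clipped to 0 when the observed proportion does not exceed alpha."""
    obs = n_alpha / n
    return 0.0 if obs <= alpha else n * kl_div(obs, alpha)

# 30 of 100 p-values significant at alpha = 0.1 is a strong excess:
# f_bj(30, 100, 0.1) is about 15.4, while f_bj(10, 100, 0.1) is exactly 0.
```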
For the normal approximation, we have: $$\begin{aligned} H_0&: N_{\alpha}(x) \sim \mathcal{N}\left(\alpha N(x), \alpha(1-\alpha)N(x) \right) && \forall x \\ H_1(S)&: N_{\alpha}(x) \sim \mathcal{N}\left(\beta N(x), \alpha(1-\alpha)N(x) \right) && \forall x \in U_{X}(S), \text{ with } \beta > \alpha,\end{aligned}$$ with the following normal approximation (NA) log-likelihood ratio statistic: $$\begin{aligned} F_{\alpha}^{NA}(S) &= \log \left [ \frac { P \left( \text{Data} | H_1(S) \right)}{P \left( \text{Data} | H_0 \right) } \right ] \\ &= \frac{N_{\alpha}(S)\left(\beta-\alpha\right)}{\alpha(1-\alpha)} + \frac{N(S)\left(\alpha^2-\beta^2\right)}{2\alpha(1-\alpha)} \\ &= \frac{\left(N_{\alpha}(S)-N(S)\alpha\right)^2}{2N(S)\alpha(1-\alpha)} \\ &= N(S)Div_{\frac{1}{2}\chi^2} \left(\frac{N_{\alpha}(S)}{N(S)}, \alpha \right),\end{aligned}$$ where we have again used the maximum likelihood estimate of $\beta = \frac{N_{\alpha}(S)}{N(S)}$, and $Div_{\frac{1}{2}\chi^2}(\cdot,\cdot)$ is a scaled $\chi^2$ divergence, $Div_{\frac{1}{2}\chi^2}(x,y) = \frac{(x-y)^2}{2y(1-y)}$. The above divergences $Div\left(\frac{N_{\alpha}(S)}{N(S)}, \alpha \right)$ each represent a $\hat \mu_{\text{NATE}_\alpha}(S)$, exhibiting the desirable properties described in §\[sec:causal-estimators\], and the corresponding score functions can be written as $F_{\alpha}(S) = N(S)\hat{\mu}_{\text{NATE}_\alpha}(S)$. We note that in NA, the alternative hypothesis corresponds to a change in the mean of a normal distribution, while the variance remains unchanged. We show in Appendix \[sec:scoring\_functions\] that this test statistic is related to many well-known goodness-of-fit statistics such as the Kolmogorov-Smirnov, Cramér-von Mises, Anderson-Darling, and Higher Criticism statistics. 
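The algebraic equivalence of the log-likelihood-ratio form and the quadratic form of $F_{\alpha}^{NA}(S)$ (once $\beta$ is set to its MLE $N_\alpha(S)/N(S)$) can be checked numerically; a short sketch with hypothetical function names:

```python
def f_na(n_alpha, n, alpha):
    """Quadratic form: (N_alpha - N*alpha)^2 / (2*N*alpha*(1-alpha))."""
    return (n_alpha - n * alpha) ** 2 / (2 * n * alpha * (1 - alpha))

def f_na_llr(n_alpha, n, alpha):
    """Log-likelihood-ratio form, with the MLE beta = N_alpha / N:
    N_alpha*(beta-alpha)/(alpha*(1-alpha)) + N*(alpha^2-beta^2)/(2*alpha*(1-alpha))."""
    beta = n_alpha / n
    return (n_alpha * (beta - alpha)) / (alpha * (1 - alpha)) \
        + (n * (alpha ** 2 - beta ** 2)) / (2 * alpha * (1 - alpha))

# The two forms agree up to floating point, e.g.
# f_na(1050, 10000, 0.1) == f_na_llr(1050, 10000, 0.1) == 2500/1800.
```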
Treatment Effect Subset Scanning {#sec:tess}
================================

Treatment Effect Subset Scan (TESS) is a novel framework for identifying subpopulations in a randomized experiment which experience treatment effects–i.e., changes in quantiles of their outcome distribution–built atop the framework for distributional causal inference established in §\[sec:causal\_framework\], with the divergence estimand of interest described in §\[sec:nate\]. Unlike previous methods, TESS structures the challenge of treatment effect identification as an anomalous pattern detection problem–where the objective is to identify patterns of systematic deviations away from expectation–which is then solved by scanning over subpopulations. TESS therefore searches for subsets of values of each attribute for which the distributions of outcomes in the treatment group are systematically anomalous, i.e., significantly different from their expectation as derived from the control group. More precisely, we define a real-valued outcome of interest $Y$ and a set of discrete covariates $X = (X^{1}, \ldots, X^{d})$, where each $X^{j}$ takes values in the set $V^{j}=\{v^{j}_{m}\}_{m = 1 \ldots |V^j|}$. Therefore, we define the arity of covariate $X^{j}$ as $|V^{j}|$, the cardinality of $V^j$, and note that any covariate profile $x$, a realization of $X$, follows $x \in \{V^1 \times \ldots \times V^d\}$. We then define a dataset (as in §\[sec:causal\_framework\]) as a sample $\mathcal{N}$ composed of $n$ records (units) $\{R_{1}, \ldots, R_{n}\}$, randomly drawn from population $\mathcal{P}$, where each 3-tuple $R_{i} = (Y_i, X_i, W_i)$ is described by an observed potential outcome $Y_i = Y^{\text{obs}}_i$, covariates $X_i$, and an indicator variable $W_i$, which indicates if the unit was randomly assigned to the treatment condition; see Table \[table:example\_data\] for a demonstrative example. 
We define the subpopulations $S$ under consideration to be $S = \{v^1 \times \ldots \times v^d\}$, where $v^j \subseteq V^j$. We wish to find the most anomalous subset $$ \label{eqn:objective_function} S^\ast = v^{1*}\times \ldots \times v^{d*} = \arg \max_{S} F(S)$$ where $F(S)$ is commonly referred to in the anomalous pattern detection literature as a score function, to measure the anomalousness of a subset $S$. In the context of TESS, this function is a test statistic of the treatment effect–i.e., the divergence between the treatment and control group–in subpopulation $S$ and therefore will be a function of the estimator $\hat{\mu}_{\text{NATE}}(S)$. We accomplish this by first partitioning the experiment dataset into control and treatment groups, and passing the groups to the TESS algorithm. For each unique covariate profile $x$ in the treatment group, TESS computes the empirical conditional outcome distribution $\hat F_{Y^C \mid X=x}$ from the control group, estimating the conditional outcome distribution under the null hypothesis $H_{0}$ that the treatment has no effect on units with this profile. Then for each record $R_{i}$ in the treatment group, TESS computes an empirical $p$-value $p_{i}$, which serves as a measure of how uncommon it is to see an outcome as extreme as $Y_{i}$ given $X=x_{i}$ under $H_0$. The ultimate goal of TESS is to discover subpopulations $S$ with a large amount of evidence against $H_0$, i.e., the outcomes of units in $S$ are consistently extreme given $H_0$. Thus, TESS searches for subpopulations which contain an unexpectedly large number of low (significant) empirical $p$-values, as such a subpopulation is more likely to have been affected by the treatment. 
Estimating Reference Distributions and Empirical P-values {#sec:ref-dist} --------------------------------------------------------- After partitioning the data into treatment and control groups, the TESS framework estimates the reference distribution for each unique covariate profile in the treatment group. These estimates follow from two assumptions: randomization and a sharp null hypothesis of no treatment effect. Randomization implies that $\hat{F}_{Y^C \mid X} \xrightarrow{a.s.} F_{Y(0) \mid X}$, while the sharp null hypothesis that no subpopulation is affected by the treatment implies that $F_{Y(0) \mid X} = F_{Y(1) \mid X}$; therefore, TESS uses $\hat{F}_{Y^C \mid X}$ as an unbiased and strongly consistent estimator of the unknown $F_{Y(1) \mid X}$ under $H_{0}$. Intuitively, under $H_{0}$ the outcomes of the treatment and control groups are drawn from the same distribution, allowing $\hat F_{Y^C \mid X = x}$ to serve as an outcome reference distribution for treatment units with covariate profile $X = x$. **When $H_0$ is true** the outcomes for every unit in the treatment group and the control group, with the same covariate profile, are exchangeable. **When $H_0$ is false** the affected treatment outcomes are drawn from an alternative distribution, different than their assumed reference distributions under $H_0$. No additional assumptions are made about the relationship between reference distributions. TESS calculates an empirical $p$-value for each treatment unit to obtain a measure of how “anomalous” or unusual a particular unit’s outcome is given its reference distribution. For each unit $R_i$ in the treatment group, we compute its empirical reference distribution: $$ \label{eq:control-edf} \hat{F}_{Y^C|X}(y_i|x_i) = \frac{1}{N(x_i)}\sum_{y_j \in Y^C(x_i)}{\mathbbm{1}\{y_j \le y_i\}},$$ where $Y^C(x_i)$ is the set of outcomes for control units with covariate profile $x_i$ and $N(x_i) = |Y^C(x_i)|$. 
The empirical $p$-value $p(y_{i})$ (or $p_{i}$ for notational convenience) is derived from \[eq:control-edf\], as in [@mcfowland-fgss-2013]. Furthermore, the $p_{i}$ are guaranteed to be distributed Uniform(0,1) under $H_0$[^1], which follows from exchangeability and the probability integral transform. We define the significance of a $p$-value, for a significance level $\alpha$, as $n_\alpha(p(y)) = \mathbbm{1}\{ p(y) \le \alpha\}$. Although we define and estimate \[eq:control-edf\] individually for each unique covariate profile $x$ using the empirical distribution function, we note that TESS only requires some means of computing a $p$-value for each treatment unit. The empirical distribution allows TESS to accommodate arbitrary differences in conditional outcome distributions across covariate profiles, enabling general applicability without a priori contextual knowledge. However, in a specific context of interest, it may be possible to combine data across profiles to construct a more general estimate of the conditional probability distributions. Statistical learning offers many options for density estimation, any of which can be utilized in TESS.

Subpopulations {#sec:subpopulations}
--------------

Given $p$-values as a measure of the anomalousness of individual treatment units, we now consider how TESS combines these measures to form subpopulations. For intuition, we propose representing the data as a tensor, where each covariate is represented by a mode of the tensor, $X = (X^{1}, \ldots, X^{d})$, resulting in a $d$-order tensor. $|V^j|$, the arity of the $j^{th}$ covariate, is the size of the $j^{th}$ mode. Therefore, each covariate profile $x$ maps to a unique cell in the tensor, which contains the $p$-values of the treatment units that share $x$ as their covariate profile. 
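A common randomized construction of such empirical $p$-values, which makes them exactly Uniform(0,1) under $H_0$ even in the presence of ties (the specific tie-breaking below is an illustrative assumption, not necessarily the paper's exact rule), can be sketched as:

```python
import numpy as np

def empirical_pvalues(y_treat, y_control, rng):
    """Randomized empirical p-values: for each treatment outcome y, count the
    control outcomes strictly greater than y and those tied with y, and set
    p = (n_greater + U * (n_tied + 1)) / (n_control + 1), U ~ Uniform(0,1).
    Under H_0 (exchangeability) each p is marginally Uniform(0,1)."""
    y_control = np.asarray(y_control)
    n = len(y_control)
    pvals = []
    for y in np.asarray(y_treat):
        n_greater = int(np.sum(y_control > y))
        n_tied = int(np.sum(y_control == y))
        pvals.append((n_greater + rng.uniform() * (n_tied + 1)) / (n + 1))
    return np.array(pvals)

rng = np.random.default_rng(3)
y_c = rng.normal(size=5_000)
y_t = rng.normal(size=2_000)  # no treatment effect, so p-values are uniform
p = empirical_pvalues(y_t, y_c, rng)
```

With no treatment effect, roughly an $\alpha$ fraction of the $p_i$ fall at or below any level $\alpha$, which is the baseline the scan statistic below compares against.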
As stated above, a subpopulation is $S = \{v^1 \times \ldots \times v^d\}$, where $v^j \subseteq V^j$; therefore, an individual cell (i.e., covariate profile $x$) is itself a subpopulation: $S = \{\{x^1\} \times \ldots \times \{x^d\}\}$. For a demonstrative example see Table \[table:example\_tensor\]. For a given subpopulation $S$, we define the quantities $$ \label{eqn:C(S),N_alpha,N} C(S) = \bigcup_{x \in U_{X}(S)}Y^T(x),\quad N_\alpha(S) = \sum_{y \in C(S)}{ n_\alpha(p(y))},\quad N(S) = \sum_{y \in C(S)}{1}$$ where $Y^T(x)$ is defined similarly to $Y^C(x)$, but for the treatment group; $C(S)$ is the union of treatment outcomes in the cells (i.e., covariate profiles) in $S$, and the corresponding $p$-values; $N(S)$ represents the total number of empirical $p$-values contained in $C(S)$; and $N_{\alpha}(S)$ is the number of p-values in $C(S)$ that are less than $\alpha$.[^2] Given that the distribution of each $p$-value is Uniform(0,1) under the null hypothesis that the treatment has no effect, for a subpopulation $S$ consisting of $N(S)$ empirical $p$-values, $\mathbb{E}\left[N_{\alpha}(S)\right] = \alpha N(S)$. Under the alternative hypothesis, we expect the outcomes of the affected units to be more concentrated in the tails of their reference distributions; thus, the $p$-values for these affected units will be lower. Therefore, subpopulations composed of covariate profiles that are systematically affected by the treatment should express higher values of $N_\alpha(S)$ for some $\alpha$. Consequently, a subpopulation $S$ where $N_\alpha(S) > \alpha N(S)$ (i.e., with a higher than expected number of low, significant $p$-values) is potentially affected by the treatment. 
Nonparametric Scan Statistic {#sec:npss}
----------------------------

TESS utilizes the nonparametric scan statistic [@mcfowland-fgss-2013; @feng-npss_graph-2014] to evaluate the statistical anomalousness of a subpopulation $S$ by comparing the observed and expected number of significantly low $p$-values it contains. The general form of the nonparametric scan statistic is $${} F(S)=\max_{\alpha}F_{\alpha}(S)=\max_{\alpha}\phi(\alpha,N_{\alpha}(S),N(S))=\max_{\alpha} N(S)\hat{\mu}_{\text{NATE}_\alpha}(S),$$ where $N_{\alpha}(S)$ and $N(S)$ are defined as in \[eqn:C(S),N\_alpha,N\]. Here $F_{\alpha}(S)$ is a log-likelihood ratio test statistic of the treatment effect in subpopulation $S$ which, as shown in §\[sec:div-stat\], is proportional to the divergence $\hat{\mu}_{\text{NATE}_\alpha}(S)$. We consider “significance levels” $\alpha \in [\alpha_{\text{min}},\alpha_{\text{max}}]$, for constants $0 < \alpha_{\text{min}} < \alpha_{\text{max}} < 1$. Maximizing $F(S)$ over a range of $\alpha$, rather than a single arbitrarily-chosen $\alpha$ value, enables TESS to detect a small number of highly anomalous $p$-values, a larger subpopulation with subtly anomalous $p$-values, or anything in between. The range of $\alpha$ to consider can be specified based on the quantile values of interest, or $(0,1)$ representing the entire distribution. The choice of $\alpha_{\text{max}}$ describes how extreme a value must be, as compared to the reference distribution, in order to be considered significant. We often choose $\alpha_{\text{min}} \approx 0$, but larger values can be used to avoid returning subsets with a small number of extremely significant $p$-values.

### Efficient Scanning

The next step in the TESS framework is to detect the subpopulation most affected by the treatment, i.e., to identify the most anomalous subset of values for each of the $d$ modes of the tensor, or equivalently for each covariate $X^1 \ldots X^d$. 
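For a single subpopulation's $p$-values, the scan statistic is a maximization over candidate $\alpha$ levels; a direct sketch using the $\chi^2$ divergence of the NA statistic, scoring only excesses of significant $p$-values (names hypothetical):

```python
import numpy as np

def f_npss(pvals, alphas):
    """F(S) = max over alpha of N(S) * Div(N_alpha(S)/N(S), alpha), with the
    chi-squared divergence of the NA statistic; only excesses of significant
    p-values (N_alpha > alpha * N) contribute to the score."""
    p = np.asarray(pvals)
    n = len(p)
    best = 0.0
    for a in alphas:
        n_a = int(np.sum(p <= a))
        if n_a > a * n:
            best = max(best, (n_a - n * a) ** 2 / (2 * n * a * (1 - a)))
    return best

# Ten p-values all equal to 0.01, scanned at alpha in {0.05, 0.1}:
# the maximum is attained at alpha = 0.05, with score 95.
```

In the full algorithm the candidate $\alpha$ values are the distinct $p$-values in $[\alpha_{\text{min}}, \alpha_{\text{max}}]$, since the counts $N_\alpha(S)$ only change at those points.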
More specifically, the goal is to identify the set of subsets $\{v^{1},\ldots, v^{d}\}$ where each element corresponds to values in a tensor-mode (covariate), such that $F(\{v^{1} \times \ldots \times v^{d}\})$ is jointly maximized. The computational complexity of solving this optimization naively is $O(2^{\sum_j |V^j|})$, where $|V^j|$ is the size of mode $j$ (the arity of $X^j$), which is computationally infeasible for even moderately sized datasets. We therefore employ the linear-time subset scanning property (LTSS) [@neill-ltss-2012], which allows for efficient and exact maximization of any function satisfying LTSS over all subsets of the data. We begin by noting that, for the score function $F_\alpha(S)$ with a fixed value of $\alpha$: *(A1)* $\phi$ is monotonically ***increasing*** w.r.t. $N_{\alpha}$, *(A2)* $\phi$ is monotonically ***decreasing*** w.r.t. $N$, and *(A3)* $\phi$ is ***convex*** w.r.t. $N_{\alpha}$ and $N$. These properties are intuitive because the ratio of observed to expected number of significant $p$-values $\frac{N_\alpha}{\alpha N}$ increases with the numerator (A1) and decreases with the denominator (A2). Also, a fixed ratio of observed to expected is more significant when the observed and expected counts are large (A3). We now turn to the LTSS property which states that, for a given set of data elements $R=\{R_1 \ldots R_n \}$, a score function $F(S)$ mapping $S\subseteq R$ to a real number, and a priority function $G(R_i)$ mapping a single data element $R_i \in R$ to a real number, the LTSS property guarantees that the only subsets with the potential to be optimal are those consisting of the top-$t$ highest priority records $\{R_{(1)} \ldots R_{(t)} \}_{t\in[1,n]}$. 
More formally, we restate a theorem from @neill-ltss-2012 and add a corollary that extends LTSS to the high-dimensional tensor context: \[thm:LTSS\] Let $F(S) = F(X, Y)$ be a function of two additive sufficient statistics of subset $S$, $X(S) = \sum_{R_i \in S} x_i$ and $Y(S) = \sum_{R_i \in S} y_i$, where $x_i$ and $y_i$ depend only on element $R_i$. Assume that $F(S)$ is monotonically increasing with $X(S)$, that all $y_i$ values are positive, and that $F(X, Y)$ is convex. Then $F(S)$ satisfies the LTSS property with priority function $G(R_i) = \frac{x_i}{y_i}$. \[cor:LTSS-NPSS\] Consider the general class of nonparametric scan statistics $F(S) = \max_\alpha F_\alpha(S)$, where the significance level $\alpha \in [\alpha_{\text{min}},\alpha_{\text{max}}]$, for constants $0 < \alpha_{\text{min}} < \alpha_{\text{max}} < 1$. For a given value of $\alpha$ and $v^{-j} = \{v^1, \ldots, v^{j-1}, v^{j+1}, \ldots, v^{d}\}$ under consideration, $F_{\alpha}(S)$ can be efficiently maximized over all subpopulations $S = v^j \times v^{-j}$, for $v^j \subseteq V^j$. First, we note that the number of $p$-values in every $v^j$ is positive: we only consider the values of a covariate that are expressed by some treatment unit. Thus we have $F_{\alpha}(S) = \phi(\alpha,N_{\alpha}(v^j), N(v^j))$, with the additive sufficient statistics $N_{\alpha}(v^j)= \sum_{y \in C(v^j \times v^{-j})} n_{\alpha}(p(y))$ and $N(v^j)=\sum_{y \in C(v^j \times v^{-j})} 1$. Since the nonparametric scan statistic is defined to be monotonically increasing with $N_{\alpha}$ (A1), monotonically decreasing with $N$ (A2), and convex (A3), we know that $F_{\alpha}(S)$ satisfies the LTSS property with priority function, over the values of mode (covariate) $j$, $G_{\alpha}(v^{j}_m) = \frac{\sum_{y \in C(v^{j}_m \times v^{-j})} n_{\alpha}(p(y))} {\sum_{y \in C(v^{j}_m \times v^{-j})} 1}$ for $v^j_m \in V^j$. 
Therefore the LTSS property holds for each value of $\alpha$, enabling each $F_{\alpha}(S)$ to be efficiently maximized over subsets of values for the $j^{th}$ mode of the tensor, given values for the other $d-1$ modes. Essentially, Corollary \[cor:LTSS-NPSS\] demonstrates that the nonparametric scan statistic satisfies LTSS in the context of TESS and therefore a single mode of a tensor can be efficiently optimized, conditioned on values of the other modes. Let $U_{\alpha}(S)$ be the set of unique $p$-values between $\alpha_{\text{min}}$ and $\alpha_{\text{max}}$ contained in subpopulation $S$. Then the quantity $\max_S F(S) = \max_{\alpha \in U_{\alpha}(S)} \max_S F_\alpha(S)$ can be efficiently and exactly computed over all subsets $S = v^j \times v^{-j}$, where $v^j \subseteq V^j$, for a given subset of values for each of the other modes $v^{-j}$. To do so, consider the set of distinct $\alpha$ values, $U = U_{\alpha}(V^j \times v^{-j})$. For each $\alpha \in U$ we employ the same logic as described in Corollary \[cor:LTSS-NPSS\] to optimize $F_{\alpha}(S)$: we compute the priority $G_\alpha(v^{j}_m)$ for each value ($v^{j}_m \in V^j$), sort the values based on priority function $G_{\alpha}(v^{j}_m)$, and evaluate subsets $S=\{v^{j}_{(1)} \ldots v^{j}_{(t)}\} \times v^{-j}$ consisting of the top-$t$ highest priority values, for $t=1, \ldots,|V^j|$. TESS then iterates over modes of the tensor, using the efficient optimization steps described above to optimize each mode: $v^j = {\arg\max}_{v^j \subseteq V^j} F(v^j \times v^{-j}), j = 1 \ldots d$. The cycle of optimizing each mode continues until convergence, at which point TESS has reached a conditional maximum of the score function, i.e., $v^j$ is conditionally optimal given $v^{-j}$ for all $j = 1 \ldots d$. 
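The single-mode optimization implied by the LTSS property can be sketched as follows: sort one mode's values by priority $G_\alpha = N_\alpha / N$ and score only the top-$t$ prefixes (a sketch for one fixed $\alpha$; names hypothetical):

```python
def scan_one_mode(cells, alpha):
    """Exactly maximize F_alpha over subsets of one mode's values via LTSS:
    sort values by priority G = N_alpha/N, then score only the top-t prefixes.
    `cells` maps each value of the mode to its (N_alpha, N) counts, with the
    other modes' values held fixed."""
    def score(n_a, n):
        if n_a <= alpha * n:
            return 0.0
        return (n_a - n * alpha) ** 2 / (2 * n * alpha * (1 - alpha))

    order = sorted(cells, key=lambda v: cells[v][0] / cells[v][1], reverse=True)
    best, best_subset = 0.0, []
    n_a = n = 0
    for t, v in enumerate(order, 1):
        n_a += cells[v][0]
        n += cells[v][1]
        if score(n_a, n) > best:
            best, best_subset = score(n_a, n), order[:t]
    return best, best_subset

# Three values of one covariate, each with (significant, total) p-value counts:
cells = {"a": (9, 10), "b": (1, 10), "c": (8, 10)}
best, subset = scan_one_mode(cells, alpha=0.1)
# The optimum keeps the two high-priority values "a" and "c" and drops "b".
```

Only $|V^j|$ prefixes are scored rather than $2^{|V^j|}$ subsets, which is the source of the linear-time guarantee for each mode.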
This ordinal ascent approach is not guaranteed to converge to the joint optimum, but with multiple random restarts the combination of subset scanning and ordinal ascent has been shown to locate near globally optimal subsets with high probability [@neill-mvltss-2013; @mcfowland-fgss-2013]. Moreover, if $\sum_{j=1}^d |V^j|$ is large, this iterative procedure makes the ability to detect anomalous subpopulations computationally feasible, without excluding potentially optimal subpopulations from the search space (as a greedy top-down approach may). A single iteration (optimization of mode $j$ of the tensor) has a complexity of $O\left(|U| \left(n_t + |V^j| \log |V^j|\right)\right)$, where the $n_t$ term, the number of treatment units, results from collecting the $p$-values for all units in $C\left(V^j \times v^{-j}\right)$ over our sparse tensor; $U = U_{\alpha}\left(V^j \times v^{-j}\right)$, with $|U| \le n_t$; and $O\left(|V^j| \log |V^j|\right)$ is required to sort the values of tensor mode $j$. Therefore a step in the procedure (a sequence of $d$ iterations over all modes of the tensor) has complexity $O\left(\bar{U} d \left(n_t + \bar{V} \log \bar{V}\right)\right)$, where $\bar{U}$ and $\bar{V}$ are the average numbers of $\alpha$ thresholds considered and covariate arity, respectively. Thus the TESS search procedure has a total complexity of $O\left(I \bar{Z} \bar{U} d \left(n_t + \bar{V} \log \bar{V}\right)\right)$, where $I$ is the number of random restarts and $\bar{Z}$ is the average number of iterations required for convergence. We note that $\bar{Z}$ is typically very small; $\bar{Z} \le 5$ across all simulations discussed in §\[sec:results\]. TESS Algorithm {#sec:algo} -------------- Inputs: randomized experiment dataset, $\alpha_{\text{min}}$, $\alpha_{\text{max}}$, number of iterations $I$. 1. For each unique covariate profile $x$ in the treatment group: 1. 
Estimate $\hat{F}_{Y^C \mid X = x}$ from the outcomes of the units in the control group that share profile $x$.
    2. Compute the $p$-value $p_i = p(y_i)$ for each treatment unit $i$ with profile $x$ from $\hat{F}_{Y^C \mid X = x}$.
2. Iterate the following steps $I$ times. Record the maximum value $F^\ast$ of $F(S)$, and the corresponding subsets of values for each mode $\{v^{1 \ast}, \ldots, v^{d \ast}\}$, over all such iterations:
    1. For each of the $d$ modes, initialize $v^j$ to a random subset of the values $V^j$.
    2. Repeat until convergence, for each of the $d$ modes: maximize $F(S) = \max_{\alpha \in [\alpha_{\text{min}}, \alpha_{\text{max}}]} F_\alpha(v^j \times v^{-j})$ over subsets of values for the $j^{th}$ mode, $v^j \subseteq V^j$, given the current subsets of values of the other $d-1$ modes ($v^{-j}$), and set $v^j \leftarrow \arg\max_{v^j \subseteq V^j} F(v^j \times v^{-j})$.
3. Output $S^\ast = \{v^{1 \ast}, \ldots, v^{d \ast}\}$.

Estimator Properties {#sec:estimator-theory}
--------------------

In the above sections we outline a procedure to efficiently compute $\max_{S \in R} F(S)$, where $R$ represents the space of all rectangular subsets. In this section we treat $\max_{S \in R} F(S)$ as a statistic of the data and aim to show that it has desirable statistical properties. It is known that for data $X_1,\ldots,X_n {\overset{iid}{\sim}}P$ and the corresponding empirical distribution function $P_n$, $\|P_n - P\|_\infty \xrightarrow{a.s.}\ 0$. Many goodness-of-fit statistics $GoF(P_n, P)$ are equivalent to an empirical process over centered and scaled empirical measures, and empirical process theory provides tools to control Type I and II error [@gaenssler-glivenko_cantelli-2004; @dvoretzky-dkw-1956; @shorack-emp_process-2012]. However, in a general sense our goal is to control the behavior of $\max_{S \subseteq \{X_1,\ldots,X_n\}}GoF(P_S, P)$, where $P_S$ is the empirical measure given by the subset $S$.
It is not obvious whether the desirable properties present for $P_n$ will persist when considering the empirical measure $P_S$ of a non-random subset of the data chosen by our optimization procedure. Given that this context of optimization over subsets is not considered in the current goodness-of-fit literature, we provide various theoretical results in support of our subset scanning algorithm. In the remainder of the section we present the key statements necessary to show our desired properties, while additional results and all proofs can be found in Appendix \[sec:proofs\]. We begin with: [lem]{}[bjtona]{} \[lem:BJ\_to\_NA\] $F^{BJ}(S) \asymp F^{NA}(S)$ as $N(S)\longrightarrow \infty$. This result indicates that, as the number of subjects in a given subpopulation grows, its score under $F^{BJ}$ is well approximated by $F^{NA}$, where both functions are described in §\[sec:div-stat\]. Given that a large class of other goodness-of-fit statistics in the literature are monotonic transformations of $F^{NA}$ (see Appendix \[sec:scoring\_functions\]), this result allows us to focus the remainder of our results on $F^{NA}$. Our score function can be considered a test statistic for the following hypothesis test: $$\label{eq:test}
\begin{aligned}
H_0:\ & Y_i(1) \sim F_{Y_i(0) \mid X_i} && \forall\, X_i \in U_X(D)\\
H_1(S):\ & Y_i(1) \nsim F_{Y_i(0) \mid X_i}\ \ \forall\, X_i \in U_X(S); \quad Y_i(1) \sim F_{Y_i(0) \mid X_i} && \forall\, X_i \notin U_X(S),\ S \in R
\end{aligned}$$ where $D$ is our dataset (or tensor) of treatment units and $R$ is the set of all rectangular subsets of $D$. The null hypothesis is that all of the observed outcomes of treatment units are drawn from the same conditional outcome distribution (given the observed covariates) as their control group. The hypothesis tests which serve as the foundation for the score functions described in §\[sec:div-stat\] are special cases of this more general hypothesis test in \[eq:test\].
Recall that $U_{X}(D)$ is the set of unique covariate profiles (non-empty tensor cells) in our data, with cardinality $|U_{X}(D)| = M$; while $S^{\ast}=\arg \max_{S \in R} F(S)$ and $S^{\ast}_{u} = \arg \max_S F(S)$ represent the most anomalous rectangularly constrained subset and the most anomalous unconstrained subset, respectively. We assume $N(x) \ge n$ for all $x \in U_{X}(D)$, i.e., at least $n$ units belong to each unique covariate profile (non-empty cell) in the data. We consider the case where $M,n \longrightarrow \infty$, maximizing $F(S)= \max_\alpha F_\alpha(S)$ over $\alpha \in \left[\alpha_{min}, \alpha_{max} \right]$ for constants $0 < \alpha_{min} < \alpha_{max} < 1$. We can therefore demonstrate: [lem]{}[nullconverg]{} \[lem:null\_converg\] Under $H_{0}$ defined in \[eq:test\], $F^{NA}\left(S^{\ast}_{u}\right) {\xrightarrow{p}}\max_{Z} \frac{M\phi\left(Z\right)^2}{2(1-\Phi\left(Z\right))} \approx 0.202 \: M$, where $\phi$ and $\Phi$ are the Gaussian pdf and cdf respectively. Thus, when the null hypothesis is true, the score of the most anomalous unconstrained subset is asymptotically linear in $M$. Our ability to understand the limiting behavior of $F\left(S^{\ast}_{u}\right)$ is built atop LTSS theory, which indicates that the optimal unconstrained subset will be $S^{\ast}_{u} = \{x_{(1)}, \ldots, x_{(t)}\}_{t\in\left[1,M\right]}$, where $x_{(t)}$ has the $t^{th}$ largest value of the random variable $\frac{N_{\alpha}(x)}{N(x)}~\forall x \in U_{X}(D)$. Next we note that the score maximized over the space of unconstrained subsets upper-bounds the score maximized over the subspace of rectangular subsets: $F\left(S^{\ast}\right) \le F\left(S^{\ast}_{u}\right)$.
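The constant in Lemma \[lem:null\_converg\] is easy to check numerically; a short grid search over $Z$, using only the standard Gaussian pdf and cdf (via the error function), recovers $\max_{Z} \phi(Z)^2 / (2(1-\Phi(Z))) \approx 0.202$:

```python
import math

def phi(z):
    """Standard Gaussian pdf."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):
    """Standard Gaussian cdf, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Grid search over Z for the maximand in Lemma [lem:null_converg];
# the objective vanishes as Z -> +/- infinity, so [0, 4) suffices.
c = max(phi(z) ** 2 / (2 * (1 - Phi(z))) for z in (i / 1000 for i in range(4000)))
print(round(c, 3))  # 0.202 (attained near Z ~ 0.61)
```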
Therefore, we have the following result: [thm]{}[falseposotive]{} \[thm:false\_posotive\] Under $H_{0}$ defined in \[eq:test\], let $N(x) \ge n~ \forall x \in U_{X}(D)$, fix $\epsilon > 0$, and assume $M, n \longrightarrow \infty$; then there exists a constant $C\le \max_{Z} \frac{\phi\left(Z\right)^2}{2(1-\Phi\left(Z\right))} \approx 0.202$ and a critical value $h\left(M,\epsilon\right) = CM+\epsilon$ such that $$ P_{H_0}\left(\max_{S \in R} F(S) > h\left(M,\epsilon\right)\right) \longrightarrow 0.$$ As a direct consequence of Theorem \[thm:false\_posotive\], $\max_{S \in R} F(S)$ provides a statistic to quantify the evidence to reject $H_{0}$, whose Type I error rate can be controlled, producing an asymptotically valid $\gamma$-level hypothesis test $P_{H_0}(\text{Reject }H_0) \le \gamma$, for any fixed $\gamma \in (0,1)$. We note that, because we are maximizing both over subsets $S$ and thresholds $\alpha$, our result is distinct from the straightforward application of Dvoretzky–Kiefer–Wolfowitz bounds (maximizing over $\alpha$ for a given subset $S$), which would give us $\max_\alpha | \frac{N_\alpha(S)}{N(S)} - \alpha| \longrightarrow 0$ and therefore $F(S) \longrightarrow 0$. Next, we turn our attention to the alternative hypothesis, where $S^T$ represents the truly affected subset, $k=\frac{|U_{X}(S^T)|}{|U_{X}(D)|}$ is its proportion of covariate profiles, and $H_{1}\left(S^T\right)$ implies that there exist constants $\alpha$ and $\beta(\alpha) > \alpha$ such that, for all $x \in U_{X}(S^T)$, $\beta_x(\alpha) = F_{Y(1) \mid X=x}\left(F_{Y(0) \mid X=x}^{-1}(\alpha)\right) = \beta(\alpha)$. We then have the following results: [lem]{}[altconverg]{} \[lem:alt\_converg\] Under $H_{1}\left(S^T\right)$, $F^{NA}\left(S^T\right) {\xrightarrow{a.s.}}\max_{\alpha} (\beta\left(\alpha\right) - \alpha)^2\frac{kMn}{2\alpha(1-\alpha)}$.
[thm]{}[power]{} \[thm:power\] Under $H_{1}\left(S^T\right)$ defined in \[eq:test\], let $N(x) \ge n~ \forall x \in U_{X}(D)$, fix $\epsilon > 0$, and assume $M, n \longrightarrow \infty$; then for the same critical value $h\left(M,\epsilon\right)$ as in Theorem \[thm:false\_posotive\], $$P_{H_1}\left(\max_{S \in R} F(S) > h\left(M,\epsilon\right)\right) \longrightarrow 1.$$ As a direct consequence of Theorem \[thm:power\], the Type II error rate can be controlled, producing a hypothesis test with full asymptotic power: $P_{H_1}(\text{Reject }H_0) \longrightarrow 1$. Note that in this context we consider a fixed alternative $\beta(\alpha)$, as opposed to a local alternative where $\beta_{Mn}(\alpha) \longrightarrow \alpha$ as $M, n \longrightarrow \infty$. Additionally, because the critical value $h\left(M,\epsilon\right)$ is the same in Theorems \[thm:false\_posotive\] and \[thm:power\], we are therefore showing that $P(\text{Type I error}) + P(\text{Type II error}) \rightarrow 0$. We note that for any given experiment (with finite $M$ and $n$), permutation testing can be used to control the Type I error rate of our scanning procedure, and conditions have been shown under which permutation calibrations achieve the Type II error rates of an oracle scan test [@arias-castro-calibration-2017]. These results intuitively capture our statistic’s ability to conclude that the null hypothesis is false, i.e., that there exists some subset that follows $H_1$ and therefore invalidates $H_{0}$. However, this does not necessarily provide a guarantee that the statistic will exactly capture the true subset. Therefore, we next derive finite-sample conditions under which our framework achieves subset correctness: $S^{\ast} = S^T$. We begin by demonstrating that the score function of interest can be re-written as an additive function if we condition on the values of the null and alternative hypothesis parameters $\alpha$ and $\beta(\alpha)$ from the hypothesis test in §\[sec:div-stat\].
More specifically, the score of a subset $S$ can be decomposed into the sum of contributions (measured by a function $\omega$) from each individual covariate profile $x$ contained within the subset. For example, with respect to $F^{NA}$, $\omega^{NA} \left( \alpha, \beta, N_{\alpha}\left(x\right), N\left(x\right) \right) = C^{1}_{\alpha, \beta} ~ N_{\alpha}\left(x\right) + C^{2}_{\alpha, \beta} ~ N\left(x\right)$, where each $C$ is only a function of $\alpha$ and $\beta$, and therefore constant. [lem]{}[addfuncs]{} \[add\_funcs\] $F(S)$ can be written as $\max_{\alpha, \beta}\sum_{x \in U_{X}(S)}{\omega \left( \alpha, \beta, N_{\alpha}\left(x\right), N\left(x\right) \right)}$, for $\alpha, \beta \in (0,1)$ representing quantile values of the control and treatment potential outcomes distributions respectively. Next, we seek to demonstrate some important properties of the $\omega$ functions. More specifically we have that [lem]{}[naconcave]{} \[lem:na-concave\] $\omega^{NA}\left( \alpha, \beta, N_{\alpha}\left(x\right), N\left(x\right) \right)$ is concave with respect to $\beta$, maximized at $\beta_{\text{mle}}(x) = \frac{N_{\alpha}\left(x\right)}{N\left(x\right)}$, and has two roots $\left(\beta_{\min}(x), \beta_{\max}(x)\right)$. We show the same result in Lemma \[lem:bj-concave\] for $\omega^{BJ}.$ Intuitively, $(\beta_{\min}(x),\beta_{\max}(x))$ is the interval over which $\omega$ is concave and makes a positive contribution to the score of a subset, while this contribution is maximized at $\beta_{\text{mle}}(x)$; we note that in the case of $\omega^{NA}$, $\beta_{\min}(x) = \alpha$. We are now interested in the relationship between $r_{\max} = \beta_{\max}(x) - \alpha$ and $r_{\text{mle}} = \beta_{\text{mle}}(x) - \alpha$. [lem]{}[maxtomle]{} \[lem:na-max-to-mle\] With respect to $\omega^{NA}\left( \alpha, \beta, N_{\alpha}(x), N(x) \right)$, $\frac{r_{\max}(x)}{r_{\text{mle}}(x)} = 2$. 
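These properties can be verified numerically. The sketch below uses an explicit choice of $C^{1}_{\alpha,\beta}$ and $C^{2}_{\alpha,\beta}$ that is our own reconstruction (chosen to be consistent with the limits in Lemmas \[lem:null\_converg\] and \[lem:alt\_converg\]; the paper's own constants are defined in §\[sec:div-stat\]), and checks the maximizer, roots, and the ratio of 2 from Lemma \[lem:na-max-to-mle\]:

```python
def omega_na(alpha, beta, n_alpha, n):
    # Reconstructed constants (an assumption, not stated explicitly here):
    # C^1 and C^2 depend only on alpha and beta, so omega is linear in
    # (N_alpha(x), N(x)).
    c1 = (beta - alpha) / (alpha * (1 - alpha))
    c2 = -(beta ** 2 - alpha ** 2) / (2 * alpha * (1 - alpha))
    return c1 * n_alpha + c2 * n

alpha, n_alpha, n = 0.05, 30, 100
beta_mle = n_alpha / n  # empirical proportion of p-values <= alpha

# Concave in beta and maximized at beta_mle (checked on a grid) ...
grid = [i / 1000 for i in range(1, 1000)]
best = max(grid, key=lambda b: omega_na(alpha, b, n_alpha, n))
assert abs(best - beta_mle) < 1e-3

# ... with roots beta_min = alpha and beta_max = 2 * beta_mle - alpha,
# so r_max / r_mle = 2, matching Lemma [lem:na-max-to-mle].
beta_max = 2 * beta_mle - alpha
assert abs(omega_na(alpha, alpha, n_alpha, n)) < 1e-8
assert abs(omega_na(alpha, beta_max, n_alpha, n)) < 1e-8
assert abs((beta_max - alpha) / (beta_mle - alpha) - 2) < 1e-9
```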
We show a similar result in Lemma \[lem:bj-max\_to\_mle\] for $\omega^{BJ}$. Given these two properties of the $\omega$ function, we can now provide sufficient conditions for the detected subset to be exactly correct, i.e., $S^{\ast} = S^T$. We introduce some additional notation: $r^{\text{aff}}_{\text{mle}-h} = \max_{x \in U_{X}(S^T)} r_{\text{mle}}(x)$, $r^{\text{aff}}_{\text{mle}-l} = \min_{x\in U_{X}(S^T)} r_{\text{mle}}(x)$, $r^{\text{unaff}}_{\text{mle}-h} = \max_{x \not\in U_{X}(S^T)} r_{\text{mle}}(x)$, $\eta = \left( \frac{\sum_{x \in U_{X}(S^T)}{N\left(x\right)} }{ \sum_{x \in U_{X}(D)}{N\left(x\right)} } \right)$, and the invertible function $R \colon r_{\max}(x) \mapsto r_{\text{mle}}(x)$. From Lemma \[lem:na-max-to-mle\] we know that with respect to $\omega^{NA}$, $R^{NA}(r) = \frac{r}{2}$. We also introduce the concepts of $\nu$-*homogeneous*, which means that $\frac{r^{\text{aff}}_{\text{mle}-h}}{r^{\text{aff}}_{\text{mle}-l}} < \nu$, and $\delta$-*strong*, which means that $\frac{r^{\text{aff}}_{\text{mle}-l}}{r^{\text{unaff}}_{\text{mle}-h}} > \delta$. Intuitively, the concept of *homogeneity* measures how similarly the treatment affects each $F_{Y| X=x}$ for $x \in U_{X}(S^T)$, while *strength* measures how large an effect the treatment exhibits across all $F_{Y | X=x}$ for $x \in U_{X}(S^T)$. More specifically, these concepts respectively imply that for any pair of affected covariate profiles $\left(x_i, x_j \in U_{X}(S^T)\right)$, the anomalous signal (i.e., treatment effect) observed in $x_i$ is less than $\nu$ times that observed in $x_j$, and that the treatment effect observed in every affected covariate profile is more than $\delta$ times that of the unaffected profiles.
Using these concepts we have the following results: [thm]{}[homo]{} \[thm:subset\_homo\] Under $H_1(S^T)$ defined in \[eq:test\], where $|U_{X}(S^T)|=t$, $\exists~\nu > 1$ such that if the observed effect (as measured by $\omega$) across the $t$ covariate profiles in $S^T$ is $\nu$-*homogeneous*, and at least $1$-*strong*, then the highest scoring subset $S^{\ast} \supseteq S^{T}$. [thm]{}[strength]{} \[thm:subset\_strength\] Under $H_1(S^T)$ defined in \[eq:test\], where $|U_{X}(S^T)|=t$, $\exists~\delta > 1$ such that if the observed effect (as measured by $\omega$) across the $t$ covariate profiles in $S^T$ is $\frac{\delta}{\eta}$-*strong*, then the highest scoring subset $S^{\ast} \subseteq S^{T}$. Together, these results demonstrate that the test statistic $\max_S F(S)$ possesses desirable statistical properties. Theorems \[thm:false\_posotive\] and \[thm:power\] imply that the asymptotic Type I and II errors of our procedure can be controlled, with implications for maximization over subsets of empirical processes more generally. Theorems \[thm:subset\_homo\] and \[thm:subset\_strength\] indicate that for a given score function there exist constants $\nu$ and $\delta$, both of which equal 2 for $F^{NA}$, that define how similar and how strong the treatment effect must be in the affected subpopulation to ensure that the highest-scoring subset corresponds exactly to the true affected subset: $S^{\ast} = S^T$. To our knowledge, this is the first work on heterogeneous treatment effects that provides conditions on the exactness of subpopulation discovery.

Empirical Analysis {#sec:results}
==================

In this section we empirically demonstrate the utility of the TESS framework as a tool to identify subpopulations with significant treatment effects. We use data from the Tennessee Student/Teacher Achievement Ratio (STAR) randomized experiment [@word-star-1990] in order to provide representative performance in real-world policy analysis.
We review the original STAR data (§\[sec:star-data\]) and describe our procedure for simulating affected subpopulations (§\[sec:simulations\]). Through the simulation results described in §\[sec:sim-results\], we compare the ability of TESS to detect significant subpopulations against three recently proposed statistical learning approaches: Causal Tree [@athey-hte-2016], Interaction Tree [@su-subgroup-2009; @athey-hte-2016], and Causal Forest [@wager-causalforest-2017]. Specifically, we evaluate each method on two general metrics: detection power and subpopulation accuracy. Detection power measures $P_{H_1}(Reject~H_0)$, or how well a method can detect the existence (not necessarily the location) of treatment effect heterogeneity in the experiment. Subpopulation accuracy, on the other hand, is specifically designed to measure how precisely and completely a method can capture the subpopulation(s) with significant treatment effects. Finally, we conduct an exploratory analysis of the STAR dataset, and in §\[sec:star-eda\] discuss the subpopulations identified by TESS as affected by treatments. In some cases, the identified subpopulation is consistent with the literature on the STAR experiment; in other cases, TESS uncovers previously unreported, but intuitive and believable, subpopulations. These empirical results demonstrate TESS’s potential to generate useful and non-obvious hypotheses for further exploration and testing.

Tennessee STAR Experiment {#sec:star-data}
-------------------------

The Tennessee Student/Teacher Achievement Ratio (STAR) experiment is a large-scale, four-year, longitudinal randomized experiment started in 1985 by the Tennessee legislature to measure the effect of class size on student educational outcomes, as measured by standardized test scores. The experiment started monitoring students in kindergarten (during the 1985-1986 school year) and followed students until third grade.
Students and teachers were randomly assigned to conditions during the first school year, with the intention that students continue in their class-size condition for the entirety of the experiment. The three potential experiment conditions were not based solely on class size, but also on the presence of a full-time teaching aide: small classrooms (13-17 pupils), regular-size classrooms (22-25 pupils), and regular-size classrooms with aide (still 22-25 pupils). Therefore, the difference between the former two conditions is classroom size, and the difference between the latter two conditions is the inclusion of a full-time teacher’s aide in the classroom. The experiment included approximately 350 classrooms from 80 schools, each of which had at least one classroom of each type. Each year more than 6,000 students participated in this experiment, with the final sample including approximately 11,600 unique students. The Tennessee STAR dataset has been well studied and analyzed, both by the project’s internal research team [@word-star-1990; @folger-star-1989] and by external researchers [@krueger-star-1999; @nye-star-2000]. As indicated by [@krueger-star-1999], these investigations have primarily focused on comparing means and computing average treatment effects. [@krueger-star-1999] presents a detailed econometric analysis and draws similar conclusions to the previous research: students in small classrooms perform better than those in regular classrooms, while there is no significant effect of a full-time teacher’s aide, or moderation from teacher characteristics. Moreover, the effect accumulates with each year a student spends in a small classroom [@krueger-star-1999]. Additionally, these conclusions are robust in the presence of potentially compromising experimental design challenges: imbalanced attrition, subsequent changes to original treatment assignments, and fluctuating class sizes [@krueger-star-1999].
Experimental Simulation Setup {#sec:simulations}
-----------------------------

The goal of our experimental simulation is to replicate conditions under which a researcher would want to use an algorithm to discover subpopulations with significant treatment effects, and to observe how capable various algorithms are at identifying the correct subpopulation(s). In order to replicate realistic conditions, we use the STAR experiment as our base dataset, and inject into it subpopulations (of a given size) with a treatment effect (of a given magnitude). More specifically, we treat each student-year as a unique record and for each record capture ten covariates: student gender, student ethnicity, grade, STAR treatment condition, free-lunch indicator, school development environment, teacher degree, teacher ladder, teacher experience, and teacher ethnicity. We note that each of these variables, other than teacher experience, is discrete; we discretize experience into five-year intervals: $[0,5), [5,10), \ldots, [30, \infty)$. The number of values a covariate can take ranges from two to eight. By preserving the overall data structure of the STAR experiment (number of covariates, covariate value correlations, subpopulations, sample sizes, etc.), our simulations better replicate the structure (and challenges) faced by experimenters. The process we follow to generate a simulated treatment effect begins with selecting a subpopulation $S_{\text{affected}}$ to affect. Recall that the dataset contains a set of discrete covariates $X = (X^{1}, \ldots, X^{d})$, where each $X^{j}$ can take on a vector of values $V^{j}=\{v^{j}_{m}\}_{m = 1...|V^j|}$ and $|V^{j}|$ is the arity of covariate $X^{j}$. Therefore, we define a subpopulation as $S = \{v^1 \times \ldots \times v^d\}$, where $v^j \subseteq V^j$.
The affected subpopulation is generated at random based on two parameters: $num\_covs$, the number of covariates to select, and $value\_prob$, the probability that a covariate value is selected. We select $num\_covs$ covariates at random, and for each of these covariates we select each of their values with probability equal to $value\_prob$, ensuring that at least one value for each of these covariates is selected. The final affected subpopulation is then $S_{\text{affected}} = \{v^1 \times \ldots \times v^d\}$, where $v^j$ is the set of selected values if $X^{j}$ is one of the $num\_covs$ covariates, and otherwise $v^j = V^j$. In other words, for a random subset of covariates, $S_{\text{affected}}$ only includes a random subset of their values, and for all other covariates $S_{\text{affected}}$ includes all of their values. This treatment effect simulation scheme allows for variation in the size of the subpopulation that is affected: instances of $S_{\text{affected}}$ can constitute a small subpopulation (a challenging detection task), a large subpopulation (a relatively easier detection task), or something in between. Therefore a set of simulations, with varying parameter values, captures the spectrum of conditions a researcher may face when analyzing an experiment to identify subpopulations with significant treatment effects. The next step in the process involves partitioning the dataset into treatment and control groups, and generating outcomes for each record. Outcomes are drawn randomly from one of two distributions: the null distribution ($f_0$) or the alternative distribution ($f_1$). Any record in the treatment group that has a covariate profile $x \in U_{X}\left(S_{\text{affected}}\right)$ has outcomes generated by $f_1$; all other records have outcomes drawn from $f_0$. Therefore only $S_{\text{affected}}$ has a treatment effect, whose magnitude is the distributional difference between $f_0$ and $f_1$, represented by the parameter $\delta$.
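The injection procedure above can be sketched as follows. The function names, the integer encoding of covariate values, and the Gaussian $f_0 = N(0,1)$, $f_1 = N(\delta, 1)$ choice (used in §\[sec:sim-results\]) are illustrative, not the exact simulation code:

```python
import random

def inject_subpopulation(arities, num_covs, value_prob, rng):
    """Randomly choose an affected subpopulation S_affected = v^1 x ... x v^d.

    `arities[j]` is |V^j|; values of covariate j are encoded 0..arities[j]-1.
    """
    d = len(arities)
    chosen = rng.sample(range(d), num_covs)  # covariates restricted to a value subset
    subsets = []
    for j in range(d):
        if j in chosen:
            vals = [m for m in range(arities[j]) if rng.random() < value_prob]
            if not vals:  # ensure at least one value per selected covariate
                vals = [rng.randrange(arities[j])]
            subsets.append(set(vals))
        else:
            subsets.append(set(range(arities[j])))  # all values included
    return subsets

def outcome(profile, treated, affected, rng, delta=1.5):
    """Draw y ~ f1 = N(delta, 1) for treated units inside S_affected, else f0 = N(0, 1)."""
    in_affected = all(profile[j] in affected[j] for j in range(len(profile)))
    mean = delta if treated and in_affected else 0.0
    return rng.gauss(mean, 1.0)

rng = random.Random(0)
affected = inject_subpopulation(arities=[2, 3, 4], num_covs=2, value_prob=0.5, rng=rng)
y = outcome(profile=(0, 1, 2), treated=True, affected=affected, rng=rng)
```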
Each of the methods we consider in these experiments has a unique approach to identifying potential subpopulations with differential treatment effects. Furthermore, as mentioned in §\[sec:lit-limits\], most methods in the literature do not provide a process for identifying extreme treatment effects. Therefore, we devise intuitive post-processing steps in an attempt to represent how researchers would use each method to identify potential subpopulations that have significant treatment effects. Each method returns identified subpopulations and corresponding scores (measures) of the treatment effect. For the single tree-based methods [@athey-hte-2016; @su-subgroup-2009] we follow the suggestion of [@athey-hte-2016] to perform inference (via a two-sample Welch t-test) in each leaf of the tree, and we then sort the leaves based on their statistical significance. The final subpopulation returned by the tree is the leaf with the most statistically significant treatment effect, and the final treatment effect measure is this leaf’s statistical significance ($p$-value). For a method that provides an individual-level treatment effect (and estimate of variance) [@wager-causalforest-2017], we propose to perform inference for each unique covariate profile, and return those that are statistically significant. The final treatment effect measure is the smallest $p$-value among the covariate profiles. The TESS algorithm, by design, provides the subpopulation it determines to have a statistically significant distributional change (treatment effect) and a measure of this change, so no post-processing is necessary.

### Detection Power

For any given combination of simulation parameter values ($\delta, num\_covs, value\_prob$), detection power measures $P(Reject~H_0 \mid H_1(S_{\text{affected}}))$, or how well a method is able to identify the presence of $S_{\text{affected}}$.
This is accomplished by comparing the treatment effect measure (score of the detected subset) found under $H_1(S_{\text{affected}})$ to the distribution of the treatment effect measure under $H_0$. More specifically, for a given set of parameter values, we generate a random dataset which only exhibits a treatment effect in the randomly selected subpopulation $S_{\text{affected}}$; each method attempts to detect this subpopulation. As described in §\[sec:simulations\], each method returns a final treatment effect measure for the subpopulation it detects in this affected dataset. For the same dataset, we then conduct randomization testing to determine how significant this treatment effect measure is under $H_0$. We make many copies of the dataset (1000 in our experiments), and in each copy we generate new outcomes (drawn from $f_0$) such that no subpopulation has a treatment effect. Each method then generates a detected subpopulation and corresponding treatment effect measure for each of these null datasets. These treatment effect measures from the null datasets together provide an empirical estimate of the distribution of the treatment effect measure under $H_0$ for that method. Subsequently, a $p$-value is computed for the treatment effect measure captured under $H_1(S_{\text{affected}})$. This process is repeated many times (300 in our experiments), where each time we 1) generate a random $S_{\text{affected}}$, 2) generate a random dataset under $H_1(S_{\text{affected}})$ and compute each method’s treatment effect measure, and 3) generate 1000 copies of the dataset with no treatment effect to compute each method’s treatment effect measure distribution under $H_0$. This process creates 300 $p$-values for each method, which describe how extreme each of the $S_{\text{affected}}$ appears under $H_0$.
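The randomization-testing loop above can be sketched generically; `score_fn` and `make_null_dataset` are hypothetical stand-ins for any method's treatment-effect measure and the outcome-regeneration step, and the toy illustration at the end uses the maximum outcome as the score:

```python
import random

def randomization_pvalue(observed_score, score_fn, make_null_dataset, n_null=1000, rng=None):
    """Empirical p-value of an observed treatment-effect measure under H0.

    `score_fn` stands in for any method's measure (e.g., max_S F(S));
    `make_null_dataset` regenerates all outcomes from f0, so that no
    subpopulation has a treatment effect.
    """
    rng = rng or random.Random(0)
    null_scores = [score_fn(make_null_dataset(rng)) for _ in range(n_null)]
    # The +1 correction keeps the randomization test valid (p is never 0).
    return (1 + sum(s >= observed_score for s in null_scores)) / (1 + n_null)

def detection_power(pvalues, gamma=0.05):
    """Proportion of simulated alternatives rejected at test level gamma."""
    return sum(p <= gamma for p in pvalues) / len(pvalues)

# Toy illustration: the observed score 5.0 sits far in the tail of the null
# distribution of the maximum of 50 draws from N(0, 1), so p is small.
p = randomization_pvalue(5.0, max, lambda r: [r.gauss(0, 1) for _ in range(50)], n_null=200)
```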
A method rejects $H_0$ for a given $p$-value if it is less than or equal to some test-level $\gamma$, corresponding to the $1-\gamma$ quantile of the null distribution ($\gamma = 0.05$ in our experiments). Therefore, the detection power $P\left(Reject~H_0 \mid H_1(S_{\text{affected}})\right)$ is captured as the proportion of $p$-values that are sufficiently extreme that they lead to the rejection of $H_0$ at level $\gamma$.

### Detection Accuracy

While detection power measures how well a method identifies the presence of a subpopulation with a treatment effect $S_{\text{affected}}$, as compared to datasets with no treatment effect, detection accuracy measures how well a method can precisely and completely identify the affected subpopulation $S_{\text{affected}}$. Accurately identifying in which subpopulation(s) a treatment effect exists can be crucial, particularly when there is no prior theory to guide which subpopulations to inspect, or when the goal itself is to develop intuition for new theory. As described in §\[sec:simulations\], each of the methods we consider is able to return the subpopulation that it determines as having the most statistically significant treatment effect, $S_{\text{detected}}$. Each method will pick out a set of covariate profiles, which could have coherent structure (as with TESS, Causal Tree, and Interaction Tree), or be an unstructured collection of individually significant covariate profiles (as with Causal Forest). To accommodate both types of subpopulations, we therefore define detection accuracy as $$ \label{eqn:accuracy} \begin{split} \text{accuracy} = \frac{|S_{\text{detected}}\:\cap\: S_{\text{affected}}|} {|S_{\text{detected}}\:\cup\: S_{\text{affected}}|} = \frac{\sum_{R_i} \mathbbm{1}\{R_i \in S_{\text{detected}} \cap S_{\text{affected}}\}} {\sum_{R_i} \mathbbm{1}\{R_i \in S_{\text{detected}} \cup S_{\text{affected}}\}}, \end{split}$$ where $R_i$ are records in the treatment group.
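A minimal sketch of this accuracy computation over treatment-record identifiers (the value returned when both sets are empty is our own convention):

```python
def detection_accuracy(detected, affected):
    """Jaccard coefficient between the detected and truly affected sets of treatment records."""
    detected, affected = set(detected), set(affected)
    union = detected | affected
    # Convention (assumption): two empty sets count as a perfect match.
    return len(detected & affected) / len(union) if union else 1.0

# Half of the union {1, 2, 3, 4} is shared, so accuracy is 0.5.
acc = detection_accuracy({1, 2, 3}, {2, 3, 4})
```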
This definition of accuracy, commonly known as the Jaccard coefficient, is intended to balance precision (i.e., what proportion of the detected subjects truly have a treatment effect) and recall (i.e., what proportion of the subjects with a treatment effect are correctly detected). We note that $0 \le \text{accuracy} \le 1$; high accuracy values correspond to a detected subset $S_{\text{detected}}$ that captures many of the subjects with treatment effects and few or no subjects without treatment effects.

Simulation Results {#sec:sim-results}
------------------

Our first set of results involves a treatment effect that is a mean shift in a normal distribution: the null distribution is $f_0 = N(0,1)$ and the alternative is $f_1= N(\delta,1)$, where $\delta$ captures the magnitude of the signal (treatment effect). Recall from §\[sec:simulations\] that there are three parameters that we can vary to change the size and magnitude of the signal. For our simulation, we specifically consider $\delta \in \{0.25, 0.5, \ldots, 3.0\}$, $num\_covs \in \{1,2, \ldots, 10\}$, and $value\_prob \in \{0.1, 0.2, \ldots, 0.9\}$; the former controls the magnitude of the treatment effect, while the latter two control the concentration of the treatment effect (i.e., the expected size of the affected subpopulation). Instead of considering every combination, we select the middle value of each parameter interval as a reference point ($\delta= 1.5, num\_covs = 5, value\_prob = 0.5$) and measure performance changes for one parameter while keeping the others fixed. Figure \[fig:parametric-detection-power\] shows the changes in each method’s detection power as we vary each of the three parameters that contribute to the strength of the treatment effect. From each of the three graphs we observe that TESS consistently exhibits power greater than (or equivalent to) that of the other methods.
More importantly, TESS exhibits statistically significant improvements in power for the most challenging ranges of parameter values (i.e., more subtle signals). The top plot varies the effect size ($\delta$), which is positively associated with signal strength and negatively associated with detection difficulty; for values $2.0$ and below, TESS has significantly higher detection power than the competing methods. The middle plot varies the number of covariates selected to have only a subset of values affected ($num\_covs$). This parameter is negatively associated with signal strength and positively associated with detection difficulty; for values $5$ and above, TESS has significantly higher detection power. The bottom plot varies the expected proportion of values, for the selected covariates, that will be affected ($value\_prob$). This parameter is positively associated with signal strength and negatively associated with detection difficulty; for values $0.5$ and below, TESS exhibits significantly higher detection power. We see that, for sufficiently strong signals (based on both signal magnitude and concentration), all methods are able to distinguish between experiments with and without a subpopulation exhibiting a treatment effect, while TESS provides significant advantages in detection power for weaker signals.

[.5]{} ![Detection power of each method under the mean-shift treatment effect. The three parameters start as fixed ($\delta=1.5, num\_covs=5, value\_prob=0.5$) and then are varied individually to see how detection power varies.[]{data-label="fig:parametric-detection-power"}](./images/parametric_power_effect_size.png "fig:"){width=".99\linewidth"} ![](./images/parametric_power_num_atts.png "fig:"){width=".99\linewidth"} ![](./images/parametric_power_prob_val.png "fig:"){width=".99\linewidth"}

[.5]{} ![Subpopulation accuracy of each method under the mean-shift treatment effect. The three parameters start as fixed ($\delta=1.5, num\_covs=5, value\_prob=0.5$) and then are varied individually to see how accuracy varies.[]{data-label="fig:parametric-detection-accuracy"}](./images/parametric_accuracy_effect_size.png "fig:"){width=".99\linewidth"} ![](./images/parametric_accuracy_num_atts.png "fig:"){width=".99\linewidth"} ![](./images/parametric_accuracy_prob_val.png "fig:"){width=".99\linewidth"}

[.5]{} ![Ability of each method to identify subpopulations with an unaffected mean, but distributional treatment effect (detection power, top; subpopulation accuracy, bottom). The three parameters start as fixed ($\delta=1.5, num\_covs=5, value\_prob=0.5$) and then are varied individually to see how detection ability varies.[]{data-label="fig:nonparametric-detection-all"}](./images/nonparametric_power_effect_size.png "fig:"){width=".99\linewidth"} ![](./images/nonparametric_power_num_atts.png "fig:"){width=".99\linewidth"} ![](./images/nonparametric_power_prob_val.png "fig:"){width=".99\linewidth"}

[.5]{} ![](./images/nonparametric_accuracy_effect_size.png "fig:"){width=".99\linewidth"} ![](./images/nonparametric_accuracy_num_atts.png "fig:"){width=".99\linewidth"} ![](./images/nonparametric_accuracy_prob_val.png "fig:"){width=".99\linewidth"}

Figure \[fig:parametric-detection-accuracy\] shows the changes in each method’s detection accuracy as we vary each of the three parameters that contribute to the strength of the treatment effect. From each of the three graphs we observe that TESS consistently exhibits significantly higher accuracy than any other method. Recall that we measure subpopulation accuracy as the Jaccard coefficient defined above, which captures both precision and recall of the subpopulation returned by a method. The single-tree methods tend to have high precision but low recall, resulting in compromised overall accuracy. Intuitively, these results indicate that the truly affected subpopulation is being spread over multiple leaves of the tree, despite the tree’s goal of partitioning the data into subpopulations with similar outcomes. This phenomenon may be caused by the greedy-search aspect of tree learning: if the tree splits the affected subpopulation between two branches, the recall of any single leaf will be compromised, especially when this split occurs close to the root of the tree. The Causal Forest ensemble method, on the other hand, exhibits relatively higher recall than precision. These results indicate that it is difficult for Causal Forest to distinguish between the covariate profiles that do and do not make up the truly affected subpopulation, as profiles from both sets appear to have statistically significant treatment effects. This inability stems from the fact that ensemble methods are designed to provide individual-level predictions; therefore, their conclusions regarding the statistical significance of a covariate profile are made in isolation from the other covariate profiles that also make up the affected subpopulation.
Unlike single-tree methods, ensemble methods do not provide coherent and natural groupings of subpopulations. TESS, however, does provide a coherent subpopulation, which appears to balance precision and recall, maintaining significantly higher subpopulation accuracy. It is also important to note that the data-generating process for these simulations (a treatment effect that occurs as a mean shift between the treatment and control distributions) corresponds to the modeling assumptions of the current methods in the literature, which specifically attempt to detect mean shifts, while TESS is designed to detect more general distributional changes. TESS’s improved performance over the competing methods even in these adverse conditions may be due to its subset-scanning approach, which combines information across groups of data in an attempt to find exactly and only the affected subset of data. Even if each truly affected covariate profile individually exhibits only weak evidence of a treatment effect, TESS can leverage the group structure and combined signal of all the affected covariate profiles, and correctly conclude that collectively the subpopulation exhibits significant evidence of a treatment effect. Additionally, because TESS executes its optimization iteratively, unlike the greedy search of tree-based methods, it can rectify initial subset choices that are later determined to be inferior. Our second set of results considers treatment effects that do not align with the mean-shift assumption that pervades the literature. The null distribution is still $f_0 = N(0,1)$; however, the alternative is a mixture distribution $f_1= \frac{1}{2}N(-\delta,1)+\frac{1}{2}N(\delta,1)$. Here $\delta$ still captures the magnitude of the signal (treatment effect), and the remainder of the simulation process remains unchanged.
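The two alternatives can be contrasted with a minimal simulation (our own illustration; the seed and sample size are arbitrary), which shows that the mixture alternative leaves the mean essentially untouched while still changing the distribution (its variance is $1+\delta^2$):

```python
import numpy as np

rng = np.random.default_rng(0)
delta, n = 1.5, 100_000

control = rng.normal(0.0, 1.0, n)            # f0 = N(0, 1)
signs = rng.choice([-1.0, 1.0], size=n)      # pick a mixture component per draw
treated = rng.normal(signs * delta, 1.0, n)  # f1 = 0.5 N(-delta, 1) + 0.5 N(delta, 1)

ate = treated.mean() - control.mean()        # average treatment effect, near zero
var_gap = treated.var() - control.var()      # distributional effect, near delta**2
```

Mean-shift-based methods target `ate`, which is statistically indistinguishable from zero here, whereas the variance gap (near $\delta^2 = 2.25$) is exactly the kind of distributional signal the nonparametric scan statistic is built to detect.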
This mixture distribution alternative, however, changes the detection task dramatically: while the average treatment effect is zero, there is still a clear difference in the outcome distribution between treated and control individuals. Figure \[fig:nonparametric-detection-all\] shows how each method’s detection power and accuracy change as we vary each of the three parameters that contribute to the strength of the treatment effect. Compared with the mean-shift simulations above, TESS exhibits a consistent pattern of high performance, while the performance of the competing methods is dramatically lower. The detection power results indicate that, for the competing methods, it is hard to distinguish even strong distributional changes from random chance, while the accuracy results indicate that their pinpointing of the affected subpopulation is little better than random guessing. Given that there is no observable mean shift in these simulations, these results are consistent with what we expect: TESS is designed to identify more general distributional changes, while the other methods are unable to identify distributional changes without corresponding mean shifts.

A Case Study on Identifying Subpopulations: Tennessee STAR {#sec:star-eda}
----------------------------------------------------------

There appears to be a consensus in the literature that the presence of a teaching aide in a regular-size classroom has an insignificant effect on test scores [@word-star-1990; @krueger-star-1999; @folger-star-1989; @stock_watson-econ-2nd]. (One significant effect was observed in first grade, but this effect was largely considered to be a false positive.) Therefore, we want to use TESS to compare regular classrooms with an aide to regular classrooms without an aide, to determine if there appears to be a subpopulation that was significantly and positively affected by the treatment.
To do so, we replicate the analysis of the internal STAR team, using TESS to extend the results, with the goal of demonstrating what the STAR team could have surmised with present-day tools for uncovering heterogeneity. We follow the original STAR analysis from [@word-star-1990; @stock_watson-econ-2nd], which takes the sum of the Stanford math and reading scores as the outcome of interest. For the data provided to TESS for detection, we combine the panel data across years and include the student’s grade level as a covariate.

[.5]{} ![Kernel density plots of 2nd grade test scores for treatment students (red) who were in a regular classroom with a teacher’s aide and control students (black) who did not have a teacher’s aide: all 2nd grade students.[]{data-label="fig:density-2nd-all"}](./images/star_exploration_full_2nd.png "fig:"){width=".75\linewidth"}

[.5]{} ![Kernel density plots of 2nd grade test scores for the detected subpopulation: treatment students (red) with a teacher’s aide and control students (black) without.[]{data-label="fig:density-2nd-subpop"}](./images/star_exploration_adjusted_2nd.png "fig:"){width=".75\linewidth"}

[.5]{} ![Kernel density plots of 3rd grade test scores for treatment students (red) who were in a regular classroom with a teacher’s aide and control students (black) who did not have a teacher’s aide: all 3rd grade students.[]{data-label="fig:density-3rd-all"}](./images/star_exploration_full_3rd.png "fig:"){width=".75\linewidth"}

[.5]{} ![Kernel density plots of 3rd grade test scores for the detected subpopulation: treatment students (red) with a teacher’s aide and control students (black) without.[]{data-label="fig:density-3rd-subpop"}](./images/star_exploration_adjusted_3rd.png "fig:"){width=".75\linewidth"}

We would also like to obtain an unbiased estimate of the average treatment effect in the subpopulation identified by TESS.
Therefore, we follow a cross-validation paradigm, where the entire dataset is partitioned into ten folds, and iteratively each fold is held out as a validation set (to obtain an estimate of the treatment effect) while the remaining nine folds are provided to TESS (for detection). We further partition the data into records corresponding to students observed in a regular classroom with an aide and a regular classroom without an aide, which serve as treatment and control groups respectively. In three of the ten folds, TESS identified exactly the same subset, which we will call the “detected subpopulation”. Essentially, this detected subpopulation is composed of students in second or third grade who attended an inner-city or urban school and received instruction from a teacher with 10 or more years of experience[^3]. Therefore, it appears that the presence of an aide raised the test scores of students exhibiting the selected covariate values described above for grade, school type, and teacher experience, in addition to any values for gender, free-lunch status, teacher ethnicity, and teacher degree. The subpopulations returned across the ten folds largely agreed with the detected subpopulation: the fold subpopulations exhibited 88% agreement (on average) with the detected subpopulation on the detection status of a record. The estimated average treatment effect for this detected subpopulation, averaged across all validation folds, is approximately a 34.19-point increase in total test score (36.45 and 22.28 for second and third grades respectively). Given this consistency across folds, we use the full data to better understand the effect in the detected subpopulation generally. Table \[table:reg\] shows the evaluation of the treatment effect for all second-grade students (column 1), second-grade students in the detected subpopulation (column 2), and second-grade students in the complement of the detected subpopulation (column 3).
Additionally, Figure \[fig:density-2nd\] shows the kernel density plots of the cumulative scores for all second-grade students and students in the detected subpopulation respectively. Figure \[fig:density-2nd-all\] depicts a strong similarity in the distribution of all second graders’ scores with and without a full-time aide; there is a slight difference around the center of the distribution, but its magnitude is not sufficiently large to be significant, as seen in column 1 of Table \[table:reg\]. Conversely, Figure \[fig:density-2nd-subpop\] depicts a difference in test scores for the detected subpopulation of second graders: there appears to be a clear effect of the treatment (dominated by a large mean shift), supported by column 2 of Table \[table:reg\]. We conduct a similar analysis with third graders, and observe similar results in Figure \[fig:density-3rd\] and Table \[table:reg\]. However, the effect of the treatment in third grade appears to result in less of a mean shift, and is better characterized by a change in the skew (third moment) and therefore the overall form of the distribution (Figure \[fig:density-3rd-subpop\]). We note that, because TESS can identify effects that change the distribution (and therefore higher-order moments) of test scores, it could potentially still identify the existence of a treatment effect in third grade even if the difference in mean scores between treatment and control students were smaller. There appears to be another consensus in the literature that small classrooms have a consistent, positive, and significant effect [@word-star-1990; @krueger-star-1999; @folger-star-1989]; therefore, we also compare small classrooms to regular classrooms, and determine whether there appears to be a subpopulation that is the main driver of this effect. We conduct an analysis as above but with STAR data records corresponding to students observed in a small classroom (treatment group) and a regular classroom (control group).
For this analysis, TESS identified the entire population, which is congruent with the previous literature’s analysis of the consistent and significant average treatment effect in each grade. This result from TESS appears to indicate that the effect of small classroom size was not limited to a specific subpopulation. For both TESS analyses, we also conducted permutation testing to compensate for multiple hypothesis testing. Based on these results, we conclude that there is a less than 0.01% chance we would obtain a subpopulation with a score as extreme under the null hypothesis. The detected subpopulation in the classrooms with aides is not only statistically significant, but may also provide domain insight into the efficacy of full-time aides. A possible explanation for the effect we observe in the detected subpopulation is the fact that 13 schools were chosen at random to have teachers participate in an in-service training session, which the literature has also deemed ineffective [@word-star-1990]. More specifically, 57 teachers were selected each summer from these schools to participate in a three-day in-service to help them teach more effectively in whatever class type they were assigned to; part of the instruction focused on how to work with an aide and also had the aides present. We note that the in-service only occurred during the summers prior to 2nd and 3rd grade, which are the grades identified by TESS. Therefore, it is possible that when provided proper training, the combination of an aide and an experienced teacher can provide a significantly enhanced education environment even in the challenging teaching environments that exist in inner-city and urban schools. 
An additional explanation is that the educational benefits may be cumulative–i.e., in each additional year a student in this subpopulation has access to the combination of an aide and experienced teacher, the treatment effect compounds–similar to what has been demonstrated in small classrooms for the overall population [@krueger-star-1999]. However, unlike in small classrooms, for this subpopulation in regular classrooms with an aide, the effects were not large enough to be distinguishable from zero (given the much smaller sample size of the affected subpopulation and smaller treatment effect) until after two years. While a more detailed follow-up analysis of these hypotheses might reveal other causal factors and mechanisms at work, we believe that these results do present evidence that a treatment previously believed to be ineffective may actually have been effective for a particularly vulnerable subpopulation. Therefore, this analysis provides a sense of how TESS can be utilized as a tool for data-driven hypothesis generation in real-world policy analysis. Conclusions {#sec:conclusion} =========== This paper has presented several contributions to the literature on statistical machine learning approaches for heterogeneous treatment effects. We proposed the Distribution Average Treatment Effect (DATE) estimand, which generalizes the focal estimands used in this literature. Moreover, as a specific example of DATE, we derived the Nonparametric Average Treatment Effect (NATE) estimand, which allows detection of treatment effects that manifest as arbitrary effects on the potential outcome distributions (or specific quantiles), rather than being limited to detection of mean shifts. Furthermore, we consider the challenge of identifying whether any subpopulation has been affected by treatment, and precisely characterizing the affected subpopulation, as opposed to the more typical problem setting of estimating individual-level treatment effects. 
We formalize the identification of subpopulations with significant treatment effects as an anomalous pattern detection problem, and present the Treatment Effect Subset Scan (TESS) algorithm, which provides a computationally efficient test statistic by maximizing our NATE estimand over all subpopulations. We demonstrate that the estimator used by TESS satisfies the linear-time subset scanning property, allowing it to be efficiently and exactly optimized over subsets of a covariate’s values, while evaluating only a linear rather than exponential number of subsets. This efficient conditional optimization step is incorporated into an iterative procedure that jointly maximizes over subsets of values for each covariate in the data: the result is a subpopulation, described as a subset of values for each covariate, that demonstrates the most evidence for a statistically significant treatment effect. In addition to its computational efficiency, we derive desirable statistical properties for the TESS estimator: bounded asymptotic probabilities of Type I and Type II errors, as well as sufficient conditions under the alternative hypothesis for TESS to exactly identify the affected subpopulation. These properties apply more generally to the class of nonparametric scan statistics upon which TESS is built; therefore, this theory also serves as a contribution to the anomalous pattern detection and scan statistics literatures. In addition to proposing a novel algorithm with desirable properties, we provide an extensive comparison between TESS and other recently proposed statistical machine learning methods for heterogeneous treatment effects (Causal Tree, Interaction Tree, and Causal Forest) through semi-synthetic simulations. Our results indicate that TESS consistently outperforms the other methods in its ability to identify and precisely characterize subpopulations that exhibit treatment effects.
TESS significantly outperforms competing methods in the challenging scenarios where the treatment effect signal is weak (i.e., the signal magnitude is low or the affected subpopulation is small) because the subset scanning approach allows it to combine subtle signals across various dimensions of data in order to identify effects of interest. Moreover, TESS’s detection performance is consistent even when the treatment outcome distribution in the affected subpopulation has the same mean as the control outcome distribution, while the competing methods demonstrate essentially no ability to identify the affected subpopulation in the absence of a mean shift. After demonstrating TESS’s performance through simulation, we explore the well-known Tennessee STAR experiment, searching for previously unidentified subpopulations with significant treatment effects. As a result of this analysis, TESS uncovered an intuitive subpopulation that appears to have experienced significantly improved test scores as a result of having a teacher’s aide in the classroom, a treatment that has consistently been considered ineffective (as measured by the average treatment effect) by the literature on the Tennessee STAR experiment. This provides a sense of how TESS can be utilized as a tool for generating hypotheses to be further explored and tested. We do, however, caution researchers to view algorithms like TESS not as a replacement for, but rather as an assistive tool in, developing scientific and behavioral theory. Results discovered by these methods should be investigated further and evaluated to develop a deeper theoretical understanding of the phenomena they uncover. When used to this end, these tools fill a critical void: in many contexts it is rare to know *a priori* which hypotheses are relevant and supported by data, and the use of traditional methods (e.g., regression) puts the onus on the researcher to know which hypothesis to test.
This process necessitates that theory comes first, and that subsequent investigation is a form of confirmatory analysis. However, such a process can become an impediment to data-driven discovery: there is an increasing need for scalable methods that use (big) data to generate new hypotheses, rather than merely confirming pre-existing beliefs. In the late 1970s, John W. Tukey began to outline his vision for the future of statistics, which included a symbiotic relationship between exploratory and confirmatory data analysis. He argued that these two forms of data analysis “can–and should–proceed side by side” [@tukey-eda-1977] because he believed that ideas “come from previous exploration more often than from lightning strokes” [@tukey-eda_cda-1980]. To this end, Tukey advocated using data to suggest hypotheses to test, or what we now call data-driven hypothesis generation. We see our work as a natural evolution of Tukey’s vision of data analysis: we develop a rigorous, theoretically grounded approach to exploratory analysis in randomized experiments, with the hope of catalyzing “lightning strokes” of discovery and the advancement of science.

Score Functions {#sec:scoring_functions}
===============

To begin, we revisit the general form of the score function, or equivalently the treatment effect test statistic, that we refer to as the nonparametric scan statistic. We also establish equivalences among its forms, as different forms lend themselves to the various proof strategies we implement later. $$\label{eqn:F(S)} \begin{alignedat}{2} \max_{S} F(S) &= \max_{S,\alpha} F_{\alpha}(S) & &=\max_{S,\alpha,\beta}F_{\alpha,\beta}(S)\\ &= \max_{S,\alpha} \phi \left( \alpha, N_{\alpha}(S), N(S) \right) & &=\max_{S,\alpha,\beta}\sum_{x \in U_X(S)}{\omega\left( \alpha, \beta, N_{\alpha}(x), N(x) \right)}.
\end{alignedat}$$ First, we demonstrate that our empirical distribution functions from the control and treatment groups are unbiased estimators under the assumption of unconfoundedness. \[lem:unconfound\] If $Y_i(0),Y_i(1) {\protect\mathpalette{\protect\independenT}{\perp}}W_i \:|\: X_i$, then $\hat{F}_{Y^C|X=x}$ and $\hat{F}_{Y^T|X=x}$ are unbiased estimators of $F_{Y(0)|X=x}$ and $F_{Y(1)|X=x}$ respectively. $$\begin{aligned} \mathbb{E}\left[\hat{F}_{Y^C|X=x}(y)\right] &= \mathbb{E}_{Y|X=x} \left[ \frac{\sum_{Q_i:X_i = x} \mathbbm{1}\{W_i = 0\} \mathbbm{1}\{Y^{\text{obs}}_i \le y\}}{\sum_{Q_i:X_i=x} \mathbbm{1}\{W_i = 0\}} \right]\\ &= \mathbb{E}_{W|X=x} \left[\mathbb{E}_{Y|W,X=x} \left[ \frac{\sum_{Q_i:X_i = x} \mathbbm{1}\{W_i = 0\} \mathbbm{1}\{Y^{\text{obs}}_i \le y\}}{\sum_{Q_i:X_i=x} \mathbbm{1}\{W_i = 0\}} \right] \right]\\ &= \mathbb{E}_{W|X=x} \left[ \frac{\sum_{Q_i:X_i = x} \mathbbm{1}\{W_i = 0\} \mathbb{E}_{Y|W_i=0,X_i=x} \left[ \mathbbm{1}\{Y_i(0) \le y\}\right]}{\sum_{Q_i:X_i=x} \mathbbm{1}\{W_i = 0\}} \right]\\ &= \mathbb{E}_{W|X=x} \left[ \frac{\sum_{Q_i:X_i = x} \mathbbm{1}\{W_i = 0\} \mathbb{E}_{Y|X_i=x} \left[ \mathbbm{1}\{Y_i(0) \le y\}\right]}{\sum_{Q_i:X_i=x} \mathbbm{1}\{W_i = 0\}} \right]\\ &= \mathbb{E}_{W|X=x} \left[ \mathbb{E}_{Y|X_i=x} \left[ \mathbbm{1}\{Y_i(0) \le y\}\right] \frac{\sum_{Q_i:X_i = x} \mathbbm{1}\{W_i = 0\}}{\sum_{Q_i:X_i=x} \mathbbm{1}\{W_i = 0\}} \right]\\ &= \mathbb{E}_{W|X=x} \left[ \mathbb{E}_{Y|X_i=x} \left[ \mathbbm{1}\{Y_i(0) \le y\}\right]\right]\\ &= \mathbb{E}_{Y|X_i=x} \left[ \mathbbm{1}\{Y_i(0) \le y\}\right]\\ &= F_{Y(0) | X=x}(y). \end{aligned}$$ A similar argument shows that $\mathbb{E}\left[\hat{F}_{Y^T|X=x}(y)\right] = F_{Y(1)|X=x}(y)$, assuming that unconfoundedness holds and thus the substitution $\mathbb{E}_{Y|W_i=1,X_i=x} \left[ \mathbbm{1}\{Y_i(1) \le y\}\right] = \mathbb{E}_{Y|X_i=x} \left[ \mathbbm{1}\{Y_i(1) \le y\}\right]$ can be made. 
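For concreteness, the empirical conditional distribution function $\hat{F}_{Y^C|X=x}$ appearing in the lemma can be sketched as follows (a minimal illustration; the function name and array encoding are ours):

```python
import numpy as np

def ecdf_conditional(y_obs, w, x, x_query, w_query, y_grid):
    """Empirical CDF of observed outcomes within one treatment arm
    (w_query = 0 for control, 1 for treatment) at covariate profile x_query."""
    y_obs, w, x = (np.asarray(a) for a in (y_obs, w, x))
    arm = np.sort(y_obs[(x == x_query) & (w == w_query)])
    # proportion of arm observations with Y <= y, evaluated at each grid point
    return np.searchsorted(arm, y_grid, side="right") / arm.size
```

With control outcomes $\{1, 2\}$ at $x = 0$, the estimate is $0.5$ at $y = 1.5$ and $1.0$ at $y = 4$, matching the indicator average in the proof above.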
In addition to unbiasedness, $\hat{F}_{Y^C|X=x}$ is strongly consistent, $\|\hat{F}_{Y^C|X=x} - F_{Y(0)|X=x}\|_\infty {\xrightarrow{a.s.}}\ 0$, and by the Dvoretzky–Kiefer–Wolfowitz inequality the rate of convergence is exponential in the sample size, $P\left(\|\hat{F}_{Y^C|X=x} - F_{Y(0)|X=x}\|_\infty > \epsilon \right) \le 2e^{-2n_{0,x}\epsilon^2}, \epsilon > 0$, where $n_{0,x}$ is the number of control units with $X_i=x$. Similar arguments apply to $\hat{F}_{Y^T|X=x}$. Given these properties of the empirical conditional distribution functions, we can now turn our attention to the score function. In §\[sec:div-stat\] we introduced two score functions: Berk-Jones, $F_{\alpha}^{BJ}(S)= N(S)Div_{KL} \left(\frac{N_{\alpha}(S)}{N(S)}, \alpha \right)$, where $Div_{KL}$ is the Kullback-Leibler divergence, and the Normal Approximation, $F_{\alpha}^{NA}(S)= N(S) Div_{\frac{1}{2}\chi^2} \left(\frac{N_{\alpha}(S)}{N(S)}, \alpha \right) = \frac{\left(N_{\alpha}(S)-N(S)\alpha\right)^2}{2N(S)\alpha(1-\alpha)}$.
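Both score functions can be evaluated in a few lines from a subset’s empirical p-values; the sketch below is our own illustration (helper names are ours), maximizing over a grid of thresholds $\alpha$ and, as in the one-sided setting, only rewarding an excess of small p-values:

```python
import math

def bernoulli_kl(a, b):
    """KL divergence between Bernoulli(a) and Bernoulli(b)."""
    def term(p, q):
        return 0.0 if p == 0.0 else p * math.log(p / q)
    return term(a, b) + term(1.0 - a, 1.0 - b)

def scan_scores(pvalues, alphas):
    """Berk-Jones and normal-approximation scores for one subset,
    maximized over a grid of significance thresholds alpha."""
    n = len(pvalues)
    best_bj = best_na = 0.0
    for alpha in alphas:
        n_alpha = sum(p <= alpha for p in pvalues)  # N_alpha(S)
        frac = n_alpha / n
        if frac > alpha:  # one-sided: excess of small p-values
            best_bj = max(best_bj, n * bernoulli_kl(frac, alpha))
            best_na = max(best_na, (n_alpha - n * alpha) ** 2
                          / (2 * n * alpha * (1 - alpha)))
    return best_bj, best_na
```

Here $N_{\alpha}(S)$ counts the p-values at most $\alpha$ and $N(S)$ is the subset size; a subset with many p-values far below every $\alpha$ in the grid scores highly under both statistics, while uniform-looking p-values score zero.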
There are a collection of well-known supremum goodness-of-fit statistics used in the literature, all of which are described in [@jager-gof-2007], that are each a transformation of $F_{\alpha}^{NA}(S)$:\ the Kolmogorov-Smirnov statistic $$\begin{aligned} F^{KS}(S) &= \max_{\alpha} F_{\alpha}^{KS}(S) \\ &= \max_{\alpha} \frac{\left(N_{\alpha}(S)-N(S)\alpha\right)}{\sqrt{N(S)}} \\ &= \max_{\alpha} \sqrt{2\alpha(1-\alpha)F_{\alpha}^{NA}(S)},\end{aligned}$$ the Cramer-von Mises statistic $$\begin{aligned} F^{CV}(S) &= \max_{\alpha} F_{\alpha}^{CV}(S) \\ &= \max_{\alpha} \frac{\left(N_{\alpha}(S)-N(S)\alpha\right)^2}{N(S)} \\ &= \max_{\alpha} 2\alpha(1-\alpha)F_{\alpha}^{NA}(S),\end{aligned}$$ the Higher-Criticism statistic $$\begin{aligned} F^{HC}(S) &= \max_{\alpha} F_{\alpha}^{HC}(S) \\ &= \max_{\alpha} \frac{\left(N_{\alpha}(S)-N(S)\alpha\right)}{\sqrt{N(S)\alpha(1-\alpha)}} \\ &= \max_{\alpha} \sqrt{2F_{\alpha}^{NA}(S)},\end{aligned}$$ and the Anderson-Darling statistic $$\begin{aligned} F^{AD}(S) &= \max_{\alpha} F_{\alpha}^{AD}(S) \\ &= \max_{\alpha} \frac{\left(N_{\alpha}(S)-N(S)\alpha\right)^2}{N(S)\alpha(1-\alpha)} \\ &= \max_{\alpha} 2F_{\alpha}^{NA}(S).\end{aligned}$$ As a result of this connection between $F^{NA}$ and these other statistics, we have the following: \[lem:na\_transform\_max\] If $S$ maximizes $F_{\alpha}^{NA}(S)$, then it maximizes $F_{\alpha}^{\text{KS}}(S), F_{\alpha}^{\text{CV}}(S), F_{\alpha}^{\text{HC}}(S)$ and $F_{\alpha}^{\text{AD}}(S)$. First, we note that $T(F_{\alpha}^{NA})$, where $T(x)= (bx)^a$, for $b \in \{1, 2, 2\alpha(1-\alpha)\}$ and $a \in \{1,\frac{1}{2}\}$ is a monotonically increasing transformation. Therefore, $\arg\max_S F_{\alpha}^{NA}(S) = \arg\max_S T \left( F_{\alpha}^{NA}(S) \right)$, because $\arg\max$ is invariant to monotone transformations. 
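The monotone relationships above are easy to verify numerically; the following sketch (our own helper) evaluates each statistic for a single $\alpha$ as the stated transform of $F_{\alpha}^{NA}(S)$ (KS recovered up to sign, since the square root returns a magnitude):

```python
import math

def gof_from_na(n_alpha, n, alpha):
    """Supremum goodness-of-fit statistics, each written as the monotone
    transform of F^NA given above (KS up to sign)."""
    f_na = (n_alpha - n * alpha) ** 2 / (2 * n * alpha * (1 - alpha))
    return {
        "NA": f_na,
        "KS": math.sqrt(2 * alpha * (1 - alpha) * f_na),
        "CV": 2 * alpha * (1 - alpha) * f_na,
        "HC": math.sqrt(2 * f_na),
        "AD": 2 * f_na,
    }
```

With $N(S)=100$, $N_{\alpha}(S)=20$, and $\alpha=0.1$, the direct definitions give KS $=1$, CV $=1$, HC $=10/3$, and AD $=100/9$, and the transforms reproduce these values.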
Supplementary Materials: Proofs of Lemmas and Theorems {#sec:proofs} ====================================================== In this section, we provide detailed proofs of the Lemmas and Theorems stated in the main text. There are additional Lemmas presented here that are not stated in the main text, but are still beneficial in support of our Theorems. Before presenting the proofs, we (re-)introduce notation that will be used throughout the proofs. Notation -------- $S^T$: the truly affected (rectangular) subset.\ $S^{\ast}$: the highest scoring (rectangular) subset, $\arg \max_{S \in R} F(S)$, where $R$ is the space of all rectangular subsets.\ $\alpha^{\ast}$: the $\alpha$ at which $S^{\ast}$ is highest scoring, $\arg \max_\alpha F_{\alpha}(S^{\ast})$.\ $S^{\ast}_{u}$: the highest scoring unconstrained subset, $\arg \max_S F(S)$.\ $\alpha^{\ast}_{u}$: the $\alpha$ at which $S^{\ast}_{u}$ is highest scoring, $\arg \max_\alpha F_{\alpha}(S^{\ast}_{u})$.\ $U_{X}$: a function which returns the unique covariate profiles (non-empty tensor cells) in a set.\ $M$: $|U_{X}(D)|$, the number of unique covariate profiles in our data, or equivalently the number of non-empty cells in our data tensor.\ $k$: $\frac{|U_{X}(S^T)|}{|U_{X}(D)|}$, the proportion of non-empty cells that are affected under $H_1\left(S^T\right)$.\ $\beta(\alpha)$: $P\left(p(y) \le \alpha \rvert H_1\left(S^T\right)\right)$, for all the p-values of covariate profiles in $S^T$.\ $h(M,\epsilon)$: the critical value for the test statistic, $\max_{S\in R}F(S)$, for a given $M$.\ $\phi$: Probability density function of standard normal.\ $\Phi$: Cumulative distribution function of standard normal.\ Statistical Properties {#sec:stats_theory} ---------------------- We now demonstrate desirable statistical properties of $F^{BJ}(S)$ and $F^{NA}(S)$; these properties will also extend to the other statistics described in Appendix \[sec:scoring\_functions\] because of their close relationship to $F^{NA}(S)$. 
More specifically, we demonstrate that using $F(S)$ we can appropriately (fail to) reject $H_0$ with high probability. The results derived in this section assume $N(x) \ge n$ for all $x \in U_{X}(D)$, i.e., each unique covariate profile in the data has at least $n$ data points, and we consider the case where $M,n \longrightarrow \infty$. We would like to show that for the same critical value $h(M,\epsilon)$ we have the following: $$\begin{aligned} P_{H_0}\left(\max_{S \in R} F(S) > h(M,\epsilon)\right) &\longrightarrow 0,\\ P_{H_1}\left(\max_{S \in R} F(S) > h(M,\epsilon)\right) &\longrightarrow 1.\end{aligned}$$ Toward this pursuit, the first result we show is that in the limit $F^{BJ}$ is well approximated by $F^{NA}$, which will then allow us to focus the remainder of our results on $F^{NA}$ specifically. Recall that $K(x,y) = Div_{KL}(x,y) = x \log \frac{x}{y} + (1-x) \log \frac{1-x}{1-y}$. By expanding $K(x,y)$ through a Taylor series, we have $$\begin{aligned} K(x,y) &= K(y,y) + \frac{\partial K(x,y)}{\partial x}\Biggr\rvert_{x=y} \left(x-y\right) + \frac{\partial^2 K(x,y)}{\partial^2 x}\Biggr\rvert_{x=y'} \frac{\left(x-y\right)^2}{2}\\ &= 0 + 0 + \frac{\left(x-y\right)^2}{2y'(1-y')}\\ \end{aligned}$$ for some $y'$ such that $|y'-x|\le |y-x|$. 
Therefore, $$\begin{aligned} F^{BJ}(S) &= \max_{\alpha} N\left(S\right)K \left(\frac{N_{\alpha}\left(S\right)}{N\left(S\right)},\alpha\right)\\ &=\max_{\alpha} N\left(S\right)\frac{\left(\frac{N_{\alpha}\left(S\right)}{N\left(S\right)}-\alpha\right)^2}{2\alpha'(1-\alpha')} \quad\left(\text{where }\biggl\lvert\alpha'-\frac{N_{\alpha}\left(S\right)}{N\left(S\right)}\biggr\rvert\le \biggl\lvert\alpha-\frac{N_{\alpha}\left(S\right)}{N\left(S\right)}\biggr\rvert\right)\\ &\le \max_{\alpha} N\left(S\right)\left[\frac{\left(\frac{N_{\alpha}\left(S\right)}{N\left(S\right)}-\alpha\right)^2}{2\alpha(1-\alpha)} \bigvee \frac{\left(\frac{N_{\alpha}\left(S\right)} {N\left(S\right)}-\alpha\right)^2} {2\frac{N_{\alpha}\left(S\right)} {N\left(S\right)} \left(1-\frac{N_{\alpha}\left(S\right)}{N\left(S\right)} \right)} \right],\\ \text{and}\quad F^{BJ}(S) &\ge \max_{\alpha} N\left(S\right) \left[\frac{\left( \frac{N_{\alpha}\left(S\right)} {N\left(S\right)}-\alpha\right)^2}{2\alpha(1-\alpha)} \bigwedge \frac{\left(\frac{N_{\alpha}\left(S\right)}{N\left(S\right)} -\alpha\right)^2}{2\frac{N_{\alpha}\left(S\right)} {N\left(S\right)} \left(1-\frac{N_{\alpha}\left(S\right)}{N\left(S\right)}\right)} \right].
\end{aligned}$$ Furthermore, under $H_0$, $\frac{N_{\alpha}\left(S\right)}{N\left(S\right)} {\xrightarrow{a.s.}}\alpha \implies \alpha' {\xrightarrow{a.s.}}\alpha$, which by the continuous mapping theorem results in $$F^{BJ}(S) {\xrightarrow{a.s.}}\max_{\alpha} N\left(S\right)\frac{\left(\frac{N_{\alpha}\left(S\right)}{N\left(S\right)}-\alpha\right)^2}{2\alpha(1-\alpha)} =F^{NA}(S).$$ However, under $H_1\left(S^T\right)$, $\frac{N_{\alpha}\left(S\right)}{N\left(S\right)} {\xrightarrow{a.s.}}\beta(\alpha)$; therefore, asymptotically for $F^{BJ}(S)$ we have $$\begin{aligned} \max_{\alpha} N\left(S\right)\frac{\left(\frac{N_{\alpha}\left(S\right)}{N\left(S\right)}-\alpha\right)^2}{2\alpha(1-\alpha)} \left(1 \bigwedge \frac{\alpha(1-\alpha)}{\beta(\alpha)\left(1-\beta(\alpha)\right)}\right) \le & F^{BJ}(S) \\ \le & \max_{\alpha} N\left(S\right)\frac{\left(\frac{N_{\alpha}\left(S\right)}{N\left(S\right)}-\alpha\right)^2}{2\alpha(1-\alpha)} \left(1 \bigvee \frac{\alpha(1-\alpha)}{\beta(\alpha)\left(1-\beta(\alpha)\right)}\right). \end{aligned}$$ We can see that $F^{BJ}(S)$ is bounded above and below by either $F^{NA}(S)$ or a constant times $F^{NA}(S)$. Now, we will show that when the null hypothesis is true (i.e., the treatment has no effect), in the limit of large $M$ and $n$, the score of the most anomalous subset is linear in $M$ and constant in $n$. From Theorem \[thm:LTSS\] we know that if $\{x_{(1)}, \ldots, x_{(M)}\}$ are data elements (specifically, in this context, the $M$ unique covariate profiles in $U_{X}(D)$) sorted according to their proper priority function, which for covariate profiles is $\frac{ N_{\alpha}\left(x\right)}{N\left(x\right)}$, where $x_{(t)}$ has the $t^{th}$ highest priority, then $$\begin{aligned} S^{\ast}_{u} &= \{x_{(1)}, \ldots, x_{(t)}\}_{t\in\left[1,M\right]} \\ &= \bigg{\{}x \bigg{\rvert} \frac{ N_{\alpha}\left(x\right)}{N\left(x\right)} > t(\alpha)\bigg{\}}.
\end{aligned}$$ With Bin representing the Binomial distribution, for each of the unique covariate profiles $x\in U_{X}(D)$, $N_{\alpha}\left(x\right) \sim Bin\big{(}N\left(x\right), \alpha\big{)}$. Given that $N\left(x\right) \ge n$, we have that asymptotically $P\left(Bin(N\left(x\right),\alpha) > t(\alpha)N\left(x\right)\right)$ is upper bounded by $P\left(Bin(n,\alpha) > t(\alpha)n\right)$ for fixed $\alpha$ and $t(\alpha) > \alpha$, so we can focus on the simple case $N(x) = n$ for all $x$. Therefore, asymptotically, $|U_{X}\left(S^{\ast}_{u}\right)| \sim Bin\bigg{(}M,P\left(Bin\left(n,\alpha\right) > t(\alpha)n\right)\bigg{)}$, $\mathbb{E}_{H_0}\left[N\left(S^{\ast}_{u}\right)\right] = MnP\left(Bin(n,\alpha) > t(\alpha)n\right)$, and $\mathbb{E}_{H_0}\left[N_{\alpha}\left(S^{\ast}_{u}\right)\right] = M P\left(Bin(n,\alpha) > t(\alpha)n\right) \mathbb{E}\left[X \sim Bin(n,\alpha) \mid X > t(\alpha)n\right]$. Furthermore, this implies, $$\begin{aligned} \frac{N_{\alpha}\left(S^{\ast}_{u}\right)}{N\left(S^{\ast}_{u}\right)} & {\xrightarrow{a.s.}}\mathbb{E}\left[\frac{N_{\alpha}\left(x\right)}{n} \bigg{\rvert} \frac{N_{\alpha}}{n} > t(\alpha)\right] \nonumber\\ &=~~\mathbb{E}\left[\frac{X \sim Bin(n,\alpha)}{n} \bigg{\rvert} \frac{X}{n} > t(\alpha)\right] \nonumber\\ &{\xrightarrow{d}}\mathbb{E}\left[\frac{X \sim N(n\alpha,n\alpha(1-\alpha))}{n} \bigg{\rvert} \frac{\sqrt{n}\left(\frac{X}{n}-\alpha\right)}{\sqrt{\alpha(1-\alpha)}} > Z^t(\alpha)\right] \quad \left(\text{with } Z^t(\alpha) = \frac{\sqrt{n}(t(\alpha)-\alpha)}{\sqrt{\alpha(1-\alpha)}}\right) \nonumber \\ &= \frac{n\alpha + \sqrt{n\alpha(1-\alpha)}\frac{\phi\left(Z^t(\alpha)\right)}{(1-\Phi\left(Z^t(\alpha)\right))}}{n}.
\label{eq:n_a/n-trunc} \end{aligned}$$ Using the definition of $F^{NA}(S^{\ast}_{u})$ we have $$\begin{aligned} F^{NA}(S^{\ast}_{u}) &= \max_{\alpha}F^{NA}_{\alpha}(S^{\ast}_{u}) \nonumber \\ &= \max_{\alpha} \left(\frac{N_{\alpha}\left(S^{\ast}_{u}\right)}{N\left(S^{\ast}_{u}\right)}- \alpha\right)^2\frac{N\left(S^{\ast}_{u}\right)}{2\alpha(1-\alpha)} \nonumber \\ &{\xrightarrow{p}}\max_{\alpha} \left(\frac{ \sqrt{n\alpha(1-\alpha)}\frac{\phi\left(Z^t(\alpha)\right)}{(1-\Phi\left(Z^t(\alpha)\right))}}{n}\right)^2\frac{Mn(1-\Phi(Z^t(\alpha)))}{2\alpha(1-\alpha)} \label{eq:F-trunc}\\ &= \max_{\alpha} \frac{M\phi\left(Z^t(\alpha)\right)^2}{2(1-\Phi\left(Z^t(\alpha)\right))} \approx 0.202 \: M \nonumber, \end{aligned}$$ where \eqref{eq:F-trunc} is a result of the continuous mapping theorem and \eqref{eq:n_a/n-trunc}. Furthermore, the convergence in probability in \eqref{eq:F-trunc} is a result of the convergence in distribution to a constant. However, these asymptotic results fail when $\alpha$ is allowed to become arbitrarily small, decreasing to zero as $n$ increases. A simple solution is to fix constants $\alpha_{min} > 0$ and $\alpha_{max} < 1$ and define $F^{NA}(S^{\ast}_u)$ as the maximum over $F_\alpha^{NA}(S^{\ast}_u)$ for $\alpha \in [\alpha_{min}, \alpha_{max}]$. Restricting the range of $\alpha$ values solves the asymptotic convergence issues for the $F^{NA}$, $F^{HC}$, $F^{AD}$, and $F^{BJ}$ score functions, while the $F^{KS}$ and $F^{CV}$ statistics converge for unrestricted $\alpha$. Now we can use these asymptotic results to bound the probability that the highest scoring rectangular subset $F^{NA}(S^{\ast})$ exceeds a threshold under the null hypothesis, again maximizing $F^{NA}$ over a range of $\alpha$ values from $\alpha_{min} > 0$ to $\alpha_{max} < 1$. First, we note that under $H_0$, $F\left(S^{\ast}\right) \le F\left(S^{\ast}_{u}\right)$, because the detected subset $S^{\ast}$ is the $\arg \max$ over all rectangular subsets, while $S^{\ast}_{u}$ is the $\arg \max$ over all subsets.
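The constant $\approx 0.202$ above can be checked numerically. The sketch below (standard library only; the grid resolution is our arbitrary choice) maximizes $\phi(z)^2/\left(2(1-\Phi(z))\right)$ over the standardized threshold $z = Z^t(\alpha)$:

```python
import math

def phi(z):
    # standard normal probability density function
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):
    # standard normal cumulative distribution function, via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def g(z):
    # per-profile contribution to the null score: phi(z)^2 / (2 (1 - Phi(z)))
    return phi(z) ** 2 / (2 * (1 - Phi(z)))

# grid search over the standardized threshold z = Z^t(alpha)
best_value, best_z = max((g(k / 1000), k / 1000) for k in range(-2000, 4000))
print(best_value, best_z)  # maximum approximately 0.2025, near z = 0.6
```

The maximum is attained at an interior $z$, so under $H_0$ the unconstrained maximum $F^{NA}(S^{\ast}_{u})$ grows linearly in $M$ with slope roughly $0.2$, as stated.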
We now consider the score function $F^{NA}$ and the critical value $h\left(M,\epsilon\right) = \max_{\alpha} \frac{M\phi\left(Z^t(\alpha)\right)^2}{2(1-\Phi\left(Z^t(\alpha)\right))}+\epsilon$, for any $\epsilon > 0$. $$\begin{aligned} P_{H_0}(\text{Reject }H_0) &= P_{H_0}\left(F^{NA}\left(S^{\ast} \right) > h\left(M,\epsilon\right) \right)\\ &= P_{H_0}\left(F^{NA}\left(S^{\ast} \right) > \max_{\alpha} \frac{M\phi\left(Z^t(\alpha)\right)^2}{2(1-\Phi\left(Z^t(\alpha)\right))} + \epsilon\right) \\ &\le P_{H_0}\left(F^{NA}\left(S^{\ast}_{u} \right) > \max_{\alpha} \frac{M\phi\left(Z^t(\alpha)\right)^2}{2(1-\Phi\left(Z^t(\alpha)\right))} + \epsilon \right) \\ &= P_{H_0}\left(F^{NA}\left(S^{\ast}_{u} \right) - \max_{\alpha} \frac{M\phi\left(Z^t(\alpha)\right)^2}{2(1-\Phi\left(Z^t(\alpha)\right))} > \epsilon\right) \\ &\le P_{H_0}\left(\bigg{\lvert} F^{NA}\left(S^{\ast}_{u} \right) - \max_{\alpha} \frac{M\phi\left(Z^t(\alpha)\right)^2}{2(1-\Phi\left(Z^t(\alpha)\right))} \bigg{\rvert} > \epsilon\right) \\ &\longrightarrow 0, \end{aligned}$$ where the final line follows from Lemma \[lem:null\_converg\]. Furthermore, from Lemma \[lem:BJ\_to\_NA\] we have $F^{BJ}\left(S^{\ast}_{u}\right) {\xrightarrow{a.s.}}F^{NA}\left(S^{\ast}_{u}\right)$ under $H_0$, which implies that this result holds for $F^{BJ}$. Finally, by Lemma \[lem:na\_transform\_max\] all other score functions under consideration are maximizations over a monotonic transformation ($T(F_{\alpha})$) of the continuous function $F_{\alpha}^{NA}(S)$; therefore, the result for $\max_{\alpha}F_{\alpha}^{NA}(S)$ has a direct analogue for $\max_{\alpha}T\left(F_{\alpha}^{NA}(S)\right)$. Next, we analyze the score of the truly affected subset when the null hypothesis is false. First, recognize that $N\left(S^T\right) {\xrightarrow{p}}kMn$ and recall that $\mathbb{E}_{H_1(S^T)}\left[N_{\alpha}\left(S^T\right)\right] = N\left(S^T\right)\beta\left(\alpha\right)$.
Therefore, we have the following: $$\begin{aligned} F^{NA}\left(S^T\right) &= \max_{\alpha}F_{\alpha}^{NA}\left(S^T\right)\\ &= \max_{\alpha} \frac{\left(N_{\alpha}\left(S^T\right)-N\left(S^T\right)\alpha\right)^2}{2N\left(S^T\right)\alpha(1-\alpha)} \\ &= \max_{\alpha} \frac{N\left(S^T\right)\left(\frac{N_{\alpha}\left(S^T\right)}{N\left(S^T\right)}-\alpha\right)^2}{2\alpha(1-\alpha)}, \\ &\text{and} \\ \frac{N_{\alpha}\left(S^T\right)}{N\left(S^T\right)} &{\xrightarrow{a.s.}}\beta\left(\alpha\right). \end{aligned}$$ Finally, by the continuous mapping theorem we have $$F\left(S^T\right) {\xrightarrow{a.s.}}\max_{\alpha} (\beta\left(\alpha\right) - \alpha)^2\frac{kMn}{2\alpha(1-\alpha)}.$$ First, we note that under $H_1(S^T)$ $$F\left(S^T\right) \le F\left(S^{\ast}\right),$$ because the detected subset $S^{\ast}$ is a maximization over all rectangular subsets while $S^T$ is one such subset. Now that we have a lower bound on $F\left(S^{\ast}\right)$ under $H_1(S^T)$, we consider the critical value $h\left(M,\epsilon \right) = \max_{\alpha} \frac{M\phi\left(Z^t(\alpha)\right)^2}{2(1-\Phi\left(Z^t(\alpha)\right))}+\epsilon$, for any $\epsilon > 0$, and the score function $F^{NA}$. 
$$\begin{aligned} P_{H_1}(\text{Reject }H_0) &= P_{H_1}\left(F^{NA}\left(S^{\ast} \right) > h\left(M,\epsilon \right) \right) \nonumber \\ &= P_{H_1}\left(F^{NA}\left(S^{\ast} \right) > \max_{\alpha} \frac{M\phi\left(Z^t(\alpha)\right)^2}{2(1-\Phi\left(Z^t(\alpha)\right))}+\epsilon \right) \nonumber \\ &\ge P_{H_1}\left(F^{NA}\left(S^T \right) > \max_{\alpha} \frac{M\phi\left(Z^t(\alpha)\right)^2}{2(1-\Phi\left(Z^t(\alpha)\right))}+\epsilon \right) \nonumber \\ &\longrightarrow P_{H_1}\left(\max_{\alpha} \left(\beta\left(\alpha\right) - \alpha\right)^2\frac{kMn}{2\alpha\left(1-\alpha\right)} > \max_{\alpha} \frac{M\phi\left(Z^t(\alpha)\right)^2}{2\left(1-\Phi\left(Z^t(\alpha)\right) \right)}+\epsilon \right) \label{rejection_converg}\\ &= P_{H_1}\left( \max_{\alpha} \left(\beta\left(\alpha\right) - \alpha\right)^2\frac{kMn}{2\alpha\left(1-\alpha\right)} > \frac{M\phi\left(Z^t(\alpha^{\ast}_{u})\right)^2} {2\left(1-\Phi\left(Z^t(\alpha^{\ast}_{u})\right)\right)}+\epsilon\right)\nonumber \\ &\ge P_{H_1}\left( \left(\beta\left(\alpha^{\ast}_{u}\right) - \alpha^{\ast}_{u}\right)^2\frac{kMn}{2\alpha^{\ast}_{u}\left(1-\alpha^{\ast}_{u}\right)} >\frac{M\phi\left(Z^t(\alpha^{\ast}_{u})\right)^2} {2\left(1-\Phi\left(Z^t(\alpha^{\ast}_{u})\right)\right)}+\epsilon\right) \nonumber \\ &= P_{H_1}\left( \left(\beta\left(\alpha^{\ast}_{u}\right) - \alpha^{\ast}_{u}\right)^2\frac{kn}{2\alpha^{\ast}_{u} \left(1-\alpha^{\ast}_{u}\right)} > O(1) \right)\nonumber \\ &= P_{H_1}\left( O(kn) > O(1) \right) \nonumber \\ & \longrightarrow 1 \nonumber, \end{aligned}$$ where \eqref{rejection_converg} follows from Lemma \[lem:alt\_converg\]. Furthermore, from Lemma \[lem:BJ\_to\_NA\] we have $F^{BJ}\left(S^{\ast}_{u}\right) {\xrightarrow{a.s.}}F^{NA}\left(S^{\ast}_{u}\right)$ under $H_0$, which implies that comparing $F^{BJ}$ to the same critical value $h\left(M,\epsilon\right)$ will also yield the above result.
Finally, by Lemma \[lem:na\_transform\_max\] all other score functions under consideration are maximizations over a monotonic transformation ($T(F_{\alpha})$) of the continuous function $F_{\alpha}^{NA}(S)$; therefore, the result for $\max_{\alpha}F_{\alpha}^{NA}(S)$ has a direct analogue for $\max_{\alpha}T\left(F_{\alpha}^{NA}(S)\right)$. Subset Correctness {#sec:subset_correct} ------------------ In this section, we are still interested in studying the properties of our framework under $H_1(S^T)$; however, we are now concerned with the correctness of the detected subset, as the objective is $S^{\ast} = S^T$. Recall that $x$ is a data element, i.e., one of the $M$ unique covariate profiles in the data; that $U_{X}(D)$ is the collection of these data elements, i.e., $U_{X}(D) = \{x_{1},\ldots,x_{M}\}$; and that both $U_{X}(S^{\ast}),U_{X}\left(S^T\right) \subseteq U_{X}(D)$. The results in this section are general, and are therefore applicable to an unconstrained (or constrained) $S^T$; thus $S^{\ast}$ and $\alpha^{\ast}$ will refer to the joint maximization of subsets and $\alpha$ values over the unconstrained (or constrained) space in which $S^T$ is defined. We begin building our theory with a demonstration that the score functions can be re-written as additive functions if we also condition on the value of the alternative hypothesis parameter $\beta$.
First, we note from the derivations of $F_{\alpha}^{BJ}(S)$ and $F^{NA}_{\alpha}(S)$ in §\[sec:div-stat\] that if we do not set $\beta = \beta_{\text{mle}}(S)$ but instead treat $\beta \in (0,1)$ as a given quantity, then $$\begin{aligned} F^{BJ}(S) &= \max_{\alpha}F_{\alpha}^{BJ}(S) \\ &= \max_{\alpha,\beta}F_{\alpha,\beta}^{BJ}(S)\\ &= \max_{\alpha,\beta}N_{\alpha}(S) \log \left(\frac {\beta} {\alpha} \right) + \left( N(S)-N_{\alpha}(S) \right) \log \left( \frac{1-\beta}{1-\alpha} \right) \\ &=\max_{\alpha,\beta} N_{\alpha}(S) \log \left(\frac {\beta} {\alpha} \right) - N_{\alpha}(S) \log \left( \frac{1-\beta}{1-\alpha} \right) + N(S) \log \left( \frac{1-\beta}{1-\alpha} \right) \\ &=\max_{\alpha,\beta} N_{\alpha}(S) \log \left(\frac {\beta (1-\alpha)} {\alpha(1-\beta)} \right) + N(S) \log \left( \frac{1-\beta}{1-\alpha} \right) \\ &= \max_{\alpha,\beta} \log \left(\frac {\beta (1-\alpha)} {\alpha(1-\beta)} \right) \left( \sum_{x \in U_{X}(S)}{N_{\alpha}(x)} \right)+ \log \left( \frac{1-\beta}{1-\alpha} \right) \left( \sum_{x \in U_{X}(S)}{N(x)} \right) \\ &=\max_{\alpha,\beta} \sum_{x \in U_{X}(S)}{ \log \left(\frac {\beta (1-\alpha)} {\alpha(1-\beta)} \right) N_{\alpha}(x) + \log \left( \frac{1-\beta}{1-\alpha} \right) N(x)}\\ &= \max_{\alpha,\beta} \sum_{x \in U_{X}(S)}{ C^{BJ_1}_{\alpha, \beta} ~ N_{\alpha}(x) + C^{BJ_2}_{\alpha, \beta} ~ N(x)}\\ &=\max_{\alpha,\beta}\sum_{x \in U_{X}(S)} \omega^{BJ}\big( \alpha, \beta, N_{\alpha}(x), N(x) \big) \end{aligned}$$ $$\begin{aligned} F^{NA}(S) &= \max_{\alpha}F_{\alpha}^{NA}(S) \\ &= \max_{\alpha,\beta}F_{\alpha,\beta}^{NA}(S)\\ &= \max_{\alpha,\beta} \frac{N_{\alpha}(S)\left(\beta-\alpha\right)}{\alpha(1-\alpha)} + \frac{N(S)\left(\alpha^2-\beta^2\right)}{2\alpha(1-\alpha)} \\ &= \max_{\alpha,\beta}\frac{\left(\beta-\alpha\right)}{\alpha(1-\alpha)} \left( \sum_{x \in U_{X}(S)}{N_{\alpha}(x)} \right) + \frac{\left(\alpha^2-\beta^2\right)}{2\alpha(1-\alpha)} \left( \sum_{x \in U_{X}(S)}{N(x)} \right) \\ &=
\max_{\alpha,\beta}\sum_{x \in U_{X}(S)}{ \frac{\left(\beta-\alpha\right)}{\alpha(1-\alpha)} N_{\alpha}(x) + \frac{\left(\alpha^2-\beta^2\right)}{2\alpha(1-\alpha)} N(x)}\\ &= \max_{\alpha,\beta}\sum_{x \in U_{X}(S)}{ C^{NA_1}_{\alpha, \beta} N_{\alpha}(x) + C^{NA_2}_{\alpha, \beta} N(x)}\\ &=\max_{\alpha,\beta}\sum_{x \in U_{X}(S)} \omega^{NA}\big( \alpha, \beta, N_{\alpha}(x), N(x) \big) \end{aligned}$$ where the $C_{\alpha, \beta}$’s are constants for given values of $\alpha$ and $\beta$. We now have that the score of a subset $S$ can be decomposed into the sum of contributions (measured by a function $\omega$) from each individual element contained within the subset. Next, we demonstrate some important properties of the $\omega$ functions: each $\omega$ is a concave function of $\beta$, with two roots and a unique maximum. \[lem:na-concave\] $\omega^{NA}\big( \alpha, \beta, N_{\alpha}(x), N(x) \big)$ is concave with respect to $\beta$, maximized at $\beta_{\text{mle}}(x) = \frac{N_{\alpha}(x)}{N(x)}$, and has two roots. Firstly, $$\begin{aligned} \frac{\partial~\omega^{NA}\big( \alpha, \beta, N_{\alpha}(x), N(x) \big)}{\partial \beta} &= \frac{N_{\alpha}(x)-N(x)\beta}{\alpha(1-\alpha)} \nonumber \\ &= -\frac{N(x)}{\alpha(1-\alpha)}\beta+\frac{N_{\alpha}(x)}{\alpha(1-\alpha)} \label{eq:na-line} \\ {\text{ (set) }} ~ 0 &= -\frac{N(x)}{\alpha(1-\alpha)}\beta+\frac{N_{\alpha}(x)}{\alpha(1-\alpha)} \nonumber \\ 0 &= -N(x)\beta+N_{\alpha}(x) \nonumber \\ \beta &= \frac{N_{\alpha}(x)}{N(x)}, \label{eq:na-root} \end{aligned}$$ where \eqref{eq:na-line} shows that the first derivative is the equation of a line with negative slope, and \eqref{eq:na-root} shows that this line has one root, at $\frac{N_{\alpha}(x)}{N(x)}$. This implies $\omega^{NA}$ is concave with respect to $\beta$ (with at most two roots, which we will refer to as $\beta_{\min}(x)$ and $\beta_{\max}(x)$) and is maximized at $\frac{N_{\alpha}(x)}{N(x)}$. \[lem:bj-concave\] $\omega^{BJ}\big( \alpha, \beta, N_{\alpha}(x), N(x) \big)$ is concave with respect to $\beta$, maximized at $\beta_{\text{mle}}(x) = \frac{N_{\alpha}(x)}{N(x)}$, and has two roots.
$$\begin{aligned} \frac{\partial~\omega^{BJ}\big( \alpha, \beta, N_{\alpha}(x), N(x) \big)}{\partial \beta} &= \frac{N_{\alpha}(x)-N(x)\beta}{\beta(1-\beta)} \\ {\text{ (set)}} ~ 0 &= \frac{N_{\alpha}(x)-N(x)\beta}{\beta(1-\beta)} \\ 0 &= N_{\alpha}(x)-N(x)\beta \\ \beta &= \frac{N_{\alpha}(x)}{N(x)} \end{aligned}$$ shows that $\omega^{BJ}$ is maximized (if it is concave) at $\frac{N_{\alpha}(x)}{N(x)}$ and has at most two roots (which we will refer to as $\beta_{\min}(x)$ and $\beta_{\max}(x)$). Additionally, $$\begin{aligned} \frac{\partial^2 ~\omega^{BJ}\big( \alpha, \beta, N_{\alpha}(x), N(x) \big)}{\partial \beta^2}\Biggr\rvert_{\beta = \frac{N_{\alpha}(x)}{N(x)}} &= -\frac{\beta^2N(x)+(1-2\beta)N_{\alpha}(x)}{(\beta-1)^2\beta^2}\Biggr\rvert_{\beta = \frac{N_{\alpha}(x)}{N(x)}} \\ &< 0 \end{aligned}$$ shows that $\omega^{BJ}$ is concave with respect to $\beta$. Now that we have demonstrated that $\omega$ is concave, we demonstrate a key insight about the difference between $\alpha$ and $\beta_{\max}(x)$ (i.e., $r_{\max}$) relative to the difference between $\alpha$ and $\beta_{\text{mle}}(x)$ (i.e., $r_{\text{mle}}$). \[lem:na-max-to-mle\] With respect to $\omega^{NA}\big( \alpha, \beta, N_{\alpha}(x), N(x) \big)$, $r_{\max}(x) = 2r_{\text{mle}}(x)$. First, by Lemma \[lem:na-concave\], we know that, with respect to $\beta$, $\omega^{NA}$ is concave and has at most 2 roots $\left(\beta_{\min}(x), \beta_{\max}(x)\right)$.
Therefore, we have the following: $$\begin{aligned} \omega^{NA}\big( \alpha, \beta, N_{\alpha}(x), N(x) \big)&= \frac{N_{\alpha}(x)\left(\beta-\alpha\right)}{\alpha(1-\alpha)} + \frac{N(x)\left(\alpha^2-\beta^2\right)}{2\alpha(1-\alpha)} \\ {\text{ (set)}}~0 &= \frac{N_{\alpha}(x)\left(\beta-\alpha\right)}{\alpha(1-\alpha)} + \frac{N(x)\left(\alpha^2-\beta^2\right)}{2\alpha(1-\alpha)} \\ &= 2N_{\alpha}(x)\left(\beta-\alpha\right) + N(x)\left(\alpha^2 - \beta^2\right) \\ &= \left( -N(x) \right)\beta^2 + \left( 2N_{\alpha}(x) \right)\beta + \left( -2\alpha N_{\alpha}(x) + N(x)\alpha^2 \right) \\ \\ \{\beta_{\min}(x), \beta_{\max}(x)\} &= \frac{ -2N_{\alpha}(x) \pm \sqrt{ \left( 2N_{\alpha}(x) \right)^2 -4 \left( -N(x) \right) \left( -2N_{\alpha}(x)\alpha + N(x)\alpha^2 \right)} }{-2N(x)} \\ &= \frac{ N_{\alpha}(x) \pm \sqrt{ N_{\alpha}(x)^2 -2N_{\alpha}(x)N(x)\alpha + (N(x)\alpha)^2 } }{N(x)} \\ &= \frac{ N_{\alpha}(x) \pm \sqrt{ \left( N_{\alpha}(x) - N(x)\alpha \right)^2 } }{N(x)} \\ &= \frac{ N_{\alpha}(x) \pm \left( N_{\alpha}(x) - N(x)\alpha \right) }{ N(x) } \\ &= \{ \alpha, 2\beta_{\text{mle}}(x) - \alpha \}.\end{aligned}$$ This implies that $\beta_{\max}(x) - \alpha = 2\left(\beta_{\text{mle}}(x) - \alpha\right)$ and $r_{\max}(x) = 2r_{\text{mle}}(x)$, with respect to $\omega^{NA}.$ \[lem:bj-max\_to\_mle\] With respect to $\omega^{BJ}\big( \alpha, \beta, N_{\alpha}(x), N(x) \big)$, $$\frac{r_{\max}(x)}{r_{\text{mle}}(x)} \begin{cases} < 2 & \text{if } \beta_{\text{mle}}(x) > \frac{1}{2} \\ = 2 & \text{if } \beta_{\text{mle}}(x) = \frac{1}{2} \\ > 2 & \text{otherwise}.\\ \end{cases}$$ First, by Lemma \[lem:bj-concave\], we know that, with respect to $\beta$, $\omega^{BJ}$ is concave and has at most 2 roots $\left(\beta_{\min}(x), \beta_{\max}(x)\right)$.
One of the solutions of $\omega^{BJ}$ must be $\alpha$, so let us assume that $\beta_{\min}(x) = \alpha$; this will be true when $\beta_{\text{mle}}(x) > \alpha$, which intuitively corresponds to our case of interest: when the covariate profile contains more significant (extreme) $p$-values than expected. Furthermore, we know that $\omega^{BJ}$ achieves a maximum at $\beta_{\text{mle}} = \frac{N_\alpha(x)}{N(x)}$. With these properties we can show the first case ($1 \le \frac{r_{\max}(x)}{r_{\text{mle}}(x)} < 2$) by first recognizing that trivially $\beta_{\text{mle}} \leq \beta_{\max}$, and $\beta_{\text{mle}} -\alpha \leq \beta_{\max} -\alpha $. To show the upper bound of the first case, it suffices to show that $\omega^{BJ}\left(\alpha, \beta_{\text{mle}} - \epsilon, N_\alpha(x), N(x)\right) \geq \omega^{BJ}\left(\alpha, \beta_{\text{mle}} + \epsilon, N_\alpha(x), N(x)\right)$ for all $\epsilon > 0$. The essential implication is that the concave function $\omega^{BJ}$ increases (until it reaches its maximum) at a slower rate than the rate at which it subsequently decreases. This further implies that the distance between $\beta_{\text{mle}}$ and $\alpha$ is greater than the distance between $\beta_{\text{mle}}$ and $\beta_{\max}$, and therefore the desired result. Recall from Lemma \[lem:bj-concave\] that $$\begin{aligned} \frac{\partial~\omega^{BJ}\big( \alpha, \beta, N_{\alpha}(x), N(x) \big)}{\partial \beta} &= \frac{N_{\alpha}(x)-N(x)\beta}{\beta(1-\beta)} \\ &= N(x) \left[ \frac{\beta_{\text{mle}}(x)-\beta}{\beta(1-\beta)} \right],\end{aligned}$$ which means the slope of $\omega^{BJ}$ is proportional to $\frac{\beta_{\text{mle}}(x)-\beta}{\beta(1-\beta)}$. We now compare the slope on either side of the maximum at $\beta_{\text{mle}}(x)$, and recognize that at $\beta = \beta_{\text{mle}}(x)+\epsilon$ the slope is negative with absolute value proportional to $\frac{\epsilon} { \left( \beta_{\text{mle}}(x) + \epsilon \right) \left( 1-\beta_{\text{mle}}(x) - \epsilon \right) }$.
At $\beta = \beta_{\text{mle}}(x)-\epsilon$ the slope is positive with absolute value proportional to $\frac{\epsilon} { \left( \beta_{\text{mle}}(x) - \epsilon \right) \left( 1-\beta_{\text{mle}}(x) + \epsilon \right) }$. Therefore, $$\begin{aligned} {3} \beta_{\text{mle}}(x) > \frac{1}{2} &\Longleftrightarrow & \left( \beta_{\text{mle}}(x) + \epsilon \right) \left( 1-\beta_{\text{mle}}(x) - \epsilon \right) &< \left( \beta_{\text{mle}}(x) - \epsilon \right) \left( 1-\beta_{\text{mle}}(x) + \epsilon \right) &\\ &\Longleftrightarrow & \frac{\epsilon} { \left( \beta_{\text{mle}}(x) + \epsilon \right) \left( 1-\beta_{\text{mle}}(x) - \epsilon \right) } &> \frac{\epsilon} { \left( \beta_{\text{mle}}(x) - \epsilon \right) \left( 1-\beta_{\text{mle}}(x) + \epsilon \right) } &\\ &\Longleftrightarrow & \frac{r_{\max}(x)}{r_{\text{mle}}(x)} &< 2. &\end{aligned}$$ The demonstrations of the remaining two cases follow precisely the same approach, mutatis mutandis. Now that we have built up the necessary properties of the $\omega$ functions, we can discuss sufficient conditions for the detected subset to be exactly correct: $S^{\ast} = S^T$.
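Both Lemmas can be checked numerically. The sketch below (our own illustrative code; the counts, $\alpha$, and bisection depth are arbitrary choices) locates the larger root $\beta_{\max}$ of each $\omega$ by bisection on $[\beta_{\text{mle}}, 1)$ and compares the ratio $r_{\max}/r_{\text{mle}} = (\beta_{\max}-\alpha)/(\beta_{\text{mle}}-\alpha)$:

```python
import math

def omega_na(alpha, beta, n_alpha, n):
    # omega^NA(alpha, beta, N_alpha(x), N(x))
    return (n_alpha * (beta - alpha) / (alpha * (1 - alpha))
            + n * (alpha ** 2 - beta ** 2) / (2 * alpha * (1 - alpha)))

def omega_bj(alpha, beta, n_alpha, n):
    # omega^BJ(alpha, beta, N_alpha(x), N(x))
    return n_alpha * math.log(beta / alpha) + (n - n_alpha) * math.log((1 - beta) / (1 - alpha))

def beta_max(omega, alpha, n_alpha, n):
    # larger root of the concave omega: bisect on [beta_mle, 1), where omega
    # starts positive (at its maximum) and decreases through zero
    lo, hi = n_alpha / n, 1 - 1e-12
    for _ in range(200):
        mid = (lo + hi) / 2
        if omega(alpha, mid, n_alpha, n) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def ratio(omega, alpha, n_alpha, n):
    # r_max / r_mle = (beta_max - alpha) / (beta_mle - alpha)
    b_mle = n_alpha / n
    return (beta_max(omega, alpha, n_alpha, n) - alpha) / (b_mle - alpha)
```

For $\omega^{NA}$ the ratio is exactly $2$ regardless of the counts; for $\omega^{BJ}$ it falls below $2$ when $\beta_{\text{mle}} > \frac{1}{2}$ and above $2$ when $\beta_{\text{mle}} < \frac{1}{2}$, matching the two Lemmas.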
To begin, we re-introduce some additional notation: $$\begin{aligned} r^{\text{aff}}_{\text{mle}-h} &= \max_{x \in U_{X}(S^T)} r_{\text{mle}}(x),\\ r^{\text{aff}}_{\text{mle}-l} &= \min_{x \in U_{X}(S^T)} r_{\text{mle}}(x),\\ r^{\text{unaff}}_{\text{mle}-h} &= \max_{x \not\in U_{X}(S^T)} r_{\text{mle}}(x),\\ \eta &= \left( \frac{\sum_{x \in U_{X}(S^T)}{N(x)} }{ \sum_{x \in U_{X}(D)}{N(x)} } \right),\\ \nu\text{-homogeneous} &\colon \frac{r^{\text{aff}}_{\text{mle}-h}}{r^{\text{aff}}_{\text{mle}-l}} < \nu, \\ \delta\text{-strong} &\colon \frac{r^{\text{aff}}_{\text{mle}-l}}{r^{\text{unaff}}_{\text{mle}-h}} > \delta,\\ R &\colon (0,1) \to (0,1).\end{aligned}$$ More specifically, $R$ is an invertible function such that $R \colon r_{\max}(x) \mapsto r_{\text{mle}}(x)$; i.e., applying $R$ to $r_{\max}(x)$ produces the corresponding $r_{\text{mle}}(x)$. From Lemma \[lem:na-max-to-mle\] we know that with respect to $\omega^{NA}$, $R^{NA}(r) = \frac{r}{2}$, while from Lemma \[lem:bj-max\_to\_mle\] we know that with respect to $\omega^{BJ}$, $R^{BJ}(r) \le \frac{r}{2}$ under certain conditions. The first result we provide is a sufficient condition for guaranteeing that the detected subset includes all the elements of the true subset ($S^{\ast} \supseteq S^T$). More specifically, we show that such a condition is sufficient homogeneity of the affected data elements: for a given value $\nu$, and any pair of affected covariate profiles $(x_i, x_j \in U_{X}(S^T))$, the anomalous signal (i.e., treatment effect) observed in $x_i$ is no more than $\nu$ times that observed in $x_j$. First, let $\{ x_{(1)}, \ldots, x_{(t)} \}$ be the data elements in $S^T$ sorted by the priority function (Theorem \[thm:LTSS\]) $G(x) = \frac{N_{\alpha}(x)}{N(x)} = \beta_{\text{mle}}(x)$. By the assumption of an observed signal that is at least $1$-strong, these data elements are the $t$ highest priority data elements.
Additionally, let $\nu = \frac{r^{\text{aff}}_{\text{mle}-h}} {R \left( r^{\text{aff}}_{\text{mle}-h} \right)}.$ Therefore, $$\begin{aligned} {3} \nu\text{-homogeneous} &\implies & \nu &> \frac{r^{\text{aff}}_{\text{mle}-h}}{r^{\text{aff}}_{\text{mle}-l}} &{}\\ &\therefore & \frac{r^{\text{aff}}_{\text{mle}-h}}{R \left( r^{\text{aff}}_{\text{mle}-h} \right)} &> \frac{r^{\text{aff}}_{\text{mle}-h}}{r^{\text{aff}}_{\text{mle}-l}} &{} \\ &\implies & r^{\text{aff}}_{\text{mle}-l} &> R\left( r^{\text{aff}}_{\text{mle}-h} \right) &\\ &\implies & R^{-1}\left( r^{\text{aff}}_{\text{mle}-l} \right) &> r^{\text{aff}}_{\text{mle}-h} &\\ &\implies & \beta_{\max}(x_{(t)}) - \alpha &> \beta_{\text{mle}}(x_{(1)}) - \alpha &\\ &\implies & \beta_{\max}(x_{(t)}) &> \beta_{\text{mle}}(x_{(k)}) &\quad \left( \forall k \right) \\ &\implies & \beta_{\max}(x_{(t)}) &> \beta_{\text{mle}}(S^{\ast}) &\\ &\therefore & \omega\big( \alpha, \beta_{\text{mle}}(S^{\ast}), N_{\alpha}(x_{(t)}), N(x_{(t)}) \big) &> 0 & \\ &\therefore & |S^{\ast}| &\ge t &\\ &\therefore & S^{\ast} &\supseteq S^T. & \end{aligned}$$ Intuitively, $\beta_{\text{mle}}(x_{(t)})$ and $\beta_{\text{mle}}(x_{(1)})$ are respectively the smallest and largest $\beta_{\text{mle}}$ of all the $x\in U_{X}(S^T)$. Furthermore, $\beta_{\text{mle}}(x_{(t)}) \le \beta_{\text{mle}}(x_{(k)}) \le \beta_{\max}(x_{(t)})~\forall k\in[1,t]$, which means $\beta_{\text{mle}}(S^{\ast}) \le \beta_{\max}(x_{(t)})$ for the optimal subset $S^{\ast}$. Moreover, the $S^{\ast}$ that maximizes $F_{\alpha,\beta}$ will include any covariate profile $x$ that would make a positive contribution to the score $F_{\alpha,\beta}$ at the given value of $\beta$. Such a positive contribution occurs when the concave $\omega$ function of $x$ is positive.
At the optimal $\alpha$ and $\beta = \beta_{\text{mle}}(S^{\ast})$ the $\omega$ function for each of the $\{x_{(1)}, \ldots, x_{(t)}\}$ is positive because $\beta_{\max}$ (the larger root of the $\omega$ functions) for each of these elements is greater than $\beta_{\text{mle}}(S^{\ast})$. From Lemma \[lem:na-max-to-mle\] we know that with respect to $\omega^{NA}$, $\frac{r} {R \left( r \right)} = 2$. Additionally, from Lemma \[lem:bj-max\_to\_mle\] we know that with respect to $\omega^{BJ}$, $\frac{r} {R \left( r \right)} \le 2$ under certain conditions. Therefore, we can conclude that at $\alpha^{\ast}$, $2$-homogeneity (and $1$-strength) is sufficient for $S^{\ast} \supseteq S^T$ with respect to $F^{NA}$; to $F^{BJ}$, under some conditions; and to the other score functions described above, by Lemma \[lem:na\_transform\_max\]. Essentially, if the observed proportions of $p$-values significant at $\alpha^{\ast}$ vary by no more than a factor of 2 across all of the $x \in U_{X}(S^T)$, then the detected subset will include all of the affected data elements. The next result we provide is a sufficient condition for guaranteeing that the detected subset will only include elements from the true subset ($S^{\ast} \subseteq S^T$). More specifically, we show that such a condition is sufficient strength of the affected data elements; or intuitively, for a given value $\delta$, the anomalous signal observed in every affected data element is more than $\delta$ times that of the unaffected data elements. First, let $D = \{ x_{(1)}, \ldots, x_{(t)}, x_{(t+1)}, \ldots, x_{(M)}\}$ be the data elements sorted by the priority function (Theorem \[thm:LTSS\]) $G(x) = \frac{N_{\alpha}(x)}{N(x)} = \beta_{\text{mle}}(x)$. By the assumption of $\delta > 1$ (an observed signal that is at least $1$-strong), $S^T = \{x_{(1)}, \ldots, x_{(t)}\}$.
Additionally, let $\delta = \frac{R^{-1} \left( r^{\text{unaff}}_{\text{mle}-h} \right)}{r^{\text{unaff}}_{{\text{mle}-h}}}.$ Therefore, $$\begin{aligned} {3} \frac{\delta}{\eta}-strong &\implies & \frac{\delta}{\eta} &< \frac{r^{\text{aff}}_{\text{mle}-l}}{r^{\text{unaff}}_{\text{mle}-h}} &{}\\ &\therefore & \frac{R^{-1} \left( r^{\text{unaff}}_{\text{mle}-h} \right)}{\eta r^{\text{unaff}}_{\text{mle}-h}} &< \frac{r^{\text{aff}}_{\text{mle}-l}}{r^{\text{unaff}}_{\text{mle}-h}} &{} \\ &\implies & R^{-1}\left(r^{\text{unaff}}_{\text{mle}-h} \right) &< \left( \frac{\sum_{x \in U_{X}(S^T)}{N(x)} }{ \sum_{x \in U_{X}(D)}{N(x)} } \right) r^{\text{aff}}_{\text{mle}-l} &\\ & & &=\frac{\sum_{x \in U_{X}(S^T)}{ r^{\text{aff}}_{\text{mle}-l}N(x)} }{ \sum_{x \in U_{X}(D)}{N(x)} } &\\ & & &\le \frac{\sum_{x \in U_{X}(S^T)}{ r_{\text{mle}}(x)N(x)} }{ \sum_{x \in U_{X}(D)}{N(x)} } ~~\left( \text{since} ~ r_{\text{mle}}(x) \ge r^{\text{aff}}_{\text{mle}-l} ~~ \forall x \in U_{X}(S^T) \right) &\\ & & &\le\frac{ \sum_{x \in U_{X}(S^T)}{ r_{\text{mle}}(x)N(x)} + \sum_{x \not\in U_{X}(S^T)}{ r_{\text{mle}}(x)N(x)} }{ \sum_{x \in U_{X}(D)}{N(x)} } &\\ & & &=\frac{ \sum_{x \in U_{X}(D)}{ r_{\text{mle}}(x)N(x)} }{ \sum_{x \in U_{X}(D)}{N(x)} } &\\ & & &=\frac{ \sum_{x \in U_{X}(D)}{ \left( \frac{ N_{\alpha}(x) }{ N(x) } - \alpha \right) N(x) } }{ \sum_{x \in U_{X}(D)}{N(x)} } &\\ & & &=\frac{ \sum_{x \in U_{X}(D)}{ N_{\alpha}(x) - N(x)\alpha } }{ \sum_{x \in U_{X}(D)}{N(x)} } &\\ & & &=\frac{ \sum_{x \in U_{X}(D)}{ N_{\alpha}(x) } - \sum_{x \in U_{X}(D)}{ N(x)\alpha } }{ \sum_{x \in U_{X}(D)}{N(x)} } &\\ & & &=\frac{ \sum_{x \in U_{X}(D)}{ N_{\alpha}(x) } }{ \sum_{x \in U_{X}(D)}{N(x)} } - \alpha &\\ &\therefore & \beta_{\max}(x_{(t+1)}) - \alpha &< \beta_{\text{mle}}(D) - \alpha &\\ &\implies & \beta_{\max}(x_{(t+1)}) &< \beta_{\text{mle}}(x_{(t)}) &\\ &\implies & \beta_{\max}(x_{(t+1)}) &< \beta_{\text{mle}}(S^{\ast}) &\\ &\therefore & \omega\big( \alpha, \beta_{\text{mle}}(S^{\ast}), 
N_{\alpha}(x_{(t+1)}), N(x_{(t+1)}) \big) &< 0 &\\ &\implies & |S^{\ast}| &\le t &\\ &\therefore & S^{\ast} &\subseteq S^T &\\ \end{aligned}$$ Intuitively, $\beta_{\text{mle}}(x_{(t)})$ and $\beta_{\text{mle}}(x_{(t+1)})$ are respectively the smallest affected and largest unaffected $\beta_{\text{mle}}$ values. Furthermore, $\beta_{\max}(x_{(t+1)}) \le \beta_{\text{mle}}(x_{(t)})$, which means $\beta_{\max}(x_{(t+1)}) \le \beta_{\text{mle}}(S^{\ast})$ for the optimal subset $S^{\ast}$. Moreover, the $S^{\ast}$ that maximizes $F_{\alpha,\beta}$ will not include any data element $x$ that makes a non-positive contribution to the score $F_{\alpha,\beta}$ at the given value of $\beta$. Such a non-positive contribution occurs when the concave $\omega$ function of $x$ is non-positive. At the optimal $\alpha$ and $\beta = \beta_{\text{mle}}(S^{\ast})$ the $\omega$ function for each of the $\{x_{(t+1)}, \ldots, x_{(M)}\}$ is non-positive because $\beta_{\max}$ (the larger root of the $\omega$ functions) for each of these elements is less than $\beta_{\text{mle}}(S^{\ast})$. From Lemma \[lem:na-max-to-mle\] we know that with respect to $\omega^{NA}$, $\frac{R^{-1} \left( r \right)} {r} = 2$. Additionally, from Lemma \[lem:bj-max\_to\_mle\] we know that with respect to $\omega^{BJ}$, $\frac{R^{-1} \left( r \right)} {r} \ge 2$ under certain conditions. Therefore, we can conclude that at $\alpha^{\ast}$, $\frac{2}{\eta}$-strength is sufficient for $S^{\ast} \subseteq S^T$ with respect to $F^{NA}$; to $F^{BJ}$, under some conditions; and to the other score functions described above, by Lemma \[lem:na\_transform\_max\]. Essentially, if the observed proportions of $p$-values significant at $\alpha^{\ast}$ across all of the $x \in U_{X}(S^T)$ are at least $\frac{2}{\eta}$ times larger than the observed proportions for $x \not\in U_{X}(S^T)$, then the detected subset will only include affected data elements.
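A small worked example illustrates the two sufficient conditions together. The counts below are hypothetical: three affected covariate profiles whose rates of significant $p$-values are $2$-homogeneous, five unaffected profiles near the null rate $\alpha$ (so the $\frac{2}{\eta}$-strength condition also holds), and a single fixed $\alpha$ for simplicity. An LTSS-style scan over priority-sorted prefixes then recovers exactly the affected profiles:

```python
# hypothetical data: N(x) = 200 p-values per profile, alpha = 0.05
alpha = 0.05
n = 200
counts = {  # N_alpha(x): number of p-values <= alpha in each profile
    "x1": 60, "x2": 55, "x3": 50,                       # affected, rates 0.25-0.30
    "x4": 12, "x5": 10, "x6": 9, "x7": 11, "x8": 10,    # unaffected, near alpha
}

def f_na(cells):
    # F_alpha^NA for the subset made up of the given cells
    n_a = sum(counts[c] for c in cells)
    n_tot = n * len(cells)
    return (n_a / n_tot - alpha) ** 2 * n_tot / (2 * alpha * (1 - alpha))

# LTSS: sort profiles by priority N_alpha(x)/N(x) and score only the prefixes
order = sorted(counts, key=lambda c: counts[c] / n, reverse=True)
best_score, best_t = max((f_na(order[:t]), t) for t in range(1, len(order) + 1))
detected = set(order[:best_t])
print(sorted(detected))  # the three affected profiles
```

Here $r_{\text{mle}}$ ranges from $0.20$ to $0.25$ over the affected profiles (ratio $1.25 < 2$), while the largest unaffected $r_{\text{mle}}$ is $0.01$, so the strength ratio of $20$ comfortably exceeds $\frac{2}{\eta} \approx 5.3$ with $\eta = 0.375$.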
\[thm:subset\_exact\] Under $H_1(S^T)$, where $S^T$ is a $t$-element subset of covariate profiles and for each element the true potential outcome distributions are unequal, $\exists~\nu,\delta > 1$ such that if the observed effect (as measured by $\omega$) across these covariate profiles is $\nu$-homogeneous and $\frac{\delta}{\eta}$-strong, then $S^{\ast} = S^T$. $$\begin{aligned} {3} \because~& & \nu\text{-homogeneous} \implies S^{\ast} &\supseteq S^{T} &\quad(\text{by Theorem } \ref{thm:subset_homo})\\ \because~& & \frac{\delta}{\eta}\text{-strong} \implies S^{\ast} &\subseteq S^{T} &\quad(\text{by Theorem } \ref{thm:subset_strength})\\ \therefore~& &S^{\ast} &= S^T \end{aligned}$$ It follows from the above corollaries that 2-homogeneity and $\frac{2}{\eta}$-strength are sufficient for $S^\ast = S^T$ with respect to $F^{NA}$; to $F^{BJ}$, under some conditions; and to the other score functions described above, by Lemma \[lem:na\_transform\_max\]. [^1]: Traditional empirical $p$-values are only asymptotically Uniform(0,1); for $p$-value ranges [@mcfowland-fgss-2013], $p$-values drawn uniformly from each range will be Uniform(0,1), even in finite samples. [^2]: For $p$-value ranges, as in [@mcfowland-fgss-2013], $N_{\alpha}(S)$ is more precisely the total probability mass less than $\alpha$ over the $p$-value ranges in $C(S)$. [^3]: The detected subpopulation excluded teacher experience between 25 and 30 years. Including this range yields qualitatively the same results and conclusions.
--- author: - | Nassim Bozorgnia\ Department of Physics and Astronomy, UCLA, 475 Portola Plaza, Los Angeles, CA 90095, USA\ E-mail: - | Graciela B. Gelmini\ Department of Physics and Astronomy, UCLA, 475 Portola Plaza, Los Angeles, CA 90095, USA\ Email: - | Paolo Gondolo\ Department of Physics and Astronomy, University of Utah, 115 South 1400 East \#201, Salt Lake City, UT 84112, USA\ School of Physics, KIAS, Seoul 130-722, Korea\ E-mail: title: | Channeling in direct dark matter detection II :\ channeling fraction in Si and Ge crystals --- Introduction ============ Channeling and blocking effects in crystals refer to the orientation dependence of charged ion penetration in crystals. In the “channeling effect" ions incident upon a crystal along symmetry axes and planes suffer a series of small-angle scatterings that maintain them in the open “channels" in between the rows or planes of lattice atoms and thus penetrate much further into the crystal than in other directions. Channeled incident ions do not get close to lattice sites, where they would be deflected at large angles. The “blocking effect" consists in a reduction of the flux of ions originating in lattice sites along symmetry axes and planes, due to large-angle scattering with the atoms immediately in front of the originating lattice site, creating what is called a “blocking dip" in the flux of ions exiting from a thin enough crystal as a function of the exit angle with respect to a particular symmetry axis or plane. Channeling and blocking effects in crystals are related because the non-channeled incident ions are those which suffer a close-encounter process with an atomic nucleus in the crystal, namely those which pass sufficiently close to a lattice nucleus to be deflected at a large angle. After a close-encounter collision the deflected ion acts as if it were “emitted" from a lattice site.
Channeling is often observed as a deficit of ions (incident at a small angle $\psi$ with respect to a particular symmetry axis or plane) deflected at large angles, which forms a “channeling dip" in the outgoing flux as a function of the incident beam angle $\psi$. As pointed out first by Lindhard [@Lindhard:1965], when no slowing-down processes are involved the “channeling" and “blocking" dips should be identical, when compared for the same particles, energies, crystals and crystal directions. Channeled ions lose their energy to electrons. They penetrate distances much larger than the characteristic separation of atoms along the channels, thus they interact with hundreds or thousands of lattice atoms. For energies in the keV range and above, channeled ions penetrate distances of at least several tens of nm (see Appendix A, where we use the Lindhard-Scharff [@Lindhard-Scharff; @Dearnaley:1973] model of electronic energy loss to calculate the penetration length of ions). These are distances much longer than the separation of atoms along the channels, which is similar to the lattice constant, i.e. approximately 0.5 nm for Si and Ge (see Appendix B). The potential importance of the channeling effect for direct dark matter detection was first pointed out for NaI (Tl) by Drobyshevski [@Drobyshevski:2007zj] and by the DAMA collaboration [@Bernabei:2007hw]. The prospect of a daily modulation of the dark matter signal in direct detection due to channeling in NaI was recently raised by Avignone, Creswick and Nussinov [@Avignone:2008cw]. In this paper we compute the channeling fraction of recoiling ions in Si and Ge crystals as a function of the recoil energy and temperature. Si and Ge crystals are used in several direct dark matter detection experiments, such as CDMS [@CDMS], CoGeNT [@CoGeNT], Edelweiss [@EDELWEISS-II], TEXONO [@TEXONO], EURECA [@EURECA], HDMS [@HDMS] and IGEX [@IGEX].
In a companion paper [@BGG-I] we introduced the general ideas and analytic models [@Lindhard:1965; @Dearnaley:1973; @Gemmell:1974ub; @Andersen:1967; @Morgan-VanVliet; @VanVliet; @Andersen-Feldman; @Komaki:1970; @Appleton-Foti:1977; @Hobler] that we use to describe these phenomena in the context of dark matter detection, and applied them to NaI (Tl). For the reader familiar with Ref. [@BGG-I] we would like to clarify the main differences between the calculations in Ref. [@BGG-I] and in the present paper, besides the crystal structure (see Appendix B). In this paper we use a different expression for the continuum potentials (see Eqs. 2.1 to 2.7), which leads to a different expression for the critical channeling distance for axial channels (see Eq. 2.15). We also use a different way of deriving the critical distance for planar channels (see Eqs. 2.18 to 2.23). Model of Channeling =================== Continuum models ---------------- There are different approaches to calculating the deflections of ions traveling in a crystal. In “binary collision models” the ion path is computed by a computer program (see Ref. [@Barrett:1971] for one of the first ones) in terms of a succession of individual interactions, each with one of the atoms in the crystal. Crystal imperfections and lattice vibrations are thus easily and correctly taken into account. In “continuum models”, reasonable approximations are made which allow one to replace the discrete series of binary collisions with atoms by a continuous interaction between a projectile and uniformly charged strings or planes. These models make it possible to replace the numerical calculations by an analytic description of channeling, and provide good quantitative predictions of the behavior of projectiles in the crystal in terms of simple physical quantities. This is the approach we use here. The analytical description of channeling phenomena was initially developed mostly by J.
Lindhard [@Lindhard:1965] and collaborators for ions with energies of an MeV and higher, and its use was later extended to lower energies, i.e. hundreds of eV and above, mostly to apply it to ion implantation in Si. For the low energy range, we found most useful the work of G. Hobler [@Hobler], who in 1995 and 1996 perfected and checked experimentally previous continuum model predictions [@cho] for axial and planar channeling at energies in the keV to a few 100 keV range, developed to avoid channeling in the implantation of B, P and As atoms in Si crystals [@implantation]. This approach must be complemented by the determination of parameters through data fitting or simulations. Moreover, lattice vibrations are more difficult to include in continuum models. Since we use a continuum model, our results should ultimately be checked against some of the many sophisticated simulation programs that implement the binary collision approach or mixed approaches (e.g. [@Monte-Carlo-programs]). Our calculation is based on the classical analytic models developed in the 1960’s and 70’s, in particular by Lindhard [@Lindhard:1965; @Dearnaley:1973; @Andersen:1967; @Morgan-VanVliet; @VanVliet; @Andersen-Feldman; @Komaki:1970; @Appleton-Foti:1977; @Hobler]. The fact that the de Broglie wavelengths of ions in the keV energy range are of the order of $\sim$ 0.01 pm (and smaller at higher energies), thus much shorter than the lattice constant of a crystal ($\sim$ 500 pm, see Appendix B), justifies using a classical treatment. We use the continuum string and plane model, in which the screened Thomas-Fermi potential is averaged over a direction parallel to a row or a plane. This averaged potential is considered to be uniformly smeared out along the row or plane of atoms, which is a good approximation if the propagating ion interacts with many lattice atoms in the row or plane through a correlated series of consecutive glancing collisions.
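The order-of-magnitude wavelength claim above is easy to check numerically. The sketch below is our illustration (not part of the paper); the ion masses and energies are standard illustrative values, and the non-relativistic formula $\lambda = h/\sqrt{2ME}$ is used:

```python
import math

HC_MEV_NM = 1.23984e-3   # h*c in MeV*nm
AMU_MEV = 931.494        # 1 amu in MeV/c^2

def de_broglie_nm(mass_amu, energy_kev):
    """Non-relativistic de Broglie wavelength lambda = h / sqrt(2 M E), in nm."""
    pc_mev = math.sqrt(2.0 * mass_amu * AMU_MEV * energy_kev * 1e-3)  # momentum in MeV/c
    return HC_MEV_NM / pc_mev

# A 100 keV Ge ion (M ~ 72.6 amu) has lambda ~ 1e-5 nm ~ 0.01 pm,
# more than four orders of magnitude below the ~0.5 nm lattice constant,
# consistent with the classical-treatment argument above.
lam_ge = de_broglie_nm(72.64, 100.0)
```

Lower energies give longer wavelengths (a 10 keV Si ion is at roughly 0.05 pm), still far below the lattice constant throughout the keV range.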
We are going to consider just one row, which simplifies the calculations and is correct except at the lowest energies we consider, as we explain below. There are several good analytic approximations of the screened potential. Except where stated otherwise, in this paper we use Molière’s approximation, following the work of Hobler [@Hobler] and Morgan and Van Vliet [@Morgan-VanVliet; @VanVliet]. Molière’s approximations of the continuum potentials are more complicated but also somewhat more accurate than Lindhard’s expressions, which we used in our paper devoted to NaI [@BGG-I]. Lindhard’s expressions are easier to manipulate algebraically to obtain different quantities of interest. Still, in this paper we use some expressions derived from Lindhard’s form of the potentials. In Molière’s approximation [@Gemmell:1974ub] the axial continuum potential, as a function of the transverse distance $r$ to the string, is $$U(r)=\left(2Z_1 Z_2 e^2/d\right)f(r/a) =E\psi_1^2f(r/a),$$ where $E$ is the energy of the propagating particle and $\psi_1$ is a dimensionless parameter defined by $$\psi_{1}^2=\frac{2Z_{1}Z_{2}e^2}{E d}, \label{psi1}$$ $Z_1$, $Z_2$ are the atomic numbers of the recoiling and lattice nuclei respectively, $d$ is the spacing between atoms in the row, $a$ is the Thomas-Fermi screening distance, $a= 0.4685 {\text {\AA} } (Z_1^{1/2} + Z_2^{1/2})^{-2/3} $ [@Barrett:1971; @Gemmell:1974ub] (1.225$\times 10^{-2}$ nm and 0.9296$\times 10^{-2}$ nm for a Si ion in Si and a Ge ion in Ge respectively, see Appendix B) and $E= Mv^2/2$ is the kinetic energy of the propagating ion. Molière’s screening function [@Gemmell:1974ub] for the continuum potential is $$f(\xi)=\sum_{i=1}^{3}{\alpha_i K_0(\beta_i \xi)}.$$ Here $K_0$ is the zero-order modified Bessel function of the second kind, and the dimensionless coefficients $\alpha_i$ and $\beta_i$ are $\alpha_i=\{ 0.1, 0.55, 0.35 \}$ and $\beta_i=\{ 6.0, 1.2, 0.3 \}$  [@Watson:1958], for $i=1,2,3$. The string of crystal atoms is at $r=0$.
In our case, $E$ is the recoil energy imparted to the ion in a collision with a WIMP, $$E = \frac{|\vec{\bf q}|^2}{2M},$$ and $\vec{\bf q}$ is the recoil momentum. The continuum planar potential in Molière’s approximation [@Gemmell:1974ub], as a function of the distance $x$ perpendicular to the plane, is $$U_p(x)=\left(2\pi n Z_1 Z_2 e^2 a\right)f_p(x/a) =E\psi_a^2f_p(x/a),$$ where the dimensionless parameter $\psi_a$ is defined as $$\psi_a^2=\frac{2\pi n Z_1 Z_2 e^2 a}{E}, \label{psi_a}$$ and $n= N d_{pch}$ is the average number of atoms per unit area, where $N$ is the atomic density and $d_{pch}$ is the width of the planar channel, i.e. the interplanar spacing (thus, the average distance of atoms within a plane is $d_p=1/ \sqrt{N d_{pch}}$). The subscript p denotes “planar" and $$f_p(\xi)=\sum_{i=1}^{3}{(\alpha_i/\beta_i) \exp(-\beta_i \xi)},$$ where the coefficients $\alpha_i$ and $\beta_i $ are the same as above. The plane is at $x=0$. Examples of axial and planar continuum potentials for a Si ion propagating in a Si crystal and a Ge ion propagating in a Ge crystal are shown in Fig. \[U\]. The continuum model does not imply that the potential energy of an ion moving near an atomic row is well approximated by the continuum potential $U$. The actual potential consists of sharp peaks near the atoms and deep valleys in between. The continuum model says that the net deflection due to the succession of impulses from the peaks is identical to the deflection due to a force $-U'$. This is only so if the ion never approaches so closely any individual atom that it suffers a large-angle collision. Lindhard proved that for a string of atoms this is so only if $$U''(r) < \frac{8}{d^2} E , \label{U''}$$ where the double prime denotes the second derivative with respect to $r$. Replacing the inequality in Eq. \[U”\] by an equality defines an energy dependent critical distance $r_c$ such that $r > r_c$ for the continuum model to be valid. 
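To make the two continuum potentials above concrete, the following sketch (our illustration, not code from the paper) evaluates them with only the Python standard library; $K_0$ is computed from its integral representation $K_0(x)=\int_0^\infty e^{-x\cosh t}\,dt$, and the row spacing $d$ and areal density $n$ are inputs the caller must supply from the crystal geometry (Appendix B):

```python
import math

ALPHA = (0.1, 0.55, 0.35)   # Moliere coefficients alpha_i
BETA = (6.0, 1.2, 0.3)      # Moliere coefficients beta_i
E2_EV_NM = 1.43996          # e^2/(4 pi eps0) in eV*nm

def k0(x):
    """K0(x) = integral_0^inf exp(-x cosh t) dt, by Simpson's rule (x >~ 1e-3)."""
    n, tmax = 4000, 30.0
    h = tmax / n
    s = math.exp(-x) + math.exp(-x * math.cosh(tmax))
    for i in range(1, n):
        s += (4 if i % 2 else 2) * math.exp(-x * math.cosh(i * h))
    return s * h / 3.0

def screening_length_nm(z1, z2):
    """Thomas-Fermi screening distance a = 0.4685 A (Z1^1/2 + Z2^1/2)^(-2/3), in nm."""
    return 0.04685 * (math.sqrt(z1) + math.sqrt(z2)) ** (-2.0 / 3.0)

def axial_potential_ev(r_nm, z1, z2, d_nm):
    """Axial continuum potential U(r) = (2 Z1 Z2 e^2 / d) sum_i alpha_i K0(beta_i r/a)."""
    a = screening_length_nm(z1, z2)
    f = sum(ai * k0(bi * r_nm / a) for ai, bi in zip(ALPHA, BETA))
    return (2.0 * z1 * z2 * E2_EV_NM / d_nm) * f

def planar_potential_ev(x_nm, z1, z2, n_nm2):
    """Planar continuum potential U_p(x) = 2 pi n Z1 Z2 e^2 a f_p(x/a);
    n_nm2 is the areal density n = N d_pch in atoms/nm^2."""
    a = screening_length_nm(z1, z2)
    fp = sum((ai / bi) * math.exp(-bi * x_nm / a) for ai, bi in zip(ALPHA, BETA))
    return 2.0 * math.pi * n_nm2 * z1 * z2 * E2_EV_NM * a * fp
```

As a sanity check, `screening_length_nm` reproduces the values quoted above: $1.225\times 10^{-2}$ nm for a Si ion in Si ($Z_1=Z_2=14$) and $0.930\times 10^{-2}$ nm for a Ge ion in Ge ($Z_1=Z_2=32$). Both potentials fall off monotonically with distance from the string or plane.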
Morgan and Van Vliet [@Morgan-VanVliet] also derived a condition for axial channels, similar to Eq. \[U”\] but with the factor 8 replaced by 16. The breakdown of the continuum theory for a planar channel is more involved than for an axial channel because the atoms in the plane contributing to the scattering of the propagating ion are usually displaced laterally within the plane with respect to the ion’s trajectory. Thus the moving ion does not encounter atoms at a fixed separation or at a fixed impact parameter as is the case for a row. Morgan and Van Vliet [@Morgan-VanVliet] reduced the problem of scattering from a plane of atoms to the scattering of an equivalent row of atoms contained in a strip centered on the projection of the ion path on the plane of atoms. They then applied Eq. \[U”\] as the condition for planar channeling to the fictitious string defined in this way (more about this below). The transverse energy --------------------- Lindhard proved that for channeled particles the longitudinal component $v \cos\phi$ of the velocity, i.e. the component of the velocity along the direction of the row or plane, may be treated as constant (if energy loss processes are neglected). Then, in the continuum model, the trajectory of the ions can be completely described in terms of the transverse direction, perpendicular to the row or plane considered. For a small angle $\phi$ between the ion’s trajectory and the atomic row (or plane), the so-called “transverse energy" $$E_{\perp} = E \sin^2\phi + U\simeq E \phi^2 + U \label{E-perp}$$ is conserved. In Eq. \[E-perp\] relativistic corrections are neglected. Let $r_i$ be the initial position at which the WIMP-nucleus collision occurs, i.e. if $r_i >0$ the recoiling nucleus was displaced with respect to its position of equilibrium in a crystal row when it collided with a WIMP.
We call $\phi_i$ the angle of the initial recoil momentum with respect to the row of atoms and $E$ the initial recoil energy of the propagating ion. Given these initial parameters, the issue of where to define $E_{\perp}$ arises. Namely, we define $$E_{\perp}= E \sin^2\phi_i + U(r^*), \label{E-perp-HP}$$ but there are different possible choices for $r^*$, the position at which to measure the potential $U$. In our case, the recoiling ion leaves an empty lattice site, thus it moves away from that site in the potential generated by its neighboring lattice atoms. So the potential the recoiling ion moves through at the moment of collision is very small, and the recoiling ion conserves its momentum and direction of motion until it gets very near the nearest neighbor, a distance $d$ away along the string. At this moment, it is at a distance $$r^* \equiv r_i + d \tan{\phi_i} \label{r*definition}$$ from its nearest neighbor. Therefore, as we did in Ref. [@BGG-I], we will make the approximation of defining the potential entering into Eq. \[E-perp-HP\] at this position $r^*$. Minimum distances of approach and critical channeling angles ------------------------------------------------------------ The conservation of the transverse energy provides a definition of the minimum distance of approach to the string, $r_{\rm min}$ (or to the plane of atoms $x_{\rm min}$), at which the trajectory of the ion makes a zero angle with the string (or plane), and also of the angle $\psi$ at which the ion exits from the string (or plane), i.e. far away from it where $U \simeq0$. In reality the furthest position from a string or plane of atoms is the middle of the channel, whose width we call $d_{ach}$ for an axial channel ($d_{pch}$ for a planar channel).
Thus, for an axial channel $$\label{eq:consetrans} E_{\perp}= U(r_{\rm min}) = E \psi^2 +U(d_{ach}/2).$$ We proceeded in two ways to define the axial channel radius $(d_{ach}/2)$ for the axial channels we included in our calculation. We used the contour plots of the axial continuum potentials plotted in a plane perpendicular to the channels shown in Fig. 3 of the 1995 paper of Hobler [@Hobler] to read off the channel radius $d_{\rm ach}/2$ of the $<$100$>$, $<$110$>$ and $<$111$>$ axial channels in terms of the lattice constant a$_{lat}$. They are 0.25 a$_{lat}$, 0.375 a$_{lat}$, and $ \sqrt{0.2^2+0.12^2}$ a$_{lat}=$ 0.233 a$_{lat}$, respectively. For the other axial channels we considered, $<$211$>$ and $<$311$>$, we define the channel width $d_{\rm ach}$ in terms of the interatomic distance $d$ in the corresponding row as $d_{ach}= 1/ \sqrt{N d}$, where $N$ is the atomic density. For a planar channel we replace the axial potential at the middle of the axial channel $U(d_{ach}/2)$ in Eq. \[eq:consetrans\] by the planar potential at the middle of the planar channel $U_p(d_{pch}/2)$ (the channel width $d_{pch}$ was defined after Eq. \[psi\_a\]). For axial channeling Lindhard equates the condition for channeling with the condition in Eq. \[U”\] for the validity of the continuum model. Replacing the inequality in Eq. \[U”\] by an equality defines an energy-dependent critical distance $r_c$, so that channeling can happen only if the propagating ion always keeps a distance $r > r_c$. Morgan and Van Vliet [@Morgan-VanVliet] use 5 instead of 8 in Eq. \[U”\], because this agrees better with their simulations of channeling in copper crystals. Following Hobler [@Hobler], we use here Morgan and Van Vliet’s equation to define $r_c$, i.e. $$U''(r_c) = \frac{5}{d^2} E. \label{MVU''}$$ With Molière’s form of the potential it is not possible to solve analytically for $r_c$.
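Although Eq. \[MVU”\] has no closed-form solution with the Molière potential, it is straightforward to solve numerically. The sketch below is our illustration (the row spacing $d=0.543$ nm is only an assumed, lattice-constant-sized value, not a tabulated channel parameter); it uses $K_0''(\xi)=K_0(\xi)+K_1(\xi)/\xi$ and bisection, exploiting the fact that $U''(r)$ decreases monotonically with $r$:

```python
import math

ALPHA = (0.1, 0.55, 0.35)
BETA = (6.0, 1.2, 0.3)
E2_EV_NM = 1.43996  # e^2/(4 pi eps0) in eV*nm

def kbessel(x, order):
    """K0 or K1 via K_n(x) = integral_0^inf exp(-x cosh t) cosh(n t) dt (Simpson)."""
    n, tmax = 2000, 30.0
    h = tmax / n
    def g(t):
        e = math.exp(-x * math.cosh(t))
        return e * math.cosh(order * t) if e else 0.0
    s = g(0.0) + g(tmax)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(i * h)
    return s * h / 3.0

def u_second_ev_nm2(r_nm, z1, z2, d_nm, a_nm):
    """U''(r) of the Moliere string potential, using K0''(xi) = K0(xi) + K1(xi)/xi."""
    c = 2.0 * z1 * z2 * E2_EV_NM / d_nm
    tot = 0.0
    for ai, bi in zip(ALPHA, BETA):
        xi = bi * r_nm / a_nm
        tot += ai * bi * bi * (kbessel(xi, 0) + kbessel(xi, 1) / xi)
    return c * tot / a_nm ** 2

def critical_distance_nm(e_ev, z1, z2, d_nm, a_nm):
    """Bisection solve of U''(r_c) = 5 E / d^2 (Eq. [MVU'']); U'' decreases with r,
    so the root is unique within the bracket."""
    target = 5.0 * e_ev / d_nm ** 2
    lo, hi = 1e-4, 0.5
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if u_second_ev_nm2(mid, z1, z2, d_nm, a_nm) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With these assumed inputs, $r_c$ for a Si ion in Si comes out at a few times $10^{-2}$ nm in the 10–100 keV range and decreases with increasing energy, the qualitative behavior described below.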
Morgan and Van Vliet [@Morgan-VanVliet] gave the following approximate analytical solution for the axial channeling minimum distance of approach, $$r^{MV}_c= (2/3) a \sqrt{\alpha} \left[1-(\sqrt{\alpha}/19) + (\alpha/700)\right] \label{rcritMoliere}$$ with $\alpha= (Z_1 Z_2 e^2 d/ a^2 E)$. This solution is not correct at low energies (high values of $\alpha$). As can be seen in Fig. \[rcPoly\] (and also in Figs. 8 and 13 of the 1995 paper of Hobler [@Hobler]) the steep increase in the approximate Morgan and Van Vliet solution at low energies (see the curve labeled “MV" in Fig. \[rcPoly\].a) is not present in the numerical solution (see the curve labeled “Exact" in Fig. \[rcPoly\].a) of $r_c$. Instead of Eq. \[rcritMoliere\] we use here a better approximate analytic solution obtained by fitting a degree-nine polynomial to the exact solution of Eq. \[MVU”\], $$\begin{aligned} r_c &=& a~[0.57305 \sqrt{\alpha} - 0.0220301 (\sqrt{\alpha}) ^2 + 0.000728889 (\sqrt{\alpha}) ^3\nonumber\\ &&- 0.0000155189 (\sqrt{\alpha})^4 + 2.04162 \times 10^{-7} (\sqrt{\alpha})^5 - 1.65057 \times 10^{-9} (\sqrt{\alpha})^6\nonumber\\ &&+ 7.9749 \times 10^{-12} (\sqrt{\alpha})^7 -2.11041 \times 10^{-14} (\sqrt{\alpha})^8 + 2.35121 \times 10^{-17} (\sqrt{\alpha})^9]. \label{rcritPoly}\end{aligned}$$ Eq. \[rcritPoly\] is valid for $E$ from 1 keV to 29 TeV (which corresponds to values of $\sqrt\alpha$ between 180 and 0.000158). Fig. \[rcPoly\] shows a comparison of the exact numerical solution $r_c(E)$ of Eq. \[MVU”\] and the approximate analytic solution Eq. \[rcritPoly\] as a function of $\sqrt{\alpha}$ (divided by the screening distance $a$). The high and low $\sqrt{\alpha}$ ranges in Fig. \[rcPoly\].a and \[rcPoly\].b respectively correspond to low and high energies. The maximum percentage error between the exact solution and the analytic approximation we use is 11.5%. Fig. \[rcPoly-Si-Ge\] shows the critical distance of approach $r_c(E)$ in Eq.
\[rcritPoly\] as a function of energy of the propagating ion for several axial channels, for Si ions propagating in a Si crystal and Ge ions propagating in a Ge crystal. Since $r_c$ is the smallest possible minimum distance of approach to the string of a channeled propagating ion for a given energy $E$, i.e. $r_{\rm min} > r_c$, and the potential $U(r)$ decreases monotonically with increasing $r$, then $$U(r_{\rm min}) < U(r_c). \label{defrcrit}$$ Using Eq. \[eq:consetrans\], this can be further translated into an upper bound on $E_{\perp}$ and thus on $\psi$, the angle the ion makes with the string far away from it, $$\psi < \psi_{c}(E)= \sqrt{ \frac{U(r_c(E))- U(d_{ach}/2)}{E} }. \label{psicritaxial}$$ $\psi_{c}(E)$ is the critical channeling angle for the particular axial channel, i.e. it is the maximum angle the propagating ion can make with the string far away from it (in the middle of the channel) if the ion is channeled. The critical distance $r_c(E)$ increases as $E$ decreases (see Figs. \[rcPoly-Si-Ge\], \[xcPoly-Si-Ge\] and \[rc-psic-MV-100-Si\] to \[rc-psic-MV-100-Si-c2\]). At low enough $E$, $r_c(E)$ becomes close to the radius of the channel $d_{\rm ach}/2$, and the critical angle $\psi_{c}(E)$ (which is the maximum angle for channeling in the middle of the channel) goes to zero (see Figs. \[rc-psic-MV-100-Si\] to \[rc-psic-MV-110-Ge\] and \[T-depent-psic-Si\], \[T-depent-psic-Ge\]). This means that there is a minimum energy below which channeling cannot happen, even for ions moving initially in the middle of the channel. This is a reflection of the fact that the range of the interaction between ion and lattice atoms increases with decreasing energy and at some point there is no position in the crystal where the ion would not be deflected at large angles. The existence of a minimum energy for channeling was found by Rozhkov and Dyul'dya [@Rozhkov] in 1984 and later by Hobler [@Hobler] in 1996.
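For reference, the fit of Eq. \[rcritPoly\] and the critical angle of Eq. \[psicritaxial\] translate directly into code. The sketch below is ours (function names and the $d=0.543$ nm row spacing used in the example are illustrative assumptions, not values from the paper):

```python
import math

# Coefficients of the degree-nine fit in sqrt(alpha) of Eq. [rcritPoly]
_C = (0.57305, -0.0220301, 0.000728889, -0.0000155189, 2.04162e-7,
      -1.65057e-9, 7.9749e-12, -2.11041e-14, 2.35121e-17)

def rc_over_a(alpha):
    """Static-lattice critical distance r_c/a; valid for sqrt(alpha) in ~[1.6e-4, 180]."""
    s = math.sqrt(alpha)
    return sum(c * s ** (i + 1) for i, c in enumerate(_C))

def alpha_param(z1, z2, d_nm, a_nm, e_ev):
    """alpha = Z1 Z2 e^2 d / (a^2 E), with e^2/(4 pi eps0) = 1.43996 eV*nm."""
    return z1 * z2 * 1.43996 * d_nm / (a_nm ** 2 * e_ev)

def critical_angle_rad(e_ev, u_rc_ev, u_mid_ev):
    """Axial critical angle of Eq. [psicritaxial], given U(r_c(E)) and U(d_ach/2) in eV;
    clipped at zero where U(r_c) <= U(d_ach/2), i.e. where no channeling is possible."""
    return math.sqrt(max(u_rc_ev - u_mid_ev, 0.0) / e_ev)
```

Since `rc_over_a` grows with $\alpha$ and $\alpha \propto 1/E$, the code reproduces the behavior just described: $r_c(E)$ increases as $E$ decreases, and once $U(r_c(E))$ drops to the mid-channel potential the critical angle vanishes.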
It is clear that to compute $r_c(E)$ when it is not small with respect to the radius of the channel $d_{\rm ach}/2$, and thus to compute the actual minimum energy for channeling, we would need to consider the effect of more than one row or plane (as done in Refs. [@Rozhkov] and [@Hobler]), thus our results are approximate in this case. For planar channeling we will follow the procedure of defining a “fictitious row" introduced by Morgan and Van Vliet [@Morgan-VanVliet; @Hobler]. They reduced the problem of scattering from a plane of atoms to the scattering of an equivalent row of atoms contained in a strip of width 2$R$ ($R$ is defined below) centered on the projection of the ion path onto the plane of atoms, and took the average area per atom in the plane, $1/N d_{pch}$ to be 2$R$ times the characteristic distance $\bar{d}$ between atoms along this fictitious row, i.e. $$\bar{d} = 1/ (N d_{pch} 2 R). \label{eq: dbar}$$ Once the width $2R$ of the fictitious row is specified, one uses the channeling condition for the continuum string model, Eq. \[U”\], with the average atomic composition of the plane. For $R$, Morgan and Van Vliet used the impact parameter in an ion-atom collision corresponding to a deflection angle of the order of “the break-through" angle $\sqrt{U_p(0)/E}$. This is the minimum angle at which an ion of energy $E$ must approach the plane from far away (so that the initial potential can be neglected) to overcome the potential barrier at the center of the plane at $x=0$ (namely, so that $E_\perp= U_p(0)$). For small scattering angles, the deflection angle $\delta$ is related to the impact parameter, in this case $R$, as (see e.g. Eq. $2.1'$ of Lindhard [@Lindhard:1965]) $$2 E \delta= -d~U'(R), \label{U'}$$ where $U'$ is the derivative of the axial continuum potential, and Morgan and Van Vliet define $R$ by taking $\delta=\sqrt{U_p(0)/E}$. 
Using Molière’s approximation for the potentials, Morgan and Van Vliet found the following expression for $R$, $$R^{MV}= a \left(\frac{A}{2}\right) \ln\left(B~Z_1 Z_2 e^2/ a \sqrt{E U_p(0)}\right) \label{MVR}$$ which leads to the $\bar{d}$ value $$\bar{d}^{MV}= \left[A~a N d_{\rm pch} \ln\left(B~Z_1 Z_2 e^2/ a \sqrt{E U_p(0)}\right)\right]^{-1}, \label{dbar-MV}$$ with coefficients $A=1.2$ and $B=4$. Morgan and Van Vliet [@Morgan-VanVliet] found discrepancies with this theoretical formula in simulations of binary collisions of 20 keV protons in a copper crystal and adjusted the coefficients to $A= 3.6$ and $B=2.5$. Hobler [@Hobler] used both sets of coefficients and compared them with simulations and data of B and P ions propagating in Si for energies of about 1 keV and above. Hobler concluded that the original theoretical formula was better in his case (although Hobler proposed yet another empirical relation to define $\bar{d}$). While Eq. \[U’\] seems to provide a good condition for $R$, there is a channel-dependent upper limit in energy for the applicability of its approximate analytical solution in Eq. \[dbar-MV\], because the logarithm in $\bar{d}^{MV}$ approaches zero as $E$ approaches $(4 Z_1 Z_2 e^2/ a)^2/U_p(0)$. Close to this value of $E$ there is an unphysically fast increase in $\bar{d}^{MV}$ (and consequently in $x_c(E)$) that indicates the breakdown of the approximate solution $\bar{d}^{MV}$ in Eq. \[dbar-MV\] (which, as shown in Fig. 13 of the 1995 paper of Hobler [@Hobler], is not found in other expressions for $x_c$). We decided to keep the Morgan and Van Vliet definition for $R$ in Eq. \[U’\] and use the following approximate analytical solution obtained by fitting a degree-five polynomial in $\ln{y}$ to the exact numerical solution of Eq.
\[U’\] $$\begin{aligned} R&=& a~\big(0.716014 + 0.510922 \ln{y} + 0.12047 (\ln{y})^2 + 0.0180492 (\ln{y})^3\nonumber\\ &&+ 0.00442459 (\ln{y})^4 - 0.000824744 (\ln{y})^5 \big), \label{R}\end{aligned}$$ where $y=Z_1 Z_2 e^2/a \sqrt{E U_p(0)}$. Fig. \[dbar\] shows a comparison of the exact numerical solution of Eq. \[U’\] for $R$ and its analytical approximation in Eq. \[R\] (divided by $a$) as a function of $y$. The approximate expression of Morgan and Van Vliet in Eq. \[dbar-MV\] is also shown in Fig. \[dbar\] (labeled MV). The high and low $y$ ranges in Fig. \[dbar\].a and b respectively correspond to low and high energies. The approximate solution is not valid at $y<0.15$, which corresponds to $E>50$ MeV for Si, and $E>700$ MeV for Ge. Within its range of validity, the percentage error of the analytic approximation in Eq. \[R\] is less than 9%. Let us call $\bar{r}_c(E)$ the critical distance obtained from Eq. \[rcritPoly\] for the fictitious row, whose interatomic distance is $\bar{d}$ in Eq. \[eq: dbar\] in which the distance $R$ is given in Eq. \[R\]. Then, the minimum distance of approach for planar channeling is $$x_c(E) \equiv \bar{r}_c(E). \label{ourxcrit}$$ Fig. \[xcPoly-Si-Ge\] shows the plot of $x_c(E)$ (obtained from using Eq. \[R\] for the fictitious string) as a function of energy for the most important planar channels, i.e. {100}, {110}, {111}, {210} and {310}. Fig. \[xcPoly-Si-Ge\] shows that we can safely extend our approximation to 50 MeV for Si ions in a Si crystal and to 700 MeV for Ge ions in Ge crystals. Writing equations equivalent to Eqs. \[eq:consetrans\] and \[defrcrit\] for planar channels, namely $$E_{\perp}= U_p(x_{\rm min}) = E (\psi^p)^2 +U_p(d_{pch}/2) \label{planarEperp}$$ and $$U_p(x_{\rm min}) < U_p(x_c(E)), \label{planarmincond}$$ we obtain an equation similar to Eq. \[psicritaxial\] but for the maximum planar channeling angle, the critical planar channeling angle $$\psi^p_c(E)= \sqrt{ \frac{U_p(x_c(E))- U_p(d_{pch}/2)}{E} }.
\label{psicritplanar}$$ For very small energies, for which $x_c(E) \geq d_{\rm pch}/2$, no channeling is possible (the maximum distance to any plane cannot be larger than half the width of the channel separating them) and $\psi^p_c=0$ (see Figs. \[xcPoly-Si-Ge\], \[rc-psic-MV-100-Si\] to \[rc-psic-MV-110-Ge\] and \[T-depent-psic-Si\].b, \[T-depent-psic-Ge\].b). When $x_c(E)$ approaches the middle of the channel the effect of other planes should be considered, so our approximation of using the potential of only one plane is not correct in this regime. The static lattice critical distances presented in Figs. \[rcPoly-Si-Ge\] and \[xcPoly-Si-Ge\] (also in the left panels of Figs. \[rc-psic-MV-100-Si\], \[rc-psic-MV-110-Si\], \[rc-psic-MV-100-Ge\] and \[rc-psic-MV-110-Ge\]) do not include thermal effects. These are important and must be taken into account. They increase the critical channeling distances and consequently decrease the critical channeling angles as the temperature increases (as clearly shown in Figs. \[rc-psic-MV-100-Si-c1\] and \[rc-psic-MV-100-Si-c2\]). Temperature-dependent critical distances and angles --------------------------------------------------- So far we have been considering static strings and planes, but the atoms in a crystal are actually vibrating. We use here the Debye model to take into account the zero-point energy and thermal vibrations of the atoms in a crystal.
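Eq. \[R\] and the fictitious-row spacing of Eq. \[eq: dbar\] are likewise direct to evaluate. A sketch (ours; the atomic density $N$, channel width $d_{\rm pch}$ and kinematic variable $y$ are inputs the caller must supply, and the sample values in the usage note are only placeholders):

```python
import math

# Degree-five fit in ln(y) of Eq. [R]; y = Z1 Z2 e^2 / (a sqrt(E U_p(0))).
# Valid for y > 0.15 (below ~50 MeV for Si ions in Si, ~700 MeV for Ge in Ge).
_C = (0.716014, 0.510922, 0.12047, 0.0180492, 0.00442459, -0.000824744)

def r_over_a(y):
    """Half-width R/a of the fictitious row used for planar channeling."""
    ly = math.log(y)
    return sum(c * ly ** i for i, c in enumerate(_C))

def dbar_nm(y, a_nm, n_nm3, d_pch_nm):
    """Fictitious-row spacing dbar = 1 / (N d_pch 2R) of Eq. [eq: dbar];
    N in atoms/nm^3, d_pch in nm."""
    return 1.0 / (n_nm3 * d_pch_nm * 2.0 * r_over_a(y) * a_nm)
```

Since $y$ grows as $E$ decreases, $R$ (and hence the strip width $2R$) grows toward low energies; `dbar_nm` then feeds the axial machinery of Eq. \[rcritPoly\] to give $x_c(E)=\bar{r}_c(E)$.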
The one-dimensional rms vibration amplitude $u_1$ of the atoms in a crystal in this model is [@Gemmell:1974ub; @Appleton-Foti:1977] $$u_1(T)=12.1 \, \text{\AA} \, \left[\left(\frac{\Phi(\Theta/T)}{{\Theta/T}} + \frac{1}{4}\right)(M\Theta)^{-1}\right]^{1/2}, \label{vibu1}$$ where the 1/4 term accounts for the zero-point energy, $M$ is the atomic mass in amu, $\Theta$ and $T$ are the Debye temperature and the temperature of the crystal in K, respectively, and $\Phi(x)$ is the Debye function, $$\Phi(x)=\frac{1}{x}\int_{0}^{x}{\frac{t dt}{e^t -1}}.$$ The Debye temperatures of Ge and Si are respectively $\Theta=290$ K and $\Theta=490$ K [@Gemmell:1974ub; @Hobler]. The vibration amplitude $u_1$ as a function of the temperature $T$ is plotted in Fig. \[figu1\] for Si and Ge crystals. At room temperature (20 $^\circ$C), $u_1=0.00849$ nm for Ge and $u_1=0.00827$ nm for Si. In principle there are modifications to the continuum potentials due to thermal effects, but we are going to take thermal effects in the crystal into account through a modification of the critical distances found originally by Morgan and Van Vliet [@Morgan-VanVliet] and later by Hobler [@Hobler] to provide good agreement with simulations and data. For axial channels it consists of taking the temperature-corrected critical distance $r_c(T)$ to be $$r_c(T)= \sqrt{r^2_c(E) + [c_1 u_1(T)]^2}, \label{rcofT}$$ where the dimensionless factor $c_1$ in different references is a number between 1 and 2 (see e.g. Eq. 2.32 of Ref. [@VanVliet] and Eq. 4.13 of the 1971 Ref. [@Morgan-VanVliet]). For planar channels the situation is more complicated, because some references give a linear and others a quadratic relation between $x_c(T)$ and $u_1$.
Following Hobler [@Hobler] we use an equation similar to that for axial channels, namely $$x_c(T)= \sqrt{x^2_c(E) + [c_2 u_1(T)]^2}, \label{xcofT}$$ where again $c_2$ is a number between 1 and 2 (for example Barrett [@Barrett:1971] finds $c_2 = 1.6$ at high energies, and Hobler [@Hobler] uses $c_2 = 2$). We will mostly use $c_1=c_2=1$ in the following, to try to produce upper bounds on the channeling fractions. Using the temperature-corrected critical distances of approach $r_c(T)$ and $x_c(T)$ (Eqs. \[rcofT\] and \[xcofT\]) instead of the static lattice critical distances $r_c$ and $x_c$ (Eqs. \[rcritPoly\] and \[ourxcrit\]), in the definition of critical angles, Eqs. \[psicritaxial\] and \[psicritplanar\], we obtain the temperature-corrected critical axial and planar angles, examples of which are shown in the right panels of Figs. \[rc-psic-MV-100-Si\] to \[rc-psic-MV-110-Ge\] ($c_1=c_2=c$ and $c=1$ or $c=2$ at room temperature). As shown in Fig. \[Compare-Data\], with this formalism and using $c_1=c_2=2$ we fit relatively well the critical angles that Hobler [@Hobler] extracted from thermal wave measurements at room temperature for B and P ions in a Si crystal (shown in green, or gray if color is not available) in several channels, for energies between 20 keV and 600 keV. Figs. \[rc-psic-MV-100-Si-c1\] and \[rc-psic-MV-100-Si-c2\] clearly show the temperature effects in the critical distances and angles for a specific channel, the $<$100$>$ axial channel of a Si crystal and for a propagating Si ion. At small energies the static critical distance of approach is much larger than the vibration amplitude, so temperature corrections are not important. For small enough energies the critical distance becomes larger than the radius of the channel, indicating that nowhere in the channel can an ion be far enough from the row of lattice atoms for channeling to take place (thus the critical channeling angle is zero).
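Eq. \[vibu1\] together with Eqs. \[rcofT\] and \[xcofT\] is simple to evaluate numerically. The sketch below is our code (with a plain midpoint quadrature for the Debye function; the atomic masses 28.086 and 72.64 amu used in the checks are standard values, not taken from this paper), and it reproduces the room-temperature amplitudes quoted above:

```python
import math

def debye_phi(x):
    """Debye function Phi(x) = (1/x) * integral_0^x t/(e^t - 1) dt (midpoint rule)."""
    n = 10000
    h = x / n
    s = sum(((i + 0.5) * h) / math.expm1((i + 0.5) * h) for i in range(n))
    return s * h / x

def u1_nm(t_kelvin, mass_amu, theta_kelvin):
    """One-dimensional rms vibration amplitude of Eq. [vibu1], in nm
    (12.1 A = 1.21 nm prefactor; the 1/4 term is the zero-point motion)."""
    x = theta_kelvin / t_kelvin
    return 1.21 * math.sqrt((debye_phi(x) / x + 0.25) / (mass_amu * theta_kelvin))

def rc_temp_nm(rc_static_nm, t_kelvin, mass_amu, theta_kelvin, c1=1.0):
    """Temperature-corrected critical distance of Eq. [rcofT]; Eq. [xcofT] has
    the same form for planar channels with c2 in place of c1."""
    u1 = u1_nm(t_kelvin, mass_amu, theta_kelvin)
    return math.sqrt(rc_static_nm ** 2 + (c1 * u1) ** 2)
```

At 20 $^\circ$C this gives $u_1 \approx 0.00827$ nm for Si ($\Theta = 490$ K) and $u_1 \approx 0.00849$ nm for Ge ($\Theta = 290$ K), matching the values above; the quadrature in `rc_temp_nm` makes explicit how the correction only matters once $r_c(E)$ is comparable to $c_1 u_1$.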
The exact calculation of the energy at which this happens would require considering the effect of more than a single row of atoms (which we do not do here); thus our results at these low energies are only approximate. As the energy increases, the static critical distance of approach decreases, and when it becomes small with respect to the vibration amplitude $u_1$, the temperature corrected critical distance becomes equal to $(c_1 u_1)$, which is larger for larger values of $c_1$. When $u_1(T)$ becomes important in determining the critical distance, the critical distance becomes larger, and therefore the critical channeling angle becomes smaller, at higher temperatures. Figs. \[T-depent-psic-Si\] and \[T-depent-psic-Ge\] show how the critical channeling angles change with temperature for four particular channels, the $<$110$>$ and $<$100$>$ axial and the {110} and {100} planar channels, for Si ions in Si and Ge ions in Ge, respectively. In both cases the axial critical angles are larger than the planar critical angles. The $<$110$>$ and {111} critical channeling angles are the largest among the axial and planar channels, respectively. For example, at $E=200$ keV for Si ions in Si, the channels with the largest channeling angles are (in order of decreasing channeling angles): $<$110$>$, $<$100$>$, $<$211$>$, $<$111$>$, {111}, $<$311$>$, {110}, {100}, {310}, and {210}. We can clearly see that the critical angles become zero at low enough energies (for which the critical distance of approach needed for channeling would be larger than the radius of the channel), indicating the range of energies for which no channeling is possible. Channeling of recoiling lattice nuclei ====================================== The channeling of ions in a crystal depends not only on the angle their initial trajectory makes with rows or planes, but also on their initial position.
The nuclei recoiling after an interaction with a WIMP start initially from lattice sites (or very close to them), thus blocking effects are important. In fact, as argued originally by Lindhard [@Lindhard:1965], in a perfect lattice and in the absence of energy-loss processes the probability for a particle starting from a lattice site to be channeled would be zero. This is what Lindhard called the [*“Rule of Reversibility."*]{} However, any departure of the actual lattice from a perfect lattice, for example due to vibrations of the atoms in the lattice, violates the conditions of this argument and allows some of the recoiling lattice nuclei to be channeled. Lattice vibrations are more important at high temperatures, and they have two opposite effects on channeling fractions: the probability of finding the atom which collides with a dark matter particle farther from its equilibrium position increases with increasing temperature, which increases the channeling fractions, but the range of angles the trajectory of the propagating ion may make with the direction of the channel decreases with increasing temperature, which decreases the channeling fractions. We now estimate the channeling fraction using the formalism presented so far. Channeling fraction for each channel ------------------------------------ As in Ref. [@BGG-I] we use a Gaussian function for the probability distribution $g(r)$ of the perpendicular distance $r$ from the row at which the atom that collides with a WIMP is located at the moment of the collision, due to thermal vibrations in the crystal, $$g(r)=\frac{r}{u_1^2}\exp{(-r^2/2u_1^2)}. \label{gofr}$$ The one dimensional vibration amplitude $u_1$ is given in Eq. \[vibu1\]. As explained in detail in Ref.
[@BGG-I], the channeled fraction $\chi_{\rm axial}(E, \hat{\textbf{q}})$ of nuclei with recoil energy $E$ moving initially in the direction $\hat{\textbf{q}}$ making an angle $\phi$ with respect to an atomic row is given by the fraction of nuclei which can be found at a distance $r$ larger than a minimum distance $r_{i,\rm min}$ from the row at the moment of collision, determined by the critical distance of approach $$\chi_{\rm axial}(E, \phi)=\int_{r_{i,\rm min}}^{\infty}{dr g(r)}=\exp{(-r_{i,\rm min}^2/2u_1^2)}. \label{chiaxial}$$ It can easily be seen using Eqs. \[E-perp-HP\], \[r\*definition\], \[eq:consetrans\] and \[defrcrit\] that, because $U(r_i + d \tan\phi) \geq U(r_c)$, if $\phi>\psi_c$ no channeling can occur and $\chi_{\rm axial}(E, \phi)=0$. Using the condition $$E \sin^2 \phi + U(r_i + d \tan\phi)= U(r_{\rm min}) < U(r_c(E)), \label{ChanCond-CA}$$ that implies the equality for the minimum initial distance $r_{i,\rm min}$, $$U(r_{i,\rm min}+ d \tan\phi) = U(r_c(E))-E \sin^2\phi,$$ in Ref. [@BGG-I] we derived the following analytic expression for $r_{i,\rm min}$ from Lindhard’s approximation to the potential: $$r_{i,\rm min} (E, \phi) = \frac{C a}{\sqrt{\left( 1+\frac{C^2a^2}{r_{c}^2} \right) \, \exp\!\left(-{2 \sin^2\phi}/{\psi_1^2} \right) -1}} - d \tan\phi, \label{rimin}$$ where $C$ is a constant, which was found experimentally to be $C\simeq\sqrt{3}$ [@Lindhard:1965]. We use here this equation because it is not possible to find a similar analytic expression using Molière’s approximation to the potential (although following Hobler we use Molière’s approximation to obtain the critical distances and angles). $r_{i,\rm min}$ is a function of the temperature too, through $r_{c}(T)$. Notice that a small change in the critical distance $r_c(T)$ and thus in $r_{i,\rm min}$ is exponentially magnified in the channeling fraction $\chi_{\rm axial}$ (Eq. \[chiaxial\]). This constitutes the most important difficulty to evaluate channeling fractions. 
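Eqs. \[rimin\] and \[chiaxial\] can be sketched as follows. The input numbers are made up for illustration (not fitted to any channel), and the clamping of a negative $r_{i,\rm min}$ to full channeling is our reading of the formalism, not a prescription from the text:

```python
import math

C = math.sqrt(3.0)  # Lindhard's empirical constant

def chi_axial(phi, a, rc, psi1, d, u1):
    """Axial channeling fraction, Eqs. (rimin) and (chiaxial).
    phi in radians; a (screening distance), rc = r_c(T), d (atomic spacing
    along the row) and u1 in the same length units; psi1 in radians."""
    s2 = math.sin(phi) ** 2
    denom2 = (1.0 + (C * a / rc) ** 2) * math.exp(-2.0 * s2 / psi1 ** 2) - 1.0
    if denom2 <= 0.0:      # angle beyond the critical angle: no channeling
        return 0.0
    r_min = C * a / math.sqrt(denom2) - d * math.tan(phi)
    if r_min <= 0.0:       # any thermal displacement is enough (our clamp)
        return 1.0
    return math.exp(-r_min ** 2 / (2.0 * u1 ** 2))

# Made-up inputs, roughly of the right orders of magnitude (nm, rad):
print(chi_axial(0.005, a=0.01225, rc=0.02, psi1=0.02, d=0.384, u1=0.00827))
```

The exponential dependence on $r_{i,\rm min}^2/2u_1^2$ makes the output very sensitive to small changes in $r_c(T)$, which is the difficulty noted above.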
The same happens for planar channels. For a planar channel, the Gaussian thermal distribution for the planar potential is one-dimensional (the relevant vibrations occurring perpendicularly to the plane), $$g(x)= (2 \pi u_1^2)^{-1/2} \exp(-x^2/2u_1^2). \label{gofx}$$ This is normalized to 1 for $-\infty<x<+\infty$. In our calculations we only consider positive values of $x$, $0<x<+\infty$, for each plane, thus we multiply $g(x)$ by a factor of $2$ to find the fraction of channeled nuclei for a planar channel, $$\chi_{\rm planar}(E, \phi)=\int_{x_{i,\rm min}}^{\infty}{2~g(x) dx} =\frac{2}{\sqrt{\pi}}\int _{x_{i,\rm min}}^{\infty}{\frac{\exp(-x^2/2u_1^2)}{\sqrt{2}u_1}dx} =\mathop{\rm erfc}\left(\frac{x_{i,\rm min}}{\sqrt{2}u_1}\right), \label{chiplanar}$$ where the minimum initial distance (derived in Ref. [@BGG-I] using Lindhard’s planar potential) is $$x_{i, \rm min}(E, \phi)= \frac{(a/2) \left\{C^2-\left[\sqrt{(x_c^2/a^2)+C^2}- (x_c/a)- (\sin^2\phi/\psi_a^2)\right]^2\right\} }{\left[\sqrt{(x_c^2/a^2)+C^2}-(x_c/a)-(\sin^2\phi/\psi_a^2)\right]} - d_p \tan\phi. \label{ximin}$$ Here $\phi$ is the angle $\hat{\textbf{q}}$ makes with the plane, defined as the complementary angle to the angle between $\hat{\textbf{q}}$ and the normal to the plane, or as the smallest angle between $\hat{\textbf{q}}$ and vectors lying on the plane. Also in this case, $\chi_{\rm planar}(E, \phi)=0$ if $\phi>\psi_c^p$. Note that $x_{i, \rm min}$ is also a function of the temperature through its dependence on $x_c(T)$. Using either Eq. \[chiaxial\] or Eq. \[chiplanar\], for an axial and a planar channel respectively, we define the channeling fraction $\chi_k$ for each channel $k$, which depends on the initial energy $E$, the initial angle $\phi$, and the temperature $T$. Then we sum over all channels and angles to obtain the total channeling fraction as a function of $E$ and $T$.
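Eqs. \[ximin\] and \[chiplanar\] in code form (illustrative inputs only). A useful internal consistency check, which follows from the algebra of Eq. \[ximin\], is that at $\phi=0$ one finds $C^2-B^2 = 2(x_c/a)B$ with $B=\sqrt{(x_c/a)^2+C^2}-x_c/a$, so $x_{i,\rm min}(E,0)=x_c$ exactly:

```python
import math

C = math.sqrt(3.0)  # Lindhard's constant, as in the axial case

def x_i_min(phi, a, xc, psi_a, d_p):
    """Minimum initial distance from the plane, Eq. (ximin);
    returns None past the planar critical angle."""
    B = (math.sqrt((xc / a) ** 2 + C * C) - xc / a
         - math.sin(phi) ** 2 / psi_a ** 2)
    if B <= 0.0:
        return None
    return (a / 2.0) * (C * C - B * B) / B - d_p * math.tan(phi)

def chi_planar(phi, a, xc, psi_a, d_p, u1):
    """Planar channeling fraction, Eq. (chiplanar)."""
    xm = x_i_min(phi, a, xc, psi_a, d_p)
    if xm is None:
        return 0.0
    if xm <= 0.0:          # our clamp, as in the axial case
        return 1.0
    return math.erfc(xm / (math.sqrt(2.0) * u1))
```

All parameter values passed to these functions in any test are hypothetical.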
Total geometric channeling fraction ----------------------------------- The geometric channeling fraction is the fraction of recoiling ions that propagate in the 1st, or 2nd, or …or 74th channel. Here “geometric” refers to assuming that the distribution of recoil directions is isotropic. In reality, in a dark matter direct detection experiment, the distribution of recoil directions is expected to be peaked in the direction of the average WIMP flow. Here we examine this geometric channeling fraction, and postpone the case of a WIMP wind to another paper [@NGG-WIMPwind]. We include in our calculation only the most important channels, the same ones considered by Hobler [@Hobler]. These are the $<$100$>$, $<$110$>$, $<$111$>$, $<$211$>$ and $<$311$>$ axial channels and the {100}, {110}, {111}, {210} and {310} planar channels. These constitute a total of 74 channels, as explained in Appendix B. The probability $\chi_{\rm rec}(E,\hat{\bf q})$ that an ion with initial energy $E$ is channeled in a given direction $\hat{\bf q}$ is the probability that the recoiling ion enters any of the available channels, i.e. $$\chi_{\rm rec}(E,\hat{\bf q}) = P(A_1~\text{or}~A_2~\text{or \ldots or}~A_{74}). \label{chirec}$$ We compute this probability in the same way as in Ref. [@BGG-I], using a recursion of the addition rule of probability theory and treating channeling along different channels as independent (Appendix C shows that this is a good approximation). Fig. \[Our-HEALPIX\] shows the channeling probability $\chi_{\rm rec}(E, {\bf{\hat{q}}})$ of Eq. \[chirec\] for a 200 keV recoiling Si ion in a Si crystal and a 1 MeV recoiling Ge ion in a Ge crystal, at 20 $^\circ$C. Temperature effects were included with $c_1=c_2=1$. The probability is computed for each direction and plotted using the HEALPix pixelization of a sphere.
The light green, light blue, dark blue, pink, red, and yellow colors indicate a channeling probability of 0.5, 0.013, $7.5 \times 10^{-4}$, $4 \times 10^{-5}$, $10^{-5}$ and zero, respectively. To obtain the geometric total channeling fraction, we average the channeling probability $\chi_{\rm rec}(E,\hat{\bf q})$ over the directions $\hat{\bf q}$, assuming an isotropic distribution of the initial recoil directions $\hat{\bf q}$, $$P_{\rm rec}(E)=\frac{1}{4\pi}\int{\chi_{\rm rec}(E, \hat{\bf q})d\Omega_q}.$$ This integral is computed using HEALPix [@HEALPix:2005] (see Appendix B of Ref. [@BGG-I] for a complete explanation). Our results for the geometric total channeling fraction for Si ions in a Si crystal and Ge ions in a Ge crystal are shown in Figs. \[FracSiG-DiffT-rigid\], \[FracSiG-DiffT-c1\] and \[FracSiG-DiffT-c2\] for three different assumptions for the effect of thermal vibrations in the lattice, which depend on the values of the parameters $c_1$ and $c_2$ used in the temperature corrected critical distances of approach $r_c(T)$ and $x_c(T)$ in Eqs. \[rcofT\] and \[xcofT\]. The unrealistic case of assuming no vibrations in the lattice (except for vibrations of the colliding atom) corresponds to taking $c_1=c_2=0$ and is shown in Fig. \[FracSiG-DiffT-rigid\] for different temperatures because it provides an upper limit to the channeling fractions. In this case the channeling fractions reach a few % and they increase with temperature. In the literature, values of $c_1$ and $c_2$ between 1 and 2 are used for other materials or other channeling ions. Thus, we show the $c_1=c_2=1$ choice in Fig. \[FracSiG-DiffT-c1\] and the $c_1=c_2=2$ choice in Fig. \[FracSiG-DiffT-c2\]. As the values of $c_1$ and $c_2$ increase, so do the minimal distances from rows or planes at which propagating ions must remain in order to be channeled; thus the critical channeling angles decrease, which makes the channeling fractions smaller.
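Treating the channels as independent, the union probability of Eq. \[chirec\] and the isotropic average $P_{\rm rec}$ can be sketched with plain Monte Carlo in place of the HEALPix pixelization; the per-direction probability `chi_of_direction` is a stand-in for the full 74-channel geometry, which we do not reproduce here:

```python
import math
import random

def chi_rec(channel_fractions):
    """Union probability of Eq. (chirec) for independent channels:
    1 minus the probability of entering none of them."""
    p_none = 1.0
    for p in channel_fractions:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def geometric_fraction(chi_of_direction, n=20000, seed=1):
    """Isotropic average of the per-direction channeling probability;
    a plain Monte Carlo stand-in for the HEALPix equal-area average."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.uniform(-1.0, 1.0)            # uniform in cos(theta)
        az = rng.uniform(0.0, 2.0 * math.pi)  # uniform azimuth
        r = math.sqrt(1.0 - z * z)
        total += chi_of_direction((r * math.cos(az), r * math.sin(az), z))
    return total / n
```

For two channels with probabilities 0.5 each, the independent union gives $1-(0.5)^2=0.75$, which is what `chi_rec([0.5, 0.5])` returns.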
If the values of $c_1$ and $c_2$ found by Hobler [@Hobler] and by us (see Fig. \[Compare-Data\]) to fit measured channeling angles for B and P ions propagating in Si apply also to the propagation of Si ions in Si, then the case of $c_1=c_2=2$ in Fig. \[FracSiG-DiffT-c2\] should be chosen, and the channeling fractions would never be larger than 0.3%. With $c_1=c_2=1$ the channeling fractions reach about 1% and they increase with temperature. Note that we have not considered the possibility of dechanneling of initially channeled ions due to imperfections in the crystal. Any mechanism of dechanneling will decrease the fractions obtained here. Main results and conclusions ============================ We have studied the channeling of ions recoiling after a collision with dark matter particles within Si and Ge crystals. The calculations are similar because both crystals have the same structure. Channeled ions move within the crystal along symmetry axes and planes, suffering a series of small-angle scatterings that maintain them in the open “channels” in between the rows or planes of lattice atoms; they thus penetrate much further into the crystal than in other directions. In order for the scattering to happen at small enough angles, the propagating ion must not approach a row or plane closer than a critical distance $r_c$ or $x_c$, respectively. These are given in Eqs. \[rcritPoly\] and \[ourxcrit\] for a “static lattice” (i.e. a perfect lattice in which all vibrations are neglected) and by Eqs. \[rcofT\] and \[xcofT\] once temperature vibrations of the crystal lattice are taken into account. The temperature corrected minimum distances of approach (in Eqs. \[rcofT\] and \[xcofT\]) depend on the one dimensional rms vibration amplitude $u_1(T)$ (Eq. \[vibu1\]), which increases with the temperature, through the coefficients $c_1$ and $c_2$. These dimensionless coefficients are found in the literature (for different ions and/or crystals) to take values between 1 and 2.
Channeled ions must have trajectories that at large distances from the atomic rows or planes make an angle with respect to these rows or planes smaller than a critical angle, given in Eqs. \[psicritaxial\] and \[psicritplanar\] for axial and planar channels respectively. The critical angles depend on the temperature through the minimum distances of approach: as these increase with increasing temperature, the critical angles decrease, which makes the channeling fraction smaller. However, there is a second temperature effect which makes the channeling fractions larger as the temperature increases: the vibrations of the atom which collides with the dark matter particle. Thus, the channeling fraction of recoiling ions is strongly temperature dependent. Depending on which of the two competing effects is dominant, the channeling fraction may either increase or decrease as the temperature increases. Increasing the temperature of a crystal usually increases the fraction of channeled recoiling ions (see Figs. \[FracSiG-DiffT-rigid\] and \[FracSiG-DiffT-c1\]), but when the values of $c_1$ and $c_2$ are large (i.e. close to 2), so that the critical distances increase rapidly with the temperature, the opposite may happen (see Fig. \[FracSiG-DiffT-c2\]). The vibrations of the atom colliding with the dark matter particle are essential to have a non-zero probability of channeling of the recoiling ion. A nucleus ejected from its lattice site by a collision with a dark matter particle is initially part of a row or plane. Thus, the recoiling nuclei start initially from lattice sites or very close to them. This means that blocking effects are important. In fact, as argued originally by Lindhard [@Lindhard:1965], in a perfect lattice and in the absence of energy-loss processes the probability for a particle starting from a lattice site to be channeled would be zero.
This is what Lindhard called the [*“Rule of Reversibility."*]{} However, vibrations of the atoms in the lattice violate the conditions of this argument and allow some of the recoiling lattice nuclei to be channeled. The channeling fraction $\chi_{\rm axial}$, Eq. \[chiaxial\], or $\chi_{\rm planar}$, Eq. \[chiplanar\], for axial and planar channels respectively, is given by the fraction of nuclei which can be found further than a minimum distance $r_{i,\rm min}$ or $x_{i,\rm min}$ away from a row or plane at the moment of collision. This fraction increases as $u_1(T)$ increases. This is the effect that dominates the temperature dependence in Figs. \[FracSiG-DiffT-rigid\] and \[FracSiG-DiffT-c1\], in which the geometric channeling fractions increase with increasing temperature. However, $r_{i,\rm min}$, Eq. \[rimin\], or $x_{i,\rm min}$, Eq. \[ximin\], increase with increasing critical distances, and this decreases the channeling fraction. The increase of the critical distances with temperature is more accentuated for large values of $c_1$ and $c_2$. This can be seen in Fig. \[FracSiG-DiffT-c2\], in which $c_1=c_2=2$ and some channeling fractions are larger at lower temperatures. The unrealistic case of assuming no vibrations in the lattice (except for vibrations of the colliding atoms), which corresponds to taking $c_1=c_2=0$ (this is what we call the “static lattice” approximation) and is shown in Fig. \[FracSiG-DiffT-rigid\] for different temperatures, provides an upper limit to the channeling fractions. This is the limiting case of the possibility that $c_1$ and $c_2$ are smaller than 1, in which the channeling fractions reach a few % at energies of hundreds of keV and increase with temperature. We show the $c_1=c_2=1$ choice in Fig. \[FracSiG-DiffT-c1\] and the $c_1=c_2=2$ choice in Fig. \[FracSiG-DiffT-c2\]. If the values found by Hobler [@Hobler] and by us (see Fig.
\[Compare-Data\]) to fit the measured channeling angles for B and P ions propagating in a Si crystal apply also to the propagation of Si ions in Si, then the case of $c_1=c_2=2$ should be chosen, and the channeling fractions would never be larger than a few tenths of a percent. In this case, as mentioned above, the channeling fractions at some energies are higher at lower temperatures. The $c_1=c_2=1$ case, instead, leads to maximum channeling fractions of roughly 1%, which increase with increasing temperature. Notice that a small change in the critical distances $r_c(T)$ or $x_c(T)$, and thus in the initial minimum distances of approach $r_{i,\rm min}$ or $x_{i,\rm min}$, is exponentially magnified in the channeling fractions $\chi_{\rm axial}$, Eq. \[chiaxial\], or $\chi_{\rm planar}$, Eq. \[chiplanar\]. This constitutes the most important difficulty in evaluating channeling fractions in the models we use. Notice too that we have not considered any mechanism of dechanneling of the channeled ions (due to irregularities in the crystals, for example), which would decrease the channeling fractions. N.B. and G.G. were supported in part by the US Department of Energy Grant DE-FG03-91ER40662, Task C. P.G. was supported in part by the NSF grant PHY-0756962 at the University of Utah. We would like to thank S. Nussinov and F. Avignone for several important discussions about their work, and J. U. Andersen, D. S. Gemmell, D. V. Morgan, G. Hobler, and Kai Nordlund for exchanges of information. Penetration length of channeled ions ==================================== Fig. \[xMaxSiGe\] shows the maximum distance $x_{\rm max}(E)$ that a channeled ion with initial energy $E$ propagates in a crystal channel, according to the Lindhard-Scharff [@Lindhard-Scharff; @Dearnaley:1973] model of electronic energy loss, for a Si ion channeled in a Si crystal and a Ge ion in a Ge crystal.
This model is valid for small enough energies, $E < (M_1/2)Z_1^{4/3} v_0^2$ (where $v_0={e^2}/{\hbar} = 2.2 \times 10^8$ cm$/$s is the Bohr velocity [@Lindhard:1965], and $M_1$ and $Z_1$ are the mass and atomic number of the propagating ion), which is $E< 24.3$ MeV for a Si ion propagating in a Si crystal and $E< 188.7$ MeV for a Ge ion propagating in a Ge crystal. In this model the energy $E(x)$ as a function of the propagated distance $x$ and the initial energy $E$ is the solution of the following energy loss equation [@Dearnaley:1973] $$-\frac{dE}{dx}=Kv,$$ where $v=\sqrt{2E/M_1}$ is the ion velocity and $K$ is the function $$K=\frac{\xi_e 8 \pi e^2 N a_0 Z_1 Z_2}{ \left(Z_1^\frac{2}{3}+Z_2^\frac{2}{3}\right)^{\frac{3}{2}}v_0}.$$ Here $\xi_e$ is a dimensionless constant of the order of $Z_1^{\frac{1}{6}}$ [@Dearnaley:1973], $N$ is the number of atomic centers per unit volume, and $a_0\simeq 0.53$ [Å]{} is the Bohr radius of the hydrogen atom. Explicitly, an ion with initial energy $E$ at $x=0$ has energy $$E(x)=E\left(1-\frac{x}{x_{\rm max}}\right)^2 \label{Ex}$$ after traveling a distance $x$. The range of the propagating ion is $$x_{\rm max}(E)=\frac{\sqrt{2M_1E}}{K}. \label{xmax}$$ Fig. \[xMaxSiGe\] shows that even at energies of a few keV a channeled ion interacts with hundreds of lattice atoms. The characteristic interatomic distance along the channels is the lattice constant, i.e. approximately 0.5 nm for Si and Ge crystals (see Appendix B). Crystal structure of Si and Ge ============================== Silicon (Si) and Germanium (Ge) crystals have a diamond cubic type lattice structure which consists of two interpenetrating face centered cubic (f.c.c.) lattices, displaced along the body diagonal of the cubic cell by one quarter of the length of the diagonal. The unit cell, shown in Fig. \[UnitCell\], has 8 atoms. The lattice constant, the side of the cube in Fig. \[UnitCell\], is $a_{\rm lat}=0.5431$ nm for Si and 0.5657 nm for Ge (from Table 3.4 of Ref.
[@Appleton-Foti:1977]). The atomic mass and atomic number of Si and Ge are $M_{Si}=28.09$ amu, $M_{Ge}=72.59$ amu, $Z_{Si}=14$ and $Z_{Ge}=32$. The Thomas-Fermi screening distances for two Si atoms and two Ge atoms are $a_{SiSi}=0.4685 {\text {\AA} } (Z_{Si}^{1/2} + Z_{Si}^{1/2})^{-2/3} =0.01225$ nm and $a_{GeGe}=0.4685 {\text {\AA} } (Z_{Ge}^{1/2} + Z_{Ge}^{1/2})^{-2/3} =0.009296$ nm respectively. Once an origin of the coordinate system is fixed on a lattice point $O$, any position vector of a point on the crystal lattice can be written as $\textbf{R}=n_1\textbf{a}+n_2\textbf{b}+n_3\textbf{c}$ with $n_1$, $n_2$, and $n_3$ specific integer numbers. The vectors $\textbf{a}$, $\textbf{b}$, and $\textbf{c}$ are the basis vectors of the crystal lattice, and are three noncoplanar vectors joining the lattice point $O$ to its near neighbors. For Si and Ge, the three vectors $\textbf{a}$, $\textbf{b}$, $\textbf{c}$ form a Cartesian frame and their length is $a_{\rm lat}/4$. The integers $n_1$, $n_2$, and $n_3$ can be positive, negative, or zero. The direction of a crystal axis pointing in the direction $\textbf{R}$ is specified by the triplet $[n_1 n_2 n_3]$ written in square brackets, when $n_1$, $n_2$, and $n_3$ are positive or zero. Note that if there is a common factor in the numbers $n_1$, $n_2$, $n_3$, this factor is removed. Moreover, negative integers are denoted with a bar over the number, e.g. $-1$ is denoted as $\bar{1}$ and the $-y$ axis is $[0\bar{1}0]$ direction. The plane perpendicular to the $[n_1 n_2 n_3]$ axis is denoted by $(n_1 n_2 n_3)$. For example, the plane perpendicular to the $[100]$ axis is denoted by $(100)$, and that perpendicular to $[101]$ by $(101)$. The integers $n_1$, $n_2$, and $n_3$ are called Miller indices. In a cubic crystal, because of the symmetry of the unit cell, the directions $[100]$, $[010]$, $[001]$, $[\bar{1}00]$, $[0\bar{1}0]$ and $[00\bar{1}]$ are equivalent. 
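The screening distances quoted above, and the Lindhard-Scharff range of the previous appendix (Eq. \[xmax\]), can be checked numerically. The CGS-style constants below ($e^2 = 14.4$ eV Å, the Bohr radius and velocity, the amu in eV) are standard values supplied by us, not taken from the text:

```python
import math

def a_TF(Z1, Z2):
    """Thomas-Fermi screening distance in nm, as used in the text:
    0.4685 A * (sqrt(Z1) + sqrt(Z2))^(-2/3)."""
    return 0.04685 * (math.sqrt(Z1) + math.sqrt(Z2)) ** (-2.0 / 3.0)

# Standard constants (ours), CGS-style with energies in eV:
E2 = 14.4e-8              # e^2 in eV*cm  (14.4 eV*angstrom)
A0 = 0.529e-8             # Bohr radius in cm
V0 = 2.2e8                # Bohr velocity in cm/s
AMU = 931.494e6 / 9e20    # 1 amu in eV s^2/cm^2  (931.494 MeV / c^2)

def x_max(E_eV, M1_amu, Z1, Z2, N_cm3):
    """Lindhard-Scharff range, Eq. (xmax), in cm, taking xi_e ~ Z1^(1/6)."""
    xi_e = Z1 ** (1.0 / 6.0)
    K = (xi_e * 8.0 * math.pi * E2 * N_cm3 * A0 * Z1 * Z2
         / ((Z1 ** (2.0 / 3.0) + Z2 ** (2.0 / 3.0)) ** 1.5 * V0))
    return math.sqrt(2.0 * M1_amu * AMU * E_eV) / K

N_Si = 8.0 / 0.5431e-7 ** 3   # atoms/cm^3: 8 atoms per diamond unit cell
# Even a 3 keV Si recoil traverses a few hundred lattice spacings:
print(x_max(3e3, 28.09, 14, 14, N_Si) / 0.5431e-7)
```

With these inputs `a_TF` reproduces the 0.01225 nm (Si-Si) and 0.009296 nm (Ge-Ge) values above, and the printed ratio is of order a few hundred, consistent with the statement that a few-keV channeled ion interacts with hundreds of lattice atoms.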
All directions equivalent to the $[n_1 n_2 n_3]$ direction are denoted by $<$$n_1n_2n_3$$>$ in angular brackets. For example, $<$100$>$ indicates all the six directions mentioned. Similarly, $<$211$>$ and $<$311$>$ indicate twelve different directions each. When the unit cell has cubic symmetry, we can indicate all planes that are equivalent to the plane $(hkl)$ by curly brackets $\{hkl\}$. For example, the indices {100} refer to the six planes $(100)$, $(010)$, $(001)$, $(\bar{1}00)$, $(0\bar{1}0)$, and $(00\bar{1})$. The negative sign over a number denotes that the plane cuts the axis on the negative side of the origin. Similarly, {210} and {310} each indicate twelve different planes. Counting all the axes and planes we mentioned above, the total is 74. The interatomic spacing $d$ in atomic rows and the interplanar spacing $d_{pch}$ (“pch” stands for “planar channel”) of atomic planes of monatomic diamond crystals are obtained by multiplying the respective lattice constant by the following coefficients [@Gemmell:1974ub]: - Rows: $<100>: 1$, $<110>: 1/\sqrt{2}$, $<111>: 3\sqrt{3}/4$, $<211>: \sqrt{6}/2$, $<311>: 3 \sqrt{11}/4$ - Planes: $\{100\}: 1/4$, $\{110\}: 1/2\sqrt{2}$, $\{111\}: \sqrt{3}/4$, $\{210\}: 1/(4 \sqrt{5})$, $\{310\}: 1/(2 \sqrt{10})$ The Debye temperatures for Si and Ge are $\Theta=490$ K and $\Theta=290$ K, respectively  [@Gemmell:1974ub; @Hobler]. Probability of correlated channels ================================== In this paper, as we did in Ref. [@BGG-I], we treat channeling along different channels as independent events when computing the probability $\chi_{\rm rec}(E,\hat{\bf q})$ in Eq. \[chirec\] that an ion with initial energy $E$ and direction $\hat{\bf q}$ enters any of the available channels. Available channels are those whose axis or plane, respectively, forms an angle with the direction $\hat{\bf q}$ smaller than the critical channeling angle for the particular channel. In Appendix D of Ref.
[@BGG-I] we showed that we can obtain an upper limit to the channeling probability of overlapping channels by replacing the intersection of the complements of the integration regions in Eqs. \[chiaxial\] and \[chiplanar\] with the inscribed cylinder of radius $r_{\rm MIN}$ equal to the minimum of the $r_{i, {\rm min}}$ or $x_{i, {\rm min}}$ among the overlapping channels. We find that the two methods give practically indistinguishable results for Si and Ge (as we did in Ref. [@BGG-I] for NaI), as clearly shown in Fig. \[FracSiGe-Max\] for some particular examples. Thus, the method we use is adequate for our purpose of providing upper bounds to the channeling fractions. Fig. \[OneChannel-SiGe\] shows the channeling fractions of Si ions propagating in a Si crystal and Ge ions propagating in a Ge crystal for individual channels with $c_1=c_2=1$ and T$=293$ K. The black and green (or gray) lines correspond to single axial and planar channels respectively. Fig. \[OneChannel-SiGe\] shows that at low $E$ channeling is dominated by axial channels which do not overlap, so treating them as independent is strictly correct. However, at the transition energy of 1 to 10 MeV at which axial and planar channels are both equally important, and at higher energies at which planar channels dominate, the overlap of one axial and two or more planar channels, or the overlap of two or more planar channels among themselves, makes the channeling along them not necessarily uncorrelated. Still we find that considering channeling along different channels as independent is a good approximation if we are interested in providing upper bounds to the channeling fractions. [99]{} J. Lindhard, Kongel. Dan. Vidensk. Selsk., Mat.-Fys. Medd.  [**34**]{} No. 14 (1965). J. Lindhard and M. Scharff, Phys. Rev. [**124**]{} 128 (1961). Chapter 2 of “Ion Implantation", by Geoffrey Dearnaley, Amsterdam, North-Holland Pub. Co.; New York, American Elsevier, 1973. E. M. Drobyshevski, Mod. Phys. Lett.  
A [**23**]{} 3077 (2008) \[arXiv:0706.3095 \[physics.ins-det\]\]. R. Bernabei [*et al.*]{}, Eur. Phys. J.  C [**53**]{}, 205 (2008) \[arXiv:0710.0288 \[astro-ph\]\]. F. T. Avignone, R. J. Creswick and S. Nussinov, arXiv:0807.3758 \[hep-ph\]; R. J. Creswick, S. Nussinov and F. T. Avignone, arXiv:1007.0214v2 \[astro-ph.IM\]. Z. Ahmed [*et al.*]{} (CDMS Collaboration), Phys. Rev. Lett. [**102**]{}, 011301 (2009); Z. Ahmed [*et al.*]{} (CDMS Collaboration), arXiv:0912.3592v1 \[astro-ph.CO\]. C. E. Aalseth [*et al.*]{} (CoGeNT Collaboration), arXiv:1002.4703v2 \[astro-ph.CO\]. E. Armengaud (for the Edelweiss Collaboration), Phys. Lett. B [**687**]{}, 294 (2010) \[arXiv:0912.0805v1 \[astro-ph.CO\]\]. M. Deniz [*et al.*]{} (TEXONO Collaboration), Phys. Rev. D [**82**]{}, 033004 (2010) \[arXiv:1006.1947 \[hep-ph\]\]. H. Kraus [*et al.*]{}, Nucl. Phys. B - Proceedings Supplements. [**173**]{} 168-171 (2007) H.V. Klapdor-Kleingrothaus, Int. J. Mod. Phys. A [**17**]{} 3421-3431 (2002). C. E. Aalseth [*et al.*]{}, Phys. Rev. D [**65**]{}, 092007 (2002) \[arXiv:hep-ex/0202026\]. N. Bozorgnia, G. B. Gelmini and P. Gondolo, arXiv:1006.3110 \[astro-ph.CO\]. D. S. Gemmell, Rev. Mod. Phys.  [**46**]{} 129 (1974). J. U. Andersen, Kongel. Dan. Vidensk. Selsk., Mat.-Fys. Medd.  [**36**]{} No. 7 (1967). D. V. Morgan and D. Van Vliet, Can. J. Phys. [**46**]{}, 503 (1963); D. V. Morgan and D. Van Vliet, Radiat. Effects and Defects in Solids, [**8**]{} 51 (1971). D. van Vliet in “Channeling", ed. by D. V. Morgan (Wiley, London), 37 (1973). J.U. Andersen and L.C. Feldman, Phys. Rev. B [**1**]{}, 2063 (1970). K. Komaki and F. Fujimoto, Phys. Stat. Sol. (a) [**2**]{} 875 (1970). B. R. Appleton and G. Foti, “Channeling" in [*Ion Beam Handbook for Material Analysis*]{}, edited by J. W. Mayer and E. Rimini (Academic, New York), p. 67 (1977). G. Hobler, Radiation effects and defects in solids [**139**]{} 21 (1996); G. Hobler, Nucl. Instrum. Methods Phys. Research (NIM)[**B 115**]{} 323 (1996). J. 
H. Barrett, Phys. Rev.  B [**3**]{} 1527 (1971). K. Cho [*et al*]{}, Nucl. Instrum. Meth. [**B**]{}7/8 265 (1985). L. Rubin and J. Poate, The Industrial Physicist, June/July 2003, p. 12-15. MDRANGE, http://beam.acclab.helsinki.fi/knordlun/mdh/mdh\_program.html; SRIM, http://www.srim.org/; TRIM, J. F. Ziegler, “Ion Implantation Technology”, Ion Implantation Technology Co. (1996); MARLOWE and UT-MARLOWE, Y. Chen [*et al.*]{}, IEEE Trans. Electron Devices, vol. 49, no. 9, 1519 (2002); Crystal-TRIM, http://www.fzd.de/pls/rois/; REED, K. M. Beardmore and N. Gronbech-Jensen, Phys. Rev. B [**60**]{} 12610 (1999); SARIC, V. Bykov [*et al.*]{}, Nucl. Instrum. Methods Phys. Research (NIM) [**B 114**]{} 371 (1996); MDRANGE, K. Nordlund, Comput. Mater. Sci. [**3**]{}, 448 (1995). G. N. Watson, “Theory of Bessel Functions”, Cambridge U. P., Cambridge, England (1958). V. V. Rozhkov and S. V. Dyul'dya, Pis'ma Zh. Tekh. Fiz. [**10**]{}, 1181 (1984) \[Sov. Tech. Phys. Lett. [**10**]{}, 499 (1984)\]. K. M. Górski [*et al.*]{}, ApJ [**622**]{} 759 (2005). N. Bozorgnia, G. Gelmini and P. Gondolo, “Channeling in direct dark matter detection IV: daily modulation of the WIMP signal", in preparation.
--- abstract: | The investigation of transverse spin and transverse momentum effects in deep inelastic scattering is one of the key physics programs of the COMPASS collaboration. Three channels have been analyzed at COMPASS to access the transversity distribution function: The azimuthal distribution of single hadrons, involving the Collins fragmentation function, the azimuthal dependence of the plane containing hadron pairs, involving the two-hadron interference fragmentation function, and the measurement of the transverse polarization of $\Lambda$ hyperons in the final state. Azimuthal asymmetries in unpolarized semi-inclusive deep-inelastic scattering give important information on the inner structure of the nucleon as well, and can be used to estimate both the quark transverse momentum $k_T$ in an unpolarized nucleon and to access the so-far unmeasured Boer-Mulders function. COMPASS has measured these asymmetries using spin-averaged $^6LiD$ data. address: | Physikalisches Institut der Albert-Ludwigs-Universität Freiburg\ Hermann-Herder-Str. 3, 79104 Freiburg, Germany author: - | Christian Schill\ on behalf of the COMPASS collaboration title: Transverse Spin Physics at COMPASS --- polarized deep-inelastic scattering ,transversity , azimuthal asymmetries ,structure functions. Introduction ============ Most of our knowledge of the inner structure of the nucleon is encoded in parton distribution functions. They are used to describe hard scattering processes involving nucleons. While a lot of understanding has been achieved on the longitudinal structure of a fast moving nucleon, very little is known about its transverse structure [@Anselmino0]. Recent data on single spin asymmetries in semi-inclusive deep-inelastic scattering (SIDIS) off transversely polarized nucleon targets [@COMPASS; @HERMES] triggered a lot of interest towards the transverse momentum dependent and spin dependent distribution and fragmentation functions [@Bacchetta]. 
The SIDIS cross-section in the one-photon exchange approximation contains eight transverse-momentum dependent distribution functions [@Bacchetta3]. Some of these can be extracted in SIDIS by measuring the azimuthal distribution of the hadrons in the final state [@Aram]. Three distribution functions survive upon integration over the transverse momenta: these are the quark momentum distribution $q(x)$, the helicity distribution $\Delta q(x)$, and the transversity distribution $\Delta_T q(x)$ [@Collins]. The latter is defined as the difference between the number densities of quarks with momentum fraction $x$ with their transverse spin parallel and anti-parallel to the nucleon spin [@Artru]. To access transversity in SIDIS, one has to measure the quark polarization, i.e. use so-called ‘quark polarimetry’. Several techniques are used at COMPASS: a measurement of the single-spin asymmetries (SSA) in the azimuthal distribution of the final state hadrons (the Collins asymmetry), a measurement of the SSA in the azimuthal distribution of the plane containing final state hadron pairs (the two-hadron asymmetry), and a measurement of the polarization of final state hyperons ($\Lambda$-polarimetry). In these proceedings, I will focus on new results for the two-hadron asymmetry, while results for the other channels are shown elsewhere [@COMPASS2]. The chiral-odd Boer-Mulders function is of special interest among the transverse-momentum dependent distribution functions [@Boer]. It describes the transverse parton polarization in an unpolarized hadron. The Boer-Mulders function generates azimuthal asymmetries in unpolarized SIDIS, together with the so-called Cahn effect [@Cahn], which arises from the fact that the kinematics is non-collinear when $k_T$ is taken into account.
The COMPASS experiment ====================== COMPASS is a fixed target experiment at the CERN SPS accelerator with a wide physics program focused on the nucleon spin structure and on hadron spectroscopy. COMPASS investigates transversity and the transverse momentum structure of the nucleon in semi-inclusive deep-inelastic scattering. A $160$ GeV muon beam is scattered off a transversely polarized $NH_3$ or $^6LiD$ target. The scattered muon and the produced hadrons are detected in a wide-acceptance two-stage spectrometer with excellent particle identification capabilities [@Experiment]. The data with a transversely polarized $NH_3$ target shown here were taken in the $2007$ run. Two-hadron asymmetry ==================== The chiral-odd transversity distribution $\Delta_T q(x)$ can be measured in combination with the chiral-odd polarized two-hadron interference fragmentation function $H^{\sphericalangle}_1 (z,M^2_{inv})$ in SIDIS. $M_{inv}$ is the invariant mass of the $h^+h^-$ pair. The fragmentation of a transversely polarized quark into two unpolarized hadrons leads to an azimuthal modulation in $\Phi_{RS} = \phi_R + \phi_s - \pi$ in the SIDIS cross section. Here $\phi_R$ is the azimuthal angle between $\vec R_T$ and the lepton scattering plane and $\vec R_T$ is the transverse component of $\vec R$ defined as: $$\vec R = (z_2\cdot \vec p_1 - z_1 \cdot \vec p_2)/(z_1+z_2).$$ $\vec p_1$ and $\vec p_2$ are the momenta in the laboratory frame of $h^+$ and $h^-$ respectively. This definition of $\vec R_T$ is invariant under boosts along the virtual photon direction. The number of produced oppositely charged hadron pairs $N_{h^+h^-}$ can be written as: $$N_{h^+h^-} =N_0 \cdot ( 1 + f \cdot P_t \cdot D_{NN} \cdot A_{RS} \cdot \sin \Phi_{RS} \cdot \sin \theta).$$ Here, $\theta$ is the angle between the momentum vector of $h^+$ in the center of mass frame of the $h^+h^-$-pair and the momentum vector of the two hadron system [@Bacchetta]. 
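For illustration, the definition of $\vec R$ and its boost-invariant transverse component can be written down directly. This is a toy sketch with made-up momenta, not COMPASS analysis code; the function names are ours:

```python
import numpy as np

def relative_momentum(p1, z1, p2, z2):
    """R = (z2*p1 - z1*p2) / (z1 + z2) for the lab momenta of h+ and h-."""
    return (z2 * np.asarray(p1, float) - z1 * np.asarray(p2, float)) / (z1 + z2)

def transverse_component(v, q_dir):
    """Component of v transverse to the virtual-photon direction q_dir."""
    n = np.asarray(q_dir, float) / np.linalg.norm(q_dir)
    return v - np.dot(v, n) * n

# Toy example: photon along z, momenta in GeV (made-up values).
R = relative_momentum([1.0, 0.2, 8.0], 0.3, [-0.5, 0.1, 5.0], 0.2)
R_T = transverse_component(R, [0.0, 0.0, 1.0])
```

Since $\vec R_T$ lies in the plane transverse to the photon axis, the azimuthal angle $\phi_R$ entering $\Phi_{RS}$ can then be read off from its components with respect to the lepton scattering plane.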
The measured amplitude $A_{RS}$ is proportional to the product of the transversity distribution and the polarized two-hadron interference fragmentation function $$A_{RS} \propto \frac {\sum_q e_q^2 \cdot \Delta_T q(x) \cdot H^{\sphericalangle}_1(z,M^2_{inv})} {\sum_q e_q^2 \cdot q(x) \cdot D_q^{2h}(z,M^2_{inv})}.$$ $D_q^{2h}(z,M^2_{inv})$ is the unpolarized two-hadron fragmentation function. The polarized two-hadron interference fragmentation function $H^{\sphericalangle}_1$ can be expanded in the relative partial waves of the hadron pair system, which up to the p-wave level gives [@Bacchetta]: $$H^{\sphericalangle}_1 = H^{\sphericalangle,sp}_1 + \cos \theta \cdot H^{\sphericalangle,pp}_1,$$ where $H^{\sphericalangle,sp}_1$ is given by the interference of $s$ and $p$ waves, whereas the function $H^{\sphericalangle,pp}_1$ originates from the interference of two $p$ waves with different polarization. For this analysis the results are obtained by integrating over $\theta$. The $\sin \theta$ distribution is strongly peaked at one and the $\cos \theta$ distribution is symmetric around zero. Both the interference fragmentation function $H_1^\sphericalangle(z,M_{inv}^2)$ and the corresponding spin-averaged fragmentation function into two hadrons $D_q^{2h}(z, M_{inv}^2)$ are unknown, and need to be measured in $e^+e^-$ annihilation or evaluated using models [@Bacchetta; @Jaffe; @Bianconi; @Radici]. Results for hadron pairs ======================== ![Two-hadron asymmetry $A_{RS}$ as a function of $x$, $z$ and $M_{inv}$, compared to predictions of [@Ma]. The lower bands indicate the systematic uncertainty of the measurement.[]{data-label="pic:results"}](schill.fig1.eps){width="1.1\columnwidth"} The two-hadron asymmetry as a function of $x$, $z$ and $M_{inv}$ is shown in Fig. \[pic:results\].
A strong asymmetry in the valence $x$-region is observed, which implies a non-zero transversity distribution and a non-zero polarized two-hadron interference fragmentation function $H^{\sphericalangle}_1$. In the invariant-mass binning one observes a strong signal around the $\rho^0$ mass, and the asymmetry is negative over the whole mass range. The lines are calculations from Ma [*et al.*]{}, based on an SU(6) and a pQCD model for transversity [@Ma]. The calculations can describe the magnitude and the $x$-dependence of the measured asymmetry, while there are discrepancies in the $M_{inv}$-behavior. Azimuthal asymmetries in DIS off an unpolarized target ====================================================== The cross-section for hadron production in lepton-nucleon DIS $\ell N \rightarrow \ell' h X$ for unpolarized targets and an unpolarized or longitudinally polarized beam has the following form [@bacchetta2]: $$\begin{array}{lcr}\displaystyle \frac{d\sigma}{dx dy dz d\phi_h dp^2_{h,T}} = \frac{\alpha^2}{xyQ^2} \frac{1+(1-y)^2}{2} \cdot\\[2ex] \displaystyle [ F_{UU,T} + F_{UU,L} + \varepsilon_1 \cos \phi_h F^{\cos \phi_h}_{UU} \\[2ex] + \varepsilon_2 \cos(2\phi_h) F^{\cos\; 2\phi_h}_{UU} + \lambda_\mu \varepsilon_3 \sin \phi_h F^{\sin \phi_h}_{LU} ] \end{array}$$ where $\alpha$ is the fine structure constant. $F_{UU,T}$, $F_{UU,L}$, $F^{\cos \phi_h}_{UU}$, $F^{\cos\; 2\phi_h}_{UU}$ and $F^{\sin \phi_h}_{LU}$ are structure functions. Their first and second subscripts indicate the beam and target polarization, respectively, and the last subscript denotes, if present, the polarization of the virtual photon. $\lambda_\mu$ is the longitudinal beam polarization and: $$\begin{array}{rcl} \varepsilon_1 & = & \displaystyle\frac{2(2-y)\sqrt{1-y}}{1+(1-y)^2} \\[2ex] \varepsilon_2 & = & \displaystyle\frac{2(1-y)}{1+(1-y)^2} \\[2ex] \varepsilon_3 & = & \displaystyle\frac{2 y \sqrt{1-y}}{1+(1-y)^2} \end{array}$$ are depolarization factors.
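The depolarization factors above, and the extraction of the three azimuthal moments from a binned $\phi_h$ distribution, can be sketched in a few lines. This is a minimal illustration with synthetic counts, not the COMPASS fitting code, and the bin layout is an assumption of the example:

```python
import numpy as np

def depolarization_factors(y):
    """eps_1, eps_2, eps_3 of the cross section above."""
    denom = 1.0 + (1.0 - y) ** 2
    root = np.sqrt(1.0 - y)
    return (2.0 * (2.0 - y) * root / denom,
            2.0 * (1.0 - y) / denom,
            2.0 * y * root / denom)

def fit_azimuthal_moments(counts, phi):
    """Linear least-squares fit of N0*(1 + A1 cos(phi) + A2 cos(2 phi) + A3 sin(phi))."""
    basis = np.column_stack([np.ones_like(phi), np.cos(phi),
                             np.cos(2.0 * phi), np.sin(phi)])
    c, *_ = np.linalg.lstsq(basis, counts, rcond=None)
    return c[0], c[1] / c[0], c[2] / c[0], c[3] / c[0]

# Closure test on a synthetic histogram with known input moments:
phi = np.linspace(-np.pi, np.pi, 16, endpoint=False) + np.pi / 16
counts = 1000.0 * (1.0 + 0.05 * np.cos(phi) - 0.03 * np.cos(2.0 * phi))
n0, a1, a2, a3 = fit_azimuthal_moments(counts, phi)
```

With equally spaced bin centers the harmonic basis functions are orthogonal, so the least-squares fit recovers the input moments exactly.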
The Boer-Mulders parton distribution function contributes to the $\cos \phi_h$ and the $\cos 2\phi_h$ moments as well, together with the Cahn effect [@Cahn], which arises from the fact that the kinematics is non-collinear when $k_\perp$ is taken into account, and with perturbative gluon radiation, which results in order-$\alpha_s$ QCD processes. pQCD effects become important for high transverse momenta $p_T$ of the produced hadrons. Analysis of unpolarized asymmetries =================================== To obtain an unpolarized data sample, data taken with a longitudinally polarized or a transversely polarized $^6LiD$ target in the year 2004 have both been spin-averaged. In the measurement of unpolarized asymmetries a Monte Carlo simulation is used to correct for acceptance effects of the detector. The SIDIS event generation is performed by the LEPTO generator [@lepto]; the experimental setup and the particle interactions in the detectors are simulated by the COMPASS Monte Carlo simulation program COMGEANT. The acceptance of the detector as a function of the azimuthal angle $A(\phi_h)$ is then calculated as the ratio of reconstructed over generated events for each bin of $x$, $z$ and $p_T$ in which the asymmetries are measured. The measured distribution, corrected for acceptance, is fitted with the following functional form: $$\begin{array}{lll} N(\phi_h) &=&N_0 \left( 1 + A^D_{\cos \phi} \cos \phi_h + \right. \\&& \left. A^D_{\cos 2\phi} \cos 2\phi_h + A^D_{\sin \phi} \sin \phi_h \right) \end{array}$$ The contribution of the acceptance corrections to the systematic error was studied in detail. Results for unpolarized asymmetries =================================== ![$\cos \phi_h$ asymmetries from COMPASS deuteron data for positive (upper row) and negative (lower row) hadrons; the asymmetries are divided by the kinematic factor $\varepsilon_1$ and the bands indicate the size of the systematic uncertainty.
The superimposed curves are the values predicted by [@anselmino2] taking into account the Cahn effect only. []{data-label="f:cosphi"}](schill.fig2a.eps "fig:"){width="1.1\columnwidth"} ![$\cos \phi_h$ asymmetries from COMPASS deuteron data for positive (upper row) and negative (lower row) hadrons; the asymmetries are divided by the kinematic factor $\varepsilon_1$ and the bands indicate the size of the systematic uncertainty. The superimposed curves are the values predicted by [@anselmino2] taking into account the Cahn effect only. []{data-label="f:cosphi"}](schill.fig2b.eps "fig:"){width="1.1\columnwidth"} The $\sin \phi_h$ asymmetries measured by COMPASS, not shown here, are compatible with zero, at the present level of statistical and systematic errors, over the full range of $x$, $z$ and $p_T$ covered by the data. The $\cos \phi_h$ asymmetries extracted from COMPASS deuteron data are shown in Fig. \[f:cosphi\] for positive (upper row) and negative (lower row) hadrons, as a function of $x$, $z$ and $p_T$. The bands indicate the size of the systematic error. The asymmetries show the same trend for positive and negative hadrons with slightly larger absolute values for positive hadrons. Values as large as 30$-$40% are reached in the last point of the $z$ range. The theoretical prediction [@anselmino2] in Fig. \[f:cosphi\] takes into account the Cahn effect only, which does not depend on the hadron charge. The Boer-Mulders parton distribution function is not considered in this prediction. ![$\cos 2 \phi_h$ asymmetries from COMPASS deuteron data for positive (upper row) and negative (lower row) hadrons; the asymmetries are divided by the kinematic factor $\varepsilon_1$ and the bands indicate the size of the systematic error. []{data-label="f:cos2phi"}](schill.fig3.eps){width="1.06\columnwidth"} The $\cos 2 \phi_h$ asymmetries are shown in Fig.
\[f:cos2phi\] together with the theoretical predictions of [@barone], which take into account the kinematic contribution given by the Cahn effect, first-order pQCD (which, as expected, is negligible in the low-$p_T$ region), and the Boer-Mulders parton distribution function (coupled to the Collins fragmentation function), which gives a different contribution to positive and negative hadrons. In [@barone], the Boer-Mulders parton distribution function is assumed to be proportional to the Sivers function as extracted from preliminary HERMES data. The COMPASS data show different amplitudes for positive and negative hadrons, a trend that confirms the theoretical predictions. There is a satisfactory agreement between the data points and the model calculations, which hints at a non-zero Boer-Mulders parton distribution function. Summary and Outlook =================== New preliminary results for the two-hadron azimuthal asymmetry at COMPASS in semi-inclusive deep-inelastic scattering off a transversely polarized proton target have been presented. For $x>0.05$, an asymmetry different from zero and increasing with Bjorken $x$ has been observed. The measured unpolarized azimuthal asymmetries on a deuteron target show large $\cos\phi_h$ and $\cos 2\phi_h$ moments that can be qualitatively described by model calculations taking into account the Cahn effect, the intrinsic $k_T$ of the quarks in the nucleon, and the Boer-Mulders structure function. With a full year of transverse-target running in 2010, COMPASS will significantly increase its statistical precision in all measurements of transverse-spin-dependent asymmetries. Acknowledgments {#acknowledgments .unnumbered} =============== This work has been supported by the German BMBF. [999]{} M. Anselmino [*et al.*]{}, Phys. Rev. [**D74**]{} (2006), 074015. V.Yu. Alexakhin [*et al.*]{} \[COMPASS collaboration\], Phys. Rev. Lett. [**94**]{}, 202002 (2005), and E.S. Ageev [*et al.*]{} \[COMPASS collaboration\], Nucl.
Phys. [**B765**]{}, 31 (2007). A. Airapetian [*et al.*]{} \[HERMES Collaboration\], Phys. Rev. Lett. [**94**]{}, 012002 (2005). A. Bacchetta and M. Radici, Phys. Rev. [**D67**]{}, 094002 (2003), Phys. Rev. [**D69**]{}, 074026 (2004) and Phys. Rev. [**D74**]{}, 114007 (2006). A. Bacchetta, hep-ph/0612196, AIP Conf. Proc. [**915**]{} (2007), 517-520. M. Anselmino [*et al.*]{}, Phys. Rev. [**D71**]{} (2005) 074006. J.C. Collins [*et al.*]{}, Nucl. Phys. [**B420**]{}, 565 (1994). X. Artru and J.C. Collins, Z. Phys. [**C69**]{}, 277 (1996). M.G. Alekseev [*et al.*]{} \[COMPASS collaboration\], Phys. Lett. [**B692**]{} (2010), 240, and M. Alekseev [*et al.*]{} \[COMPASS collaboration\], Eur. Phys. J. [**C64**]{} (2009), 171. D. Boer and P.J. Mulders, Phys. Rev. [**D57**]{} (1998) 5780. R.N. Cahn, Phys. Lett. [**B78**]{} (1978), 269. P. Abbon [*et al.*]{} \[COMPASS collaboration\], Nucl. Instrum. Meth. [**A577**]{}, 455-518 (2007). R.L. Jaffe [*et al.*]{}, Phys. Rev. Lett. [**80**]{}, 1166 (1998). A. Bianconi, S. Boffi, R. Jakob and M. Radici, Phys. Rev. [**D62**]{}, 034008 (2000). M. Radici, R. Jakob and A. Bianconi, Phys. Rev. [**D65**]{}, 074031 (2002). B.-Q. Ma [*et al.*]{}, Phys. Rev. [**D77**]{} (2008), 014035. A. Bacchetta [*et al.*]{}, JHEP [**0702**]{} (2007) 093. G. Ingelman, A. Edin and J. Rathsman, Comput. Phys. Commun. [**101**]{} (1997) 108. M. Anselmino [*et al.*]{}, Eur. Phys. J. [**A31**]{} (2007) 373. V. Barone, A. Prokudin and B.-Q. Ma, Phys. Rev. [**D78**]{} (2008) 045022.
--- abstract: 'Tensile creep tests, tensile relaxation tests and a tensile test with a constant rate of strain are performed on injection-molded isotactic polypropylene at room temperature in the vicinity of the yield point. A constitutive model is derived for the time-dependent behavior of semi-crystalline polymers. A polymer is treated as an equivalent network of chains bridged by permanent junctions. The network is modelled as an ensemble of passive meso-regions (with affine nodes) and active meso-domains (where junctions slip with respect to their positions in the bulk medium with various rates). The distribution of activation energies for sliding in active meso-regions is described by a random energy model. Adjustable parameters in the stress–strain relations are found by fitting experimental data. It is demonstrated that the concentration of active meso-domains monotonically grows with strain, whereas the average potential energy for sliding of junctions and the standard deviation of activation energies suffer substantial drops at the yield point. With reference to the concept of dual population of crystalline lamellae, these changes in material parameters are attributed to transition from breakage of subsidiary (thin) lamellae in the sub-yield region to fragmentation of primary (thick) lamellae in the post-yield region of deformation.' author: - | Aleksey D. Drozdov and Jesper deClaville Christiansen\ Department of Production\ Aalborg University\ Fibigerstraede 16\ DK–9220 Aalborg, Denmark title: 'The nonlinear time-dependent response of isotactic polypropylene' --- Introduction ============ This paper is concerned with the experimental study and modelling of the time-dependent behavior of isotactic polypropylene (iPP) at strains up to 20 % in uniaxial tensile tests, creep tests and relaxation tests at room temperature. 
Isotactic polypropylene is chosen for the investigation because of its numerous applications in industry (oriented films for packaging, reinforcing fibres, nonwoven fabrics, blends with thermoplastic elastomers, etc.). The nonlinear viscoelastic response of polypropylene was studied by Ward and Wolfe (1966) and Smart and Williams (1972) three decades ago, and, in recent years, by Wortmann and Schulz (1994, 1995), Ariyama (1996), Ariyama et al. (1997), Dutta and Edward (1997), Read and Tomlins (1997), Tomlins and Read (1998), and Sweeney et al. (1999). Viscoplasticity and yielding of iPP have been investigated in the past five years by Kalay and Bevis (1997), Coulon et al. (1998), Seguela et al. (1999), Staniek et al. (1999), Nitta and Takayanagi (1999, 2000) and Labour et al. (2001), to mention a few. Dynamic mechanical analysis reveals that the loss tangent of iPP demonstrates two pronounced maxima when plotted versus temperature (Andreassen, 1999; Seguela et al., 1999; Lopez-Manchado and Arroyo, 2000). The first maximum ($\beta$–transition in the interval between $T=-20$ and $T=10$ $^{\circ}$C) is associated with the glass transition in a mobile part of the amorphous phase, whereas the other maximum ($\alpha$–transition in the interval between $T=70$ and $T=110$ $^{\circ}$C) is attributed to the glass transition in the remaining part of the amorphous phase, the so-called “rigid amorphous fraction” (Verma et al., 1996). This conclusion is confirmed by DSC (differential scanning calorimetry) traces of quenched polypropylene, which show (in the heating mode) an endotherm near $T=70$ $^{\circ}$C ascribed to thermal activation of amorphous domains with restricted mobility (Seguela et al., 1999). Isotactic polypropylene is a semi-crystalline polymer containing three different crystallographic forms: monoclinic $\alpha$ crystallites, hexagonal $\beta$ structures, orthorhombic $\gamma$ polymorphs, together with “smectic” mesophase (Iijima and Strobl, 2000).
Upon rapid cooling of the melt (at the stage of injection molding), $\alpha$ crystallites and smectic mesophase are mainly developed, whereas $\beta$ and $\gamma$ polymorphs are observed as minority components (Kalay and Bevis, 1997; Al-Raheil et al., 1998) that disappear after annealing above 130 $^{\circ}$C (Al-Raheil et al., 1998; Labour et al., 2001). A unique feature of $\alpha$ spherulites in iPP is lamellar cross-hatching: the development of transverse lamellae oriented perpendicular to the direction of radial lamellae (Iijima and Strobl, 2000; Maiti et al., 2000). The characteristic size of $\alpha$ spherulites in injection-molded specimens is estimated as 100 to 200 $\mu$m (Kalay and Bevis, 1997; Coulon et al., 1998). These spherulites consist of crystalline lamellae with thicknesses of 10 to 20 nm (Coulon et al., 1998; Maiti et al., 2000). The amorphous phase is located (i) between spherulites, (ii) inside spherulites, in “liquid pockets” (Verma et al., 1996) between lamellar stacks, and (iii) between lamellae in lamellar stacks. It consists of (i) relatively mobile chains between spherulites, in liquid pockets and between radial lamellae inside lamellar stacks, and (ii) severely restricted chains in the regions bounded by radial and tangential lamellae. Stretching of iPP specimens results in inter-lamellar separation, rotation and twist of lamellae, fine and coarse slip of lamellar blocks and their fragmentation (Aboulfaraj et al., 1995; Seguela et al., 1999), chain slip through the crystals, sliding and breakage of tie chains (Nitta and Takayanagi, 1999, 2000), and activation of the rigid amorphous fraction. At large strains, these morphological transformations lead to cavitation, formation of fibrils and stress-induced crystallization (Zhang et al., 1999). It is hard to believe that these mechanically-induced changes in the micro-structure of iPP can be adequately described by a constitutive model with a small number of adjustable parameters.
To develop stress–strain relations, we apply a method of “homogenization of micro-structure,” according to which the sophisticated morphology of isotactic polypropylene is modelled by an equivalent phase whose deformation captures the essential features of the response of this semi-crystalline polymer. We choose a network of chains as the equivalent phase for the following reasons: 1. The viscoelastic response of isotactic polypropylene is conventionally associated with rearrangement of chains in amorphous regions (Coulon et al., 1998). 2. Sliding of tie chains along lamellae, and their detachment from lamellae, play the key role in the time-dependent response of iPP (Nitta and Takayanagi, 1999, 2000). 3. The viscoplastic flow in semi-crystalline polymers is assumed to be “initiated in the amorphous phase before transitioning into the crystalline phase” (Meyer and Pruitt, 2001). 4. The time-dependent behavior of polypropylene is conventionally modelled within the concept of a network of macromolecules (Sweeney and Ward, 1995, 1996; Sweeney et al., 1999). Isotactic polypropylene at room temperature (i.e., above the glass transition temperature for the mobile amorphous phase) is treated as a permanent network of macromolecules bridged by junctions (physical cross-links, entanglements and lamellar blocks). The network is assumed to be highly heterogeneous (this inhomogeneity is attributed to interactions between lamellae and surrounding amorphous regions, as well as to local density fluctuations in the amorphous phase), and it is thought of as an ensemble of meso-regions (MR) with different potential energies. Two types of MRs are distinguished: (i) active domains, where junctions can slide with respect to their positions in the bulk material as they are thermally agitated (the mobile part of the amorphous phase), and (ii) passive domains, where sliding of junctions is prevented by surrounding lamellae (the rigid amorphous fraction). Straining of a specimen induces 1.
Sliding of meso-domains with respect to each other (which reflects fragmentation and coarse slip of lamellae). 2. Sliding of junctions with respect to their positions in the stress-free medium (which is associated with slip of tie molecules along lamellae and fine slip of lamellar blocks). Sliding of MRs with respect to each other is modelled as a rate-independent process that describes the elastoplastic response of iPP. Sliding of junctions in meso-domains is treated as a rate-dependent phenomenon that reflects the viscoplastic behavior of isotactic polypropylene. Stretching of a specimen results in an increase in the concentration of active MRs and in changes in the distribution of meso-domains with various activation energies for sliding of junctions, driven by release of part of the rigid amorphous fraction due to lamellar fragmentation. The objective of this study is two-fold: 1. To report experimental data in a tensile test with a constant strain rate, in creep tests and in relaxation tests at several elongation ratios on injection-molded iPP specimens annealed for 24 h at the temperature $T=140$ $^{\circ}$C. 2. To derive constitutive equations for the time-dependent response of a semi-crystalline polymer and to find adjustable parameters in the stress–strain relations by fitting observations. In previous studies on modelling the response of amorphous and semi-crystalline polymers in the sub-yield and post-yield regions (see, e.g., the pioneering works by Haward and Thackray (1968) and G’Sell and Jonas (1979), and more recent publications by Boyce et al. (1988), Bordonaro and Krempl (1992), Arruda et al. (1995), Hasan and Boyce (1995), Krempl and Bordonaro (1995, 1998), Zhang and Moore (1997), Spathis and Kontou (1998), Drozdov (2001), Duan et al. (2001)), stress–strain curves, creep curves and relaxation curves were treated independently of each other (in the sense that different adjustable parameters were determined by matching observations in different tests).
The aim of the present paper is to approximate experimental data in three conventional types of mechanical tests within one constitutive model. This allows a comparison of two approaches to nonlinear viscoelasticity with an “internal time”, namely the so-called models with stress-induced and strain-induced material clocks (Lustig et al., 1996; Krempl and Bordonaro, 1998; Wineman, 2002), and it sheds some light on a mechanism for mechanically-induced changes in relaxation (retardation) spectra in the vicinity of the yield point. The exposition is organized as follows. The specimens and the experimental procedure are described in Section 2. The distribution of meso-regions with various potential energies for sliding of junctions is introduced in Section 3. Kinetic equations for sliding of MRs with respect to each other are proposed in Section 4. Sliding of junctions in active meso-domains is modelled in Section 5. The strain energy density of a semi-crystalline polymer is determined in Section 6. Constitutive equations are derived in Section 7 by using the laws of thermodynamics. These equations are further simplified to describe observations in “rapid” tensile tests, creep tests and relaxation tests. Section 8 is concerned with fitting experimental data. Our findings are briefly discussed in Section 9. Some concluding remarks are formulated in Section 10. Experimental procedure ====================== Isotactic polypropylene (Novolen 1100L) was supplied by BASF (Targor). ASTM dumbbell specimens were injection molded with length 148 mm, width 10 mm and height 3.8 mm. To erase the influence of thermal history, the samples were annealed in an oven at 140 $^{\circ}$C for 24 h and slowly cooled by air. To minimize the effect of physical aging on the time-dependent response of iPP, mechanical tests were carried out a week after the thermal pre-treatment.
Uniaxial tensile relaxation tests were performed at room temperature on an Instron-5568 testing machine equipped with electro-mechanical sensors for the control of longitudinal strains in the active zone of samples (with a distance of 50 mm between clips). The tensile force was measured by a standard load cell. The engineering stress, $\sigma$, was determined as the ratio of the axial force to the cross-sectional area of the specimens in the stress-free state. The specimens were loaded with the cross-head speed 5 mm/min (corresponding to the Hencky strain rate $\dot{\epsilon}_{H}=1.1\cdot 10^{-3}$ s$^{-1}$), which provides nearly isothermal test conditions (Arruda et al., 1995). The engineering stress, $\sigma$, is plotted versus the elongation ratio $\lambda$ in Figure 1. The true longitudinal stress, $\sigma_{\rm t}$, is calculated as $\sigma_{\rm t}=\sigma\lambda$ (this formula is based on the incompressibility condition). The true stress is also depicted in Figure 1, which shows that necking of samples does not occur at elongations up to 50 %. The apparent yield strain, $\epsilon_{\rm y1}$, calculated as the intersection point of tangent lines to the true stress–elongation ratio diagram at “small” and “large” deformations, equals 0.04. The yield strain, $\epsilon_{\rm y2}$, determined as the strain corresponding to the maximum on the engineering stress–engineering strain curve, equals 0.08. A series of 8 creep tests was performed at the engineering stresses $\sigma_{1}^{0}=10.0$ MPa, $\sigma_{2}^{0}=15.0$ MPa, $\sigma_{3}^{0}=20.0$ MPa, $\sigma_{4}^{0}=25.0$ MPa, $\sigma_{5}^{0}=30.0$ MPa, $\sigma_{6}^{0}=30.38$ MPa, $\sigma_{7}^{0}=30.94$ MPa, and $\sigma_{8}^{0}=32.80$ MPa. The last three values of stress correspond to the initial strains (at the beginning of the creep tests) $\epsilon_{6}^{0}=0.045$, $\epsilon_{7}^{0}=0.050$, and $\epsilon_{8}^{0}=0.060$.
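The stress and strain conversions quoted above ($\sigma_{\rm t}=\sigma\lambda$ under incompressibility, and the Hencky strain $\epsilon_{\rm H}=\ln\lambda$) amount to one line each. A sketch using the quoted stress $\sigma_{5}^{0}=30.0$ MPa at an illustrative elongation ratio $\lambda=1.08$ (the elongation value is ours, chosen for illustration only):

```python
import math

def true_stress(sigma_eng, lam):
    """sigma_t = sigma * lambda, based on the incompressibility condition."""
    return sigma_eng * lam

def hencky_strain(lam):
    """eps_H = ln(lambda)."""
    return math.log(lam)

sigma_t = true_stress(30.0, 1.08)   # MPa
eps_H = hencky_strain(1.08)
```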
The first four tests were carried out in the sub-yield region of deformation (the initial strains are less than the yield strain $\epsilon_{\rm y1}$), whereas the other tests were performed in the interval between the yield strains $\epsilon_{\rm y1}$ and $\epsilon_{\rm y2}$. Each creep test was carried out on a new sample. In the $m$th test ($m=1,\ldots,8$), a specimen was loaded with the cross-head speed 5 mm/min up to the engineering stress $\sigma_{m}^{0}$, which was then held constant during the creep test, $t_{\rm c}=20$ min. The longitudinal strains, $\epsilon$, measured in the first 6 tests are plotted versus the logarithm ($\log=\log_{10}$) of time $t$ (the instant $t=0$ corresponds to the beginning of a creep test) in Figure 2. This figure demonstrates that the rate of increase in strain, $\epsilon$, with time, $t$, is relatively low at small stresses, and it noticeably grows with $\sigma$. The creep curves at stresses $\sigma_{m}^{0}$ exceeding 20 MPa are plotted in Figure 3, where the Hencky strain $\epsilon_{\rm H}=\ln \lambda$ is presented as a function of time $t$. Figure 3 reveals the primary creep of iPP at the stress $\sigma_{4}^{0}$, transition from primary to secondary creep at the stress $\sigma_{5}^{0}$, and transition from secondary to tertiary creep at higher stresses, $\sigma_{6}^{0}$ to $\sigma_{8}^{0}$. These transitions are indicated by lines AA$^{\prime}$ ($\epsilon_{\rm H}=0.06$) and BB$^{\prime}$ ($\epsilon_{\rm H}=0.14$) in Figure 3. A series of 4 relaxation tests was performed at the strains $\epsilon_{1}^{0}=0.05$, $\epsilon_{2}^{0}=0.10$, $\epsilon_{3}^{0}=0.15$, and $\epsilon_{4}^{0}=0.20$. The first test was carried out at a strain belonging to the interval between the yield strains $\epsilon_{\rm y1}$ and $\epsilon_{\rm y2}$, whereas the other tests were performed at strains in the post-yield region of deformation. Each relaxation test was carried out on a new sample.
In the $m$th test ($m=1,\ldots,4$), a specimen was loaded with the cross-head speed 5 mm/min up to the longitudinal strain $\epsilon_{m}^{0}$, which was then held constant during the relaxation time $t_{\rm r}=20$ min. The engineering stress, $\sigma$, is plotted versus the logarithm of time $t$ (the instant $t=0$ corresponds to the beginning of a relaxation test) in Figure 4. This figure shows that the relaxation curves are strongly affected by strain. Distribution of meso-regions ============================ A semi-crystalline polymer is treated as a permanent network of chains bridged by junctions. The network is thought of as an ensemble of meso-regions with various potential energies for slippage of junctions with respect to their positions in the reference state. Two types of meso-domains are distinguished: passive and active. In passive MRs, all nodes are assumed to move affinely with the bulk material. In active MRs, the junctions slide with respect to their reference positions under loading. Sliding of junctions in active MRs is modelled as a thermally activated process. The rate of sliding in an MR with potential energy $\bar{\omega}$ is given by the Eyring equation (Eyring, 1936) $$\Gamma=\Gamma_{0}\exp\biggl (-\frac{\bar{\omega}}{k_{\rm B}T}\biggr ),$$ where $k_{\rm B}$ is Boltzmann’s constant, $T$ is the absolute temperature, and the pre-factor $\Gamma_{0}$ is independent of energy $\bar{\omega}$ and temperature $T$. Confining ourselves to isothermal processes and introducing the dimensionless activation energy $\omega=\bar{\omega}/(k_{\rm B}T_{0})$, where $T_{0}$ is some reference temperature, we arrive at the formula $$\Gamma(\omega) =\Gamma_{0}\exp (-\omega).$$ Some lamellae (that restrict mobility of junctions in passive MRs) break under straining, which implies an increase in the concentration of active meso-domains. As a result, the number of strands in active MRs grows, and the number of strands in passive meso-domains decreases.
Denote by $N_{\rm a}(t,\omega)$ the number of strands (per unit mass) in active meso-domains with energy $\omega$ at instant $t\geq 0$. The total number of strands in active MRs, $X_{\rm a}(t)$, reads $$X_{\rm a}(t)=\int_{0}^{\infty} N_{\rm a}(t,\omega) d\omega .$$ The number of strands (per unit mass) in passive MRs, $X_{\rm p}(t)$, is connected with the number of strands in active meso-domains, $X_{\rm a}(t)$, by the conservation law $$X_{\rm a}(t) +X_{\rm p}(t) =X,$$ where $X$ is the number of active strands per unit mass (this quantity is assumed to be time-independent). The distribution of strands in active MRs is described by the ratio, $p(t,\omega)$, of the number, $N_{\rm a}(t,\omega)$, of strands in active meso-domains with energy $\omega$ to the total number of strands in active MRs, $$p(t,\omega)=\frac{N_{\rm a}(t,\omega)}{X_{\rm a}(t)},$$ and by the concentration, $\kappa(t)$, of active MRs, $$\kappa(t)=\frac{X_{\rm a}(t)}{X}.$$ In what follows, constitutive equations for a semi-crystalline polymer will be derived for an arbitrary distribution of active MRs. To fit experimental data, the random energy model is employed with $$p(t,\omega) = p_{0}(t) \exp \biggl [ -\frac{(\omega-\Omega(t))^{2}}{2\Sigma^{2}(t)} \biggr ] \quad (\omega\geq 0), \qquad p(t,\omega)=0 \quad (\omega <0).$$ Here $\Omega$ is the average activation energy in an ensemble of active meso-domains, $\Sigma$ is the standard deviation of potential energies for sliding of junctions, and $p_{0}$ is determined by the condition $$\int_{0}^{\infty} p(t,\omega) d\omega =1.$$ Sliding of meso-regions ======================= It is assumed that meso-domains are not rigidly connected, but can slide with respect to each other under straining. Sliding of meso-domains is treated as a rate-independent process and is associated with the elastoplastic behavior of a semi-crystalline polymer. 
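The quasi-Gaussian distribution of activation energies introduced above is fixed, up to $\Omega$ and $\Sigma$, by the normalization condition over $[0,\infty)$. A numerical sketch (the parameter values and the grid-based quadrature are ours, for illustration only):

```python
import numpy as np

def rem_density(omega, Omega, Sigma, n_grid=20000):
    """p(omega) = p0 * exp(-(omega - Omega)^2 / (2 Sigma^2)) for omega >= 0,
    with p0 fixed by requiring unit integral over [0, infinity)."""
    # Truncate the integration at Omega + 12*Sigma, where the tail is negligible.
    grid = np.linspace(0.0, Omega + 12.0 * Sigma, n_grid)
    weights = np.exp(-(grid - Omega) ** 2 / (2.0 * Sigma ** 2))
    p0 = 1.0 / (np.sum(0.5 * (weights[1:] + weights[:-1])) * (grid[1] - grid[0]))
    omega = np.asarray(omega, float)
    return np.where(omega >= 0.0,
                    p0 * np.exp(-(omega - Omega) ** 2 / (2.0 * Sigma ** 2)),
                    0.0)

# For Omega >> Sigma the truncation at omega = 0 barely matters, so the peak
# value approaches the untruncated Gaussian value 1 / (sqrt(2 pi) * Sigma).
p_peak = rem_density(5.0, 5.0, 1.0)
```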
We suppose that an increase in strain, $\epsilon$, by an increment, $d\epsilon$, causes growth of the elastoplastic strain, $\epsilon_{\rm ep}$, by an increment, $d\epsilon_{\rm ep}$, that is proportional to $d\epsilon$, $$d\epsilon_{\rm ep}=\varphi d\epsilon.$$ The coefficient of proportionality, $\varphi$, may, in general, be a function of strain, $\epsilon$, stress, $\sigma$, and the elastoplastic strain, $\epsilon_{\rm ep}$. Only the dependence of $\varphi$ on $\epsilon$ is taken into account, which results in the kinematic equation $$\frac{d\epsilon_{\rm ep}}{dt}(t)=\varphi (\epsilon(t) ) \frac{d\epsilon}{dt}(t), \qquad \epsilon_{\rm ep}(0)=0.$$ The function $\varphi(\epsilon)$ vanishes at $\epsilon=0$ (the elastoplastic strain equals zero at very small strains), monotonically increases with strain, and reaches some limiting value $b\in (0,1)$ at rather large strains (which corresponds to a steady regime of plastic flow). To reduce the number of adjustable parameters in the constitutive equations, an exponential dependence is adopted, $$\varphi(\epsilon)=b\Bigl [ 1-\exp(-a\epsilon)\Bigr ],$$ where the positive coefficients $a$ and $b$ are found by matching observations. Equations (8) and (9) differ from conventional flow rules in elastoplasticity, where the elastoplastic strain is assumed to be proportional to the stress, $\sigma$. Similarities and differences between our approach and traditional ones will be discussed in Section 8. Sliding of junctions in active MRs ================================== Sliding of junctions in active meso-domains with respect to their positions in a stress-free medium is treated as a rate-dependent process and is associated with the viscoelastic response of a semi-crystalline polymer. Sliding of junctions in active MRs reflects (i) sliding of tie chains along lamellae, (ii) slip of chains in amorphous regions with respect to entanglements, and (iii) rearrangement of junctions driven by slip of lamellar blocks. 
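The kinematic equation (8) with the exponential dependence (9) admits an explicit integral, $\epsilon_{\rm ep}(\epsilon)=b[\epsilon-(1-e^{-a\epsilon})/a]$. The sketch below checks this closed form against a direct forward-Euler integration of Eq. (8); it is a minimal illustration with our own function names, not the authors' code.

```python
import math

def eps_ep_closed(eps, a, b):
    """Closed-form elastoplastic strain from dε_ep/dε = b(1 - exp(-aε)),
    ε_ep(0) = 0."""
    return b * (eps - (1.0 - math.exp(-a * eps)) / a)

def eps_ep_numeric(eps, a, b, n=10000):
    """Forward-Euler integration of the same kinematic equation."""
    de = eps / n
    e, ep = 0.0, 0.0
    for _ in range(n):
        ep += b * (1.0 - math.exp(-a * e)) * de  # left-endpoint rule
        e += de
    return ep
```

The closed form also makes the limiting behavior explicit: $\epsilon_{\rm ep}\approx ab\epsilon^{2}/2$ at small strains (vanishing plastic flow) and $d\epsilon_{\rm ep}/d\epsilon\to b$ at large strains (steady plastic flow).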
The non-affine deformation of a network is modelled as a mechanically activated process induced by straining of active meso-domains. The strain in a meso-region, $e$, is defined as the difference between the macro-strain, $\epsilon$, and the elastoplastic strain, $\epsilon_{\rm ep}$, caused by sliding of meso-domains with respect to each other, $$e(t)=\epsilon(t)-\epsilon_{\rm ep}(t).$$ Accepting the first-order kinetics for sliding of junctions, $$\frac{\partial \epsilon_{\rm ve}}{\partial t}(t,\omega) =\Gamma(\omega)\Bigl [ e(t)-\epsilon_{\rm ve}(t,\omega)\Bigr ],$$ and using Eq. (1), we arrive at the constitutive equation $$\frac{\partial \epsilon_{\rm ve}}{\partial t}(t,\omega) =\Gamma_{0}\exp(-\omega)\Bigl [ \epsilon(t)-\epsilon_{\rm ep}(t)-\epsilon_{\rm ve}(t,\omega)\Bigr ], \qquad \epsilon_{\rm ve}(0,\omega)=0,$$ where $\epsilon_{\rm ve}(t,\omega)$ is the strain driven by sliding of junctions in an active MR with potential energy $\omega$. Strain energy density ===================== The elastic strain, $\epsilon_{\rm e}$, is calculated as the difference between the macro-strain, $\epsilon$, and the strains, $\epsilon_{\rm ep}$ and $\epsilon_{\rm ve}$, induced by sliding of meso-domains with respect to each other and by sliding of junctions in active MRs with respect to their reference positions. For a passive meso-region (where sliding of junctions is prevented), the elastic strain is given by $$\epsilon_{\rm e}(t)=\epsilon(t)-\epsilon_{\rm ep}(t).$$ For an active meso-domain with potential energy $\omega$, the elastic strain reads $$\epsilon_{\rm e}(t,\omega)=\epsilon(t)-\epsilon_{\rm ep}(t)-\epsilon_{\rm ve}(t,\omega).$$ A strand is modelled as a linear elastic medium with the strain energy $$w=\frac{1}{2}\mu \epsilon_{\rm e}^{2},$$ where $\mu$ is a constant rigidity. Multiplying the mechanical energy of a strand, Eq. 
(11), by the number of strands per unit mass, summing the strain energies for strands in active and passive meso-domains, and neglecting the energy of inter-chain interaction, we find the mechanical energy per unit mass of a semi-crystalline polymer, $$W(t)=\frac{1}{2} \mu \biggl [ \int_{0}^{\infty} N_{\rm a}(t,\omega) \Bigl (\epsilon(t)- \epsilon_{\rm ep}(t)-\epsilon_{\rm ve}(t,\omega)\Bigr )^{2}d\omega +X_{\rm p}(t)\Bigl (\epsilon(t)-\epsilon_{\rm ep}(t)\Bigr )^{2} \biggr ].$$ To develop a stress–strain relation, an explicit expression is necessary for the derivative of $W$ with respect to time. Differentiation of Eq. (12) implies that $$\begin{aligned} \frac{dW}{dt}(t) &=& \mu \biggl [ \int_{0}^{\infty} N_{\rm a}(t,\omega) \Bigl (\epsilon(t)-\epsilon_{\rm ep}(t)-\epsilon_{\rm ve}(t,\omega)\Bigr )d\omega +X_{\rm p}(t) \Bigl (\epsilon(t)-\epsilon_{\rm ep}(t) \Bigr )\biggr ] \nonumber\\ && \times \Bigl [\frac{d\epsilon}{dt}(t)-\frac{d\epsilon_{\rm ep}}{dt}(t)\Bigr ] -Y_{1}(t)-Y_{2}(t),\end{aligned}$$ where $$\begin{aligned} Y_{1}(t) &=& -\frac{1}{2}\mu \biggl [ \int_{0}^{\infty} \frac{\partial N_{\rm a}}{\partial t}(t,\omega) \Bigl (\epsilon(t)-\epsilon_{\rm ep}(t)-\epsilon_{\rm ve}(t,\omega)\Bigr )^{2} d\omega \nonumber\\ && +\frac{d X_{\rm p}}{dt}(t) \Bigl (\epsilon(t)-\epsilon_{\rm ep}(t) \Bigr )^{2} \biggr ], \\ Y_{2}(t) &=& \mu \int_{0}^{\infty} N_{\rm a}(t,\omega) \Bigl (\epsilon(t)-\epsilon_{\rm ep}(t)-\epsilon_{\rm ve}(t,\omega)\Bigr ) \frac{\partial \epsilon_{\rm ve}}{\partial t}(t,\omega) d\omega.\end{aligned}$$ It follows from Eqs. 
(2), (3), (8) and (13) that $$\begin{aligned} \frac{dW}{dt}(t) &=& \mu \biggl [ X \Bigl (\epsilon(t)-\epsilon_{\rm ep}(t) \Bigr ) - \int_{0}^{\infty} N_{\rm a}(t,\omega) \epsilon_{\rm ve}(t,\omega) d\omega \biggr ] \Bigl [ 1-\varphi(\epsilon(t))\Bigr ]\frac{d\epsilon}{dt}(t) \nonumber\\ &&-Y_{1}(t)-Y_{2}(t).\end{aligned}$$ Equations (2) and (3) imply that $$\frac{dX_{\rm p}}{dt}(t)=-\frac{dX_{\rm a}}{dt}(t)=-\int_{0}^{\infty} \frac{\partial N_{\rm a}}{\partial t}(t,\omega) d\omega.$$ This equality together with Eq. (14) results in $$Y_{1}(t)=\frac{1}{2}\mu \int_{0}^{\infty} \frac{\partial N_{\rm a}}{\partial t}(t,\omega) \biggl [ \Bigl (\epsilon(t)-\epsilon_{\rm ep}(t) \Bigr )^{2} -\Bigl (\epsilon(t)-\epsilon_{\rm ep}(t)-\epsilon_{\rm ve}(t,\omega)\Bigr )^{2} \biggr ] d\omega.$$ Combining Eqs. (10) and (15), we find that $$Y_{2}(t) = \mu \int_{0}^{\infty} N_{\rm a}(t,\omega)\Gamma(\omega) \Bigl (\epsilon(t)-\epsilon_{\rm ep}(t)-\epsilon_{\rm ve}(t,\omega)\Bigr )^{2} d\omega.$$ Constitutive equation ===================== For isothermal uniaxial deformation, the Clausius–Duhem inequality reads $$-\frac{dW}{dt}+\sigma\frac{d\epsilon}{dt}\geq 0.$$ Substituting expression (16) into Eq. (19) and using Eqs. (4) and (5), we obtain $$\begin{aligned} && \biggl [ \sigma(t)-E\Bigl (1-\varphi(\epsilon(t))\Bigr ) \Bigl (\epsilon(t)-\epsilon_{\rm ep}(t)-\kappa(t)\int_{0}^{\infty} p(t,\omega)\epsilon_{\rm ve}(t,\omega)d\omega \Bigr )\biggr ] \frac{d\epsilon}{dt}(t) \nonumber\\ && +Y_{1}(t)+Y_{2}(t)\geq 0,\end{aligned}$$ where $E=\mu X$. It follows from Eqs. (17) and (18) that for an active loading program (when $\epsilon(t)$, $\epsilon_{\rm ve}(t)$, $\epsilon_{\rm ep}(t)$ and $N_{\rm a}(t,\omega)$ increase with time), the functions $Y_{1}(t)$ and $Y_{2}(t)$ are non-negative. 
This means that the dissipation inequality (20) is satisfied, provided that the expression in the square brackets vanishes, which results in the stress–strain relation $$\sigma(t) =E\Bigl (1-\varphi(\epsilon(t))\Bigr ) \biggl [\epsilon(t)-\epsilon_{\rm ep}(t) -\kappa(t)\int_{0}^{\infty} p(t,\omega)\epsilon_{\rm ve}(t,\omega)d\omega \biggr ].$$ Given functions $\kappa(t)$, $\Omega(t)$ and $\Sigma(t)$, constitutive equations (6), (8) to (10) and (21) describe the time-dependent response of a semi-crystalline polymer at uniaxial deformation. These relations are determined by 4 adjustable parameters: an analog of Young’s modulus $E$ in Eq. (21), dimensionless constants $a$ and $b$ in Eq. (9), and the attempt rate for sliding of junctions $\Gamma_{0}$ in Eq. (10). The pre-factor $\Gamma_{0}$ can be excluded from the governing equations by introducing a “shifted" potential energy $\tilde{\omega}=\omega-\omega_{0}$ with $\omega_{0}=\ln \Gamma_{0}$. Because this transformation does not change the structure of the stress–strain relations, we set $\Gamma_{0}=1$ s$^{-1}$ without loss of generality. For “rapid" deformations, when the effect of sliding of junctions in active MRs on the mechanical response of a specimen is negligible, the constitutive equations are substantially simplified. Neglecting the integral term in Eq. (21) and using Eq. (9), we find that $$\sigma =E\Bigl [1-b\Bigl ( 1-\exp(-a\epsilon)\Bigr ) \Bigr ] (\epsilon-\epsilon_{\rm ep}),$$ where the elastoplastic strain $\epsilon_{\rm ep}$ obeys Eqs. (8) and (9), $$\frac{d\epsilon_{\rm ep}}{d\epsilon}=b\Bigl ( 1-\exp(-a\epsilon)\Bigr ), \qquad \epsilon_{\rm ep}(0)=0.$$ Equations (22) and (23) are determined by 3 material constants, $E$, $a$ and $b$, to be found by matching a stress–strain curve for a tensile test with a constant strain rate.
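In the rapid-deformation limit, Eqs. (22) and (23) can be evaluated in closed form, since Eq. (23) integrates to $\epsilon_{\rm ep}=b[\epsilon-(1-e^{-a\epsilon})/a]$. A hedged Python sketch (our own helper name; any parameter values passed to it are purely illustrative):

```python
import math

def stress_rapid(eps, E, a, b):
    """Engineering stress of Eq. (22) for 'rapid' deformations, with the
    elastoplastic strain taken from the explicit integral of Eq. (23)."""
    eps_ep = b * (eps - (1.0 - math.exp(-a * eps)) / a)
    return E * (1.0 - b * (1.0 - math.exp(-a * eps))) * (eps - eps_ep)
```

At small strains $\varphi(\epsilon)\approx 0$ and $\epsilon_{\rm ep}\approx 0$, so $\sigma\approx E\epsilon$, confirming that $E$ plays the role of Young's modulus.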
To study “slow" deformation processes, when sliding of junctions in meso-domains is to be taken into account, an additional hypothesis should be introduced to describe the effect of deformation history on the quantities $\kappa$, $\Omega$ and $\Sigma$. Two approaches are conventionally used to predict the effect of mechanical factors on these parameters, the so-called models with strain and stress clocks. For a survey of these concepts, the reader is referred to Drozdov (1998) and the bibliography therein. The theory of a stress-induced internal time is traditionally employed to fit observations in creep tests with the loading program $$\sigma(t)=\left \{\begin{array}{ll} 0, & t<0,\\ \sigma^{0}, & t\geq 0, \end{array} \right .$$ where $\sigma^{0}$ is a given stress. In terms of our model, this concept means that the quantities $\kappa$, $\Omega$ and $\Sigma$ depend on the current stress $\sigma$. Combining Eqs. (6), (8), (9), (21) and (24), we arrive at the formulas $$\begin{aligned} \epsilon(t) &=& \epsilon_{\rm ep}(t)+p^{0}\kappa(\sigma^{0})\int_{0}^{\infty} \epsilon_{\rm ve}(t,\omega)\exp\Bigl [-\frac{(\omega-\Omega(\sigma^{0}))^{2}}{2\Sigma^{2}(\sigma^{0})} \Bigr ]d\omega \nonumber\\ &&+(\epsilon^{0}-\epsilon_{\rm ep}^{0}) \frac{1-b(1-\exp(-a\epsilon^{0}))}{1-b(1-\exp(-a\epsilon(t)))}, \\ \frac{d\epsilon_{\rm ep}}{dt}(t) &=& b\Bigl ( 1-\exp(-a\epsilon(t))\Bigr ) \frac{d\epsilon}{dt}(t), \qquad \epsilon_{\rm ep}(0)=\epsilon_{\rm ep}^{0}.\end{aligned}$$ In these equations, the initial instant $t=0$ corresponds to the beginning of the creep test, the quantities $\epsilon^{0}$ and $\epsilon_{\rm ep}^{0}$ are found by integration of Eqs. (22) and (23) in the interval from $\sigma=0$ to $\sigma=\sigma^{0}$, the coefficient $p^{0}$ is determined by condition (7), and the function $\epsilon_{\rm ve}(t,\omega)$ obeys Eq. (10). It is worth noting that Eq. 
(26) can be integrated explicitly to express the elastoplastic strain, $\epsilon_{\rm ep}$, by means of the macro-strain $\epsilon$. However, we do not dwell on this transformation. Given a stress, $\sigma^{0}$, Eqs. (10), (25) and (26) are determined by 3 experimental constants, $\kappa(\sigma^{0})$, $\Omega(\sigma^{0})$ and $\Sigma (\sigma^{0})$ to be found by matching observations in a creep test. According to the concept of a strain-induced material clock, the parameters $\kappa$, $\Omega$ and $\Sigma$ are functions of the current strain $\epsilon$. This approach is conventionally used to approximate experimental data in a relaxation test with $$\epsilon(t)=\left \{\begin{array}{ll} 0, & t<0,\\ \epsilon^{0}, & t\geq 0, \end{array} \right .$$ where $\epsilon^{0}$ is a given strain. It follows from Eqs. (8), (9) and (27) that the elastoplastic strain, $\epsilon_{\rm ep}$, is time-independent. The quantity $\epsilon_{\rm ep}(t)=\epsilon_{\rm ep}^{0}$ is determined by integration of Eq. (23) from zero to $\epsilon^{0}$. Combining Eqs. (6), (21) and (27), we find that $$\begin{aligned} \sigma(t) &=& E\Bigl [ 1-b\Bigl (1-\exp(-a\epsilon^{0})\Bigr )\Bigr ] \Bigl \{ \epsilon^{0}-\epsilon_{\rm ep}^{0} \nonumber\\ &&-p^{0}\kappa(\epsilon^{0})\int_{0}^{\infty} \epsilon_{\rm ve}(t,\omega) \exp\Bigl [-\frac{(\omega-\Omega(\epsilon^{0}))^{2}}{2\Sigma^{2}(\epsilon^{0})} \Bigr ]d\omega \Bigr \},\end{aligned}$$ where the instant $t=0$ corresponds to the beginning of a relaxation test, the coefficient $p^{0}$ is determined by condition (7), and the function $\epsilon_{\rm ve}(t,\omega)$ is governed by Eq. 
(10) with $\Gamma_{0}=1$ s$^{-1}$, $$\frac{\partial \epsilon_{\rm ve}}{\partial t}(t,\omega) =\exp(-\omega)\Bigl [ \epsilon^{0}-\epsilon_{\rm ep}^{0}-\epsilon_{\rm ve}(t,\omega)\Bigr ], \qquad \epsilon_{\rm ve}(0,\omega)=0.$$ Introducing the notation $$e_{\rm ve}(t,\omega)=\frac{\epsilon_{\rm ve}(t,\omega)}{\epsilon^{0}-\epsilon_{\rm ep}^{0}},$$ we present the latter equation as follows: $$\frac{\partial e_{\rm ve}}{\partial t}(t,\omega) =\exp(-\omega)[ 1-e_{\rm ve}(t,\omega) ], \qquad e_{\rm ve}(0,\omega)=0.$$ Substitution of expression (29) into Eq. (28) results in $$\sigma(t) = \sigma^{0}(\epsilon^{0}) \Bigl \{ 1-p^{0}\kappa(\epsilon^{0})\int_{0}^{\infty} e_{\rm ve}(t,\omega) \exp\Bigl [-\frac{(\omega-\Omega(\epsilon^{0}))^{2}}{2\Sigma^{2}(\epsilon^{0})} \Bigr ]d\omega \Bigr \},$$ where $$\sigma^{0}(\epsilon^{0})=E\Bigl [ 1-b\Bigl (1-\exp(-a\epsilon^{0})\Bigr )\Bigr ] (\epsilon^{0}-\epsilon_{\rm ep}^{0}).$$ Given a strain, $\epsilon^{0}$, Eqs. (30) and (31) are characterized by 4 experimental constants, $\sigma^{0}(\epsilon^{0})$, $\kappa(\epsilon^{0})$, $\Omega(\epsilon^{0})$ and $\Sigma (\epsilon^{0})$, to be determined by fitting observations in a relaxation test. Our aim now is to find adjustable parameters in the constitutive equations by matching experimental data depicted in Figures 1, 2 and 4, and to assess the applicability of the concept of internal time with stress- and strain-induced material clocks. Fitting of observations ======================= We begin with the approximation of the stress–strain curve depicted in Figure 5. The restriction on strains ($\epsilon_{\max}=0.06$) may be explained by two reasons: (i) at higher elongations, the assumption that the strain energy of a strand is a quadratic function of strain, see Eq. (11), becomes questionable, and (ii) according to Figure 3, at $\epsilon=0.06$ the primary creep is transformed into the secondary creep (developed plastic flow), which is beyond the scope of the present study.
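Equation (30) has the explicit solution $e_{\rm ve}(t,\omega)=1-\exp(-t\,e^{-\omega})$, so the normalized relaxation curve of Eq. (31) reduces to a single quadrature over $\omega$. The sketch below evaluates $\sigma(t)/\sigma^{0}$ this way; the trapezoidal quadrature and the grid sizes are our choices, not the paper's.

```python
import math

def relaxation_ratio(t, Omega, Sigma, kappa, n=2000, h=0.02):
    """σ(t)/σ0 from Eq. (31), using e_ve(t,ω) = 1 - exp(-t·e^{-ω}),
    the explicit solution of Eq. (30)."""
    grid = [k * h for k in range(n + 1)]
    gauss = [math.exp(-((w - Omega) ** 2) / (2.0 * Sigma ** 2)) for w in grid]
    # normalization constant p0 of condition (7), trapezoidal rule
    norm = h * (sum(gauss) - 0.5 * (gauss[0] + gauss[-1]))
    integrand = [g * (1.0 - math.exp(-t * math.exp(-w)))
                 for w, g in zip(grid, gauss)]
    integral = h * (sum(integrand) - 0.5 * (integrand[0] + integrand[-1]))
    return 1.0 - kappa * integral / norm
```

The ratio starts at 1, decreases monotonically in time, and tends to $1-\kappa$: only the fraction $\kappa$ of strands (those in active MRs) can relax.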
Under uniaxial tension with the cross-head speed 5 mm/min, the strain $\epsilon_{\max}=0.06$ is reached within 69 s. According to Figure 2, changes in strain induced by sliding of junctions during this period are insignificant at stresses up to 20 MPa, whereas the duration of stretching at higher stresses does not exceed 30 s, which causes rather small growth of strains. Based on these observations, we treat the deformation process as rapid and apply Eqs. (22) and (23) to match observations. To find the constants $E$, $a$ and $b$, we fix the intervals $[0,a_{\max}]$ and $[0,b_{\max}]$, where the “best-fit" parameters $a$ and $b$ are assumed to be located, and divide these intervals into $J$ subintervals by the points $a_{i}=i\Delta a$ and $b_{j}=j\Delta b$ ($i,j=1,\ldots,J$) with $\Delta a=a_{\max}/J$ and $\Delta b=b_{\max}/J$. For any pair, $\{ a_{i}, b_{j} \}$, we integrate Eqs. (22) and (23) numerically (with the step $\Delta \epsilon=5.0\cdot 10^{-5}$) by the Runge–Kutta method. The elastic modulus $E=E_{0}(i,j)$ is found by the least-squares method from the condition of minimum of the function $$K(i,j)=\sum_{\epsilon_{l}} \Bigl [ \sigma_{\rm exp}(\epsilon_{l}) -\sigma_{\rm num}(\epsilon_{l}) \Bigr ]^{2},$$ where the sum is calculated over all experimental points, $\epsilon_{l}$, depicted in Figure 5, $\sigma_{\rm exp}$ is the engineering stress measured in the tensile test, and $\sigma_{\rm num}$ is given by Eq. (22). The “best-fit" parameters $a$ and $b$ minimize $K$ on the set $ \{ a_{i}, b_{j} \quad (i,j=1,\ldots, J) \}$. After determining their values, $a_{i}$ and $b_{j}$, this procedure is repeated twice for the new intervals $[ a_{i-1}, a_{i+1}]$ and $[ b_{j-1}, b_{j+1}]$ to ensure an acceptable accuracy of fitting. The “best-fit" parameters read $E= 2.12$ GPa, $a= 38.10$ and $b=0.64$.
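The grid search described above can be sketched compactly: because the stress of Eq. (22) is linear in $E$, the least-squares modulus for each pair $(a_{i},b_{j})$ follows in closed form. In the sketch below the closed-form integral of Eq. (23) replaces the Runge–Kutta step of the paper, and the synthetic data and all names are illustrative, not the authors' code or data.

```python
import math

def eps_ep(eps, a, b):
    """Explicit integral of Eq. (23)."""
    return b * (eps - (1.0 - math.exp(-a * eps)) / a)

def g(eps, a, b):
    """Strain-dependent factor of Eq. (22), so that σ = E·g(ε)."""
    return (1.0 - b * (1.0 - math.exp(-a * eps))) * (eps - eps_ep(eps, a, b))

def fit(strains, stresses, a_max=80.0, b_max=1.0, J=20):
    """Grid search over (a_i, b_j); for each pair the optimal E is the
    closed-form least-squares solution, since σ is linear in E."""
    best = None
    for i in range(1, J + 1):
        for j in range(1, J + 1):
            a, b = i * a_max / J, j * b_max / J
            gs = [g(e, a, b) for e in strains]
            E = sum(s * gg for s, gg in zip(stresses, gs)) / sum(gg * gg for gg in gs)
            K = sum((s - E * gg) ** 2 for s, gg in zip(stresses, gs))
            if best is None or K < best[0]:
                best = (K, E, a, b)
    return best
```

In practice the paper refines the winning cell twice; the single coarse pass above already recovers parameters that lie on the grid exactly.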
This value of $E$ exceeds Young’s modulus ($E=1.50$ GPa) provided by the supplier for a virgin material, which may be explained by changes in the microstructure of spherulites at annealing. To estimate the elastoplastic strain, $\epsilon_{\rm ep}$, and the difference between the present model and the conventional flow rule $$\frac{d\epsilon_{\rm ep}}{d\epsilon}=k\sigma$$ with a constant coefficient $k$, we present results of numerical simulation in Figure 6. This figure shows that the elastoplastic strain, $\epsilon_{\rm ep}$, is negligible at strains $\epsilon<0.025$, and it increases (practically linearly) with macro-strain at larger strains. The ratio $$r(\epsilon)=\frac{\varphi(\epsilon)}{\sigma(\epsilon)}$$ linearly grows with strain at $\epsilon < 0.025$, and it becomes practically constant at $\epsilon > 0.03$. The latter implies that in the region where the influence of elastoplastic strains on the response of iPP cannot be disregarded, Eqs. (22) and (23) are rather close to the flow rule (32). An advantage of Eqs. (22) and (23) is that (i) they have a transparent physical meaning, (ii) no additional constants (analogs of the yield stress, $\sigma_{\rm y}$, or yield strain, $\epsilon_{\rm y}$) are to be introduced in the stress–strain relations, and (iii) these equations can correctly predict the time-dependent response of iPP in creep tests (see below), where Eq. (32) appears to be an oversimplified relationship. We proceed with fitting observations in creep tests depicted in Figure 2. For any stress, $\sigma^{0}$, the quantities $\epsilon^{0}$ and $\epsilon_{\rm ep}^{0}$ are found by integration of Eqs. (22) and (23) with the material constants found in the approximation of the stress–strain curve plotted in Figure 5. The quantities $\kappa(\sigma^{0})$, $\Omega(\sigma^{0})$ and $\Sigma (\sigma^{0})$ are determined by the following algorithm.
We fix the intervals $[0,\kappa_{\max}]$, $[0,\Omega_{\max}]$ and $[0,\Sigma_{\max}]$, where the “best-fit" parameters $\kappa$, $\Omega$ and $\Sigma$ are assumed to be located, and divide these intervals into $J$ subintervals by the points $\kappa_{i}=i\Delta \kappa$, $\Omega_{j}=j \Delta \Omega$ and $\Sigma_{k}=k\Delta \Sigma$ ($i,j,k=1,\ldots,J$) with $\Delta \kappa=\kappa_{\max}/J$, $\Delta \Omega=\Omega_{\max}/J$ and $\Delta \Sigma=\Sigma_{\max}/J$. For any pair, $\{ \Omega_{j}, \Sigma_{k} \}$, the constant $p^{0}=p^{0}(j,k)$ is determined by Eq. (7), where the integral is evaluated by Simpson’s method with 200 points and the step $\Delta \omega=0.15$. For any triple, $\{ \kappa_{i}, \Omega_{j}, \Sigma_{k} \}$, Eqs. (10), (25) and (26) are integrated numerically (with the time step $\Delta t=0.1$) by the Runge–Kutta method. The “best-fit" parameters $\kappa$, $\Omega$ and $\Sigma$ are found from the condition of minimum of the function $$K(i,j,k)=\sum_{t_{l}} \Bigl [ \epsilon_{\rm exp}(t_{l})-\epsilon_{\rm num}(t_{l}) \Bigr ]^{2},$$ where the sum is calculated over all experimental points, $t_{l}$, presented in Figure 2, $\epsilon_{\rm exp}$ is the strain measured in the creep test, and $\epsilon_{\rm num}$ is given by Eq. (25). After determining the “best-fit" values, $\kappa_{i}$, $\Omega_{j}$ and $\Sigma_{k}$, this procedure is repeated for the new intervals $[ \kappa_{i-1}, \kappa_{i+1}]$, $[ \Omega_{j-1}, \Omega_{j+1}]$ and $[ \Sigma_{k-1}, \Sigma_{k+1}]$ to provide an acceptable accuracy of fitting. Figure 2 demonstrates fair agreement between the experimental data and the results of numerical simulation. The adjustable parameters $\Omega$, $\Sigma$ and $\kappa$ are plotted versus the engineering stress, $\sigma$, in Figures 7 to 9. 
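The Simpson evaluation of condition (7) quoted above (200 points, step $\Delta\omega=0.15$) can be checked against the closed-form value of the truncated-Gaussian integral. A sketch follows; we use 200 intervals so that the composite rule applies, which is an assumption about the discretization, and the helper names are ours.

```python
import math

def simpson(f, a, h, n):
    """Composite Simpson's rule over n (even) intervals of width h from a."""
    s = f(a) + f(a + n * h)
    for k in range(1, n):
        s += (4.0 if k % 2 else 2.0) * f(a + k * h)
    return s * h / 3.0

def norm_integral_simpson(Omega, Sigma, n=200, h=0.15):
    """Integral in condition (7) for the distribution (6), as in the text."""
    return simpson(lambda w: math.exp(-((w - Omega) ** 2) / (2.0 * Sigma ** 2)),
                   0.0, h, n)

def norm_integral_exact(Omega, Sigma):
    """Same integral in closed form via the error function."""
    return Sigma * math.sqrt(math.pi / 2.0) * (
        1.0 + math.erf(Omega / (Sigma * math.sqrt(2.0))))
```

For moderate $\Omega$ and $\Sigma$ the quoted grid (reaching $\omega=30$) captures essentially the whole distribution, so the two evaluations of $p^{0}$ agree closely.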
The experimental data are approximated by the linear functions $$\Omega=\Omega_{0}+\Omega_{1}\sigma, \qquad \Sigma=\Sigma_{0}+\Sigma_{1}\sigma, \qquad \kappa=\kappa_{0}+\kappa_{1}\sigma,$$ where the coefficients $\Omega_{m}$, $\Sigma_{m}$ and $\kappa_{m}$ ($m=0,1$) are found by the least-squares technique. Figures 7 and 8 show that the quantities $\Omega$ and $\Sigma$ increase with stress, $\sigma$, reach their maxima in the interval between $\sigma=25$ and $\sigma=30$ MPa (that is, in the vicinity of the yield strain $\epsilon_{\rm y1}$), and dramatically decrease at higher stresses. According to Figure 9, the concentration of active meso-regions, $\kappa$, increases with stress up to $\sigma=25$ MPa, whereas at higher stresses, the slope of the straight line (33) noticeably grows. Finally, we approximate the experimental data in the relaxation tests presented in Figure 4. To fit observations, we re-write Eq. (31) as $$\sigma(t) = C_{1}(\epsilon^{0})+p^{0}C_{2}(\epsilon^{0})\int_{0}^{\infty} e_{\rm ve}(t,\omega) \exp\Bigl [-\frac{(\omega-\Omega(\epsilon^{0}))^{2}}{2\Sigma^{2}(\epsilon^{0})} \Bigr ]d\omega$$ with $$C_{1}=\sigma^{0}, \qquad C_{2}=-\sigma^{0}\kappa.$$ For any strain, $\epsilon^{0}$, we fix the intervals $[0,\Omega_{\max}]$ and $[0,\Sigma_{\max}]$, where the “best-fit" parameters $\Omega$ and $\Sigma$ are located, and divide these intervals into $J$ subintervals by the points $\Omega_{i}=i \Delta \Omega$ and $\Sigma_{j}=j\Delta \Sigma$ ($i,j=1,\ldots,J$) with $\Delta \Omega=\Omega_{\max}/J$ and $\Delta \Sigma=\Sigma_{\max}/J$. For any pair, $\{ \Omega_{i}, \Sigma_{j} \}$, the constant $p^{0}=p^{0}(i,j)$ is determined from Eq. (7), where the integral is evaluated by Simpson’s method with 200 points and the step $\Delta \omega=0.15$. Equation (30) is integrated numerically (with the time step $\Delta t=0.1$) by the Runge–Kutta method. The constants $C_{1}$ and $C_{2}$ in Eq.
(34) are determined by the least-squares algorithm to minimize the function $$K(i,j)=\sum_{t_{l}} \Bigl [ \sigma_{\rm exp}(t_{l})-\sigma_{\rm num}(t_{l}) \Bigr ]^{2},$$ where the sum is calculated over all experimental points, $t_{l}$, presented in Figure 4, $\sigma_{\rm exp}$ is the stress measured in the relaxation test, and $\sigma_{\rm num}$ is given by Eq. (34). The “best-fit" parameters $\Omega$ and $\Sigma$ are found from the condition of minimum of the function $K(i,j)$. After determining the “best-fit" values, $\Omega_{i}$ and $\Sigma_{j}$, this procedure is repeated for the new intervals $[ \Omega_{i-1}, \Omega_{i+1}]$ and $[ \Sigma_{j-1}, \Sigma_{j+1}]$ to guarantee good accuracy of fitting. Figure 4 demonstrates an acceptable agreement between the experimental data and the results of numerical analysis. The quantities $\Omega$, $\Sigma$ and $\kappa=-C_{2}/C_{1}$ are plotted versus strain $\epsilon$ in Figures 10 to 12. The experimental data are approximated by the linear functions $$\Omega=\Omega_{0}+\Omega_{1}\epsilon, \qquad \Sigma=\Sigma_{0}+\Sigma_{1}\epsilon, \qquad \kappa=\kappa_{0}+\kappa_{1}\epsilon,$$ where the coefficients $\Omega_{m}$, $\Sigma_{m}$ and $\kappa_{m}$ ($m=0,1$) are found by the least-squares technique. Figures 10 to 12 demonstrate that the adjustable parameters $\Omega$, $\Sigma$ and $\kappa$ found by matching observations in relaxation tests monotonically increase with strain $\epsilon$. There is no doubt that Eqs. (30) and (31) can be applied to fit experimental data in the relaxation test with the smallest strain $\epsilon_{1}^{0}=0.05$. A question arises, however, whether these equations (based on the quadratic approximation (11) of the strain energy) may be used to match observations in relaxation tests at higher strains. Fortunately, it can be shown that Eqs. (30) and (31) remain valid for an arbitrary (not necessarily quadratic in strains) mechanical energy per strand $w$.
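The linear approximations of the fitted parameters used throughout (e.g. $\Omega=\Omega_{0}+\Omega_{1}\epsilon$) rest on ordinary least squares; for a straight line the normal equations have a closed-form solution, sketched below as a generic helper not tied to the paper's data.

```python
def linear_fit(x, y):
    """Least-squares straight line y ≈ c0 + c1·x via the normal equations."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    c1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c0 = (sy - c1 * sx) / n
    return c0, c1
```

With the four relaxation strains of the experiments, such a two-parameter fit is well determined but leaves little redundancy, which is why the comparison with the creep-test trends in the Discussion is informative.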
To avoid the complicated algebra employed in the derivation of constitutive equations at finite strains, the corresponding transformations are omitted. Discussion ========== Our objective now is to compare adjustable parameters in the constitutive equations found by fitting observations in the creep and relaxation tests. First, we plot the quantities $\Omega$, $\Sigma$ and $\kappa$ determined in the approximation of the relaxation curve at $\epsilon_{1}^{0}$ together with the data obtained by matching the creep curves (Figures 7 to 9). For this purpose, we replace the strain in the relaxation test, $\epsilon_{1}^{0}$, by the corresponding stress at the beginning of the relaxation process, $\sigma^{0}=C_{1}(\epsilon_{1}^{0})$. Figures 7 and 8 demonstrate good agreement between the parameters $\Omega$ and $\Sigma$ determined by fitting the experimental data in the creep and relaxation tests. Figure 9 reveals that the concentration of active meso-domains, $\kappa$, found in the relaxation test follows Eq. (33) with the coefficients found in the approximation of the creep curves below the yield point, $\epsilon_{\rm y1}$, see curve 1. These conclusions confirm changes in the distribution function $p$ in the interval $[\epsilon_{\rm y1}, \epsilon_{\rm y2}]$ revealed by matching the experimental data in the creep tests, but call into question the corresponding findings for the concentration of active MRs $\kappa$. To shed some light on this controversy, we re-plot the experimental constants $\Omega$, $\Sigma$ and $\kappa$ found by fitting observations in the creep tests below the yield strain $\epsilon_{\rm y1}$ together with those determined in the approximation of the relaxation curves (Figures 10 to 12). The adjustable parameters $\Omega$, $\Sigma$ and $\kappa$ found by matching the creep curves are depicted as functions of strain at the beginning of creep tests.
Figures 10 and 11 reveal that the quantities $\Omega$ and $\Sigma$ linearly increase with strain up to the yield strain $\epsilon_{\rm y1}$, where they suffer a pronounced drop, and proceed to grow with strain afterwards. Figure 12 demonstrates that the concentration of active meso-domains, $\kappa$, linearly increases with strain in the entire region of strains under consideration, and the data found in the creep tests are in fair agreement with those determined by matching the relaxation curves. Based on these observations and adopting the dual lamellar population model (Verma et al., 1996), we propose the following scenario for the effect of mechanical factors on the distribution of active MRs: 1. Below the yield strain $\epsilon_{\rm y1}$, the average activation energy, $\Omega$, and the standard deviation of activation energies, $\Sigma$, for sliding of junctions in active MRs monotonically increase with strain $\epsilon$. This growth is attributed to fragmentation of thin (subsidiary) lamellae formed during injection-molding of specimens and developed at annealing of iPP. The increase in the average activation energy, $\Omega$, is associated with - breakage of subsidiary and transverse lamellae into small pieces that serve as extra physical cross-links in amorphous regions, - mechanically-induced activation of rigid amorphous fraction, where sliding of junctions was prevented by surrounding lamellae in stress-free specimens. The increase in the standard deviation of potential energies, $\Sigma$, reflects heterogeneity of the fragmentation process, which grows with strain because of the inhomogeneity of breakage of subsidiary lamellae in active meso-regions with different sizes (potential energies). 2. 
When the strain, $\epsilon$, reaches the yield strain, $\epsilon_{\rm y1}$, subsidiary (thin) lamellae become totally disintegrated, which results in a decrease in the average activation energy $\Omega$ (lamellar blocks that served as physical cross-links disappear) and a pronounced decrease in the standard deviation of potential energies $\Sigma$ (meso-domains become more homogeneous). Figures 7 and 8 reveal that the interval of strains, where this “homogenization" of the micro-structure occurs, is relatively narrow (less than 2 %), which implies that the slopes of curves 2 in these figures are rather high. 3. Straining of a specimen above the yield strain, $\epsilon_{\rm y1}$, causes fragmentation of dominant (thick) lamellae, which results in a noticeable increase in $\Omega$ and $\Sigma$. This growth is driven by the same mechanism as an increase in $\Omega$ and $\Sigma$ in the sub-yield region of deformation (breakage of lamellae into small pieces that serve as extra physical cross-links in the amorphous phase). This conclusion may be confirmed by the similarity of slopes of curves 1 and 2 for the standard deviation of potential energies of MRs, $\Sigma$, depicted in Figure 11. The slopes of the curves 1 and 2 for the average potential energy for sliding of junctions, $\Omega$, in Figure 10 differ substantially. This discrepancy is attributed to the fact that the growth of the average potential energy of active meso-domains in the sub-yield region of deformation is governed by two morphological transformations (release of rigid amorphous fraction and formation of extra junctions in active MRs), whereas only the latter process takes place in the post-yield region. 4. The concentration of active meso-domains, $\kappa$, monotonically increases under stretching. 
A noticeable increase in $\kappa$ with stress reported in Figure 9 is an artifact caused by the assumption that the concentration of active MRs remains constant during a creep test (which, in turn, is based on a concept of stress-induced internal time). Despite good agreement between the experimental data and the results of numerical simulation revealed in Figure 2, it seems more adequate to presume that the adjustable parameters $\Omega$, $\Sigma$ and $\kappa$ depend on the current strain (not stress). This hypothesis does not affect curves 1 to 4 in Figure 2 (because of relatively small increases in strain during the creep tests), but it results in substantial changes in creep curves 5 and 6 (corresponding to the yield region). Assuming $\kappa$ to be a function of strain, one can treat the experimental points on curve 2 in Figure 9 as some “average" (over the creep curves) values of the concentration of active MRs in the post-yield region of deformation. In this case, the values of $\kappa$ found by matching creep curves 5 and 6 in Figure 2 become quite comparable with the data depicted in Figure 12, where these values roughly correspond to the vicinity of the yield point $\epsilon_{\rm y2}$. Concluding remarks ================== A model has been developed for the elastoplastic and viscoplastic responses of semi-crystalline polymers at isothermal uniaxial deformation with small strains. To derive constitutive equations, a complicated micro-structure of a polymer is replaced by an equivalent permanent network of macromolecules bridged by junctions (physical cross-links, entanglements and lamellae). The network is thought of as an ensemble of meso-regions with various potential energies for sliding of junctions with respect to their positions in a stress-free medium. The elastoplastic (rate-independent) behavior of a semi-crystalline polymer is attributed to sliding of meso-domains with respect to each other. The kinetics of sliding is governed by Eqs. 
(8) and (9) with 2 adjustable parameters: $a$ and $b$. The viscoplastic (rate-dependent) response is associated with sliding of junctions in active MRs. The sliding process is assumed to be thermally activated, and its rate is described by the Eyring equation (1). Equation (10) for the rate of viscoplastic strain is based on the first-order kinetics of sliding, and it does not contain experimental constants (the attempt rate $\Gamma_{0}$ is set to 1 s$^{-1}$). The distribution of active meso-domains is determined by the random energy model, Eq. (6), with three adjustable parameters: the average activation energy for sliding of nodes, $\Omega$, the standard deviation of potential energies of active MRs, $\Sigma$, and the concentration of active meso-domains, $\kappa$, that are affected by mechanical factors. With reference to the concept of mechanically-induced material clocks, two hypotheses are analyzed: (i) the quantities $\Omega$, $\Sigma$ and $\kappa$ are stress-dependent, and (ii) these parameters are governed by the current strain. Stress–strain relations are derived by using the laws of thermodynamics. The constitutive equations are developed under the assumptions that (i) a strand can be modelled as a linear elastic medium with the strain energy (11), and (ii) the energy of inter-chain interaction is negligible. These equations contain 6 material constants: an elastic modulus $E$, and the quantities $a$, $b$, $\Omega$, $\Sigma$ and $\kappa$. To find these parameters, a tensile test with a constant strain rate, a series of 8 creep tests, and a series of 4 relaxation tests have been performed on injection-molded isotactic polypropylene at room temperature. Material constants are determined by fitting experimental data. Figures 2, 4 and 5 demonstrate fair agreement between the observations and the results of numerical simulation. The following conclusions are drawn: 1.
Stretching of iPP in the sub-yield region of deformation increases the average potential energy for sliding of junctions and the standard deviation of potential energies of active MRs. This increase is attributed to fragmentation of subsidiary lamellae and activation of the rigid amorphous fraction.

2. In the vicinity of the yield point, the parameters $\Omega$ and $\Sigma$ decrease noticeably, which is associated with the disappearance of physical cross-links formed by blocks of disintegrated thin lamellae.

3. In the post-yield region of deformation, the quantities $\Omega$ and $\Sigma$ grow with strain, which is explained by fragmentation of dominant lamellae.

4. The concentration of active meso-regions, $\kappa$, monotonically increases with strain in both the sub-yield and post-yield domains.

5. The hypothesis of a strain-induced material clock appears more consistent with the morphological transformations in isotactic polypropylene under loading than the assumption that the material parameters depend on the current stress.

This study focused on modelling the nonlinear elastoplastic and viscoplastic behavior of iPP in creep and relaxation tests at relatively small strains. Some important questions remain, however, beyond the scope of the present work. In particular, transitions from primary creep to secondary and tertiary creep have not been analyzed. No attention has been paid to the effect of the time and temperature of annealing on the time-dependent response of isotactic polypropylene. The applicability of the constitutive equations to the description of the mechanical behavior of other semi-crystalline polymers has not been examined. These issues will be the subject of a subsequent publication.

References {#references .unnumbered}
==========

Aboulfaraj, M., C. G’Sell, B. Ulrich, and A.
Dahoun, “[*In situ*]{} observation of the plastic deformation of polypropylene spherulites under uniaxial tension and simple shear in the scanning electron microscope," Polymer [**36**]{}, 731–742 (1995).

Al-Raheil, I.A., A.M. Qudah, and M. Al-Share, “Isotactic polypropylene crystallized from the melt. 2. Thermal melting behavior," J. Appl. Polym. Sci. [**67**]{}, 1267–1271 (1998).

Andreassen, E., “Stress relaxation of polypropylene fibres with various morphologies," Polymer [**40**]{}, 3909–3918 (1999).

Ariyama, T., “Viscoelastic-plastic behaviour with mean strain changes in polypropylene," J. Mater. Sci. [**31**]{}, 4127–4131 (1996).

Ariyama, T., Y. Mori, and K. Kaneko, “Tensile properties and stress relaxation of polypropylene at elevated temperatures," Polym. Eng. Sci. [**37**]{}, 81–90 (1997).

Arruda, E.M., M.C. Boyce, and R. Jayachandran, “Effects of strain rate, temperature and thermomechanical coupling on the finite strain deformation of glassy polymers," Mech. Mater. [**19**]{}, 193–212 (1995).

Bordonaro, C.M. and E. Krempl, “The effect of strain rate on the deformation and relaxation behavior of 6/6 nylon at room temperature," Polym. Eng. Sci. [**32**]{}, 1066–1072 (1992).

Boyce, M.C., D.M. Parks, and A.S. Argon, “Large inelastic deformation of glassy polymers. 1. Rate dependent constitutive model," Mech. Mater. [**7**]{}, 15–33 (1988).

Coulon, G., G. Castelein, and C. G’Sell, “Scanning force microscopic investigation of plasticity and damage mechanisms in polypropylene spherulites under simple shear," Polymer [**40**]{}, 95–110 (1998).

Drozdov, A.D., “Mechanics of Viscoelastic Solids," John Wiley & Sons, Chichester (1998).

Drozdov, A.D., “A model for the viscoelastic and viscoplastic responses of glassy polymers," Int. J. Solids Structures [**38**]{}, 8285–8304 (2001).

Duan, Y., A. Saigal, R. Grief, and M.A. Zimmerman, “A uniform phenomenological constitutive model for glassy and semicrystalline polymers," Polym. Eng. Sci. [**41**]{}, 1322–1328 (2001).
Dutta, N.K. and G.H. Edward, “Generic relaxation spectra of solid polymers. 1. Development of spectral distribution model and its application to stress relaxation of polypropylene," J. Appl. Polym. Sci. [**66**]{}, 1101–1115 (1997).

Eyring, H., “Viscosity, plasticity, and diffusion as examples of absolute reaction rates," J. Chem. Phys. [**4**]{}, 283–291 (1936).

G’Sell, C. and J.J. Jonas, “Determination of the plastic behavior of solid polymers at constant true strain rate," J. Mater. Sci. [**14**]{}, 583–591 (1979).

Hasan, O.A. and M.C. Boyce, “A constitutive model for the nonlinear viscoelastic viscoplastic behavior of glassy polymers," Polym. Eng. Sci. [**35**]{}, 331–344 (1995).

Haward, R.N. and G. Thackray, “The use of a mathematical model to describe stress–strain curves in glassy thermoplastics," Proc. Roy. Soc. London [**A302**]{}, 453–472 (1968).

Iijima, M. and G. Strobl, “Isothermal crystallization and melting of isotactic polypropylene analyzed by time- and temperature-dependent small-angle X-ray scattering experiments," Macromolecules [**33**]{}, 5204–5214 (2000).

Kalay, G. and M.J. Bevis, “Processing and physical property relationships in injection-molded isotactic polypropylene. 1. Mechanical properties," J. Polym. Sci. B: Polym. Phys. [**35**]{}, 241–263 (1997).

Krempl, E. and C.M. Bordonaro, “A state variable model for high strength polymers," Polym. Eng. Sci. [**35**]{}, 310–316 (1995).

Krempl, E. and C.M. Bordonaro, “Non-proportional loading of nylon 66 at room temperature," Int. J. Plasticity [**14**]{}, 245–258 (1998).

Labour, T., C. Gauthier, R. Seguela, G. Vigier, Y. Bomal, and G. Orange, “Influence of the $\beta$ crystalline phase on the mechanical properties of unfilled and CaCO$_{3}$-filled polypropylene. 1. Structural and mechanical characterization," Polymer [**42**]{}, 7127–7135 (2001).

Lopez-Manchado, M.A. and M.
Arroyo, “Thermal and dynamic mechanical properties of polypropylene and short organic fiber composites," Polymer [**41**]{}, 7761–7767 (2000).

Lustig, S.R., R.M. Shay, and J.M. Caruthers, “Thermodynamic constitutive equations for materials with memory on a material time scale," J. Rheol. [**40**]{}, 69–106 (1996).

Maiti, P., M. Hikosaka, K. Yamada, A. Toda, and F. Gu, “Lamellar thickening in isotactic polypropylene with high tacticity crystallized at high temperature," Macromolecules [**33**]{}, 9069–9075 (2000).

Meyer, R.W. and L.A. Pruitt, “The effect of cyclic true strain on the morphology, structure, and relaxation behavior of ultra high molecular weight polyethylene," Polymer [**42**]{}, 5293–5306 (2001).

Nitta, K.-H. and M. Takayanagi, “Role of tie molecules in the yielding deformation of isotactic polypropylene," J. Polym. Sci. B: Polym. Phys. [**37**]{}, 357–368 (1999).

Nitta, K.-H. and M. Takayanagi, “Tensile yield of isotactic polypropylene in terms of a lamellar-cluster model," J. Polym. Sci. B: Polym. Phys. [**38**]{}, 1037–1044 (2000).

Read, B.E. and P.E. Tomlins, “Time-dependent deformation of polypropylene in response to different stress histories," Polymer [**38**]{}, 4617–4628 (1997).

Seguela, R., E. Staniek, B. Escaig, and B. Fillon, “Plastic deformation of polypropylene in relation to crystalline structure," J. Appl. Polym. Sci. [**71**]{}, 1873–1885 (1999).

Smart, J. and J.G. Williams, “A comparison of single-integral non-linear viscoelasticity theories," J. Mech. Phys. Solids [**20**]{}, 313–324 (1972).

Spathis, G. and E. Kontou, “Experimental and theoretical description of the plastic behaviour of semicrystalline polymers," Polymer [**39**]{}, 135–142 (1998).

Staniek, E., R. Seguela, B. Escaig, and P. Francois, “Plastic behavior of monoclinic polypropylene under hydrostatic pressure in compressive testing," J. Appl. Polym. Sci. [**72**]{}, 1241–1247 (1999).

Sweeney, J., T.L.D. Collins, P.D. Coates, and R.A.
Duckett, “High temperature large strain viscoelastic behaviour of polypropylene modeled using an inhomogeneously strained network," J. Appl. Polym. Sci. [**72**]{}, 563–575 (1999).

Sweeney, J. and I.M. Ward, “The modelling of multiaxial necking in polypropylene using a sliplink-crosslink theory," J. Rheol. [**39**]{}, 861–872 (1995).

Sweeney, J. and I.M. Ward, “A constitutive law for large deformations of polymers at high temperatures," J. Mech. Phys. Solids [**44**]{}, 1033–1049 (1996).

Tomlins, P.E. and B.E. Read, “Creep and physical ageing of polypropylene: a comparison of models," Polymer [**39**]{}, 355–367 (1998).

Verma, R., H. Marand, and B. Hsiao, “Morphological changes during secondary crystallization and subsequent melting in poly(ether ether ketone) as studied by real time small angle X-ray scattering," Macromolecules [**29**]{}, 7767–7775 (1996).

Ward, I.M. and J.M. Wolfe, “The non-linear mechanical behaviour of polypropylene fibers under complex loading programmes," J. Mech. Phys. Solids [**14**]{}, 131–140 (1966).

Wineman, A.S., “Branching of strain histories for nonlinear viscoelastic solids with a strain clock," Acta Mech. [**153**]{}, 15–21 (2002).

Wortmann, F.-J. and K.V. Schulz, “Non-linear viscoelastic performance of Nomex, Kevlar and polypropylene fibres in a single-step stress relaxation test: 1. Experimental data and principles of analysis," Polymer [**35**]{}, 2108 (1994).

Wortmann, F.-J. and K.V. Schulz, “Non-linear viscoelastic performance of Nomex, Kevlar and polypropylene fibres in a single step stress relaxation test: 2. Moduli, viscosities and isochronal stress/strain curves," Polymer [**36**]{}, 2363–2369 (1995).

Zhang, C. and I.D. Moore, “Nonlinear mechanical response of high density polyethylene. 1: Experimental investigation and model evaluation," Polym. Eng. Sci. [**37**]{}, 404–413 (1997).

Zhang, X.C., M.F. Butler, and R.E.
Cameron, “The relationships between morphology, irradiation and the ductile–brittle transition of isotactic polypropylene," Polym. Int. [**48**]{}, 1173–1178 (1999).

List of figures {#list-of-figures .unnumbered}
===============

[**Figure 1:**]{} The engineering stress $\sigma$ MPa (unfilled circles) and the true stress $\sigma_{\rm t}$ MPa (filled circles) versus elongation ratio $\lambda$ in a tensile test. Symbols: experimental data

[**Figure 2:**]{} The strain $\epsilon$ versus time $t$ s in a tensile creep test with an engineering stress $\sigma^{0}$ MPa. Circles: experimental data. Curve 1: $\sigma_{1}^{0}=10.0$; curve 2: $\sigma_{2}^{0}=15.0$; curve 3: $\sigma_{3}^{0}=20.0$; curve 4: $\sigma_{4}^{0}=25.0$; curve 5: $\sigma_{5}^{0}=30.0$; curve 6: $\sigma_{6}^{0}=30.38$. Solid lines: results of numerical simulation

[**Figure 3:**]{} The Hencky strain $\epsilon_{H}$ versus time $t$ s in a tensile creep test with an engineering stress $\sigma^{0}$ MPa. Symbols: experimental data. Unfilled circles: $\sigma_{4}^{0}=25.00$; filled circles: $\sigma_{5}^{0}=30.00$; asterisks: $\sigma_{6}^{0}=30.38$; diamonds: $\sigma_{7}^{0}=30.94$; triangles: $\sigma_{8}^{0}=32.80$. The lines AA$^{\prime}$ and BB$^{\prime}$ indicate the strains corresponding to transitions from the primary creep to the secondary creep and from the secondary creep to the tertiary creep, respectively

[**Figure 4:**]{} The engineering stress $\sigma$ MPa versus time $t$ s in a tensile relaxation test with a longitudinal strain $\epsilon^{0}$. Circles: experimental data. Solid lines: results of numerical simulation. Curve 1: $\epsilon_{1}^{0}=0.05$; curve 2: $\epsilon_{2}^{0}=0.10$; curve 3: $\epsilon_{3}^{0}=0.15$; curve 4: $\epsilon_{4}^{0}=0.20$

[**Figure 5:**]{} The engineering stress $\sigma$ MPa versus strain $\epsilon$ in a tensile test. Circles: experimental data.
Solid line: results of numerical simulation

[**Figure 6:**]{} The elastoplastic strain $\epsilon_{\rm ep}$ (curve 1) and the ratio $r$ of the rate of elastoplastic strain to the engineering stress (curve 2) versus strain $\epsilon$ in a tensile test. Solid lines: results of numerical simulation

[**Figure 7:**]{} The average potential energy of meso-regions $\Omega$ versus engineering stress $\sigma$ MPa. Symbols: treatment of observations. Unfilled circles: creep tests; filled circle: relaxation test. Solid lines: approximation of the experimental data by Eq. (33). Curve 1: $\Omega_{0}=5.62$, $\Omega_{1}=0.17$; curve 2: $\Omega_{0}=25.66$, $\Omega_{1}=-0.60$

[**Figure 8:**]{} The standard deviation of potential energies of meso-regions $\Sigma$ versus engineering stress $\sigma$ MPa. Symbols: treatment of observations. Unfilled circles: creep tests; filled circle: relaxation test. Solid lines: approximation of the experimental data by Eq. (33). Curve 1: $\Sigma_{0}=2.52$, $\Sigma_{1}=0.09$; curve 2: $\Sigma_{0}=10.28$, $\Sigma_{1}=-0.26$

[**Figure 9:**]{} The concentration of active meso-regions $\kappa$ versus engineering stress $\sigma$ MPa. Symbols: treatment of observations. Unfilled circles: creep tests; filled circle: relaxation test. Solid lines: approximation of the experimental data by Eq. (33). Curve 1: $\kappa_{0}=0.0900$, $\kappa_{1}=0.0080$; curve 2: $\kappa_{0}=-1.1995$, $\kappa_{1}=0.0595$

[**Figure 10:**]{} The average potential energy of meso-regions $\Omega$ versus longitudinal strain $\epsilon$. Symbols: treatment of observations. Unfilled circles: creep tests; filled circles: relaxation tests. Solid lines: approximation of the experimental data by Eq. (35). Curve 1: $\Omega_{0}=6.66$, $\Omega_{1}=130.66$; curve 2: $\Omega_{0}=3.95$, $\Omega_{1}=20.20$

[**Figure 11:**]{} The standard deviation of potential energies of meso-regions $\Sigma$ versus longitudinal strain $\epsilon$. Symbols: treatment of observations.
Unfilled circles: creep tests; filled circles: relaxation tests. Solid lines: approximation of the experimental data by Eq. (35). Curve 1: $\Sigma_{0}=3.53$, $\Sigma_{1}=25.88$; curve 2: $\Sigma_{0}=1.00$, $\Sigma_{1}=23.80$

[**Figure 12:**]{} The concentration of active meso-regions $\kappa$ versus longitudinal strain $\epsilon$. Symbols: treatment of observations. Unfilled circles: creep tests; filled circles: relaxation tests. Solid line: approximation of the experimental data by Eq. (35) with $\kappa_{0}=0.16$, $\kappa_{1}=3.95$

[Plot data for the figures (LaTeX picture-environment point coordinates) omitted.]
11.70)[$\ast$]{} ( 8.10, 12.14)[$\ast$]{} ( 10.00, 12.66)[$\ast$]{} ( 12.20, 13.23)[$\ast$]{} ( 14.90, 13.90)[$\ast$]{} ( 18.10, 14.65)[$\ast$]{} ( 21.90, 15.55)[$\ast$]{} ( 26.50, 16.62)[$\ast$]{} ( 32.10, 18.00)[$\ast$]{} ( 38.80, 19.78)[$\ast$]{} ( 46.90, 22.25)[$\ast$]{} ( 56.50, 26.12)[$\ast$]{} ( 68.20, 33.56)[$\ast$]{} ( 82.20, 57.78)[$\ast$]{} ( 85.40, 82.41)[$\ast$]{} ( 43.60, 21.19)[$\ast$]{} ( 48.60, 22.86)[$\ast$]{} ( 53.60, 24.81)[$\ast$]{} ( 58.60, 27.13)[$\ast$]{} ( 63.60, 30.08)[$\ast$]{} ( 68.60, 33.91)[$\ast$]{} ( 73.60, 39.25)[$\ast$]{} ( 78.60, 47.50)[$\ast$]{} ( 83.60, 64.45)[$\ast$]{} ( 73.60, 39.25)[$\ast$]{} ( 74.60, 40.61)[$\ast$]{} ( 75.60, 42.09)[$\ast$]{} ( 76.60, 43.70)[$\ast$]{} ( 77.60, 45.50)[$\ast$]{} ( 78.60, 47.50)[$\ast$]{} ( 79.60, 49.77)[$\ast$]{} ( 80.60, 52.42)[$\ast$]{} ( 81.60, 55.57)[$\ast$]{} ( 82.60, 59.42)[$\ast$]{} ( 83.60, 64.45)[$\ast$]{} ( 84.10, 67.68)[$\ast$]{} ( 84.60, 71.75)[$\ast$]{} ( 85.10, 77.39)[$\ast$]{} ( -0.80, 8.76)[$\diamond$]{} ( 4.20, 12.14)[$\diamond$]{} ( 9.20, 14.02)[$\diamond$]{} ( 14.20, 15.63)[$\diamond$]{} ( 19.20, 17.20)[$\diamond$]{} ( 24.20, 18.84)[$\diamond$]{} ( 29.20, 20.63)[$\diamond$]{} ( 34.20, 22.70)[$\diamond$]{} ( 39.20, 25.21)[$\diamond$]{} ( 44.20, 28.46)[$\diamond$]{} ( 49.20, 32.95)[$\diamond$]{} ( 54.20, 40.06)[$\diamond$]{} ( 55.20, 42.10)[$\diamond$]{} ( 56.20, 44.50)[$\diamond$]{} ( 57.20, 47.48)[$\diamond$]{} ( 58.20, 51.46)[$\diamond$]{} ( 59.20, 57.93)[$\diamond$]{} ( 59.70, 65.48)[$\diamond$]{} ( -1.80, 10.35)[$\triangle$]{} ( -0.80, 11.91)[$\triangle$]{} ( 0.20, 13.23)[$\triangle$]{} ( 1.20, 14.54)[$\triangle$]{} ( 2.20, 15.91)[$\triangle$]{} ( 3.20, 17.44)[$\triangle$]{} ( 4.20, 19.23)[$\triangle$]{} ( 5.20, 21.47)[$\triangle$]{} ( 6.20, 24.41)[$\triangle$]{} ( 7.20, 28.54)[$\triangle$]{} ( 8.20, 35.01)[$\triangle$]{} ( 8.30, 35.88)[$\triangle$]{} ( 8.40, 36.81)[$\triangle$]{} ( 8.50, 37.79)[$\triangle$]{} ( 8.60, 38.86)[$\triangle$]{} ( 8.70, 40.02)[$\triangle$]{} ( 
8.80, 41.28)[$\triangle$]{} ( 8.90, 42.66)[$\triangle$]{} ( 9.00, 44.17)[$\triangle$]{} ( 9.10, 45.88)[$\triangle$]{} ( 9.20, 47.79)[$\triangle$]{} ( 9.30, 49.99)[$\triangle$]{} ( 9.40, 52.61)[$\triangle$]{} ( 9.50, 55.83)[$\triangle$]{} ( 9.60, 60.09)[$\triangle$]{} ( 9.70, 66.68)[$\triangle$]{} (100,100) (0,0)[(100,100)]{} (10,0)(10,0)[9]{}[(0,1)[2]{}]{} (0,16.67)(0,16.67)[5]{}[(1,0)[2]{}]{} (0,-8)[1.0]{} (94,-8)[3.0]{} (50,-8)[$\log t$]{} (-12,0)[17.0]{} (-12,96)[32.0]{} (-12,60)[$\sigma$]{} (102,39)[1]{} (102,33)[2]{} (102,20)[3]{} (102,13)[4]{} ( 3.97, 81.12) ( 7.31, 79.22) ( 11.53, 77.33) ( 15.06, 76.86) ( 19.90, 73.89) ( 23.86, 72.73) ( 27.82, 71.64) ( 31.68, 69.39) ( 35.81, 67.86) ( 39.97, 66.17) ( 43.76, 64.22) ( 47.96, 62.22) ( 51.88, 60.04) ( 55.87, 58.13) ( 59.94, 55.80) ( 63.94, 54.49) ( 68.00, 52.54) ( 71.97, 50.76) ( 76.00, 48.64) ( 80.00, 47.38) ( 83.98, 45.78) ( 87.99, 44.11) ( 92.01, 42.18) ( 95.98, 40.15) ( 100.00, 38.63) ( 0.43, 81.83) ( 0.85, 81.69) ( 1.27, 81.55) ( 1.67, 81.42) ( 2.27, 81.22) ( 2.85, 81.02) ( 3.41, 80.83) ( 3.96, 80.65) ( 4.50, 80.46) ( 5.02, 80.28) ( 5.53, 80.10) ( 6.03, 79.93) ( 6.52, 79.75) ( 6.99, 79.58) ( 7.46, 79.42) ( 7.92, 79.25) ( 8.37, 79.09) ( 8.80, 78.93) ( 9.23, 78.77) ( 9.66, 78.62) ( 10.07, 78.47) ( 10.48, 78.32) ( 11.01, 78.12) ( 11.52, 77.93) ( 12.03, 77.73) ( 12.52, 77.55) ( 13.00, 77.36) ( 13.48, 77.18) ( 13.94, 77.00) ( 14.39, 76.83) ( 14.83, 76.66) ( 15.27, 76.49) ( 15.69, 76.32) ( 16.11, 76.16) ( 16.52, 75.99) ( 16.92, 75.83) ( 17.42, 75.64) ( 17.90, 75.44) ( 18.37, 75.25) ( 18.83, 75.07) ( 19.28, 74.88) ( 19.72, 74.70) ( 20.16, 74.53) ( 20.58, 74.35) ( 21.00, 74.18) ( 21.41, 74.01) ( 21.81, 73.84) ( 22.28, 73.65) ( 22.74, 73.45) ( 23.19, 73.26) ( 23.64, 73.08) ( 24.07, 72.89) ( 24.50, 72.71) ( 24.92, 72.53) ( 25.33, 72.36) ( 25.73, 72.19) ( 26.19, 71.99) ( 26.64, 71.79) ( 27.08, 71.60) ( 27.51, 71.41) ( 27.94, 71.23) ( 28.35, 71.05) ( 28.76, 70.87) ( 29.16, 70.69) ( 29.61, 70.49) ( 30.05, 70.30) ( 30.48, 
70.11) ( 30.90, 69.92) ( 31.32, 69.74) ( 31.72, 69.56) ( 32.17, 69.35) ( 32.61, 69.16) ( 33.04, 68.96) ( 33.47, 68.77) ( 33.88, 68.58) ( 34.29, 68.40) ( 34.73, 68.20) ( 35.16, 68.00) ( 35.59, 67.81) ( 36.01, 67.62) ( 36.42, 67.43) ( 36.82, 67.24) ( 37.25, 67.05) ( 37.68, 66.85) ( 38.10, 66.66) ( 38.51, 66.47) ( 38.91, 66.28) ( 39.34, 66.08) ( 39.76, 65.89) ( 40.17, 65.69) ( 40.58, 65.50) ( 41.01, 65.30) ( 41.43, 65.11) ( 41.85, 64.91) ( 42.25, 64.72) ( 42.68, 64.52) ( 43.11, 64.32) ( 43.52, 64.13) ( 43.93, 63.94) ( 44.35, 63.74) ( 44.77, 63.54) ( 45.18, 63.34) ( 45.58, 63.15) ( 46.01, 62.95) ( 46.42, 62.76) ( 46.83, 62.57) ( 47.25, 62.37) ( 47.66, 62.17) ( 48.07, 61.98) ( 48.49, 61.78) ( 48.91, 61.58) ( 49.32, 61.38) ( 49.74, 61.18) ( 50.15, 60.99) ( 50.56, 60.79) ( 50.98, 60.59) ( 51.39, 60.40) ( 51.79, 60.21) ( 52.21, 60.01) ( 52.62, 59.81) ( 53.02, 59.62) ( 53.43, 59.43) ( 53.83, 59.23) ( 54.25, 59.04) ( 54.65, 58.84) ( 55.07, 58.65) ( 55.48, 58.45) ( 55.88, 58.26) ( 56.29, 58.06) ( 56.69, 57.87) ( 57.10, 57.68) ( 57.51, 57.49) ( 57.92, 57.29) ( 58.32, 57.10) ( 58.73, 56.90) ( 59.13, 56.71) ( 59.54, 56.52) ( 59.95, 56.33) ( 60.35, 56.14) ( 60.76, 55.95) ( 61.16, 55.75) ( 61.57, 55.56) ( 61.98, 55.37) ( 62.39, 55.18) ( 62.79, 54.99) ( 63.19, 54.80) ( 63.60, 54.61) ( 64.01, 54.42) ( 64.41, 54.23) ( 64.82, 54.03) ( 65.22, 53.85) ( 65.63, 53.66) ( 66.04, 53.47) ( 66.44, 53.28) ( 66.84, 53.09) ( 67.25, 52.91) ( 67.66, 52.72) ( 68.06, 52.53) ( 68.46, 52.35) ( 68.87, 52.16) ( 69.27, 51.97) ( 69.68, 51.79) ( 70.09, 51.60) ( 70.49, 51.42) ( 70.89, 51.23) ( 71.29, 51.05) ( 71.70, 50.86) ( 72.10, 50.68) ( 72.50, 50.50) ( 72.91, 50.32) ( 73.31, 50.13) ( 73.72, 49.95) ( 74.12, 49.77) ( 74.53, 49.59) ( 74.93, 49.41) ( 75.33, 49.23) ( 75.73, 49.05) ( 76.14, 48.88) ( 76.54, 48.70) ( 76.94, 48.52) ( 77.35, 48.34) ( 77.75, 48.17) ( 78.15, 47.99) ( 78.55, 47.82) ( 78.95, 47.64) ( 79.36, 47.47) ( 79.76, 47.29) ( 80.16, 47.12) ( 80.56, 46.95) ( 80.97, 46.77) ( 81.37, 46.60) ( 81.77, 
46.43) ( 82.17, 46.26) ( 82.57, 46.09) ( 82.98, 45.92) ( 83.38, 45.75) ( 83.78, 45.59) ( 84.18, 45.42) ( 84.59, 45.25) ( 84.99, 45.08) ( 85.39, 44.92) ( 85.79, 44.75) ( 86.19, 44.59) ( 86.60, 44.43) ( 87.00, 44.26) ( 87.40, 44.10) ( 87.81, 43.94) ( 88.21, 43.77) ( 88.61, 43.61) ( 89.02, 43.45) ( 89.42, 43.29) ( 89.82, 43.13) ( 90.22, 42.98) ( 90.63, 42.82) ( 91.03, 42.66) ( 91.43, 42.51) ( 91.83, 42.35) ( 92.23, 42.20) ( 92.63, 42.04) ( 93.03, 41.89) ( 93.43, 41.74) ( 93.83, 41.59) ( 94.23, 41.44) ( 94.64, 41.29) ( 95.04, 41.14) ( 95.44, 40.99) ( 95.84, 40.84) ( 96.24, 40.69) ( 96.64, 40.55) ( 97.04, 40.40) ( 97.45, 40.25) ( 97.85, 40.11) ( 98.25, 39.97) ( 98.65, 39.82) ( 99.05, 39.68) ( 99.45, 39.54) ( 99.85, 39.40) ( 3.97, 84.05) ( 7.32, 81.93) ( 11.54, 81.19) ( 15.06, 80.81) ( 19.91, 78.06) ( 23.86, 75.65) ( 27.82, 73.72) ( 31.68, 70.89) ( 35.81, 68.45) ( 39.97, 65.28) ( 43.76, 63.73) ( 47.96, 61.66) ( 51.88, 59.30) ( 55.87, 57.50) ( 59.94, 55.49) ( 63.95, 52.65) ( 68.00, 50.19) ( 69.11, 49.54) ( 71.91, 47.83) ( 76.39, 45.65) ( 80.20, 43.37) ( 84.17, 41.01) ( 88.23, 38.41) ( 92.71, 36.18) ( 94.98, 35.15) ( 100.00, 32.63) ( 0.43, 86.00) ( 0.85, 85.83) ( 1.27, 85.66) ( 1.67, 85.50) ( 2.27, 85.26) ( 2.85, 85.02) ( 3.41, 84.79) ( 3.96, 84.56) ( 4.50, 84.33) ( 5.02, 84.11) ( 5.53, 83.89) ( 6.03, 83.68) ( 6.52, 83.47) ( 6.99, 83.26) ( 7.46, 83.06) ( 7.92, 82.86) ( 8.37, 82.66) ( 8.80, 82.46) ( 9.23, 82.27) ( 9.66, 82.08) ( 10.07, 81.90) ( 10.48, 81.71) ( 11.01, 81.47) ( 11.52, 81.24) ( 12.03, 81.01) ( 12.52, 80.78) ( 13.00, 80.55) ( 13.48, 80.33) ( 13.94, 80.12) ( 14.39, 79.90) ( 14.83, 79.70) ( 15.27, 79.49) ( 15.69, 79.29) ( 16.11, 79.09) ( 16.52, 78.89) ( 16.92, 78.69) ( 17.42, 78.46) ( 17.90, 78.22) ( 18.37, 77.99) ( 18.83, 77.76) ( 19.28, 77.54) ( 19.72, 77.32) ( 20.16, 77.11) ( 20.58, 76.89) ( 21.00, 76.68) ( 21.41, 76.48) ( 21.81, 76.28) ( 22.28, 76.04) ( 22.74, 75.80) ( 23.19, 75.57) ( 23.64, 75.34) ( 24.07, 75.12) ( 24.50, 74.90) ( 24.92, 74.68) ( 25.33, 
74.47) ( 25.73, 74.26) ( 26.19, 74.02) ( 26.64, 73.78) ( 27.08, 73.55) ( 27.51, 73.32) ( 27.94, 73.10) ( 28.35, 72.88) ( 28.76, 72.66) ( 29.16, 72.45) ( 29.61, 72.21) ( 30.05, 71.97) ( 30.48, 71.74) ( 30.90, 71.51) ( 31.32, 71.28) ( 31.72, 71.06) ( 32.17, 70.82) ( 32.61, 70.58) ( 33.04, 70.34) ( 33.47, 70.11) ( 33.88, 69.88) ( 34.29, 69.66) ( 34.73, 69.41) ( 35.16, 69.17) ( 35.59, 68.94) ( 36.01, 68.70) ( 36.42, 68.48) ( 36.82, 68.25) ( 37.25, 68.01) ( 37.68, 67.77) ( 38.10, 67.53) ( 38.51, 67.30) ( 38.91, 67.08) ( 39.34, 66.83) ( 39.76, 66.59) ( 40.17, 66.36) ( 40.58, 66.13) ( 41.01, 65.88) ( 41.43, 65.64) ( 41.85, 65.41) ( 42.25, 65.17) ( 42.68, 64.93) ( 43.11, 64.68) ( 43.52, 64.45) ( 43.93, 64.21) ( 44.35, 63.97) ( 44.77, 63.73) ( 45.18, 63.49) ( 45.58, 63.26) ( 46.01, 63.01) ( 46.42, 62.77) ( 46.83, 62.54) ( 47.25, 62.29) ( 47.66, 62.05) ( 48.07, 61.81) ( 48.49, 61.56) ( 48.91, 61.32) ( 49.32, 61.08) ( 49.74, 60.84) ( 50.15, 60.60) ( 50.56, 60.36) ( 50.98, 60.11) ( 51.39, 59.87) ( 51.79, 59.64) ( 52.21, 59.39) ( 52.62, 59.15) ( 53.02, 58.92) ( 53.43, 58.68) ( 53.83, 58.44) ( 54.25, 58.19) ( 54.65, 57.96) ( 55.07, 57.71) ( 55.48, 57.47) ( 55.88, 57.23) ( 56.29, 56.99) ( 56.69, 56.76) ( 57.10, 56.51) ( 57.51, 56.28) ( 57.92, 56.04) ( 58.32, 55.80) ( 58.73, 55.56) ( 59.13, 55.32) ( 59.54, 55.08) ( 59.95, 54.84) ( 60.35, 54.60) ( 60.76, 54.37) ( 61.16, 54.13) ( 61.57, 53.89) ( 61.98, 53.65) ( 62.39, 53.41) ( 62.79, 53.17) ( 63.19, 52.94) ( 63.60, 52.70) ( 64.01, 52.46) ( 64.41, 52.22) ( 64.82, 51.98) ( 65.22, 51.75) ( 65.63, 51.51) ( 66.04, 51.27) ( 66.44, 51.04) ( 66.84, 50.81) ( 67.25, 50.57) ( 67.66, 50.33) ( 68.06, 50.10) ( 68.46, 49.87) ( 68.87, 49.63) ( 69.27, 49.40) ( 69.68, 49.16) ( 70.09, 48.93) ( 70.49, 48.70) ( 70.89, 48.46) ( 71.29, 48.23) ( 71.70, 48.00) ( 72.10, 47.77) ( 72.50, 47.54) ( 72.91, 47.31) ( 73.31, 47.08) ( 73.72, 46.85) ( 74.12, 46.62) ( 74.53, 46.39) ( 74.93, 46.16) ( 75.33, 45.93) ( 75.73, 45.70) ( 76.14, 45.48) ( 76.54, 45.25) ( 76.94, 
45.02) ( 77.35, 44.80) ( 77.75, 44.57) ( 78.15, 44.35) ( 78.55, 44.13) ( 78.95, 43.90) ( 79.36, 43.68) ( 79.76, 43.46) ( 80.16, 43.23) ( 80.56, 43.01) ( 80.97, 42.79) ( 81.37, 42.57) ( 81.77, 42.35) ( 82.17, 42.13) ( 82.57, 41.91) ( 82.98, 41.69) ( 83.38, 41.48) ( 83.78, 41.26) ( 84.18, 41.04) ( 84.59, 40.83) ( 84.99, 40.61) ( 85.39, 40.40) ( 85.79, 40.18) ( 86.19, 39.97) ( 86.60, 39.76) ( 87.00, 39.54) ( 87.40, 39.33) ( 87.81, 39.12) ( 88.21, 38.91) ( 88.61, 38.70) ( 89.02, 38.49) ( 89.42, 38.28) ( 89.82, 38.07) ( 90.22, 37.86) ( 90.63, 37.66) ( 91.03, 37.45) ( 91.43, 37.25) ( 91.83, 37.04) ( 92.23, 36.84) ( 92.63, 36.64) ( 93.03, 36.44) ( 93.43, 36.23) ( 93.83, 36.03) ( 94.23, 35.84) ( 94.64, 35.64) ( 95.04, 35.44) ( 95.44, 35.24) ( 95.84, 35.04) ( 96.24, 34.85) ( 96.64, 34.65) ( 97.04, 34.46) ( 97.45, 34.26) ( 97.85, 34.07) ( 98.25, 33.88) ( 98.65, 33.69) ( 99.05, 33.50) ( 99.45, 33.31) ( 99.85, 33.12) ( 3.96, 77.18) ( 7.31, 75.99) ( 11.53, 74.78) ( 16.11, 71.50) ( 19.90, 69.74) ( 23.86, 67.77) ( 27.82, 65.42) ( 31.67, 63.11) ( 35.80, 60.49) ( 39.97, 58.70) ( 44.04, 56.25) ( 47.95, 53.92) ( 51.88, 50.86) ( 56.03, 48.32) ( 59.94, 46.79) ( 63.94, 44.44) ( 68.00, 40.91) ( 71.97, 39.83) ( 75.99, 37.65) ( 80.00, 35.37) ( 83.97, 30.01) ( 87.99, 27.41) ( 92.01, 26.18) ( 95.98, 22.15) ( 100.00, 20.63) ( 0.43, 79.24) ( 0.85, 79.05) ( 1.27, 78.87) ( 1.67, 78.68) ( 2.27, 78.41) ( 2.85, 78.14) ( 3.41, 77.88) ( 3.96, 77.63) ( 4.50, 77.38) ( 5.02, 77.13) ( 5.53, 76.89) ( 6.03, 76.66) ( 6.52, 76.42) ( 6.99, 76.20) ( 7.46, 75.98) ( 7.92, 75.76) ( 8.37, 75.54) ( 8.80, 75.33) ( 9.23, 75.12) ( 9.66, 74.92) ( 10.07, 74.72) ( 10.48, 74.52) ( 11.01, 74.26) ( 11.52, 74.01) ( 12.03, 73.76) ( 12.52, 73.52) ( 13.00, 73.28) ( 13.48, 73.04) ( 13.94, 72.81) ( 14.39, 72.58) ( 14.83, 72.36) ( 15.27, 72.14) ( 15.69, 71.93) ( 16.11, 71.72) ( 16.52, 71.51) ( 16.92, 71.30) ( 17.42, 71.05) ( 17.90, 70.81) ( 18.37, 70.56) ( 18.83, 70.33) ( 19.28, 70.09) ( 19.72, 69.86) ( 20.16, 69.64) ( 20.58, 
69.42) ( 21.00, 69.20) ( 21.41, 68.98) ( 21.81, 68.77) ( 22.28, 68.52) ( 22.74, 68.28) ( 23.19, 68.04) ( 23.64, 67.80) ( 24.07, 67.57) ( 24.50, 67.35) ( 24.92, 67.12) ( 25.33, 66.90) ( 25.73, 66.69) ( 26.19, 66.44) ( 26.64, 66.19) ( 27.08, 65.96) ( 27.51, 65.72) ( 27.94, 65.49) ( 28.35, 65.26) ( 28.76, 65.04) ( 29.16, 64.82) ( 29.61, 64.57) ( 30.05, 64.33) ( 30.48, 64.09) ( 30.90, 63.86) ( 31.32, 63.63) ( 31.72, 63.40) ( 32.17, 63.15) ( 32.61, 62.90) ( 33.04, 62.66) ( 33.47, 62.42) ( 33.88, 62.19) ( 34.29, 61.96) ( 34.73, 61.71) ( 35.16, 61.47) ( 35.59, 61.22) ( 36.01, 60.99) ( 36.42, 60.75) ( 36.82, 60.52) ( 37.25, 60.27) ( 37.68, 60.03) ( 38.10, 59.79) ( 38.51, 59.55) ( 38.91, 59.32) ( 39.34, 59.07) ( 39.76, 58.83) ( 40.17, 58.59) ( 40.58, 58.35) ( 41.01, 58.10) ( 41.43, 57.85) ( 41.85, 57.61) ( 42.25, 57.37) ( 42.68, 57.12) ( 43.11, 56.87) ( 43.52, 56.63) ( 43.93, 56.39) ( 44.35, 56.13) ( 44.77, 55.89) ( 45.18, 55.64) ( 45.58, 55.40) ( 46.01, 55.15) ( 46.42, 54.90) ( 46.83, 54.66) ( 47.25, 54.41) ( 47.66, 54.16) ( 48.07, 53.91) ( 48.49, 53.66) ( 48.91, 53.41) ( 49.32, 53.16) ( 49.74, 52.91) ( 50.15, 52.66) ( 50.56, 52.41) ( 50.98, 52.15) ( 51.39, 51.90) ( 51.79, 51.66) ( 52.21, 51.40) ( 52.62, 51.15) ( 53.02, 50.91) ( 53.43, 50.65) ( 53.83, 50.41) ( 54.25, 50.15) ( 54.65, 49.90) ( 55.07, 49.64) ( 55.48, 49.39) ( 55.88, 49.14) ( 56.29, 48.89) ( 56.69, 48.64) ( 57.10, 48.38) ( 57.51, 48.13) ( 57.92, 47.87) ( 58.32, 47.62) ( 58.73, 47.36) ( 59.13, 47.11) ( 59.54, 46.85) ( 59.95, 46.60) ( 60.35, 46.34) ( 60.76, 46.09) ( 61.16, 45.84) ( 61.57, 45.58) ( 61.98, 45.32) ( 62.39, 45.06) ( 62.79, 44.81) ( 63.19, 44.55) ( 63.60, 44.29) ( 64.01, 44.04) ( 64.41, 43.78) ( 64.82, 43.52) ( 65.22, 43.26) ( 65.63, 43.00) ( 66.04, 42.74) ( 66.44, 42.48) ( 66.84, 42.22) ( 67.25, 41.96) ( 67.66, 41.70) ( 68.06, 41.44) ( 68.46, 41.19) ( 68.87, 40.92) ( 69.27, 40.66) ( 69.68, 40.40) ( 70.09, 40.14) ( 70.49, 39.88) ( 70.89, 39.62) ( 71.29, 39.36) ( 71.70, 39.10) ( 72.10, 38.83) ( 72.50, 
38.57) ( 72.91, 38.31) ( 73.31, 38.05) ( 73.72, 37.78) ( 74.12, 37.52) ( 74.53, 37.26) ( 74.93, 36.99) ( 75.33, 36.73) ( 75.73, 36.47) ( 76.14, 36.20) ( 76.54, 35.94) ( 76.94, 35.68) ( 77.35, 35.41) ( 77.75, 35.15) ( 78.15, 34.88) ( 78.55, 34.62) ( 78.95, 34.36) ( 79.36, 34.09) ( 79.76, 33.83) ( 80.16, 33.56) ( 80.56, 33.30) ( 80.97, 33.03) ( 81.37, 32.77) ( 81.77, 32.50) ( 82.17, 32.24) ( 82.57, 31.97) ( 82.98, 31.70) ( 83.38, 31.44) ( 83.78, 31.17) ( 84.18, 30.91) ( 84.59, 30.64) ( 84.99, 30.37) ( 85.39, 30.11) ( 85.79, 29.84) ( 86.19, 29.57) ( 86.60, 29.31) ( 87.00, 29.04) ( 87.40, 28.77) ( 87.81, 28.50) ( 88.21, 28.24) ( 88.61, 27.97) ( 89.02, 27.70) ( 89.42, 27.43) ( 89.82, 27.16) ( 90.22, 26.89) ( 90.63, 26.63) ( 91.03, 26.36) ( 91.43, 26.09) ( 91.83, 25.83) ( 92.23, 25.56) ( 92.63, 25.29) ( 93.03, 25.02) ( 93.43, 24.76) ( 93.83, 24.49) ( 94.23, 24.22) ( 94.64, 23.95) ( 95.04, 23.68) ( 95.44, 23.42) ( 95.84, 23.15) ( 96.24, 22.88) ( 96.64, 22.61) ( 97.04, 22.35) ( 97.45, 22.08) ( 97.85, 21.81) ( 98.25, 21.54) ( 98.65, 21.27) ( 99.05, 21.00) ( 99.45, 20.74) ( 99.85, 20.47) ( 3.96, 72.88) ( 7.31, 70.93) ( 11.52, 68.99) ( 16.11, 66.55) ( 19.90, 64.21) ( 23.86, 62.55) ( 27.82, 60.10) ( 32.17, 57.54) ( 35.80, 55.65) ( 39.97, 53.05) ( 44.04, 50.79) ( 47.95, 47.98) ( 52.07, 45.76) ( 56.03, 43.19) ( 60.07, 40.46) ( 64.05, 37.85) ( 67.99, 35.27) ( 71.97, 32.30) ( 75.99, 29.84) ( 79.99, 27.09) ( 84.02, 24.44) ( 87.98, 21.28) ( 92.01, 18.87) ( 96.01, 16.56) ( 100.00, 13.05) ( 0.43, 74.61) ( 0.85, 74.41) ( 1.27, 74.21) ( 1.67, 74.01) ( 2.27, 73.72) ( 2.85, 73.44) ( 3.41, 73.16) ( 3.96, 72.89) ( 4.50, 72.62) ( 5.02, 72.36) ( 5.53, 72.11) ( 6.03, 71.86) ( 6.52, 71.61) ( 6.99, 71.37) ( 7.46, 71.13) ( 7.92, 70.90) ( 8.37, 70.67) ( 8.80, 70.45) ( 9.23, 70.23) ( 9.66, 70.01) ( 10.07, 69.80) ( 10.48, 69.59) ( 11.01, 69.32) ( 11.52, 69.05) ( 12.03, 68.79) ( 12.52, 68.53) ( 13.00, 68.28) ( 13.48, 68.03) ( 13.94, 67.79) ( 14.39, 67.55) ( 14.83, 67.31) ( 15.27, 67.08) ( 15.69, 
66.86) ( 16.11, 66.63) ( 16.52, 66.41) ( 16.92, 66.20) ( 17.42, 65.93) ( 17.90, 65.68) ( 18.37, 65.42) ( 18.83, 65.17) ( 19.28, 64.93) ( 19.72, 64.68) ( 20.16, 64.45) ( 20.58, 64.22) ( 21.00, 63.99) ( 21.41, 63.76) ( 21.81, 63.54) ( 22.28, 63.28) ( 22.74, 63.02) ( 23.19, 62.77) ( 23.64, 62.53) ( 24.07, 62.28) ( 24.50, 62.05) ( 24.92, 61.81) ( 25.33, 61.58) ( 25.73, 61.36) ( 26.19, 61.10) ( 26.64, 60.84) ( 27.08, 60.59) ( 27.51, 60.35) ( 27.94, 60.11) ( 28.35, 59.87) ( 28.76, 59.63) ( 29.16, 59.41) ( 29.61, 59.15) ( 30.05, 58.89) ( 30.48, 58.65) ( 30.90, 58.40) ( 31.32, 58.16) ( 31.72, 57.93) ( 32.17, 57.66) ( 32.61, 57.41) ( 33.04, 57.16) ( 33.47, 56.91) ( 33.88, 56.67) ( 34.29, 56.43) ( 34.73, 56.17) ( 35.16, 55.91) ( 35.59, 55.66) ( 36.01, 55.41) ( 36.42, 55.17) ( 36.82, 54.93) ( 37.25, 54.67) ( 37.68, 54.42) ( 38.10, 54.17) ( 38.51, 53.92) ( 38.91, 53.68) ( 39.34, 53.42) ( 39.76, 53.17) ( 40.17, 52.92) ( 40.58, 52.67) ( 41.01, 52.41) ( 41.43, 52.16) ( 41.85, 51.91) ( 42.25, 51.66) ( 42.68, 51.40) ( 43.11, 51.14) ( 43.52, 50.88) ( 43.93, 50.64) ( 44.35, 50.37) ( 44.77, 50.12) ( 45.18, 49.86) ( 45.58, 49.61) ( 46.01, 49.35) ( 46.42, 49.10) ( 46.83, 48.85) ( 47.25, 48.58) ( 47.66, 48.33) ( 48.07, 48.07) ( 48.49, 47.81) ( 48.91, 47.55) ( 49.32, 47.29) ( 49.74, 47.03) ( 50.15, 46.77) ( 50.56, 46.51) ( 50.98, 46.25) ( 51.39, 45.99) ( 51.79, 45.73) ( 52.21, 45.47) ( 52.62, 45.21) ( 53.02, 44.96) ( 53.43, 44.70) ( 53.83, 44.44) ( 54.25, 44.18) ( 54.65, 43.92) ( 55.07, 43.65) ( 55.48, 43.39) ( 55.88, 43.13) ( 56.29, 42.87) ( 56.69, 42.61) ( 57.10, 42.34) ( 57.51, 42.08) ( 57.92, 41.82) ( 58.32, 41.56) ( 58.73, 41.29) ( 59.13, 41.03) ( 59.54, 40.77) ( 59.95, 40.50) ( 60.35, 40.24) ( 60.76, 39.98) ( 61.16, 39.71) ( 61.57, 39.44) ( 61.98, 39.18) ( 62.39, 38.91) ( 62.79, 38.65) ( 63.19, 38.38) ( 63.60, 38.11) ( 64.01, 37.85) ( 64.41, 37.58) ( 64.82, 37.31) ( 65.22, 37.05) ( 65.63, 36.78) ( 66.04, 36.51) ( 66.44, 36.24) ( 66.84, 35.98) ( 67.25, 35.71) ( 67.66, 35.44) ( 68.06, 
35.17) ( 68.46, 34.90) ( 68.87, 34.63) ( 69.27, 34.36) ( 69.68, 34.09) ( 70.09, 33.82) ( 70.49, 33.55) ( 70.89, 33.28) ( 71.29, 33.01) ( 71.70, 32.74) ( 72.10, 32.47) ( 72.50, 32.19) ( 72.91, 31.92) ( 73.31, 31.65) ( 73.72, 31.38) ( 74.12, 31.10) ( 74.53, 30.83) ( 74.93, 30.56) ( 75.33, 30.29) ( 75.73, 30.01) ( 76.14, 29.74) ( 76.54, 29.47) ( 76.94, 29.20) ( 77.35, 28.92) ( 77.75, 28.65) ( 78.15, 28.37) ( 78.55, 28.10) ( 78.95, 27.83) ( 79.36, 27.55) ( 79.76, 27.28) ( 80.16, 27.00) ( 80.56, 26.73) ( 80.97, 26.45) ( 81.37, 26.18) ( 81.77, 25.90) ( 82.17, 25.63) ( 82.57, 25.35) ( 82.98, 25.07) ( 83.38, 24.80) ( 83.78, 24.52) ( 84.18, 24.24) ( 84.59, 23.97) ( 84.99, 23.69) ( 85.39, 23.41) ( 85.79, 23.14) ( 86.19, 22.86) ( 86.60, 22.58) ( 87.00, 22.30) ( 87.40, 22.02) ( 87.81, 21.75) ( 88.21, 21.47) ( 88.61, 21.19) ( 89.02, 20.91) ( 89.42, 20.63) ( 89.82, 20.35) ( 90.22, 20.07) ( 90.63, 19.79) ( 91.03, 19.51) ( 91.43, 19.24) ( 91.83, 18.96) ( 92.23, 18.68) ( 92.63, 18.40) ( 93.03, 18.12) ( 93.43, 17.84) ( 93.83, 17.56) ( 94.23, 17.28) ( 94.64, 17.00) ( 95.04, 16.72) ( 95.44, 16.44) ( 95.84, 16.16) ( 96.24, 15.88) ( 96.64, 15.60) ( 97.04, 15.32) ( 97.45, 15.04) ( 97.85, 14.76) ( 98.25, 14.48) ( 98.65, 14.20) ( 99.05, 13.92) ( 99.45, 13.64) ( 99.85, 13.36) (100,100) (0,0)[(100,100)]{} (16.67,0)(16.67,0)[5]{}[(0,1)[2]{}]{} (0,14.29)(0,14.29)[6]{}[(1,0)[2]{}]{} (0,-8)[0.0]{} (92,-8)[0.06]{} (50,-8)[$\epsilon$]{} (-12,0)[0.0]{} (-12,96)[35.0]{} (-12,60)[$\sigma$]{} ( 3.90, 11.90) ( 7.53, 22.39) ( 10.02, 29.07) ( 13.85, 38.47) ( 16.57, 44.07) ( 20.87, 50.93) ( 23.82, 54.65) ( 26.85, 57.92) ( 29.92, 60.79) ( 34.55, 64.53) ( 42.23, 70.17) ( 49.92, 75.06) ( 51.43, 75.96) ( 59.20, 80.19) ( 66.92, 83.83) ( 74.85, 87.06) ( 84.52, 90.27) ( 92.92, 92.37) ( 99.83, 93.59) ( 0.17, 0.60) ( 0.33, 1.20) ( 0.50, 1.79) ( 0.67, 2.38) ( 0.83, 2.97) ( 1.00, 3.55) ( 1.17, 4.13) ( 1.33, 4.70) ( 1.50, 5.27) ( 1.67, 5.83) ( 1.83, 6.39) ( 2.00, 6.95) ( 2.17, 7.50) ( 2.33, 8.05) ( 2.50, 8.59) ( 
2.67, 9.13) ( 2.83, 9.67) ( 3.00, 10.20) ( 3.17, 10.73) ( 3.33, 11.25) ( 3.50, 11.78) ( 3.67, 12.29) ( 3.83, 12.81) ( 4.00, 13.32) ( 4.17, 13.82) ( 4.33, 14.32) ( 4.50, 14.82) ( 4.67, 15.32) ( 4.83, 15.81) ( 5.00, 16.30) ( 5.17, 16.78) ( 5.33, 17.26) ( 5.50, 17.74) ( 5.67, 18.21) ( 5.83, 18.68) ( 6.00, 19.15) ( 6.17, 19.61) ( 6.33, 20.07) ( 6.50, 20.53) ( 6.67, 20.98) ( 6.83, 21.44) ( 7.00, 21.88) ( 7.17, 22.33) ( 7.33, 22.77) ( 7.50, 23.20) ( 7.67, 23.64) ( 7.83, 24.07) ( 8.00, 24.50) ( 8.17, 24.92) ( 8.33, 25.35) ( 8.50, 25.77) ( 8.67, 26.18) ( 8.83, 26.59) ( 9.00, 27.00) ( 9.17, 27.41) ( 9.33, 27.82) ( 9.50, 28.22) ( 9.75, 28.81) ( 10.00, 29.40) ( 10.25, 29.99) ( 10.50, 30.57) ( 10.75, 31.14) ( 11.00, 31.70) ( 11.25, 32.26) ( 11.50, 32.82) ( 11.75, 33.36) ( 12.00, 33.90) ( 12.25, 34.44) ( 12.50, 34.97) ( 12.75, 35.49) ( 13.00, 36.01) ( 13.25, 36.53) ( 13.50, 37.04) ( 13.75, 37.54) ( 14.00, 38.04) ( 14.25, 38.53) ( 14.50, 39.01) ( 14.75, 39.50) ( 15.00, 39.97) ( 15.25, 40.44) ( 15.50, 40.91) ( 15.75, 41.37) ( 16.00, 41.83) ( 16.25, 42.28) ( 16.50, 42.73) ( 16.75, 43.17) ( 17.00, 43.61) ( 17.25, 44.04) ( 17.50, 44.47) ( 17.75, 44.90) ( 18.00, 45.32) ( 18.25, 45.73) ( 18.50, 46.14) ( 18.75, 46.55) ( 19.00, 46.95) ( 19.33, 47.48) ( 19.67, 48.01) ( 20.00, 48.52) ( 20.33, 49.03) ( 20.67, 49.53) ( 21.00, 50.03) ( 21.33, 50.52) ( 21.67, 51.00) ( 22.00, 51.47) ( 22.33, 51.94) ( 22.67, 52.41) ( 23.00, 52.86) ( 23.33, 53.31) ( 23.67, 53.76) ( 24.00, 54.19) ( 24.33, 54.63) ( 24.67, 55.05) ( 25.00, 55.47) ( 25.33, 55.89) ( 25.67, 56.30) ( 26.00, 56.70) ( 26.42, 57.20) ( 26.83, 57.69) ( 27.25, 58.17) ( 27.67, 58.64) ( 28.08, 59.11) ( 28.50, 59.57) ( 28.92, 60.02) ( 29.33, 60.46) ( 29.75, 60.90) ( 30.17, 61.33) ( 30.58, 61.75) ( 31.00, 62.17) ( 31.42, 62.58) ( 31.83, 62.98) ( 32.25, 63.38) ( 32.67, 63.77) ( 33.08, 64.15) ( 33.50, 64.53) ( 33.92, 64.90) ( 34.33, 65.27) ( 34.75, 65.63) ( 35.17, 65.99) ( 35.58, 66.34) ( 36.00, 66.69) ( 36.42, 67.03) ( 36.83, 67.36) ( 37.25, 
67.69) ( 37.67, 68.02) ( 38.08, 68.34) ( 38.50, 68.66) ( 38.92, 68.97) ( 39.33, 69.28) ( 39.75, 69.58) ( 40.17, 69.88) ( 40.58, 70.17) ( 41.00, 70.46) ( 41.42, 70.75) ( 41.83, 71.03) ( 42.25, 71.31) ( 42.67, 71.59) ( 43.08, 71.86) ( 43.50, 72.12) ( 43.92, 72.39) ( 44.33, 72.65) ( 44.75, 72.91) ( 45.17, 73.16) ( 45.58, 73.41) ( 46.00, 73.66) ( 46.42, 73.91) ( 46.83, 74.15) ( 47.25, 74.39) ( 47.67, 74.62) ( 48.08, 74.86) ( 48.50, 75.09) ( 48.92, 75.31) ( 49.33, 75.54) ( 49.75, 75.76) ( 50.17, 75.98) ( 50.58, 76.20) ( 51.00, 76.41) ( 51.42, 76.63) ( 51.83, 76.84) ( 52.25, 77.04) ( 52.67, 77.25) ( 53.08, 77.45) ( 53.50, 77.66) ( 53.92, 77.86) ( 54.33, 78.05) ( 54.75, 78.25) ( 55.17, 78.44) ( 55.58, 78.64) ( 56.00, 78.83) ( 56.42, 79.01) ( 56.83, 79.20) ( 57.25, 79.39) ( 57.67, 79.57) ( 58.08, 79.75) ( 58.50, 79.93) ( 58.92, 80.11) ( 59.33, 80.29) ( 59.75, 80.46) ( 60.17, 80.64) ( 60.58, 80.81) ( 61.00, 80.98) ( 61.42, 81.15) ( 61.83, 81.32) ( 62.25, 81.49) ( 62.67, 81.66) ( 63.08, 81.82) ( 63.50, 81.99) ( 63.92, 82.15) ( 64.33, 82.31) ( 64.75, 82.47) ( 65.17, 82.63) ( 65.58, 82.79) ( 66.00, 82.95) ( 66.42, 83.11) ( 66.83, 83.26) ( 67.25, 83.42) ( 67.67, 83.57) ( 68.08, 83.73) ( 68.50, 83.88) ( 68.92, 84.03) ( 69.33, 84.18) ( 69.75, 84.33) ( 70.17, 84.48) ( 70.58, 84.63) ( 71.00, 84.78) ( 71.42, 84.93) ( 71.83, 85.07) ( 72.25, 85.22) ( 72.67, 85.37) ( 73.08, 85.51) ( 73.50, 85.66) ( 73.92, 85.80) ( 74.33, 85.94) ( 74.75, 86.09) ( 75.17, 86.23) ( 75.58, 86.37) ( 76.00, 86.51) ( 76.42, 86.65) ( 76.83, 86.80) ( 77.25, 86.94) ( 77.67, 87.08) ( 78.08, 87.22) ( 78.50, 87.36) ( 78.92, 87.49) ( 79.33, 87.63) ( 79.75, 87.77) ( 80.17, 87.91) ( 80.58, 88.05) ( 81.00, 88.19) ( 81.42, 88.32) ( 81.83, 88.46) ( 82.25, 88.60) ( 82.67, 88.73) ( 83.08, 88.87) ( 83.50, 89.01) ( 83.92, 89.14) ( 84.33, 89.28) ( 84.75, 89.41) ( 85.17, 89.55) ( 85.58, 89.68) ( 86.00, 89.82) ( 86.42, 89.96) ( 86.83, 90.09) ( 87.25, 90.23) ( 87.67, 90.36) ( 88.08, 90.50) ( 88.50, 90.63) ( 88.92, 90.76) ( 89.33, 
90.90) ( 89.75, 91.03) ( 90.17, 91.17) ( 90.58, 91.30) ( 91.00, 91.44) ( 91.42, 91.57) ( 91.83, 91.71) ( 92.25, 91.84) ( 92.67, 91.98) ( 93.08, 92.11) ( 93.50, 92.25) ( 93.92, 92.38) ( 94.33, 92.51) ( 94.75, 92.65) ( 95.17, 92.78) ( 95.58, 92.92) ( 96.00, 93.05) ( 96.42, 93.19) ( 96.83, 93.32) ( 97.25, 93.46) ( 97.67, 93.59) ( 98.08, 93.73) ( 98.50, 93.86) ( 98.92, 94.00) ( 99.33, 94.14) ( 99.75, 94.27) (100,100) (0,0)[(100,100)]{} (16.67,0)(16.67,0)[5]{}[(0,1)[2]{}]{} (0,16.67)(0,16.67)[5]{}[(1,0)[2]{}]{} (100,10)(0,10)[9]{}[(-1,0)[2]{}]{} (0,-8)[0.0]{} (92,-8)[0.06]{} (50,-8)[$\epsilon$]{} (-12,0)[0.0]{} (-12,96)[0.03]{} (-12,60)[$\epsilon_{\rm ep}$]{} (103,0)[0.0]{} (103,96)[0.02]{} (103,60)[$r$]{} (45,26)[1]{} (45,75)[2]{} (43,28)[(-1,0)[10]{}]{} (49,77)[(1,0)[10]{}]{} ( 0.42, 0.00) ( 0.83, 0.01) ( 1.25, 0.02) ( 1.67, 0.04) ( 2.08, 0.06) ( 2.50, 0.09) ( 2.92, 0.13) ( 3.33, 0.16) ( 3.75, 0.20) ( 4.17, 0.25) ( 4.58, 0.30) ( 5.00, 0.36) ( 5.42, 0.42) ( 5.83, 0.48) ( 6.25, 0.55) ( 6.67, 0.63) ( 7.08, 0.70) ( 7.50, 0.79) ( 7.92, 0.87) ( 8.33, 0.96) ( 8.75, 1.06) ( 9.17, 1.16) ( 9.58, 1.26) ( 10.00, 1.37) ( 10.42, 1.48) ( 10.83, 1.60) ( 11.25, 1.71) ( 11.67, 1.84) ( 12.08, 1.97) ( 12.50, 2.10) ( 12.92, 2.23) ( 13.33, 2.37) ( 13.75, 2.51) ( 14.17, 2.66) ( 14.58, 2.81) ( 15.00, 2.96) ( 15.42, 3.12) ( 15.83, 3.28) ( 16.25, 3.44) ( 16.67, 3.61) ( 17.08, 3.78) ( 17.50, 3.96) ( 17.92, 4.13) ( 18.33, 4.32) ( 18.75, 4.50) ( 19.17, 4.69) ( 19.58, 4.88) ( 20.00, 5.07) ( 20.42, 5.27) ( 20.83, 5.47) ( 21.25, 5.68) ( 21.67, 5.88) ( 22.08, 6.09) ( 22.50, 6.31) ( 22.92, 6.52) ( 23.33, 6.74) ( 23.75, 6.96) ( 24.17, 7.19) ( 24.58, 7.42) ( 25.00, 7.65) ( 25.42, 7.88) ( 25.83, 8.12) ( 26.25, 8.36) ( 26.67, 8.60) ( 27.08, 8.85) ( 27.50, 9.09) ( 27.92, 9.34) ( 28.33, 9.60) ( 28.75, 9.85) ( 29.17, 10.11) ( 29.58, 10.37) ( 30.00, 10.64) ( 30.42, 10.90) ( 30.83, 11.17) ( 31.25, 11.44) ( 31.67, 11.72) ( 32.08, 11.99) ( 32.50, 12.27) ( 32.92, 12.55) ( 33.33, 12.84) ( 33.75, 13.12) ( 34.17, 
13.41) ( 34.58, 13.70) ( 35.00, 13.99) ( 35.42, 14.29) ( 35.83, 14.59) ( 36.25, 14.89) ( 36.67, 15.19) ( 37.08, 15.49) ( 37.50, 15.80) ( 37.92, 16.11) ( 38.33, 16.42) ( 38.75, 16.73) ( 39.17, 17.04) ( 39.58, 17.36) ( 40.00, 17.68) ( 40.42, 18.00) ( 40.83, 18.32) ( 41.25, 18.65) ( 41.67, 18.97) ( 42.08, 19.30) ( 42.50, 19.63) ( 42.92, 19.97) ( 43.33, 20.30) ( 43.75, 20.64) ( 44.17, 20.98) ( 44.58, 21.32) ( 45.00, 21.66) ( 45.42, 22.00) ( 45.83, 22.35) ( 46.25, 22.69) ( 46.67, 23.04) ( 47.08, 23.39) ( 47.50, 23.75) ( 47.92, 24.10) ( 48.33, 24.46) ( 48.75, 24.81) ( 49.17, 25.17) ( 49.58, 25.53) ( 50.00, 25.90) ( 50.42, 26.26) ( 50.83, 26.63) ( 51.25, 26.99) ( 51.67, 27.36) ( 52.08, 27.73) ( 52.50, 28.11) ( 52.92, 28.48) ( 53.33, 28.86) ( 53.75, 29.23) ( 54.17, 29.61) ( 54.58, 29.99) ( 55.00, 30.37) ( 55.42, 30.75) ( 55.83, 31.14) ( 56.25, 31.52) ( 56.67, 31.91) ( 57.08, 32.30) ( 57.50, 32.69) ( 57.92, 33.08) ( 58.33, 33.47) ( 58.75, 33.86) ( 59.17, 34.26) ( 59.58, 34.65) ( 60.00, 35.05) ( 60.42, 35.45) ( 60.83, 35.85) ( 61.25, 36.25) ( 61.67, 36.66) ( 62.08, 37.06) ( 62.50, 37.46) ( 62.92, 37.87) ( 63.33, 38.28) ( 63.75, 38.69) ( 64.17, 39.10) ( 64.58, 39.51) ( 65.00, 39.92) ( 65.42, 40.33) ( 65.83, 40.75) ( 66.25, 41.16) ( 66.67, 41.58) ( 67.08, 42.00) ( 67.50, 42.42) ( 67.92, 42.84) ( 68.33, 43.26) ( 68.75, 43.68) ( 69.17, 44.10) ( 69.58, 44.53) ( 70.00, 44.95) ( 70.42, 45.38) ( 70.83, 45.81) ( 71.25, 46.23) ( 71.67, 46.66) ( 72.08, 47.09) ( 72.50, 47.52) ( 72.92, 47.96) ( 73.33, 48.39) ( 73.75, 48.82) ( 74.17, 49.26) ( 74.58, 49.70) ( 75.00, 50.13) ( 75.42, 50.57) ( 75.83, 51.01) ( 76.25, 51.45) ( 76.67, 51.89) ( 77.08, 52.33) ( 77.50, 52.77) ( 77.92, 53.22) ( 78.33, 53.66) ( 78.75, 54.11) ( 79.17, 54.55) ( 79.58, 55.00) ( 80.00, 55.44) ( 80.42, 55.89) ( 80.83, 56.34) ( 81.25, 56.79) ( 81.67, 57.24) ( 82.08, 57.69) ( 82.50, 58.15) ( 82.92, 58.60) ( 83.33, 59.05) ( 83.75, 59.51) ( 84.17, 59.96) ( 84.58, 60.42) ( 85.00, 60.87) ( 85.42, 61.33) ( 85.83, 61.79) ( 86.25, 
% [Figure placeholder: plot data lost to text extraction. Recoverable content:
% a sequence of picture-environment panels, each plotting a curve and data
% points against $\sigma$ (horizontal axis, 0--40), with vertical axes
% $\Omega$ (0--12), $\Sigma$ (0--5), and $\kappa$ (0--1), and numbered
% curve labels 1 and 2 in each panel.]
57.07) ( 74.50, 57.30) ( 74.60, 57.54) ( 74.70, 57.78) ( 74.80, 58.02) ( 74.90, 58.26) ( 75.00, 58.49) ( 75.10, 58.73) ( 75.20, 58.97) ( 75.30, 59.21) ( 75.40, 59.44) ( 75.50, 59.68) ( 75.60, 59.92) ( 75.70, 60.16) ( 75.80, 60.40) ( 75.90, 60.63) ( 76.00, 60.87) ( 76.10, 61.11) ( 76.20, 61.35) ( 76.30, 61.59) ( 76.40, 61.82) ( 76.50, 62.06) ( 76.60, 62.30) ( 76.70, 62.54) ( 76.80, 62.78) ( 76.90, 63.01) ( 77.00, 63.25) ( 77.10, 63.49) ( 77.20, 63.73) ( 77.30, 63.97) ( 77.40, 64.20) ( 77.50, 64.44) ( 77.60, 64.68) ( 77.70, 64.92) ( 77.80, 65.16) ( 77.90, 65.39) ( 78.00, 65.63) ( 78.10, 65.87) ( 78.20, 66.11) ( 78.30, 66.34) ( 78.40, 66.58) ( 78.50, 66.82) ( 78.60, 67.06) ( 78.70, 67.30) ( 78.80, 67.53) ( 78.90, 67.77) ( 79.00, 68.01) ( 79.10, 68.25) ( 79.20, 68.49) ( 79.30, 68.72) ( 79.40, 68.96) ( 79.50, 69.20) ( 79.60, 69.44) ( 79.70, 69.68) ( 79.80, 69.91) ( 79.90, 70.15) ( 80.00, 70.39) ( 80.10, 70.63) ( 80.20, 70.87) ( 80.30, 71.10) ( 80.40, 71.34) ( 80.50, 71.58) ( 80.60, 71.82) ( 80.70, 72.05) ( 80.80, 72.29) ( 80.90, 72.53) ( 81.00, 72.77) ( 81.10, 73.01) ( 81.20, 73.24) ( 81.30, 73.48) ( 81.40, 73.72) ( 81.50, 73.96) ( 81.60, 74.20) ( 81.70, 74.43) ( 81.80, 74.67) ( 81.90, 74.91) ( 82.00, 75.15) ( 82.10, 75.39) ( 82.20, 75.62) ( 82.30, 75.86) ( 82.40, 76.10) ( 82.50, 76.34) ( 82.60, 76.58) ( 82.70, 76.81) ( 82.80, 77.05) ( 82.90, 77.29) ( 83.00, 77.53) ( 83.10, 77.76) ( 83.20, 78.00) ( 83.30, 78.24) ( 83.40, 78.48) ( 83.50, 78.72) ( 83.60, 78.95) ( 83.70, 79.19) ( 83.80, 79.43) ( 83.90, 79.67) ( 84.00, 79.91) ( 84.10, 80.14) ( 84.20, 80.38) ( 84.30, 80.62) ( 84.40, 80.86) ( 84.50, 81.10) ( 84.60, 81.33) ( 84.70, 81.57) ( 84.80, 81.81) ( 84.90, 82.05) ( 85.00, 82.29) ( 85.10, 82.52) ( 85.20, 82.76) ( 85.30, 83.00) ( 85.40, 83.24) ( 85.50, 83.47) ( 85.60, 83.71) ( 85.70, 83.95) ( 85.80, 84.19) ( 85.90, 84.43) ( 86.00, 84.66) ( 86.10, 84.90) ( 86.20, 85.14) ( 86.30, 85.38) ( 86.40, 85.62) ( 86.50, 85.85) ( 86.60, 86.09) ( 86.70, 86.33) ( 86.80, 86.57) ( 86.90, 
86.81) ( 87.00, 87.04) ( 87.10, 87.28) ( 87.20, 87.52) ( 87.30, 87.76) ( 87.40, 88.00) ( 87.50, 88.23) ( 87.60, 88.47) ( 87.70, 88.71) ( 87.80, 88.95) ( 87.90, 89.18) ( 88.00, 89.42) ( 88.10, 89.66) ( 88.20, 89.90) ( 88.30, 90.14) ( 88.40, 90.37) ( 88.50, 90.61) ( 88.60, 90.85) ( 88.70, 91.09) ( 88.80, 91.33) ( 88.90, 91.56) ( 89.00, 91.80) ( 89.10, 92.04) ( 89.20, 92.28) ( 89.30, 92.52) ( 89.40, 92.75) ( 89.50, 92.99) ( 89.60, 93.23) ( 89.70, 93.47) ( 89.80, 93.71) ( 89.90, 93.94) ( 90.00, 94.18) ( 90.10, 94.42) ( 90.20, 94.66) ( 90.30, 94.89) ( 90.40, 95.13) ( 90.50, 95.37) ( 90.60, 95.61) ( 90.70, 95.85) ( 90.80, 96.08) ( 90.90, 96.32) ( 91.00, 96.56) ( 91.10, 96.80) ( 91.20, 97.04) ( 91.30, 97.27) ( 91.40, 97.51) ( 91.50, 97.75) ( 91.60, 97.99) ( 91.70, 98.23) ( 91.80, 98.46) ( 91.90, 98.70) ( 92.00, 98.94) ( 92.10, 99.18) ( 92.20, 99.42) ( 92.30, 99.65) ( 92.40, 99.89) (100,100) (0,0)[(100,100)]{} (20,0)(20,0)[4]{}[(0,1)[2]{}]{} (0,12.5)(0,12.5)[7]{}[(1,0)[2]{}]{} (0,-8)[0.0]{} (92,-8)[0.25]{} (50,-8)[$\epsilon$]{} (-12,0)[4.0]{} (-12,96)[12.0]{} (-12,60)[$\Omega$]{} (15.5,102)[1]{} (102,62)[2]{} ( 2.40, 43.75) ( 3.90, 51.25) ( 6.14, 53.75) ( 10.42, 77.50) ( 0.05, 33.43) ( 0.10, 33.63) ( 0.15, 33.84) ( 0.20, 34.04) ( 0.25, 34.24) ( 0.30, 34.45) ( 0.35, 34.65) ( 0.40, 34.86) ( 0.45, 35.06) ( 0.50, 35.27) ( 0.55, 35.47) ( 0.60, 35.67) ( 0.65, 35.88) ( 0.70, 36.08) ( 0.75, 36.29) ( 0.80, 36.49) ( 0.85, 36.69) ( 0.90, 36.90) ( 0.95, 37.10) ( 1.00, 37.31) ( 1.05, 37.51) ( 1.10, 37.72) ( 1.15, 37.92) ( 1.20, 38.12) ( 1.25, 38.33) ( 1.30, 38.53) ( 1.35, 38.74) ( 1.40, 38.94) ( 1.45, 39.14) ( 1.50, 39.35) ( 1.55, 39.55) ( 1.60, 39.76) ( 1.65, 39.96) ( 1.70, 40.17) ( 1.75, 40.37) ( 1.80, 40.57) ( 1.85, 40.78) ( 1.90, 40.98) ( 1.95, 41.19) ( 2.00, 41.39) ( 2.05, 41.59) ( 2.10, 41.80) ( 2.15, 42.00) ( 2.20, 42.21) ( 2.25, 42.41) ( 2.30, 42.61) ( 2.35, 42.82) ( 2.40, 43.02) ( 2.45, 43.23) ( 2.50, 43.43) ( 2.55, 43.64) ( 2.60, 43.84) ( 2.65, 44.04) ( 2.70, 44.25) ( 2.75, 
44.45) ( 2.80, 44.66) ( 2.85, 44.86) ( 2.90, 45.06) ( 2.95, 45.27) ( 3.00, 45.47) ( 3.05, 45.68) ( 3.10, 45.88) ( 3.15, 46.09) ( 3.20, 46.29) ( 3.25, 46.49) ( 3.30, 46.70) ( 3.35, 46.90) ( 3.40, 47.11) ( 3.45, 47.31) ( 3.50, 47.51) ( 3.55, 47.72) ( 3.60, 47.92) ( 3.65, 48.13) ( 3.70, 48.33) ( 3.75, 48.54) ( 3.80, 48.74) ( 3.85, 48.94) ( 3.90, 49.15) ( 3.95, 49.35) ( 4.00, 49.56) ( 4.05, 49.76) ( 4.10, 49.96) ( 4.15, 50.17) ( 4.20, 50.37) ( 4.25, 50.58) ( 4.30, 50.78) ( 4.35, 50.99) ( 4.40, 51.19) ( 4.45, 51.39) ( 4.50, 51.60) ( 4.55, 51.80) ( 4.60, 52.01) ( 4.65, 52.21) ( 4.70, 52.41) ( 4.75, 52.62) ( 4.80, 52.82) ( 4.85, 53.03) ( 4.90, 53.23) ( 4.95, 53.43) ( 5.00, 53.64) ( 5.05, 53.84) ( 5.10, 54.05) ( 5.15, 54.25) ( 5.20, 54.46) ( 5.25, 54.66) ( 5.30, 54.86) ( 5.35, 55.07) ( 5.40, 55.27) ( 5.45, 55.48) ( 5.50, 55.68) ( 5.55, 55.88) ( 5.60, 56.09) ( 5.65, 56.29) ( 5.70, 56.50) ( 5.75, 56.70) ( 5.80, 56.91) ( 5.85, 57.11) ( 5.90, 57.31) ( 5.95, 57.52) ( 6.00, 57.72) ( 6.05, 57.93) ( 6.10, 58.13) ( 6.15, 58.33) ( 6.20, 58.54) ( 6.25, 58.74) ( 6.30, 58.95) ( 6.35, 59.15) ( 6.40, 59.36) ( 6.45, 59.56) ( 6.50, 59.76) ( 6.55, 59.97) ( 6.60, 60.17) ( 6.65, 60.38) ( 6.70, 60.58) ( 6.75, 60.78) ( 6.80, 60.99) ( 6.85, 61.19) ( 6.90, 61.40) ( 6.95, 61.60) ( 7.00, 61.81) ( 7.05, 62.01) ( 7.10, 62.21) ( 7.15, 62.42) ( 7.20, 62.62) ( 7.25, 62.83) ( 7.30, 63.03) ( 7.35, 63.23) ( 7.40, 63.44) ( 7.45, 63.64) ( 7.50, 63.85) ( 7.55, 64.05) ( 7.60, 64.25) ( 7.65, 64.46) ( 7.70, 64.66) ( 7.75, 64.87) ( 7.80, 65.07) ( 7.85, 65.28) ( 7.90, 65.48) ( 7.95, 65.68) ( 8.00, 65.89) ( 8.05, 66.09) ( 8.10, 66.30) ( 8.15, 66.50) ( 8.20, 66.70) ( 8.25, 66.91) ( 8.30, 67.11) ( 8.35, 67.32) ( 8.40, 67.52) ( 8.45, 67.73) ( 8.50, 67.93) ( 8.55, 68.13) ( 8.60, 68.34) ( 8.65, 68.54) ( 8.70, 68.75) ( 8.75, 68.95) ( 8.80, 69.15) ( 8.85, 69.36) ( 8.90, 69.56) ( 8.95, 69.77) ( 9.00, 69.97) ( 9.05, 70.18) ( 9.10, 70.38) ( 9.15, 70.58) ( 9.20, 70.79) ( 9.25, 70.99) ( 9.30, 71.20) ( 9.35, 71.40) ( 9.40, 
71.60) ( 9.45, 71.81) ( 9.50, 72.01) ( 9.55, 72.22) ( 9.60, 72.42) ( 9.65, 72.63) ( 9.70, 72.83) ( 9.75, 73.03) ( 9.80, 73.24) ( 9.85, 73.44) ( 9.90, 73.65) ( 9.95, 73.85) ( 10.00, 74.05) ( 10.05, 74.26) ( 10.10, 74.46) ( 10.15, 74.67) ( 10.20, 74.87) ( 10.25, 75.07) ( 10.30, 75.28) ( 10.35, 75.48) ( 10.40, 75.69) ( 10.45, 75.89) ( 10.50, 76.10) ( 10.55, 76.30) ( 10.60, 76.50) ( 10.65, 76.71) ( 10.70, 76.91) ( 10.75, 77.12) ( 10.80, 77.32) ( 10.85, 77.52) ( 10.90, 77.73) ( 10.95, 77.93) ( 11.00, 78.14) ( 11.05, 78.34) ( 11.10, 78.55) ( 11.15, 78.75) ( 11.20, 78.95) ( 11.25, 79.16) ( 11.30, 79.36) ( 11.35, 79.57) ( 11.40, 79.77) ( 11.45, 79.97) ( 11.50, 80.18) ( 11.55, 80.38) ( 11.60, 80.59) ( 11.65, 80.79) ( 11.70, 81.00) ( 11.75, 81.20) ( 11.80, 81.40) ( 11.85, 81.61) ( 11.90, 81.81) ( 11.95, 82.02) ( 12.00, 82.22) ( 12.05, 82.42) ( 12.10, 82.63) ( 12.15, 82.83) ( 12.20, 83.04) ( 12.25, 83.24) ( 12.30, 83.45) ( 12.35, 83.65) ( 12.40, 83.85) ( 12.45, 84.06) ( 12.50, 84.26) ( 12.55, 84.47) ( 12.60, 84.67) ( 12.65, 84.87) ( 12.70, 85.08) ( 12.75, 85.28) ( 12.80, 85.49) ( 12.85, 85.69) ( 12.90, 85.89) ( 12.95, 86.10) ( 13.00, 86.30) ( 13.05, 86.51) ( 13.10, 86.71) ( 13.15, 86.92) ( 13.20, 87.12) ( 13.25, 87.32) ( 13.30, 87.53) ( 13.35, 87.73) ( 13.40, 87.94) ( 13.45, 88.14) ( 13.50, 88.34) ( 13.55, 88.55) ( 13.60, 88.75) ( 13.65, 88.96) ( 13.70, 89.16) ( 13.75, 89.37) ( 13.80, 89.57) ( 13.85, 89.77) ( 13.90, 89.98) ( 13.95, 90.18) ( 14.00, 90.39) ( 14.05, 90.59) ( 14.10, 90.79) ( 14.15, 91.00) ( 14.20, 91.20) ( 14.25, 91.41) ( 14.30, 91.61) ( 14.35, 91.82) ( 14.40, 92.02) ( 14.45, 92.22) ( 14.50, 92.43) ( 14.55, 92.63) ( 14.60, 92.84) ( 14.65, 93.04) ( 14.70, 93.24) ( 14.75, 93.45) ( 14.80, 93.65) ( 14.85, 93.86) ( 14.90, 94.06) ( 14.95, 94.27) ( 15.00, 94.47) ( 15.05, 94.67) ( 15.10, 94.88) ( 15.15, 95.08) ( 15.20, 95.29) ( 15.25, 95.49) ( 15.30, 95.69) ( 15.35, 95.90) ( 15.40, 96.10) ( 15.45, 96.31) ( 15.50, 96.51) ( 15.55, 96.71) ( 15.60, 96.92) ( 15.65, 97.12) ( 
15.70, 97.33) ( 15.75, 97.53) ( 15.80, 97.74) ( 15.85, 97.94) ( 15.90, 98.14) ( 15.95, 98.35) ( 16.00, 98.55) ( 16.05, 98.76) ( 16.10, 98.96) ( 16.15, 99.16) ( 16.20, 99.37) ( 16.25, 99.57) ( 16.30, 99.78) ( 16.35, 99.98) ( 20.00, 15.00) ( 40.00, 17.50) ( 60.00, 42.50) ( 80.00, 48.75) ( 1.00, 0.01) ( 1.50, 0.32) ( 2.00, 0.64) ( 2.50, 0.95) ( 3.00, 1.27) ( 3.50, 1.58) ( 4.00, 1.90) ( 4.50, 2.22) ( 5.00, 2.53) ( 5.50, 2.85) ( 6.00, 3.16) ( 6.50, 3.48) ( 7.00, 3.79) ( 7.50, 4.11) ( 8.00, 4.42) ( 8.50, 4.74) ( 9.00, 5.06) ( 9.50, 5.37) ( 10.00, 5.69) ( 10.50, 6.00) ( 11.00, 6.32) ( 11.50, 6.63) ( 12.00, 6.95) ( 12.50, 7.27) ( 13.00, 7.58) ( 13.50, 7.90) ( 14.00, 8.21) ( 14.50, 8.53) ( 15.00, 8.84) ( 15.50, 9.16) ( 16.00, 9.47) ( 16.50, 9.79) ( 17.00, 10.11) ( 17.50, 10.42) ( 18.00, 10.74) ( 18.50, 11.05) ( 19.00, 11.37) ( 19.50, 11.68) ( 20.00, 12.00) ( 20.50, 12.32) ( 21.00, 12.63) ( 21.50, 12.95) ( 22.00, 13.26) ( 22.50, 13.58) ( 23.00, 13.89) ( 23.50, 14.21) ( 24.00, 14.52) ( 24.50, 14.84) ( 25.00, 15.16) ( 25.50, 15.47) ( 26.00, 15.79) ( 26.50, 16.10) ( 27.00, 16.42) ( 27.50, 16.73) ( 28.00, 17.05) ( 28.50, 17.37) ( 29.00, 17.68) ( 29.50, 18.00) ( 30.00, 18.31) ( 30.50, 18.63) ( 31.00, 18.94) ( 31.50, 19.26) ( 32.00, 19.57) ( 32.50, 19.89) ( 33.00, 20.21) ( 33.50, 20.52) ( 34.00, 20.84) ( 34.50, 21.15) ( 35.00, 21.47) ( 35.50, 21.78) ( 36.00, 22.10) ( 36.50, 22.42) ( 37.00, 22.73) ( 37.50, 23.05) ( 38.00, 23.36) ( 38.50, 23.68) ( 39.00, 23.99) ( 39.50, 24.31) ( 40.00, 24.62) ( 40.50, 24.94) ( 41.00, 25.26) ( 41.50, 25.57) ( 42.00, 25.89) ( 42.50, 26.20) ( 43.00, 26.52) ( 43.50, 26.83) ( 44.00, 27.15) ( 44.50, 27.47) ( 45.00, 27.78) ( 45.50, 28.10) ( 46.00, 28.41) ( 46.50, 28.73) ( 47.00, 29.04) ( 47.50, 29.36) ( 48.00, 29.67) ( 48.50, 29.99) ( 49.00, 30.31) ( 49.50, 30.62) ( 50.00, 30.94) ( 50.50, 31.25) ( 51.00, 31.57) ( 51.50, 31.88) ( 52.00, 32.20) ( 52.50, 32.52) ( 53.00, 32.83) ( 53.50, 33.15) ( 54.00, 33.46) ( 54.50, 33.78) ( 55.00, 34.09) ( 55.50, 34.41) ( 
56.00, 34.72) ( 56.50, 35.04) ( 57.00, 35.36) ( 57.50, 35.67) ( 58.00, 35.99) ( 58.50, 36.30) ( 59.00, 36.62) ( 59.50, 36.93) ( 60.00, 37.25) ( 60.50, 37.57) ( 61.00, 37.88) ( 61.50, 38.20) ( 62.00, 38.51) ( 62.50, 38.83) ( 63.00, 39.14) ( 63.50, 39.46) ( 64.00, 39.77) ( 64.50, 40.09) ( 65.00, 40.41) ( 65.50, 40.72) ( 66.00, 41.04) ( 66.50, 41.35) ( 67.00, 41.67) ( 67.50, 41.98) ( 68.00, 42.30) ( 68.50, 42.62) ( 69.00, 42.93) ( 69.50, 43.25) ( 70.00, 43.56) ( 70.50, 43.88) ( 71.00, 44.19) ( 71.50, 44.51) ( 72.00, 44.82) ( 72.50, 45.14) ( 73.00, 45.46) ( 73.50, 45.77) ( 74.00, 46.09) ( 74.50, 46.40) ( 75.00, 46.72) ( 75.50, 47.03) ( 76.00, 47.35) ( 76.50, 47.67) ( 77.00, 47.98) ( 77.50, 48.30) ( 78.00, 48.61) ( 78.50, 48.93) ( 79.00, 49.24) ( 79.50, 49.56) ( 80.00, 49.87) ( 80.50, 50.19) ( 81.00, 50.51) ( 81.50, 50.82) ( 82.00, 51.14) ( 82.50, 51.45) ( 83.00, 51.77) ( 83.50, 52.08) ( 84.00, 52.40) ( 84.50, 52.72) ( 85.00, 53.03) ( 85.50, 53.35) ( 86.00, 53.66) ( 86.50, 53.98) ( 87.00, 54.29) ( 87.50, 54.61) ( 88.00, 54.92) ( 88.50, 55.24) ( 89.00, 55.56) ( 89.50, 55.87) ( 90.00, 56.19) ( 90.50, 56.50) ( 91.00, 56.82) ( 91.50, 57.13) ( 92.00, 57.45) ( 92.50, 57.77) ( 93.00, 58.08) ( 93.50, 58.40) ( 94.00, 58.71) ( 94.50, 59.03) ( 95.00, 59.34) ( 95.50, 59.66) ( 96.00, 59.98) ( 96.50, 60.29) ( 97.00, 60.61) ( 97.50, 60.92) ( 98.00, 61.24) ( 98.50, 61.55) ( 99.00, 61.87) ( 99.50, 62.18) ( 100.00, 62.50) (100,100) (0,0)[(100,100)]{} (20,0)(20,0)[4]{}[(0,1)[2]{}]{} (0,12.5)(0,12.5)[7]{}[(1,0)[2]{}]{} (0,-8)[0.0]{} (92,-8)[0.25]{} (50,-8)[$\epsilon$]{} (-12,0)[2.0]{} (-12,96)[6.0]{} (-12,60)[$\Sigma$]{} (37.5,102)[1]{} (83,102)[2]{} ( 2.40, 37.50) ( 3.90, 42.50) ( 6.14, 60.00) ( 10.42, 50.00) ( 0.10, 38.42) ( 0.20, 38.58) ( 0.30, 38.74) ( 0.40, 38.90) ( 0.50, 39.06) ( 0.60, 39.22) ( 0.70, 39.39) ( 0.80, 39.55) ( 0.90, 39.71) ( 1.00, 39.87) ( 1.10, 40.03) ( 1.20, 40.19) ( 1.30, 40.36) ( 1.40, 40.52) ( 1.50, 40.68) ( 1.60, 40.84) ( 1.70, 41.00) ( 1.80, 41.17) ( 1.90, 41.33) 
( 2.00, 41.49) ( 2.10, 41.65) ( 2.20, 41.81) ( 2.30, 41.97) ( 2.40, 42.14) ( 2.50, 42.30) ( 2.60, 42.46) ( 2.70, 42.62) ( 2.80, 42.78) ( 2.90, 42.94) ( 3.00, 43.11) ( 3.10, 43.27) ( 3.20, 43.43) ( 3.30, 43.59) ( 3.40, 43.75) ( 3.50, 43.92) ( 3.60, 44.08) ( 3.70, 44.24) ( 3.80, 44.40) ( 3.90, 44.56) ( 4.00, 44.72) ( 4.10, 44.89) ( 4.20, 45.05) ( 4.30, 45.21) ( 4.40, 45.37) ( 4.50, 45.53) ( 4.60, 45.69) ( 4.70, 45.86) ( 4.80, 46.02) ( 4.90, 46.18) ( 5.00, 46.34) ( 5.10, 46.50) ( 5.20, 46.67) ( 5.30, 46.83) ( 5.40, 46.99) ( 5.50, 47.15) ( 5.60, 47.31) ( 5.70, 47.47) ( 5.80, 47.64) ( 5.90, 47.80) ( 6.00, 47.96) ( 6.10, 48.12) ( 6.20, 48.28) ( 6.30, 48.44) ( 6.40, 48.61) ( 6.50, 48.77) ( 6.60, 48.93) ( 6.70, 49.09) ( 6.80, 49.25) ( 6.90, 49.42) ( 7.00, 49.58) ( 7.10, 49.74) ( 7.20, 49.90) ( 7.30, 50.06) ( 7.40, 50.22) ( 7.50, 50.39) ( 7.60, 50.55) ( 7.70, 50.71) ( 7.80, 50.87) ( 7.90, 51.03) ( 8.00, 51.19) ( 8.10, 51.36) ( 8.20, 51.52) ( 8.30, 51.68) ( 8.40, 51.84) ( 8.50, 52.00) ( 8.60, 52.17) ( 8.70, 52.33) ( 8.80, 52.49) ( 8.90, 52.65) ( 9.00, 52.81) ( 9.10, 52.97) ( 9.20, 53.14) ( 9.30, 53.30) ( 9.40, 53.46) ( 9.50, 53.62) ( 9.60, 53.78) ( 9.70, 53.94) ( 9.80, 54.11) ( 9.90, 54.27) ( 10.00, 54.43) ( 10.10, 54.59) ( 10.20, 54.75) ( 10.30, 54.92) ( 10.40, 55.08) ( 10.50, 55.24) ( 10.60, 55.40) ( 10.70, 55.56) ( 10.80, 55.72) ( 10.90, 55.89) ( 11.00, 56.05) ( 11.10, 56.21) ( 11.20, 56.37) ( 11.30, 56.53) ( 11.40, 56.69) ( 11.50, 56.86) ( 11.60, 57.02) ( 11.70, 57.18) ( 11.80, 57.34) ( 11.90, 57.50) ( 12.00, 57.67) ( 12.10, 57.83) ( 12.20, 57.99) ( 12.30, 58.15) ( 12.40, 58.31) ( 12.50, 58.47) ( 12.60, 58.64) ( 12.70, 58.80) ( 12.80, 58.96) ( 12.90, 59.12) ( 13.00, 59.28) ( 13.10, 59.44) ( 13.20, 59.61) ( 13.30, 59.77) ( 13.40, 59.93) ( 13.50, 60.09) ( 13.60, 60.25) ( 13.70, 60.42) ( 13.80, 60.58) ( 13.90, 60.74) ( 14.00, 60.90) ( 14.10, 61.06) ( 14.20, 61.22) ( 14.30, 61.39) ( 14.40, 61.55) ( 14.50, 61.71) ( 14.60, 61.87) ( 14.70, 62.03) ( 14.80, 62.19) ( 14.90, 62.36) 
( 15.00, 62.52) ( 15.10, 62.68) ( 15.20, 62.84) ( 15.30, 63.00) ( 15.40, 63.17) ( 15.50, 63.33) ( 15.60, 63.49) ( 15.70, 63.65) ( 15.80, 63.81) ( 15.90, 63.97) ( 16.00, 64.14) ( 16.10, 64.30) ( 16.20, 64.46) ( 16.30, 64.62) ( 16.40, 64.78) ( 16.50, 64.94) ( 16.60, 65.11) ( 16.70, 65.27) ( 16.80, 65.43) ( 16.90, 65.59) ( 17.00, 65.75) ( 17.10, 65.92) ( 17.20, 66.08) ( 17.30, 66.24) ( 17.40, 66.40) ( 17.50, 66.56) ( 17.60, 66.72) ( 17.70, 66.89) ( 17.80, 67.05) ( 17.90, 67.21) ( 18.00, 67.37) ( 18.10, 67.53) ( 18.20, 67.69) ( 18.30, 67.86) ( 18.40, 68.02) ( 18.50, 68.18) ( 18.60, 68.34) ( 18.70, 68.50) ( 18.80, 68.67) ( 18.90, 68.83) ( 19.00, 68.99) ( 19.10, 69.15) ( 19.20, 69.31) ( 19.30, 69.47) ( 19.40, 69.64) ( 19.50, 69.80) ( 19.60, 69.96) ( 19.70, 70.12) ( 19.80, 70.28) ( 19.90, 70.44) ( 20.00, 70.61) ( 20.10, 70.77) ( 20.20, 70.93) ( 20.30, 71.09) ( 20.40, 71.25) ( 20.50, 71.42) ( 20.60, 71.58) ( 20.70, 71.74) ( 20.80, 71.90) ( 20.90, 72.06) ( 21.00, 72.22) ( 21.10, 72.39) ( 21.20, 72.55) ( 21.30, 72.71) ( 21.40, 72.87) ( 21.50, 73.03) ( 21.60, 73.19) ( 21.70, 73.36) ( 21.80, 73.52) ( 21.90, 73.68) ( 22.00, 73.84) ( 22.10, 74.00) ( 22.20, 74.17) ( 22.30, 74.33) ( 22.40, 74.49) ( 22.50, 74.65) ( 22.60, 74.81) ( 22.70, 74.97) ( 22.80, 75.14) ( 22.90, 75.30) ( 23.00, 75.46) ( 23.10, 75.62) ( 23.20, 75.78) ( 23.30, 75.94) ( 23.40, 76.11) ( 23.50, 76.27) ( 23.60, 76.43) ( 23.70, 76.59) ( 23.80, 76.75) ( 23.90, 76.92) ( 24.00, 77.08) ( 24.10, 77.24) ( 24.20, 77.40) ( 24.30, 77.56) ( 24.40, 77.72) ( 24.50, 77.89) ( 24.60, 78.05) ( 24.70, 78.21) ( 24.80, 78.37) ( 24.90, 78.53) ( 25.00, 78.69) ( 25.10, 78.86) ( 25.20, 79.02) ( 25.30, 79.18) ( 25.40, 79.34) ( 25.50, 79.50) ( 25.60, 79.67) ( 25.70, 79.83) ( 25.80, 79.99) ( 25.90, 80.15) ( 26.00, 80.31) ( 26.10, 80.47) ( 26.20, 80.64) ( 26.30, 80.80) ( 26.40, 80.96) ( 26.50, 81.12) ( 26.60, 81.28) ( 26.70, 81.44) ( 26.80, 81.61) ( 26.90, 81.77) ( 27.00, 81.93) ( 27.10, 82.09) ( 27.20, 82.25) ( 27.30, 82.42) ( 27.40, 82.58) 
( 27.50, 82.74) ( 27.60, 82.90) ( 27.70, 83.06) ( 27.80, 83.22) ( 27.90, 83.39) ( 28.00, 83.55) ( 28.10, 83.71) ( 28.20, 83.87) ( 28.30, 84.03) ( 28.40, 84.19) ( 28.50, 84.36) ( 28.60, 84.52) ( 28.70, 84.68) ( 28.80, 84.84) ( 28.90, 85.00) ( 29.00, 85.17) ( 29.10, 85.33) ( 29.20, 85.49) ( 29.30, 85.65) ( 29.40, 85.81) ( 29.50, 85.97) ( 29.60, 86.14) ( 29.70, 86.30) ( 29.80, 86.46) ( 29.90, 86.62) ( 30.00, 86.78) ( 30.10, 86.94) ( 30.20, 87.11) ( 30.30, 87.27) ( 30.40, 87.43) ( 30.50, 87.59) ( 30.60, 87.75) ( 30.70, 87.92) ( 30.80, 88.08) ( 30.90, 88.24) ( 31.00, 88.40) ( 31.10, 88.56) ( 31.20, 88.72) ( 31.30, 88.89) ( 31.40, 89.05) ( 31.50, 89.21) ( 31.60, 89.37) ( 31.70, 89.53) ( 31.80, 89.69) ( 31.90, 89.86) ( 32.00, 90.02) ( 32.10, 90.18) ( 32.20, 90.34) ( 32.30, 90.50) ( 32.40, 90.67) ( 32.50, 90.83) ( 32.60, 90.99) ( 32.70, 91.15) ( 32.80, 91.31) ( 32.90, 91.47) ( 33.00, 91.64) ( 33.10, 91.80) ( 33.20, 91.96) ( 33.30, 92.12) ( 33.40, 92.28) ( 33.50, 92.44) ( 33.60, 92.61) ( 33.70, 92.77) ( 33.80, 92.93) ( 33.90, 93.09) ( 34.00, 93.25) ( 34.10, 93.42) ( 34.20, 93.58) ( 34.30, 93.74) ( 34.40, 93.90) ( 34.50, 94.06) ( 34.60, 94.22) ( 34.70, 94.39) ( 34.80, 94.55) ( 34.90, 94.71) ( 35.00, 94.87) ( 35.10, 95.03) ( 35.20, 95.19) ( 35.30, 95.36) ( 35.40, 95.52) ( 35.50, 95.68) ( 35.60, 95.84) ( 35.70, 96.00) ( 35.80, 96.17) ( 35.90, 96.33) ( 36.00, 96.49) ( 36.10, 96.65) ( 36.20, 96.81) ( 36.30, 96.97) ( 36.40, 97.14) ( 36.50, 97.30) ( 36.60, 97.46) ( 36.70, 97.62) ( 36.80, 97.78) ( 36.90, 97.94) ( 37.00, 98.11) ( 37.10, 98.27) ( 37.20, 98.43) ( 37.30, 98.59) ( 37.40, 98.75) ( 37.50, 98.92) ( 37.60, 99.08) ( 37.70, 99.24) ( 37.80, 99.40) ( 37.90, 99.56) ( 38.00, 99.72) ( 38.10, 99.89) ( 20.00, 12.50) ( 40.00, 17.50) ( 60.00, 75.00) ( 80.00, 92.50) ( 16.90, 0.14) ( 17.00, 0.29) ( 17.10, 0.44) ( 17.20, 0.59) ( 17.30, 0.73) ( 17.40, 0.88) ( 17.50, 1.03) ( 17.60, 1.18) ( 17.70, 1.33) ( 17.80, 1.48) ( 17.90, 1.63) ( 18.00, 1.78) ( 18.10, 1.92) ( 18.20, 2.07) ( 18.30, 
2.22) ( 18.40, 2.37) ( 18.50, 2.52) ( 18.60, 2.67) ( 18.70, 2.82) ( 18.80, 2.97) ( 18.90, 3.11) ( 19.00, 3.26) ( 19.10, 3.41) ( 19.20, 3.56) ( 19.30, 3.71) ( 19.40, 3.86) ( 19.50, 4.01) ( 19.60, 4.16) ( 19.70, 4.30) ( 19.80, 4.45) ( 19.90, 4.60) ( 20.00, 4.75) ( 20.10, 4.90) ( 20.20, 5.05) ( 20.30, 5.20) ( 20.40, 5.34) ( 20.50, 5.49) ( 20.60, 5.64) ( 20.70, 5.79) ( 20.80, 5.94) ( 20.90, 6.09) ( 21.00, 6.24) ( 21.10, 6.39) ( 21.20, 6.54) ( 21.30, 6.68) ( 21.40, 6.83) ( 21.50, 6.98) ( 21.60, 7.13) ( 21.70, 7.28) ( 21.80, 7.43) ( 21.90, 7.58) ( 22.00, 7.73) ( 22.10, 7.87) ( 22.20, 8.02) ( 22.30, 8.17) ( 22.40, 8.32) ( 22.50, 8.47) ( 22.60, 8.62) ( 22.70, 8.77) ( 22.80, 8.92) ( 22.90, 9.06) ( 23.00, 9.21) ( 23.10, 9.36) ( 23.20, 9.51) ( 23.30, 9.66) ( 23.40, 9.81) ( 23.50, 9.96) ( 23.60, 10.11) ( 23.70, 10.25) ( 23.80, 10.40) ( 23.90, 10.55) ( 24.00, 10.70) ( 24.10, 10.85) ( 24.20, 11.00) ( 24.30, 11.15) ( 24.40, 11.30) ( 24.50, 11.44) ( 24.60, 11.59) ( 24.70, 11.74) ( 24.80, 11.89) ( 24.90, 12.04) ( 25.00, 12.19) ( 25.10, 12.34) ( 25.20, 12.49) ( 25.30, 12.63) ( 25.40, 12.78) ( 25.50, 12.93) ( 25.60, 13.08) ( 25.70, 13.23) ( 25.80, 13.38) ( 25.90, 13.53) ( 26.00, 13.68) ( 26.10, 13.82) ( 26.20, 13.97) ( 26.30, 14.12) ( 26.40, 14.27) ( 26.50, 14.42) ( 26.60, 14.57) ( 26.70, 14.72) ( 26.80, 14.87) ( 26.90, 15.01) ( 27.00, 15.16) ( 27.10, 15.31) ( 27.20, 15.46) ( 27.30, 15.61) ( 27.40, 15.76) ( 27.50, 15.91) ( 27.60, 16.06) ( 27.70, 16.20) ( 27.80, 16.35) ( 27.90, 16.50) ( 28.00, 16.65) ( 28.10, 16.80) ( 28.20, 16.95) ( 28.30, 17.10) ( 28.40, 17.24) ( 28.50, 17.39) ( 28.60, 17.54) ( 28.70, 17.69) ( 28.80, 17.84) ( 28.90, 17.99) ( 29.00, 18.14) ( 29.10, 18.29) ( 29.20, 18.44) ( 29.30, 18.58) ( 29.40, 18.73) ( 29.50, 18.88) ( 29.60, 19.03) ( 29.70, 19.18) ( 29.80, 19.33) ( 29.90, 19.48) ( 30.00, 19.63) ( 30.10, 19.77) ( 30.20, 19.92) ( 30.30, 20.07) ( 30.40, 20.22) ( 30.50, 20.37) ( 30.60, 20.52) ( 30.70, 20.67) ( 30.80, 20.82) ( 30.90, 20.96) ( 31.00, 21.11) ( 31.10, 
21.26) ( 31.20, 21.41) ( 31.30, 21.56) ( 31.40, 21.71) ( 31.50, 21.86) ( 31.60, 22.01) ( 31.70, 22.15) ( 31.80, 22.30) ( 31.90, 22.45) ( 32.00, 22.60) ( 32.10, 22.75) ( 32.20, 22.90) ( 32.30, 23.05) ( 32.40, 23.20) ( 32.50, 23.34) ( 32.60, 23.49) ( 32.70, 23.64) ( 32.80, 23.79) ( 32.90, 23.94) ( 33.00, 24.09) ( 33.10, 24.24) ( 33.20, 24.39) ( 33.30, 24.53) ( 33.40, 24.68) ( 33.50, 24.83) ( 33.60, 24.98) ( 33.70, 25.13) ( 33.80, 25.28) ( 33.90, 25.43) ( 34.00, 25.58) ( 34.10, 25.72) ( 34.20, 25.87) ( 34.30, 26.02) ( 34.40, 26.17) ( 34.50, 26.32) ( 34.60, 26.47) ( 34.70, 26.62) ( 34.80, 26.77) ( 34.90, 26.91) ( 35.00, 27.06) ( 35.10, 27.21) ( 35.20, 27.36) ( 35.30, 27.51) ( 35.40, 27.66) ( 35.50, 27.81) ( 35.60, 27.96) ( 35.70, 28.10) ( 35.80, 28.25) ( 35.90, 28.40) ( 36.00, 28.55) ( 36.10, 28.70) ( 36.20, 28.85) ( 36.30, 29.00) ( 36.40, 29.15) ( 36.50, 29.29) ( 36.60, 29.44) ( 36.70, 29.59) ( 36.80, 29.74) ( 36.90, 29.89) ( 37.00, 30.04) ( 37.10, 30.19) ( 37.20, 30.34) ( 37.30, 30.48) ( 37.40, 30.63) ( 37.50, 30.78) ( 37.60, 30.93) ( 37.70, 31.08) ( 37.80, 31.23) ( 37.90, 31.38) ( 38.00, 31.53) ( 38.10, 31.67) ( 38.20, 31.82) ( 38.30, 31.97) ( 38.40, 32.12) ( 38.50, 32.27) ( 38.60, 32.42) ( 38.70, 32.57) ( 38.80, 32.72) ( 38.90, 32.86) ( 39.00, 33.01) ( 39.10, 33.16) ( 39.20, 33.31) ( 39.30, 33.46) ( 39.40, 33.61) ( 39.50, 33.76) ( 39.60, 33.91) ( 39.70, 34.05) ( 39.80, 34.20) ( 39.90, 34.35) ( 40.00, 34.50) ( 40.10, 34.65) ( 40.20, 34.80) ( 40.30, 34.95) ( 40.40, 35.10) ( 40.50, 35.24) ( 40.60, 35.39) ( 40.70, 35.54) ( 40.80, 35.69) ( 40.90, 35.84) ( 41.00, 35.99) ( 41.10, 36.14) ( 41.20, 36.29) ( 41.30, 36.43) ( 41.40, 36.58) ( 41.50, 36.73) ( 41.60, 36.88) ( 41.70, 37.03) ( 41.80, 37.18) ( 41.90, 37.33) ( 42.00, 37.48) ( 42.10, 37.62) ( 42.20, 37.77) ( 42.30, 37.92) ( 42.40, 38.07) ( 42.50, 38.22) ( 42.60, 38.37) ( 42.70, 38.52) ( 42.80, 38.67) ( 42.90, 38.81) ( 43.00, 38.96) ( 43.10, 39.11) ( 43.20, 39.26) ( 43.30, 39.41) ( 43.40, 39.56) ( 43.50, 39.71) ( 43.60, 
39.86) ( 43.70, 40.00) ( 43.80, 40.15) ( 43.90, 40.30) ( 44.00, 40.45) ( 44.10, 40.60) ( 44.20, 40.75) ( 44.30, 40.90) ( 44.40, 41.05) ( 44.50, 41.19) ( 44.60, 41.34) ( 44.70, 41.49) ( 44.80, 41.64) ( 44.90, 41.79) ( 45.00, 41.94) ( 45.10, 42.09) ( 45.20, 42.24) ( 45.30, 42.38) ( 45.40, 42.53) ( 45.50, 42.68) ( 45.60, 42.83) ( 45.70, 42.98) ( 45.80, 43.13) ( 45.90, 43.28) ( 46.00, 43.43) ( 46.10, 43.57) ( 46.20, 43.72) ( 46.30, 43.87) ( 46.40, 44.02) ( 46.50, 44.17) ( 46.60, 44.32) ( 46.70, 44.47) ( 46.80, 44.62) ( 46.90, 44.76) ( 47.00, 44.91) ( 47.10, 45.06) ( 47.20, 45.21) ( 47.30, 45.36) ( 47.40, 45.51) ( 47.50, 45.66) ( 47.60, 45.81) ( 47.70, 45.95) ( 47.80, 46.10) ( 47.90, 46.25) ( 48.00, 46.40) ( 48.10, 46.55) ( 48.20, 46.70) ( 48.30, 46.85) ( 48.40, 47.00) ( 48.50, 47.14) ( 48.60, 47.29) ( 48.70, 47.44) ( 48.80, 47.59) ( 48.90, 47.74) ( 49.00, 47.89) ( 49.10, 48.04) ( 49.20, 48.19) ( 49.30, 48.33) ( 49.40, 48.48) ( 49.50, 48.63) ( 49.60, 48.78) ( 49.70, 48.93) ( 49.80, 49.08) ( 49.90, 49.23) ( 50.00, 49.38) ( 50.10, 49.52) ( 50.20, 49.67) ( 50.30, 49.82) ( 50.40, 49.97) ( 50.50, 50.12) ( 50.60, 50.27) ( 50.70, 50.42) ( 50.80, 50.57) ( 50.90, 50.71) ( 51.00, 50.86) ( 51.10, 51.01) ( 51.20, 51.16) ( 51.30, 51.31) ( 51.40, 51.46) ( 51.50, 51.61) ( 51.60, 51.76) ( 51.70, 51.90) ( 51.80, 52.05) ( 51.90, 52.20) ( 52.00, 52.35) ( 52.10, 52.50) ( 52.20, 52.65) ( 52.30, 52.80) ( 52.40, 52.95) ( 52.50, 53.09) ( 52.60, 53.24) ( 52.70, 53.39) ( 52.80, 53.54) ( 52.90, 53.69) ( 53.00, 53.84) ( 53.10, 53.99) ( 53.20, 54.14) ( 53.30, 54.28) ( 53.40, 54.43) ( 53.50, 54.58) ( 53.60, 54.73) ( 53.70, 54.88) ( 53.80, 55.03) ( 53.90, 55.18) ( 54.00, 55.33) ( 54.10, 55.47) ( 54.20, 55.62) ( 54.30, 55.77) ( 54.40, 55.92) ( 54.50, 56.07) ( 54.60, 56.22) ( 54.70, 56.37) ( 54.80, 56.52) ( 54.90, 56.66) ( 55.00, 56.81) ( 55.10, 56.96) ( 55.20, 57.11) ( 55.30, 57.26) ( 55.40, 57.41) ( 55.50, 57.56) ( 55.60, 57.71) ( 55.70, 57.85) ( 55.80, 58.00) ( 55.90, 58.15) ( 56.00, 58.30) ( 56.10, 
58.45) ( 56.20, 58.60) ( 56.30, 58.75) ( 56.40, 58.90) ( 56.50, 59.04) ( 56.60, 59.19) ( 56.70, 59.34) ( 56.80, 59.49) ( 56.90, 59.64) ( 57.00, 59.79) ( 57.10, 59.94) ( 57.20, 60.09) ( 57.30, 60.23) ( 57.40, 60.38) ( 57.50, 60.53) ( 57.60, 60.68) ( 57.70, 60.83) ( 57.80, 60.98) ( 57.90, 61.13) ( 58.00, 61.28) ( 58.10, 61.42) ( 58.20, 61.57) ( 58.30, 61.72) ( 58.40, 61.87) ( 58.50, 62.02) ( 58.60, 62.17) ( 58.70, 62.32) ( 58.80, 62.47) ( 58.90, 62.61) ( 59.00, 62.76) ( 59.10, 62.91) ( 59.20, 63.06) ( 59.30, 63.21) ( 59.40, 63.36) ( 59.50, 63.51) ( 59.60, 63.66) ( 59.70, 63.80) ( 59.80, 63.95) ( 59.90, 64.10) ( 60.00, 64.25) ( 60.10, 64.40) ( 60.20, 64.55) ( 60.30, 64.70) ( 60.40, 64.84) ( 60.50, 64.99) ( 60.60, 65.14) ( 60.70, 65.29) ( 60.80, 65.44) ( 60.90, 65.59) ( 61.00, 65.74) ( 61.10, 65.89) ( 61.20, 66.04) ( 61.30, 66.18) ( 61.40, 66.33) ( 61.50, 66.48) ( 61.60, 66.63) ( 61.70, 66.78) ( 61.80, 66.93) ( 61.90, 67.08) ( 62.00, 67.22) ( 62.10, 67.37) ( 62.20, 67.52) ( 62.30, 67.67) ( 62.40, 67.82) ( 62.50, 67.97) ( 62.60, 68.12) ( 62.70, 68.27) ( 62.80, 68.42) ( 62.90, 68.56) ( 63.00, 68.71) ( 63.10, 68.86) ( 63.20, 69.01) ( 63.30, 69.16) ( 63.40, 69.31) ( 63.50, 69.46) ( 63.60, 69.61) ( 63.70, 69.75) ( 63.80, 69.90) ( 63.90, 70.05) ( 64.00, 70.20) ( 64.10, 70.35) ( 64.20, 70.50) ( 64.30, 70.65) ( 64.40, 70.80) ( 64.50, 70.94) ( 64.60, 71.09) ( 64.70, 71.24) ( 64.80, 71.39) ( 64.90, 71.54) ( 65.00, 71.69) ( 65.10, 71.84) ( 65.20, 71.99) ( 65.30, 72.13) ( 65.40, 72.28) ( 65.50, 72.43) ( 65.60, 72.58) ( 65.70, 72.73) ( 65.80, 72.88) ( 65.90, 73.03) ( 66.00, 73.18) ( 66.10, 73.32) ( 66.20, 73.47) ( 66.30, 73.62) ( 66.40, 73.77) ( 66.50, 73.92) ( 66.60, 74.07) ( 66.70, 74.22) ( 66.80, 74.37) ( 66.90, 74.51) ( 67.00, 74.66) ( 67.10, 74.81) ( 67.20, 74.96) ( 67.30, 75.11) ( 67.40, 75.26) ( 67.50, 75.41) ( 67.60, 75.56) ( 67.70, 75.70) ( 67.80, 75.85) ( 67.90, 76.00) ( 68.00, 76.15) ( 68.10, 76.30) ( 68.20, 76.45) ( 68.30, 76.60) ( 68.40, 76.75) ( 68.50, 76.89) ( 68.60, 
77.04) ( 68.70, 77.19) ( 68.80, 77.34) ( 68.90, 77.49) ( 69.00, 77.64) ( 69.10, 77.79) ( 69.20, 77.94) ( 69.30, 78.08) ( 69.40, 78.23) ( 69.50, 78.38) ( 69.60, 78.53) ( 69.70, 78.68) ( 69.80, 78.83) ( 69.90, 78.98) ( 70.00, 79.13) ( 70.10, 79.27) ( 70.20, 79.42) ( 70.30, 79.57) ( 70.40, 79.72) ( 70.50, 79.87) ( 70.60, 80.02) ( 70.70, 80.17) ( 70.80, 80.31) ( 70.90, 80.46) ( 71.00, 80.61) ( 71.10, 80.76) ( 71.20, 80.91) ( 71.30, 81.06) ( 71.40, 81.21) ( 71.50, 81.36) ( 71.60, 81.51) ( 71.70, 81.65) ( 71.80, 81.80) ( 71.90, 81.95) ( 72.00, 82.10) ( 72.10, 82.25) ( 72.20, 82.40) ( 72.30, 82.55) ( 72.40, 82.70) ( 72.50, 82.84) ( 72.60, 82.99) ( 72.70, 83.14) ( 72.80, 83.29) ( 72.90, 83.44) ( 73.00, 83.59) ( 73.10, 83.74) ( 73.20, 83.89) ( 73.30, 84.03) ( 73.40, 84.18) ( 73.50, 84.33) ( 73.60, 84.48) ( 73.70, 84.63) ( 73.80, 84.78) ( 73.90, 84.93) ( 74.00, 85.08) ( 74.10, 85.22) ( 74.20, 85.37) ( 74.30, 85.52) ( 74.40, 85.67) ( 74.50, 85.82) ( 74.60, 85.97) ( 74.70, 86.12) ( 74.80, 86.27) ( 74.90, 86.41) ( 75.00, 86.56) ( 75.10, 86.71) ( 75.20, 86.86) ( 75.30, 87.01) ( 75.40, 87.16) ( 75.50, 87.31) ( 75.60, 87.46) ( 75.70, 87.60) ( 75.80, 87.75) ( 75.90, 87.90) ( 76.00, 88.05) ( 76.10, 88.20) ( 76.20, 88.35) ( 76.30, 88.50) ( 76.40, 88.65) ( 76.50, 88.79) ( 76.60, 88.94) ( 76.70, 89.09) ( 76.80, 89.24) ( 76.90, 89.39) ( 77.00, 89.54) ( 77.10, 89.69) ( 77.20, 89.84) ( 77.30, 89.98) ( 77.40, 90.13) ( 77.50, 90.28) ( 77.60, 90.43) ( 77.70, 90.58) ( 77.80, 90.73) ( 77.90, 90.88) ( 78.00, 91.03) ( 78.10, 91.17) ( 78.20, 91.32) ( 78.30, 91.47) ( 78.40, 91.62) ( 78.50, 91.77) ( 78.60, 91.92) ( 78.70, 92.07) ( 78.80, 92.22) ( 78.90, 92.36) ( 79.00, 92.51) ( 79.10, 92.66) ( 79.20, 92.81) ( 79.30, 92.96) ( 79.40, 93.11) ( 79.50, 93.26) ( 79.60, 93.41) ( 79.70, 93.55) ( 79.80, 93.70) ( 79.90, 93.85) ( 80.00, 94.00) ( 80.10, 94.15) ( 80.20, 94.30) ( 80.30, 94.45) ( 80.40, 94.60) ( 80.50, 94.74) ( 80.60, 94.89) ( 80.70, 95.04) ( 80.80, 95.19) ( 80.90, 95.34) ( 81.00, 95.49) ( 81.10, 
95.64) ( 81.20, 95.79) ( 81.30, 95.93) ( 81.40, 96.08) ( 81.50, 96.23) ( 81.60, 96.38) ( 81.70, 96.53) ( 81.80, 96.68) ( 81.90, 96.83) ( 82.00, 96.98) ( 82.10, 97.12) ( 82.20, 97.27) ( 82.30, 97.42) ( 82.40, 97.57) ( 82.50, 97.72) ( 82.60, 97.87) ( 82.70, 98.02) ( 82.80, 98.17) ( 82.90, 98.31) ( 83.00, 98.46) ( 83.10, 98.61) ( 83.20, 98.76) ( 83.30, 98.91) ( 83.40, 99.06) ( 83.50, 99.21) ( 83.60, 99.36) ( 83.70, 99.50) ( 83.80, 99.65) ( 83.90, 99.80) ( 84.00, 99.95) (100,100) (0,0)[(100,100)]{} (20,0)(20,0)[4]{}[(0,1)[2]{}]{} (0,10)(0,10)[9]{}[(1,0)[2]{}]{} (0,-8)[0.0]{} (92,-8)[0.25]{} (50,-8)[$\epsilon$]{} (-12,0)[0.0]{} (-12,96)[1.0]{} (-12,60)[$\kappa$]{} ( 2.40, 16.00) ( 3.90, 23.00) ( 6.14, 24.00) ( 10.42, 29.00) ( 20.00, 34.00) ( 40.00, 43.00) ( 60.00, 81.00) ( 80.00, 96.00) ( 0.50, 16.26) ( 1.00, 16.75) ( 1.50, 17.25) ( 2.00, 17.74) ( 2.50, 18.23) ( 3.00, 18.73) ( 3.50, 19.22) ( 4.00, 19.71) ( 4.50, 20.21) ( 5.00, 20.70) ( 5.50, 21.19) ( 6.00, 21.69) ( 6.50, 22.18) ( 7.00, 22.67) ( 7.50, 23.16) ( 8.00, 23.66) ( 8.50, 24.15) ( 9.00, 24.64) ( 9.50, 25.14) ( 10.00, 25.63) ( 10.50, 26.12) ( 11.00, 26.62) ( 11.50, 27.11) ( 12.00, 27.60) ( 12.50, 28.10) ( 13.00, 28.59) ( 13.50, 29.08) ( 14.00, 29.58) ( 14.50, 30.07) ( 15.00, 30.56) ( 15.50, 31.06) ( 16.00, 31.55) ( 16.50, 32.04) ( 17.00, 32.54) ( 17.50, 33.03) ( 18.00, 33.52) ( 18.50, 34.02) ( 19.00, 34.51) ( 19.50, 35.00) ( 20.00, 35.50) ( 20.50, 35.99) ( 21.00, 36.48) ( 21.50, 36.98) ( 22.00, 37.47) ( 22.50, 37.96) ( 23.00, 38.46) ( 23.50, 38.95) ( 24.00, 39.44) ( 24.50, 39.94) ( 25.00, 40.43) ( 25.50, 40.92) ( 26.00, 41.42) ( 26.50, 41.91) ( 27.00, 42.40) ( 27.50, 42.90) ( 28.00, 43.39) ( 28.50, 43.88) ( 29.00, 44.38) ( 29.50, 44.87) ( 30.00, 45.36) ( 30.50, 45.86) ( 31.00, 46.35) ( 31.50, 46.84) ( 32.00, 47.34) ( 32.50, 47.83) ( 33.00, 48.32) ( 33.50, 48.82) ( 34.00, 49.31) ( 34.50, 49.80) ( 35.00, 50.30) ( 35.50, 50.79) ( 36.00, 51.28) ( 36.50, 51.78) ( 37.00, 52.27) ( 37.50, 52.76) ( 38.00, 53.26) ( 38.50, 
53.75) ( 39.00, 54.24) ( 39.50, 54.74) ( 40.00, 55.23) ( 40.50, 55.72) ( 41.00, 56.22) ( 41.50, 56.71) ( 42.00, 57.20) ( 42.50, 57.70) ( 43.00, 58.19) ( 43.50, 58.68) ( 44.00, 59.18) ( 44.50, 59.67) ( 45.00, 60.16) ( 45.50, 60.66) ( 46.00, 61.15) ( 46.50, 61.64) ( 47.00, 62.14) ( 47.50, 62.63) ( 48.00, 63.12) ( 48.50, 63.62) ( 49.00, 64.11) ( 49.50, 64.60) ( 50.00, 65.10) ( 50.50, 65.59) ( 51.00, 66.08) ( 51.50, 66.57) ( 52.00, 67.07) ( 52.50, 67.56) ( 53.00, 68.05) ( 53.50, 68.55) ( 54.00, 69.04) ( 54.50, 69.53) ( 55.00, 70.03) ( 55.50, 70.52) ( 56.00, 71.01) ( 56.50, 71.51) ( 57.00, 72.00) ( 57.50, 72.49) ( 58.00, 72.99) ( 58.50, 73.48) ( 59.00, 73.97) ( 59.50, 74.47) ( 60.00, 74.96) ( 60.50, 75.45) ( 61.00, 75.95) ( 61.50, 76.44) ( 62.00, 76.93) ( 62.50, 77.43) ( 63.00, 77.92) ( 63.50, 78.41) ( 64.00, 78.91) ( 64.50, 79.40) ( 65.00, 79.89) ( 65.50, 80.39) ( 66.00, 80.88) ( 66.50, 81.37) ( 67.00, 81.87) ( 67.50, 82.36) ( 68.00, 82.85) ( 68.50, 83.35) ( 69.00, 83.84) ( 69.50, 84.33) ( 70.00, 84.83) ( 70.50, 85.32) ( 71.00, 85.81) ( 71.50, 86.31) ( 72.00, 86.80) ( 72.50, 87.29) ( 73.00, 87.79) ( 73.50, 88.28) ( 74.00, 88.77) ( 74.50, 89.27) ( 75.00, 89.76) ( 75.50, 90.25) ( 76.00, 90.75) ( 76.50, 91.24) ( 77.00, 91.73) ( 77.50, 92.23) ( 78.00, 92.72) ( 78.50, 93.21) ( 79.00, 93.71) ( 79.50, 94.20) ( 80.00, 94.69) ( 80.50, 95.19) ( 81.00, 95.68) ( 81.50, 96.17) ( 82.00, 96.67) ( 82.50, 97.16) ( 83.00, 97.65) ( 83.50, 98.15) ( 84.00, 98.64) ( 84.50, 99.13) ( 85.00, 99.63)
Networks of many interacting units occur in areas as diverse as gene regulation, neural networks, food webs in ecology, species relationships in biological evolution, economic interactions, and the organization of the internet. For studying the statistical-mechanics properties of such complex systems, discrete dynamical networks provide a simple testbed for effects of globally interacting information transfer in network structures. One example is the threshold network with sparse asymmetric connections. Networks of this kind were first studied as diluted, non-symmetric spin glasses [@D87] and diluted, asymmetric neural networks [@DGZ87; @KZ87]. For the study of topological questions in networks, a version with discrete connections $c_{ij}=\pm1$ is convenient and will be considered here. It is a subset of Boolean networks [@K69; @K90] with similar dynamical properties. Random realizations of these networks exhibit complex non-Hamiltonian dynamics, including transients and limit cycles [@K88a; @B96]. In particular, a phase transition is observed at a critical average connectivity $K_c$, with the lengths of transients and attractors (limit cycles) diverging exponentially with system size for an average connectivity larger than $K_c$. A theoretical analysis is limited by the non-Hamiltonian character of the asymmetric interactions, such that standard tools of statistical mechanics do not apply [@D87]. However, combinatorial as well as numerical methods provide a quite detailed picture of their dynamical properties and of their correspondence with Boolean networks [@K88a; @B96; @DP86; @DS86; @DW86; @DF86; @K88b; @F88; @FK88; @B98]. While basic dynamical properties of interaction networks with fixed architecture have been studied with such models, the origin of specific structural properties of networks in natural systems is often unknown.
For example, the observed average connectivity in a nervous structure or in a biological genome is hard to explain in a framework of networks with a static architecture. For the case of regulation networks in the genome, Kauffman postulated that gene regulatory networks may exhibit properties of dynamical networks near criticality [@K69; @K93]. However, this postulate does not provide a mechanism able to generate an average connectivity near the critical point. An interesting question is whether connectivity may be driven towards a critical point by some dynamical mechanism. In the following we sketch such an approach in a setting with an explicit evolution of the connectivity of networks. Network models of evolving topology have been studied earlier with respect to critical properties in other areas, e.g., in models of macro-evolution [@Sole]. Network evolution with a focus on gene regulation was first studied for Boolean networks in [@BS98], where self-organization in network evolution was observed, and for threshold networks in [@BS00]. Combining the evolution of Boolean networks with game-theoretic interactions has been used to model economic networks [@PBC00]. In a recent paper, Christensen et al. [@CDKS98] introduce a static network with an evolving topology of undirected links that explicitly evolves towards a critical connectivity in the largest cluster of the network. In particular, they observe for a neighborhood-oriented rewiring rule that the connectivity of the largest cluster evolves towards the critical $K_c=2$ of a marginally connected network. Motivated by this work, we here consider the topological evolution of threshold networks with asymmetric links to study how local rules may affect the global connectivity of a network, including the entire set of clusters of the network. In the remainder of this Letter we define a threshold network model with a local, topology-evolving rule.
Then numerical results are presented that indicate an evolution of topology towards a critical connectivity in the limit of large system size. Finally, we discuss these results with respect to other mechanisms of self-organization and point to possible links with interaction networks in natural systems. Let us consider a network of $N$ randomly interconnected binary elements with states $\sigma_i=\pm1$. For each site $i$, its state at time $t+1$ is a function of the inputs it receives from other elements at time $t$: $$\begin{aligned} \sigma_i(t+1) = \mbox{sgn}\left(f_i(t)\right) \end{aligned}$$ with $$\begin{aligned} f_i(t) = \sum_{j=1}^N c_{ij}\sigma_j(t) + h. \end{aligned}$$ The interaction weights $c_{ij}$ take discrete values $c_{ij}=\pm1$, with $c_{ij} = 0$ if site $i$ does not receive any input from element $j$. In the following, the threshold parameter $h$ is set to zero. The dynamics of the network states is generated by iterating this rule starting from a random initial condition, eventually reaching a periodic attractor (limit cycle or fixed point). Then we apply the following local rewiring rule to a randomly selected node $i$ of the network:\ [ **If node $i$ does not change its state during the attractor, it receives a new non-zero link $c_{ij}$ from a random node $j$. If it changes its state at least once during the attractor, it loses one of its non-zero links $c_{ij}$.** ]{}\ Iterating this process leads to a self-organization of the average connectivity of the network. To be more specific, let us now describe one of several possible realizations of such an algorithm in detail. We define the average activity $A(i)$ of a site $i$ $$\begin{aligned} A(i) = \frac{1}{T_2-T_1}\sum_{t=T_1}^{T_2}\sigma_i(t) \end{aligned}$$ where the sum is taken over the dynamical attractor of the network defined by $T_1$ and $T_2$. For practical purposes, if the attractor is not reached after $T_{max}$ updates, $A(i)$ is measured over the last $T_{max}/2$ updates. 
This avoids exponential slowing down by long attractor periods for an average connectivity $K>2$. The algorithm is then defined as follows: \(1) Choose a random network with an average connectivity $K_{ini}$. \(2) Choose a random initial state vector $\vec{\sigma}(0)=$ $(\sigma_1(0),...,\sigma_N(0) )$. \(3) Calculate the new system states $\vec{\sigma}(t),\quad t=1,...,T $ according to eqn. (2), using parallel update of the $N$ sites. \(4) Once a previous state reappears (a dynamical attractor is reached), or otherwise after $T_{max}$ updates, the simulation is stopped. Then change the topology of the network according to the following local rewiring rule: \(5) A site $i$ is chosen at random and its average activity $A(i)$ is determined. \(6) If $|A(i)|=1$, $i$ receives a new link $c_{ij}$ from a site $j$ selected at random, choosing $c_{ij}=+1$ or $-1$ with equal probability. If $|A(i)|<1$, one of the existing non-zero links of site $i$ is set to zero. \(7) Finally, one non-zero entry of the connectivity matrix is selected at random and its sign reversed. \(8) Go to step 2 and iterate. The fluctuations introduced in step 7 as random sign reversals of links are motivated by structurally neutral noise often observed in natural systems. Omitting this step does not change the basic behavior of the algorithm; however, the distribution of the number of inputs per node then evolves away from a Poissonian, thereby increasing the fraction of nodes with many inputs. The resulting dynamics only differs from the original algorithm in a slightly larger connectivity $K_{ev}$ of the evolved networks. This effect vanishes $\sim 1/N$ with increasing system size. The typical picture arising from the model as defined above is shown in Fig. 1 for a system of size $N=1024$. Independent of the initial connectivity, the system evolves towards a statistically stationary state with an average connectivity $K_{ev}(N=1024)=2.55 \pm 0.04$.
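The dynamics of eqns. (1)-(3) and the rewiring steps (1)-(8) above can be condensed into a short sketch. This is a sketch only: the function and variable names are ours, the tie convention $\mbox{sgn}(0)=+1$ is an assumption not fixed by the Letter (which only sets $h=0$), and duplicate links from the same source node are tolerated for simplicity.

```python
import random

def update(state, links):
    # One parallel update: sigma_i(t+1) = sgn(sum_j c_ij * sigma_j(t)), h = 0.
    # links[i] is a list of (j, c_ij) input pairs; sgn(0) = +1 is an assumption.
    new_state = []
    for inputs in links:
        f = sum(c * state[j] for j, c in inputs)
        new_state.append(1 if f >= 0 else -1)
    return new_state

def attractor(state, links, t_max=1000):
    # Iterate until a state recurs and return the periodic part of the orbit;
    # if no recurrence within t_max, fall back to the last t_max/2 states.
    seen, orbit = {}, []
    for t in range(t_max):
        key = tuple(state)
        if key in seen:
            return orbit[seen[key]:]
        seen[key] = t
        orbit.append(state)
        state = update(state, links)
    return orbit[t_max // 2:]

def evolve_step(n, links, rng):
    # Steps (2)-(7): run to an attractor, then rewire one random node.
    state = [rng.choice((-1, 1)) for _ in range(n)]
    cycle = attractor(state, links)
    i = rng.randrange(n)
    a_i = sum(s[i] for s in cycle) / len(cycle)    # average activity A(i)
    if abs(a_i) == 1.0:                            # frozen node: gain a link
        links[i].append((rng.randrange(n), rng.choice((-1, 1))))
    elif links[i]:                                 # active node: lose a link
        links[i].pop(rng.randrange(len(links[i])))
    j = rng.randrange(n)                           # step (7): flip one link sign
    if links[j]:
        k = rng.randrange(len(links[j]))
        src, sign = links[j][k]
        links[j][k] = (src, -sign)
    return sum(len(row) for row in links) / n      # current average connectivity K
```

Iterating `evolve_step` from an arbitrary $K_{ini}$ is what drives the average connectivity towards the stationary $K_{ev}$ described above.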
With varying system size we find that with increasing $N$ the average connectivity converges towards $K_c$ (which, for threshold $h=0$ as considered here, is found at $K_c=2$); see Fig. 2. One observes the scaling relationship $$\begin{aligned} K_{ev}(N) - 2 = c\cdot N^{-\delta} \end{aligned}$$ with $c = 12.4 \pm 0.5$ and $\delta = 0.47 \pm 0.01$. Thus, in the large system size limit $N \rightarrow \infty$ the networks evolve towards the critical connectivity $K_c = 2$. The self-organization towards criticality observed in this model is different from currently known mechanisms exhibiting the amazingly general phenomenon of self-organized criticality (SOC) [@SOC; @SBCW; @Sole]. Our model introduces a new, and interestingly different, type of mechanism by which a system self-organizes towards criticality, here $K \rightarrow K_c$. This class of mechanisms lifts the notions of SOC to a new level. In particular, it exhibits considerable robustness against noise in the system. The main mechanism here is based on a topological phase transition in dynamical networks. To see this, consider the statistical properties of the average activity $A(i)$ of a site $i$ for a random network. These are closely related to the frozen component $C(K)$ of the network, defined as the fraction of nodes that do not change their state along the attractor. The average activity $A(i)$ of a frozen site $i$ thus obeys $|A(i)|=1$. In the limit of large $N$, $C(K)$ undergoes a transition at $K_c$, vanishing for larger $K$. With respect to the average activity of a node, $C(K)$ equals the probability that a random site $i$ in the network has $|A(i)|=1$. Note that this is the quantity which is checked stochastically by the local update rule in the above algorithm. The frozen component $C(K,N)$ is shown for random networks of two different system sizes $N$ in Fig. 3.
One finds that $C(K,N)$ can be approximated by $$\begin{aligned} C(K,N) = \frac{1}{2} \{ 1+\tanh{[-\alpha(N)\cdot(K - K_0(N)\,)]} \}. \end{aligned}$$ This describes the transition of $C(K,N)$ at an average connectivity $K_0(N)$ which depends only on the system size $N$. One finds for the finite-size scaling of $K_0(N)$ that $$\begin{aligned} K_0(N) - 2 = a\cdot N^{-\beta} \end{aligned}$$ with $a = 3.30 \pm 0.17$ and $\beta = 0.34 \pm 0.01$ (see Fig. 4), whereas the parameter $\alpha$ scales with system size as $$\begin{aligned} \alpha(N) = b\cdot N^\gamma \end{aligned}$$ with $b= 0.14 \pm 0.016$ and $\gamma = 0.41 \pm 0.01$. Thus we see that in the thermodynamic limit $N \rightarrow \infty$ the transition from the frozen to the chaotic phase becomes a sharp step function at $K_0(\infty) = K_c$. These considerations apply well to the evolving networks in the rewiring algorithm. In addition to the rewiring algorithm as described in this Letter, we tested a number of different versions of the model. Including the transient in the measurement of the average activity $A(i)$ results in a similar overall behavior (where we allowed a few time steps for the transient to decouple from initial conditions). Another version instead uses the correlation between two sites, rather than $A(i)$, as a mutation criterion (this rule could be called “anti-Hebbian”, as in the context of neural network learning). This version was further varied by allowing different locations for the mutated links, either between the tested sites or at just one of the nodes. Some of these versions will be discussed in detail in a separate article [@RB00]. All these different realizations exhibit the same basic behavior as found for the model above. Thus, the mechanism proposed in this Letter exhibits considerable robustness.
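The fitted forms for $C(K,N)$, $K_0(N)$, and $\alpha(N)$ above can be evaluated directly; the following sketch (function names are ours, constants are the fitted central values quoted above) illustrates how the frozen component sharpens into a step at $K_c=2$ as $N$ grows:

```python
import math

def k0(n, a=3.30, beta=0.34):
    # Transition point: K_0(N) = 2 + a * N**(-beta)
    return 2.0 + a * n ** (-beta)

def alpha(n, b=0.14, gamma=0.41):
    # Steepness of the transition: alpha(N) = b * N**gamma
    return b * n ** gamma

def frozen_component(k, n):
    # C(K,N) = (1/2) * {1 + tanh[-alpha(N) * (K - K_0(N))]}
    return 0.5 * (1.0 + math.tanh(-alpha(n) * (k - k0(n))))
```

For example, at $N=10^8$ one gets $K_0 \approx 2.006$ and a nearly sharp step: $C(1,N)\approx 1$ while $C(3,N)\approx 0$, consistent with the thermodynamic-limit behaviour described above.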
An interesting question is whether a comparable mechanism may occur in natural complex systems; in particular, whether it could lead to observable consequences that cannot be explained otherwise. One example where such mechanisms could occur is the regulation of connectivity density in neural systems. Activity-dependent attachment of synapses to a neuron is well known experimentally, for example in the form of the gating of synaptic changes by activity correlation between neurons [@RS79]. Such local attachment rules could provide a sufficient basis for a collective organization of the kind described in this Letter to occur. For symmetric neural networks, similar rules have been discussed, e.g., in the context of “Hebbian unlearning” suppressing spurious memories [@HFP83]. In the asymmetric networks studied here, however, such rules appear to generate a completely new form of self-organization dynamics. As a consequence, an emerging average connectivity $K_{ev}$ could be stabilized at a specific value mostly determined by local properties of the dynamical elements of the system. It would be interesting to discuss whether synaptic density in biological systems could be regulated by such mechanisms. Another biological observable of interest is the connectivity of gene-gene interactions in the expression of the genome, as first studied by Kauffman [@K69]. Whether this observable results from any such mechanism is clearly an open question. However, one may ask whether biological evolution exerts selection pressure on the single-gene level that results in a selection rule similar to our algorithm: e.g., for a frozen regulatory gene which is practically non-functional to obtain a new function (obtain a new link), as well as for a quite active gene to reduce functionality (remove a link). First experimental estimates for the global observable of genome connectivity are available for E. coli, with a value in the range $2-3$ [@THPC98].
While it is clearly too early to speculate about the mechanisms of global genome organization, it is interesting to note that the robust self-organizing algorithm presented here provides a mechanism that in principle predicts a value in this range. To summarize, we study the topological evolution of asymmetric dynamical networks on the basis of a local rewiring rule. We observe a network evolution with the average connectivity $K$ of the network evolving towards the critical connectivity $K_c$ without tuning. In the limit of large system size $N$, this convergence becomes exact. It is quite conceivable that this form of global evolution of a network structure towards criticality might be found in natural complex systems. [99]{} B. Derrida, J. Phys. A [**20**]{}, L721 (1987). B. Derrida, E. Gardner, and A. Zippelius, Europhys. Lett. [**4**]{}, 167 (1987). R. Kree and A. Zippelius, Phys. Rev. A [**36**]{}, 4421 (1987). S.A. Kauffman, J. Theor. Biol. [**22**]{}, 437 (1969). S.A. Kauffman, Physica (Amsterdam) [**42D**]{}, 135 (1990). K.E. Kürten, Phys. Lett. A [**129**]{}, 157-160 (1988). U. Bastolla and G. Parisi, Physica (Amsterdam) [**98D**]{}, 1-25 (1996). B. Derrida and Y. Pomeau, Europhys. Lett. [**1**]{}, 45-49 (1986). B. Derrida and D. Stauffer, Europhys. Lett. [**2**]{}, 79 (1986). B. Derrida and G. Weisbuch, J. Phys. (Paris) [**47**]{}, 1297-1303 (1986). B. Derrida and H. Flyvbjerg, J. Phys. (Paris) [**48**]{}, 971-978 (1986). K.E. Kürten, J. Phys. A [**21**]{}, L615-L619 (1988). H. Flyvbjerg, J. Phys. A [**21**]{}, L955 (1988). H. Flyvbjerg and N.J. Kjaer, J. Phys. A [**21**]{}, 1695 (1988). U. Bastolla and G. Parisi, Physica (Amsterdam) [**115D**]{}, 203-218 (1998); [**115D**]{}, 219-233 (1998). S.A. Kauffman, [*The Origins of Order: Self-Organization and Selection in Evolution*]{} (Oxford University Press, New York, 1993). R.V. Sole and S. Manrubia, Phys. Rev. E [**54**]{}, R42-R45 (1996). S. Bornholdt and K. Sneppen, Phys. Rev. Lett. [**81**]{}, 236-239 (1998). S.
Bornholdt and K. Sneppen (to be published); cond-mat/0003333. M. Paczuski, K.E. Bassler, and A. Corral, Phys. Rev. Lett. [**84**]{}, 3185 (2000). K. Christensen, R. Donangelo, B. Koiller, and K. Sneppen, Phys. Rev. Lett. [**81**]{}, 2380 (1998). P. Bak, C. Tang, and K. Wiesenfeld, Phys. Rev. Lett. [**59**]{}, 381 (1987); P. Bak and K. Sneppen, Phys. Rev. Lett. [**71**]{}, 4083 (1993). S. Bornholdt and C. Wetterich, Phys. Lett. B [**282**]{}, 399 (1992). T. Rohlf and S. Bornholdt (to be published). J.P. Rauschecker and W. Singer, Nature (London) [**280**]{}, 58-60 (1979). J.J. Hopfield, D.I. Feinstein, and R.G. Palmer, Nature (London) [**304**]{}, 158-159 (1983). D. Thieffry, A.M. Huerta, E. Perez-Rueda, and J. Collado-Vides, Bioessays [**20**]{}, 433-440 (1998).
--- abstract: 'In recent years, work has been done to develop the theory of General Reinforcement Learning (GRL). However, there are few examples demonstrating the known results regarding generalised discounting. We have added to the GRL simulation platform AIXIjs the functionality to assign an agent arbitrary discount functions, and an environment which can be used to determine the effect of discounting on an agent’s policy. Using this, we investigate how geometric, hyperbolic and power discounting affect an informed agent in a simple MDP. We experimentally reproduce a number of theoretical results, and discuss some related subtleties. It was found that the agent’s behaviour followed what is expected theoretically, assuming appropriate parameters were chosen for the Monte-Carlo Tree Search (MCTS) planning algorithm.' author: - 'Sean Lamont[^1]' - 'John Aslanides[^2]' - 'Jan Leike[^3]' - 'Marcus Hutter[^4]' title: | Generalised Discount Functions\ applied to a Monte-Carlo AI$\mu$ Implementation --- Reinforcement Learning, Discount Function, Time Consistency, Monte Carlo Introduction ============ Reinforcement learning (RL) is a branch of artificial intelligence which is focused on designing and implementing agents that learn how to achieve a task through rewards. Most RL methods focus on one specialised area, for example the AlphaGo program from Google DeepMind, which is targeted at the board game Go [\[]{}12[\]]{}. General Reinforcement Learning (GRL) is concerned with the design of agents which are effective in a wide range of environments. RL agents use a *discount function* when choosing their future actions, which controls how heavily they weight future rewards. Several theoretical results have been proven for arbitrary discount functions relating to GRL agents [\[]{}8[\]]{}. We present some contributions to the platform AIXIjs[^5] [\[]{}1[\]]{}[\[]{}2[\]]{}, which enables the simulation of GRL agents for gridworld problems.
Being web-based allows this platform to be used as an educational tool, as it provides an understandable visual demonstration of theoretical results. In addition, it allows the testing of GRL agents in several different types of environments and scenarios, which can be used to analyze and compare models. This helps to showcase the different strengths and weaknesses among GRL agents, making it a useful tool for the GRL community for demonstrating results. Our main work here is to extend this platform to arbitrary discount functions. Using this, we then examine the behaviour induced by common discount functions and compare it to what is theoretically expected. We first provide the necessary background to understand the experiments by introducing the RL setup, agent and planning algorithms, general discounting, and AIXIjs. We then present details of the environment and agent implementation used for the analyses. Finally, we present the experiments and the results, along with a discussion for each function. Background =========== Reinforcement Learning Setup ---------------------------- RL research is concerned with the design and implementation of goal-oriented agents. The characteristic approach of RL is to associate *rewards* with the desired goal and allow the *agent* to learn the best strategy for gaining rewards itself through trial and error [\[]{}14[\]]{}. The agent interacts with an *environment* by producing an action $a$, and the environment responds with an observation and reward pair $(o,r)=e$ which we call a *percept*. The *history* up to interaction cycle $k$ is given by the string of all actions and percepts, $a_{1}e_{1}\ldots a_{k-1}e_{k-1}$. To simplify notation, this is written as æ$_{<k}$.
Mathematically, an agent’s *policy* is a stochastic function mapping a history to an action, $\pi:(\mathcal{A}\times\mathcal{E})^{*}\rightsquigarrow\mathcal{A},$ while an environment is a stochastic map from a history and an action to a percept, $\mu:(\mathcal{A}\times\mathcal{E})^{*}\times\mathcal{A}\rightsquigarrow\mathcal{E}$, where $\rightsquigarrow$ denotes a stochastic mapping. In the context of adaptive control, Bellman [\[]{}3[\]]{} first introduced equations for expressing optimal policies in both deterministic and stochastic environments, including infinite state spaces. Also introduced was the idea of a *value function*. A value function is how an agent assigns value to an environment *state* (or a state-action pair), where value is a measure of the expected future discounted reward sum. To solve the Bellman equations, it is necessary to assume a fully observable *Markovian* environment (a *Markov Decision Process*, or an *MDP*). In an MDP, the agent can observe all relevant information from the environment at any time, without needing to remember the history. Although useful for MDPs, many problems of interest lack the necessary assumptions to tractably solve the Bellman equations. The problem of scaling RL to non-Markovian and *partially observable* real world domains provides the motivation for General Reinforcement Learning. In such cases, it is useful to express the value function in terms of the agent’s history, with the value of a policy $\pi$ with history $\text{æ}_{<t}$ and environment $\mu$ given by the equation: $$V_{\mu}^{\pi}(\text{æ}_{<t}):=\mathbb{E}_{\mu}^{\pi}\left[\sum_{k=t}^{\infty}\gamma_{k}r_{k}\,\middle|\,\text{æ}_{<t}\right]$$ where $r$ is the reward and $\gamma$ is a discount function [\[]{}9[\]]{}. This equation gives the $\mu$-expected utility of a policy $\pi$. If we are in an MDP, then we can replace the history by the current state, and rewrite this as a Bellman equation [\[]{}3[\]]{}.
AI$\mu$ ------- The GRL agent AI$\mu$ [\[]{}4[\]]{} is designed to achieve optimal reward in a known environment. There are no other assumptions made about the environment, so this agent extends to partially observable cases. AI$\mu$ is simply defined as the agent which maximises the value function given by (1). Specifically, for any environment $\mu$, $$\pi^{AI\mu}\in\arg\max_{\pi}V_{\mu}^{\pi}$$ As there is usually no way to know the true environment, the main purpose of AI$\mu$ is to provide a theoretical upper bound for the performance of an agent in a given environment. As we wish to isolate the effect of discounting, AI$\mu$ is the agent used for our experiments, to remove uncertainty in the agent’s model. Generalised Discounting ----------------------- A discount function is used to weight rewards based on their temporal position relative to the current time. There are several motivations for using a discount function to determine utility, as opposed to taking an unaltered sum of rewards. In practice, a discount function allows the agent’s designer to decide how it would like the agent to value rewards based on how far away they are. A discount function also serves to prevent the utility from diverging to infinity, as is the case when using undiscounted reward sums. Samuelson [\[]{}11[\]]{} first introduced the model of discounted utility, with the utility at time $k$ given by the sum of discounted future rewards: $$V_{k}=\sum_{t=k}^{\infty}\gamma_{t-k}r_{t}$$ This model is the most commonly used in both RL and other disciplines, but has several issues. These include that the discount function cannot change over time, and that the value of an action is independent of the history. Hutter and Lattimore [\[]{}8[\]]{} address these issues, first by using the GRL framework to allow decisions which consider the agent’s history. They also generalise the setting to allow a change in discounting over time.
Specifically, they define a discount vector $\mathbf{\gamma}^{k}$ for each time step $k$, with the entries in the vector being the discount applied at each time step $t>k$. Replacing $\gamma_{t-k}$ with $\mathbf{\gamma}^{k}$ in (3) gives a more general model of discounted utility, as it allows the discount function to change over time by using different vectors for different time steps. Using this model, Hutter and Lattimore [\[]{}8[\]]{} provide a general classification of *time inconsistent* discounting. Qualitatively, a policy is time consistent if it agrees with previous plans and time inconsistent if it does not. For example, if I plan to complete a task in 2 hours but then after 1 hour plan to do it after another 2 hours, my policy is time inconsistent. Formally, an agent using discount vectors $\gamma^{k}$ is time consistent iff: $$\forall k\;\exists a_{k}>0\;\text{such that}\quad\gamma_{t}^{k}=a_{k}\mathbf{\gamma}_{t}^{1},\quad\forall t\geq k\in\mathbb{N}$$ That is, the discount applied from the current time $k$ to the reward at time $t$ is equal to some positive scalar multiple of the discount used for $t$ at time 1. Also presented in their work is a list of common discount functions and a characterisation of which of these are time consistent. These form the basis for our experiments, and we present a taxonomy below. Given the current time $k$, future time $t>k$, and a discount vector $\gamma$, we have: *Geometric Discounting*: $\mathbf{\gamma}_{t}^{k}=g^{t}, g\in(0,1)$. Geometric discounting is the most commonly used discount function, as it provides a straightforward and predictable way to value closer rewards higher. It is also convenient, as for $g\in(0,1)$ it ensures the expected discounted reward (i.e. value) will always be bounded, and therefore well defined in all instances. Geometric discounting is always time consistent, which is apparent when considering the definition in (4).
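The criterion in (4) can also be checked numerically. The sketch below (function names are ours) verifies it for geometric discounting, and shows that a discount weighting only the distance $t-k$ fails the criterion:

```python
def is_time_consistent(gamma, k_max=10, t_max=30, tol=1e-12):
    # Criterion (4): for each k there must exist a_k > 0 with
    # gamma(k, t) = a_k * gamma(1, t) for all t >= k.
    for k in range(2, k_max + 1):
        a_k = gamma(k, k) / gamma(1, k)   # candidate scalar, fixed at t = k
        for t in range(k, t_max + 1):
            if abs(gamma(k, t) - a_k * gamma(1, t)) > tol:
                return False
    return True

def geometric(k, t, g=0.95):
    # gamma_t^k = g**t: independent of k, hence trivially consistent.
    return g ** t

def distance_based(k, t):
    # Counterexample weighting only the distance t - k; no single a_k works.
    return 1.0 / (1.0 + (t - k))
```

Here `is_time_consistent(geometric)` returns `True`, while `is_time_consistent(distance_based)` returns `False`, matching the classification in (4).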
*Hyperbolic Discounting*: $\mathbf{\gamma}_{t}^{k}=\frac{1}{(1+\kappa(t-k))^{\beta}},\kappa\in\mathbb{R}^{+},\beta\geq1$. Hyperbolic discounting has been thought to accurately model human behaviour, with some research suggesting humans discount this way when deciding actions [\[]{}15[\]]{}. Hyperbolic discounting is time inconsistent, which is much of the reason why it is considered to model many irrational human behaviour patterns. The time inconsistency is clear, as it is not possible to factor the above expression in a way which satisfies (4). Hyperbolic discounting is most commonly seen for $\beta=1$, with $\beta>1$ ensuring the discounted reward sum does not diverge to infinity. *Power Discounting*: $\mathbf{\gamma}_{t}^{k}=t^{-\beta},\beta>1$. Power discounting is of interest because it causes a *growing effective horizon*. This in effect causes the agent to become more far-sighted over time, with future rewards becoming relatively more desirable as time progresses. This is flexible, as there is no need to assign an arbitrary fixed effective horizon; it will instead grow over time. Hutter and Lattimore [\[]{}8[\]]{} point out that this function is time consistent, which combined with the growing effective horizon makes it an effective means of agent discounting. Monte-Carlo Tree Search with $\rho$UCT --------------------------------------- Monte-Carlo Tree Search (MCTS) is a planning algorithm designed to approximate the expectimax search tree generated by (1), which is usually intractable to fully enumerate. UCT [\[]{}7[\]]{} is an MCTS algorithm which is effective for Markovian settings. Veness et al. [\[]{}16[\]]{} extend this to general environments with the $\rho$UCT algorithm. The algorithm generates a tree comprised of two types of nodes, ’decision’ nodes and ’chance’ nodes. A decision node reflects the agent’s possible actions, while chance nodes represent the possible environment responses.
A summary of the algorithm is as follows: First, plan forward using standard Monte-Carlo simulation. Then select an action in the tree using the UCB action policy; define a search horizon $m$, maximum and minimum rewards $\beta$ and $\alpha$, value estimate $V'$, and history $h$, with $T(ha)$ being the number of visits to a chance node, and $T(h)$ the number of visits to a decision node. Then, for $T(ha)>0$: $$a_{UCB}=\arg\max_{a}\frac{1}{m(\beta-\alpha)}V'(ha)+C\sqrt{\frac{\log(T(h))}{T(ha)}}$$ If $T(ha)=0$, the untried action $a$ is selected by default. The parameter $C$ is an exploration constant, which can be modified to control the likelihood that an agent will take an exploratory action. Veness et al. [\[]{}16[\]]{} remark that high values of $C$ lead to ’bushy’ and short trees, compared to low values yielding longer and more discerning trees. Once the best action is selected, the values for each node are updated backwards to the root to reflect the new action. The primary strength of this algorithm is that it allows for history-based tree search, by using $\rho$ as the current environment model and planning based on that. AIXIjs ------ We implement our experiments using AIXIjs, a JavaScript platform designed to demonstrate GRL results. AIXIjs is structured as follows: There are currently several GRL agents which have been implemented to work in different (toy) gridworld and MDP environments. Using these, there are a collection of demos which are each designed to showcase some theoretical result in GRL and are presented on the web page. Once a demo is selected, the user can choose to alter some default parameters and then run the demo. This then begins a batch simulation with the specified agent and environment for the selected number of time cycles (a batch simulation runs the whole simulation as one job, without any interference). The data collected during the simulation is then used to visualise the interaction.
The API allows for anyone to design their own demos based on current agents and environments, and for new agents and environments to be added and interfaced into the system. It also includes the option to run the simulations as experiments, collecting the data relevant to the simulation and storing it in a JSON file for analysis. The source code can be accessed at: <https://github.com/aslanides/aixijs> while the demos can be found at: <http://aslanides.io/aixijs/> or <http://www.hutter1.net/aixijs/> There has been some related work in adapting GRL results to a practical setting. In particular, the Monte-Carlo AIXI approximation [\[]{}16[\]]{} successfully implemented an AIXI model using the aforementioned $\rho$UCT algorithm. This agent was quite successful, even within a challenge domain (a modified Pac-Man game with $10^{60}$ possible states), with the agent learning several key tactics for the game and consistently improving. This example demonstrated that it is possible to effectively adapt GRL agents to a practical setting, and is the basis for the approximation of AI$\mu$ presented here. Related to the AIXIjs platform is the REINFORCEjs web demo by Karpathy [\[]{}6[\]]{}. This demo implements the Q-Learning [\[]{}17[\]]{} and SARSA [\[]{}10[\]]{} RL methods in a grid world scenario, as well as deep Q-Learning for two continuous state settings. The limitation of this example is its restriction to a small set of environments, with Q-Learning and SARSA being defined only for Markovian environments. These algorithms do not extend to more complicated environments or agents, which is addressed by AIXIjs. Technical Details ================= AI$\mu$ Implementation ----------------------- The agent used for experiments is an MCTS-approximated version of AI$\mu$. By using AI$\mu$, we are removing any potential uncertainty in the agent’s model, which facilitates a more accurate analysis of the effect of discounting.
This agent knows the true environment, so for a fixed discount this implies that its policy $\pi(s)$ will stay the same for any particular state, assuming a Markovian environment. Although we will not be using very large tree depths, enumerating the expectimax by solving equation (1) is not generally feasible. We instead use MCTS to approximate the search tree, specifically the $\rho$UCT algorithm introduced in the background. Although UCT would suffice in our deterministic setting, $\rho$UCT is the default search algorithm incorporated into AIXIjs and as such was used without modification. Agent Plan and Time Inconsistency Heuristic ------------------------------------------- We determine the agent’s plan at time step $k$ by traversing the tree created by $\rho$UCT, first selecting the highest-value decision node and then choosing the corresponding chance node with the most visits. In the case of the environment used here, each decision node has only one chance child, as the environment is deterministic. The process is then repeated up to the maximum horizon reached by the search, and the sequence of actions taken is recorded as the agent’s plan. The plan is recorded as a numeric string representing the sequence of actions the agent plans to take. For example, a recorded plan of 000111 indicates the agent plans to first take action ’0’ three times in a row and then take ’1’ three times. If the action at cycle $k$ is not equal to the action predicted by the plan at time $k-1$, then we consider this time inconsistent. Formally, if the following equation is satisfied then the action at time $k$ is time inconsistent: $$\pi_{\gamma^{k-1}}(S_{k})\neq\pi_{\gamma^{k}}(S_{k})$$ where $\pi_{\gamma^{k}}(S_{k})$ is the policy $\pi$ using discount vector $\gamma^{k}$ in state $S_{k}$ at time $k$, and $\pi_{\gamma^{k-1}}(S_{k})$ is the same policy using the older discount vector $\gamma^{k-1}$. If this is true, then the action will be time inconsistent.
If it does not hold, the action may still be time inconsistent with regard to older plans. This method is used to prevent false positives, as the agent's plan deep in the search tree is often not representative due to the cutoff at the horizon.

Environment Setup ----------------- The environment we use is a deterministic, fully observable, finite-state MDP, represented by figure 1. It is structured to provide a simple means of differentiating myopic and far-sighted agent policies. The idea behind the environment is to give the agent the option of receiving an instant reward at any point, which it will take if it is sufficiently myopic. The other option gives a very large reward only after following a second action for $N$ steps. If the agent is far-sighted enough, it will ignore the low instant reward and plan ahead to reach the very large reward in $N$ time steps. Formally, the agent has two actions from state $S_{i}$: the first takes it to $S_{0}$ and gives a small instant reward $r_{I}$. The other takes the agent to $S_{i+1}$ (where $i\in\mathbb{Z}/(N+1)\mathbb{Z}$) and gives a very low reward $r_{0}<\frac{1}{N}r_{I}$, plus a large reward $r_{L}>Nr_{I}$ for $i=N-1$. In figure 1 the straight lines represent the first action $a_{0}$, while the zig-zag lines represent the second action $a_{1}$.

Experiments =========== Overview -------- In this section we present the experiments for the discount functions, which were conducted using the AIXIjs experiment API mentioned in the background. In particular, we investigate the effect of geometric, hyperbolic, and power discounting on the $\rho$UCT AI$\mu$ model. The environment used was the instance of figure 1 parametrised by $N=6$, $r_{I}=4$, $r_{0}=0$, $r_{L}=1000$. We use average reward as our metric for agent performance. We avoid using total reward because, in this environment, it is monotonically increasing with respect to time; this would affect the scale of the graphs, which could obscure an agent's behaviour.
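As a minimal sketch of this environment (not the AIXIjs implementation; the class name is our own, and the wrap-around from $S_N$ back to $S_1$ follows figure 1 rather than the modular arithmetic above, which is what makes the reward interval $N$ rather than $N+1$):

```python
class DelayedRewardMDP:
    """Sketch of the deterministic MDP of figure 1.

    Action 0 jumps to S_0 for the instant reward r_I; action 1 advances
    S_i -> S_{i+1} for reward r_0, paying r_L on the step entering S_N.
    Following the figure, action 1 from S_N wraps to S_1, so a
    far-sighted agent collects r_L every N steps.
    """

    def __init__(self, N=6, r_I=4, r_0=0, r_L=1000):
        self.N, self.r_I, self.r_0, self.r_L = N, r_I, r_0, r_L
        self.state = 0

    def step(self, action):
        if action == 0:              # bail out to the instant reward
            self.state = 0
            return self.r_I
        if self.state == self.N:     # wrap-around shown in figure 1
            self.state = 1
            return self.r_0
        self.state += 1              # advance along the chain
        return self.r_L if self.state == self.N else self.r_0


# Far-sighted policy: always take action 1. Over 200 cycles with the
# experiment parameters this collects 33 delayed rewards of 1000.
env = DelayedRewardMDP()
total = sum(env.step(1) for _ in range(200))
print(total)  # 33000
```

Note that in this sketch the myopic policy (always action 0) earns $r_I$ on every cycle, totalling 800 over 200 cycles; the 792 reported below includes initial cycles with zero reward.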
We now present the MCTS parameters, after which we detail two specific policies prior to the experiments that comprise the rest of the section. We introduce these policies to avoid unnecessary overlap in the analysis of geometric and hyperbolic discounting, as they displayed very similar behaviours.

[Figure 1: state diagram of the delayed-reward MDP. A self-loop at $S_0$ carries reward $r_I$, and the action-$a_0$ edges from each state lead back to $S_0$; the action-$a_1$ chain $S_0 \to S_1 \to \dots \to S_N$ carries reward $r_0$, with the large reward $r_L$ on the final link into $S_N$, and the chain wraps from $S_N$ back to $S_1$.]

MCTS Parameters ---------------

               Horizon   UCB Parameter   Samples
  ------------ --------- --------------- ---------
  Geometric    10        0.01            10 000
  Hyperbolic   10        0.01            10 000
  Power        7         0.001           100 000

It was necessary to increase the number of samples and lower the exploration constant for power discounting because the discount factor falls off steeply over time, at a rate governed by $\beta$. A high exploration constant would overpower the UCB expression in (5) and result in erratic policies, as there is no clearly better action. Given the large number of samples, it was also necessary to reduce the horizon to limit the depth of the tree; 7 is the minimum horizon required to see far enough into the future to notice the delayed reward.

Far-Sighted Policy ------------------ With reference to figure 1, this policy takes action $a_{1}$ (the zig-zag arrows) at every time step. The total reward for the far-sighted policy over 200 time cycles is 33 000, given a delayed reward of 1000 and a reward interval of 6 time steps.
Figure 3 presents a plot of the average reward of this policy in our environment. [Figure 3: average reward per cycle for the far-sighted policy; the curve climbs toward 165, with spikes every 6 cycles.] The periodic nature of the delayed reward is reflected in the zig-zag shape of the average-reward graph. As this policy consistently takes $a_{1}$, the time between spikes is constant.

Short-Sighted (Myopic) Agent ---------------------------- The second policy takes action $a_{0}$ (the straight arrow in figure 1) at every time step. The total reward for this policy is 792, given an instant reward of 4. Figure 4 presents a plot of the average reward of this policy in our environment.
[Figure 4: average reward per cycle for the myopic policy; the curve rises smoothly from 0 toward 3.96.] The graph reflects the initial reward of 0 as the agent starts off, followed by a constant reward of 4 in every subsequent time cycle.

Geometric Discounting --------------------- We ran experiments varying $\gamma$ in increments of 0.1, from 0.1 to 1.0. In all test runs, the number of time-inconsistent actions reported by our heuristic was 0. We found that for $\gamma\leq0.4$ the agent followed exactly the myopic policy from the previous subsection, receiving a total reward of 792. For $\gamma\geq0.6$ the agent behaved as described in the far-sighted policy subsection, achieving the optimal reward of 33 000. For $\gamma=0.5$, the agent behaved somewhat erratically, occasionally switching between the two policies. As this value lies between the $\gamma$ values that cause strictly far- or short-sighted behaviour, the difference in discounted reward between the two policies is small.
It is therefore likely that the erratic behaviour is caused by the MCTS struggling to find the best decision, given the degree of inaccuracy in the tree search. The plan enumeration gave a consistent plan of 0000000000 for $\gamma\leq0.4$, and varied between 1111100000 and 1111111111 for $\gamma\geq0.6$, where '0' and '1' are shorthand for $a_{0}$ and $a_{1}$ respectively. This variation is due to the horizon cutoff at 10, since at some points the agent cannot see far enough ahead to plan for two delayed rewards.

Hyperbolic Discounting ---------------------- We varied $\kappa$ between 1.0 and 3.0 in increments of 0.2, and kept $\beta$ constant at 1.0. We found that only $\kappa=1.8$ yielded non-zero time-inconsistent actions, with the total number of such actions recorded as 200. For $\kappa\leq1.8$ the agent followed exactly the behaviour of the myopic policy subsection, receiving a total reward of 792. For $\kappa>1.8$ the agent behaved as described in the far-sighted policy subsection, achieving the optimal reward of 33 000. The plan for $\kappa=1.8$ was 0111111000 at every time step. For $\kappa>1.8$ the plan stayed at 1111111111, and $\kappa<1.8$ gave a constant plan of 0000000000. In the interest of reproducibility, the hyperbolic discounting experiments were performed at commit 3911d73 of the repository linked above. The results can also be replicated with more recent and future versions, though the MCTS parameters may need to be adjusted.

Power Discounting ----------------- We used only a single value $\beta=1.01$ in this case. Any change in $\beta$ would result in similar behaviour, with only the length of and time between its behavioural stages changing (hence we need only present results for one $\beta$ value). No time-inconsistent actions were detected for this function. The total reward obtained by the agent was 15 412.
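The three discount regimes can be sketched as follows, assuming the standard forms $\gamma^{t}$ for geometric, $(1+\kappa t)^{-\beta}$ for hyperbolic, and $t^{-\beta}$ for power discounting (the exact forms used by AIXIjs may differ). The growing effective horizon of power discounting shows up in the relative weight of a reward $N$ steps away versus one received now:

```python
def geometric(t, gamma=0.9):
    """Geometric discount weight gamma**t (time consistent)."""
    return gamma ** t

def hyperbolic(t, kappa=1.8, beta=1.0):
    """Hyperbolic discount weight (1 + kappa*t)**(-beta)."""
    return (1.0 + kappa * t) ** (-beta)

def power(t, beta=1.01):
    """Power discount weight t**(-beta), defined for t >= 1."""
    return t ** (-beta)

# Relative weight of a reward N = 6 steps in the future versus a reward
# now, as seen from increasingly late starting times t. The geometric
# ratio is constant, while the power ratio climbs toward 1: the delayed
# reward eventually dominates, matching the agent's switch to
# far-sighted behaviour around cycle 100.
N = 6
for t in (1, 10, 100):
    g = geometric(t + N) / geometric(t)   # always gamma**N
    p = power(t + N) / power(t)           # ((t+N)/t)**(-beta), tends to 1
    print(f"t={t:3d}  geometric={g:.3f}  power={p:.3f}")
```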
[Plot: average reward per cycle for the power-discounting agent; the curve stays near 4 for roughly the first 100 cycles, then climbs in steps toward 77.] The behaviour in this case follows three stages. For around 100 time steps, the agent behaves completely myopically, reflected in the small, continuous rise in the first half of the graph. The discount function then reaches a stage where distant rewards are weighted highly enough that the agent decides to act far-sightedly. For several time steps, the agent collects the delayed reward and then returns to the instant one for a few cycles; the number of cycles it stays there gradually decreases until it strictly follows the far-sighted policy. This can be seen in the graph, as the intervals between peaks are larger from cycles 100-150 than from cycles 150-200, when the agent acts completely far-sightedly.

Discussion ========== With regard to time-inconsistent agent behaviour, the results were consistent with theoretical predictions. Geometric discounting was, for all values of $\gamma$, time consistent as expected.
Somewhat surprisingly, hyperbolic discounting was time consistent for all measured $\kappa$ except 1.8, at which it acted inconsistently at every cycle. The results for power discounting also showed no time-inconsistent actions, which is expected. The hyperbolic agent plan of 0111111000 for $\kappa=1.8$ reflects some interesting behaviour. We can see the agent planning to stay at the instant reward for the next time step and then move off to collect a delayed reward. But since this plan is the same at every time step, the agent continuously stays at the instant reward while planning to take the better long-term action later. In effect, the agent is eternally procrastinating. The fact that this behaviour can be induced with this function supports the claim that hyperbolic discounting can model certain irrational human behaviour. We note that the trailing 0s are an artifact of the horizon (only 10) being too short to see far enough ahead to notice another delayed reward. The results of power discounting clearly demonstrate how a growing effective horizon can affect an agent's policy. Initially the agent is too short-sighted to collect the delayed reward, but over time this reward becomes more heavily weighted relative to the instant reward. After some time the agent starts to collect the delayed reward, and soon it is fixed on a far-sighted policy. This shows that a growing effective horizon can cause an agent to collect distant rewards only after some time has passed, which again reflects what is theoretically predicted. As new results continue to be proven for GRL, an avenue for future work would be to demonstrate those results in a fashion similar to the work presented here. Our contributions to the AIXIjs framework allow this to be done easily for results pertaining to agent discounting. Other future work would be the development of practical GRL systems which extend beyond toy environments and can be used for non-trivial tasks.
Summary ======= We have adapted the platform AIXIjs to include arbitrary discount functions. Using this, we were able to isolate time-inconsistent behaviour and illustrate the effect of the discount function on an agent's far-sightedness. We showed that power discounting can be used in a concrete setting to observe the impact of a growing effective horizon, which influenced the time at which an agent chose to collect distant rewards. We also demonstrated that hyperbolic discounting can induce procrastinating behaviour in an agent. Our framework now permits a larger class of experiments and demos with general discounting, which will be useful for future research on the topic.

[1] J. Aslanides, 2016.
[2] J. Aslanides, M. Hutter, and J. Leike, 2016.
[3] R. Bellman. Dynamic programming, 1957.
[4] M. Hutter. A theory of universal artificial intelligence based on algorithmic complexity, 2000.
[5] M. Hutter. Universal artificial intelligence: Sequential decisions based on algorithmic probability, 2005.
[6] A. Karpathy. REINFORCEjs, 2015. https://cs.stanford.edu/people/karpathy/reinforcejs/index.html.
[7] L. Kocsis and C. Szepesvari. Bandit based Monte-Carlo planning, 2006.
[8] T. Lattimore and M. Hutter. General time consistent discounting, 2014.
[9] J. Leike, 2015. https://jan.leike.name/AIXI.html.
[10] G. A. Rummery and M. Niranjan. On-line Q-learning using connectionist systems, 1994.
[11] P. Samuelson. A note on measurement of utility, 1937.
[12] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, N. Kalchbrenner, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of Go with deep neural networks and tree search, 2016.
[13] R. Sutton. Learning to predict by the methods of temporal differences, 1988.
[14] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
[15] R. Thaler. Some empirical evidence on dynamic inconsistency, 1981.
[16] J. Veness, M. Hutter, W. Uther, D. Silver, and K. S. Ng. A Monte-Carlo AIXI approximation, 2011.
[17] C. Watkins and P. Dayan. Q-learning, 1992.

[^1]: sean.a.lamont@outlook.com [^2]: john.stewart.aslanides@gmail.com [^3]: leike@google.com [^4]: marcus.hutter@anu.edu.au [^5]: For a thorough introduction to AIXIjs, see [aslanides.io/docs/masters\_thesis.pdf](aslanides.io/docs/masters_thesis.pdf)
--- abstract: 'We present a new study investigating whether active galactic nuclei (AGN) beyond the local universe are preferentially fed via large-scale bars. Our investigation combines data from [*Chandra*]{} and Galaxy Zoo: [*Hubble*]{} (GZH) in the AEGIS, COSMOS, and GOODS-S surveys to create samples of face-on, disc galaxies at $0.2 < z < 1.0$. We use a novel method to robustly compare a sample of 120 AGN host galaxies, defined to have $10^{42} ~{\rm erg~s^{-1}} < L_{\rm X} < 10^{44} ~\rm erg~s^{-1}$, with inactive control galaxies matched in stellar mass, rest-frame colour, size, Sérsic index, and redshift. Using the GZH bar classifications of each sample, we demonstrate that AGN hosts show no statistically significant enhancement in bar fraction or average bar likelihood compared to closely-matched inactive galaxies. In detail, we find that the AGN bar fraction cannot be enhanced above the control bar fraction by more than a factor of two, at 99.7% confidence. We similarly find no significant difference in the AGN fraction among barred and non-barred galaxies. Thus we find no compelling evidence that large-scale bars directly fuel AGN at $0.2<z<1.0$. This result, coupled with previous results at $z=0$, implies that moderate-luminosity AGN have not been preferentially fed by large-scale bars since $z=1$. Furthermore, given the low bar fractions at $z>1$, our findings suggest that large-scale bars have likely never directly been a dominant fueling mechanism for supermassive black hole growth.'
author: - | \ $^{1}$Department of Astronomy and Astrophysics, 1156 High Street, University of California, Santa Cruz, CA 95064\ $^{2}$Kavli Institute for the Physics and Mathematics of the Universe (WPI), Todai Institutes for Advanced Study, The University of Tokyo\ $^{3}$Department of Astronomy and Astrophysics, 525 Davey Lab, Penn State University, University Park, PA 16802\ $^{4}$Aix Marseille Université, CNRS, LAM (Laboratoire d’Astrophysique de Marseille) UMR 7326, 13388, Marseille, France\ $^{5}$School of Physics and Astronomy, The University of Nottingham, University Park, Nottingham NG7 2RD, UK\ $^{6}$Department of Astronomy, University of Michigan, 500 Church St., Ann Arbor, MI 48109, USA\ $^{7}$The Harriet W. Sheridan Center for Teaching and Learning, Brown University, Box 1912, 96 Waterman Street, Providence, RI 02912, USA\ $^{8}$Institut de Ciències del Cosmos, Universitat de Barcelona (UB-IEEC), Martí i Franquès 1, E-08028 Barcelona, Spain\ $^{9}$UCO/Lick Observatory, Department of Astronomy and Astrophysics, University of California, 1156 High Street, Santa Cruz, CA 95064\ $^{10}$Minnesota Institute for Astrophysics, School of Physics and Astronomy, University of Minnesota, MN 55455, USA\ $^{11}$Department of Physics and Astronomy, University of Kentucky, Lexington, KY 40506, USA\ $^{12}$Spitzer Science Center, MS 314-6, California Institute of Technology, 1200 East California Blvd, Pasadena, CA 91125, USA\ $^{13}$Oxford Astrophysics, Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH\ $^{14}$Astronomy Department, Adler Planetarium and Astronomy Museum, 1300 Lake Shore Drive, Chicago, IL 60605, USA\ $^{15}$Institute of Cosmology & Gravitation, University of Portsmouth, Dennis Sciama Building, Portsmouth, PO1 3FX, UK\ $^{16}$SEPnet, South East Physics Network\ $^{17}$Institute for Astronomy, Department of Physics, ETH Zurich, Wolfgang-Pauli-Strasse 27, CH-8093 Zurich, Switzerland\ $^{18}$Hubble Fellow\ title:
'Galaxy Zoo: Are Bars Responsible for the Feeding of Active Galactic Nuclei at $0.2<z<1.0$?[^1] ' --- \[firstpage\] galaxies: general — galaxies: structure — galaxies: Seyfert — galaxies: evolution Introduction {#sec:introduction} ============ Most simulations of galaxy evolution require some kind of feedback that correlates with bulge mass (and is often assumed to be active galactic nucleus \[AGN\] feedback) to reproduce key observations, such as the colour bimodality of galaxies [e.g., @springel05; @croton06; @cimatti13]. Yet, the mechanism that funnels gas toward the central supermassive black hole that powers the AGN is still unknown (e.g., @hopkins06, @hopkins11, @hopkins13; see @fabian12, @kormendy13, and @heckman14 for recent reviews). Major mergers are often cited as a key trigger for AGN activity [@sanders88; @barnes91; @mihos96; @dimatteo05; @hopkins05a; @hopkins05b]. Although major mergers seem to drive the most luminous and rapidly accreting AGN [@sanders88; @koss10; @kartaltepe10; @treister12; @hopkins13; @trump13], low- to moderate-luminosity AGN, which make up the majority of AGN by number, seem to be fueled by processes that do not visibly disturb the discy structure of galaxies [@schawinski10; @cisternas11; @schawinski11; @simmons12; @kocevski12; @schawinski12; @simmons13]. An obvious process that satisfies this constraint is secular evolution [@kormendy79; @martinet95; @kk04; @athanassoula13a; @sellwood14]. A major driver of secular evolution in disc galaxies is large-scale bars[^2], and they are predicted to affect galaxies in a variety of ways, including the fueling of AGN [@simkin80; @noguchi88; @shlosman89; @shlosman90; @wada92]. 
The non-axisymmetric potential of a bar is predicted to funnel interstellar gas into the central kpc [@athanassoula92]—which has been confirmed by multiple observational works [@regan95; @regan99a; @sakamoto99; @sheth00; @sheth02; @sheth05; @zurita04]—where a possible nested, secondary bar may further funnel gas to the inner $\sim10$ pc. From this distance, cloud-cloud collisions may lead to inflows onto the AGN accretion disc. Collectively, this scenario is known as “bars within bars” [@shlosman89; @shlosman90; @hopkins10; @hopkins11]. Observations at low redshift, however, find no excess of primary bars in active galaxies (@ho97 [@mulchaey97; @malkan98; @hunt99; @regan99b; @martini99; @erwin02; @martini03; @lee12b; @cisternas13], but see @knapen00 [@laine02; @laurikainen04; @oh12; @alonso13]; Galloway et al. 2014, in preparation). There is also no direct correlation between primary bars and secondary bars, with $\sim30\%$ of all disc galaxies having a secondary bar [@mulchaey97; @regan99b; @martini99; @erwin02; @laine02]. These results indicate that bars—both primary and secondary bars—may not fuel AGN. Almost all previous observational work on the bar-AGN connection has been limited to the local universe, where the number density of AGN is low. Thus a compelling link between bars and AGN might still be found at *earlier* epochs, when the number density of AGN is higher [@ueda03; @silverman08; @aird10]. In this work we focus on galaxies at $0.2<z<1.0$. The upper limit of $z=1$ is based on [@melvin14], who show that bars are detectable out to $z=1$ with the [*Hubble Space Telescope*]{} Advanced Camera for Surveys ([*HST*]{}/ACS). We describe the data in §2. Our sample selection, and in particular, our selection of control samples of inactive galaxies is detailed in §3. §4 presents our main result that there is no statistically significant excess of bars in AGN hosts. We discuss the implications of our results in §5. Conclusions follow in §6. 
Throughout this paper, we assume a flat cosmological model with $H_{0} = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{m} = 0.30$, and $\Omega_{\Lambda} = 0.70$, and all magnitudes are given in the AB magnitude system.

Data {#sec:data} ==== In this section, we briefly describe the three surveys and their respective data products that are used in this paper. We also describe Galaxy Zoo: [*Hubble*]{}, which uses high-resolution [*HST*]{}/ACS imaging to accurately visually classify galaxies. Thus this paper focuses only on the areas of these surveys that have [*HST*]{}/ACS imaging. A summary of the three surveys is presented in Table \[tab:survey\_summary\].

                                        AEGIS    COSMOS   GOODS-S
  ------------------------------------ -------- -------- ---------
  Area (deg$^2$)                       0.197    1.8      0.07
  [*HST*]{}/ACS Exp Time (s)[^3]       2180     2028     2223
  Pixel Scale (arcsec pixel$^{-1}$)    0.03     0.05     0.03
  PSF FWHM (arcsec)                    0.120    0.090    0.125

  : Survey Summary[]{data-label="tab:survey_summary"}

AEGIS {#sub:aegis_data} ----- The All-wavelength Extended Groth strip International Survey (AEGIS; @davis07) is an international collaboration that produced one of the most comprehensive multi-wavelength data sets currently available. This data set includes [*HST*]{}/ACS imaging, which is centered on the EGS region and is composed of 63 pointings in both the F606W ($V$) and the F814W ($I$) filters. The final images have a pixel scale of $0\farcs03\rm ~pixel^{-1}$ and a point-spread function (PSF) of $0\farcs12$ FWHM. The [*HST*]{}/ACS images cover a total area of $\sim710~\rm arcmin^2$. The multi-wavelength coverage of AEGIS also includes $Chandra$ ACIS-I [@garmire03] $X$-ray observations that have a nominal exposure of 800 ks [@nandra05; @georgakakis06; @laird09]. The spectroscopic redshifts ($z$) of AEGIS are from the DEEP2 and DEEP3 redshift surveys [@davis03; @newman13; @cooper11; @cooper12], which used the DEIMOS spectrograph [@faber03] on the Keck II telescope.
Spectroscopic redshifts with a quality code of 3 or 4 are considered secure; we consider only these spectroscopic redshifts throughout this paper. Measurements of AEGIS galaxy properties are from [@cheung12], who compiled galaxy measurements from several different sources. Stellar masses, $M_*$, are from [@huang13], and are estimated by fitting the multi-wavelength AEGIS photometry to a grid of synthetic SEDs from [@Bruzual03], assuming a [@salpeter55] IMF and solar metallicity. These synthetic SEDs span a range of ages, dust content, and exponentially declining star formation histories (SFHs). Rest-frame $U-B$ colours are obtained through the $k$-correct v4.2 code [@blanton07] with CFHT $BRI$ photometry and spectroscopic redshifts as inputs. Structural parameters such as the global Sérsic index ($n$), effective radius ($r_{\rm e}$), and axis ratio ($b/a$) are measured with GIM2D through a single Sérsic fit to the [*HST*]{}/ACS $V$ and $I$ images [@simard02].

COSMOS {#sub:cosmos_data} ------ The Cosmological Evolution Survey (COSMOS; @scoville07 [@koekemoer07]) is the largest contiguous [*HST*]{}/ACS imaging survey to date, covering $\sim1.8~\rm deg^2$ in the F814W ($I$) band and consisting of 590 pointings. The final images have a pixel scale of $0\farcs05\rm ~pixel^{-1}$ and a point-spread function (PSF) of $0\farcs09$ FWHM. In addition to the [*HST*]{}/ACS coverage, COSMOS also includes $Chandra$ ACIS-I observations that cover the central part of the COSMOS field with four pointings, each with a nominal exposure of 200 ks [@elvis09]. We use the $Chandra$ COSMOS catalog as described in [@civano12]. The spectroscopic $z$’s of COSMOS are mainly from zCOSMOS [@lilly09]. Supplemental spectroscopic $z$’s are from the $Chandra$ COSMOS survey [e.g., from @trump09].
We only consider spectroscopic $z$’s that are deemed secure by these surveys; e.g., for zCOSMOS, we only consider $z$’s with confidence class of 3.x, 4.x, 1.5, 2.4, 2.5, 9.3, 9.5, 13.x, 14.x, 23.x, and 24.x. Measurements of COSMOS galaxy properties are from a variety of sources. Stellar masses and rest-frame $U-V$ colours are from the UltraVISTA survey [@muzzin13]. The $M_*$’s are estimated using FAST [@kriek09] to fit the galaxy SEDs to [@Bruzual03] models, assuming solar metallicity, a [@chabrier03] IMF, a [@calzetti00] dust extinction law, and exponentially declining SFHs. The rest-frame $U-V$ colours are estimated by using EAZY [@brammer08] to integrate the best-fit SED through the redshifted filter curves over the appropriate wavelength range. The structural parameters of COSMOS galaxies, i.e., $n$, $r_{\rm e}$, and $b/a$, are provided by the ACS-GC catalog [@griffith12]; they used GALFIT [@peng02] to fit a single Sérsic profile on the [*HST*]{}/ACS $I$ images.

GOODS-S {#sub:goodss_data}
-------

The Great Observatories Origins Deep Survey (GOODS; @dickinson03 [@giavalisco04; @rix04]) is a deep multiwavelength survey that includes the deepest [*HST*]{} images to date. The GOODS survey targeted two separate fields, the [*Hubble*]{} Deep-Field North (HDF-N; now referred to as GOODS-N) and the $Chandra$ Deep-Field South (CDF-S; now referred to as GOODS-S). We only use GOODS-S in this paper. The [*HST*]{}/ACS imaging of GOODS-S was carried out in several bands, of which we are only interested in two – F606W ($V$) and F850LP ($z$). The imaging comprises 15 pointings; the final images have a pixel scale of $0\farcs03\rm ~pixel^{-1}$ and a PSF of $0\farcs125$ FWHM. The [*HST*]{}/ACS imaging of GOODS-S covers a total area of $\sim160~\rm arcmin^2$. In addition to containing the deepest [*Hubble*]{} images to date, GOODS-S also contains the deepest $Chandra$ observations to date.
The 4 Ms CDF-S Survey [@luo08; @xue11] comprises 54 $Chandra$ ACIS-I observations taken over three $Chandra$ observing cycles in 2000, 2007, and 2010. We use the catalog presented by [@xue11]. The spectroscopic $z$’s of GOODS-S come from a variety of sources, many of which are listed in Table 2 of [@griffith12]. We only consider redshifts of the highest quality ($\ge3$). Stellar masses and rest-frame $U-B$ colours of GOODS-S galaxies are from the CANDELS survey [@koekemoer11; @grogin11; @barro11a; @guo13; @williams14]. The $M_*$’s are estimated with the FAST code by fitting galaxy SEDs based on optical to infrared photometry to models of [@Bruzual03], assuming a [@chabrier03] IMF. They also assumed a [@calzetti00] extinction law, solar metallicity, and exponentially declining SFHs. Rest-frame $U-B$ colours are estimated with the EAZY code by fitting galaxy SEDs to the templates from [@muzzin13]. The structural parameters of GOODS-S galaxies, i.e., $n$, $r_{\rm e}$, and $b/a$, are provided by the ACS-GC catalog [@griffith12]; they used GALFIT [@peng02] to fit a single Sérsic profile on the [*HST*]{}/ACS $V$ and $z$ images.

Calculating $L_{\rm X}$
-----------------------

To identify AGN, we use full-band (0.5-10 keV for AEGIS and COSMOS, 0.5-8 keV for GOODS-S) [*X*]{}-ray fluxes from [*Chandra*]{} observations described by [@laird09], [@civano12], and [@xue11] for AEGIS, COSMOS, and GOODS-S, respectively. We calculate [*X*]{}-ray luminosities using the equation $L_{\rm X} = 4\pi d^2_{L}f_{x}(1+z)^{\Gamma -2}$, where $d_{L}$ is the luminosity distance, $z$ is the redshift, $f_{x}$ is the observed flux, and $\Gamma$ is the power-law photon index. We set $\Gamma=1.8$, a typical photon index for intrinsic AGN spectra. In §\[sub:agn\_selection\], we select AGN using these [*X*]{}-ray luminosities.
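The conversion above can be sketched numerically. The following minimal Python (an illustration, not code from the paper) adopts the flat $\Lambda$CDM parameters of §1 and evaluates $d_L$ by direct integration:

```python
import math

# Flat LambdaCDM adopted throughout the paper: H0 = 70 km/s/Mpc, Omega_m = 0.30.
H0 = 70.0                 # km s^-1 Mpc^-1
OMEGA_M = 0.30
C_KM_S = 2.99792458e5     # speed of light, km s^-1
CM_PER_MPC = 3.0857e24

def luminosity_distance_cm(z, steps=10000):
    """d_L in cm for a flat universe, by trapezoidal integration of 1/E(z')."""
    inv_e = lambda zp: 1.0 / math.sqrt(OMEGA_M * (1.0 + zp) ** 3 + (1.0 - OMEGA_M))
    dz = z / steps
    integral = 0.5 * (inv_e(0.0) + inv_e(z)) + sum(inv_e(i * dz) for i in range(1, steps))
    comoving_mpc = (C_KM_S / H0) * integral * dz
    return (1.0 + z) * comoving_mpc * CM_PER_MPC

def l_x(flux, z, gamma=1.8):
    """L_X = 4 pi d_L^2 f_x (1+z)^(gamma-2) in erg/s, for flux in erg/s/cm^2."""
    return 4.0 * math.pi * luminosity_distance_cm(z) ** 2 * flux * (1.0 + z) ** (gamma - 2.0)

# e.g., a full-band flux of 1e-15 erg s^-1 cm^-2 at z = 0.7 lands in the
# moderate-luminosity AGN range used for the selection below
lx = l_x(1e-15, 0.7)
```

The $(1+z)^{\Gamma-2}$ factor is the $k$-correction for a power-law spectrum; with $\Gamma=1.8$ it is a mild downward correction at these redshifts.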
![image](agnbar_histograms.eps)

![image](agnbar_gallery.eps){width="\textwidth"}

Galaxy Zoo: [*Hubble*]{} {#sub:galaxyzoohubble_data}
------------------------

Our work relies on bar identifications from the Galaxy Zoo: [*Hubble*]{} citizen science project (GZH; @melvin14). Volunteers were asked to visually classify the morphologies of galaxies at $z\sim1$ based on [*HST*]{}/ACS optical imaging from the surveys listed above. Like the previous Galaxy Zoo project, Galaxy Zoo 2 [@willett13], GZH used a decision tree with multiple branches and nested, dependent questions.[^4] These questions include [*“Is there a sign of a bar feature through the centre of the galaxy?”*]{}, which can only be reached if a volunteer identifies some type of feature (e.g., clumps, spiral arms, rings, bars) or a disc within a galaxy. Thus a galaxy must have a feature or a disc in order to be classified as barred. Each galaxy is classified by at least 33 volunteers, with the median number of volunteers per galaxy being 47. These classifications produce vote percentages that we refer to throughout as “likelihoods”; e.g., if 25 out of 50 volunteers classified a galaxy as having a bar, then the bar likelihood is $p_{\rm bar}=0.5$, modulo small corrections to downweight consistently unreliable classifiers (following the procedure explained in @willett13). GZH only classifies galaxies brighter than the following magnitudes: for AEGIS and COSMOS, [*HST*]{}/ACS F814W $< 23.5$ (AB) magnitude, and for GOODS-S, [*HST*]{}/ACS F850LP $< 23.5$ (AB) magnitude.

### Selecting Barred Galaxies {#sub:bar_selection}

We consider galaxies to be barred if they have bar likelihoods of at least 0.5 ($p_{\rm bar}\ge0.5$) and no obvious dust lanes[^5] ($p_{\rm dust~lane}<0.5$). The bar threshold of $p_{\rm bar}=0.5$ is based on previous Galaxy Zoo works that have shown it to be a reliable indicator of strong bar features [@masters11; @masters12; @willett13; @melvin14].
To calculate the bar fraction, $f_{\rm bar}$, one can divide the total number of barred galaxies by the total number of disc galaxies in the sample. Varying the bar likelihood threshold between $0.3 \leq p_{\rm bar} \leq 0.6$ does not change our qualitative conclusions. In our analysis, we also use the average bar likelihood, $\overline{p}_{\rm bar}$, defined as the average of all the bar likelihoods of a sample of galaxies. This parameter is another measure of bar presence and has been used by other studies [e.g., @skibba12; @casteels13; @cheung13]. Sample Selection {#sec:selection} ================ Since our study seeks to determine if AGN activity is linked with bars, we first construct a parent sample of face-on, disc-dominated objects whose bars can be robustly identified (§\[sub:disc\_selection\]). We then select AGN-hosting galaxies from this parent sample based on their [*X*]{}-ray luminosity (§\[sub:agn\_selection\]). Finally, we construct samples of inactive control galaxies that are matched to the AGN galaxies (§\[sub:control\_selection\]). The number counts of these samples are listed in Table \[tab:sample\_counts\]. Face-on Disc Selection {#sub:disc_selection} ---------------------- The face-on disc samples for each field are defined by the following criteria: 1. $0.2<z<1.0$ – To obtain the most accurate [*X*]{}-ray luminosities and to identify broad-line AGN (which may contaminate their host galaxy measurements), we choose only galaxies with secure spectroscopic redshifts. Although the ability to identify a bar is not uniform over this redshift range, our robust matching of AGN and inactive control galaxies ensures that the two samples have the same distributions of completeness for bar detection: see §\[sub:redshift\_effects\]. 2. $b/a > 0.5$ – Since bars in highly inclined galaxies are difficult to identify, we exclude edge-on galaxies with global axis ratios less than or equal to 0.5. 3. 
$r_{\rm e} > 8 ~\rm pixels$ – Selecting galaxies with $r_{\rm e}$ larger than 8 pixels, which corresponds to about twice the FWHM of the [*HST*]{}/ACS PSF, ensures that any bars with semimajor axes $\gtrsim3$ kpc will be identified. Since the typical bar lengths in the local universe are $2-7$ kpc [@erwin05; @gadotti11; @hoyle11], we should be able to detect most large-scale barred galaxies, assuming bars at $z>0$ are similar to bars at $z\sim0$. 4. $N_{\rm Bar ~question}/N_{\rm Total} \ge 0.15$ – In order to answer the bar question, the GZH decision tree requires a volunteer to classify a galaxy as displaying some kind of feature or disc. Thus demanding that at least $15\%$ of a galaxy’s classifiers answer the bar question results in an effective selection for disc galaxies. Although the number of AGN is sensitive to the exact $N_{\rm Bar ~question}/N_{\rm Total}$ threshold, our qualitative conclusions are not, e.g., requiring $N_{\rm Bar ~question}/N_{\rm Total} \ge 0.75$ does not change our ultimate conclusion. Moreover, the majority of our $N_{\rm Bar ~question}/N_{\rm Total} \ge 0.15$ sample has more than ten bar classifications, which is more than in most visual bar classification studies [e.g., @nair10b; @lee12a]. 5. $p_{\rm merge} < 0.65$ – In order to separate out the effects of mergers from our analysis, we choose non-interacting galaxies by requiring a merging likelihood ($p_{\rm merge}$) less than 0.65. This criterion mirrors that of [@melvin14] and eliminates a small fraction of our sample. Discarding this criterion does not affect our conclusion.

![image](lx_redshift_all.eps)

AGN Selection {#sub:agn_selection}
-------------

Out of these face-on disc samples (one from each survey), we select AGN hosts with [*X*]{}-ray luminosities $10^{42} ~{\rm erg~s^{-1}} < L_{\rm X} < 10^{44} ~\rm erg~s^{-1}$.
The lower limit removes starburst galaxies with weak [*X*]{}-ray emission [@bauer02], and the upper limit excludes quasars that may have optical point sources which would contaminate the measurements of their host galaxies [@silverman08b]. We also discard luminous unobscured AGN, which might likewise contaminate the measurements of their host galaxies. In AEGIS and COSMOS, we use broad emission lines to identify such AGN since they dominate the optical spectra of their host galaxies. There are zero broad-line AGN in AEGIS and five broad-line AGN in COSMOS, which we reject from our sample. GOODS-S lacks a public catalog of broad-line AGN, and so we instead use low [*X*]{}-ray hardness (HR $<-0.3$; @mainieri07) as a proxy for unobscured AGN: this results in the rejection of one AGN. The rejection or inclusion of these six potential-contaminant AGN does not affect our qualitative conclusions. Table \[tab:sample\_counts\] lists the final AGN count for each survey.

Control Selection {#sub:control_selection}
-----------------

For each AGN host galaxy, we select three unique, non-AGN control galaxies from the *same* survey (AEGIS, COSMOS, or GOODS-S) that are matched in $M_*$, $U-B$[^6], $n$, $r_{\rm e}$, and $z$. These parameters have been shown to correlate with both AGN presence and bar presence [e.g., @kauffmann03; @nandra07; @schawinski09; @masters11; @cheung13] and thus must be controlled for in order to uncover any underlying bar-AGN connection. For a given AGN host galaxy, we first select a pool of control galaxies satisfying the following conditions:

- $\lvert \log ~M_{*, \rm AGN}/M_* \rvert < 0.35$

- $\lvert (U - B)_{\rm AGN} - (U - B) \rvert < 0.4$

- $\lvert \log ~r_{\rm e, AGN}/r_{\rm e} \rvert < 0.48$

- $\lvert n_{\rm AGN} - n \rvert < 2.0$

- $\lvert z_{\rm AGN} - z \rvert < 0.4$

These limits are tuned in order to find enough control galaxies for each AGN.
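As a concrete illustration, the matching windows above can be written as a simple filter. The record fields here (`mstar`, `ub`, `re`, `n`, `z`, `is_agn`) are hypothetical, not a schema from any of the survey catalogues:

```python
import math

# The five matching windows, expressed as a filter. Galaxy records are
# hypothetical dicts with keys: mstar (log10 M*/Msun), ub (rest-frame U-B),
# re (effective radius), n (Sersic index), z (redshift), and is_agn.
def control_pool(agn, galaxies):
    """Return all inactive galaxies that fall inside every matching window."""
    pool = []
    for g in galaxies:
        if g["is_agn"]:
            continue
        if (abs(agn["mstar"] - g["mstar"]) < 0.35              # dex in M*
                and abs(agn["ub"] - g["ub"]) < 0.4             # mag in U-B
                and abs(math.log10(agn["re"] / g["re"])) < 0.48
                and abs(agn["n"] - g["n"]) < 2.0
                and abs(agn["z"] - g["z"]) < 0.4):
            pool.append(g)
    return pool
```

Storing stellar masses in dex turns the $\lvert\log M_{*,\rm AGN}/M_*\rvert$ window into a plain difference; the size window keeps the explicit log-ratio.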
Our conclusions are not sensitive to the exact limits, e.g., reducing these limits by $50\%$ does not change the conclusions. With this initial pool of control galaxies, we perform a 5-stage matching process that iteratively reduces the pool until it reaches a final set of three unique and matched control galaxies. The first stage cuts the initial pool of control galaxies to the 15 closest matched galaxies in one of the matching parameters, i.e., $M_{*, \rm AGN}$, $U-B_{\rm AGN}$, $n_{\rm AGN}$, $r_{\rm e, AGN}$, or $z_{\rm AGN}$. Each successive stage matches the remaining control galaxies to one of the unused matching parameters and eliminates the three worst matched galaxies. Thus the second stage reduces the pool to 12, the third stage reduces the pool to 9, the fourth reduces the pool to 6, and finally, the fifth stage reduces the pool to 3. Ultimately, for each survey we have a control sample that contains no duplicates and is three times larger than the AGN sample (see Table \[tab:sample\_counts\]). This 5-stage matching process was performed for each AGN host in our sample. However, there were four AGN—one from AEGIS, one from COSMOS, and two from GOODS-S—that were discarded due to a lack of control galaxies for the AGN host. These host galaxies have abnormally high $n$ for their rest-frame colours and/or $M_*$ compared to the pool of control galaxies. The AGN and control samples that this matching technique produces are affected by the order in which we match the AGN hosts to the control galaxies and the order in which the matching parameters are used. In order to adequately sample the parameter space, we repeat this 5-stage matching technique 100 times for each survey, with each iteration randomly shuffling the order of the AGN hosts, the order of the control galaxies, and the order of the matching parameters. Ultimately, we generate 100 AGN samples and 100 control samples for each survey. 
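One realization of the staged elimination described above can be sketched as follows. This is an illustration under stated assumptions (galaxies as dicts keyed by the five matching parameters, mass and size stored in dex so each mismatch is a plain absolute difference), not the paper's actual code:

```python
import random

# A sketch of one realization of the 5-stage matching for a single AGN host.
PARAMS = ["mstar", "ub", "n", "re", "z"]

def match_controls(agn, pool, params=PARAMS, rng=random):
    order = list(params)
    rng.shuffle(order)                       # parameter order is randomized
    # Stage 1: keep the 15 closest-matched galaxies in the first parameter.
    survivors = sorted(pool, key=lambda g: abs(agn[order[0]] - g[order[0]]))[:15]
    # Stages 2-5: in each remaining parameter, drop the three worst-matched.
    for p in order[1:]:
        survivors.sort(key=lambda g: abs(agn[p] - g[p]))
        survivors = survivors[:-3]           # 15 -> 12 -> 9 -> 6 -> 3
    return survivors
```

In the paper's procedure this is repeated 100 times per survey, shuffling the order of AGN hosts, control galaxies, and matching parameters on each iteration, and enforcing uniqueness of the selected controls across AGN.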
For brevity and clarity, we define an “AGN-control sample” to be all AGN hosts and their corresponding control galaxies for a given survey and for a given matching iteration. We use the median counts and the resulting $f_{\rm bar}$ and $\overline{p}_{\rm bar}$ of the 100 AGN-control samples in presenting our results (see Table \[tab:sample\_counts\]). To demonstrate the quality of our matching procedure, Fig. \[fig:agn\_controls\] presents stacked histograms of the distribution of parameter differences between the AGN hosts and their matched control galaxies. Each panel stacks 100 translucent histograms, with each histogram representing one AGN-control sample; that is, each histogram shows the differences in one parameter between each AGN and its three control galaxies for a given realization. The most heavily shaded regions, which represent the most populated parameter space, are generally centred on 0 with small spreads, as quantified by the mean and standard deviation at the upper left of each panel; this indicates that our matching technique works well. Moreover, we also calculated the two-sample Kolmogorov-Smirnov (K-S) null probability for each pair of AGN-control parameter distributions, where small values indicate that the two distributions in question are probably not drawn from the same underlying distribution. We display the median K-S null probability, P$_{\rm K-S}$, of all 100 pairs of AGN-control distributions in each panel; most show high values, indicating that the AGN and control samples are consistent. However, the histograms of GOODS-S are slightly broader and more skewed than those of AEGIS and COSMOS, especially in Sérsic index and redshift, as reflected in P$_{\rm K-S}$. The relatively small sample size of GOODS-S (see Table \[tab:sample\_counts\]) makes it difficult to identify well-matched control galaxies for the GOODS-S AGN hosts.
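A K-S comparison of this kind can be reproduced with any standard statistics library. The self-contained sketch below implements the two-sample statistic with the usual asymptotic p-value series (Numerical Recipes style); it is an approximation for illustration, not necessarily the exact routine behind the quoted P$_{\rm K-S}$ values:

```python
import math

# Self-contained two-sample K-S test, of the kind used to compare AGN and
# control parameter distributions.
def ks_two_sample(a, b):
    a, b = sorted(a), sorted(b)
    n1, n2 = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < n1 and j < n2:
        x1, x2 = a[i], b[j]
        if x1 <= x2:
            i += 1
        if x2 <= x1:
            j += 1
        d = max(d, abs(i / n1 - j / n2))    # max separation of empirical CDFs
    en = math.sqrt(n1 * n2 / (n1 + n2))
    lam = (en + 0.12 + 0.11 / en) * d
    if lam < 1e-8:                          # identical samples
        return d, 1.0
    p = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * lam * lam)
                  for k in range(1, 101))
    return d, min(max(p, 0.0), 1.0)
```

A large returned p (the null probability) means the two samples are consistent with a common parent distribution, which is the sense in which high P$_{\rm K-S}$ values support the matching.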
However, §\[sec:results\] shows that the results from the GOODS-S sample are consistent with those of AEGIS and COSMOS, indicating that the skewness of the GOODS-S AGN-control samples does not bias our analysis. ![image](bar_agn_frac_all.eps) To further illustrate the quality of our matching technique, we show images of two matched sets of AGN-control galaxies from each survey in Fig. \[fig:agnbar\_gallery\]. Each set of AGN-control galaxies is reassuringly similar in appearance, confirming that our matching technique is reasonable. The [*X*]{}-ray luminosity-redshift distribution of all [*X*]{}-ray sources in our chosen redshift range, i.e., from the first row of Table \[tab:sample\_counts\] labelled “$0.2<z<1.0$”, is shown in Fig. \[fig:lx\_redshift\_all\]. The dashed horizontal lines define our AGN selection. The black points encircled in green represent the AGN that satisfy our face-on disc criteria, and hence are eligible to undergo our matching process. The red points encircled in green represent the face-on disc AGN that have enough control galaxies, and thus are in our AGN samples. Fig. \[fig:lx\_redshift\_all\] shows that our AGN samples span most of $L_{\rm X}$-$z$ space. It also shows that the number of moderate-luminosity AGN increases with $z$, a well-known behaviour that has been shown by previous works [e.g., @ueda03; @silverman08; @aird10]. Redshift Effects on Bar Detection {#sub:redshift_effects} --------------------------------- Although redshift effects (e.g., cosmological surface brightness dimming, angular size change, band-shifting) hinder bar detection, our experiment is a *relative* comparison between two matched samples that controls for redshift and several of the most correlated parameters of bar presence (e.g., stellar mass, colour, and Sérsic index; @nair10b [@masters11; @lee12a; @cheung13]), so our study naturally takes this bias into account. 
Assuming that there are no AGN-dependent selection effects, our experiment should be robust against any known bar detection biases. As an additional check, we tested for differential redshift effects by splitting our sample into two redshift intervals, $0.20<z<0.84$ and $0.84<z<1.00$, and repeating our analysis. We chose these $z$ intervals because [@sheth08] argued that the ability to detect bars at $z>0.84$ is hampered by the overlap of the [*HST*]{}/ACS $I$ band with the rest-frame near-UV, where clumpy star formation can hide smooth bar structures. However, as pointed out by [@melvin14], the majority of the light gathered in the [*HST*]{}/ACS $I$ band at $0.84 < z < 1.00$ is from the rest-frame optical, and it is only beyond $z\sim1$ that this filter becomes dominated by the rest-frame UV, where bar detection may be hindered. Therefore we do not expect a reduction in our ability to detect bars above $z=0.84$. Repeating our analysis for these two $z$ intervals corroborates our expectations: there is no statistically significant ($<3\sigma$) difference between the AGN sample’s bar fraction and the control sample’s bar fraction at either $0.20<z<0.84$ or $0.84<z<1.00$. Therefore our results show no redshift dependence, indicating that they are not affected by redshift biases.

Results {#sec:results}
=======

Do AGN Hosts Contain An Excess Of Large-scale Bars?
---------------------------------------------------

|                                   | AEGIS | COSMOS | GOODS-S |
|-----------------------------------|-------|--------|---------|
| $0.2<z<1.0$                       | 3,958 | 6,673  | 1,023   |
| Face-on Disc                      | 1,227 | 2,244  | 260     |
| AGN                               | 25    | 86     | 9       |
| Control                           | 75    | 258    | 27      |
| Barred AGN                        | 2     | 12     | 0       |
| Barred Control                    | 6     | 28     | 2       |
| $f_{\rm bar,~AGN}$                | $0.08\substack{+0.09 \\ -0.03}$ | $0.14\substack{+0.05 \\ -0.03}$ | $0.07\substack{+0.10 \\ -0.05}$[^7] |
| $f_{\rm bar,~Control}$            | $0.08\substack{+0.04 \\ -0.02}$ | $0.11\substack{+0.02 \\ -0.02}$ | $0.07\substack{+0.08 \\ -0.03}$ |
| $\overline{p}_{\rm bar,~AGN}$     | $0.23\pm{0.04}$ | $0.23\pm{0.03}$ | $0.12\pm{0.03}$ |
| $\overline{p}_{\rm bar,~Control}$ | $0.20\pm{0.02}$ | $0.20\pm{0.01}$ | $0.18\pm{0.04}$ |

: Sample Counts[]{data-label="tab:sample_counts"}

The first row gives the total number of galaxies at $0.2<z<1.0$ with secure spectroscopic $z$’s and [*HST*]{}/ACS imaging in our sample, and the second row gives the number of face-on disc galaxies drawn from the previous row. The rest of the table shows the median counts from the 100 AGN-control samples (see §3.3), and the resulting bar fraction, $f_{\rm bar}$, and average bar likelihood, $\overline{p}_{\rm bar}$. The main result of this paper is shown in Fig. \[fig:bar\_agn\_frac\], which plots the bar fraction, $f_{\rm bar}$, and the average bar likelihood, $\overline{p}_{\rm bar}$, of the AGN and non-AGN control samples for the AEGIS, COSMOS, and GOODS-S surveys. The uncertainties shown for $f_{\rm bar}$ are $68.3\%$ binomial confidence limits, calculated using quantiles of the beta distribution given the bar counts, total sample counts, and desired confidence level[^8] [@cameron11]. The uncertainties shown for $\overline{p}_{\rm bar}$ are calculated as $\sigma/\sqrt{N}$, where $\sigma$ is the standard deviation of $p_{\rm bar}$ and $N$ is the total number of galaxies. We find no statistically significant enhancement in $f_{\rm bar}$ or $\overline{p}_{\rm bar}$ in AGN hosts compared to the non-AGN control galaxies.
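The beta-quantile confidence limits of @cameron11 can be sketched as follows. Here the quantiles of the $\mathrm{Beta}(k+1,\,n-k+1)$ posterior are estimated by Monte Carlo with the standard library; an analytic inverse incomplete beta function would give the same limits:

```python
import random

# 68.3% binomial confidence limits on a fraction, following the
# beta-distribution quantile approach of Cameron (2011): the posterior on
# the fraction, given k successes in n trials and a flat prior, is
# Beta(k+1, n-k+1). Quantiles are estimated by Monte Carlo draws.
def binomial_conf(k, n, conf=0.683, draws=200_000, seed=1):
    rng = random.Random(seed)
    samples = sorted(rng.betavariate(k + 1, n - k + 1) for _ in range(draws))
    lo = samples[int(0.5 * (1.0 - conf) * draws)]
    hi = samples[int((1.0 - 0.5 * (1.0 - conf)) * draws) - 1]
    return lo, hi

# e.g., the median COSMOS AGN counts: 12 barred galaxies out of 86
lo, hi = binomial_conf(12, 86)
```

For 12 barred out of 86 this brackets $f_{\rm bar}\approx0.14$ with limits close to the $+0.05/-0.03$ quoted in the table, which is a useful consistency check on the tabulated uncertainties.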
The probabilities that each survey’s $f_{\rm bar, AGN}$ and $f_{\rm bar, Control}$ are different, given the binomial errors, are insignificant ($\lesssim1\sigma$). Conducting a two-sample K-S test on the $p_{\rm bar}$ distributions of each survey’s AGN and control samples reveals that the AGN and control samples are consistent with being drawn from the same parent sample at the $99.9\%$ level. With our results, we can quantify the level of bar excess in AGN hosts that our data can rule out by combining all three surveys together. Using the combined counts of AGN, controls, barred AGN, and barred controls, we find that the bar fraction of the combined AGN sample cannot be greater than twice the bar fraction of the combined control sample at 99.7% confidence. Therefore, we conclude that *there is no large excess of bars in AGN hosts*.

Are Large-scale Bars Efficient Fuelers Of AGN?
----------------------------------------------

A slightly different but related question is, “Are bars efficient fuelers of AGN?” We answer this question by studying the AGN fraction of barred and non-barred galaxies, as presented in Fig. \[fig:agn\_fraction\]. From our face-on disc sample (see §\[sub:disc\_selection\]), we select barred galaxies with the criteria described in §\[sub:bar\_selection\]. We then create a control sample of non-barred galaxies by demanding that $p_{\rm bar}<0.05$ and by using the 5-stage matching technique described in §\[sub:control\_selection\]. That is, for each barred galaxy, we find three non-barred galaxies matched in stellar mass, rest-frame colour, size, Sérsic index, and redshift. Using the AGN criteria defined in §\[sub:agn\_selection\], Fig. \[fig:agn\_fraction\] shows that there is no statistically significant excess of AGN among barred galaxies.
Discussion {#sec:discussion} ========== At $z\sim0$, several works have previously found no link between bars and AGN (@ho97 [@mulchaey97; @malkan98; @hunt99; @regan99b; @martini99; @erwin02; @martini03; @lee12b; @cisternas13], but see @laine02 [@knapen00; @laurikainen04; @oh12; @alonso13]; Galloway et al. 2014, in preparation), and our results suggest that this absence of direct bar-driven AGN activity persists out to $z=1$. Our chosen redshift range corresponds to an epoch where approximately half of the local supermassive black hole mass density was formed [@aird10], indicating that bars are not directly responsible for the buildup of at least half of the local supermassive black hole mass density. Moreover, the paucity of bars at $z>1$ [@kraljic12; @simmons14] indicates that bars were probably not closely associated with AGN at $z>1$ either. Therefore, large-scale bars are likely not the primary fueling mechanism for supermassive black hole growth over cosmic time. Recently, [@cisternas14] also searched for a bar-AGN connection using a slightly smaller sample of 95 AGN in COSMOS, with both photometric and spectroscopic redshifts at $0.15<z<0.84$. Their results are broadly consistent with ours, with neither a significant bar excess among AGN nor an AGN excess among barred galaxies at $z>0.4$. However, [@cisternas14] do suggest a marginal excess of bars among AGN (compared to non-AGN) at $z \sim 0.3$, which we do not detect in our results. This difference is likely due to slight differences in bar identification[^9], or due to the differences in our matching of AGN and inactive galaxies. (Our samples are matched in 5 parameters, while @cisternas14 matched AGN and inactive galaxies in stellar mass and redshift only.) However, these are only small differences, and the results of [@cisternas14] are consistent with our conclusion that bars do not dominate AGN fueling at $z \lesssim 1$. 
![The AGN fraction, $f_{\rm AGN}$, of the barred (green squares) and non-barred control samples (purple triangles) for the AEGIS, COSMOS, and GOODS-S surveys. The error bars on $f_{\rm AGN}$ are the $68.3\%$ binomial confidence limits. There is no statistically significant difference in $f_{\rm AGN}$ between the barred and non-barred control samples across all three surveys, indicating that there is no statistically significant excess of AGN in barred galaxies.[]{data-label="fig:agn_fraction"}](agn_fraction_all.eps) However, before ruling out bars as the primary fueling mechanism for supermassive black hole growth, one must ask if the bar–AGN connection can be concealed if a bar dissolves while a black hole is still accreting the bar-funneled gas. In the present analysis, we are assuming that the bar instantaneously funnels gas to the central black hole upon its formation, and moreover, that there is no delay in AGN activity. The typical lifetimes of AGN and bars are uncertain. For AGN, the current estimates range from $10^6$ to $10^8$ years (@haehnelt93 [@martini04], see Hanny’s Voorwerp in @keel12 for an example of an AGN with a short lifetime). For bars, early simulations of isolated disc galaxies by [@bournaud02] indicate that they are short-lived, with a lifetime of 1–2$\times10^9$ years. The latest simulations of isolated disc galaxies by [@athanassoula13b], however, indicate that bars are long-lived, with a lifetime as long as $10^{10}$ years. This latter result is supported by recent zoom-in cosmological simulations by [@kraljic12], who show that bars formed at $z\approx1$ generally persist down to $z=0$. Despite the uncertainty in both AGN and bar lifetimes, even the shortest bar lifetime is an order of magnitude larger than the longest AGN lifetime, meaning that the bar-AGN connection is not likely to be concealed by short bar lifetimes. Small-scale, nuclear bars may also fuel supermassive black hole growth. 
Unfortunately, we are unable to resolve such small structures in our images. However, work in the local universe shows that nuclear bars are not more frequent in AGN hosts compared to non-AGN hosts [@mulchaey97; @regan99b; @martini99; @erwin02; @laine02]. This result mirrors that of large-scale bars, suggesting that nuclear bars do not fuel supermassive black hole growth at $z\sim0$ either. Whether nuclear bars can fuel AGN at $z>0$ is left for future work. Interestingly, studies of the relative angle between AGN accretion discs and host galaxy discs are consistent with our interpretation that bars do not fuel AGN. If AGN accretion discs are fueled by bar-funneled gas, then one would expect this gas to have an angular momentum vector that is parallel to that of the bulk gaseous disc of the galaxy. However, it appears that the accretion discs are randomly orientated with respect to their host galaxies [@ulvestad84; @kinney00; @schmitt02; @schmitt03; @greenhill09], which fits with our interpretation that bars do not directly fuel AGN. This misalignment could be interpreted even more generally—there simply may not be a galactic-scale black hole fueling mechanism. Instead, a collection of processes, including minor mergers [e.g., @kaviraj14a; @kaviraj14b], cooling flows [e.g., @best07], and multi-body interactions with star clusters or clouds [e.g., @genzel94], may work to transport gas into the vicinity of the black hole. This process, known as “stochastic fueling” [@sanders84], has been implemented in models that successfully reproduce observations of low to intermediate luminosity AGN [@hopkins06; @hopkins13].

Conclusion
==========

In this paper, we present a new study on the bar-AGN connection beyond the local universe. We combine [*Chandra*]{} and Galaxy Zoo: [*Hubble*]{} data in the AEGIS, COSMOS, and GOODS-S surveys to determine whether AGN are preferentially fed by large-scale bars at $0.2<z<1.0$.
Using GZH classifications and galaxy structural measurements, we select non-merging, face-on disc galaxies that have sizes large enough to accurately identify large-scale bars. From this face-on disc sample, we identify AGN with [*X*]{}-ray luminosities $10^{42} ~{\rm erg~s^{-1}} < L_{\rm X} < 10^{44} ~\rm erg~s^{-1}$. We then use a novel multi-parameter technique to construct control samples of non-AGN galaxies robustly matched to the AGN hosts in stellar mass, rest-frame colour, Sérsic index, effective radius, and redshift. With these samples, we find no statistically significant excess of barred galaxies among AGN hosts (and no excess of AGN in barred galaxies). Specifically, we find that the bar fraction of the AGN sample cannot be greater than twice the bar fraction of the control sample at 99.7% confidence. The simplest interpretation is that AGN are neither preferentially nor directly fed via [**large-scale**]{} bars at $0.2<z<1.0$.

Acknowledgements {#acknowledgements .unnumbered}
================

Authors from UC Santa Cruz acknowledge financial support from the NSF Grant AST 08-08133. KS gratefully acknowledges support from Swiss National Science Foundation Grant PP00P2\_138979/1. JT acknowledges support by NASA through [Hubble]{} Fellowship grant HST-HF-51330.01 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. EA and AB acknowledge financial support to the DAGAL network from the People Programme (Marie Curie Actions) of the European Union’s Seventh Framework Programme FP7/2007-2013/ under REA grant agreement number PITN-GA-2011-289313. They also acknowledge financial support from the CNES (Centre National d’Etudes Spatiales - France). RCN acknowledges STFC Rolling Grant ST/I001204/1 “Survey Cosmology and Astrophysics”.
BDS gratefully acknowledges support from the Oxford Martin School and from the Henry Skynner Junior Research Fellowship at Balliol College, Oxford. LFF and KWW acknowledge support from the UMN GIA program. EC thanks Ramin A. Skibba, Michael Williams, Sugata Kaviraj, Yicheng Guo, Hassen Yesuf, and Guillermo Barro for useful discussions. The JavaScript Cosmology Calculator [@wright06] and TOPCAT [@taylor05] were used in the preparation of this paper. We also thank the anonymous referee for a helpful report. Aird, J., Nandra, K., Laird, E. S., et al. 2010, [MNRAS]{}, 401, 2531 Alonso, M. S., Coldwell, G., & Lambas, D. G. 2013, [A[&]{}A]{}, 549, A141 Athanassoula, E. 1992, [MNRAS]{}, 259, 345 Athanassoula, E. 2013a, Secular Evolution of Galaxies, 305 Athanassoula, E., Machado, R. E. G., & Rodionov, S. A. 2013b, [MNRAS]{}, 429, 1949 Barnes, J. E., & Hernquist, L. E. 1991, [ApJL]{}, 370, L65 Barro, G., P[é]{}rez-Gonz[á]{}lez, P. G., Gallego, J., et al. 2011, [ApJS]{}, 193, 13 Bauer, F. E., Alexander, D. M., Brandt, W. N., et al. 2002, [AJ]{}, 124, 2351 Best, P. N., von der Linden, A., Kauffmann, G., Heckman, T. M., & Kaiser, C. R. 2007, [MNRAS]{}, 379, 894 Blanton, M. R., & Roweis, S. 2007, [AJ]{}, 133, 734 Bournaud, F., & Combes, F. 2002, [A[&]{}A]{}, 392, 83 Brammer, G. B., van Dokkum, P. G., & Coppi, P. 2008, [ApJ]{}, 686, 1503 Bruzual, G., & Charlot, S. 2003, [MNRAS]{}, 344, 1000 Calzetti, D., Armus, L., Bohlin, R. C., et al. 2000, [ApJ]{}, 533, 682 Cameron, E. 2011, [PASA]{}, 28, 128 Casteels, K. R. V., Bamford, S. P., Skibba, R. A., et al. 2013, [MNRAS]{}, 429, 1051 Chabrier, G. 2003, [PASP]{}, 115, 763 Cheung, E., Faber, S. M., Koo, D. C., et al. 2012, [ApJ]{}, 760, 131 Cheung, E., Athanassoula, E., Masters, K. L., et al. 2013, [ApJ]{}, 779, 162 Cimatti, A., Brusa, M., Talia, M., et al. 2013, [ApJL]{}, 779, L13 Cisternas, M., Jahnke, K., Inskip, K. J., et al. 2011, [ApJ]{}, 726, 57 Cisternas, M., Gadotti, D. A., Knapen, J. H., et al.
2013, [ApJ]{}, 776, 50 Cisternas, M., Sheth, K., Salvato, M., et al. 2014, arXiv:1409.2871 Civano, F., Elvis, M., Brusa, M., et al. 2012, [ApJS]{}, 201, 30 Cooper, M. C., Aird, J. A., Coil, A. L., et al. 2011, [ApJS]{}, 193, 14 Cooper, M. C., Griffith, R. L., Newman, J. A., et al. 2012, [MNRAS]{}, 419, 3018 Croton, D. J., Springel, V., White, S. D. M., et al. 2006, [MNRAS]{}, 365, 11 Davis, M., Faber, S. M., Newman, J., et al. 2003, [Proc. SPIE]{}, 4834, 161 Davis, M., Guhathakurta, P., Konidaris, N. P., et al. 2007, [ApJL]{}, 660, L1 Di Matteo, T., Springel, V., & Hernquist, L. 2005, [Nature]{}, 433, 604 Dickinson, M., Giavalisco, M., & GOODS Team 2003, The Mass of Galaxies at Low and High Redshift, 324 Elvis, M., Civano, F., Vignali, C., et al. 2009, [ApJS]{}, 184, 158 Erwin, P., & Sparke, L. S. 2002, [AJ]{}, 124, 65 Erwin, P. 2005, [MNRAS]{}, 364, 283 Faber, S. M., Phillips, A. C., Kibrick, R. I., et al. 2003, [Proc. SPIE]{}, 4841, 1657 Fabian, A. C. 2012, [ARAA]{}, 50, 455 Gadotti, D. A. 2011, [MNRAS]{}, 415, 3308 Garmire, G. P., Bautz, M. W., Ford, P. G., Nousek, J. A., & Ricker, G. R., Jr. 2003, [Proc. SPIE]{}, 4851, 28 Genzel, R., Hollenbach, D., & Townes, C. H. 1994, Reports on Progress in Physics, 57, 417 Georgakakis, A., Nandra, K., Laird, E. S., et al. 2006, [MNRAS]{}, 371, 221 Giavalisco, M., Ferguson, H. C., Koekemoer, A. M., et al. 2004, [ApJL]{}, 600, L93 Greenhill, L. J., Kondratko, P. T., Moran, J. M., & Tilak, A. 2009, [ApJ]{}, 707, 787 Griffith, R. L., Cooper, M. C., Newman, J. A., et al. 2012, [ApJS]{}, 200, 9 Grogin, N. A., Kocevski, D. D., Faber, S. M., et al. 2011, [ApJS]{}, 197, 35 Guo, Y., Ferguson, H. C., Giavalisco, M., et al. 2013, [ApJS]{}, 207, 24 Haehnelt, M. G., & Rees, M. J. 1993, [MNRAS]{}, 263, 168 Heckman, T., & Best, P. 2014, arXiv:1403.4620 Ho, L. C., Filippenko, A. V., & Sargent, W. L. W. 1997, [ApJ]{}, 487, 591 Hopkins, P. F., Hernquist, L., Cox, T. J., et al. 2005a, [ApJ]{}, 630, 705 Hopkins, P. 
F., Hernquist, L., Martini, P., et al. 2005b, [ApJL]{}, 625, L71 Hopkins, P. F., & Hernquist, L. 2006, [ApJS]{}, 166, 1 Hopkins, P. F., & Quataert, E. 2010, [MNRAS]{}, 407, 1529 Hopkins, P. F., & Quataert, E. 2011, [MNRAS]{}, 415, 1027 Hopkins, P. F., Kocevski, D. D., & Bundy, K. 2013, arXiv:1309.6321 Hoyle, B., Masters, K. L., Nichol, R. C., et al. 2011, [MNRAS]{}, 415, 3627 Huang, J.-S., Faber, S. M., Willmer, C. N. A., et al. 2013, [ApJ]{}, 766, 21 Hunt, L. K., & Malkan, M. A. 1999, [ApJ]{}, 516, 660 Kartaltepe, J. S., Sanders, D. B., Le Floc’h, E., et al. 2010, [ApJ]{}, 721, 98 Kauffmann, G., Heckman, T. M., Tremonti, C., et al. 2003, [MNRAS]{}, 346, 1055 Kaviraj, S. 2014a, [MNRAS]{}, 437, L41 Kaviraj, S. 2014b, arXiv:1402.1166 Keel, W. C., Lintott, C. J., Schawinski, K., et al. 2012, [AJ]{}, 144, 66 Kinney, A. L., Schmitt, H. R., Clarke, C. J., et al. 2000, [ApJ]{}, 537, 152 Knapen, J. H., Shlosman, I., & Peletier, R. F. 2000, [ApJ]{}, 529, 93 Kocevski, D. D., Faber, S. M., Mozena, M., et al. 2012, [ApJ]{}, 744, 148 Koekemoer, A. M., Aussel, H., Calzetti, D., et al. 2007, [ApJS]{}, 172, 196 Koekemoer, A. M., Faber, S. M., Ferguson, H. C., et al. 2011, [ApJS]{}, 197, 36 Kormendy, J. 1979, [ApJ]{}, 227, 714 Kormendy, J., & Kennicutt, R. C. 2004, [ARAA]{}, 42, 603 Kormendy, J., & Ho, L. C. 2013, [ARAA]{}, 51, 511 Koss, M., Mushotzky, R., Veilleux, S., & Winter, L. 2010, [ApJL]{}, 716, L125 Kraljic, K., Bournaud, F., & Martig, M. 2012, [ApJ]{}, 757, 60 Kriek, M., van Dokkum, P. G., Labb[é]{}, I., et al. 2009, [ApJ]{}, 700, 221 Laine, S., Shlosman, I., Knapen, J. H., & Peletier, R. F. 2002, [ApJ]{}, 567, 97 Laird, E. S., Nandra, K., Georgakakis, A., et al. 2009, [ApJS]{}, 180, 102 Laurikainen, E., Salo, H., & Buta, R. 2004, [ApJ]{}, 607, 103 Lee, G.-H., Park, C., Lee, M. G., & Choi, Y.-Y. 2012a, [ApJ]{}, 745, 125 Lee, G.-H., Woo, J.-H., Lee, M. G., et al. 2012b, [ApJ]{}, 750, 141 Lilly, S. J., Le Brun, V., Maier, C., et al. 2009, [ApJS]{}, 184, 218 Luo, B., Bauer, F.
E., Brandt, W. N., et al. 2008, [ApJS]{}, 179, 19 Mainieri, V., Hasinger, G., Cappelluti, N., et al. 2007, [ApJS]{}, 172, 368 Malkan, M. A., Gorjian, V., & Tam, R. 1998, [ApJS]{}, 117, 25 Martinet, L. 1995, [Fund. Cosm. Phys]{}, 15, 341 Martini, P., & Pogge, R. W. 1999, [AJ]{}, 118, 2646 Martini, P., Regan, M. W., Mulchaey, J. S., & Pogge, R. W. 2003, [ApJ]{}, 589, 774 Martini, P. 2004, Coevolution of Black Holes and Galaxies, 169 Masters, K. L., Nichol, R. C., Hoyle, B., et al. 2011, [MNRAS]{}, 411, 2026 Masters, K. L., Nichol, R. C., Haynes, M. P., et al. 2012, [MNRAS]{}, 424, 2180 Melvin, T., Masters, K., Lintott, C., et al. 2014, [MNRAS]{}, 438, 2882 Mihos, J. C., & Hernquist, L. 1996, [ApJ]{}, 464, 641 Mulchaey, J. S., & Regan, M. W. 1997, [ApJL]{}, 482, L135 Muzzin, A., Marchesini, D., Stefanon, M., et al. 2013, [ApJS]{}, 206, 8 Nair, P. B., & Abraham, R. G. 2010b, [ApJL]{}, 714, L260 Nandra, K., Laird, E. S., Adelberger, K., et al. 2005, [MNRAS]{}, 356, 568 Nandra, K., Georgakakis, A., Willmer, C. N. A., et al. 2007, [ApJL]{}, 660, L11 Newman, J. A., Cooper, M. C., Davis, M., et al. 2013, [ApJS]{}, 208, 5 Noguchi, M. 1988, [A[&]{}A]{}, 203, 259 Oh, S., Oh, K., & Yi, S. K. 2012, [ApJS]{}, 198, 4 Peng, C. Y., Ho, L. C., Impey, C. D., & Rix, H.-W. 2002, [AJ]{}, 124, 266 Regan, M. W., Vogel, S. N., & Teuben, P. J. 1995, [ApJ]{}, 449, 576 Regan, M. W., Sheth, K., & Vogel, S. N. 1999, [ApJ]{}, 526, 97 Regan, M. W., & Mulchaey, J. S. 1999, [AJ]{}, 117, 2676 Rix, H.-W., Barden, M., Beckwith, S. V. W., et al. 2004, [ApJS]{}, 152, 163 Sakamoto, K., Okumura, S. K., Ishizuki, S., & Scoville, N. Z. 1999, [ApJ]{}, 525, 691 Salpeter, E. E. 1955, [ApJ]{}, 121, 161 Sanders, R. H. 1984, [A[&]{}A]{}, 140, 52 Sanders, D. B., Soifer, B. T., Elias, J. H., et al. 1988, [ApJ]{}, 325, 74 Schawinski, K., Virani, S., Simmons, B., et al. 2009, [ApJL]{}, 692, L19 Schawinski, K., Urry, C. M., Virani, S., et al. 2010, [ApJ]{}, 711, 284 Schawinski, K., Treister, E., Urry, C. M., et al.
2011, [ApJL]{}, 727, L31 Schawinski, K., Simmons, B. D., Urry, C. M., Treister, E., & Glikman, E. 2012, [MNRAS]{}, 425, L61 Schmitt, H. R., Pringle, J. E., Clarke, C. J., & Kinney, A. L. 2002, [ApJ]{}, 575, 150 Schmitt, H. R., Donley, J. L., Antonucci, R. R. J., et al. 2003, [ApJ]{}, 597, 768 Scoville, N., Aussel, H., Brusa, M., et al. 2007, [ApJS]{}, 172, 1 Sellwood, J. A. 2014, Reviews of Modern Physics, 86, 1 Sheth, K., Regan, M. W., Vogel, S. N., & Teuben, P. J. 2000, [ApJ]{}, 532, 221 Sheth, K., Vogel, S. N., Regan, M. W., et al. 2002, [AJ]{}, 124, 2581 Sheth, K., Vogel, S. N., Regan, M. W., Thornley, M. D., & Teuben, P. J. 2005, [ApJ]{}, 632, 217 Sheth, K., Elmegreen, D. M., Elmegreen, B. G., et al. 2008, [ApJ]{}, 675, 1141 Shlosman, I., Frank, J., & Begelman, M. C. 1989, [Nature]{}, 338, 45 Shlosman, I., Begelman, M. C., & Frank, J. 1990, [Nature]{}, 345, 679 Silverman, J. D., Green, P. J., Barkhouse, W. A., et al. 2008a, [ApJ]{}, 679, 118 Silverman, J. D., Mainieri, V., Lehmer, B. D., et al. 2008b, [ApJ]{}, 675, 1025 Simkin, S. M., Su, H. J., & Schwarz, M. P. 1980, [ApJ]{}, 237, 404 Simard, L., Willmer, C. N. A., Vogt, N. P., et al. 2002, [ApJS]{}, 142, 1 Simmons, B. D., Urry, C. M., Schawinski, K., Cardamone, C., & Glikman, E. 2012, [ApJ]{}, 761, 75 Simmons, B. D., Lintott, C., Schawinski, K., et al. 2013, [MNRAS]{}, 429, 2199 Simmons, B. D., Melvin, T., Lintott, C., et al. 2014, [MNRAS]{}, 445, 3466 Skibba, R. A., Masters, K. L., Nichol, R. C., et al. 2012, [MNRAS]{}, 423, 1485 Springel, V., Di Matteo, T., & Hernquist, L. 2005, [ApJL]{}, 620, L79 Taylor, M. B. 2005, Astronomical Data Analysis Software and Systems XIV, 347, 29 Treister, E., Schawinski, K., Urry, C. M., & Simmons, B. D. 2012, [ApJL]{}, 758, L39 Trump, J. R., Impey, C. D., Elvis, M., et al. 2009, [ApJ]{}, 696, 1195 Trump, J. R. 2013, Proc. from Galaxy Mergers in an Evolving Universe, Ed. by W.-H. Sun, C. K. Xu, N. Z. Scoville, & D. B. 
Sanders, ASPC, 477, 227 Ueda, Y., Akiyama, M., Ohta, K., & Miyaji, T. 2003, [ApJ]{}, 598, 886 Ulvestad, J. S., & Wilson, A. S. 1984, [ApJ]{}, 285, 439 Wada, K., & Habe, A. 1992, [MNRAS]{}, 258, 82 Willett, K. W., Lintott, C. J., Bamford, S. P., et al. 2013, [MNRAS]{}, 435, 2835 Williams, C. C., Giavalisco, M., Cassata, P., et al. 2014, [ApJ]{}, 780, 1 Wright, E. L. 2006, [PASP]{}, 118, 1711 Xue, Y. Q., Luo, B., Brandt, W. N., et al. 2011, [ApJS]{}, 195, 10 Zurita, A., Rela[ñ]{}o, M., Beckman, J. E., & Knapen, J. H. 2004, [A[&]{}A]{}, 413, 73 [^1]: This publication has been made possible by the participation of more than 85,000 volunteers in the Galaxy Zoo project. Their contributions are individually acknowledged at http://authors.galaxyzoo.org/. [^2]: Unless otherwise stated, we use “bars” to refer to large-scale structures in isolated systems, i.e., we do not consider bars created through interactions. These large-scale bars are commonly referred to as primary bars while small-scale (less than, or of the order of 1 kpc) bars are commonly referred to as secondary bars. [^3]: For AEGIS and GOODS-S, this is the average exposure time of the two observed [*HST*]{}/ACS bands used in this work (see §\[sub:aegis\_data\] and §\[sub:goodss\_data\]) [^4]: The complete decision tree is available at <http://data.galaxyzoo.org>. [^5]: The exclusion of this criterion does not significantly affect the measured bar fractions, nor does it change our conclusions. [^6]: $U-V$ for COSMOS [^7]: According to [@cameron11], when $f_{\rm bar}=0$, one can adopt the median of the beta distribution likelihood function as one’s best guess for the true $f_{\rm bar}$. [^8]: For small samples, one can refer to the reference Tables in [@cameron11]. 
[^9]: The discrepancy between the GZH bar fractions and those used by [@cisternas14] (which are similar to those of @sheth08) has been explored in [@melvin14]; the most likely reason for this difference is that our $p_{\rm bar}$ threshold of 0.5 tends to identify strong bars, meaning that our work concerns mainly strong bars. The bar fractions used by [@cisternas14] are consistent with the total bar fractions of [@sheth08], meaning that their work concerns both strong and weak bars.
--- abstract: | Oblivious Transfer, a fundamental problem in the field of secure multi-party computation, is defined as follows: A database $D\!B$ of $N$ bits held by Bob is queried by a user Alice who is interested in the bit $D\!B_b$ in such a way that (1) Alice learns $D\!B_b$ and only $D\!B_b$ and (2) Bob does not learn anything about Alice’s choice $b$. While solutions to this problem in the classical domain rely largely on unproven computational complexity-theoretic assumptions, it is also known that perfect solutions that guarantee both database and user privacy are impossible in the quantum domain. Jakobi et al. \[Phys. Rev. A, 83(2), 022301, Feb 2011\] proposed a protocol for Oblivious Transfer using well known QKD techniques to establish an *Oblivious Key* to solve this problem. Their solution provided a good degree of database and user privacy (using physical principles like the impossibility of perfectly distinguishing non-orthogonal quantum states and the impossibility of superluminal communication) while being loss-resistant and implementable with commercial QKD devices (due to the use of SARG04). However, their Quantum Oblivious Key Distribution (QOKD) protocol requires a communication complexity of $O(N \log N)$. Since modern databases can be extremely large, it is important to reduce this communication as much as possible. In this paper, we first suggest a modification of their protocol wherein the number of qubits that need to be exchanged is reduced to $O(N)$. A subsequent generalization reduces the quantum communication complexity even further, in such a way that only a few hundred qubits need to be transferred even for very large databases. author: - 'M. V. Panduranga Rao$^1$, M.
Jakobi$^2$' title: 'Towards Communication-Efficient Quantum Oblivious Key Distribution' ---

Introduction\[sec:level1\]
==========================

Impressive progress has been made over the last two decades in our understanding of how quantum principles can be used to secure communication between trustful parties against eavesdropping. For example, Quantum Key Distribution (QKD) techniques have gained steadily in technical applicability. However, in the more general field of secure multi-party computation, which comprises tasks such as Coin Flipping and Bit Commitment and normally implies communication between distrustful parties, only a few quantum alternatives to classical schemes have emerged. One of the most fundamental problems of this type is Oblivious Transfer (OT), also known as Symmetrically Private Information Retrieval (SPIR). This task is complete for secure multi-party computation in the sense that all other tasks may be constructed from it [@Kilian]. Originally introduced in two different flavors by Rabin [@Rabin] in 1981 and by Even, Goldreich and Lempel [@EGL] in 1985, which were shown to be equivalent by Crépeau [@Crepeau], the problem of 1-out-of-2 OT requires Bob to send two bits to Alice such that (i) Alice gets to receive only one bit – she cannot get significant information about the other – and (ii) Bob does not get to know which bit Alice received, i.e. he is oblivious to what she learns. The problem of 1-out-of-N OT is a generalization of 1-out-of-2 OT: Bob hosts a database $D\!B$ of $N$ bits. Alice wishes to retrieve the value of a certain bit, say the $b^{th}$, from the database. Privacy has to be preserved symmetrically: Bob should not get to know which bit Alice is interested in (that is, in this case he should not get to know $b$); at the same time, Alice should not get to know the value of any other bit in the database that she has not queried for.
It is interesting to note that this task may be accomplished by precomputing an “Oblivious Key” [@OK]: a string $O\!K$ of $N$ random bits that is (i) completely known to Bob while (ii) Alice knows only one bit $O\!K_j$ of this string, with Bob being oblivious to $j$. Once such a key is established, it can be used to complete the actual OT: Alice, being interested in the database element $D\!B_b$, announces a shift $s=j-b$ to Bob. Thus, Bob gets to know neither $j$ nor $b$, but only $s$. Bob then encrypts the database bitwise as $D\!B'_a = \mathrm{X\!O\!R}(D\!B_a,O\!K_{a+s})$, $1\leq a\leq N$, and announces the encrypted database $D\!B'$. From this, Alice recovers the bit that she wanted: $D\!B_b=\mathrm{X\!O\!R}(D\!B'_b,O\!K_j)$ and the OT is complete. There exist several approaches in the classical realm to SPIR and OT (see e.g. [@Kilian; @jacmChor; @Rabin; @EGL]). Existing classical protocols for these problems depend on some unproven computational complexity-theoretic assumption, like the nonexistence of efficient algorithms for integer factoring. More recently, classical approaches have been complemented by several quantum protocols. However, they have been subsequently shown to be inadequate because of susceptibility to different attacks [@Bennett] or practical difficulties [@BrassardCrep; @CrepKil]. A result of Lo [@Lo] put a damper on the quantum efforts. He showed in 1996 that an ideal solution cannot exist even in the quantum world – any protocol that guarantees perfect concealment of $b$ against Bob actually leaves the database completely vulnerable to attacks by Alice. Since then, several workarounds have been proposed (see e.g. [@Giovannetti]), albeit with some vulnerabilities. Recently, Jakobi et al. [@Jakobietal] made interesting progress by proposing a protocol that circumvents the impossibility proofs at the cost of perfect concealment.
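Once the oblivious key exists, the completion step above is purely classical. The following minimal Python sketch illustrates it; the 0-based indexing and the cyclic interpretation of the shifted index $a+s$ are our own simplifications, not fixed by the text:

```python
import secrets

def complete_ot(db, ok, j, b):
    """Complete OT from an established oblivious key.

    db: Bob's database bits; ok: oblivious key (Bob knows all of it,
    Alice conclusively knows only ok[j]); b: the index Alice wants.
    Indices are 0-based and the shift is taken cyclically (an
    illustrative convention, not mandated by the protocol text).
    """
    n = len(db)
    s = (j - b) % n                                  # Alice announces only s
    # Bob encrypts bitwise: DB'_a = XOR(DB_a, OK_{a+s})
    enc = [db[a] ^ ok[(a + s) % n] for a in range(n)]
    # Alice recovers her bit from the announced encrypted database
    return enc[b] ^ ok[j]

n = 16
db = [secrets.randbelow(2) for _ in range(n)]
ok = [secrets.randbelow(2) for _ in range(n)]
j, b = 11, 3                                         # Alice knows OK_11, wants DB_3
assert complete_ot(db, ok, j, b) == db[b]
```

Because `enc[b]` is `db[b] ^ ok[(b+s) % n]` and `(b+s) % n == j`, the final XOR always cancels the key bit, while Bob only ever sees the shift `s`.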
Their protocol relies on well known QKD techniques to establish an Oblivious Key between Alice and Bob that fulfills the OT requirements to a large extent while being consistent with Lo’s proof. Therefore, we refer to their approach as Quantum Oblivious Key Distribution (QOKD). The QOKD protocol offers good database security as well as user privacy and has been shown to be resilient to several attacks. However, some problems remain in the communication complexity of their solution. A transfer of even $N$ qubits is costly in itself, as modern databases are extremely large; the QOKD protocol requires transferring at least $kN$ qubits, where $k$ is a security parameter. It turns out that Alice will on average get to know about $N(\frac{1}{4})^k$ bits of the database. Therefore, unless $k$ also increases with $N$, the number of bits that become known to Alice in addition to the one she is supposed to know increases with $N$. By the same token, if it is required to keep this number constant, $k$ will have to rise at least logarithmically with $N$.

This Paper \[subsec:level1\]
----------------------------

In this paper, we first suggest a modification of the initial QOKD protocol wherein the number of qubits that need to be exchanged is reduced to $N$. We then investigate the impact of the modification on database security and user privacy. Subsequently, we show that the modification can be generalized to reduce the required quantum communication complexity even further. We show simple numerical examples of the generalization indicating that at most a few hundred qubits are sufficient even for extremely large databases. This paper is arranged as follows. The next section gives a brief account of the QOKD protocol of Jakobi et al. [@Jakobietal]. In section III we discuss its modification, analysis and generalization. Section IV concludes the paper.
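The $O(N \log N)$ scaling can be made concrete with a back-of-the-envelope computation. The helper below is our own illustration (the function name and the ceiling convention for $k=\log_4(N/c)$ are assumptions, not from the protocol):

```python
import math

def qubits_original(N, c=1):
    """Qubits sent by the original QOKD protocol: k*N qubits with
    k = ceil(log4(N/c)), chosen so that Alice conclusively learns
    about c key elements on average (N * (1/4)**k <= c)."""
    k = max(1, math.ceil(math.log(N / c, 4)))
    return k, k * N

# k itself grows with N, so total communication grows as O(N log N)
costs = {N: qubits_original(N) for N in (10**4, 10**6, 10**8)}
```

For a database of $10^4$ bits this gives $k=7$ and $7\times10^4$ qubits, while $10^8$ bits already require $k=14$ and over a billion qubits, which is the growth the modification below is designed to remove.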
The initial Quantum Oblivious Key Distribution Protocol
=======================================================

Brief Sketch
------------

The QOKD protocol for SPIR proceeds in three phases: First, a key is established between Bob and Alice using the SARG04 [@Scarani2004] Quantum Key Distribution (QKD) protocol. In the second phase, this key is processed to produce an oblivious key $O\!K$, a string of $N$ bits. While Bob has complete knowledge of this oblivious key, Alice knows only a few bits conclusively. Note that this $O\!K$ is not perfect and therefore does not contradict Lo’s impossibility proof. In the final phase, the oblivious key $O\!K$ is used to classically encrypt the database so that Alice can learn the bit that she is interested in.

**First phase:** In contrast to BB84, the SARG04 QKD protocol uses the basis to encode a bit. For example, let the “up-down” basis $\updownarrow$ encode bit value 0 and the “left-right” basis encode 1. The protocol would then use the four states ${\left|\uparrow\right\rangle}$, ${\left|\rightarrow\right\rangle}$, ${\left|\downarrow\right\rangle}$, and ${\left|\leftarrow\right\rangle}$, with $\left|\langle{\uparrow}{\left|\rightarrow\right\rangle}\right|^2=\frac{1}{2}$ etc. To establish one bit of the key, Bob prepares one of these states and sends it to Alice. He then announces a pair of states: the sent state and one state from the other basis. For instance, to send 0, Bob can prepare the state ${\left|\uparrow\right\rangle}$ and announce the pair $\{{\left|\uparrow\right\rangle},{\left|\rightarrow\right\rangle}\}$. Alice then has to determine whether Bob sent ${\left|\uparrow\right\rangle}$ or ${\left|\rightarrow\right\rangle}$. A simple way to do this is to measure the received state in one of the two bases and hope for a result that will exclude one of the announced states.
In the example above, measuring in the left-right basis will yield the result ${\left|\leftarrow\right\rangle}$ with probability $1/2$, which excludes the announced state ${\left|\rightarrow\right\rangle}$. This allows Alice to conclude that the state sent by Bob must have been ${\left|\uparrow\right\rangle}$. A measurement in the up-down basis would never yield a conclusive result, as the only possible result is ${\left|\uparrow\right\rangle}$. Since Alice chooses the correct basis half of the time and then obtains a conclusive result with probability $1/2$, the overall probability of having a conclusive result is $\frac{1}{4}$ in SARG04. Therefore, Alice will know only a quarter of the sent bits with certainty; the values of the rest are inconclusive. Indeed, a “bit” can now also have the value “inconclusive” in addition to 0 or 1. To proceed with the extraction of the oblivious key $O\!K$, all sent bits are kept for the second phase. Note that this procedure is completely loss-independent [@comm-loss].

**Second phase:** The steps of the first phase are repeated until a raw key $R$ with elements $\{q_i\}, i=1\ldots kN$ is established. Alice will know the values of $\frac{kN}{4}$ bits of $R$ conclusively, while Bob knows all. The problem now is to extract (from the raw key $R$) an oblivious key $O\!K$, a string of $N$ bits completely known to Bob but of which Alice only knows a few elements. To that end, we form $N$ disjoint groups of $k$ qubits each. The elements of the oblivious key $O\!K$ are then defined as the $\mathrm{X\!O\!R}$ of the $N$ groups: $O\!K_j=\mathrm{X\!O\!R}(q_{k(j-1)+1},q_{k(j-1)+2},\ldots,q_{kj})$ for $1\leq j \leq N$. Therefore, even if only one bit of a group is inconclusive for Alice, her evaluation of the group’s $\mathrm{X\!O\!R}$ will be inconclusive. If Alice conducts her measurement as described in phase I, the probability that Alice knows all the bits of a group conclusively and can therefore compute the parity of the group is $(\frac{1}{4})^k$.
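Both the per-qubit conclusiveness rate of $\frac{1}{4}$ and hence the $(\frac{1}{4})^k$ group probability can be checked with a small classical simulation of the phase-I measurement statistics. The toy encoding of the four SARG04 states as real 2-vectors below is our own (any pair of mutually unbiased bases works); it is a sketch of the statistics, not an implementation of the protocol:

```python
import math
import random

# |up>,|down> span one basis, |right>,|left> the other,
# with |<up|right>|^2 = 1/2 (mutually unbiased bases).
S = 1 / math.sqrt(2)
BASES = {"ud": ((1.0, 0.0), (0.0, 1.0)), "lr": ((S, S), (S, -S))}

def inner(a, b):
    return a[0] * b[0] + a[1] * b[1]

def measure(state, basis, rng):
    """Born-rule projective measurement in the given basis."""
    p0 = inner(state, basis[0]) ** 2
    return basis[0] if rng.random() < p0 else basis[1]

def conclusive_rate(n, rng=None):
    rng = rng or random.Random(0)
    conclusive = 0
    for _ in range(n):
        sent_basis = rng.choice(["ud", "lr"])       # the basis encodes the bit
        sent = BASES[sent_basis][rng.randrange(2)]
        other = "lr" if sent_basis == "ud" else "ud"
        decoy = BASES[other][rng.randrange(2)]      # announced pair: {sent, decoy}
        outcome = measure(sent, BASES[rng.choice(["ud", "lr"])], rng)
        if abs(inner(outcome, decoy)) < 1e-12:      # outcome rules out the decoy,
            conclusive += 1                         # so the sent state is identified
    return conclusive / n

rate = conclusive_rate(100_000)
```

Running this, `rate` comes out close to the analytical value of $1/4$: Alice must pick the decoy's basis (probability $1/2$) and then project onto the state orthogonal to the decoy (probability $1/2$).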
Finally, she will know on average $N(\frac{1}{4})^k$ elements of the oblivious key $O\!K$ conclusively. $k$ should be chosen such that Alice knows on average only a small number $c$ of key bits, i.e. $k=\log_4(N/c)$. With a probability of $e^{-c}$, Alice is left with no known bit of $O\!K$ and the protocol must be restarted.

**Third phase:** After completion of the second phase, an oblivious key $O\!K$ is established such that on average $c$ bits are known to Alice, while Bob knows $O\!K$ completely. This key is used to bitwise encrypt the database $D\!B$, ensuring that Alice obtains little information besides the bit she is interested in. Supposing Alice knows the bit $O\!K_j$ of the key and is interested in $D\!B_b$, the $b^{th}$ bit of $D\!B$, she communicates the shift $s=j-b$ to Bob. As described in the introduction, Bob then encrypts the database bitwise as $D\!B'_a = \mathrm{X\!O\!R}(D\!B_a,O\!K_{a+s})$, $1\leq a\leq N$, announces the encrypted database $D\!B'$, and Alice recovers the bit that she wanted: $D\!B_b=\mathrm{X\!O\!R}(D\!B'_b,O\!K_j)$. If $\{j'\}$ are the indices of the $c-1$ other bits in $O\!K$ that she learns conclusively after phases I and II, she can also get to know some more bits $D\!B_{b'}=\mathrm{X\!O\!R}(D\!B'_{b'},O\!K_{j'})$ of the database. However, the $\{j'\}$ are randomly distributed in the $O\!K$ and will generally not allow Alice to retrieve a second bit of interest to her.

On the Security of Quantum Oblivious Key Distribution
-----------------------------------------------------

Jakobi et al. [@Jakobietal] provide interesting arguments for the security of their QOKD scheme, while studying the most obvious attacks directly. Like all quantum SPIR protocols, QOKD cannot offer perfect security for both sides but exploits a trade-off between database security and user privacy.
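The classical post-processing of phase II can be sketched directly. In the snippet below (our own illustration; 0-based Python indexing, with `None` marking raw bits that are inconclusive for Alice), any inconclusive bit poisons its whole group, so Alice ends up knowing about $N(\frac{1}{4})^k$ key elements:

```python
import random
from functools import reduce

def extract_key(raw_bits, k):
    """Split the kN raw bits into N disjoint groups of k and XOR each
    group. A raw bit Alice failed to identify is None, and any None in
    a group makes the corresponding key element inconclusive (None)."""
    assert len(raw_bits) % k == 0
    key = []
    for g in range(0, len(raw_bits), k):
        group = raw_bits[g:g + k]
        key.append(None if None in group
                   else reduce(lambda a, b: a ^ b, group))
    return key

rng = random.Random(1)
N, k = 1000, 3
raw_bob = [rng.randrange(2) for _ in range(N * k)]      # Bob knows every raw bit
# Alice identifies each raw bit conclusively with probability 1/4
raw_alice = [b if rng.random() < 0.25 else None for b in raw_bob]
ok_bob = extract_key(raw_bob, k)
ok_alice = extract_key(raw_alice, k)
known = sum(x is not None for x in ok_alice)            # about N*(1/4)**k ~ 16 here
```

Every key element Alice does know agrees with Bob's value, which is exactly the oblivious-key property the third phase relies on.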
**Database security:** At the outset, Jakobi et al. [@Jakobietal] point out that the above protocol actually provides Alice with information on inconclusive bits, too. In the example of phase I, measuring Bob’s sent state ${\left|\uparrow\right\rangle}$ in the up-down basis will never yield a conclusive result for Alice, as it is not possible to rule out either of the two announced states $\{{\left|\uparrow\right\rangle},{\left|\rightarrow\right\rangle}\}$. However, with this measurement, Alice will always find the state ${\left|\uparrow\right\rangle}$. Having chosen the same measurement basis as Bob used for state preparation, Alice will always find his sent state “inconclusively”. As this happens half of the time, and as the other inconclusive result (${\left|\rightarrow\right\rangle}$ in the example) is found with probability only $\frac{1}{4}$, Alice indeed has a guess on which state Bob had sent. By assuming her “inconclusive” outcome is actually the state he had prepared, she will be correct about the bit value with probability $\frac{2}{3}$. This partial information will be washed out during the extraction of the $O\!K$ in such a way that in a group where Alice measures all but $x$ bits conclusively, she will guess the key bit correctly with probability $\frac{3^x+1}{2\cdot3^x}$, $x\geq 1$. While analyzing the protocol’s security, one must assume in general that Alice has a quantum memory at her disposal and is able to postpone her measurements until after Bob’s SARG04 state pair announcement. She then knows that her measurement must distinguish, for instance, the states ${\left|\uparrow\right\rangle}$ and ${\left|\rightarrow\right\rangle}$ in order to decipher the sent bit. It can then be shown that Alice can perform an unambiguous state discrimination (USD) measurement which is successful with a probability of at most $p_{USD}=1-F({\left|\uparrow\right\rangle},{\left|\rightarrow\right\rangle})=1-1/\sqrt{2}\approx 0.29$, where $F$ is the fidelity.
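The guessing probability $\frac{3^x+1}{2\cdot 3^x}$ quoted above follows from a simple counting argument: a parity guess over $x$ independently guessed bits (each correct with probability $\frac{2}{3}$) is right exactly when an even number of the individual guesses are wrong. A small exact check (our own helper, using `fractions` to avoid rounding):

```python
from fractions import Fraction
from itertools import product

def parity_guess_prob(x):
    """P(Alice guesses the XOR of x inconclusive bits correctly), each
    bit guessed right independently with probability 2/3: sum over all
    error patterns with an even number of wrong guesses."""
    q = Fraction(1, 3)                       # per-bit error probability
    total = Fraction(0)
    for errs in product([0, 1], repeat=x):
        e = sum(errs)
        if e % 2 == 0:                       # even number of errors -> parity correct
            total += q**e * (1 - q)**(x - e)
    return total

# matches the closed form (3^x + 1) / (2 * 3^x) from the text
for x in range(1, 6):
    assert parity_guess_prob(x) == Fraction(3**x + 1, 2 * 3**x)
```

For $x=1$ this gives $2/3$, and the advantage decays towards $1/2$ as more bits of a group are inconclusive, which is the "washing out" described above.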
If Alice measures each received qubit individually, this attack is optimal and Alice will have on average $0.29N$ conclusive qubits instead of $0.25N$ before starting phase II of the protocol. However, this fact has only limited impact, as it will increase the number of key elements known to Alice by only a factor of $\left(\frac{p_{USD}}{0.25}\right)^k\approx(1.17)^k$, where typically $k<10$. Instead of performing individual measurements, Alice can also perform a joint measurement on $k$ qubits in order to directly measure their overall parity. This way, she directly measures the associated key bit without using individual bit values. Jakobi et al. [@Jakobietal] show that the success probabilities for USD as well as Helstrom maximal information gain measurements on $k$-qubit states decline rapidly with increasing $k$. Therefore, Alice’s knowledge of the final key is physically restricted by the impossibility to perfectly discriminate the non-orthogonal states used for encryption of the key elements.

**User privacy:** Jakobi et al. [@Jakobietal] argue that Bob is able to obtain limited information on the conclusiveness of Alice’s bits but will then lose information on which bit value she has actually measured. He will thus introduce errors. For example, sending ${\left|\nearrow\right\rangle}$ or ${\left|\swarrow\right\rangle}$ while announcing a pair $\{\uparrow,\rightarrow \}$ will yield a probability for Alice to measure conclusively of $p_{-}=\frac{1}{2}-\frac{1}{2\sqrt{2}}\approx 0.15$ or $p_{+}=\frac{1}{2}+\frac{1}{2\sqrt{2}}\approx 0.85$, respectively. This turns out to be optimal – Bob can bias the conclusiveness probability $p$ for Alice’s qubits within the limits $p_{-}\leq p \leq p_{+}$. At the same time, sending ${\left|\nearrow\right\rangle}$ or ${\left|\swarrow\right\rangle}$ will obviously not give Bob any information on the result of Alice’s measurement.
In fact, Bob cannot know the measurement basis Alice chose, which implies that it is impossible for him to have both increased information on her conclusiveness and full information on the bit value she measures (if conclusive). Every manipulation will hence create errors in the oblivious key. These characteristics are a consequence of the use of non-orthogonal states in SARG04 and the no-signaling principle. As a consequence, the protocol exploits fundamental physical principles to ensure database security and user privacy while allowing small additional information gains for both sides, thus preventing a conflict with Lo’s impossibility proof.

The Problem of Efficiency
-------------------------

The number $c$ of bits revealed to Alice at the end of SARG04 and $\mathrm{X\!O\!R}$ing of the $N$ groups of $k$ bits is on average $N(\frac{1}{4})^k$. Thus, unless $k$ increases with $N$, $c$ would also increase with $N$. In particular, $k$ needs to increase at least logarithmically with $N$ to ensure that $c$ remains constant, and the quantum communication complexity is therefore $O(N \log N)$. Given the size of modern databases (which run into petabytes), such an increase should be avoided as this would be far too costly for the communication of only one bit to Alice. We now show that it is possible to reduce the required quantum communication complexity, first to $O(N)$ and subsequently even below, while maintaining the protocol’s security.

The Road to Communication-efficiency
====================================

The Modified Protocol \[sec:sketch\]
------------------------------------

We propose modifying the second phase of the above protocol in such a way that every element of $R$ is replaced by the $\mathrm{X\!O\!R}$ of its current value with the values of the $k-1$ elements immediately following it. For the last $k-1$ elements, the indices wrap around cyclically, i.e. the missing successors are taken from the beginning of the string.
Then, the modified protocol is as follows:

- Let $R$ be the raw key after execution of SARG04 for $N$ bits. Then, while Bob knows the entire $R$, on average three quarters of the elements of $R$ are inconclusive at Alice’s end.

- Define the elements of $O\!K$ as follows: $O\!K_j=\mathrm{X\!O\!R}(q_j,\ldots,q_{j+k-1})$ for $j=1\ldots N$ (with $q_{N+x}:=q_x$ for $1\leq x\leq k-1$).

- If no bit survives at Alice’s end, repeat the above two steps.

- Continue with the steps of the third phase for database exchange and verification.

The modification that we have just described requires a quantum communication complexity of $N$ and is based on the following observations. Suppose we have a coin that shows heads with probability $p$ and tails with probability $1-p$ when tossed. It is a folklore theorem that when such a coin is tossed $N$ times, the length of the longest streak of heads is $\Theta(\log_{1/p} N)$ with high probability [@Cormen2001]. The analogue of a streak of heads in coin tosses is a streak of conclusively known bits at the end of the SARG04 protocol for $N$ qubits. Tails would therefore be analogous to the inconclusive bits. We will now argue that the expected number of times such a maximum-length streak occurs is $O(1)$. Let a bit be conclusively revealed to Alice with probability $p$. Then, the probability that a contiguous streak of $l$ conclusively revealed bits starts at a given position is $p^l$. Let $X_{il}$ be the indicator random variable that takes the value 1 if a streak of length $l$ starts at position $i$ in the key and 0 otherwise. Thus, $X_l=\sum_{i=1}^{N}X_{il}$ is the random variable that counts the number of streaks of length $l$ in the key. By linearity of expectation, the expected number of streaks of length $l$ is $\sum_{i=1}^{N}E[X_{il}]$. Given that $Pr[X_{il}=1]=p^l$, we have $E[X_l]= \sum_{i=1}^{N}p^l$. That is, $E[X_l]=Np^l$. For $l=\log_{1/p}N$, we have $E[X_l]=1$.
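The sliding-window extraction and the streak counting above can be sketched together. In the toy run below (our own illustration: 0-based indices, cyclic wraparound as in the bullet list, arbitrary seeds and sizes), a key element "survives" for Alice exactly when a window of $k$ consecutive raw bits is fully conclusive, so the expected number of survivors is $Np^k$:

```python
import random
from functools import reduce

def sliding_key(raw_bits, k):
    """Modified second phase: OK_j = XOR(q_j, ..., q_{j+k-1}) with cyclic
    indices, so N raw bits yield N key elements. Raw bits inconclusive
    for Alice are None; a key element is known to her only if its whole
    window of k raw bits is conclusive."""
    n = len(raw_bits)
    key = []
    for j in range(n):
        window = [raw_bits[(j + i) % n] for i in range(k)]
        key.append(None if None in window
                   else reduce(lambda a, b: a ^ b, window))
    return key

def avg_survivors(N, k, p, runs, seed=0):
    """Average number of key elements Alice knows, over several runs."""
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        flags = [rng.random() < p for _ in range(N)]        # conclusive?
        total += sum(all(flags[(j + i) % N] for i in range(k))
                     for j in range(N))
    return total / runs

# E[survivors] = N * p**k = 4096 * (1/4)**6 = 1 for this choice
avg = avg_survivors(N=4096, k=6, p=0.25, runs=50)
```

Because survivors come from streaks of length at least $k$, and streaks longer than $k$ contribute several overlapping windows, the run-to-run spread is larger than for a Poisson count with the same mean.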
Moreover, by Markov’s inequality, the probability that the number of such streaks exceeds some $t$ is at most $\frac{E[X_l]}{t}$. For instance, if we take $p=\frac{1}{4}$ and $l=k=\log_{4}(N/c)$, we find $E[X_k]=c$. That is, the above procedure will yield on average $c$ streaks of length $k$, where $k=\log_4(N/c)$ as in the original QOKD protocol. Finally, by Markov’s inequality, the probability that the number of such streaks exceeds $E[X_k]^m=c^m$, for any $m> 1$, is at most $\frac{1}{c^{m-1}}$. In other words, it is likely that (i) there is at least one streak of length $k$ in the key, (ii) there is only a small number $c$ of streaks of length $k$, and (iii) every other streak in $R$ is less than $k$ in length. We report in table \[tab:simulations\] simulations that justify the protocol. As pointed out by Jakobi et al. [@Jakobietal], even with a quantum memory, Alice can conclusively obtain only about a fraction $0.29$ of the bits after execution of SARG04 (the first step of the protocol). For this reason, and in continuation of our running example, we run the simulations with $p=\frac{1}{4}$ and $p=1-\frac{1}{\sqrt{2}}$ respectively, with the same $k$ for both.

  $N$                $10^4$   $10^5$   $10^6$    $10^7$    $10^8$
  ------------------ -------- -------- --------- --------- ---------
  $k$                $6$      $7$      $9$       $11$      $13$
  At least one       $81$     $98$     $95$      $86$      $79$
  Average $p=0.25$   $2.37$   $6.5$    $4.09$    $2.45$    $2.17$
  Average $p=0.29$   $6.46$   $18.9$   $15.45$   $13.42$   $15.74$

  : Simulation over 100 runs of the modified QOKD protocol with database size $N$. “Average” denotes the average number of survivors and “At least one” denotes the number of runs that have at least one survivor.[]{data-label="tab:simulations"}

On the Security of the modified protocol
----------------------------------------

The security considerations of [@Jakobietal] presented above largely apply to the modified protocol as well, since the changes only concern the post-processing and extraction of the oblivious key.
**Database security:** If Alice has a quantum memory at her disposal, she is able to postpone her measurement until after Bob’s state announcement during the SARG04 phase. As discussed before, when measuring each received qubit individually, this attack is optimal and directly covered by the considerations on the likelihood of conclusive streaks in section \[sec:sketch\] using $p_{USD}=0.29$ instead of $p=0.25$. The impact is, precisely as before, an increase in known key bits for Alice by a factor of $\left(\frac{p_{USD}}{0.25}\right)^k\approx(1.16)^k$. However, while individual key bits are as hard to extract as in the initial protocol, the modified version offers less protection with respect to the relative parities between key bits. The difference between two consecutive key bits $O\!K_j$ and $O\!K_{j+1}$ consists in the substitution of the qubit $q_j$ by $q_{j+k}$, i.e. $O\!K_j=\mathrm{X\!O\!R}(q_j,q_{j+1},...,q_{j+k-1})$ and $O\!K_{j+1}=\mathrm{X\!O\!R}(q_{j+1},q_{j+2},...,q_{j+k-1},q_{j+k})$, where the $q_i$ are the bit values corresponding to Bob’s sent states. The parity of $O\!K_j$ and $O\!K_{j+1}$ is revealed upon successful measurement of $q_j$ and $q_{j+k}$. We hence note that the reduction in communication complexity comes at the cost that parity information is easier to obtain. With respect to joint measurements, the new aspect of the modified protocol is that each qubit contributes to $k$ different key elements. Looking at a key bit $O\!K_j=\mathrm{X\!O\!R}(q_j,q_{j+1},..,q_{j+k-1})$, we can assume without loss of generality that Bob announces for all $k$ qubits a SARG04 pair of $\{\uparrow,\rightarrow \}$. The initial state before Alice’s measurement of $O\!K_j$ is then $\rho_k =\frac{1}{2^k} \bigotimes_{i=j}^{j+k-1} ({{\left|\uparrow\right\rangle}\left\langle \uparrow\right|}_i+{{\left|\rightarrow\right\rangle}\left\langle \rightarrow\right|}_i)$. Alice now performs a joint USD measurement on $\rho_k$ in order to retrieve $O\!K_j$ directly.
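The window definition of $O\!K$ and the parity relation $O\!K_j \oplus O\!K_{j+1} = q_j \oplus q_{j+k}$ just described can be sketched as follows (a minimal Python illustration with hypothetical raw-key values):

```python
from functools import reduce
from operator import xor
import random

def oblivious_key(q, k):
    """OK_j = XOR(q_j, ..., q_{j+k-1}), indices taken modulo N (wrap-around)."""
    N = len(q)
    return [reduce(xor, (q[(j + i) % N] for i in range(k))) for j in range(N)]

rng = random.Random(1)
N, k = 32, 5
q = [rng.randint(0, 1) for _ in range(N)]  # hypothetical raw key, for illustration
OK = oblivious_key(q, k)

# consecutive key bits differ only by the substitution of q_j with q_{j+k}:
for j in range(N):
    assert OK[j] ^ OK[(j + 1) % N] == q[j] ^ q[(j + k) % N]
```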
This USD measurement can either be conclusive or non-conclusive, with conclusive results being increasingly unlikely for higher $k$ [@Jakobietal]. In case of a conclusive outcome, Alice will know the overall parity $O\!K_j$ of $\rho_k$, and the state after the measurement is given by all possibilities with parity $O\!K_j$: $\rho_k^{O\!K_j} =\frac{1}{2^{k-1}} \sum_{\mathrm{X\!O\!R}(q_j,..,q_{j+k-1})=O\!K_j} {{\left|q_j,...,q_{j+k-1}\right\rangle}\left\langle q_j,...,q_{j+k-1}\right|}$. Assuming Bob announced a SARG04 state pair $\{\uparrow,\rightarrow \}$, ${\left|q_i\right\rangle}$ should be read as ${\left|q_i=0\right\rangle}={\left|\uparrow\right\rangle}$ and ${\left|q_i=1\right\rangle}={\left|\rightarrow\right\rangle}$; that is, the states are not orthogonal. Alice can now try to determine the parity of the next key element $O\!K_{j+1}=\mathrm{X\!O\!R}(q_{j+1},..,q_{j+k-1},q_{j+k})$. Since all but one of these qubits are part of $\rho_k^{O\!K_j}$, realizing the measurement of $O\!K_{j+1}$ implies tracing out the qubit $q_{j}$ from $\rho_k^{O\!K_j}$, which simply yields $Tr_j \rho_k^{O\!K_j}=\frac{1}{2^{k-1}}\bigotimes_{i=j+1}^{j+k-1} ({{\left|\uparrow\right\rangle}\left\langle \uparrow\right|}_i+{{\left|\rightarrow\right\rangle}\left\langle \rightarrow\right|}_i)$, i.e. the initial SARG04 state before measurement of a $(k-1)$-qubit system. All parity information is hence erased from this sub-state, and measuring $O\!K_{j+1}$ is exactly the same (difficult) task as measuring $O\!K_{j}$. Now we consider the case of Alice’s joint USD measurement being inconclusive. By definition, the parity of the $k$ qubits’ ensemble is then lost and can no longer be retrieved. That is, depending on the concrete design of the measurement, at least one of the $k$ qubits must have lost its bit value information and can no longer be used to define other key elements.
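The erasure of parity information under the partial trace can be verified numerically. The NumPy sketch below (our own check, for $k=3$ and assuming Bob announces the pair $\{\uparrow,\rightarrow\}$ for every qubit) constructs $\rho_k^{O\!K_j}$ and confirms that tracing out the first qubit returns the unmeasured $(k-1)$-qubit SARG04 state:

```python
import itertools
import numpy as np

up = np.array([1.0, 0.0])                  # |q=0> = |up>
right = np.array([1.0, 1.0]) / np.sqrt(2)  # |q=1> = |right>, non-orthogonal to |up>
ket = [up, right]

k, parity = 3, 0

# rho_k^{OK_j}: equal mixture of all product states with the measured parity
rho = np.zeros((2**k, 2**k))
for q in itertools.product([0, 1], repeat=k):
    if sum(q) % 2 == parity:
        psi = ket[q[0]]
        for qi in q[1:]:
            psi = np.kron(psi, ket[qi])
        rho += np.outer(psi, psi) / 2**(k - 1)

# partial trace over the first qubit
traced = np.einsum('iaib->ab', rho.reshape(2, 2**(k-1), 2, 2**(k-1)))

# expected: (1/2^{k-1}) * tensor product of (|up><up| + |right><right|)
single = np.outer(up, up) + np.outer(right, right)
expected = np.kron(single, single) / 2**(k - 1)
assert np.allclose(traced, expected)
```

Each remaining $(k-1)$-tuple appears exactly once in the parity-fixed mixture, which is why the trace reproduces the unmeasured state exactly.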
As each qubit contributes to $k$ different key elements, Alice’s failed joint USD measurement of a single key element in fact renders the decoding of $k$ key elements impossible. In this sense, our modification can indeed increase database security. **User privacy:** The fundamental arguments of [@Jakobietal] for user privacy were based on the impossibility of perfectly distinguishing non-orthogonal quantum states and on the impossibility of superluminal communication. These remain valid for the modified protocol as well. In particular, we remind the reader that Bob has no measurement that would allow him to learn both conclusiveness and Alice’s bit value information. Our first observation is that by manipulating the conclusiveness of a single qubit $q_i$, Bob will impact the conclusiveness probability of the $k$ key elements that use $q_i$. However, the same is true for the error he introduces, which also affects $k$ key elements and hence becomes easier to detect. A possible strategy for Bob to narrow down Alice’s conclusive bits is to increase the conclusiveness of a (contiguous) part of his sent qubits while reducing it for the rest. Remembering that $p_{+}p_{-}=\frac{1}{2}$, increasing the conclusiveness of $p_{-}N$ qubits to $p_{+}$ while reducing the conclusiveness of the remaining $p_{+}N$ qubits to $p_{-}$ will maintain Alice’s statistics of conclusive bits in $R$. Neglecting border effects, these two parts can be seen as independent strings to which the results of section \[sec:sketch\] can be applied. For the number of streaks of length $k$ one finds $E_{+}=p_{-}Np_{+}^k$ and $E_{-}=p_{+}Np_{-}^k$. It follows that $\frac{E_{+}}{E_{-}}=\left(\frac{p_+}{p_-}\right)^{k-1}\gg 1$. Therefore, Bob knows that the conclusive bit which Alice will use to encode the database element she is interested in will lie, with high probability $\frac{E_+}{E_+ + E_-}$, in the high-conclusiveness part of $O\!K$.
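The bias $E_+/E_- = (p_+/p_-)^{k-1}$ is immediate from the streak formula; the short sketch below evaluates it with purely illustrative values of $p_\pm$ (the concrete values are not fixed here, so these numbers are an assumption for demonstration only):

```python
def streak_attack_bias(N, p_plus, p_minus, k):
    """Expected numbers of length-k streaks in the high/low conclusiveness parts:
    p_minus*N qubits raised to conclusiveness p_plus, p_plus*N lowered to p_minus
    (border effects neglected, as in the text)."""
    E_plus = p_minus * N * p_plus ** k
    E_minus = p_plus * N * p_minus ** k
    return E_plus, E_minus

# illustrative values only; not prescribed by the protocol
N, k = 10**5, 7
E_plus, E_minus = streak_attack_bias(N, p_plus=0.4, p_minus=0.15, k=k)
print(E_plus / E_minus)             # equals (p_plus/p_minus)**(k-1) >> 1
print(E_plus / (E_plus + E_minus))  # probability the used bit lies in the biased part
```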
However, we note the following observations: (1) Bob’s knowledge remains considerably limited, as $p_{-}N\approx 0.15 N$ key elements are still equally likely, (2) Bob does not know a single bit of the final key correctly and will thus give completely random answers during the third phase, and (3) Alice will have significantly more streaks of length $k$ than expected, since $E_{+} \gg N\left(\frac{1}{4}\right)^{k}$, which should make her more than suspicious. Indeed, as the protocol depends on $p$ both linearly (number of conclusively measured qubits) and non-linearly (number of streaks of length $k$), any systematic alteration of the conclusiveness of qubits by Bob will easily show up in Alice’s statistics.

Generalization
--------------

In the present modification of the QOKD protocol, a bit of the final key is defined as the parity of a streak of $k$ qubits, $O\!K_j=\mathrm{X\!O\!R}(q_j,q_{j+1},...,q_{j+k-1})$. The reduction in communication complexity arises from the re-utilization of each qubit as a contributing element for $k$ bits of the final key, i.e. qubit $q_j$ is used in the definition of the key bits $O\!K_{j-k+1}$ to $O\!K_{j}$. This idea can be generalized in order to further reduce communication requirements: Let us assume phase I of the QOKD protocol is performed until $M<N$ qubits are distributed to Alice. In order to define the elements of the oblivious key, we now consider all possible combinations of $k$ out of these $M$ qubits. This allows one to extract a key of length $\binom{M}{k}$, as each combination constitutes an independent parity function of $k$ qubits in the sense introduced by our modified protocol. As such, by considering all these possible definitions of key bits, the minimal quantum communication complexity required for an $N$-bit database is given by $\binom{M_{min}}{k} \geq N$. Table \[tab:generalization\] provides some numerical examples of the impact of the discussed generalization.
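The condition $\binom{M_{min}}{k} \geq N$ can be evaluated directly; the following short Python routine (our own check) reproduces the $M_{min}$ rows of table \[tab:generalization\]:

```python
from math import comb

def M_min(N, k):
    """Smallest M such that C(M, k) >= N, i.e. the minimal quantum
    communication complexity for an N-bit database with security parameter k."""
    M = k
    while comb(M, k) < N:
        M += 1
    return M

print([M_min(10**5, k) for k in range(4, 9)])    # [41, 29, 23, 21, 20]
print([M_min(10**10, k) for k in range(8, 13)])  # [71, 58, 50, 45, 42]
```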
As can be seen, in this generalization there is a certain freedom in choosing $M$ and $k$. While high $k$ and small $M$ increase database security but also increase the abort probability, low $k$ and high $M$ achieve the opposite. Even for huge databases, the required quantum communication complexity can be reduced to under 100. However, the small number of exchanged qubits gives rise to generally poor statistics, making statistical analyses somewhat unreliable. Also, as this generalization presents an extreme case of re-using qubits for key definition and hence of reduction in quantum communication complexity, it does not come as a surprise that security is considerably less tight. Whereas in the initial and in the modified protocol a small constant number of database bits $c=Np^k$ was revealed to Alice on average, the generalized protocol provides Alice with significantly more bits, especially if the abort probability is to be low ($pM\geq k$). For instance, if Alice measures $k+x$ of the $M$ qubits conclusively, she is able to calculate $\binom{k+x}{k}$ key elements. Additionally, even when Alice measures only exactly $k$ qubits conclusively and can hence only calculate one single key bit, she is still able to calculate parities between many key elements. As such, the generalized protocol provides only little database security. Fortunately, the protocol is sufficiently cheap to be re-performed a couple of times, which allows database security to be completely re-established, as we will see in the next section.

  $N\geq10^5$                                                      
  ----------- --------- ---------- ---------- ---------- ----------
  $M_{min}$   $41$      $29$       $23$       $21$       $20$
  $k$         $4$       $5$        $6$        $7$        $8$
  Average     $397$     $131$      $46$       $28$       $19$
  No bit      $3.8\%$   $11.5\%$   $46.8\%$   $74.4\%$   $89.8\%$

  : Calculated examples for generalized QOKD of databases of size $10^5$ and $10^{10}$ for different combinations of quantum communication complexity $M$ and security parameter $k$. “Average" denotes the average number of survivors conditioned on cases with at least one survivor and “No bit" the probability for no survivors. \[tab:generalization\]

  $N\geq10^{10}$                                                    
  ----------- ---------- ---------- ---------- ---------- ----------
  $M_{min}$   $71$       $58$       $50$       $45$       $42$
  $k$         $8$        $9$        $10$       $11$       $12$
  Average     $162531$   $41833$    $11714$    $4094$     $1876$
  No bit      $1.2\%$    $2.9\%$    $16.4\%$   $40.9\%$   $64.9\%$

  : Calculated examples for generalized QOKD of databases of size $10^5$ and $10^{10}$ for different combinations of quantum communication complexity $M$ and security parameter $k$. “Average" denotes the average number of survivors conditioned on cases with at least one survivor and “No bit" the probability for no survivors. \[tab:generalization\]

Enhancing database security
---------------------------

It is possible to significantly enhance database security by re-performing either of the presented variants of QOKD $r$ times, as follows: in each of the $r$ rounds an oblivious key is generated, $O\!K^i, i=1\ldots r$. To obtain the final key $O\!K^\text{fin}$, Alice is asked to combine these $r$ keys bitwise with relative shifts $s_i$ she can freely choose: $O\!K^\text{fin}_j=\displaystyle\bigoplus\limits_{m=1}^r O\!K^m_{j+s_m}$. This final key is then used to encrypt the database as described in phase III of the original protocol. This procedure serves the following purpose. Using QOKD to generate the $r$ oblivious keys ensures that Alice only has partial knowledge of each of them. Therefore, combining them will further reduce her knowledge of the key, while the free choice of the offsets ensures that Alice always retains at least one element of the sum string. For instance, let us look at the first case of table \[tab:generalization\] with $r=2$: Alice generates two keys of $10^5$ bits, of which she knows about $400$ each conclusively. It is important to remember that these bits are randomly distributed over the key strings.
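This dilution step is easy to simulate. In the sketch below (our own illustration; the positions of Alice’s conclusively known bits are drawn uniformly at random as a stand-in for a real run), Alice picks the relative shift that maximizes the number of conclusive bits surviving in the sum string:

```python
from collections import Counter
import random

def best_shift_overlap(known_a, known_b, N):
    """Conclusive bits Alice retains in OK^fin = OK^1 xor (OK^2 shifted by s),
    maximized over the shift s she is free to choose.  A final bit stays
    conclusive only where known positions of both keys coincide."""
    counts = Counter((b - a) % N for a in known_a for b in known_b)
    return max(counts.values())

rng = random.Random(7)
N, c = 10**5, 400
known_a = rng.sample(range(N), c)  # hypothetical conclusive positions, key 1
known_b = rng.sample(range(N), c)  # hypothetical conclusive positions, key 2

print(c * c / N)                                # 1.6: expected for an arbitrary shift
print(best_shift_overlap(known_a, known_b, N))  # substantially more for the best shift
```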
As such, just combining these strings without choosing the optimal offset will yield on average $1.6$ remaining conclusive bits. Numerical simulations show that by selecting the optimal offset, Alice will be able to retain on average $9.7$ known bits of the sum string. Obviously, $r=3$ will further reduce Alice’s knowledge. In principle, choosing a large $r$ will almost guarantee that Alice retains one and only one bit in the end [@Lo-argument]. Note that this procedure will also erase parity information that Alice can gather in the protocols proposed in this paper. The presented “dilution process” can obviously ensure adequate database security and hence allows one to take full advantage of the achieved reduction in quantum communication complexity.

Conclusion
==========

We showed that the protocol proposed by Jakobi et al. can be modified to reduce the required quantum communication complexity without compromising its security and while maintaining its strengths of loss-resistance, practical feasibility, and integrability with current QKD devices. As a consequence, it is now possible to bring very large databases into the scope of Quantum Oblivious Key Distribution. Moreover, the modified protocol is sufficiently cheap in terms of quantum communication complexity to allow the construction of approximate versions of a whole range of quantum cryptographic algorithms based on SPIR. As such, Quantum Oblivious Key Distribution can significantly add to what can practically be realized today in the realm of quantum cryptography and, together with QKD, it might well provide the basis for all practical future applications of quantum cryptography.

[100]{} Joe Kilian. Founding cryptography on oblivious transfer. In [*Proceedings of the twentieth annual ACM symposium on Theory of computing*]{}, STOC ’88, pages 20–31, New York, NY, USA, 1988. ACM. Michael O. Rabin. How to exchange secrets by oblivious transfer. , 1981. Shimon Even, Oded Goldreich, and Abraham Lempel.
A randomized protocol for signing contracts. [*Communications of the ACM*]{}, 28:637–647, June 1985. Claude Crépeau. Equivalence between two flavours of oblivious transfers. In [*A Conference on the Theory and Applications of Cryptographic Techniques on Advances in Cryptology*]{}, CRYPTO ’87, pages 350–354, London, UK, 1988. Springer-Verlag. D. Beaver, [*Lecture Notes in Computer Science, Vol. 963*]{} (Springer, London, 1995), p. 97. Benny Chor, Eyal Kushilevitz, Oded Goldreich, and Madhu Sudan. Private information retrieval. [*Journal of the ACM*]{}, 45(6):965–981, 1998. Charles H. Bennett, Gilles Brassard, Claude Crépeau, and Marie-Hélène Skubiszewska. Practical quantum oblivious transfer. In [*Proceedings of the 11th Annual International Cryptology Conference on Advances in Cryptology*]{}, CRYPTO ’91, pages 351–366, London, UK, 1992. Springer-Verlag. Gilles Brassard and Claude Crépeau. Quantum bit commitment and coin tossing protocols. In [*Proceedings of the 10th Annual International Cryptology Conference on Advances in Cryptology*]{}, CRYPTO ’90, pages 49–61, London, UK, 1991. Springer-Verlag. C. Crépeau and J. Kilian. Achieving oblivious transfer using weakened security assumptions. In [*Proceedings of the 29th Annual Symposium on Foundations of Computer Science*]{}, pages 42–52, Washington, DC, USA, 1988. IEEE Computer Society. Hoi-Kwong Lo. Insecurity of quantum secure computations. [*Physical Review A*]{}, 56(2):1154–1162, Aug 1997. Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone. Quantum private queries: security analysis. [*IEEE Transactions on Information Theory*]{}, 56:3465–3477, July 2010. Markus Jakobi, Christoph Simon, Nicolas Gisin, Jean-Daniel Bancal, Cyril Branciard, Nino Walenta, and Hugo Zbinden. Practical private database queries based on a quantum-key-distribution protocol. [*Physical Review A*]{}, 83(2):022301, Feb 2011. V. Scarani, A. Acin, G. Ribordy, and N. Gisin. Quantum cryptography protocols robust against photon number splitting attacks for weak laser pulse implementations. [*Physical Review Letters*]{}, 92(5):057901, 2004.
Transmission and detection losses can easily compromise a protocol’s security, see for instance [@Giovannetti]. The receiver, in our case Alice, could perform her measurement and, if the measurement outcome is inconclusive, claim that the qubit was lost. In particular, if the receiver merely pretends that losses occurred, she can to a large extent pick convenient measurement results and thus increase her knowledge. Fortunately, the timing of the protocol can easily prevent this: First, Bob sends a qubit to Alice. She then performs her measurement and acknowledges it if her detection was successful; otherwise a new qubit is sent. Only after this confirmation does Bob announce the SARG04 state pair. Since without this information she is unable to evaluate whether her measurement was conclusive, the impact of losses on the protocol’s security is eliminated. Please note that a quantum memory will not change this situation. T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. [*Introduction to Algorithms*]{}. The MIT Press, New York, 2001. Please note that no direct conflict with Lo’s impossibility proof arises. In his proof, Lo assumes (1) perfect concealment of Alice’s choice $b$ against Bob, and argues that Alice can then read the entire database (2) if the states representing the different database elements are eigenstates of her measurement operator. Neither assumption is fulfilled in this limit.
--- author: - | C.P. Burgess, P. Grenier and D. Hoover\ Physics Department, McGill University, 3600 University Street,\ Montréal, Québec, Canada, H3A 2T8. title: Quintessentially Flat Scalar Potentials --- Introduction ============ Perhaps the most interesting consequence of the recent spate of cosmological measurements is the accumulation of evidence suggesting the Universe has passed through no less than [*two*]{} independent periods of acceleration during that part of its history to which we have observational access. The first of these periods is the early inflationary period [@Inflation], whose simplest predictions for the temperature fluctuations of the cosmic microwave background (CMB) radiation appear to describe very successfully what is seen [@WMAPInflation]. The big surprise of the past decade is the discovery that the present epoch also appears to be a period of incipient inflation, as indicated by both CMB measurements [@WMAPparams] and supernova surveys [@SN]. Both of these epochs can be described by the slow roll of a scalar field since they are both defined by the condition that the universal expansion accelerates, and this in turn requires the dominant contribution to the energy density, $\rho$, to have sufficiently negative pressure: $p < - \rho/3$.[^1] If the dominant energy is due to a rolling homogeneous scalar field, then its pressure-to-energy ratio is related to the fraction, $r = K/V$, of the scalar field’s kinetic and potential energies according to $p/\rho = (r-1)/(r+1)$. This shows that acceleration is possible only if the scalar field is presently potential-energy dominated: $K \lsim V$. For inflation the corresponding energy density is typically chosen to be $\rho \approx V \lsim (10^{15} \; \hbox{GeV})^4$, while applications to the present epoch — which we generically refer to as ‘quintessence’ models [@Quintessence] — instead require $\rho \approx V \sim (10^{-3} \; \hbox{eV})^4$. 
Both of these applications of slow-roll scalar fields run into difficulties because of the flatness of the potential which they require. The problem is the notorious difficulty in obtaining very flat potentials from realistic theories of microscopic physics [@Naturalness]. Our purpose in this paper is to propose a new mechanism for obtaining extremely flat potentials from within a brane-world picture [@braneworld; @BIQ]. In the model we propose, the scalar of interest is a pseudo-Goldstone boson [@pGB; @PhysRep] for an approximate symmetry (more about which below) which is explicitly broken, but whose breaking requires the presence of more than one brane as well as of a massive field living in the bulk between the branes. This combination ensures that the low-energy effective potential is suppressed by the amplitude, ${\cal A}$, for the massive particle to propagate from one brane to another, which is exponentially small in the inter-brane separation, $a$, in units of the massive-particle Compton wavelength, $M^{-1}$: ${\cal A} \sim \exp(- M a)$. The paper is organized in the following way. In the next section we outline how flat the scalar potentials must be for cosmological applications to inflation and quintessence, and summarize the naturalness problems which one encounters in trying to obtain potentials this flat. This section also very briefly reviews what it means for the scalar to be a pseudo-Goldstone boson (pGB), and why this can help with the naturalness issues. Since this section is not particularly new (see refs. [@FHSW; @CormierHolman; @pGBcosmology] for applications of pseudo-Goldstone bosons to cosmology), the professionals will want to skip directly to the next section, $\S 3$, where we describe our brane-world model, and show why it can give such flat potentials.
At low energies the scalar our model produces is a pseudo-Goldstone boson, and so motivated by this in $\S 4$ we build an explicit quintessence model in order to show a detailed example of a successful cosmology using pseudo-Goldstone bosons, updating the earlier models of refs. [@FHSW; @CormierHolman]. In this section we also re-examine the viability of these models in the light of recent WMAP measurements, and identify potentially-observable differences between this kind of cosmology and other proposals. Finally, our conclusions are summarized in section $\S 5$.

Slow-Rolling Scalars and Cosmology
==================================

In this section we have two goals. First, we review the general constraints which cosmological applications require of slowly-rolling scalar fields, to see what kinds of hierarchies of scale a successful cosmology requires of an underlying theory. Then we examine pseudo-Goldstone bosons in particular, and ask how the corresponding hierarchies are related to the energy scales of the various types of symmetry breaking. In this second part we identify two cases, which differ in whether the largest symmetry-breaking effects arise in the scalar kinetic energies or in the scalar potential. Since the arguments in this section are relatively standard, experts should feel free to skip directly to section $\S 3$.

Constraints Required by Cosmologically Slow Rolls
-------------------------------------------------

A problem with applications of rolling scalar fields to both inflation and to quintessence cosmologies arises because they each require an inordinately flat potential. We here summarize these constraints subject to very mild assumptions. In general, a scalar roll is only slow enough to neglect its kinetic energy if the slow-roll parameters [@LL] $\epsilon = {{\textstyle\frac{1}{2}}} \, (M_p V'/V)^2$ and $\eta = M_p^2 V''/V$ are much smaller than unity.
(Here $M_p \approx 10^{18}$ GeV is the 4D Planck mass and the prime denotes differentiation with respect to the canonically-normalized scalar fields.) To see what this requires, suppose the scalar action has the generic form $$\label{pGBgenform}
-{\cal L} = \frac{f^2}{2}\, (\partial \varphi)^2 + \mu^4\, v(\varphi) \,,$$ where $0 \le \varphi(x) \le 2 \pi$ is a dimensionless field and $f$ and $\mu$ are constants having dimensions of mass (in units for which $\hbar = c = 1$). If we suppose that $v(\varphi)$ and all of its derivatives are $O(1)$, then $\epsilon \sim \eta \sim M_p^2/f^2$, which shows that we must require $f \gg M_p$, in which case the scalar mass, $m \sim \mu^2/f$, must be much smaller than the Hubble scale, $H = \Bigl( {\rho/3M_p^2} \Bigr)^{1/2} \sim \mu^2/M_p$. In order for such a rolling scalar in an early inflationary period to properly describe the amplitude of CMB temperature fluctuations, the combination $\delta^2 = (1/150\pi^2) (V/M_p^4 \epsilon)$ must satisfy $\delta \approx 2 \times 10^{-5}$. Using the conditions $\epsilon \sim M_p^2/f^2$ and $H \sim \mu^2/M_p$ just described, this implies $\mu/M_p \sim 0.03 \, \sqrt{M_p/f}$. Together with the observational constraint [@WMAPInflation] $\epsilon < 0.03$, we find the requirements $M_p /f \lsim 0.2$ and $\mu/M_p \lsim 0.006$. On the other hand, the situation is even worse if the scalar is to describe today’s Universal acceleration, since such a scalar must satisfy $\mu \sim 10^{-3}$ eV. This, with the slow-roll condition $f \gsim M_p$, leads to $\mu/f \lsim \mu/M_p \sim 10^{-30}$ and the incredibly small scalar mass $m \sim \mu^2/f \lsim \mu^2/M_p \sim 10^{-33}$ eV.

Naturalness Issues
------------------

It is notoriously difficult to get very flat scalar potentials from realistic microscopic physics without fine-tuning, and this difficulty comes in two parts.
First one must ask: [*How do the small ratios $\mu/f$ and $\mu/M_p$ arise within the microscopic theory as a combination of microscopic parameters?*]{} Given that such a small ratio is predicted by the microscopic physics, one must then ask: [*How does it remain small as one integrates out all the physics between these microscopic scales and the cosmological scales at which it is measured?*]{} Of these, the second problem is the more serious, the more so the lower $\mu$ is required to be. It is a problem because a particle of mass $M$, which interacts with the scalar with order-unity couplings, typically shifts $\mu$ by an amount $\delta \mu \propto M$ when it is integrated out, which can be unacceptably large if $M \gsim \mu$. There are two symmetries which are known to be able to help with this problem, in that they can ensure that particles of mass $M$ do not produce corrections as large as $\delta \mu \sim M$. The two symmetries are: (1) supersymmetry, for which bose-fermi cancellations ensure $\delta \mu \lsim M_s$, where $M_s$ is the typical mass splitting within a supermultiplet; (2) Goldstone symmetries, for which the scalar transforms inhomogeneously according to $\delta \varphi = \epsilon [1 + F(\varphi)]$, where $\epsilon$ is the transformation parameter and the potentially nonlinear function $F$ satisfies $F(0) = 0$. This second type of symmetry arises only if the scalar in question is a Goldstone boson for a spontaneously broken global symmetry, and it ensures that $v(\varphi)$ is completely independent of $\varphi$. $v(\varphi)$ can be nontrivial if the global symmetry is only approximate, in which case corrections to $\mu$ are systematically suppressed by whatever small symmetry-breaking parameter makes the symmetry a good approximation. In this case the scalar $\varphi$ is known as a pseudo-Goldstone boson [@pGB; @PhysRep].
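Before turning to pseudo-Goldstone bosons, we make explicit the origin of the numerical coefficient in the inflationary estimate quoted above: inserting $V = \mu^4$ and $\epsilon \sim M_p^2/f^2$ into the definition of $\delta^2$ gives $$\delta^2 \;=\; \frac{1}{150\pi^2}\,\frac{V}{M_p^4\,\epsilon} \;\sim\; \frac{\mu^4 f^2}{150\pi^2 M_p^6}
\qquad\Longrightarrow\qquad
\frac{\mu}{M_p} \;\sim\; \left(150\pi^2 \delta^2\right)^{1/4}\sqrt{\frac{M_p}{f}} \;\approx\; 0.03\,\sqrt{\frac{M_p}{f}}\,,$$ where the last step uses $\delta \approx 2 \times 10^{-5}$, for which $(150\pi^2\delta^2)^{1/4} \approx 0.028$.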
Pseudo-Goldstone Bosons
-----------------------

The scales $\mu$ and $f$ are related to the scales of symmetry breaking in the underlying microscopic theory. Once set there, they naturally remain small as successive scales are integrated out to obtain an effective theory at very low energies. These low-energy corrections remain small precisely because the scalar $\varphi$ is a pseudo-Goldstone boson, and so corrections to $\mu$ are protected by the nonlinearly-realized $G$ symmetry. A lower limit to the amount of this suppression can be inferred completely within the low-energy theory by power-counting within it the size of loop-generated symmetry-breaking corrections [@PhysRep]. To this end consider a system of $N$ scalars, $\varphi^a$, where we choose to rescale the spacetime metric to go to the Einstein frame, for which the graviton kinetic term takes the canonical Einstein-Hilbert form. The scalar part of the lagrangian density which involves the fewest derivatives may always be written $$\label{scalaraction}
{\cal L}_s = - V(\varphi) - \frac{1}{2}\, G_{ab}(\varphi)\, g^{\mu\nu}\, \partial_\mu \varphi^a\, \partial_\nu \varphi^b \,.$$ The symmetric tensor $G_{ab}$ may be interpreted as a metric on the scalar-field ‘target’ space. For a single scalar field $G_{ab}$ may be set to unity by an appropriate field redefinition, and so in this case it is the scalar potential, $V(\varphi)$, which determines all of the physics. A similar choice, $G_{ab} = \delta_{ab}$, is [*not*]{} possible if $N \ge 2$, however, unless the target space happens to be flat. In order to avoid missing physics associated with $G_{ab}$ we consider models below involving two or more pseudo-Goldstone scalars. When the scalars $\varphi^a$ are Goldstone bosons the functions $V$ and $G_{ab}$ are strongly restricted by symmetry conditions. These imply $V$ must be a constant, and for the symmetry-breaking pattern $G \to H$, $G_{ab}$ must be a metric on the coset space $G/H$ whose isometries include the symmetry group $G$.
This usually determines $G_{ab}$ up to a few constants, and often completely determines it up to overall normalization and field redefinitions [@CCWZ; @PhysRep]. For example, for the symmetry-breaking pattern $SO(3) \to SO(2)$ there are then two Goldstone bosons, $(\varphi^1, \varphi^2) = (\theta, \phi)$, which parameterize the coset space $SO(3)/SO(2)$, which in this case is the two-sphere, $S_2$. Here we use standard spherical-polar coordinates, $0 \le \theta < \pi$ and $0 \le \phi \le 2 \pi$, on $S_2$. The $SO(3)$ transformations amount to the rotations of this sphere about its centre, if $S_2$ is embedded into Euclidean three-dimensional space. The condition that the action be invariant under $SO(3)$ transformations then requires $V(\theta,\phi)$ to be constant, and $G_{ab}(\theta,\phi)$ to be the standard rotationally-invariant – ‘round’ – metric on the 2-sphere: $$\label{E:Round}
G_{ab}\, \partial_\mu \varphi^a\, \partial^\mu \varphi^b = f^2 \left( \partial_\mu \theta \, \partial^\mu \theta + \sin^2\theta \; \partial_\mu \phi \, \partial^\mu \phi \right) \,.$$ Here $f$ is a dimensionful constant whose size indicates the scale of spontaneous symmetry breaking. Our interest here is in pseudo-Goldstone bosons, for which the global symmetry $G$ is only approximate in the sense that the effective energy scale, $\mu$, associated with the explicit breaking of the symmetry is much smaller than the scale, $f$, of its spontaneous breaking. (We have already seen that this effective scale need not be simply related to the scales of the microscopic theory.) In this case $V$ need no longer be independent of $\varphi^a$ and $G_{ab}$ need not be a $G$-invariant metric, although deviations from these limits should be small if the scale, $\mu$, of explicit breaking is much smaller than the scale, $f$, of spontaneous symmetry breaking. In the limit $f \gg \mu$, on dimensional grounds we expect the generic corrections to $V$ to be of order $\mu^4$ and corrections to $G_{ab}$ to be of order $\mu^2/f^2$.
If the asymmetric terms are initially this size, they automatically remain so after being renormalized by quantum corrections within the low-energy theory. As is briefly summarized in the appendix, these orders of magnitude can differ in supersymmetric theories, if $\mu$ is larger than the supersymmetry breaking scale. For instance, for the $SO(3)/SO(2)$ example, suppose the $SO(3)$ symmetry is explicitly broken but the $SO(2)$ symmetry associated with shifting $\phi$ is not. Then examples of the kinds of new terms one might expect at low energies are $$\label{SO3breakingterms}
\begin{aligned}
V(\theta) &= a + \sum_{n \ge 1} b_n \cos(n\theta) \\
G_{ab}\, d\varphi^a\, d\varphi^b &= f^2 \left[ d\theta^2 + G(\theta)\, d\phi^2 \right] , \\
G(\theta) &= \sin^2\theta + \sum_{n \ge 2} c_n \sin^2(n\theta) \,,
\end{aligned}$$ where the sums run over integer values. The above dimension counting then argues that while $a$ need not be suppressed by $\mu$, we expect $b_n \lsim \mu^4$ and $c_n \lsim \mu^2/f^2$. In applications it is usual to neglect the corrections to $G_{ab}$ and keep only the scalar potential which is induced by explicit symmetry breaking. This is usually justified since the symmetry-breaking potential always dominates at low energies, as there is no symmetry-invariant potential with which to compete. For instance, in cosmological applications Hubble damping inevitably slows the scalar motion, making the potential a more and more important influence on the scalar roll. The same is not true for the corrections to the target-space metric, $G_{ab}$, since these are always at most of order $\mu^2/f^2$ relative to the $G$-invariant metric of the symmetry limit. The power-counting statements made above assume that quantum corrections respect the theory’s underlying $G$ invariance. Unfortunately, there is an important kind of quantum correction which may not do so for any global symmetry, and this represents a potential obstacle to using a pseudo-Goldstone symmetry to keep the corrections to $\mu$ small.
The problem comes from gravitational quantum corrections, which are believed not to respect global symmetries, for instance due to the virtual appearance and disappearance of black holes (which the ‘no-hair’ theorems ensure cannot carry global-symmetry charges). Estimates [@GravSym] of the amount of symmetry breaking which this induces in a low-energy 4D effective theory predict that the symmetry-breaking interactions are suppressed by powers of $f^2/M_p^2$. This can represent an important renormalization to $\mu$ precisely for the case of cosmologically slowly-rolling fields, for which we’ve seen $f \gsim M_p$. One must keep in mind that these estimates of non-perturbative quantum-gravity effects carry the caveat that they assume many things about the properties of quantum gravity at high energies, and so may not properly capture how things work once this high-energy physics is better understood. In particular, as pointed out in ref. [@GravSym], these symmetry-breaking estimates can change dramatically if the effective theory becomes higher dimensional at scales $M_c \ll M_p$, as is the case in the brane-world models we describe below. Of course, how small a quantum gravity correction may be tolerated depends very much on how flat a scalar potential is desired, making all of these issues much more pressing for present-epoch quintessence models than they are for inflation. In what follows we proceed under the assumption that this, or a similar mechanism, ensures that high-energy quantum-gravity effects do not destroy the effectiveness of the pseudo-Goldstone boson mechanism in protecting the flatness of the scalar potential to the accuracy required for cosmology. Flat Scalar Potentials from the Brane World =========================================== Although pseudo-Goldstone bosons can have naturally flat potentials if $\mu \ll f$, they do not in themselves explain why $\mu$ should be so small. An understanding of this must come from a more microscopic theory. 
In this section we describe a brane-world model within which such small scales can arise for pseudo-Goldstone boson potentials. Brane models are natural to examine from this point of view, because in many situations they have given new insights on how small quantities can arise in low-energy physics [@natbrane]. The idea behind our construction is to make a model having a $G = U_A(1) \times U_B(1)$ global symmetry which is spontaneously broken by the [*vev*]{}, $v$, of a bulk scalar field $\Phi$. The symmetry is also broken explicitly by the couplings of a second bulk scalar field $\Psi$ (having a large mass $M$) to various brane fields $\chi_i$. In particular, the model is designed so that there is more than one brane (say two of them), with only the $U_A(1)$ symmetry broken by the $\Psi$ couplings to the first brane, and only the $U_B(1)$ symmetry broken by the $\Psi$ couplings to the second brane, which is displaced a distance $a$ away from the first. Once this is arranged, functional integration over $\Psi$ and the brane modes generates a nontrivial scalar potential for the would-be Goldstone mode in $\Phi$, which is suppressed by the amplitude ${\cal A} \propto \exp(- M a)$ for the field $\Psi$ to propagate from one brane to the other. The logic of this construction is reminiscent of brane-based supersymmetry breaking mechanisms, for which each brane preserves some supersymmetries but all supersymmetries are broken by at least one brane [@braneSSB]. The Higher-Dimensional Toy Model -------------------------------- Consider, then, a model containing the complex scalar bulk fields $\Phi$ and $\Psi$, and complex brane fields $\chi_i, i = 1,2$, whose action is $S = S_B + S_{b1} + S_{b2}$, with $(4+n)$-dimensional bulk action $$\begin{aligned} \label{Baction} S_B &=& -\int d^4x \, d^n y \left[ \partial_M \Phi^* \, \partial^M \Phi + \partial_M \Psi^* \, \partial^M \Psi + V(\Phi,\Psi) \right] \nonumber\\ V(\Phi,\Psi) &=& M^2 \, \Psi^* \Psi + \lambda \left( \Phi^* \Phi - v^2 \right)^2 , \end{aligned}$$ and 4-dimensional brane actions $$\begin{aligned} \label{baction} S_{b1} &=& - \int_{y=y_1} d^4x \left\{ \partial_\mu \chi_1^* \, \partial^\mu \chi_1 + m_1^2 \, \chi_1^* \chi_1 + \frac12 \left[ \left( g_1 \, \Phi + h_1 \, \Psi \right) \chi_1^2 + {\rm c.c.} \right] \right\} \nonumber\\ S_{b2} &=& - \int_{y=y_2} d^4x \left\{ \partial_\mu \chi_2^* \, \partial^\mu \chi_2 + m_2^2 \, \chi_2^* \chi_2 + \frac12 \left[ \left( g_2 \, \Phi + h_2 \, \Psi^* \right) \chi_2^2 + {\rm c.c.} \right] \right\} .\end{aligned}$$
We assume all of the couplings, $\lambda$, $g_i$ and $h_i$, to be real and nonzero, but sufficiently small to permit a perturbative analysis of the model. We imagine the branes to be parallel 3-branes which are situated at the points $y = y_i$ within the $n$ compact transverse dimensions. We take the size of all of these dimensions to be of the same order, $r$, making the compactification scale (Kaluza-Klein masses) of order $M_c \sim 1/r$. By construction, the model enjoys a global $G = U(1) \times \tilde{U}(1)$ symmetry under which each of the bulk scalars rotates independently: $\Phi \to e^{i \omega} \Phi$ and $\Psi \to e^{i \tilde\omega} \Psi$. Other perturbative couplings could also be permitted in the bulk scalar potential without substantially changing our conclusions, provided they also respect this symmetry. The brane couplings, on the other hand, each explicitly break this symmetry down to a single $U(1)$. The bulk-brane couplings at brane 1 preserve the subgroup $U_A(1)$ under which $\Phi \to e^{i\omega_A} \Phi$, $\Psi \to e^{i\omega_A} \Psi$ and $\chi_1 \to e^{-i\omega_A/2} \chi_1$. Similarly, the couplings on brane 2 only preserve the subgroup $U_B(1)$ under which $\Phi \to e^{i\omega_B} \Phi$, $\Psi \to e^{-i\omega_B} \Psi$ and $\chi_2 \to e^{-i\omega_B/2} \chi_2$. Taken together, the two branes completely break the symmetry group $U(1) \times \tilde{U}(1)$. Notice also that it is only the field $\Psi$ which transforms differently under $U_A(1)$ and $U_B(1)$, and so both the brane and bulk actions would preserve one of the $U(1)$’s if the field $\Psi$ were everywhere set to zero. The spectrum of this model is easy to understand in the limit $h_i \to 0$, in which case the brane couplings also preserve the $G$ symmetry. Then the nonzero expectation value $\langle \Phi \rangle = v$ spontaneously breaks the $U(1)$ symmetry, while leaving the $\tilde{U}(1)$ unbroken.
In the absence of the branes, therefore, the bulk theory would consist of a mass-$M$ complex field $\Psi$ plus the two real mass eigenstates coming from $\Phi$. One of the $\Phi$ mass eigenstates would in this case be a massless Goldstone boson, $\varphi = \arg\Phi$, and the other would have a mass of order $\sqrt\lambda \; v$. We now compute how nonzero $h_i$ couplings on the brane change these conclusions. The Effective 4D Theory ----------------------- Since our interest is in the would-be Goldstone boson, we focus on the low-energy theory below the compactification scale, by integrating out all of the massive fields on the branes and in the bulk. In particular, we concentrate on the effective scalar potential for the would-be Goldstone mode, $\varphi$, in this low-energy theory. We therefore look for those terms in the scalar potential which involve the phase of $\Phi$, neglecting also the Kaluza-Klein tower of compactification modes for this field. We start by integrating out the brane scalars, with the bulk fields held fixed. The leading contribution arises at one loop, leading to a scalar-potential contribution to the effective $(4+n)$-dimensional bulk theory of the form $$\label{bindpot} \Delta V_B = \sum_{i=1}^2 V_{{\rm eff},i} \; \delta^n(y - y_i) ,$$ where $$V_{{\rm eff},i} = \frac{1}{64 \pi^2} \, {\rm Tr} \left[ \left( \mu^2_i \right)^2 \, \ln \left( \frac{\mu^2_i}{\mu^2} \right) \right] ,$$ and the trace is over the two components of the $\chi_i$ mass matrix $$\mu^2_i = \left( \begin{array}{cc} m_i^2 & \xi_i \\ \xi_i^* & m_i^2 \end{array} \right) .$$ Here $\mu$ is an arbitrary renormalization scale, $\xi_1 = g_1 \, \Phi + h_1 \, \Psi$ and $\xi_2 = g_2 \, \Phi + h_2 \, \Psi^*$. Evaluating the trace we find $$\label{DVexp} V_{{\rm eff},i} = \frac{1}{64\pi^2} \sum_{\pm} \left( m_i^2 \pm |\xi_i| \right)^2 \ln \left( \frac{m_i^2 \pm |\xi_i|}{\mu^2} \right) \approx {\rm const} + \frac{|\xi_i|^2}{32 \pi^2} \left[ \ln \left( \frac{m_i^2}{\mu^2} \right) + \frac32 \right] ,$$ where we assume $m_i$ to be large enough to ensure that $m_i^2 > |\xi_i|$ for $\Phi \sim v$, and the second, approximate, equality applies for $|\xi_i| \ll m^2_i$. We next integrate out the massive bulk field, $\Psi$, with $\Phi$ temporarily held fixed.
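Before moving on, the small-$|\xi_i|$ expansion of the one-loop trace can be checked directly. The sketch below compares the exact sum over the two mass-matrix eigenvalues, $m_i^2 \pm |\xi_i|$, with its quadratic approximation; the numerical values used are illustrative only.

```python
import math

def v_exact(m2, xi, mu2):
    # One-loop trace over the two mass-matrix eigenvalues m^2 +/- |xi|.
    return sum((m2 + s * xi) ** 2 * math.log((m2 + s * xi) / mu2)
               for s in (1.0, -1.0)) / (64 * math.pi ** 2)

def v_quad(m2, xi, mu2):
    # Small-|xi| expansion: const + |xi|^2/(32 pi^2) [ln(m^2/mu^2) + 3/2].
    const = m2 ** 2 * math.log(m2 / mu2) / (32 * math.pi ** 2)
    return const + xi ** 2 * (math.log(m2 / mu2) + 1.5) / (32 * math.pi ** 2)

m2, mu2 = 1.0, 0.5   # illustrative mass^2 and renormalization scale^2
for xi in (0.1, 0.01, 0.001):
    print(xi, v_exact(m2, xi, mu2), v_quad(m2, xi, mu2))
```

The mismatch shrinks like $|\xi_i|^4/m_i^4$, confirming that the quadratic term is the leading $\xi_i$-dependence of the brane-induced potential.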
The leading result in this case arises at tree level, corresponding to the elimination of $\Psi$ from the classical action (supplemented by the effective brane-induced interaction, eq. ), using its classical field equation $$\left( - \Box + M^2 \right) \Psi_c = - \sum_{i=1}^2 \frac{\partial V_{{\rm eff},i}}{\partial \Psi^*} \; \delta^n(y-y_i) ,$$ which, using eq. , gives the approximate expression $$\begin{aligned} \left( - \Box + M^2 \right) \Psi_c &\approx& - \, \frac{h_1 \, \xi_1}{32 \pi^2} \left[ \ln \left( \frac{m_1^2}{\mu^2} \right) + \frac32 \right] \delta^n(y-y_1) \nonumber\\ && - \, \frac{h_2 \, \xi_2^*}{32 \pi^2} \left[ \ln \left( \frac{m_2^2}{\mu^2} \right) + \frac32 \right] \delta^n(y-y_2) . \end{aligned}$$ Working to leading order in powers of $h_i$ allows us to write $\xi_i \approx g_i \Phi$ in this last equation, allowing its solution to be written $$\begin{aligned} \Psi_c(y) &\approx& - \, \frac{1}{32\pi^2} \biggl\{ \left[ \ln \left( \frac{m_1^2}{\mu^2} \right) + \frac32 \right] h_1 \, g_1 \, \Phi(y_1) \, G(y,y_1) \biggr. \nonumber\\ && \qquad \biggl. + \left[ \ln \left( \frac{m_2^2}{\mu^2} \right) + \frac32 \right] h_2 \, g_2 \, \Phi^{*}(y_2) \, G(y,y_2) \biggr\} . \end{aligned}$$ Here $G(y,y')$ is the solution to $(- \Box + M^2) \, G(y,y') = \delta^n(y-y') - 1/\Omega_n$, where $\Omega_n$ denotes the volume of the $n$ extra dimensions. $G(y,y')$ is given explicitly by the mode sum $$G(y,y') = {\sum_\ell}' \; \frac{u_\ell(y) \, u^*_\ell(y')}{\lambda_\ell} ,$$ in terms of the eigenvalues and eigenfunctions satisfying $(- \Box + M^2) u_\ell(y) = \lambda_\ell \, u_\ell(y)$. The prime on the sum indicates the omission of any zero modes, for which $\lambda_\ell = 0$. Substitution into the classical action, eqs.  and , then gives an action of the form $S_{\rm eff}[\Phi] = S_{\rm inv}[\Phi] + \Delta S[\Phi]$, where $S_{\rm inv}[\Phi]$ is invariant with respect to $\Phi \to e^{i \omega } \Phi$, and $$\Delta S[\Phi] \approx - \, k \int d^4x \left[ g_1 \, h_1 \, g_2 \, h_2 \; \Phi(y_1) \, \Phi(y_2) \, G(y_1,y_2) + {\rm c.c.} \right] .$$ In this last expression the constant $k$ is given explicitly by $$k = \left( \frac{1}{32 \pi^2} \right)^2 \left[ \ln \left( \frac{m_1^2}{\mu^2} \right) + \frac32 \right] \left[ \ln \left( \frac{m_2^2}{\mu^2} \right) + \frac32 \right] .$$ The final step is to integrate out the massive Kaluza-Klein modes for $\Phi$ to obtain the effective four-dimensional action. Since the Kaluza-Klein zero mode for $\Phi$ is independent of the extra-dimensional coordinates $y$, to leading order this corresponds to simply truncating the action using $\Phi(x,y) \to \Phi(x)$.
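The exponential falloff of the massive Green's function with brane separation is easy to exhibit in the simplest setting. The sketch below (an illustrative reduction to a single compact dimension, a circle of circumference $L$, with parameter values chosen only for the demonstration) builds $G$ from its mode sum and compares it with the closed form and with the $e^{-Ma}$ falloff.

```python
import math

def green_circle(a, M, L, nmodes=2000):
    # Mode sum on a circle:
    # G(a) = (1/L) sum_l cos(2 pi l a / L) / (M^2 + (2 pi l / L)^2).
    total = 1.0 / (L * M ** 2)                  # l = 0 mode
    for l in range(1, nmodes + 1):
        k = 2.0 * math.pi * l / L
        total += 2.0 * math.cos(k * a) / (L * (M ** 2 + k ** 2))
    return total

M, L = 10.0, 1.0
for a in (0.1, 0.2, 0.4):
    closed = math.cosh(M * (L / 2 - a)) / (2 * M * math.sinh(M * L / 2))
    print(a, green_circle(a, M, L), closed, math.exp(-M * a) / (2 * M))
```

For $ML \gg 1$ and separations well inside the circle the three numbers agree, showing the $e^{-Ma}$ behaviour that ultimately suppresses the Goldstone potential; note that for $M^2 > 0$ there is no zero mode, so no subtraction is needed in this example.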
Using the information that the invariant part of the potential is minimized for $\Phi(x) = v_R \, e^{i \varphi(x)} \ne 0$, where $v_R$ is an appropriately renormalized parameter which differs from $v$ because of the changes to the invariant part of the potential (which we do not follow here in detail), we obtain in this way the following effective action for the would-be Goldstone mode, $\varphi$: $$\label{pGBaction} S_{\rm eff}[\varphi] = - \int d^4x \left[ f^2 \, \partial_\mu \varphi \, \partial^\mu \varphi + V(\varphi) \right] ,$$ where $f^2 = v^2_R \, \Omega_n$ with $\Omega_n$ as before denoting the volume of the internal dimensions. The low-energy scalar potential is given within the above approximations by $V \approx \mu^4 \, \cos(2 \varphi) + \dots$ (up to an additive constant, $V_0$, which we may absorb into the renormalization of the cosmological constant). The constant $\mu$ is given approximately by $$\label{pGBpotential} \mu^4 \approx 2 \, k \, g_1 \, g_2 \, h_1 \, h_2 \; v^2_R \, G(y_1,y_2) .$$ Eqs.  and  are the main results of this section. Phenomenological Choices for the Scales --------------------------------------- We see that the higher-dimensional model implies an effective 4D action for $\varphi$ of the generic form of eq. , with the constants $f$ and $\mu$ given in terms of more microscopic parameters. Given an internal space for which $\Omega_n \sim r^n$ and a brane separation $a$, we therefore find the order of magnitude results $$f \sim v \, r^{n/2} ,$$ and $$\label{muresult} \mu \sim \left( g_1 \, g_2 \, h_1 \, h_2 \right)^{1/4} \left[ v^2 \, G(a) \right]^{1/4} ,$$ where we take $v_R$ and $v$ to be the same order of magnitude. For comparison the 4D Planck mass is given by $M_p \sim M_g \, (M_g \, r)^{n/2}$, where $M_g$ is the higher-dimensional gravitational scale. Eq.  uses the asymptotic form for the Green’s function in the limit $Ma \gg 1$: $G(a) \sim M^{-1} \, (M/a)^{(n-1)/2} \, \exp(-Ma)$. The exponential dependence of the heavy-field Green’s function is what allows the scale $\mu$ to be naturally much smaller than $f$ and $M_p$.
For example, consider the simplest instance where we assume $r$ is much larger than all other fundamental length scales, which we choose to all be of order $M_g$. Taking then $a \sim r \gg 1/M_g$, we therefore suppose the higher-dimensional theory to involve only a single scale, $M_g$, and so take $g_i \sim \hat{g}_i \, M_g^{1 - n/2}$, $h_i \sim \hat{h}_i \, M_g^{1 - n/2}$, $M \sim M_g$ and $v \sim M_g^{1+n/2}$. This leads to $$f \sim M_p \sim M_g \left( M_g \, r \right)^{n/2} \qquad \hbox{and} \qquad \mu \sim M_g \left( \hat{g}_1 \, \hat{g}_2 \, \hat{h}_1 \, \hat{h}_2 \right)^{1/4} \exp \left( - \, \frac{M_g \, a}{4} \right) .$$ This expression shows that the most natural choice for the higher-dimensional scales implies a large decay constant $f \sim M_p$, but with the ratio $\mu/f$ exponentially small given even only a moderately large value for $M_g r \gg 1$. The exponential dependence on $M_g a$ allows the resulting scale $\mu$ to easily be small enough even for present-epoch applications. For instance, taking $\hat{g}_i \sim \hat{h}_i \sim 1$ and $M_g r$ to be only slightly larger than the minimum size required to solve the electroweak hierarchy problem [@BIQ], $M_g r \sim 200$, $n = 6$ and $M_g \sim 10^{11}$ GeV, we have $\mu \sim 10^{-3}$ eV. Applications to inflation are also possible provided it can be ensured that $f \gg M_p$. For instance this might be arranged in one of the above scenarios if $M_\Phi \sim \sqrt{\lambda} v \sim M_g$, but with $\lambda \ll M_g^{-n/2}$. In this case achieving $\mu \sim 10^{-4} \, M_p$ requires a smaller microscopic hierarchy. For instance if $n= 6$ then $M_g \sim 10^{15}$ GeV and $M_g r \sim 8$ does the job. Pseudo-Goldstone Boson Cosmologies ================================== Given the extremely shallow potentials which are possible with this mechanism, we next re-examine the cosmology of pseudo-Goldstone boson models in more detail. Our purpose in so doing is to reconsider more quantitatively the cosmological viability of these models in the light of present observations.
We first describe in general the cosmological rolling of several scalar fields in four dimensions, and then return to the specific cases where the scalars are pseudo-Goldstone bosons. Our purpose is to define our notation, and to highlight the features of generic pGB-based Quintessence models so these may be contrasted with what obtains for the usual axion-based models [@FHSW; @CormierHolman]. General Multi-scalar Equations ------------------------------ Varying the sum of the Einstein-Hilbert action and the scalar action of eq.  produces the following equations of motion: $$\begin{aligned} \label{EOM} R_{\mu\nu} + \frac{1}{M_p^2} \left[ G_{ab} \, \partial_\mu \varphi^a \, \partial_\nu \varphi^b + g_{\mu\nu} \, V \right] &=& 0 \nonumber\\ g^{\mu\nu} D_\mu \partial_\nu \varphi^a - G^{ab} \, V_{,b} &=& 0 , \end{aligned}$$ where $V_{,a} = \partial V/\partial \varphi^a$ and we adopt Weinberg’s curvature conventions. The spacetime and target-space covariant derivative, $D_\mu$, for the scalar field which appears in eq.  is defined by: $$\begin{aligned} D_\mu \partial_\nu \varphi^a &=& \nabla_\mu \partial_\nu \varphi^a + \Gamma^a_{bc}(\varphi) \, \partial_\mu \varphi^b \, \partial_\nu \varphi^c \nonumber\\ &=& \partial_\mu \partial_\nu \varphi^a - \gamma^\lambda_{\mu\nu} \, \partial_\lambda \varphi^a + \Gamma^a_{bc} \, \partial_\mu \varphi^b \, \partial_\nu \varphi^c . \end{aligned}$$ Here $\gamma^\mu_{\nu\lambda}(x)$ is the usual Christoffel symbol constructed from the spacetime metric $g_{\mu\nu}(x)$ and $\Gamma^a_{bc}(\varphi)$ is the Christoffel symbol built from the target-space metric $G_{ab}(\varphi)$. For cosmological applications we restrict these equations to a homogeneous but time-dependent field configuration and a Friedmann-Robertson-Walker (FRW) spacetime: $\varphi^a = \varphi^a(t)$, and $$g_{\mu\nu} \, dx^\mu dx^\nu = - dt^2 + a^2(t) \, \gamma_{mn}(y) \, dy^m dy^n ,$$ where $\gamma_{mn}$ is the usual homogeneous metric on the surfaces of constant $t$, parameterized by $k=0,\pm 1$. With these choices the equations of motion reduce to: $$\begin{aligned} \label{phicosm1} H^2 = \left( \frac{\dot a}{a} \right)^2 &=& \frac{\rho}{3 M_p^2} - \frac{k}{a^2} \nonumber\\ \frac{d}{da} \left( \rho \, a^3 \right) &=& - 3 \, p \, a^2 \nonumber\\ \frac{D \dot\varphi^a}{dt} + 3 H \, \dot\varphi^a + G^{ab} \, \frac{\partial V}{\partial \varphi^b} &=& 0 , \end{aligned}$$ where $$\begin{aligned} \label{phicosm2} \rho &=& \frac12 \, G_{ab} \, \dot\varphi^a \dot\varphi^b + V(\varphi) \nonumber\\ p &=& \frac12 \, G_{ab} \, \dot\varphi^a \dot\varphi^b - V(\varphi) \nonumber\\ \frac{D \dot\varphi^a}{d t} &=& \ddot\varphi^a + \Gamma^a_{bc}(\varphi) \, \dot\varphi^b \dot\varphi^c . \end{aligned}$$
Geometrically, the vanishing of $D \dot \varphi^a /dt$ is equivalent to the statement that $\varphi(t)$ is an affinely-parameterized geodesic of the target-space metric, $G_{ab}$. For instance, for the $SO(3) \to SO(2)$ example the $SO(3)$-invariant metric has the following nonzero Christoffel symbols: $$\label{Connection0} \Gamma^\theta_{\phi\phi} = - \sin\theta \, \cos\theta , \qquad \Gamma^\phi_{\theta\phi} = \Gamma^\phi_{\phi\theta} = \cot\theta .$$ The geodesics of this metric are the ‘great circles’, corresponding to the intersection of the sphere $S_2 = SO(3)/SO(2)$ with a plane passing through its centre. Once the $SO(3)$ symmetry is explicitly broken (with the $SO(2)$ unbroken), the symmetry-breaking terms of eq.  imply changes to the target-space connection, leading to the more general expressions $$\Gamma^\theta_{\phi\phi} = - \, \frac{G'}{2} , \qquad \Gamma^\phi_{\theta\phi} = \Gamma^\phi_{\phi\theta} = \frac{G'}{2 \, G} ,$$ where $G' = dG/d\theta$ and all other components are unchanged. These expressions reduce to eqs.  given the $SO(3)$-invariant choice $G = \sin^2\theta$. The qualitative behaviour of the solutions to these equations is easy to state in the case where the initial scalar kinetic energy, $K_i$, is large compared with its initial potential energy, $V_i$. In this case the scalar potential is initially negligible and the scalar moves along the target-space geodesic determined by its initial position and velocity. As it so moves the scalar experiences Hubble friction, which causes it to move more and more slowly along this geodesic with ever-decreasing kinetic energy. Eventually its kinetic energy becomes similar in size to its potential energy, and the scalar makes a transition into a potential-dominated regime. At this point the scalar begins to follow the gradients of the scalar potential, until it eventually comes to rest at one of the potential’s local minima. A slow roll occurs if this potential-dominated motion is sufficiently slow.
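The friction-damped geodesic motion just described can be simulated directly. The following sketch (a minimal forward-Euler integration, with illustrative initial data and units $f = M_p = 1$) evolves the homogeneous field equations on the round $S_2$ target space with $V = 0$ in a matter-dominated background, for which $a \propto t^{2/3}$ and $H = 2/(3t)$.

```python
import math

def evolve(theta0, phi0, dtheta0, dphi0, t0=1.0, t1=100.0, steps=200000):
    # Homogeneous scalar equations on S^2 (G = sin^2 theta) with V = 0:
    #   theta'' + 3 H theta' - sin(theta) cos(theta) phi'^2 = 0
    #   phi''   + 3 H phi'   + 2 cot(theta) theta' phi'     = 0
    th, ph, dth, dph = theta0, phi0, dtheta0, dphi0
    t, dt = t0, (t1 - t0) / steps
    for _ in range(steps):
        H = 2.0 / (3.0 * t)  # matter-dominated Hubble rate
        ddth = -3.0 * H * dth + math.sin(th) * math.cos(th) * dph ** 2
        ddph = -3.0 * H * dph - 2.0 * (math.cos(th) / math.sin(th)) * dth * dph
        th, ph = th + dth * dt, ph + dph * dt
        dth, dph = dth + ddth * dt, dph + ddph * dt
        t += dt
    K = 0.5 * (dth ** 2 + math.sin(th) ** 2 * dph ** 2)  # kinetic energy, f = 1
    return th, ph, K

# Start on the equatorial great circle, a geodesic of the round metric.
th, ph, K = evolve(math.pi / 2, 0.0, 0.0, 0.3)
print(th, ph, K)
```

The run stays on its geodesic ($\theta$ remains at $\pi/2$), the kinetic energy redshifts as $K \propto a^{-6} \propto t^{-4}$, and the total excursion in $\phi$ remains below $\dot\phi_0\, t_0$, illustrating why over-damped fields do not evolve very far during kinetic domination.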
Because for slow scalar motion both the $\ddot\varphi^a$ and the $\Gamma^a_{bc} \dot\varphi^b \dot\varphi^c$ terms in the scalar field equation are small, the entire covariant derivative $D\dot\varphi^a/dt$ may be neglected during the slow roll. Quintessence Cosmologies {#S:Cosmology} ------------------------ We now examine in more detail the implications of these equations for applications to present-epoch (quintessence) cosmology. We use for these purposes the $SO(3) \to SO(2)$ pseudo-Goldstone model of the previous section. Besides verifying that such cosmologies can be viable, even after the advent of the WMAP measurements, this exercise is also useful for identifying those features of the resulting cosmologies which might be used to distinguish them observationally from other extant proposals. For concreteness we have explored the model given by eqs. , with the choices $f = M_p$, $a = b_4 = \mu^{4}= \frac12 (10^{-30} M_{p})^{4}$, and $b_2 = b_3 = 0$. (These choices are made to arrange minima of the potential at $\theta = \frac\pi 4$ and $\frac{3\pi}{4}$, and maxima of the potential at $\theta = 0$, $\frac\pi 2$ and $\pi$. The main features of the cosmology we present do not depend on these particular details. $a$ is chosen to make $V = 0$ at its minima, and this [*is*]{} important for the later cosmology. We have no new insights on the cosmological constant problem in this paper.) Motivated by the simplest power-counting estimates we also choose $c_n = 0$, although we return to this choice at the end of this section, where we also show how our results vary if $c = c_2$ is nonzero. Fig.  shows the results of a numerical evolution of the field equations for this model, giving the evolution of the energy density in radiation and matter, as well as the total kinetic and potential energy density associated with the scalar field motion.
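The stated extremum structure is easy to confirm: with $b_2 = b_3 = 0$ and $a = b_4$ (reading the $b_n$ as coefficients of $\cos(n\theta)$), the symmetry-breaking potential reduces to $V(\theta) = b_4\,[1 + \cos(4\theta)]$. A quick check, in units where $b_4 = 1$:

```python
import math

def V(theta, b4=1.0):
    # V(theta) = a + b4 cos(4 theta) with a = b4, so V vanishes at its minima.
    return b4 * (1.0 + math.cos(4.0 * theta))

minima = [math.pi / 4, 3 * math.pi / 4]
maxima = [0.0, math.pi / 2, math.pi]
print([V(t) for t in minima])   # essentially zero at the minima
print([V(t) for t in maxima])   # 2 b4 at the maxima
```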
As this figure shows, the scalar fields in this model are just now entering a period of classical oscillation about the bottom of their potential, with the total scalar energy density falling like $1/a^3$ as it is interconverted back and forth between kinetic and potential energy. Although it is at first sight tempting to place the present epoch within the last period during which $\Omega$ does not vary appreciably, this option is disfavoured by its predictions for the equation-of-state parameter $w = p/\rho$, as may be seen from fig. . For the cosmology which these figures illustrate, the initial conditions for the fields $\theta$ and $\phi$ were chosen at the epoch of nucleosynthesis, with $\theta_0$ near $\pi/2$. The initial velocities were chosen so that the initial scalar energy is comparable to the energy in matter and radiation, and since this is much larger than $V(\theta,\phi)$ this means the scalar motion is initially dominated by its kinetic energy. Since the success of standard BBN does not permit the scalar to carry more than 10% of the total energy density, we choose the initial scalar velocities so that $K_\varphi = K_\theta + K_\phi$ saturates this upper bound, with $\dot\theta$ initially zero. Here $K_\theta = \frac12 \, f^2 \, \dot\theta^2$ and $K_\phi = \frac12 \, f^2 \, G(\theta) \, \dot\phi^2$. The evolution of the two fields $\theta$ and $\phi$ given these initial assumptions is then shown in fig. . This figure shows that the $\phi$ evolution is quickly damped by Hubble friction. Since the scalar potential has maxima for $\theta = 0$ and $\pi/2$ and minima for $\theta = \pi/4$ and $3\pi/4$, the initial choice $\theta_0 \approx \pi/2$ is close to a maximum. Once Hubble damping reduces the kinetic energy of the scalars to close to their potential energy, $\theta$ starts to roll off of its maximum towards the minimum near $\theta = 3\pi/4$.
It is striking that neither scalar evolves very far, even though their motion is kinetic-energy dominated for much of the Universe’s history. This feature of the scalar motion may be understood analytically (see appendix), and is a consequence of the extreme over-damping due to Hubble friction. Two features of the scalar motion in this model are generic to quintessence applications of pseudo-Goldstone boson cosmologies. *Late-Time Oscillatory Cosmology:* A generic feature of pGB quintessence cosmologies is the late-time oscillation of the scalar fields about the potential’s minimum. (See, however, ref. [@CormierHolman] for a model which differs from most in its late-time consequences.) As is clear from fig. , although these oscillations are damped they are not damped faster than the energy density in matter. As a result the Universe settles down into a comparatively steady state, for which the relative proportion of energy tied up in Dark Matter and Dark Energy does not change. As may be seen from fig. , these residual scalar oscillations may have observational implications because of the time dependence which they imply for $p/\rho$, and so also for the acceleration of the Universe. The late-time alternation between acceleration and deceleration is very different from both the eternal or temporary inflation predicted by a cosmological constant or by quintessence based on near-exponential potentials [@Quintessence; @ExpPots; @AS1; @ABRS], although it is not clear that this would be observable in the foreseeable future. *Special Initial Conditions:* A second generic feature of these pGB quintessence models is their sensitivity to initial conditions. Schematically, this sensitivity arises because a successful cosmology requires the scalar to be near the maximum of its potential once its kinetic energy becomes comparable with its potential energy.
This ensures the Universe experiences a sufficiently long period of potential-dominated slow roll before finally coming to rest at the potential’s minimum. To quantify how broad a class of initial conditions is acceptable as a description of the present-day Universe, we evolved the cosmology described above for a variety of choices for $\theta_0$ and the initial scalar velocities, and asked which choices satisfy the two WMAP constraints [@WMAPparams] $$\Omega_\varphi = \frac{\rho_\varphi}{\rho} = 0.73 \pm 0.09 , \qquad w = \frac{p}{\rho} < -0.78 ,$$ during the present epoch. Choosing always $K_\varphi = K_\theta + K_\phi$ to be fixed at 10% of the total energy density at nucleosynthesis, we varied the distribution of initial energy between the two fields $\theta$ and $\phi$ by varying the parameter $\tan^2\chi = K_\phi/K_\theta$. Fig.  shows the region in the initial $\chi-\theta$ plane which satisfies the two constraints given above. As the figure shows, the allowed region represents a minor fraction of the area of this plane, but is also not infinitesimally small. The shape of the allowed region is easily understood as follows. It passes through the point $(\chi,\theta) = (\frac{\pi}{2}, \frac{\pi}{2})$ because $\chi = \frac{\pi}{2}$ corresponds to starting with $\dot\theta = 0$, and $\theta = \frac{\pi}{2}$ is the maximum of the scalar potential. The corresponding cosmology simply has $\theta$ remain very nearly at rest at the very top of the potential from BBN until now. The curve bends away from $\theta = \frac{\pi}{2}$ as $\chi$ varies because if $\theta$ starts with an initial velocity, it need not begin at the maximum at BBN in order to end up there during the present epoch. We close with a discussion of the sensitivity of the above results to the choice of an $SO(3)$-invariant target-space metric. To test this sensitivity, we repeated the above analyses with the parameter $c$ of eqs.  nonzero. As expected, we find that $c$ does not change the scalar cosmology unless $c$ is quite large. For instance, fig.
\[fig:cten\] shows the range of initial conditions which give acceptable present-day cosmologies if $c = 10$. As is seen from this figure, the allowed region changes perceptibly relative to the $c=0$ case, but the total acceptable volume does not change (as might be expected from Liouville’s theorem). Conclusions =========== In this paper we re-examine the cosmological applications of pseudo-Goldstone bosons, with the following results. 1. We examine, in $\S2$, the constraints which inflation and cosmology impose on a slowly-rolling scalar field, and reproduce there the standard constraints which are implied for the scale $f$ associated with the scalar’s kinetic energy and the scale $\mu$ related to its potential energy. These typically require $f \gsim M_p$ and $\mu \ll M_p$. 2. In $\S 3$ we identify a new brane-world mechanism for ensuring that the scale $\mu$ is exponentially small while keeping $f \gsim M_p$. It is accomplished by having a theory with an approximate global symmetry which is broken only by the couplings of a massive bulk field to various branes. In this model the scale $\mu$ of the low-energy effective theory below the compactification scale is proportional to $\exp(-Ma)$, where $M$ is the bulk scalar mass and $a$ is the inter-brane separation. Once such a small scale is generated in this way in the low-energy theory it is protected against low-energy radiative corrections by the residual approximate symmetry, in the usual manner for a pseudo-Goldstone boson. 3. Motivated by this mechanism for obtaining extremely small scales, in $\S 4$ we reconsider the late-time cosmology of such a pseudo-Goldstone boson, by constructing an explicit quintessence cosmology. Successful cosmologies can be made subject to mildly restrictive choices for the initial conditions which are assumed for the scalars at the epoch of Big Bang nucleosynthesis.
We argue that pseudo-Goldstone bosons of this type will be observationally distinguishable from other types of quintessence proposals because of the late-time scalar field oscillations which they generically predict. The great difficulty in obtaining slowly-rolling scalar fields from realistic microscopic theories of physics poses something of an opportunity, given the current observational evidence for two epochs during which the Universe underwent accelerated expansion. The challenge is to identify those few kinds of small-distance physics for which cosmologically acceptable scalar fields are possible. In the past, brane-world models have been very successful in circumventing previously-held naturalness obstacles, and the same may be true for the mechanism illustrated by the brane-world toy model which we propose here. We believe this class of models is sufficiently interesting to merit a more detailed exploration of their observational implications for the CMB. Acknowledgments =============== We would like to acknowledge fruitful discussions with A. Albrecht, as well as partial research funding from NSERC (Canada), FCAR (Québec) and McGill University. C.B. thanks the KITP in Santa Barbara for their hospitality while this work was completed (as such, this research was supported in part by the National Science Foundation under Grant No. PHY99-07949). Appendix A: Supersymmetric Models ================================= In this appendix we briefly summarize how the simple dimensional estimates of the main text can differ for supersymmetric models. In $N=1$ supersymmetry in four dimensions scalars arise in complex pairs, as the partners of spin-1/2 fermions in chiral supermultiplets. Furthermore, supersymmetry also requires that the quantity $G_{ab}$ be put into the particular form [@cremmeretal] $$\label{E:KahlerPot} G_{ab^*} = \frac{\partial^2 K}{\partial \varphi^a \, \partial \varphi^{b*}} ,$$ for a real function, $K(\varphi,\varphi^*)$, known as the Kähler potential.
The scalar potential is similarly given in terms of $K$ and the holomorphic superpotential, $W(\varphi)$, by $$\label{E:PfromSP} V = e^{K/M_p^2} \left[ G^{ab^*} \, D_a W \left( D_b W \right)^* - \frac{3 \, |W|^2}{M_p^2} \right] ,$$ where $D_a W = \partial_a W + \partial_a K \, W/M_p^2$ and $G^{ab^*}$ denotes the matrix inverse of the target-space metric, eq. . Additional restrictions arise for $K$ and $W$ if the scalars are also Goldstone bosons for the symmetry-breaking pattern $G \to H$ [@susyGB]. In particular $K$ must be the Kähler function for an appropriate complexification of the manifold $G/H$, and $W$ must be independent of the Goldstone bosons and their superpartners. For example, for the two-sphere example, $S_2 = SO(3)/SO(2)$, considered earlier, if the scalars $\theta$ and $\phi$ are related to one another by supersymmetry, then the metric has the form of eq.  when it is expressed in terms of the stereographic projection to the complex plane, $$\label{E:Stereo} z(\theta,\phi) = \tan \left( \frac{\theta}{2} \right) e^{i\phi} .$$ The Kähler potential in this case is $$\label{E:KSphere} K(z,z^*) = 4 f^2 \, \ln \left( 1 + z^* z \right) ,$$ since with this choice $$\frac{4 f^2 \, dz \, dz^*}{\left( 1 + z^* z \right)^2} = f^2 \left( d\theta^2 + \sin^2\theta \, d\phi^2 \right) .$$ For the present purposes, the crucial property of supersymmetric theories is that $W$ is protected by a nonrenormalization theorem [@susyNRT] and so does not receive corrections to any order in perturbation theory, although $K$ does. In supersymmetric models this implies that if a scalar is initially absent from the classical superpotential, it cannot enter in perturbation theory as successive scales are integrated out, so long as these integrations remove particles in entire supermultiplets (as is required if the effective theory is to have the supersymmetric form given above). If the vacuum is supersymmetric then the loop-induced dependence of the scalar potential, $V$, on a pseudo-Goldstone boson must arise through symmetry-breaking contributions to $K$ rather than $W$. Once scales of order the supersymmetry-breaking scale, $M_s$, are integrated out, however, supersymmetry is less restrictive in what it requires.
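As an aside on the $S_2$ parameterization used above, the identity relating $|dz|^2$ to the round metric under the stereographic map $z = \tan(\theta/2)\,e^{i\phi}$ can be verified numerically. The sketch below differentiates the map along a few tangent directions (the sample point and directions are arbitrary choices made only for the check).

```python
import cmath, math

def z_of(theta, phi):
    # Stereographic projection of the two-sphere to the complex plane.
    return math.tan(theta / 2.0) * cmath.exp(1j * phi)

def pullback_ds2(theta, phi, dth, dph, f=1.0, eps=1e-6):
    # Finite-difference dz along the tangent direction (dth, dph),
    # then evaluate 4 f^2 |dz|^2 / (1 + |z|^2)^2.
    z = z_of(theta, phi)
    dz = (z_of(theta + eps * dth, phi + eps * dph) - z) / eps
    return 4.0 * f ** 2 * abs(dz) ** 2 / (1.0 + abs(z) ** 2) ** 2

def round_ds2(theta, dth, dph, f=1.0):
    return f ** 2 * (dth ** 2 + math.sin(theta) ** 2 * dph ** 2)

theta, phi = 0.7, 1.3
for dth, dph in [(1.0, 0.0), (0.0, 1.0), (0.6, -0.8)]:
    print(pullback_ds2(theta, phi, dth, dph), round_ds2(theta, dth, dph))
```

The agreement for the mixed direction also confirms that the metric stays diagonal in $(\theta,\phi)$, as it must.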
So if the pseudo-Goldstone boson symmetry-breaking scale satisfies $\mu \ll M_s$, none of the above discussion is particularly relevant and the estimates of the main text apply. If $M_s \ll \mu$, on the other hand, it can happen that corrections to the Kähler function, $K$ (and so also to the target-space metric, $G_{ab}$), are larger than those to $V$, if the latter are protected by the nonrenormalization theorems. Consequently supersymmetric suppressions are not likely to be relevant for quintessence cosmologies, although they may be relevant for inflationary models. Appendix B: Over-Damped, Kinetic-Dominated Scalar Rolls ======================================================= In this appendix we identify the $a$ dependence of the scalar field $\psi$ during a period of kinetic-energy-dominated motion. In particular, we establish the result $d\psi/db \propto a^{-p}$, with $b = \ln a$ and $p = 3 - m/2$, used in the main text, and derive an upper limit on the total distance $\psi$ can roll during this kind of motion. We start by changing the independent variable from $t$ to $b = \ln a$, in which case the derivatives of a field $\psi$ become: $$\label{E:ttob1} \dot{\psi}=\frac{d\psi}{db}\frac{db}{dt}=H\psi'$$ $$\label{E:ttob2} \ddot{\psi}=H^{2}\psi''+H'H\psi'$$ where over-dots denote $d/dt$ and primes denote $d/db$. If we also suppose only a single field rolls (so $G_{ab}$ may be set to unity by performing a field redefinition), then for a kinetic-dominated roll the Klein-Gordon field equation becomes $$\label{E:roll0} \psi''+[3+H'/H]\psi'=0 \, .$$ Assuming the dominant energy density satisfies $\rho_m \propto a^{-m}$, with $m = 3$ or 4 for matter- or radiation-domination, we have $3 M_p^2 H^{2} \approx \rho_m$ and so $H'/H = -m/2$. Consequently $\psi''+[3-m/2]\psi'=0$, with solution $d\psi/db = \kappa \, \exp[{-(3-m/2) \, b}]$ with $\kappa$ a constant. Clearly this establishes $d\psi/db \propto \exp[-(3 - m/2) \, b] \propto a^{-p}$ with $p = 3-m/2$, as required.
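The behaviour just derived is simple to verify numerically. The sketch below (a forward-Euler integration with an illustrative initial velocity, in units $M_p = 1$) integrates $\psi'' + (3 - m/2)\,\psi' = 0$ and checks both the $\psi' \propto e^{-p b}$ decay and the finiteness of the total roll, which approaches $\psi'(0)/p$ as $b \to \infty$.

```python
import math

def kinetic_roll(m, dpsi0, b_span, steps=200000):
    # Integrate psi'' + (3 - m/2) psi' = 0 in the variable b = ln(a).
    p = 3.0 - m / 2.0
    psi, dpsi = 0.0, dpsi0
    db = b_span / steps
    for _ in range(steps):
        psi += dpsi * db
        dpsi -= p * dpsi * db
    return psi, dpsi

m, dpsi0 = 4, 0.775     # radiation domination; psi'(0) in Planck units
psi, dpsi = kinetic_roll(m, dpsi0, b_span=20.0)
p = 3.0 - m / 2.0
print(psi, dpsi0 / p)                        # total roll approaches psi'(0)/p
print(dpsi, dpsi0 * math.exp(-p * 20.0))     # psi' decays like exp(-p b)
```

Twenty e-foldings of expansion thus leave the field essentially frozen a finite distance from where it started, which is the over-damping invoked in the main text.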
Given this solution we may also compute how far the field rolls, $\Delta\psi$, in a given amount of universal expansion, with the result $$\begin{aligned} \label{E:roll3} \Delta\psi & \equiv & \psi_f - \psi_i = \kappa \int_{b_i}^{b_f} e^{-(3-m/2)b} \, db \nonumber \\ & = & \frac{\kappa}{(3-m/2)} \Bigl[ e^{-(3-m/2)b_i} - e^{-(3-m/2)b_f} \Bigr] \nonumber \\ & = & \frac{1}{(3-m/2)} \left[ \left( \frac{d\psi}{db} \right)_{i} - \left( \frac{d\psi}{db} \right)_{f} \right] \, .\end{aligned}$$ We see that $\Delta \psi$ is directly related to the change in the derivative, $\psi' = (d\psi/db)$, which in turn can be related to the change in scalar kinetic energy, $K = \frac12 \, \dot\psi^2 = \frac12 H^2 \, {\psi'}^2$, between the initial and final times. Denoting the fraction of energy tied up in the scalar field kinetic energy by $\varepsilon = K/\rho_m$, we have $$\psi'^2 = \frac{2K}{H^2} = \frac{2\,\varepsilon\,\rho_m}{H^2} = 6 M_p^2 \, \varepsilon \, .$$ Combining these results we obtain $$\begin{aligned} \label{E:roll5} \frac{\Delta\psi}{M_p} \approx \frac{\sqrt6}{3-m/2} \, \Bigl( \sqrt{\varepsilon_i} - \sqrt{\varepsilon_f} \Bigr).\end{aligned}$$ This last expression is useful if the fraction of scalar energy is known or bounded at the initial and/or final times. For instance, since constraints from nucleosynthesis require $\varepsilon_{BBN} \lesssim 0.1$, this is a useful place to choose as the initial or final time. A similar observation has also been made in another context in ref. [@chiba]. [99]{} A.H. Guth, Phys. Rev. [**D23**]{} (1981) 347–356; A. Albrecht, P.J. Steinhardt, Phys. Rev. Lett. [**48**]{} (1982) 1220–1223; A.D. Linde, Phys. Lett. [**B108**]{} (1982) 389–393. H.V. Peiris [*et al.*]{}, \[astro-ph/0302225\]; V. Barger, H.-S. Lee and D. Marfatia, \[hep-ph/0302150\]; B. Kyae and Q. Shafi, \[astro-ph/0302504\]; J.R. Ellis, M. Raidal and T. Yanagida, \[hep-ph/0303242\]; A. Lue, G.D. Starkman and T. Vachaspati, \[astro-ph/0303268\]; S.M. Leach and A.R. Liddle, \[astro-ph/0306305\]. D.N.
Spergel [*et al.*]{}, \[astro-ph/0302209\]. S. Perlmutter [*et al.*]{}, Ap. J. [**483**]{} 565 (1997) (astro-ph/9712212); A.G. Riess [*et al.*]{}, Ast. J. [**116**]{} 1009 (1998) (astro-ph/9805201); N. Bahcall [*et al.*]{}, [*Science*]{} [**284**]{}, 1481 (1999). K. Coble, S. Dodelson and J. Frieman, [*Phys. Rev. D*]{} [**55**]{} 1851 (1997); R. Caldwell and P. Steinhardt, [*Phys. Rev. D*]{} [**57**]{} 6057 (1998); I. Zlatev, L. Wang and P.J. Steinhardt, [*Phys. Rev. Lett.*]{} [**82**]{} (1999) 896; P.J. Steinhardt, L. Wang and I. Zlatev, [*Phys. Rev.*]{} [**D59**]{} (1999) 123504. S.M. Carroll, (astro-ph/9806099); A.R. Liddle and R.J. Scherrer, [*Phys. Rev.*]{} [**D59**]{} (1999) 023509; C.F. Kolda and D.H. Lyth, Phys. Lett. [**B458**]{} (1999) 197–201, \[hep-ph/9811375\]; C.P. Burgess, to appear in the proceedings of [*Dark 2002*]{} (astro-ph/0207174). I. Antoniadis, Phys. Lett. [**B246**]{} (1990) [377]{}; J. Lykken, Phys. Rev. [**D54**]{} (1996) [3693]{} \[hep-th/9603133\]; N. Arkani-Hamed, S. Dimopoulos and G. Dvali, Phys. Lett. [**B429**]{} (1998) [263]{} \[hep-ph/9803315\]; K. Dienes, E. Dudas and T. Gherghetta, Phys. Lett. [**B436**]{} (1998) [55]{} \[hep-ph/9803466\]; I. Antoniadis, N. Arkani-Hamed, S. Dimopoulos and G. Dvali, Phys. Lett. [**B436**]{} (1998) [257]{} \[hep-ph/9804398\]; G. Shiu and S. H. Tye, Phys. Rev. [**D58**]{} (1998) [106007]{} \[hep-th/9805157\]; K. Dienes, E. Dudas and T. Gherghetta, Nucl. Phys. [**B537**]{} (1999) [47]{} \[hep-ph/9806292\]; Z. Kakushadze and S. H. Tye, Nucl. Phys. [**B548**]{} (1999) [180]{} \[hep-th/9809147\]; L. Randall and R. Sundrum, Phys. Rev. Lett. [**83**]{} (1999) [3370]{} \[hep-ph/9905221\]; L. Randall and R. Sundrum, Phys. Rev. Lett. [**83**]{} (1999) [4690]{} \[hep-th/9906064\]. K. Benakli, Phys. Rev. [**D60**]{} (1999) [104002]{} \[hep-ph/9809582\]; C. P. Burgess, L. E. Ibanez and F. Quevedo, Phys. Lett. [**B447**]{} (1999) [257]{} \[hep-ph/9810535\]. S. Weinberg, Phys. Rev. Lett.
[**18**]{} (1967) 188–191; Phys. Rev. [**D13**]{} (1976) 974–996; L. Susskind, Phys. Rev. [**D20**]{} (1979) 2619–2625. For a review see C.P. Burgess, [*Phys. Rep.*]{} [**C330**]{} (2000) 193 (hep-th/9808176). J. Frieman, C. Hill, A. Stebbins and I. Waga, [*Phys. Rev. Lett.*]{} [**75**]{} 2077 (1995). D. Cormier and R. Holman, [*Phys. Rev. Lett.*]{} [**84**]{} (2000) 5936 (hep-ph/0001168). K. Freese, J.A. Frieman and A.V. Olinto, Phys. Rev. Lett. [**65**]{} (1990) 3233–3236; M. Kawasaki, M. Yamaguchi and T. Yanagida, Phys. Rev. Lett. (2000) 3572–3575 \[hep-ph/0004243\]; M. Yamaguchi, Phys. Rev. [**D64**]{} (2001) 063502 \[hep-ph/0103045\]; G. German, A. Mazumdar and A. Perez-Lorenzana, Mod. Phys. Lett. (2002) 1627–1634 \[hep-ph/0111371\]; N. Arkani-Hamed, H.-C. Cheng, P. Creminelli and L. Randall, \[hep-th/0302034\]. A.R. Liddle and D.H. Lyth, [*Cosmological Inflation and Large Scale Structure*]{}, Cambridge University Press (2000). S.R. Coleman, J. Wess and B. Zumino, Phys. Rev. [**177**]{} (1969) 2239–2247; C.G. Callan, S.R. Coleman, J. Wess and B. Zumino, Phys. Rev. [**177**]{} (1969) 2247–2250. L.F. Abbott and M.B. Wise, Nucl. Phys. [**B325**]{} (1989) 687; R. Holman, S.D.H. Hsu, T.W. Kephart, E.W. Kolb, R. Watkins, and L.M. Widrow, Phys. Lett. [**B282**]{} (1992) 132–136, \[hep-ph/9203206\]; M. Kamionkowski and J. March-Russell, Phys. Lett. [**B282**]{} (1992) 137, \[hep-th/9202003\]; S.M. Barr and D. Seckel, Phys. Rev. [**D46**]{} (1992) 539; S. Ghigna, M. Lusignoli and M. Roncadelli, Phys. Lett. [**B283**]{} (1992) 278; R. Kallosh, A.D. Linde, D.A. Linde, L. Susskind, Phys. Rev. [**D52**]{} (1995) 912–935, \[hep-th/9502069\]. N. Arkani-Hamed, S. Dimopoulos and G. Dvali, [ Phys. Lett.]{} [**B429**]{} (1998) 263 (hep-ph/9803315); Phys. Rev. [**D59**]{} (1999) 086004 (hep-ph/9807344); C.P. Burgess, L.E. Ibanez and F. Quevedo, Phys. Lett. [**B447**]{} (1999) 257–265, \[hep-ph/9810535\]; L. Randall, R. Sundrum, [ Phys. Rev.
Lett.]{} [**83**]{} (1999) 3370 \[hep-ph/9905221\], Phys. Rev. Lett. [**83**]{} (1999) 4690 \[hep-th/9906064\]; C.P. Burgess, R.C. Myers and F. Quevedo, Phys. Lett. [**B495**]{} (2000) 384–393, \[hep-th/9911164\]; N. Arkani-Hamed, S. Dimopoulos, N. Kaloper and R. Sundrum, Phys. Lett. B [**480**]{} (2000) 193, \[hep-th/0001197\]; S. Kachru, M. B. Schulz and E. Silverstein, Phys. Rev. D [**62**]{} (2000) 045021, \[hep-th/0001206\]; Y. Aghababaie, C.P. Burgess, S. Parameswaran and F. Quevedo, \[hep-th/0304256\]. L. Randall and R. Sundrum, Nucl. Phys. [**B557**]{} (1999) 79–118, \[hep-th/9810155\]. S. Weinberg, [*Gravitation and Cosmology*]{}, Prentice Hall (1972). J.J. Halliwell, [*Phys. Lett.*]{} [**185B**]{} (1987) 341; J. Barrow, [*Phys. Lett.*]{} [**187B**]{} (1987) 12; E. Copeland, A. Liddle and D. Wands, [*Ann. NY Acad. Sci.*]{} [**688**]{} (1993) 647; C. Wetterich, [*Astron. Astrophys.*]{} [**301**]{} (1995) 321; B. Ratra and P. Peebles, [*Phys. Rev.*]{} [**D37**]{} (1988) 3406; T. Barreiro, B. de Carlos, E.J. Copeland, [*Phys. Rev.*]{} [**D58**]{} (1998) 083513; E. Copeland, A. Liddle and D. Wands, [*Phys. Rev.*]{} [**D57**]{} (1998) 4686; P. Ferreira and M. Joyce, [*Phys. Rev.*]{} [**D58**]{} (1998) 023503. A. Albrecht and C. Skordis, [*Phys. Rev. Lett.*]{} [**84**]{} 2076 (2000) (astro-ph/9908085); C. Skordis and A. Albrecht, (astro-ph/0012195). A. Albrecht, C.P. Burgess, F. Ravndal and C. Skordis, [*Phys. Rev.*]{} [**D65**]{} (2002) 123506 (hep-th/0105261); A. Albrecht, C.P. Burgess, F. Ravndal and C. Skordis, [*Phys. Rev.*]{} [**D65**]{} (2002) 123507 (astro-ph/0107573). T. Chiba, (astro-ph/0106550). E. Cremmer, B. Julia, J. Scherk, S. Ferrara, L. Girardello and P. van Nieuwenhuizen, [*Nucl. Phys.*]{} [**B147**]{} (1979) 105; E. Witten and J. Bagger, [*Phys. Lett.*]{} [**B115**]{} (1982) 202. A.J. Buras and W. Slominski, Nucl. Phys. [**B223**]{} (1983) 157; W. Lerche, Nucl. Phys. [**B238**]{} (1984) 582; W. Buchmuller and U. Ellwanger, Nucl. Phys.
[**B245**]{} (1984) 237; T.E. Clark and S.T. Love, Phys. Rev. [**D32**]{} (1985) 2148; S. Takeshita and M. Yasue, Phys. Rev. [**D34**]{} (1986) 1847. M. Grisaru, M. Roček and W. Siegel, [*Nucl. Phys.*]{} [**B159**]{} (1979) 429. [^1]: We assume here the universe to be spatially flat, $k=0$.
--- abstract: 'We investigate dust dynamics and evolution during the formation of a protostellar accretion disk around intermediate mass stars via 2D numerical simulations. Using three different detailed dust models, compact spherical particles, fractal BPCA grains, and BCCA grains, we find that even during the early collapse and the first $\sim 10^4$ yr of dynamical disk evolution, the initial dust size distribution is strongly modified. Close to the disk’s midplane, coagulation produces dust particles with sizes of several tens of $\mu$m (for compact spherical grains) up to several mm (for fluffy BCCA grains), whereas in the vicinity of the accretion shock front (located several density scale heights above the disk), large velocity differences inhibit coagulation. Dust particles larger than about 1$\mu$m segregate from the smaller grains behind the accretion shock. Due to the combined effects of coagulation and grain segregation the infrared dust emission is modified. Throughout the accretion disk an MRN dust distribution provides a poor description of the general dust properties. Estimates of the consequences of the “freezing out” of molecules in protostellar disks should consider strongly modified grains. Physical model parameters such as the limiting sticking strength and the grains’ resistance against shattering are crucial factors determining the degree of coagulation reached. In dense regions (e.g. in the mid-plane of the disk) a steady state is quickly attained; for the parameters used here the coagulation time scale for 0.1$\,\mu$m dust particles is $\sim 1\;{\rm yr}\;(10^{-12}\,{\rm g\; cm}^{-3} / \varrho)$. High above the equatorial plane, coagulation equilibrium is not reached due to the much lower densities. Here, the dust size distribution is affected primarily by differential advection, rather than coagulation.
The influence of grain evolution and grain dynamics on the disk’s near infrared continuum appearance during the disk’s formation phase is only slight, because the most strongly coagulated grains are embedded deep within the accretion disk.' author: - Gerhard Suttner - 'Harold W. Yorke' title: Early dust evolution in protostellar accretion disks --- Introduction ============ Dust in protostellar envelopes and accretion disks is a major component of pre-stellar matter, strongly influencing the thermodynamical and gas dynamical behavior of these young objects as well as their observable appearance. Dust provides the seeds for planetesimals, which in turn evolve into the constituents of a planetary system: comets, planets, moons, and the debris associated with asteroids and Kuiper belt objects. Interstellar dust evolves significantly from its initial state in recently formed molecular clouds up to the formation of planetesimals around stars and, as the gas density increases, it does so at an ever increasing rate. In molecular clouds it takes several 10$^6$ yr to build up large fluffy grains by dust coagulation (Ossenkopf 1993; Weidenschilling & Ruzmaikina 1994). In the midplane of accretion disks this time scale shrinks to about 10$^2$ yr due to the high densities there (Mizuno, Markiewicz, & Völk 1988; Mizuno 1989; Schmitt, Henning, & Mucha 1997). It would be naive to imply, however, that it is only a matter of time before coagulation produces the first planetesimals of several km in size. Other processes affect grain growth and evolution: orbital decay, shattering, cratering, sputtering, and compacting of amorphous grains, in addition to adsorption, outgassing, and chemical reprocessing of molecules. These processes depend critically on the physical conditions within the disk and on the grains’ relative velocities. 
Large compact dust grains (${\,\raise 0.3ex\hbox{$>$}\kern -0.75em\lower 0.7ex\hbox{$\sim$}\,}10\,\mu$m) can decouple from the gas and gain large relative velocities with respect to each other, which could prevent coagulation due to a limited sticking strength. By contrast, large fractal dust grains, such as Ballistic Cluster Cluster Agglomeration (BCCA) particles, are always well coupled to the gas component; large BCCA particles do not achieve sufficient relative velocities to coagulate effectively (Schmitt et al. 1997). The assumption of an “intermediate” type of fractal grains, the so-called Ballistic Particle Cluster Agglomeration (BPCA) particles, could alleviate this problem. Such coagulates may couple sufficiently well to the gas to avoid high relative velocities, yet behave like compact particles in the limit of a large number of constituent grains, so that turbulence and systematic relative velocities can drive coagulation (Ossenkopf 1993). A high sticking strength is a necessary prerequisite for effective coagulation up to planetesimal sizes, since turbulent relative velocities can prevent the coagulation of pre-planetesimal dust grains (Weidenschilling & Cuzzi 1993). This could be achieved with ice or frost layers on the colliding particles’ surfaces (Bridges et al. 1996; Supulver et al. 1997). Dust coagulation leads to important modifications of the protostellar matter. Turbulence in accretion disks is strongly coupled to the opacity of the medium, which in turn depends on the type of dust material and the dust grain size distribution. A high degree of coagulation implies a significant reduction of the dust opacity (Mizuno et al. 1988). Reduced opacity is necessary to damp turbulence, which otherwise would be a major obstacle for the pre-planetesimal dust particles to settle down to the disk’s equatorial plane (Weidenschilling 1984).
During the formation of an accretion disk, however, virtually unprocessed dust material is continuously being supplied by the parent molecular cloud core. “Second generation” coagulation occurs (Mizuno 1989), which also influences the opacity and the dynamics of the disk. For this study we calculated the dust evolution during the formation of a protostellar accretion disk using a multi-component radiation hydrodynamics code, an improved version of an RHD code designed for a single component (Yorke, Bodenheimer, & Laughlin 1995; Yorke & Bodenheimer 1999). Different dust models are applied and the influence of the dust–dust sticking strength is investigated. Brownian motion, turbulence, differential radiative acceleration and gravitational sedimentation with size-dependent relaxation time scales are considered as sources of relative velocities between the dust grains. Mean dust opacities are calculated directly from the actual dust size distribution and are continuously updated in the radiation transport module. Finally, a diagnostic frequency- and angle-dependent radiation transport code is used to produce synthetic dust continuum emission maps and spectra which give information on observational consequences. In section 2 we describe our models for the dust and in section 3 we sketch our numerical approach to the problem. The initial conditions for the particular cases considered are introduced in section 4 and the numerical results of the simulations are presented in section 5. In the final section 6 these results are discussed in light of observable consequences and conclusions are drawn. Physics of dust grains in star forming regions ============================================== According to our present picture of the interstellar medium, dust grains more or less uniformly permeate gas clouds, contributing about one percent of their total mass. The grain size distribution follows a power law with an exponent of about $-3.5$ (Mathis, Rumpl, & Nordsieck 1977; hereafter MRN).
MRN estimated minimum and maximum grain sizes of several nm and several $\mu$m, respectively. The constituent dust grains were assumed to be mainly “astrophysical” silicates with some contribution by “graphites”. With this dust model they could fit the overall extinction curve of the diffuse interstellar medium quite well. However, observations of dense molecular cloud cores show that the upper limiting grain size is shifted to larger radii (polarization measurements of Vrba, Coyne, & Tapia 1993; theoretical predictions by Fischer, Henning, & Yorke 1994). Coagulation is assumed to grow large grains in this environment. Three different dust models are considered. First, we treat the dust as compact spherical particles. For this model there are several theoretical studies which deal with dust opacities (Yorke 1988), sticking strengths (Chokshi, Tielens, & Hollenbach 1993), the physics of dust shattering (Jones, Tielens, & Hollenbach 1996), and gas–dust coupling strengths (Yorke 1979). Thus, the basic physical properties are reasonably well defined, although more recent experimental work indicates that the critical sticking velocities tend to be higher than theoretical estimates by about a factor of 10 (Poppe & Blum 1997). We next consider fractal BPCA grains and BCCA grains. Ossenkopf (1993) developed a theoretical dust model for these fluffy particles and provided analytical expressions for the gas–dust and the dust–dust interaction cross sections as a function of the compact volume of the grains. Henning & Stognienko (1996) have calculated opacities for these fractal dust particles and Wurm (1997) generalized the sticking strength of Chokshi et al. (1993) to fractal coagulates. Again, recent experimental studies indicate critical sticking velocities to be about an order of magnitude larger than the theoretical values (Poppe & Blum 1997).
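A familiar consequence of the MRN exponent of $-3.5$ can be sketched numerically: the total dust mass resides mainly in the largest grains, while the geometric cross section is dominated by the smallest ones. The size limits below are illustrative assumed values, not those of our models:

```python
# Illustrative sketch (size limits assumed): for n(a) da ~ a^-3.5 da, the mass
# moment  int a^3 n(a) da ~ a^0.5  is dominated by the largest grains, while
# the cross-section moment  int a^2 n(a) da ~ -a^-0.5  is dominated by the
# smallest ones.
def mrn_moment(k, a_min, a_max, q=3.5):
    """Integral of a^k * a^-q da between a_min and a_max (common norm dropped)."""
    e = k - q + 1.0
    return (a_max**e - a_min**e) / e

a_min, a_max = 5e-7, 5e-4          # several nm to several tenths of a mm (cm)
mass = mrn_moment(3, a_min, a_max)
mass_top_decade = mrn_moment(3, a_max / 10.0, a_max)
area = mrn_moment(2, a_min, a_max)
area_bottom_decade = mrn_moment(2, a_min, 10.0 * a_min)

assert mass_top_decade / mass > 0.6     # ~70% of the mass in the top size decade
assert area_bottom_decade / area > 0.6  # ~70% of the area in the bottom decade
```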
Compact spherical grains {#SEcompact} ------------------------ ### Opacities and radiative acceleration ![Specific extinction coefficient of compact spherical dust grains with sizes of 5nm ([*solid line*]{}), 0.1$\mu$m ([*dotted line*]{}), 5$\mu$m ([*dashed line*]{}) and 0.2mm ([*dot-dashed line*]{}).[]{data-label="PIopac1"}](f01.ps){width="78mm"} Because dust is the predominant source of extinction in the temperature, density, and wavelength regimes considered (e.g. Yorke & Henning 1994), we have included only the dust’s contribution to opacity and radiative acceleration. The specific extinction $\kappa^{\rm ext}_{\lambda,i}$ of the component $i$ at wavelength $\lambda$ is proportional to the specific cross section of the spherical grains: $$\begin{aligned} \kappa^{\rm ext}_{\lambda,i}&=&Q^{\rm ext}_{\lambda,i} {\rm\pi} a_i^2 / m_i \\ \kappa^{\rm ext}_\lambda&=&\sum_{i=1}^N \varrho_i \kappa^{\rm ext}_{\lambda,i} / \sum_{i=1}^N\varrho_i \\ \kappa^{\rm p}_{\lambda,i}&=&\kappa^{\rm ext}_{\lambda,i} \ (1-A_{\lambda,i} g_{\lambda,i})\end{aligned}$$ The extinction efficiency factor $Q^{\rm ext}_{\lambda,i}$, the albedo $A_{\lambda,i}$, and the asymmetry parameter $g_{\lambda,i}=\left<\cos{\Theta_\lambda}\right>_i$ are calculated from Mie-theory using the dielectric constants for “astronomical” silicates given by Draine & Lee (1984). The net specific extinction of the dust grains $\kappa^{\rm ext}_\lambda$ is the weighted sum over the different constituent grain sizes (Fig. \[PIopac1\]). Here, $\kappa^{\rm p}_{\lambda,i}$ is the specific radiation pressure opacity needed to determine the interaction of the dust particles with the stellar radiation flux ${\bf F}_\lambda$.
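The density-weighted mean extinction of the mixture, $\kappa^{\rm ext}_\lambda = \sum_i \varrho_i \kappa^{\rm ext}_{\lambda,i} / \sum_i \varrho_i$, can be sketched as follows; the bin values are placeholders, not the Draine & Lee silicate opacities:

```python
# Minimal sketch of the density-weighted mean extinction of the second
# equation above.  The numbers are placeholder values for illustration only.
def mean_extinction(rho, kappa):
    """Density-weighted specific extinction of an N-component grain mixture."""
    assert len(rho) == len(kappa)
    return sum(r * k for r, k in zip(rho, kappa)) / sum(rho)

rho_i = [1e-14, 5e-15, 1e-15]      # dust mass densities per size bin (g cm^-3)
kappa_i = [2.0e4, 5.0e3, 8.0e2]    # kappa_ext per size bin (cm^2 g^-1)
kappa_mean = mean_extinction(rho_i, kappa_i)
assert min(kappa_i) < kappa_mean < max(kappa_i)   # lies between the bin values
```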
The radiative acceleration of the dust particles is given by: $$\left(\frac{\partial{\bf v_i}}{\partial t}\right)_{\rm rad} = \frac 1 {\rm c} \int_0^\infty\kappa^{\rm p}_{\lambda,i}{\bf F_\lambda}\, d\lambda$$ ### Sticking of grains {#SEspstick} Sticking of the spherical dust grains can be described by application of the theory of elasticity (Chokshi et al. 1993). One determines the binding energy of two spheres brought together under consideration of elastic wave dissipation in the two bodies. The resultant binding energy is set equal to the relative kinetic energy of the two colliding particles. This gives a critical sticking velocity, above which coagulation ceases and the particles bounce. Thus, $v_{\rm stick}$ is given by: $$\begin{aligned} \label{eq-stickE} E_{\rm stick} \!\!\!\!&=&\!\!\!\! 9.6\ \gamma^{5/3}\ {\rm R}^{4/3}\ {\rm E}^{-2/3} \\ \label{eq-stickv} v_{\rm stick} \!\!\!\!&=&\!\!\!\! \sqrt{2\, E_{\rm stick} / \mu}\end{aligned}$$ The constants $\gamma$ and E denote the surface energy per unit area and Young’s modulus of the dust material, respectively. ${\rm R}=r_1 r_2 / (r_1 + r_2)$ is the reduced radius of the contact surfaces and $\mu =m_1 m_2 / (m_1 + m_2)$ is the reduced mass of the two colliding dust particles. This expression differs by a numerical factor of about 3 from the original formula by Chokshi et al. (1993) when applied to spheres, because a correction of Dominik & Tielens (1997) has been applied (see discussion by Wurm 1997). Note that Beckwith, Henning, and Nakagawa (2000) give a slightly different formula for $v_{\rm stick}$ (quoting the same authors as above): $$\label{eq-stick3} v_{\rm stick} = 1.07 \frac{\gamma^{5/6}}{{\rm R}^{5/6}\ {\rm E}^{1/3}\ \rho_g^{1/2}} \; ,$$ where $\rho_g$ is the specific mass of the grain material. The formulae \[eq-stickE\]/\[eq-stickv\] and Eq. 
\[eq-stick3\] are identical when $r_1=r_2$ and $m_1=m_2$ but differ by 18% (50%) when the mass ratio of the colliding particles is 10 (100) and by a factor of almost two for extreme dust mass ratios. An ice layer on the grains’ surfaces enhances the critical sticking velocity by more than an order of magnitude compared to a pure silicate surface. The ice surfaces are destroyed at dust temperatures of about 125K. Thus, pure silicate grains are assumed where the dust temperature exceeds the ice melting limit. According to the experiments of Poppe & Blum (1997) the critical sticking velocities are increased by a factor of 10 for the simulations. ### Grain shattering When the relative velocity rises above a critical velocity $v_{\rm crit}$, the dust particles are shattered. This is a gradual process, starting with crater formation on the grains’ surfaces and ending with total disruption of projectile and target. We assume a critical shattering velocity $v_{\rm crit}$ for silicates (see Jones et al. 1996): $$v_{\rm crit} = 2.7\;{\rm km\;s^{-1}}$$ Jones et al. (1996) also give analytic expressions for the ejected mass during a shattering encounter. When half of the mass of the target particle (the larger of two colliding dust grains) is shocked, they assume total disruption; the debris particles are assumed to follow a power law size distribution with an exponent of $-3.0$ to $-3.5$. In our simulations we choose an exponent $-3.5$. The upper size limit of the fragments first grows with increasing collision velocity (i.e. increasing crater volume) and starts to decrease inversely proportional to the relative velocity when disruption starts (see appendix). ### Gas–dust interaction {#SEgasdusti} The dust grains are coupled to the gas by dynamical friction. 
At the gas densities found in star-forming regions, the Epstein coupling law (Epstein 1923; Weidenschilling 1977) applies, because the mean free path $\lambda_f$ of the gas molecules is large compared to the radii of the dust particles: $$\begin{aligned} \lambda_f&=& 1 / (n_{\rm gas}\sigma_{\rm gas}) \cr &\approx& 10^5\,{\rm cm} \left[\frac{\varrho}{\rm 10^{-14}\,g\,cm^{-3}}\right]^{-1}\gg a_i\end{aligned}$$ Here, we use an extension of the Epstein law for superthermal relative velocities (Yorke 1979), which often occur in later stages of the protostellar collapse: $$\left(\frac{\partial{\bf v_i}}{\partial t}\right)_{\rm ww} \hspace{-1mm}= \frac{4}{3}\varrho\frac{\sigma_i}{m_i} \sqrt{c_s^2+({\bf v}-{\bf v_i})^2}({\bf v}-{\bf v_i})$$ The interaction term $(\partial{\bf v_i} / \partial t)_{\rm ww}$ describes the coupling of dust particles with cross section $\sigma_i$ and mass $m_i$ to gas of density $\varrho$ and isothermal sound speed $c_s$. Fractal grains {#SEfrac} -------------- ### Mass to radius relation {#SEfracrad} There is no simple relation between mass and radius for fractal grains as there is for compact spherical dust particles. Here, a grain model which describes the transition from the compact constituent grains (at the lower radius limit of the size distribution) to the fractals (at the higher end) must be applied.
For BPCA and BCCA grains Henning & Stognienko (1993) give an analytic expression for the filling factor $f$ in relation to the extremal radius $r_{ex,i}$ of the grains: $$\begin{aligned} \!\!\!m_i\!\!\!\!&=&\!\!\!\!\frac{4{\rm\pi}}{3}\varrho_{\rm bulk}\ r_{ex,i}^3\ f_i\\ \!\!\!f_i^{PCA}\!\!\!\!&=&\!\!\!\!0.0457 \left\lbrack1+696 \left(\frac{r_{ex,i}}{0.01\,{\rm\mu m}}\right)^{\!-3.93}\right\rbrack\\ \!\!\!f_i^{CCA}\!\!\!\!&=&\!\!\!\!0.279 \left(\frac{r_{ex,i}} {0.01\,{\rm \mu m}}\right)^{-1.04}\cr &&\hspace{1ex}\left\lbrack 1+4.01\left(\frac{r_{ex,i}} {0.01\,{\rm \mu m}}\right)^{-1.34}\right\rbrack\end{aligned}$$ The extremal radius $r_{ex,i}$ is defined as the radius of the minimal envelope sphere covering a fractal particle with bulk density $\varrho_{\rm bulk}$. The constituent grains are assumed to have radii of 0.01$\mu$m. ### Opacities and radiative acceleration ![Specific extinction coefficient of fractal dust particles with sizes of 5nm ([*solid line*]{}), 0.1$\mu$m ([*dotted line*]{}), 5$\mu$m ([*dashed line*]{}), and 0.2mm ([*dot-dashed line*]{}).[]{data-label="PIopac2"}](f02.ps){width="78mm"} Dust opacities for coagulated grains, averaged over a given size distribution, have been calculated by Henning & Stognienko (1996). R. Schräpler (private communication) kindly provided us, with the authors’ permission, with their basic size-dependent specific extinction coefficients. We used the opacities for olivine (\[Fe,Mg\]$_2$SiO$_4$) dust grains (Fig. \[PIopac2\]). ### Sticking of grains {#sticking-of-grains} In contrast to compact spherical grains, the fractal coagulates stick at their limb structures, which are built up by the small constituent grains. The reduced radius ‘R’ in Eq. \[eq-stickE\] for the critical sticking energy must be calculated using the radii of these constituent grains. In this manner the formula for the sticking velocity by Chokshi et al. (1993) can be generalized to fractal particles (see section \[SEspstick\]).
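The consistency of the two sticking-velocity expressions quoted in section \[SEspstick\] can be checked numerically. In the sketch below the silicate-like material constants (surface energy, Young's modulus, bulk density) are assumed illustrative values, not necessarily those used in our simulations:

```python
import math

# Numerical comparison of the two sticking-velocity formulae from the
# compact-grain section (Eqs. [eq-stickE]/[eq-stickv] vs. Eq. [eq-stick3]).
# Material constants are assumed silicate-like values: surface energy gamma
# (erg cm^-2), Young's modulus E (dyn cm^-2), bulk density rho_g (g cm^-3).
GAMMA, YOUNG, RHO_G = 25.0, 2.8e11, 3.3

def v_stick_energy(r1, r2):
    """v_stick from E_stick = 9.6 gamma^(5/3) R^(4/3) E^(-2/3), v = sqrt(2 E_stick / mu)."""
    R = r1 * r2 / (r1 + r2)
    m1 = 4.0 / 3.0 * math.pi * RHO_G * r1**3
    m2 = 4.0 / 3.0 * math.pi * RHO_G * r2**3
    mu = m1 * m2 / (m1 + m2)
    e_stick = 9.6 * GAMMA**(5.0 / 3.0) * R**(4.0 / 3.0) * YOUNG**(-2.0 / 3.0)
    return math.sqrt(2.0 * e_stick / mu)

def v_stick_beckwith(r1, r2):
    """v_stick = 1.07 gamma^(5/6) / (R^(5/6) E^(1/3) rho_g^(1/2))."""
    R = r1 * r2 / (r1 + r2)
    return 1.07 * GAMMA**(5.0 / 6.0) / (R**(5.0 / 6.0) * YOUNG**(1.0 / 3.0) * math.sqrt(RHO_G))

a = 1e-5                                   # 0.1 micron spheres (radius in cm)
assert abs(v_stick_energy(a, a) / v_stick_beckwith(a, a) - 1.0) < 0.01
r_big = a * 10.0**(1.0 / 3.0)              # mass ratio of 10
ratio = v_stick_energy(a, r_big) / v_stick_beckwith(a, r_big)
assert abs(ratio - 1.18) < 0.01            # the ~18% difference quoted in the text
```

For equal spheres the two forms coincide to better than one percent, while a mass ratio of 10 reproduces the 18% discrepancy quoted above.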
### Grain shattering Two approaches were followed. First, grain shattering is treated the same way as for the compact spherical grains. The analysis depends only on material parameters, the relative velocity, and the masses of the colliding particles. Although this assumption may seem inadequate, it poses an upper limit to the grains’ resistance against shattering. Fractals seem to be rather fragile constructs with low binding energy when only van der Waals adhesion is considered. However, experimental studies show that fractal particles are always more resistant against destruction than the predictions of theoretical models (Poppe & Blum 1997; Wurm 1997). Fractals possess a multitude of vibrational excitation modes which provide a wide channel to dissipate the kinetic energy of the impacting grain. As a second approach we adapt the shattering model of Dominik & Tielens (1997): The critical shattering velocity $v_{\rm crit}$ is proportional to the sticking velocity $v_{\rm stick}$ and the number of contact points between the two colliding grains. The values thus obtained are rather low compared to the model of Jones et al. (1996). ### Gas–dust interaction {#gasdust-interaction} The gas–dust interaction also has to be modified due to the fractal structure of BPCA grains. 
In order to find an analytic expression for the effective cross section, Ossenkopf (1993) fitted numerical calculations of the cross section $\sigma$ of coagulates in dependence of their compact volume $V$: $$\begin{aligned} \frac{\sigma}{\sigma_0}&=& a\left(\frac{V}{V_0}\right)^{\frac{2}{3}} \exp{\left\lbrack-\frac{b}{(V/V_0)^c}\right\rbrack}\\ a&=&\left\lbrace 15.2,\ \ \ 2 \le V/V_0 \le 1000\atop 4.7,\ \ \ \ \ \ \ \ \ \ V/V_0 > 1000\right.\cr b&=&\left\lbrace 2.86,\ \ \ 2 \le V/V_0 \le 1000\atop 9.02,\ \ \ \ \ \ \ \ \ V/V_0 > 1000\right.\cr c&=&\left\lbrace 0.096,\ \ 2 \le V/V_0 \le 1000\atop 0.503,\ \ \ \ \ \ \ V/V_0 > 1000\right.\cr\nonumber\end{aligned}$$ For BCCA particles the following expression has been derived: $$\begin{aligned} \frac{\sigma}{\sigma_0}\!\!\!\!\!&=&\!\!\!\!\! \cases{ \raise 1mm\hbox{\rm like\ \ \ BPCA} &\raise 1mm\hbox{$\frac{V}{V_0} < 25$}\cr \lower 7mm\hbox{$0.692\left(\frac{V}{V_0}\right)^{0.95} \left(1+\frac{0.301}{\ln{V/V_0}}\right)$} &\lower 7mm\hbox{$\frac{V}{V_0} \ge 25$}\cr }\end{aligned}$$ Here, the normalization factors $\sigma_0$ and $V_0$ are the cross section and the volume of the (compact) constituent grains $\sigma_0=\pi\cdot (0.01\,\mu{\rm m})^2$ and $V_0=4\pi/3\cdot (0.01\,\mu{\rm m})^3$. The compact volume can be calculated using the filling factor of section \[SEfracrad\]. Thus, a relation between $r_{ex}$ and the cross section $\sigma$ of the grains can be established and used in the interaction term of section \[SEgasdusti\]. ### Dust–dust interaction For the dust–dust interaction (coagulation) yet another radius definition has been introduced. The toothing radius $r_{tooth}$ is defined as half the average distance of the center of mass of two sticking grains. It measures the penetration of two fractal grains and has been determined by Ossenkopf (1993) by fits to numerical calculations which simulated the scan of the “coastline” of the larger dust particle by the smaller one. 
Thus, the collisional cross section for grain–grain collisions $\sigma_{coll}$ reads: $$\begin{aligned} \sigma_{coll}\!\!\!\!&=&\!\!\!\!\sigma^{(1)}\ (1-\zeta^2) \cr &+&\!\!\!\!\left[4\pi-8.3\ (1-\zeta^{-1.22})\right](r_{tooth}^{(2)})^2\end{aligned}$$ with: $$\begin{aligned} \zeta&=&r_{tooth}^{(2)} / r_{tooth}^{(1)} \\ r_{tooth}&=&0.72\ V^{1/3}\ x^{0.21} \left(1-\frac{0.216} {x^{1/3}}\right) \\ x&=&\sigma^3 / V^2\end{aligned}$$ The superscripts $(1)$ and $(2)$ denote the grain with the larger and the smaller toothing radius, respectively. Coagulation and shattering {#SEcoagshat} -------------------------- Adding the different source terms of grain acceleration, it becomes obvious that the dust particles will gain relative velocities with respect to each other. The absolute values of these velocities depend on the dust absorption and scattering cross sections, the radiative flux, the gas density and the specific cross section of the grains: $$\begin{aligned} \left(\frac{\partial{\bf v_i}}{\partial t}\right)\!\!&=&\!\! \left(\frac{\partial{\bf v_i}}{\partial t}\right)_{\rm grav}\!\!\!+ \left(\frac{\partial{\bf v_i}}{\partial t}\right)_{\rm rad}\!\!\!+ \left(\frac{\partial{\bf v_i}}{\partial t}\right)_{\rm ww} \cr \!\!&=& -\nabla\Phi \quad\ \ +\ \ \overline\kappa^{\rm p}_i{\bf F} / c \cr \!\!&+&\!\!\frac{4}{3}\varrho \frac{\sigma_i}{m_i}\sqrt{c_s^2+({\bf v}-{\bf v_i})^2}({\bf v}-{\bf v_i})\end{aligned}$$ In addition to these systematic relative velocities, a random contribution is caused by Brownian motion and turbulence. Because our hydrodynamical grid is too coarse to resolve typical turbulent length scales, and because turbulence requires a three-dimensional treatment, we apply a turbulence model developed by Völk et al. (1980). It is coupled to the turbulent angular momentum transport (Shakura & Sunyaev 1973) by the parameter $\alpha$.
Here, the macroscopic turbulent velocity $v_{\rm turb}^0$ is set to a fraction $\alpha$ of the isothermal sound speed $c_s$, and the macroscopic revolution time scale $t_{\rm turb}^0$ is set to the orbital period $2\pi/\Omega$, with $\Omega$ the orbital frequency: $$\begin{aligned} v_{\rm turb}^0&=&\alpha\ c_s\\ t_{\rm turb}^0&=&2\pi / \Omega\end{aligned}$$ A Kolmogorov-type turbulence spectrum is assumed, whereby the turbulent energy is transported from the largest eddies down to the smallest at a constant rate. The corresponding scales of the smallest eddies are defined where the turbulent Reynolds number $Re_s$ of the gas with viscosity $\eta$ equals one (i.e. the flow becomes laminar, Lang 1974): $$\begin{aligned} v_{\rm turb}^s&=&v_{\rm turb}^0\ Re_0^{-\frac{1}{4}}\\ t_{\rm turb}^s&=&t_{\rm turb}^0\ Re_0^{-\frac{1}{2}}\end{aligned}$$ with: $$Re_0 = \frac{\varrho\ v_{\rm turb}^0\ \lambda_{\rm turb}^0}{\eta}\approx \frac{\varrho\ (v_{\rm turb}^0)^2\ t_{\rm turb}^0}{\eta}$$ The back reaction of the gas turbulence on the dust particles depends on the coupling strength between grains and gas. This strength can be measured by the correlation time scale $\tau_i$: $$\tau_i = |{\bf v} - {\bf v_i}| / (\partial v_i/\partial t)_{\rm ww}$$ According to the analytical fit of Weidenschilling (1984) to the numerical results of Völk et al.
(1980), the random relative velocities between two grains with correlation time scales $\tau_1$ and $\tau_2$ ($\tau_1 \le \tau_2$) can be expressed as follows: $$\delta v_{\rm turb} = \cases{ v_{\rm turb}^s\, \frac{|\tau_1-\tau_2|}{t_{\rm turb}^s} &if $\tau_1, \tau_2 \le t_{\rm turb}^s$ \cr v_{\rm turb}^0 &if $\tau_1\le t_{\rm turb}^0 \le \tau_2$ \cr v_{\rm turb}^0\, \frac{t_{\rm turb}^0(\tau_1+\tau_2)}{2\tau_1\tau_2} &if $t_{\rm turb}^0 \le \tau_1, \tau_2$ \cr v_{\rm turb}^0\, \frac{3\tau_2}{\tau_1+\tau_2}\sqrt{\frac{\tau_2}{t_{\rm turb}^0}} &otherwise \cr }$$ The cutoff of the turbulent eddies at the lower size end of the eddy spectrum is included according to the considerations of Weidenschilling (1984). The contribution of Brownian motion to the random part of the relative velocities between dust grains is only important for small ($a_i < 1\ \mu$m) grains: $$\delta v_{\rm brown}=\sqrt{\frac{8{\rm k}T}{\pi}\,\frac{m_i+m_j}{m_i m_j}}$$ Thus, the dust particles achieve a total relative velocity $\delta v_{i,j}=(\delta v_{\rm syst}^2+\delta v_{\rm turb}^2 +\delta v_{\rm brown}^2)^{1/2}$, where $\delta v_{\rm syst}$ denotes the systematic relative velocities. These relative velocities are then evaluated according to the considerations of section \[SEcompact\] or \[SEfrac\]. If the velocities are sufficiently low, the particles can coagulate. This is mathematically described by the coagulation equation: $$\begin{aligned} \left(\frac{\partial n(m)}{\partial t}\right)_{\rm coag}\!\!\!\!\!\!\!\!
&=&\!\!\!\frac{1}{2}\int\int\alpha(m^{'},m^{''})\,n(m^{'})\,n(m^{''})\cr &&\delta(m-m^{'}-m^{''})\,dm^{'}dm^{''}\cr &-&\!\!\!n(m)\int\!\alpha(m,m^{'})\,n(m^{'})\,dm^{'}\end{aligned}$$ with: $$\alpha(m^{'},m^{''}) = p\ \sigma_{coll}(m^{'},m^{''})\ \delta v(m^{'},m^{''})$$ The variables $n(m)$, $\sigma_{coll}(m^{'},m^{''})$ and $\delta v(m^{'},m^{''})$ are the number density, the relative interaction cross section, and the relative velocity of grains with masses $m$, $m^{'}$ and $m^{''}$, respectively. The sticking probability $p$ controls the onset of bouncing when the relative velocities become too high ($p$ $\rightarrow$ 0). Grain shattering is described by a generalization of the above coagulation equation: $$\begin{aligned} \left(\frac{\partial n(m)}{\partial t}\right)_{\rm shat}\!\!\!\!\!\!\!\!&=& \frac{1}{2}\int\int \beta(m^{'},m^{''})\,n(m^{'})\,n(m^{''}) \cr &&\hspace{-6mm} \gamma(m,m^{'},m^{''}\!\!,\delta v(m^{'}, m^{''}))\,dm^{'}dm^{''} \cr &-&\!\!\!n(m)\int\!\beta(m,m^{'})\,n(m^{'})\,dm^{'}\end{aligned}$$ with: $$\beta(m^{'},m^{''}) = q\ \sigma_{coll}(m^{'}, m^{''})\ \delta v(m^{'}, m^{''})$$ Here, the function $\gamma$ distributes the shattered dust fragments to the appropriate mass bins (see appendix). Again, the shattering probability $q$ controls the onset of shattering above the critical velocity ($q$ $\rightarrow$ 1). The total dust evolution operator (coagulation/shattering) is the sum of both partial operators. Numerical techniques ==================== To simulate the collapse of a gravitationally unstable rotating molecular cloud core we apply a multicomponent radiation hydrodynamics code with detailed dust dynamics (grain drift, coagulation, shattering). To keep the problem tractable axial symmetry is assumed (“2.5 D” in cylindrical coordinates). An explicit nested grid technique is applied to resolve the inner parts of the accretion disk (Yorke & Kaisig 1995). 
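Before discretizing, the ingredients of the collision kernels above can be assembled directly: the toothing cross section, the Weidenschilling (1984) turbulent relative velocity, the Brownian term, and the kernel $\alpha = p\,\sigma_{coll}\,\delta v$. The following Python sketch (cgs units; all function and variable names are ours, not from the paper) is a minimal rendering of these formulas, not the production implementation:

```python
import math

K_BOLTZ = 1.380649e-16  # Boltzmann constant, erg/K

def delta_v_turb(tau1, tau2, v0, t0, Re0):
    """Turbulent relative velocity of two grains with correlation times
    tau1, tau2 (Weidenschilling 1984 fit); v0, t0: largest-eddy velocity
    and turnover time; Re0: turbulent Reynolds number of the largest eddies."""
    if tau1 > tau2:
        tau1, tau2 = tau2, tau1            # enforce tau1 <= tau2
    vs = v0 * Re0 ** -0.25                 # smallest-eddy velocity
    ts = t0 * Re0 ** -0.5                  # smallest-eddy turnover time
    if tau2 <= ts:
        return vs * abs(tau1 - tau2) / ts
    if tau1 <= t0 <= tau2:
        return v0
    if tau1 >= t0:
        return v0 * t0 * (tau1 + tau2) / (2.0 * tau1 * tau2)
    return v0 * 3.0 * tau2 / (tau1 + tau2) * math.sqrt(tau2 / t0)

def delta_v_brown(m1, m2, T):
    """Brownian relative velocity of two grains of mass m1, m2 (g) at T (K)."""
    return math.sqrt(8.0 * K_BOLTZ * T / math.pi * (m1 + m2) / (m1 * m2))

def toothing_radius(sigma, V):
    """Toothing radius fit for a grain with cross section sigma, volume V."""
    x = sigma ** 3 / V ** 2
    return 0.72 * V ** (1.0 / 3.0) * x ** 0.21 * (1.0 - 0.216 / x ** (1.0 / 3.0))

def sigma_coll(sigma1, V1, sigma2, V2):
    """Grain-grain collisional cross section; superscript (1) belongs to
    the grain with the larger toothing radius."""
    r1, r2 = toothing_radius(sigma1, V1), toothing_radius(sigma2, V2)
    if r1 < r2:
        sigma1, r1, r2 = sigma2, r2, r1    # swap so that r1 >= r2
    zeta = r2 / r1
    return sigma1 * (1.0 - zeta ** 2) \
        + (4.0 * math.pi - 8.3 * (1.0 - zeta ** -1.22)) * r2 ** 2

def coag_kernel(sigma1, V1, sigma2, V2, dv, p_stick):
    """Coagulation kernel alpha(m', m'') = p * sigma_coll * delta_v."""
    return p_stick * sigma_coll(sigma1, V1, sigma2, V2) * dv
```

The shattering kernel $\beta$ has the same structure with the sticking probability $p$ replaced by the shattering probability $q$.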
Solution of the coagulation/shattering equation {#SEcoagshatsol}
-----------------------------------------------

To solve the combined coagulation/shattering equation numerically, the dust size distribution is binned into $N$ discrete logarithmically spaced mass intervals. The continuous equation (section \[SEcoagshat\]) is therefore discretized: $$\begin{aligned} \label{eq-dndt} \left(\frac{\partial n_k}{\partial t}\right) \!\!\!\!&=&\!\!\!\! \left(\frac{\partial n_k}{\partial t}\right)_{\rm coag} +\left(\frac{\partial n_k}{\partial t}\right)_{\rm shat}\cr \!\!\!\!&=&\!\!\!\! \frac{1}{2}\sum_{i=1}^N\sum_{j=1}^N(\alpha_{ij}d_{ijk}+\beta_{ij}g_{ijk}) n_i n_j \cr &&\!\!\!\! -n_k\sum_{i=1}^N(\alpha_{ik}+\beta_{ik})n_i\end{aligned}$$ with: $$\begin{aligned} &&d_{ijk} = \cases{ \frac{m_i+m_j}{m_k} &if $m_i+m_j \in [m_k^-,m_k^+]$ \cr 0 &otherwise \cr } \cr &&m_k^- = (m_k + m_{k-1})/2 \quad m_k^+ = m_{k+1}^- \cr &&g_{ijk} = \frac{m_i+m_j}{m_k}\, G_k(m_i,m_j,\delta v_{ij}) \\ &&G_k(m_i,m_j,\delta v_{ij}) \in [0,1] \cr &&\sum_{k=1}^N G_k(m_i,m_j,\delta v_{ij})=1 \nonumber\end{aligned}$$ The distribution of the shattered fragments is given by the discrete distribution function $G_k(m_i,m_j,\delta v_{ij})$ (see appendix). The kernels $\alpha_{ij}$ and $\beta_{ij}$ are the discretized counterparts of $\alpha(m^{'},m^{''})$ and $\beta(m^{'},m^{''})$ (section \[SEcoagshat\]). Substituting backward time differences for all time derivatives, this nonlinear integro-differential equation can be brought into the form: $$\label{eq-DnDt} ({\bf n}^*-{\bf n})/\Delta t={\cal A}({\bf n}^*){\bf n}^*,$$ where ${\bf n}=(n_1,n_2,\ldots,n_N)$ are the (known) densities at the beginning of the time step and ${\bf n}^*$ are the corresponding values after time $\Delta t$ which are to be determined. ${\cal A}({\bf n}^*)$ is a matrix constructed from the right-hand side of Eq. \[eq-dndt\].
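A minimal numerical sketch of this implicit update may clarify the scheme. The Python fragment below treats pure coagulation with a constant kernel on a *linear* mass grid $m_k = (k{+}1)\,m_1$ (so $d_{ijk}$ reduces to 0 or 1 and mass is conserved exactly), and it solves the backward-Euler system by plain fixed-point iteration rather than the multidimensional Newton–Raphson method actually used; the shattering term and logarithmic binning are omitted for brevity:

```python
def coag_rhs(n, kernel):
    """dn_k/dt of the discrete coagulation equation on a linear mass grid
    m_k = (k+1)*m_1: bins i and j merge exactly into bin i+j+1.  Mergers
    that would leave the grid are switched off, keeping the sketch
    strictly mass-conserving."""
    N = len(n)
    dn = [0.0] * N
    for i in range(N):
        for j in range(N):
            k = i + j + 1                  # target bin of the merged grain
            if k >= N:
                continue
            rate = kernel * n[i] * n[j]
            dn[k] += 0.5 * rate            # gain of merged grains
            dn[i] -= rate                  # loss of grain i; pair (j,i) handles j
    return dn

def implicit_step(n, kernel, dt, tol=1e-13, itmax=500):
    """One backward-Euler step (n* - n)/dt = A(n*) n*, solved here by
    fixed-point iteration n* <- n + dt * rhs(n*), which converges for
    moderate dt (the paper uses Newton-Raphson instead)."""
    ns = list(n)
    for _ in range(itmax):
        new = [nk + dt * dnk for nk, dnk in zip(n, coag_rhs(ns, kernel))]
        err = max(abs(a - b) for a, b in zip(new, ns))
        ns = new
        if err < tol:
            break
    return ns
```

Repeated application of `implicit_step` conserves the total mass $\sum_k (k{+}1)\,n_k$ to rounding error while the total number density decreases, mirroring the behavior reported below for the full solver.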
The advantages of backward time derivatives are: a) the difference equations are numerically stable for all choices of time steps $\Delta t$, and b) the numerical solution approaches the correct steady-state solution as $\Delta t \rightarrow \infty$. We iteratively solve the implicit equation \[eq-DnDt\] for each cell in the numerical grid using a multidimensional Newton-Raphson algorithm. In order to optimize convergence we adaptively reduce the time step with respect to the relatively large time step used for the explicit hydrodynamics. We tested our solver by calculating the solution of simple coagulation problems for which analytic solutions exist (Wetherill 1990). The correspondence was very good at high resolutions of mass binning. At resolutions comparable to those used in the collapse calculations the accuracy suffers. Sharply peaked mass distributions and discontinuous mass distributions become more diffuse with the passage of time. We do not consider this to be a serious flaw, however, because of the large uncertainties of the details of grain growth and destruction. The total mass of the dust component was always conserved within rounding errors (Suttner, Yorke, & Lin 1999). The multicomponent radiation hydrodynamic code ---------------------------------------------- An explicit/implicit method is used to solve the coupled hydrodynamic equations and the equations of radiation transport. The system is separated applying operator splitting, and the partial equations are then discretized explicitly or implicitly according to stability considerations. The dust size distribution is binned into $N$ mass intervals and the hydrodynamic equations for mass and momentum conservation are computed for the gas component and for each binned dust component (grain size) simultaneously. 
The equations of mass conservation for gas ($\varrho$) and dust particles ($\varrho_k$) can be written: $$\begin{aligned} &&\hspace{-7mm} \frac{\partial\varrho}{\partial t}+ \nabla\cdot(\varrho{\bf v})=0\\ &&\hspace{-7mm} \frac{\partial\varrho_k}{\partial t}+ \nabla\cdot(\varrho_k {\bf v}_k)= -\varrho_k\sum_{i=1}^N(\alpha_{ik}+\beta_{ik})\frac{\varrho_i}{m_i} \\ &&+\frac{1}{2}\sum_{i=1}^N\sum_{j=1}^N(\alpha_{ij}d_{ijk}+\beta_{ij}g_{ijk}) \frac{\varrho_i}{m_i}\frac{\varrho_j}{m_j}\cdot m_k\end{aligned}$$ Here, dust coagulation and shattering have been included in the equation of continuity for the dust particles as source/sink terms. The equations for momentum conservation in cylindrical coordinates for gas and dust grains are: $$\begin{aligned} &&\hspace{-5mm} \frac{\partial(\varrho v_z)}{\partial t}+ \nabla\cdot(\varrho v_z{\bf v})= -\frac{\partial p}{\partial z}-\varrho\frac{\partial \Phi}{\partial z} \\ &&\hspace{6mm}+\varrho\sum_{k=1}^N I_k (v_{k,z}-v_z)-( \nabla\cdot {\cal Q}_{\rm visc})_z\\ &&\hspace{-5mm} \frac{\partial(\varrho v_r)}{\partial t}+ \nabla\cdot(\varrho v_r{\bf v})= -\frac{\partial p}{\partial r}-\varrho\frac{\partial\Phi}{\partial r}\\ &&\hspace{6mm}+\varrho\sum_{k=1}^N I_k (v_{k,r}-v_r)+\varrho \frac{v_\phi^2}{r}-( \nabla\cdot {\cal Q}_{\rm visc})_r\\ &&\hspace{-5mm} \frac{\partial(\varrho v_\phi r)}{\partial t}+ \nabla\cdot(\varrho v_\phi r {\bf v})=\\ &&\hspace{6mm}\varrho\sum_{k=1}^N I_k (v_{k,\phi}-v_\phi)\ r -( \nabla\cdot {\cal Q}_{\rm visc})_\phi\ r\\ &&\hspace{-5mm} \frac{\partial(\varrho_k v_{k,z})}{\partial t}+ \nabla\cdot(\varrho_k v_{k,z}{\bf v}_k)=\varrho_k\frac{\overline\kappa^{\rm p}_k F_z}{\rm c} -\varrho_k \frac{\partial \Phi}{\partial z}\\ &&\hspace{6mm}-\varrho I_k (v_{k,z}-v_z)\\ &&\hspace{-5mm} \frac{\partial(\varrho_k v_{k,r})}{\partial t}+ \nabla\cdot(\varrho_k v_{k,r}{\bf v}_k)=\varrho_k\frac{\overline\kappa^{\rm p}_k F_r}{\rm c} -\varrho_k \frac{\partial \Phi}{\partial r}\\ &&\hspace{6mm}-\varrho I_k (v_{k,r}-v_r)
+\varrho_k\frac{v_{k,\phi}^2}{r}\\ &&\hspace{-5mm} \frac{\partial(\varrho_k v_{k,\phi}r)}{\partial t}+ \nabla\cdot (\varrho_k v_{k,\phi}r{\bf v}_k)=\\ &&\hspace{6mm}-\varrho I_k (v_{k,\phi}-v_\phi)\ r\end{aligned}$$ with the interaction term: $$I_k = \frac{4}{3}\varrho_k\frac{\sigma_k}{m_k}\sqrt{c_s^2+({\bf v}-{\bf v}_k)^2}$$ The advection part of these equations is solved by an explicit second-order scheme. The gas–dust interaction terms need an implicit treatment because of the stiffness of the problem. The tensor ${\cal Q}_{\rm visc}$ which appears in several of the above equations denotes the viscous stress tensor of the $\alpha$-viscosity (Shakura & Sunyaev 1973): $${\cal Q}_{\rm visc} = \varrho \nu \, {\bf e} \; ,$$ where ${\bf e}$ is the shear tensor. $\nu$ is calculated from $$\nu = \alpha c_s H \approx 0.7\, \alpha c_s^2 / \Omega \; ,$$ where the density scale height $H$ has been replaced by an expression valid for equilibrium “thin” disks. For the calculation described in this study, both the sound speed $c_s$ and the angular velocity $\Omega$ are approximately constant along the surfaces of cylinders within the equilibrium disk (cf. the theoretical discussion in Tassoul 1978). Thus, $\nu$ varies principally as a function of the radial distance within the disk. The viscosity as described above is applied to the entire grid. Within the accretion disk it modifies the flow by allowing angular momentum to be transferred radially outwards within the disk. The parameter $\alpha$ is continuously adjusted according to the Toomre stability criterion and is allowed to vary within the range of $10^{-3}$ to 0.1 (cf. Yorke & Bodenheimer 1999). We define the Toomre parameter $Q_T$ within the accretion disk for each time step by the minimum value of: $$Q_T = \hspace{1mm} {\rm min} \hspace{-7mm}\lower 2.4mm\hbox to 11mm{\scriptsize $r\! \le\!
R_{\rm disk}$} \hspace{1mm} \left[\; \Omega c_s / \pi G \Sigma\; \right]_{z=0} \; ,$$ where $\Sigma(r) = \int \varrho dz$ is the disk’s surface density. If $Q_T$ drops below 1.3 (i.e. nonradial instabilities can be expected to occur), we increase $\alpha$ by a small factor (typically 1.002) per time step, up to its maximum value of 0.1. If $Q_T$ rises above 1.5 (the disk becomes stable), $\alpha$ is reduced by a small factor (typically 0.999). Generally, $Q_T$ ‘hovers’ at either 1.3 or 1.5 and $\alpha$ levels off at values somewhere between $\alpha \approx 0.01$ (for $M = 1$ M$_\odot$) and $\alpha \approx 0.08$ (for $M = 10$ M$_\odot$). Radiation transport is calculated within the framework of the grey flux-limited diffusion approximation (Levermore & Pomraning 1981; Yorke & Bodenheimer 1999): $$\label{eq-FLD} \frac{\partial a T_d^4}{\partial t} = \nabla\cdot \left( \frac{{\rm c}\lambda_R \; \nabla a T_d^4 } {\sum_{k=1}^N\overline\kappa_k(T_d)\varrho_k} \right) + L_*\delta(V_1)$$ with flux limiter $\lambda_R$, Rosseland mean opacity $\overline\kappa_k(T_d)$, the radiation constant $a$, and the luminosity of the central source $L_*$ (treated as an additional source term within the volume $V_1$ of the innermost grid cell): $$\begin{aligned} \delta(V_1) \!\!\!\!&=&\!\!\!\! \cases{ 1/V_1 &if innermost cell \cr 0 &otherwise} \\ \lambda_R \!\!\!\!&=&\!\!\!\! \frac{1}{\xi}\left(\coth(\xi)-\frac{1}{\xi}\right)\\ \xi \!\!\!\!&=&\!\!\!\! \frac{| \nabla T_d^4 |} {T_d^4 \sum_{k=1}^N\overline\kappa^{\rm ext}_k(T_d)\varrho_k}\\ \frac{1}{\overline\kappa_k(T)} \!\!\!\!&=&\!\!\!\! \int_0^\infty\frac{1} {\kappa^{\rm ext}_{\lambda,k}} \frac{d{\rm B}_\lambda}{dT}\,d\lambda\left/ \int_0^\infty\frac{d{\rm B}_\lambda}{dT}\,d\lambda\right.\end{aligned}$$ Here, ${\rm B}_\lambda ={\rm B}_\lambda(T)$ is the Planck function. Because we are considering grey radiation transfer only, the equilibrium temperature of each dust grain is identical to the radiation temperature $T_d$.
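Two small pieces of the scheme just described lend themselves to a direct sketch: the per-time-step control of the viscosity parameter $\alpha$ via the Toomre parameter, and the Levermore–Pomraning flux limiter. The Python fragment below (names ours; the series cutoff for small $\xi$ is our numerical safeguard, not from the paper) is a minimal rendering:

```python
import math

def update_alpha(alpha, Q_T, f_up=1.002, f_down=0.999,
                 Q_lo=1.3, Q_hi=1.5, a_min=1e-3, a_max=0.1):
    """One control step: raise alpha when the disk approaches nonradial
    instability (Q_T < 1.3), lower it when the disk is clearly stable
    (Q_T > 1.5), and clamp to the allowed range [1e-3, 0.1]."""
    if Q_T < Q_lo:
        alpha *= f_up
    elif Q_T > Q_hi:
        alpha *= f_down
    return min(max(alpha, a_min), a_max)

def nu_visc(alpha, c_s, Omega):
    """alpha-viscosity nu = alpha c_s H, with H ~ 0.7 c_s / Omega for an
    equilibrium thin disk."""
    return 0.7 * alpha * c_s**2 / Omega

def flux_limiter(xi):
    """Levermore-Pomraning flux limiter lambda_R = (coth xi - 1/xi)/xi.
    Tends to 1/3 in the optically thick limit xi -> 0 and to 1/xi in the
    free-streaming limit xi -> infinity."""
    if xi < 1e-6:
        return 1.0/3.0 - xi*xi/45.0     # series expansion avoids 0/0
    return (1.0/math.tanh(xi) - 1.0/xi) / xi
```

Because $\alpha$ changes by only $\sim 0.1$–$0.2\,$% per step, the control loop settles slowly, which is why $Q_T$ "hovers" at one of the two thresholds.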
To solve equation \[eq-FLD\] for $T_d$ an implicit ADI procedure is used (Douglas & Rachford 1956). By contrast, the gas is poorly coupled to the radiation field due to its low opacity. The dust grains interact with the gas by inelastic collisions, which contribute a cooling term $\Lambda$ to the equation for the internal energy density of the gas. The dissipation of viscous energy leads to an additional gas heating term ${\cal Q}_{\rm visc}:\nabla {\bf v}$: $$\begin{aligned} \frac{\partial\epsilon}{\partial t}+ \nabla\cdot(\epsilon{\bf v}) \!\!\!\!&=&\!\!\!\! -p \nabla\cdot{\bf v}-\Lambda(\varrho,\varrho_k,T,T_d) \cr &&\!\!\!\! + {\cal Q}_{\rm visc}:\nabla {\bf v}\end{aligned}$$ Assuming an energy transfer of ${\rm k}(T_d-T)$ per collision of a gas molecule with a dust grain, the cooling function becomes ($\mu_0$ is the mean molecular weight of the gas): $$\Lambda(\varrho,\varrho_k,T,T_d) = \frac{\varrho}{\mu_0 m_H}\sum_{k=1}^N \frac{\varrho_k}{m_k}\sigma_k\sqrt{\frac{8{\rm k}T}{\pi\mu_0 m_H}}\ {\rm k}(T_d-T)$$ The equation of state $p(\varrho,T)$ and the internal energy $\epsilon(\varrho,T)$ of the gas component assume a molecular gas and include dissociation of the H$_2$ molecules above $\approx$ 2000 K (Black & Bodenheimer 1975). Finally, the gravitational potential of the molecular cloud is calculated by solving the Poisson equation, again using ADI: $$\Delta\Phi = 4\pi G\ (\varrho+\sum_{k=1}^N\varrho_k) \; .$$

Initial and boundary conditions
===============================

  Model       Dust    $M_{\rm c}$ \[M$_\odot$\]   $\Omega$ \[$10^{-12}\,$s$^{-1}$\]   $t_{\rm ff}$ \[yr\]
  ----------- ------- --------------------------- ----------------------------------- ---------------------
  1MS         comp.   1                           1                                   8635
  3MS         comp.   3                           3                                   4985
  5MS         comp.   5                           4                                   3870
  10MS        comp.   10                          5                                   2730
  1MS\_PCA    BPCA    1                           1                                   8635
  3MS\_PCA    BPCA    3                           3                                   4985
  5MS\_PCA    BPCA    5                           4                                   3860
  10MS\_PCA   BPCA    10                          5                                   2730
  1MS\_CCA    BCCA    1                           1                                   8635
  3MS\_CCA    BCCA    3                           3                                   4985
  5MS\_CCA    BCCA    5                           4                                   3860
  10MS\_CCA   BCCA    10                          5                                   2730

  \[TAinitsimtab\]

We start the numerical simulations with an isothermal, uniformly rotating molecular cloud core with a total mass of $1\,$M$_\odot$ to $10\,$M$_\odot$, a radius of $r=2\times 10^{16}\,$cm and a temperature of $T=20\,$K. This gives an initial free-fall timescale $t_{\rm ff}$ between $\approx 8600$ yr and $\approx 2700$ yr for these configurations. The angular velocities $\Omega$ range between $10^{-12}\,$s$^{-1}$ and $5 \times 10^{-12}\,$s$^{-1}$ (see Table \[TAinitsimtab\]). We consider centrally peaked mass distributions $\varrho \propto 1/(r^2 + z^2)$. The total mass contribution of the dust grains is set to a fraction of $0.25\times 10^{-2}$ of the gas mass (corresponding to the mass contribution of silicates). At the outer boundary of the space integration domain ($\sqrt{r^2+z^2}=2\times 10^{16}\,$cm) the hydrodynamic variables are held constant (no inflow or outflow). This corresponds to the assumption that no material from the parent cloud will enter the portion undergoing collapse. It does not mean that the mass influx rate of dusty material onto the newly formed disk is suddenly cut off after one free-fall time. Instead, it steadily decreases as the material in the outer zones, initially slowed by pressure gradients in the density-peaked distribution, is depleted (see Yorke & Bodenheimer 1999). Even after three free-fall times, there is an appreciable mass influx onto the disk. From other studies (e.g. Mizuno et al. 1988) we know that if the enshrouding molecular cloud can continuously supply small dust grains, the grain size distribution will be affected. This effect will be present to some extent in the studies discussed here.
The issue of the time dependency of the mass influx is a non-trivial one, however, and is beyond the scope of the present investigation. The particle sizes are distributed according to an MRN power law with an exponent of $-3.5$, which can be transformed to a bin mass distribution ${\rm m_{bin}}(m_k)$: $$\begin{aligned} n(a) \!\!\!\!&\propto&\!\!\!\! a^{-3.5} \\ {\rm m_{bin}}(m)d\,{\rm ln}\,m \!\!\!\!&\propto&\!\!\!\! m^{1/6}d\,{\rm ln}\,m\end{aligned}$$ with grain mass $m$. Figure \[dPIspheres\] (upper left panel) shows the initial MRN dust mass distribution. On a logarithmic scale the dust mass increases with increasing bin mass. The grain sizes range from 5nm to 5$\mu$m at the beginning. Above 5$\mu$m the particle density falls off in proportion to ${\rm m_{bin}}^{-1}$ (this corresponds to $n(a)\propto a^{-7}$). The dynamics of these large grains at the upper end of the computed size distribution can be followed throughout the simulation without being relevant for coagulation (provided that the upper end is chosen far enough away from the largest particles produced by coagulation). The integration domain in dust size space ranges from 5nm up to 0.2mm, which spans about 14 orders of magnitude in grain mass. Whenever the dust temperature is low enough to allow an ice coating on the grains’ surfaces, the sticking probabilities are modified accordingly. The innermost cell of the finest grid contains the central protostellar source and requires special treatment. Its total luminosity $L_*$ can be approximated by the sum of the core’s intrinsic luminosity and the luminosity due to accretion of material onto the core: $$L_* = 3L_\odot\left(\frac{M_*}{M_\odot}\right)^3+\frac{3}{4} \frac{GM_*\dot M_*}{R_*}$$ The radius $R_*$ of the central object is held constant at 10$R_\odot$. $\dot M_*(t)$ and $M_*(t) = \int \dot M_*dt$ result from the calculations.
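The initial setup described above can be sketched in a few lines of Python. The grain material density (3.3 g cm$^{-3}$, a typical silicate value) and the sample gas density in the usage below are our assumptions for illustration; the $m^{1/6}$ weighting follows directly from $n(a)\propto a^{-3.5}$:

```python
import math

RHO_GRAIN = 3.3     # g cm^-3, assumed compact-silicate material density
L_SUN = 3.827e33    # erg/s
M_SUN = 1.989e33    # g
R_SUN = 6.96e10     # cm
G = 6.674e-8        # cgs gravitational constant

def mrn_bins(a_min=5e-7, a_max=5e-4, n_bins=30,
             dust_to_gas=0.25e-2, rho_gas=1e-18):
    """Logarithmically spaced mass bins (here restricted to the initial
    MRN size range 5 nm - 5 micron, sizes in cm) carrying an MRN mass
    loading: dust mass per logarithmic mass interval ~ m^(1/6).
    Returns bin masses m_k (g) and dust densities rho_k (g cm^-3)
    normalized to the requested dust-to-gas mass ratio."""
    m_min = 4.0/3.0 * math.pi * RHO_GRAIN * a_min**3
    m_max = 4.0/3.0 * math.pi * RHO_GRAIN * a_max**3
    lm = [math.log(m_min) + (math.log(m_max) - math.log(m_min)) * k / (n_bins - 1)
          for k in range(n_bins)]
    m_k = [math.exp(x) for x in lm]
    w = [m**(1.0/6.0) for m in m_k]          # MRN weight per log-mass bin
    norm = dust_to_gas * rho_gas / sum(w)
    rho_k = [norm * x for x in w]
    return m_k, rho_k

def central_luminosity(M_star, Mdot_star, R_star=10*R_SUN):
    """L* = intrinsic core luminosity + accretion luminosity (cgs)."""
    return 3.0 * L_SUN * (M_star / M_SUN)**3 \
        + 0.75 * G * M_star * Mdot_star / R_star
```

With no accretion, `central_luminosity(M_SUN, 0.0)` reduces to the intrinsic $3\,L_\odot$ term of the formula above.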
Figure \[PIevext\] (left panel, [*solid line*]{}) displays the net specific extinction coefficient for a gas–dust mixture with compact spherical grains. The opacities are calculated for the MRN mass distribution shown in Figure \[dPIspheres\] (upper left panel). Because the dust opacities are calculated using the actual grain size distribution, they vary as a function of time and location during the simulation.

Numerical Simulations
=====================

The following numerical calculations were conducted with three nested grids, each increasing the spatial resolution by a factor of two. The individual grids span $60 \times 60$ zones. The dust size distribution is sampled by $N=30$ discrete dust species. Table \[TAressimtab\] summarizes selected results of the simulations.

  Model       $M_*$ \[M$_\odot$\]   $L_*$ \[L$_\odot$\]   $M_d / M_*$   $t_s / t_{\rm ff}$
  ----------- --------------------- --------------------- ------------- --------------------
  1MS         0.77                  3.7                   0.30          2.5
  3MS         2.2                   87                    0.36          2.3
  5MS         3.7                   194                   0.35          2.1
  10MS        8.2                   1839                  0.22          1.9
  1MS\_PCA    0.77                  4.0                   0.30          2.5
  3MS\_PCA    2.3                   59                    0.30          2.3
  5MS\_PCA    3.7                   270                   0.35          2.2
  10MS\_PCA   7.9                   1639                  0.27          2.2
  1MS\_CCA    0.77                  2.9                   0.30          3.1
  3MS\_CCA    2.2                   80                    0.36          2.5
  5MS\_CCA    3.8                   188                   0.32          2.3
  10MS\_CCA   8.0                   1614                  0.25          2.2

  \[TAressimtab\]

Compact spherical dust grains {#SEsimcompact}
-----------------------------

The first simulation applies the simple “compact spherical grain” dust model (section \[SEcompact\]). The collapse of the rotating molecular cloud core is followed for about $10^4$ yr (about two initial free-fall times). Figure \[sPIspheres\] displays an evolutionary sequence of the gas density and the gas velocity, and Figure \[dPIspheres\] the corresponding total grain mass spectrum. As evident in the lower right panel of Figure \[sPIspheres\], two accretion shock fronts have developed around the protostellar disk.
The central mass and core luminosity have attained values of 2.2 M$_\odot$ and 87 L$_\odot$, respectively. As shown in Figure \[dPIspheres\], coagulation effectively removes the small dust particles from the size distribution. At the high-mass end, dust grains of $\sim 50\,\mu$m have grown by coagulation. However, larger particles do not form during the simulation. The integral size distribution between 5nm and 5$\mu$m varies as $n(a) \propto a^{-3.1}$. Figure \[PIseldist\] (left panel) shows the dust mass spectrum at selected heights above the protostellar accretion disk for a cylindrical distance of 30AU. At the disk midplane (right panel) coagulation is very strong. Large grains are produced at the expense of the small size end of the particle spectrum. At disk radii larger than about 200 AU the effect of coagulation becomes less important. Only the small grains are removed from the size spectrum by coagulation. However, in the accretion shock just above the disk (Fig. \[PIseldist\], left panel) the large grains are depleted relative to the low mass dust particles. The velocities of selected dust grains through the accretion shock at $r=30$AU (Fig. \[PIvshock\], left panel) show that dust grains of radii $\approx 1\,\mu$m and above are coupled only loosely to the gas, so that significant relative velocities of several km s$^{-1}$ between these grains and the smaller ones are achieved. This implies that coagulation is inhibited and shattering might occur (the threshold velocity is 2.7 km s$^{-1}$). The densities are so low, however, that the shattering time scale is $\sim 10^4$ yr. Thus, the depletion of the large grains must be attributed to the fact that they pass quickly through this zone, whereas the smaller grains slow down significantly. This size-dependent gas–grain drift lowers the dust to gas mass ratio by more than a factor of 2 between the two accretion shocks (shown for the BPCA fractals, see Fig. \[PIpcafractals\], right panel).
Note the differing velocities above the outer accretion shock ($z\approx 170$AU), caused by the size dependence of the grain opacity and thus by differential radiative acceleration. Without radiative acceleration the grains would fall towards the equatorial plane faster than the gas, because they are not pressure supported. Outside the outer accretion shock the grains’ mass distribution is still well approximated by the initial MRN mass distribution (Fig. \[PIseldist\], left panel). In the outer regions of the cloud at the equatorial plane the infalling material is shielded from the central star by the disk. Hence, the grains do indeed accrete with higher velocities than the gas component (Fig. \[PIvshock\], right panel). To quantify the effect of a modified dust size spectrum on the optical properties of the protostellar matter in the accretion disk, Figure \[PIevext\] (left panel) compares the net specific extinction coefficient for several locations in the accretion disk. Whereas the depletion of the large grains behind the accretion shock introduces only minor modifications to the extinction coefficient, coagulation at the disk’s midplane causes an opacity reduction of more than an order of magnitude for the near infrared to UV extinction. From $\lambda=0.1$cm to 0.1mm the extinction coefficient is increased by approximately the same amount. This behaviour is indicative of the migration of the peak of the grain mass distribution to higher masses due to coagulation. To drive the coagulation beyond the initial upper grain size limit of $5\,\mu$m, systematic relative velocities are needed, such as those which result from differential radiative acceleration, relaxation behind the accretion shock, and gravitational sedimentation. Figure \[PIevext\] (right panel) displays the resulting dust mass distribution when these contributions to the relative motions are neglected.
Obviously, turbulence and Brownian motion are sufficient to remove the small grains, but they are not able to build up $\mu$m-sized particles quickly. Whereas for cloud clump masses of 1M$_\odot$ the differential radiative acceleration of grains is negligible, it eventually becomes the most important mechanism for creating velocity differences between grains outside the accretion shock fronts when larger clump masses are considered.

Influence of the sticking properties of the grains {#SEinflstick}
--------------------------------------------------

How is coagulation affected by the sticking strength of the dust particles? As pointed out in the introduction, this material property is not yet well understood. Reports on experimental studies indicate larger values for the critical sticking velocity than theoretical models predict (Poppe & Blum 1997). Ices on the grain surfaces play an important role (Supulver et al. 1997). To investigate this effect we conducted comparison simulations with different sticking strengths. First, the critical sticking velocities were reduced to the conservative theoretical values (without the factor 10 correction for the experimental results, see section \[SEspstick\]). Figure \[PImiestick\] (left panel) displays the total mass distribution. Obviously, grain growth is reduced; the maximum grain mass is smaller by a factor of about 10. When the critical sticking velocity is set to infinity, i.e. the grains stick at every encounter, grains up to the actual bin limit with radii of about 0.2mm are grown. This demonstrates that the material parameters play an extremely important role in defining the upper mass limit up to which the dust grains are able to grow during the formation of an accretion disk.

Fractal BPCA particles
----------------------

Our next approximation treats the dust grains as fractal coagulated particles (BPCA particles). The same initial and boundary conditions as in the previous calculation are used.
At the end of the simulation the mass of the central object is 2.3M$_\odot$ with a luminosity of 59L$_\odot$. The overall evolution is qualitatively similar to the calculation with compact dust grains (Fig. \[sPIspheres\] and Fig. \[dPIspheres\]). Figure \[PIpcafractals\] shows the mass density and velocity of the gas component (left panel) and the dust to gas mass ratio (right panel). Again, an accretion shock has developed. In this accretion shock the dust to gas mass ratio is reduced by a factor of about 3 compared to the initial value due to size-dependent advection. The specific cross section of large BPCA coagulates is about a factor of 5 larger than for compact spheres of the same mass (section \[SEfrac\]). Only particles with radii of about 10$\mu$m and larger decouple from the motion of the gas flow in the accretion shock (Fig. \[PIpcavel\], left panel; compare to Fig. \[PIvshock\]). Coagulation in the equatorial plane proceeds at a rate similar to that in the simple compact spherical dust model (Fig. \[PIpcavel\], right panel). As in the case of spherical grains, no dust particles larger than several tens of $\mu$m are grown by coagulation. This can again be attributed to the finite critical sticking velocity (see section \[SEinflstick\]).

Fractal BCCA particles
----------------------

Finally, BCCA grains are the fluffiest particles dealt with in these simulations. Figure \[PIccafractals\] (left panel) shows the density and the velocity of the gas component in Model 3MS\_CCA. The overall distribution is similar to the previously discussed models (3MS and 3MS\_PCA). However, grain coagulation is extremely efficient. Grains as large as 0.2mm (at the limiting end of the binning) are grown (Fig. \[PIccafractals\], right panel). Almost all the grain mass resides in the most massive bin. Because of the fluffy structure of the BCCA grains the dust remains well coupled to the gas, even in the accretion shock (Fig. \[PIccavel\], left panel).
Thus, relative velocities between the dust particles remain very low. The optical properties of the gas–dust mixture at several locations in the accretion disk are plotted in Figure \[PIccavel\] (right panel). In the equatorial plane the net specific extinction coefficient $\kappa^{\rm ext}_\lambda$ is lowered by more than an order of magnitude from the near infrared to UV wavelengths. From 1mm to 100$\mu$m the extinction is enhanced. In the accretion shock only minor modifications can be ascertained. A theoretical shattering model differing somewhat from the one discussed above has been developed by Dominik & Tielens (1997). In order to test its effect on our simulations we used this model to compute the evolution of BCCA grains. According to Dominik & Tielens (1997) the shattering velocity is proportional to the critical sticking velocity. It also depends on the number of contact points between the two colliding dust grains. In our ignorance of this quantity we assume that two dust grains always have 10 contact points. Figure \[PIccastick\] (left panel) displays the density and velocity of dust grains with reduced radii $r_{ex}=5\,\mu$m. In the accretion shock near the axis of rotation these dust particles are destroyed by shattering encounters. As can be seen in the total dust mass distribution (Fig. \[PIccastick\], right panel), the largest dust grains with radii of 0.2mm are not formed. We attribute this to frequent shattering encounters; the sticking properties are identical to those used in model 3MS\_CCA.

Synthetic emission maps and spectra
-----------------------------------

In order to compare our numerical results to observations of young protostellar accretion disks we have produced emission maps and spectra calculated with a ‘numerical telescope’, which includes the contribution of scattered light (Yorke & Bodenheimer 1999).
Figure \[PIemsphere\] (left panel) shows the dust continuum emission at 3.6$\mu$m at the final stage of the collapse of the “compact sphere” dust model (3MS). A dark bar across the midplane of the accretion disk marks a region of high extinction. Above and below the disk scattered light reveals the presence of the central protostellar radiation source. The spectral energy distribution (SED) shown in Figure \[PIemsphereccaspec\] (left panel) displays well-known features of young dusty protostellar cores (e.g. Sonnhalter, Preibisch & Yorke 1995): no direct radiation from the protostar (at an angle of 85$^\circ$), a silicate absorption feature at $\lambda\approx 10\,\mu$m, and a dust emission temperature of about 100K. For comparison, the corresponding SED was recalculated for dust with an MRN mass distribution using the same density distribution. No obvious differences from the results using the more detailed coagulated dust model are detectable. This can be attributed to the fact that the strongly coagulated dust particles are embedded deeply within the accretion disk, whereas the outer layers contain only slightly modified grains. However, when all dust grains are assumed to have a radius of $a=22\,\mu$m (which corresponds to the maximum grain size formed in model 3MS), the calculated SED is markedly different from that resulting from dust with an MRN mass distribution. The silicate absorption feature is totally absent and more near infrared emission reaches the observer. For comparison, an emission map using the unmodified MRN dust distribution has been calculated and is displayed in Figure \[PIemsphere\] (right panel). The differences are not overwhelming, but some general tendencies can be ascertained: for unmodified dust the dark absorption bar across the disk is somewhat more prominent (especially towards the edges of the disk) and more scattered light above and below the disk is visible.
Because almost no coagulation occurred in these outer disk regions over the time period investigated, these differences result primarily from the differential advection of dust grains. A similar tendency can be seen for the fractal grains. In Figure \[PIemcca\] (left panel) the continuum emission for simulation 3MS\_CCA is displayed. In contrast to model 3MS the dark absorption bar across the disk is far more transparent at $\lambda=3.6\,\mu$m when dust coagulation is permitted. When compared to the emission map resulting from uncoagulated dust with an MRN mass distribution (Fig. \[PIemcca\], right panel), it becomes apparent that the coagulation process has enabled the disk to become rather transparent. The overall disk features are in general similar to those resulting from the simple spherical grain model. The SED shown in Figure \[PIemsphereccaspec\] (right panel) is similar to the SED of model 3MS (left panel). The SED generated using the coagulated dust from the 3MS\_CCA simulation ([*solid line*]{}, right panel) displays some differences with respect to the SED using non-coagulated dust: There is a slight shift in the emission peak to shorter wavelengths, lower far infrared fluxes, enhanced mid-infrared emission, and a less prominent silicate absorption feature. Discussion and Conclusions ========================== We have shown that dust coagulation proceeds at an early phase during the formation of a protostellar accretion disk. Small grains with $a {\,\raise 0.3ex\hbox{$<$}\kern -0.75em\lower 0.7ex\hbox{$\sim$}\,}0.1\,\mu$m are removed from the mass spectrum quickly and effectively in the midplane of the accretion disk within $\sim$10$^3$ yr. Here, large particles with sizes of several 10$\mu$m can be produced by coagulation. The maximum grain size which can be quickly produced by coagulation during the collapse and initial accretion of material onto the disk depends crucially on the assumed sticking strength of the dust particles. 
In this respect the process of ice sublimation plays an important role: When the grain surface is coated with material which enhances the grain-to-grain adhesion, the degree of coagulation can be significantly increased. In the accretion shock front relative velocities of several km s$^{-1}$ are achieved due to the size-dependent coupling to the gas. Compact spherical grains decouple at higher gas densities (and thus earlier during the evolution) than fractal dust coagulates. Particle shattering of compact spherical grains was not critically important during the evolution of the intermediate mass protostars considered here. We infer, however, by appropriate scaling of masses and luminosities (see Suttner et al. 1999), that shattering should be important for high mass protostars. For the high mass case radiative acceleration will become increasingly more effective in causing a size-dependent spread of dust drift velocities. Assuming BCCA grains which break apart at even relatively small collision energies (as in the model of Dominik & Tielens 1997), particle shattering gains some importance for the lower cloud masses considered here. Within the accretion shock grains are shattered and the maximum grain size is limited to several $10\,\mu$m. However, the amount of very small debris particles thus produced is negligible in the total grain mass spectrum. Gas–dust drift leads to depletion of dust in the immediate vicinity of the accretion disk everywhere except in the equatorial regions. In particular, the gas-to-dust mass ratio can be lowered by a factor of 2 to 4 within the accretion shock. Whereas for a cloud clump mass of 1M$_\odot$ radiative acceleration of dust grains is negligible, for clump masses ${\,\raise 0.3ex\hbox{$>$}\kern -0.75em\lower 0.7ex\hbox{$\sim$}\,}3\,$M$_\odot$ radiative acceleration of dust grains becomes increasingly important. 
Depending on how well the radiation field of the central source is shielded by the disk, the infall of dust particles can be hindered in the polar regions, whereas in the equatorial regions the dust moves radially inwards faster than the gas. The optical and physical properties of grains are strongly affected by coagulation. The specific extinction coefficient in the visual to UV can be lowered by more than an order of magnitude in the equatorial plane due to coagulation. The grain temperature in the midplane and the grains’ capacity for “freezing out” molecules is correspondingly affected. Although the local variations of the optical coefficients are large, the only significant effect on observational properties is a reduction of the near infrared dust opacity in the wavelength range between 1 and 100$\mu$m, which is most prominent for “robust” BCCA particles. Polarization of starlight should supply an additional appropriate observational tool to determine the degree of coagulation. The differences in the global characteristics of the simulations using the simple approximation of compact spherical grains, BPCA dust, and BCCA dust are not as dramatic as may have been naively expected. For all three cases the hydrodynamical structure (in particular, gas density and velocity) is strikingly similar. Thus, we feel justified in using our rather crude dust models to perform hydrodynamic simulations of low and intermediate mass collapsing clouds and subsequently assume more sophisticated detailed dust models to generate emission maps, polarization maps, and SEDs. Finally, we note that D’Alessio et al. (1999) find that synthetic 1$\mu$m images of accretion disks around low mass stars appear to have too large geometrical thicknesses to be consistent with observation, under the assumption that dust is well mixed with the gas. Our study shows that the issue might be resolved by taking into proper account the differential advection of dust grains. 
We are grateful to Thomas Henning, Doug Lin, and Rainer Schräpler for helpful discussions and to an anonymous referee for useful suggestions. The research described in this paper was carried out by the Jet Propulsion Laboratory (JPL), California Institute of Technology, and was supported by the “Deutsche Forschungsgemeinschaft” (DFG) under the “Physics of Star Formation” program (grant Yo 5/20-2) and the National Aeronautics and Space Administration (NASA) under grant NRA-99-01-ATP-065. The calculations were performed on workstations at JPL and the “Rechenzentrum der Universität Würzburg”, on a Cray T90 at the “HLRZ Jülich” and on a SP2 parallel computer at the same facility. [**APPENDIX**]{} Here, we give an analytic expression for the kernel $\gamma(m,m^{'},m^{''}\!\!,\delta v(m^{'}, m^{''}))$ in the shattering equation of section \[SEcoagshat\]. As stated in section \[SEcoagshatsol\], the integral of the shattering equation is discretized by summing over the dust mass space ranging from $m_1$ to $m_N$ (or, equivalently, from $a_1$ to $a_N$). Thus, $\gamma$ transforms to the discrete function $g_{ijk}$, from which the debris distribution $G_k(m_i,m_j,\delta v_{ij})=g_{ijk} m_k/(m_i+m_j)$ can be separated.\ For $v_{\rm crit} \le \delta v_{i,j} < v_{\rm cat}$ and $w m_j\in [m_k^- , m_k^+] $: $$\begin{aligned} \label{eq-G1} G_k(m_i,m_j,\delta v_{ij}) = \frac{m_j}{m_i+m_j}(1-w) \end{aligned}$$ For $v_{\rm crit} \le \delta v_{i,j} < v_{\rm cat}$ and $a_k \le a_{\rm max}$: $$\begin{aligned} \label{eq-G2} G_k(m_i,m_j,\delta v_{ij}) = \frac{m_j}{m_i+m_j}w\ f_{\rm MRN}\end{aligned}$$ For $\delta v_{i,j} \ge v_{\rm cat}$ and $a_k \le a_{\rm max}$: $$\begin{aligned} \label{eq-G3} G_k(m_i,m_j,\delta v_{ij}) = f_{\rm MRN}\end{aligned}$$ Otherwise, $G_k(m_i,m_j,\delta v_{ij}) = 0$. 
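The piecewise kernel above is straightforward to discretize. Below is a minimal numerical sketch (not part of the original work): the logarithmic radius grid, the bulk density value, and the choice to add the cratering remnant term to the MRN-distributed ejecta are our own assumptions; $w$, $v_{\rm crit}$, $v_{\rm cat}$, $a_{\rm max}$, and $f_{\rm MRN}$ follow the definitions given just below in the text.

```python
import numpy as np

# Assumed grid: N logarithmically spaced grain radii a_k [cm], with
# masses m_k = (4 pi / 3) rho_bulk a_k^3.  Constants as quoted from
# Jones et al. (1996); the grid and rho_bulk are illustrative choices.
RHO_BULK = 3.3          # bulk density [g cm^-3] (assumed)
V_SHAT = 3.64e5         # normalisation velocity in w [cm s^-1]
V_CRIT = 2.7e5          # onset of shattering [cm s^-1]
N = 40
a = np.logspace(-6, -2, N)                    # grain radii [cm]
m = (4.0 * np.pi / 3.0) * RHO_BULK * a**3     # grain masses [g]
# Bin edges m_k^- and m_k^+ as defined in the text:
m_lo = np.empty(N); m_hi = np.empty(N)
m_lo[1:] = 0.5 * (m[1:] + m[:-1]); m_lo[0] = 0.5 * m[0]
m_hi[:-1] = m_lo[1:]; m_hi[-1] = 2.0 * m[-1]
dm = m_hi - m_lo

def f_mrn(a_max):
    """MRN redistribution weights between a_1 and a_max (zero above a_max)."""
    mask = a <= a_max
    if not mask.any():
        return np.zeros(N)
    wts = np.where(mask, m**(-5.0 / 6.0) * dm, 0.0)
    return wts / wts.sum()

def debris(i, j, dv):
    """Debris distribution G_k(m_i, m_j, dv) for one collision, m_i >= m_j."""
    mi, mj = m[i], m[j]
    w = (dv / V_SHAT)**(16.0 / 9.0)           # ejected crater mass / m_j
    v_cat = max(V_CRIT, 1.13e5 * (mi / mj)**(9.0 / 16.0))
    G = np.zeros(N)
    if dv < V_CRIT:
        return G                              # no shattering below v_crit
    if dv < v_cat:                            # cratering regime
        a_max = 0.28 * (w * mj / RHO_BULK)**(1.0 / 3.0)
        G += mj / (mi + mj) * w * f_mrn(a_max)    # MRN-distributed ejecta
        if w < 1.0:                           # remnant term only if w < 1 (assumption)
            k = np.searchsorted(m_hi, w * mj)     # bin with w*m_j in [m_k^-, m_k^+]
            if k < N:
                G[k] += mj / (mi + mj) * (1.0 - w)
    else:                                     # catastrophic disruption
        a_max = 0.20 * a[i] * v_cat / dv
        G = f_mrn(a_max)
    return G
```

With this bookkeeping the returned bins sum to $m_j/(m_i+m_j)$ for a cratering collision and to unity for a catastrophic one, so the redistributed mass fraction is conserved by construction.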
We have used the following assumptions and definitions: $$\begin{aligned} m_i &\ge& m_j \\ m_k^- &=& (m_k + m_{k-1}) / 2 \quad m_k^+ = m_{k+1}^- \\ w &=& \left(\frac{\delta v_{ij}} {3.64\ \rm km\ s^{-1}}\right)^{16/9} \\ v_{\rm crit}&=&2.7\ {\rm km\ s^{-1}} \\ v_{\rm cat} &=& {\rm max} \left(v_{\rm crit},\ 1.13\ {\rm km\ s^{-1}} \left(m_i / m_j \right)^{9/16}\right) \\ a_{\rm max}&=&\cases{ 0.28\ (w\ m_j / \varrho_{\rm bulk} )^{1/3} &if $\delta v_{ij} < v_{\rm cat}$ \cr 0.20\ a_i\ v_{\rm cat} / \delta v_{ij} &if $\delta v_{ij} \ge v_{\rm cat}$ \cr }\end{aligned}$$ The formulae were adapted from the work of Jones et al. (1996). Here, $w$ denotes the ejected crater mass in units of the projectile mass, $v_{\rm cat}$ the critical velocity for the onset of total disruption of the target and $a_{\rm max}$ the radius of the largest debris particle. The debris particles are redistributed according to an MRN size distribution $f_{\rm MRN}$ between $a_{\rm min}=a_1$ and $a_{\rm max}$ (i.e. between $m_{\rm min}=m_1$ and $m_{\rm max}$): $$\begin{aligned} f_{\rm MRN}&=&\frac{m_k^{-5/6} \Delta m_k}{\sum_{i=1}^{\rm max} m_i^{-5/6}\Delta m_i}\end{aligned}$$ Beckwith, S.V.W., Henning, T., Nakagawa, Y., 2000, Protostars and Planets IV, eds. V. Mannings, A.P. Boss, S.S. Russell, Tucson: Univ. of Arizona Press, p. 533 Black, D.C., & Bodenheimer, P. 1975, ApJ, 199, 619 Bodenheimer, P., Yorke, H.W., Różyczka, M., & Tohline, J.E. 1990, ApJ, 355, 651 Bridges, F.G., Supulver, K.D., Lin, D.N.C., Knight, R., & Zafra, M. 1996, Icarus, 123, 422 Chokshi, A., Tielens, A.G.G.M., & Hollenbach, D. 1993, ApJ, 407, 806 D’Alessio, P., Calvet, N., Hartmann, L., Lizano, S., Cantó, J., 1999, ApJ, 527, 893 Dominik, C., & Tielens, A.G.G.M. 1997, ApJ, 480, 647 Douglas, J., & Rachford, H.H. 1956, Trans. Amer. Math. Soc., 82, 421 Draine, B.T., & Lee, H.M. 1984, ApJ, 285, 89 Epstein, P. 1923, Phys. Rev., 22, 710 Fischer, O., Henning, Th., & Yorke, H.W. 1994, A&A, 284, 187 Henning, Th., & Stognienko, R. 
1996, A&A, 311, 291 Jones, A.P., Tielens, A.G.G.M., & Hollenbach, D. 1996, ApJ, 469, 740 Lang, K.R. 1974, Astrophysical Formulae, Springer-Verlag, Berlin Levermore, C.D., & Pomraning, G.C. 1981, ApJ, 248, 321 Mathis, J.S., Rumpl, W., & Nordsieck, K.H. 1977, ApJ, 217, 425 Mizuno, H., Markiewicz, W.J., & Völk, H.J. 1988, A&A, 195, 183 Mizuno, H. 1989, Icarus, 80, 189 Ossenkopf, V. 1993, A&A, 280, 617 Poppe, T., & Blum, J. 1997, Adv. Space Res., 20/8, 1595 Schmitt, W., Henning, Th., & Mucha, R. 1997, A&A, 325, 569 Shakura, N.I., & Sunyaev, R.A. 1973, A&A, 24, 337 Sonnhalter, C., Preibisch, Th., & Yorke, H.W. 1995, A&A, 299, 545 Supulver, K.D., Bridges, F.G., Tiscareno, S., & Lievore, J. 1997, Icarus, 129, 539 Suttner, G., Yorke, H.W., & Lin, D.N.C. 1999, ApJ, 524, 857 Tassoul, J.-L., 1978, Theory of Rotating Stars, Princeton Univ. Press Völk, H.J., Jones, F.C., Morfill, G.E., & Röser, S. 1980, A&A, 85, 316 Vrba, F.J., Coyne, G.V., & Tapia, S. 1993, AJ, 105, 1010 Weidenschilling, S.J. 1977, MNRAS, 180, 57 Weidenschilling, S.J. 1984, Icarus, 60, 553 Weidenschilling, S.J., & Cuzzi, J.N. 1993, Protostars and Planets III, pp. 1031-1060, Univ. of Arizona Press, Tucson Weidenschilling, S.J., & Ruzmaikina, T.V. 1994, ApJ, 430, 713 Wetherill, G.W. 1990, Icarus, 88, 336 Wurm, G. 1997, PhD thesis, Universität Jena Yorke, H.W. 1979, A&A, 80, 308 Yorke, H.W. 1988, Radiation in Moving Gaseous Media: 18th Saas-Fee Adv. Course, Geneva Observatory, pp. 210-224 Yorke, H.W., & Henning, T. 1994, IAU Coll. 146, 186 Yorke, H.W., & Kaisig, M. 1995, Comput. Phys. Comm., 89, 29 Yorke, H.W., & Bodenheimer, P. 1999, ApJ, 525, 330 Yorke, H.W., Bodenheimer, P., & Laughlin, G. 1995, ApJ, 443, 199
--- abstract: 'The field theoretic renormalization group (RG) is applied to the problem of a passive scalar advected by the Gaussian self-similar velocity field with finite correlation time and in the presence of an imposed linear mean gradient. The energy spectrum in the inertial range has the form $E(k)\propto k^{1-\eps}$, and the correlation time at the wavenumber $k$ scales as $k^{-2+\eta}$. It is shown that, depending on the values of the exponents $\eps$ and $\eta$, the model in the inertial-convective range exhibits various types of scaling regimes associated with the infrared stable fixed points of the RG equations: diffusive-type regimes for which the advection can be treated within ordinary perturbation theory, and three nontrivial convection-type regimes for which the correlation functions exhibit anomalous scaling behavior. Explicit asymptotic expressions for the structure functions and other correlation functions are obtained; they are represented by superpositions of power laws with nonuniversal amplitudes and universal (independent of the anisotropy) anomalous exponents, calculated to the first order in $\eps$ and $\eta$ in any space dimension. These anomalous exponents are determined by the critical dimensions of tensor composite operators built of the scalar gradients, and exhibit a kind of hierarchy related to the degree of anisotropy: the lower the rank, the lower the dimension and, consequently, the more important the contribution to the inertial-range behaviour. The leading terms of the even (odd) structure functions are given by the scalar (vector) operators. For the first nontrivial regime the anomalous exponents are the same as in the rapid-change version of the model; for the second they are the same as in the model with time-independent (frozen) velocity field. In these regimes, the anomalous exponents are universal in the sense that they depend only on the exponents entering into the velocity correlator. 
For the last regime the exponents are nonuniversal (they can depend also on the amplitudes); however, the nonuniversality can reveal itself only in the second order of the RG expansion. A brief discussion of the passive advection in the non-Gaussian velocity field governed by the nonlinear stochastic Navier-Stokes equation is also given. St Petersburg University Preprint SPbU IP-98-16; [*chao-dyn/9808011*]{}; accepted to Phys. Rev. E.' address: | Department of Theoretical Physics, St Petersburg University, Uljanovskaja 1,\ St Petersburg, Petrodvorez, 198904 Russia author: - 'N. V. Antonov' title: Anomalous scaling regimes of a passive scalar advected by the synthetic velocity field --- Introduction {#sec:Int} ============ The investigation of intermittency and anomalous scaling in fully developed turbulence remains one of the major theoretical problems. Both natural and numerical experiments suggest that the deviation from the predictions of the classical Kolmogorov–Obukhov theory is even more strongly pronounced for a passively advected scalar field than for the velocity field itself; see, e.g., [@An; @Sree; @synth; @pass1; @pass2; @El] and literature cited therein. At the same time, the problem of passive advection appears to be more tractable theoretically: even simplified models describing the advection by a “synthetic” velocity field with prescribed Gaussian statistics reproduce many of the anomalous features of genuine turbulent heat or mass transport observed in experiments, see [@synth]–[@Eyink]. Therefore, the problem of a passive scalar advection, being of practical importance in itself, may also be viewed as a starting point in studying anomalous scaling in turbulence as a whole. 
Recently, a great deal of attention has been drawn by a simple model of the passive scalar advection by a self-similar Gaussian white-in-time velocity field, the so-called “rapid-change model,” introduced by Kraichnan [@Kraich1]; see \[8–30\] and references therein. For the first time, the anomalous exponents have been calculated on the basis of a microscopic model and within regular expansions in formal small parameters. Within the “zero-mode approach” to the rapid-change model, developed in [@Falk1; @Falk2; @GK; @BGK], nontrivial anomalous exponents are related to the zero modes (homogeneous solutions) of the closed exact equations satisfied by the equal-time correlations. In this sense, the model is “exactly solvable.” The anomalous exponents are universal, i.e., they depend only on the space dimension and the exponent entering into the velocity correlator. Of course, the Gaussian character, isotropy, and time decorrelation are strong departures from the statistical properties of genuine turbulence. One step toward the construction of a more realistic model of passive advection is to account for the finite correlation time of the velocity field. In [@ShS; @ShS2], a generalized phenomenological model was considered in which the temporal correlation of the advecting field was set by eddy turnover (see also an earlier work [@PDF], where the probability distribution function in an analogous model was studied). It was argued that the anomalous exponents may depend on more details of the velocity statistics than just the exponents. This idea has received some analytical support in [@Falk3], where the case of short but finite correlation time was considered for the special case of a local turnover exponent. The anomalous exponents were calculated within the perturbation theory with respect to the small correlation time, with Kraichnan’s rapid-change model taken as the zeroth order approximation. 
The exponents obtained in [@Falk3] appear to be nonuniversal, through the dependence on the correlation time. The exact inequalities obtained in [@Eyink] using the so-called refined similarity relations also point to some significant differences between the zero and finite correlation-time problems. In the paper [@RG], the field theoretic renormalization group (RG) and operator product expansion (OPE) were applied to the model [@Kraich1]. The feature specific to the theory of turbulence is the existence in the corresponding field theoretical models of the composite operators with [*negative*]{} scaling (“critical”) dimensions. Such operators are termed “dangerous,” because their contributions to the OPE for the structure functions and various pair correlators give rise to the anomalous scaling, i.e., singular dependence on the IR scale with nonlinear anomalous exponents. The latter are determined by the critical dimensions of these operators.[^1] The OPE and the concept of dangerous operators in the stochastic hydrodynamics were introduced and investigated in detail in [@LOMI; @JETP]; see also the review paper [@UFN] and the book [@turbo]. The role of the formal expansion parameter in the RG approach is played by the exponent $\zeta$ entering into the velocity correlator; see Eq. (\[RC1\]) in Sec. \[sec:FT\] (in Ref. [@RG], it was denoted by $\eps$, in order to emphasize the analogy with Wilson’s $\eps$ expansion). The anomalous exponents were calculated in [@RG] to the order $\zeta^{2}$ of the expansion in $\zeta$ for any space dimension, and they are in agreement with the first-order results obtained within the zero-mode approach in [@Falk1; @Falk2; @GK; @BGK]. In [@RG1], the RG method was generalized to the case of a nonsolenoidal (“compressible”) velocity field. 
The main advantage of the RG approach (apart from its calculational efficiency) is the universality: it is not related to the aforementioned solvability of the rapid-change model and can equally be applied to the case of finite correlation time, provided the corresponding model possesses the RG symmetry. In [@RG], the results were presented for the opposite limiting case of the time-independent (“frozen”) velocity field. In this paper, we apply the RG and OPE technique to the problem of a passive scalar field advected by a self-similar synthetic Gaussian velocity field with finite correlation time; the steady state is maintained by an imposed linear mean gradient. The velocity field satisfies a linear stochastic equation with effective viscosity and stirring force. The model was proposed and studied in detail (using numerical simulations, in two dimensions) in [@synth]; its rapid-change version is discussed in [@Pumir; @SSP; @BFL; @Pumir2]. We consider the problem in an arbitrary space dimension, $d\ge2$; we also stress that the correlation time is not supposed to be small. We establish the existence in the inertial-convective range of several different scaling regimes and show that for some of them the structure functions and other correlation functions of the problem exhibit anomalous scaling behavior; we derive explicit analytical expressions for the corresponding anomalous exponents. The advection of a passive scalar field in the presence of an imposed linear gradient is described by the equation $$\nabla _t\theta=\nu _0\partial^{2} \theta-\h\cdot{\bf v} , \quad \nabla _t\equiv \partial _t+ v_{i}\, \partial_{i}. 
\label{1}$$ Here $\theta(x)\equiv \theta(t,{\bf x})$ is the random (fluctuation) part of the total scalar field $\Theta(x)=\theta(x)+\h\cdot{\bf x}$, $\h$ is a constant vector that determines a distinguished direction, $\nu _0$ is the molecular diffusivity coefficient, $\partial _t \equiv \partial /\partial t$, $\partial _i \equiv \partial /\partial x_{i}$, $\partial^{2}\equiv\partial _i\partial _i$ is the Laplace operator, and ${\bf v}(x)=\{v_i(x)\}$ is the transverse (owing to the incompressibility) velocity field. The velocity obeys the linear stochastic equation, cf. [@synth] $$\partial_{t} v_i + R v_i =f_{i}, \label{NS}$$ where $R$ \[in the momentum representation $R=R(k)$\] is a linear operation to be specified below and $f_{i}$ is an external random stirring force with zero mean and the correlator $$\langle f_{i}(x) f_{j}(x')\rangle = \int \frac{d\omega}{2\pi} \int \frac{d{\bf k}}{(2\pi)^d} P_{ij}({\bf k})\, D^{f}(\omega,k) \exp [ -{\rm i}\omega (t-t')+{\rm i}{\bf k}\cdot({\bf x}-{\bf x'})] . \label{f}$$ Here $P_{ij}({\bf k}) = \delta _{ij} - k_i k_j / k^2$ is the transverse projector, $k\equiv |{\bf k}|$ is the wavenumber, and $d$ is the dimensionality of the ${\bf x}$ space. Following [@synth], we choose the correlator $D^{f}$ to be independent of the frequency, so that Eq. (\[f\]) contains the delta-function in time. More specifically, we choose $$D^{f}(\omega,k)= g_{0}\nu_0^{3}\, \sigma_{k}^{4-d-\eps-\eta}, \quad R(k)=u_{0}\nu_0\, \sigma_{k}^{2-\eta}, \label{Fin1}$$ where $$\sigma_{k}\equiv \sqrt {k^{2}+m^{2}}. 
\label{Fin4}$$ The positive amplitude factors $g_{0}$ (a formal small parameter of the ordinary perturbation theory) and $u_{0}$ are the analogs of the coupling constant (“charge”) $\lambda_{0}$ in the standard $\lambda_{0}\phi^{4}$ model of critical behavior, see, e.g., [@Zinn; @book3]; in what follows we shall also term these parameters “coupling constants.” The exponents $\eps$ and $\eta$ are the analogs of the RG expansion parameter $\eps=4-d$ in the $\lambda_{0}\phi^{4}$ model, and we shall use the traditional term “$\eps$ expansion” in our model for the double expansion in the $\eps$–$\eta$ plane around the origin $\eps=\eta=0$, with the additional convention that $\eps=O(\eta)$. The infrared (IR) regularization is provided by the integral scale $L\equiv 1/m$; its precise form is not essential. For $k\gg m$ the functions (\[Fin1\]) take on a simple powerlike form. Dimensionality considerations show that the charges are related to the characteristic ultraviolet (UV) momentum scale $\Lambda$ by $$g_{0}\simeq \Lambda^{\eps+\eta},\quad u_{0}\simeq \Lambda^{\eta}. \label{gg}$$ From Eqs. (\[NS\]) and (\[f\]) it follows that ${\bf v}(x)$ obeys a Gaussian distribution with zero mean and correlator (dropping the transverse projector) $$D_{v}(\omega,k)= \frac{D^{f}(k)}{\omega^{2}+R^{2}(k)} =\frac{g_{0}\nu_0^{3}\, \sigma_{k}^{4-d-\eps-\eta}} {\omega^{2}+[u_{0}\nu_0\, \sigma_{k}^{2-\eta}]^{2}}\, . \label{Fin}$$ Therefore, the exponent $\eps$ describes the inertial-range behavior of the equal-time velocity correlator or, equivalently, the energy spectrum $$E(k) \simeq k^{d-1} \int d\omega D_{v}(\omega,k) \simeq (g_{0}\nu_0^{2}/u_{0})\, k^{1-\eps}, \label{spectrum}$$ cf. [@AvelMaj; @AvelMaj2; @Glimm], where a close family of models for the velocity field has been considered for a strongly anisotropic shear flow. 
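The frequency integral in Eq. (\[spectrum\]) is elementary, $\int d\omega\, [\omega^{2}+R^{2}]^{-1}=\pi/R$, so in the inertial range $E(k)\simeq \pi (g_{0}\nu_0^{2}/u_{0})\, k^{1-\eps}$. This scaling can be checked numerically; a short sketch (the parameter values are our own illustrative choices, not taken from the text):

```python
import numpy as np

# Illustrative parameters (our choice): amplitudes, exponents, IR scale m.
g0, u0, nu0, d, eps, eta, m_ir = 1.0, 2.0, 0.5, 3, 0.8, 0.4, 1e-3

def D_v(omega, k):
    # Velocity correlator (Fin): g0 nu0^3 sigma^(4-d-eps-eta) / (omega^2 + R^2)
    sigma = np.hypot(k, m_ir)
    R = u0 * nu0 * sigma**(2.0 - eta)
    return g0 * nu0**3 * sigma**(4.0 - d - eps - eta) / (omega**2 + R**2)

def E(k, n=200001):
    # E(k) ~ k^(d-1) * int dw D_v(w, k); trapezoidal rule on a wide window,
    # since the Lorentzian tails are negligible beyond |w| ~ 1e4 R.
    R = u0 * nu0 * np.hypot(k, m_ir)**(2.0 - eta)
    w = np.linspace(-1e4 * R, 1e4 * R, n)
    f = D_v(w, k)
    h = w[1] - w[0]
    return k**(d - 1) * h * (f.sum() - 0.5 * (f[0] + f[-1]))

# In the inertial range k >> m the spectrum scales as k^(1-eps):
for k in (1.0, 10.0, 100.0):
    analytic = np.pi * (g0 / u0) * nu0**2 * k**(1.0 - eps)
    assert abs(E(k) / analytic - 1.0) < 1e-2
```

The prefactor $\pi$ is absorbed into the "$\simeq$" of Eq. (\[spectrum\]); only the power $k^{1-\eps}$ and the combination $g_{0}\nu_0^{2}/u_{0}$ matter for the RG analysis.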
The second exponent, $\eta$, is related to the function $R(k)$, the reciprocal of the correlation time at the wavenumber $k$ ($\eta\equiv2-z$ in the notation of [@AvelMaj; @AvelMaj2; @Glimm; @Falk3; @Eyink]; our exponents are defined so that $\eps=\eta=0$ correspond to the starting point of the RG expansion). It then follows that $\eps=8/3$ gives the Kolmogorov “five-thirds law” for the spatial velocity statistics, and $\eta=4/3$ corresponds to the Kolmogorov frequency. It was pointed out in [@synth] that the linear model (\[NS\]) suffers from the lack of Galilean invariance and therefore does not take into account the self-advection of turbulent eddies. It is well known that the different-time correlations of the Eulerian velocity field are not self-similar, as a result of these “sweeping effects,” and depend substantially on the integral scale; see, e.g., [@sweep]. Nevertheless, the results of [@synth] show that the model gives a reasonable description of the passive advection in an appropriate frame, where the mean velocity field vanishes. To justify the model (\[NS\]), we also note that we shall be interested primarily in the equal-time, Galilean invariant quantities (structure functions, correlations of the dissipation rate etc.), which are not affected by the sweeping effects, and we expect that their absence from the model (\[NS\]) is not essential. We also note that the model contains two special cases that possess some interest on their own. 
In the limit $u_{0}\to\infty$, $g_{0}'\equiv g_{0}/u_{0}^{2}=\const$ we arrive at the rapid-change model: $$D_{v}(\omega,k)\to g_{0}'\nu_0\,(k^{2}+m^{2})^{-d/2-\zeta/2}, \quad \zeta\equiv \eps - \eta, \label{RC1}$$ and the limit $u_{0}\to 0$, $g_{0}''\equiv g_{0}/u_{0}=\const$ corresponds to the case of a frozen velocity field: $$D_{v}(\omega,k)\to g_{0}''\nu_0^{2}\,(k^{2}+m^{2})^ {-d/2+1-\eps/2}\, \pi\,\delta(\omega), \label{RC2}$$ when the velocity correlator is independent of the time variable $t-t'$ in the $t$ representation. The latter case for $\h=0$ has a close formal resemblance to the well-known models of the random walks in random environment with long-range correlations; see [@walks1; @walks3]. In Sec. \[sec:FT\], we give the field theoretic formulation of the problem and discuss some of its consequences; we also explain briefly why the ordinary perturbation theory fails to give correct IR behavior for some values of $\eps$ and $\eta$ and establish the relationship between the IR and UV problems. In Sec. \[sec:RG\], we discuss the UV renormalization of the model, derive the RG equations and present the one-loop expressions for the basic RG functions (beta functions and anomalous dimensions). In Sec. \[sec:Fixed\], the analysis of the scaling behavior is given. Depending on the values of the exponents $\eps$ and $\eta$ entering into the velocity correlator, the model exhibits various types of IR scaling regimes, associated with the IR stable fixed points of the RG equations: \(i) The anomalous scaling behavior with universal (in the above sense) exponents, characteristic of the rapid-change model, takes place for $\eta<\eps<2\eta$. The anomalous exponents depend only on the exponent $\zeta$ entering into Eq. (\[RC1\]). \(ii) The anomalous scaling behavior with the universal exponents, characteristic of the model with time-independent (frozen) velocity field, emerges in the region $\eps>0$, $\eps >2\eta$. 
The exponents are determined solely by the equal-time velocity correlator and depend only on the exponent $\eps$ entering into Eq. (\[RC2\]). \(iii) The intermediate regime with nonuniversal exponents, which depend on the amplitudes entering into the velocity correlator, emerges for $\eps =2\eta$; the Kolmogorov-type synthetic velocity field [@synth] and the case of a local turnover exponent [@Falk3] correspond to this regime. The nonuniversality of the exponents in this regime is in agreement with the findings of Ref. [@Falk3], where the large $d$ limit has been considered. \[However, the exponents in our model turn out to be universal in the one-loop approximation\]. \(iv) The diffusive-type regimes, for which the advection (i.e., the nonlinearity in Eq. (\[1\])) can be treated within the ordinary perturbation theory. These regimes take place in the region specified by the inequalities $\eta>0$, $\eta>\eps$ and $\eta<0$, $\eps<0$. To avoid possible misunderstandings we emphasize that the limits $g_{0}$, $u_{0}\to0$ or $g_{0}$, $u_{0}\to\infty$ are not supposed to be performed in the original correlation function (\[Fin\]); the parameters $g_{0}$, $u_{0}$ are fixed at some finite values. The behavior specific to the models (\[RC1\]), (\[RC2\]) arises asymptotically in the regimes (i) and (ii) as a result of the solution of the RG equations, when the “RG flow” approaches the corresponding fixed point. Therefore, we deal with the finite correlation time, and there is no problem with the steady state in the frozen case even in two dimensions. The regions of IR stability of the regimes (i)–(iv) in the $\eps$–$\eta$ plane, given above, are identified to the first order of the $\eps$ expansion, but some of their boundaries are found exactly. 
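The four regions of the $\eps$–$\eta$ plane can be summarized in a small classifier that encodes exactly the inequalities listed above (a sketch only: the string labels, the tolerance, and the handling of marginal boundaries are our own choices):

```python
def scaling_regime(eps, eta, tol=1e-9):
    """Classify the IR-stable scaling regime for given (eps, eta),
    following the one-loop regions stated in the text."""
    if eps > 0.0 and abs(eps - 2.0 * eta) < tol:
        return "iii"   # nonuniversal regime on the line eps = 2*eta
    if eta < eps < 2.0 * eta:
        return "i"     # rapid-change regime: exponents depend on zeta = eps - eta
    if eps > 0.0 and eps > 2.0 * eta:
        return "ii"    # frozen-field regime: exponents depend on eps alone
    if (eta > 0.0 and eta > eps) or (eta < 0.0 and eps < 0.0):
        return "iv"    # diffusive regimes: ordinary perturbation theory applies
    return "marginal"  # remaining boundaries, not resolved at one loop

# The Kolmogorov values eps = 8/3, eta = 4/3 lie on the nonuniversal line:
assert scaling_regime(8.0 / 3.0, 4.0 / 3.0) == "iii"
```

Note that the Kolmogorov-type field of [@synth] ($\eps=8/3$, $\eta=4/3$) falls on the line $\eps=2\eta$, i.e., into regime (iii).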
In the regimes (i)–(iii), the correlation functions of the model exhibit anomalous scaling behavior, i.e., singular dependence on the IR scale $m$ with nonlinear “anomalous exponents.” Within the RG and OPE approach, the latter are related to the scaling dimensions of the tensor composite operators $\partial\theta\cdots\partial\theta$; these dimensions are calculated explicitly to the first order of the $\eps$ expansion (one-loop approximation) in Sec. \[sec:Operators\]. The inertial-convective-range asymptotic expressions for the structure functions of arbitrary order (even and odd) and the equal-time correlations of the scalar gradients are obtained in Sec. \[sec:OPE\] using the OPE. As the exponents $\eps$ and $\eta$ increase, the powers of the velocity field also become dangerous, and their contributions to the OPE should be summed. The required summation is performed in Sec. \[sec:summ\] using the example of the second-order structure function in the “frozen” regime; for the rapid-change regime the problem is absent. This summation might be interesting as a possible model of the origin of the anomalous scaling in the structure functions of the velocity itself: it was argued in [@AV] that the singular $m$ dependence of the equal-time correlators for the stochastic Navier–Stokes (NS) equation is related to [*infinite*]{} families of dangerous operators. The formulation (\[gg\]) is typical of the models of critical behavior \[i.e., the dimensional coupling constants are expressed only through the UV scale\], and we shall call it “standard.” In [@synth; @Pumir2; @ShS; @ShS2; @PDF; @Falk3; @Eyink], a different version of the problem was considered, in which the velocity correlator contains nontrivial dependence on the integral turbulence scale. This “exotic” (from the viewpoints of the theory of critical behavior) formulation requires special attention; it is considered briefly in Sec. \[sec:Exo\]. The results obtained are reviewed in Sec. 
\[sec:Con\], where we also discuss briefly the passive advection by the non-Gaussian velocity field governed by the nonlinear stochastic NS equation. Our approach is generalized directly to this case, and the explicit expressions for the anomalous exponents can readily be obtained in the first order of the corresponding $\eps$ expansion. We also discuss new problems that arise in the NS model beyond the $\eps$ expansion. Field theoretic formulation of the model. IR and UV singularities in the perturbation theory {#sec:FT} ============================================================================================ According to the general theorem (see, e.g., Refs. [@Zinn; @book3]), the stochastic problem (\[1\])–(\[f\]) is equivalent to the field theoretic model of the doubled set of fields $\Phi\equiv\{ \theta, \theta',{\bf v}, {\bf v'}\}$ with action functional $$S(\Phi)= (1/2) {\bf v'} D^{f} {\bf v'} + {\bf v'} [- \partial_{t}{\bf v}- R{\bf v}]+ \theta' \left[ - \partial_{t}\theta -({\bf v}\partt) \theta + \nu _0\partial^{2} \theta - \h\cdot{\bf v}\right]. \label{action1}$$ Here $D^{f}$ is the correlator (\[Fin1\]), the required integrations over $x=(t,{\bf x})$ and summations over the vector indices in Eq. (\[action1\]) and analogous formulas below are implied. The formulation (\[action1\]) means that statistical averages of random quantities in the stochastic problem (\[1\])–(\[f\]) coincide with functional averages with the weight $\exp S(\Phi)$, so that generating functionals of total \[$G(A)$\] and connected \[$W(A)$\] Green functions are represented by the functional integral $$G(A)=\exp W(A)=\int {\cal D}\Phi \exp [S(\Phi )+A\Phi ] \label{gene}$$ with arbitrary sources $A(x)$ in the linear form $$A\Phi \equiv \int dx[A^{\theta}(x)\theta (x)+A^{\theta '}(x)\theta '(x) + A^{\bf v}_{i}(x)v_{i}(x)+ A^{\bf v'}_{i}(x)v_{i}'(x)]. 
\label{sour}$$ In the following, we shall not be interested in the Green functions involving the auxiliary vector field ${\bf v'}$, so that we can set $A^{\bf v'}=0$ in Eq. (\[sour\]). It is then convenient to perform the Gaussian integration over ${\bf v'}$ in Eq. (\[gene\]) explicitly. We arrive at the field theoretic model of the reduced set of fields $\Phi\equiv\{ \theta, \theta',{\bf v}\}$ with the action $$S(\Phi)= \theta' \left[ - \partial_{t}\theta -({\bf v}\partt) \theta + \nu _0\partial^{2} \theta - \h\cdot{\bf v}\right] -{\bf v} D_{v}^{-1} {\bf v}/2. \label{action}$$ The first four terms in Eq. (\[action\]) represent the Martin–Siggia–Rose-type action for the stochastic problem (\[1\]) at fixed ${\bf v}$, and the last term represents the Gaussian averaging over ${\bf v}$ with the correlator $D_{v}$ from Eq. (\[Fin\]). The model (\[action\]) corresponds to a standard Feynman diagrammatic technique with the triple vertex $-\theta'({\bf v}\partt)\theta=\theta'V_{j}v_{j}\theta$ with vertex factor $$V_{j}= {\rm i} k_{j}, \label{vertex}$$ where ${\bf k}$ is the momentum flowing into the vertex via the field $\theta'$, and the bare propagators (in the momentum-frequency representation) $$\begin{aligned} \langle \theta \theta' \rangle _0=\langle \theta' \theta \rangle _0^*= (-{\rm i}\omega +\nu _0 k^2)^{-1} , \nonumber \\ \langle \theta \theta \rangle _0= \langle \theta \theta' \rangle _0 h_{i}h_{j} \langle v_{i} v_{j} \rangle _0 \langle \theta' \theta \rangle _0, \nonumber \\ \langle \theta v_{i} \rangle _0= - \langle \theta \theta' \rangle _0 h_{j} \langle v_{j} v_{i} \rangle _0, \nonumber \\ \langle \theta '\theta '\rangle _0=0 , \label{lines}\end{aligned}$$ where $h_{i}$ is a component of the vector $\h$ and the bare propagator $\langle v_{i} v_{j} \rangle _0$ is given by Eq. (\[Fin\]). The magnitude $h\equiv|\h|$ can be eliminated from the action (\[action\]) by rescaling of the scalar fields: $\theta\to h\theta$, $\theta'\to \theta'/h$. 
Therefore, any total or connected Green function of the form $\langle\theta(x_{1})\cdots\theta(x_{n})\, \theta'(y_{1}) \cdots\theta'(y_{p})\rangle$ contains a factor of $h^{n-p}$. The parameter $h$ appears in the bare propagators (\[lines\]) only in the numerators. It then follows that the Green functions with $n-p<0$ vanish identically. On the contrary, the 1-irreducible function $\langle\theta(x_{1})\cdots\theta(x_{n})\, \theta'(y_{1}) \cdots\theta'(y_{p})\rangle_{\rm 1-ir}$ contains a factor of $h^{p-n}$ and therefore vanishes for $n-p>0$; this fact will be relevant in the analysis of the renormalizability of the model (see below). Another important consequence of the representation (\[gene\]), (\[action\]) is that the large-scale anisotropy persists, through the dependence on $\h$, for all ranges of momenta (including convective and dissipative ranges), and that the dimensionless ratios of the structure functions are strictly independent of $h$; cf. [@synth; @pass1; @pass2; @ShS; @ShS2; @PDF]. It is noteworthy that all these statements equally hold for any statistics of the velocity field (not necessarily Gaussian or synthetic), provided its distribution is independent of $\h$. However, the ordinary perturbation theory fails to give the correct IR behavior of the Green functions for some values of the exponents $\eps$ and $\eta$. This can easily be illustrated with the simplest example of the 1-irreducible Green function $\langle\theta'\theta\rangle_{\rm 1-ir}$. It satisfies the Dyson equation of the form $$\langle\theta'\theta\rangle_{\rm 1-ir} = -{\rm i} \omega + \nu_0 k^{2} -\Sigma_{\theta'\theta} (\omega, k), \label{Dyson}$$ where $\Sigma_{\theta'\theta}$ is the self-energy operator represented by the corresponding 1-irreducible diagrams. Its one-loop approximation has the form $$\Sigma_{\theta'\theta} = \put(0.00,-56.00){\makebox{\dS}} \hskip1.7cm .
\label{Dyson2}$$ Here and below the solid lines in the diagrams denote the bare propagator $\langle\theta\theta'\rangle_{0}$ from Eq. (\[lines\]), the end with a slash corresponds to the field $\theta'$, and the end without a slash corresponds to $\theta$; the dashed lines denote the bare propagator (\[Fin\]); the vertices correspond to the factor (\[vertex\]). The analytic expression for the diagram in (\[Dyson2\]) has the form $$\Sigma_{\theta'\theta} (\omega, k) = - k_{i}k_{j} \int \frac{d\omega'}{2\pi} \int \frac{d{\bf q}}{(2\pi)^{d}} \, \frac{P_{ij}({\bf q})\,D_{v}(\omega',q)} {-{\rm i} (\omega+\omega') + \nu_{0} ({\bf q}+{\bf k})^{2}}\, , \label{novaja1}$$ where $q\equiv|{\bf q}|$ and $D_{v}(\omega',q)$ is given by Eq. (\[Fin\]); the factor of $k_{i}k_{j}$ arises from the vertex factors (\[vertex\]). Integration over $\omega'$ in Eq. (\[novaja1\]) yields $$\Sigma_{\theta'\theta} (\omega, k) = - k_{i}k_{j} \frac{g_0\nu_0^{2}} {2u_{0}} \int \frac{d{\bf q}}{(2\pi)^{d}} \, \frac{P_{ij}({\bf q})\, \sigma_{q}^{2-d-\eps}} {-{\rm i} \omega +\nu_0 ({\bf q}+{\bf k})^{2} + u_{0}\nu_0 \sigma_{q}^{2-\eta}}\, . \label{novaja2}$$ We are interested in the IR behavior of the function (\[novaja2\]), i.e., the behavior at small $k$, $\omega$, and $m$. It is easily seen that this behavior is nontrivial in the region of the $\eps$–$\eta$ plane determined by the inequalities $\eta<0$, $\eps>0$ and $\eta>0$, $\eps>\eta$, because the integral in (\[novaja2\]) is then IR divergent if $k$, $\omega$ and $m$ are simply set equal to zero. On the contrary, for the rest of the $\eps$–$\eta$ plane, the leading term of the desired asymptotic behavior is indeed obtained simply by setting $k=\omega=m=0$. The analysis is extended directly to the higher-order diagrams; it shows that these IR singularities become stronger as the order of a diagram increases, and that they occur only within the same region of the $\eps$–$\eta$ plane.
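The frequency integration leading from Eq. (\[novaja1\]) to Eq. (\[novaja2\]) rests on the elementary residue identity $\int \frac{d\omega'}{2\pi}\, \frac{1}{\omega'^{2}+A^{2}}\, \frac{1}{-{\rm i}(\omega+\omega')+B} = \frac{1}{2A}\, \frac{1}{-{\rm i}\omega+A+B}$ for $A,B>0$, applied with $A=u_{0}\nu_{0}\sigma_{q}^{2-\eta}$ and $B=\nu_{0}({\bf q}+{\bf k})^{2}$. A minimal numeric sketch (ours, not part of the original analysis; function names and test values are illustrative) comparing brute-force quadrature with the residue result:

```python
import cmath

def lhs(A, B, omega, L=500.0, n=200_000):
    """Trapezoidal estimate of (1/2pi) * Int dw' [1/(w'^2+A^2)] [1/(-i(omega+w')+B)]."""
    h = 2.0*L/n
    total = 0j
    for j in range(n + 1):
        w = -L + j*h
        f = 1.0/((w*w + A*A)*(-1j*(omega + w) + B))
        total += (0.5 if j in (0, n) else 1.0)*f
    return total*h/(2.0*cmath.pi)

def rhs(A, B, omega):
    # closing the w' contour in the upper half plane picks up only the pole at w' = iA
    return 1.0/(2.0*A*(-1j*omega + A + B))
```

With the correlator's numerator $g_{0}\nu_{0}^{3}\sigma_{q}^{4-d-\eps-\eta}$ restored, the factor $1/2A$ produces exactly the prefactor $g_{0}\nu_{0}^{2}/2u_{0}$ and the power $\sigma_{q}^{2-d-\eps}$ of Eq. (\[novaja2\]).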
The IR singularities compensate the smallness of the coupling constant $g_{0}$, assumed within the framework of the ordinary perturbation theory. Therefore, in order to find correct IR behavior we would have to sum the entire series even if the expansion parameter, $g_{0}$, were small. It is also clear that these IR singularities get weaker as the parameters $\eps$, $\eta$ decrease, and they would disappear at $\eps=\eta=0$ if we could take this limit in Eq. (\[novaja2\]). However, this is impossible owing to the UV divergence in the integral (\[novaja2\]) at this point. In general, the diagrams of $\Sigma_{\theta'\theta}$ are UV divergent in the region $\eta>0$, $\eps<0$ and $\eta<0$, $\eps<\eta$, and the UV cutoff at $q\equiv|{\bf q}|\simeq\Lambda$ is then implied in (\[novaja2\]) and higher-order diagrams. If the point $\eps=\eta=0$ is approached from inside the region of UV convergence, the UV singularities manifest themselves as poles in $\eps$, $\eta$ and their linear combinations. The elimination of these poles is the classical UV problem, and its solution is given by the standard theory of UV renormalization; the RG equations are obtained within the framework of this theory and express the simple idea of nonuniqueness of the renormalization procedure. The correlation between the IR and UV singularities near the “logarithmic point” $\eps=\eta=0$, noted above, explains to some extent why the RG method, which is closely related to the UV divergences, can be a useful tool in studying the IR behavior, and why the exponents $\eps$ and $\eta$ are expected to be relevant small parameters in the RG expansions. 
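The interplay between the UV pole and the IR logarithm near the logarithmic point is already visible in the elementary radial formula $\int_{m}^{\Lambda} dq\, q^{-1-\eps} = (m^{-\eps}-\Lambda^{-\eps})/\eps$, whose $\eps\to0$ limit is $\ln(\Lambda/m)$. A short symbolic sketch (ours, for illustration only):

```python
import sympy as sp

q, m, Lam, eps = sp.symbols('q m Lambda epsilon', positive=True)

# antiderivative of the radial integrand q^{-1-eps}, logarithmically divergent at eps -> 0
F = q**(-eps)/(-eps)
I = F.subs(q, Lam) - F.subs(q, m)   # = (m**(-eps) - Lam**(-eps))/eps

check = sp.simplify(sp.diff(F, q) - q**(-1 - eps))  # F' reproduces the integrand
log_limit = sp.limit(I, eps, 0)                     # pole cancels: -> log(Lambda/m)
```

The pole in $\eps$ at fixed $m$ (UV divergence) and the singular $m$ dependence at fixed $\eps>0$ (IR divergence) are two faces of the same expression, which is the heuristic content of the paragraph above.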
Surprisingly, the simple arguments given above lead to correct conclusions: the rigorous RG analysis confirms that the Green functions of the model indeed show anomalous IR behavior for some values of $\eps$ and $\eta$, and the region determined by the inequalities $\eta<0$, $\eps>0$ and $\eta>0$, $\eps>\eta$ coincides with the region of stability of the corresponding fixed points in the [*linear*]{} approximation; see Secs. \[sec:RG\] and \[sec:Fixed\]. UV renormalization of the model. RG functions and RG equations {#sec:RG} ============================================================== The renormalization of the model (\[action\]) is similar to the renormalization of the simpler rapid-change model, considered in detail in [@RG]; below we confine ourselves to only the necessary information. The analysis of UV divergences is based on the analysis of [*canonical dimensions*]{}, see [@book3; @Collins]. Dynamical models of the type (\[action\]), in contrast to static models, are two-scale [@UFN; @turbo; @Pismak], i.e., the action functional (\[action\]) is invariant with respect to two independent scale transformations, $S(\Phi', z_{i}')=S(\Phi, z_{i})$, where $\Phi\equiv\{ \theta, \theta',{\bf v}\}$ and $z_{i}=\{g_0,u_{0},\nu_0,m\}$ is the full set of the model parameters. In the first transformation, the time variable is fixed and the space variable is dilated along with all the fields and parameters: $$\Phi(t,{\bf x})\to \Phi'(t,{\bf x})= \lambda^{d_{\Phi}^{k}} \Phi(t,\lambda{\bf x}), \quad z_{i}\to z_{i}' = \lambda^{d_{z_{i}}^{k}} z_{i}, \label{scale1}$$ and in the second the space variable is fixed and all the other quantities are dilated: $$\Phi(t,{\bf x})\to \Phi'(t,{\bf x})= \lambda^{d_{\Phi}^{\omega}} \Phi(\lambda t,{\bf x}), \quad z_{i}\to z_{i}' = \lambda^{d_{z_{i}}^{\omega}} z_{i}.
\label{scale2}$$ Here $\lambda>0$ is an arbitrary transformation parameter, and two independent canonical dimensions, the momentum dimension $d_{F}^{k}$ and the frequency dimension $d_{F}^{\omega}$, are assigned to each quantity $F$ (a field or a parameter in the action functional). These canonical (“engineering”) dimensions should not be confused with the exact critical dimensions: the latter are subject to nontrivial calculation, while the former are simply determined from the natural normalization conditions $d_k^k=-d_{\bf x}^k=1$, $d_k^{\omega}=d_{\bf x}^{\omega}=0$, $d_{\omega }^k=d_t^k=0$, $d_{\omega}^{\omega}=-d_t^{\omega}=1$, and from the requirement that each term of the action functional be dimensionless \[i.e., be invariant with respect to the transformations (\[scale1\]) and (\[scale2\]) separately\]. Then, based on $d_{F}^{k}$ and $d_{F}^{\omega}$, one can introduce the total canonical dimension [@UFN; @turbo; @Pismak], which corresponds to the dilatation with a fixed value of $\nu_0$ (i.e., zero canonical dimension can be assigned to $\nu_0$). In our model, $\partial_{t}\propto\nu_0\partial^{2}$, so that the total dimension is given by $d_{F}=d_{F}^{k}+2d_{F}^{\omega}$. In the action (\[action\]), there are fewer terms than fields and parameters, and the canonical dimensions cannot be determined unambiguously. This is of course a manifestation of the fact that the “superfluous” parameter $h=|\h|$ can be eliminated from the action; see above. After it has been eliminated (or, equivalently, zero canonical dimension has been assigned to it), definite canonical dimensions can be assigned to the other quantities. They are given in Table \[table1\], including the dimensions of the renormalized parameters, which will appear later on. From Table \[table1\] it follows that the model is logarithmic (both coupling constants $g_{0}$ and $u_{0}$ are dimensionless) at $\eps=\eta=0$.
This means that the UV divergences in the Green functions have the form of poles in $\eps$, $\eta$, and all their possible linear combinations. In the theory of renormalization of dynamical models, the total dimension $d_{F}$ plays the same role as the conventional (momentum) dimension does in static problems. The canonical dimensions of an arbitrary 1-irreducible Green function $\Gamma = \langle\Phi \dots \Phi \rangle _{\rm 1-ir}$ are given by the relations $$d_{\Gamma }^k=d- N_{\Phi}d_{\Phi}^k, \qquad d_{\Gamma }^{\omega }=1-N_{\Phi }d_{\Phi }^{\omega }, \qquad d_{\Gamma }=d_{\Gamma }^k+2d_{\Gamma }^{\omega }= d+2-N_{\Phi }d_{\Phi}, \label{17}$$ where $N_{\Phi}=\{N_{\theta},N_{\theta'},N_{\bf v}\}$ are the numbers of the corresponding fields entering into the function $\Gamma$, and the summation over all types of the fields is implied. The total dimension $d_{\Gamma}$ is the formal index of the UV divergence. Superficial UV divergences, whose removal requires counterterms, can be present only in those functions $\Gamma$ for which $d_{\Gamma}$ is a non-negative integer. The analysis of divergences in the problem (\[action\]) should be based on the following auxiliary considerations; cf. [@RG; @UFN; @turbo]: \(1) All the 1-irreducible Green functions with $N_{\theta'}< N_{\theta}$ vanish; see Sec. \[sec:FT\]. \(2) If for some reason a number of external momenta occur as an overall factor in all the diagrams of a given Green function, the real index of divergence $d_{\Gamma}'$ is smaller than $d_{\Gamma}$ by the corresponding number (the Green function requires counterterms only if $d_{\Gamma}'$ is a non-negative integer). In the model (\[action\]), the derivative $\partial$ at the vertex $\theta'({\bf v}\partt)\theta$ can be moved onto the field $\theta'$ by virtue of the transversality of the field ${\bf v}$.
Therefore, in any 1-irreducible diagram it is always possible to move the derivative onto any of the external “tails” $\theta$ or $\theta'$, which decreases the real index of divergence: $d_{\Gamma}' = d_{\Gamma}- N_{\theta}-N_{\theta'}$. This also means that the fields $\theta$, $\theta'$ enter into the counterterms only in the form of the derivatives $\partial\theta$ and $\partial\theta'$. From the dimensions in Table \[table1\] we find $d_{\Gamma} = d+2 - N_{\bf v} + N_{\theta}- (d+1)N_{\theta'}$ and $d_{\Gamma}'=(d+2)(1-N_{\theta'}) - N_{\bf v}$. From these expressions it follows that for any $ d$, superficial divergences can exist only in the 1-irreducible functions $\langle\theta'\theta\dots\theta\rangle_{\rm 1-ir}$ with $N_{\theta'}=1$ and arbitrary value of $N_{\theta}$, for which $d_{\Gamma}=1+N_{\theta}$, $d_{\Gamma}'=0$. However, all the functions with $N_{\theta}> N_{\theta'}$ vanish (see above) and obviously do not require counterterms. As in the case of the rapid-change model [@RG; @RG1], we are left with the only superficially divergent function $\langle\theta'\theta\rangle_{\rm 1-ir}$; the corresponding counterterm contains two symbols $\partial$ and is therefore reduced to $\theta'\partial^{2}\theta$. The inclusion of this counterterm is reproduced by the multiplicative renormalization of the parameters $g_{0}$, $u_{0}$, and $\nu_0$ in the action functional (\[action\]): $$\nu_{0}=\nu Z_{\nu}, \qquad g_{0}=g\mu^{\eps+\eta}\, Z_{g}, \qquad u_{0}=u\mu^{\eta}\, Z_{u}, \label{mult}$$ where the dimensionless parameters $g$, $u$, and $\nu$ are the renormalized analogs of the bare parameters, $\mu$ is the renormalization mass in the minimal subtraction (MS) scheme, which we always use in practical calculations, and $Z_{i}=Z_{i}(g,u)$ are the renormalization constants. 
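The power counting just described is easily mechanized. The following Python helper (our illustration; no code appears in the original analysis) encodes the formal and real indices of divergence, $d_{\Gamma} = d+2 - N_{\bf v} + N_{\theta}- (d+1)N_{\theta'}$ and $d_{\Gamma}' = d_{\Gamma}- N_{\theta}-N_{\theta'} = (d+2)(1-N_{\theta'}) - N_{\bf v}$, and confirms that the candidate counterterm functions $\langle\theta'\theta\dots\theta\rangle_{\rm 1-ir}$ with $N_{\theta'}=1$ all have $d_{\Gamma}=1+N_{\theta}$ and $d_{\Gamma}'=0$, for any $d$:

```python
def d_formal(d, n_theta, n_thetap, n_v):
    # formal index of divergence: d_Gamma = d + 2 - N_v + N_theta - (d+1) N_theta'
    return d + 2 - n_v + n_theta - (d + 1)*n_thetap

def d_real(d, n_theta, n_thetap, n_v):
    # real index: one external derivative per theta / theta' tail is factored out
    return d_formal(d, n_theta, n_thetap, n_v) - n_theta - n_thetap
```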
They satisfy the identities $$Z_{g}= Z_{\nu}^{-3},\quad Z_{u}= Z_{\nu}^{-1}, \label{svaz}$$ which result from the absence of the renormalization of the contribution with $D_{v}$ in the functional (\[action\]). No renormalization of the fields, the “mass” $m$, and the vector $\h$ is required, i.e., $Z_{\Phi}=1$ for all $\Phi$ and $Z_{m}=Z_{h}=1$. The renormalized action functional has the form $$S_{\rm ren}(\Phi)= \theta' \left[ - \partial_{t}\theta -({\bf v}\partt) \theta + \nu Z_{\nu}\partial^{2} \theta -\h\cdot{\bf v}\right] -{\bf v} D_{v}^{-1} {\bf v}/2, \label{actionR}$$ where the correlator $D_{v}$ is expressed in renormalized parameters using the formulas (\[mult\]): $$D_{v}(\omega,k)= \frac{g\nu^{3}\mu^{\eps+\eta} \sigma_{k}^{4-d-\eps-\eta}} {\omega^{2}+[u\nu\mu^\eta \sigma_{k}^{2-\eta}]^{2}}\, . \label{FinR}$$ The relation $ S(\Phi,e_{0})=S_{\rm ren}(\Phi,e,\mu)$ (where $e_{0}$ is the complete set of bare parameters, and $e$ is the set of renormalized parameters) for the generating functional $W(A)$ in Eq. (\[gene\]) yields $ W(A,e_{0})=W_{\rm ren}(A,e,\mu)$. We use $\widetilde{\cal D}_{\mu}$ to denote the differential operation $\mu\partial_{\mu}$ for fixed $e_{0}$ and operate on both sides of this equation with it. This gives the basic RG differential equation: $${\cal D}_{RG}\,W_{\rm ren}(A,e,\mu) = 0, \label{RG1}$$ where ${\cal D}_{RG}$ is the operation $\widetilde{\cal D}_{\mu}$ expressed in the renormalized variables: $${\cal D}_{RG}\equiv {\cal D}_{\mu} + \beta_{g}(g,u)\partial_{g} + \beta_{u}(g,u)\partial_{u} -\gamma_{\nu}(g,u){\cal D}_{\nu}. \label{RG2}$$ In Eq. (\[RG2\]), we have written ${\cal D}_{x}\equiv x\partial_{x}$ for any variable $x$, and the RG functions (the $\beta$ functions and the anomalous dimension $\gamma$) are defined as \[RGF\] $$\gamma_{\nu}\equiv \Dm \ln Z_{\nu}, \label{RGF1}$$ $$\beta_{g}\equiv\Dm g=g[-\eps-\eta+3\gamma_{\nu}], \label{beta1}$$ $$\beta_{u}\equiv\Dm u=u[-\eta+\gamma_{\nu}]. 
\label{beta2}$$ The relations between $\beta$ and $\gamma$ in Eq. (\[RGF\]) result from the definitions and the relation (\[svaz\]). Now let us turn to the explicit calculation of the constant $Z_{\nu}$ in the one-loop approximation in the MS scheme. The constant $Z_{\nu}$ is determined by the requirement that the 1-irreducible Green function $\langle\theta'\theta\rangle_{\rm 1-ir}$, when expressed in renormalized variables, be UV finite \[i.e., have no singularities for $\eps$, $\eta\to0$\]. The Dyson equation (\[Dyson\]) relates this function to the self-energy operator $\Sigma_{\theta'\theta}$, and Eq. (\[novaja2\]) gives the explicit expression for the latter in the first order $O(g_{0})$ of the unrenormalized perturbation theory. Now we have to calculate the function $\Sigma_{\theta'\theta}$ in the order $O(g)$ of the renormalized perturbation theory; therefore we should simply replace $\nu_{0}\to\nu$ in the propagator $\langle\theta\theta'\rangle_{0}$ and use the expression (\[FinR\]) for the velocity correlator in Eq. (\[Dyson2\]), which leads to the substitution $g_{0}\to g\mu^{\eps+\eta}$, $u_{0}\to u \mu^{\eta}$, $\nu_{0}\to\nu$ in Eqs. (\[novaja1\]), (\[novaja2\]). We know that the divergent part of the diagram is independent of $\omega$, so that we can set $\omega=0$ in what follows. It is also convenient to cut off the integral over ${\bf q}$ from below at $q\simeq m$ and set $m=0$ in the integrand (the integral diverges logarithmically, and its UV divergent part is independent of the specific form of the IR regularization). Furthermore, we can set ${\bf k}=0$ in the integrand (we know that the counterterm is proportional to $k^{2}$, and the factor of $k^{2}$ has already been isolated from the integral) and make use of the isotropy, namely, $$\int d{\bf q}\, f(q) P_{ij}({\bf q}) = \delta_{ij}\frac{d-1}{d} \int d{\bf q}\, f(q).$$ Then Eq. 
(\[novaja2\]) yields $$\Sigma_{\theta'\theta} (\omega=0, k) \simeq - k^{2} \, \frac{g\nu \mu^{\eps} \,(d-1)\, J}{2ud}\, , \label{Dyson4}$$ where we have written $$J\equiv \int \frac{d{\bf q}}{(2\pi)^{d}} \, \frac{q^{-d-\eps}}{1+u\,(\mu/q)^{\eta}} \label{J}$$ and $\simeq$ denotes equality up to UV finite parts. The expansion of the integrand in $u$ gives $$J= \sum_{s=0}^{\infty} (-u)^{s}\mu^{s\eta} \int \frac{d{\bf q}}{(2\pi)^{d}} q^{-d-\eps-s \eta}\simeq \frac{S_{d}}{(2\pi)^{d}} \sum_{s=0}^{\infty} (-u)^{s} \frac{\mu^{s\eta}\,m^{-\eps-s\eta}}{\eps+s\eta}\, , \label{J1}$$ where the parameter $m$ arises from the IR limit in the integral over ${\bf q}$ and $S_d\equiv 2\pi ^{d/2}/\Gamma (d/2)$ is the surface area of the unit sphere in $d$-dimensional space. Finally, from Eqs. (\[Dyson4\]) and (\[J1\]) we obtain $$\Sigma_{\theta'\theta} (\omega=0, k) \simeq \frac{-ag\nu k^2}{u} \sum_{s=0}^{\infty} \frac{(-u)^{s}\,(\mu/m)^{\eps+s\eta}} {\eps+s\eta}, \label{Dyson5}$$ where we have written $$a\equiv \frac{(d-1)S_{d}}{2d(2\pi)^{d}}. \label{a}$$ The renormalization constant $Z_{\nu}$ is found from the requirement that the UV divergences cancel out in Eq. (\[Dyson\]) after the substitution $\nu_0=\nu Z_{\nu}$. This determines $Z_{\nu}$ up to a UV finite contribution; the latter is fixed by the choice of the renormalization scheme. In the MS scheme all the renormalization constants have the form “1 + only poles in $\eps$, $\eta$ and their linear combinations,” which gives the following expression $$Z_{\nu}=1- \frac{ag}{u}\sum^{\infty}_{s=0} \frac{(-u)^{s}} {\eps+s\eta}\, , \label{Z}$$ with the coefficient $a$ from Eq. (\[a\]). In contrast to the rapid-change model, the one-loop approximation in the case at hand is not exact: the expression (\[Z\]) has nontrivial corrections of order $g^{2}$, $g^{3}$, and so on. The series in Eq. (\[Z\]) can be expressed in the form of a single integral, but this is not convenient for the calculation of the RG functions.
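The pole structure in Eq. (\[J1\]) can be checked numerically: for $u(\mu/m)^{\eta}<1$ the termwise integration is legitimate, and the radial part of $J$ with the IR cutoff, $\int_{m}^{\infty} dq\, q^{-1-\eps}/[1+u(\mu/q)^{\eta}]$, must coincide with the series $\sum_{s}(-u)^{s}\mu^{s\eta}m^{-\eps-s\eta}/(\eps+s\eta)$ (the common angular factor $S_{d}/(2\pi)^{d}$ is dropped on both sides). A minimal Python sketch (ours), with illustrative parameter values of our choosing:

```python
import math

def J_radial(eps, eta, u, mu, m, T=40.0, n=200_000):
    # radial integral with IR cutoff m, via the substitution q = m*exp(t):
    # Int_m^inf dq q^{-1-eps}/(1 + u (mu/q)^eta) = Int_0^inf dt q^{-eps}/(1 + u (mu/q)^eta)
    h = T/n
    total = 0.0
    for j in range(n + 1):
        q = m*math.exp(j*h)
        f = q**(-eps)/(1.0 + u*(mu/q)**eta)
        total += (0.5 if j in (0, n) else 1.0)*f
    return total*h

def J_poles(eps, eta, u, mu, m, smax=60):
    # termwise (geometric) expansion, convergent for u*(mu/m)**eta < 1
    return sum((-u)**s*mu**(s*eta)*m**(-eps - s*eta)/(eps + s*eta)
               for s in range(smax + 1))
```

Each term of the series carries a pole at $\eps+s\eta=0$, which is the origin of the "poles in $\eps$, $\eta$ and their linear combinations" quoted above for the MS constants.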
The RG functions in the one-loop approximation can be calculated from the renormalization constant (\[Z\]) using the identity $\Dm=\beta_{g}\partial_{g}+\beta_{u}\partial_{u}$, which follows from the definitions (\[RGF\]) and the fact that $Z_{\nu}$ depends only on the charges $g,u$. Within our accuracy this identity reduces to $\Dm\simeq -(\eps+\eta)\D_g -\eta\D_u $. From Eq. (\[Z\]) it then follows: $$\begin{aligned} \gamma_{\nu}= \biggl[(\eps+\eta)\D_g +\eta\D_u\biggr] \frac{ag}{u}\sum^{\infty}_{s=0} \frac{(-u)^{s}} {\eps+s\eta}= \frac{ag}{u}\sum^{\infty}_{s=0} (-u)^{s}= \frac{ag}{u(1+u)}\, , \label{gammanu}\end{aligned}$$ up to corrections of order $g^{2}$ and higher. The beta functions are obtained from Eq. (\[gammanu\]) using the relations (\[beta1\]), (\[beta2\]). Fixed points and scaling regimes {#sec:Fixed} ================================ It is well known that possible scaling regimes of a renormalizable model are associated with the IR stable fixed points of the corresponding RG equations, see, e.g., [@Zinn; @book3]. The fixed points are determined from the requirement that all the beta functions of the model vanish. In our model the coordinates $g_{*},u_{*}$ of the fixed points are found from the equations $$\beta_{g} (g_{*},u_{*})=\beta_{u} (g_{*},u_{*})=0 \label{points}$$ with the beta functions given in Eqs. (\[beta1\]), (\[beta2\]). The type of the fixed point is determined by the eigenvalues of the matrix $\Omega=\{\Omega_{ik}=\partial\beta_{i}/\partial g_{k}\}$, where $\beta_{i}$ denotes the full set of the beta functions and $g_{i}$ is the full set of charges. For the standard (as in Eq. (\[gg\])) formulation of the problem the IR asymptotic behavior is governed by the IR stable fixed points, i.e., those for which all the eigenvalues are positive. From the equations (\[beta1\]), (\[beta2\]) we obtain the exact relation $\beta_{g}/g-3\beta_{u}/u =2\eta-\eps$.
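Both the term-by-term cancellation of the poles in the step from Eq. (\[Z\]) to Eq. (\[gammanu\]) and the exact relation between the beta functions are easy to verify symbolically. The following sympy sketch (ours, for illustration) checks the former for a truncated series, where the cancellation works order by order in $u$, and the latter for an arbitrary, unspecified $\gamma_{\nu}$:

```python
import sympy as sp

a, g, u, eps, eta = sp.symbols('a g u epsilon eta', positive=True)
S = 8  # truncation order of the series in Z_nu; the cancellation is term by term

Z_nu = 1 - (a*g/u)*sum((-u)**s/(eps + s*eta) for s in range(S + 1))

# within one-loop accuracy: tilde-D_mu -> -(eps+eta) D_g - eta D_u, and ln Z ~ Z - 1
gamma_nu = -(eps + eta)*g*sp.diff(Z_nu, g) - eta*u*sp.diff(Z_nu, u)
target = (a*g/u)*sum((-u)**s for s in range(S + 1))  # partial sum of a g/(u(1+u))
residual = sp.simplify(gamma_nu - target)            # poles in eps, eta cancel -> 0

# the exact relation beta_g/g - 3 beta_u/u = 2 eta - eps holds for any gamma_nu
gam = sp.Function('gamma')(g, u)
beta_g = g*(-eps - eta + 3*gam)
beta_u = u*(-eta + gam)
relation = sp.simplify(beta_g/g - 3*beta_u/u)        # gamma drops out
```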
It shows that the beta functions $\beta_{g}$, $\beta_{u}$ cannot vanish simultaneously for finite values of their arguments. \[The only exception is the case $2\eta=\eps$. We shall study it separately, and for now we assume $2\eta\ne\eps$.\] Therefore, to find the fixed points we must set either $u=0$ or $u=\infty$ and simultaneously rescale $g$ so that the anomalous dimension $\gamma_{\nu}$ remains finite. In order to study the limit $u\to\infty$ we change to the new variables $w\equiv 1/u$, $g'\equiv g/u^{2}$; the corresponding beta functions have the form $$\begin{aligned} \beta_{w}\equiv \Dm w= -\beta_{u}/u^{2}=w[\eta-\gamma_{\nu}], \nonumber \\ \beta_{g'}\equiv \Dm g'=\beta_{g}/u^{2}-2g\beta_{u}/u^{3}= g'[\eta-\eps+\gamma_{\nu}], \label{beta'}\end{aligned}$$ and for the one-loop anomalous dimension we obtain from Eq. (\[gammanu\]) $$\gamma_{\nu}=ag'/(1+w) \label{gamma'}$$ with the constant $a$ defined in Eq. (\[a\]). From the expressions (\[beta'\]) we find two fixed points, which we denote FPI and FPII. The first point is trivial, $$\begin{aligned} {\rm FPI:} \qquad w_{*}=g'_{*}=0; \qquad \gamma_{\nu}^{*}=0. \label{FPI}\end{aligned}$$ The corresponding matrix $\Omega$ is diagonal with the diagonal elements $$\Omega_{1}=\eta, \quad \Omega_{2}=\eta-\eps. \label{omegaI}$$ For the second point we obtain $$\begin{aligned} {\rm FPII:} \qquad w_{*}=0,\ g'_{*}= (\eps-\eta)/a; \qquad \gamma_{\nu}^{*}= \eps-\eta. \label{FPII}\end{aligned}$$ The corresponding matrix $\Omega$ is triangular, $\partial_{g'}\beta_{w}=0$, and its eigenvalues coincide with the diagonal elements: $$\begin{aligned} \Omega_{1}= \partial_{w}\beta_{w} =\eta-\gamma_{\nu}^{*}=2\eta-\eps, \nonumber \\ \Omega_{2}= \partial_{g'}\beta_{g'} =ag'_{*}= \eps-\eta. \label{omegaII}\end{aligned}$$ We note that the expressions for $\gamma_{\nu}^{*}$ in Eq.
(\[FPII\]) and for $\Omega_{1}$ in (\[omegaII\]) are exact, i.e., they have no corrections of order $O(\eps^{2})$ \[we take $\eps\simeq\eta$, so that here and below $O(\eps^{2})$ denotes all the terms of the form $\eps\eta$, $\eta^{2}$ and higher\]. Now let us turn to the regime with $u\to0$. In order to study this limit we change to the new variable $g''\equiv g/u$; the corresponding beta functions have the form $$\begin{aligned} \beta_{g''}\equiv\Dm g''=\beta_{g}/u-g\beta_{u}/u^{2}= g''[-\eps+2\gamma_{\nu}], \nonumber \\ \beta_{u}=u[-\eta+\gamma_{\nu}] \label{beta''}\end{aligned}$$ \[the function $\beta_{u}$ is the same as in Eq. (\[beta2\])\]. The one-loop anomalous dimension (\[gammanu\]) takes the form $$\gamma_{\nu} = ag''/(1+u) .$$ From the expressions (\[beta''\]) we find two fixed points, which we denote FPIII and FPIV. The first point is trivial, $$\begin{aligned} {\rm FPIII:} \qquad u_{*}=g''_{*}=0; \qquad \gamma_{\nu}^{*}=0. \label{FPIII}\end{aligned}$$ The corresponding matrix $\Omega$ is diagonal with the elements $$\Omega_{1}=-\eps, \quad \Omega_{2}=-\eta. \label{omegaIII}$$ For the nontrivial point we obtain $$\begin{aligned} {\rm FPIV:} \qquad u_{*}=0,\ g''_{*}=\eps/2a; \qquad \gamma_{\nu}^{*}=\eps/2. \label{FPIV}\end{aligned}$$ The corresponding matrix $\Omega$ is triangular, $\partial_{g''}\beta_{u}=0$, and its eigenvalues have the form $$\begin{aligned} \Omega_{1}=\partial_{u}\beta_{u} =-\eta+\gamma_{\nu}^{*}=(\eps-2\eta)/2, \nonumber \\ \Omega_{2}=\partial_{g''}\beta_{g''} = 2ag''_{*}= \eps . \label{omegaIV}\end{aligned}$$ The expressions for $\gamma_{\nu}^{*}$ in Eq. (\[FPIV\]) and for $\Omega_{1}$ in Eq. (\[omegaIV\]) are exact. Of course, the expressions (\[omegaI\]), (\[omegaIII\]), and $\gamma_{\nu}^{*}=0$ for the trivial fixed points are also exact. In the special case $\eps=2\eta$ the beta functions (\[beta1\]), (\[beta2\]) become proportional, and the set (\[points\]) reduces to a single equation.
As a result, the corresponding nontrivial fixed point, which we denote FPV, is degenerate: rather than a point, we have a line of fixed points in the $g$–$u$ plane. It is given by the relation $$\begin{aligned} {\rm FPV:} \qquad g_{*}/[u_{*}(u_{*}+1)]=\eta/a; \qquad \gamma_{\nu}^{*}=\eta=\eps/2. \label{FPV}\end{aligned}$$ The exact expression for $\gamma_{\nu}^{*}$ follows from the relation between the RG functions in Eq. (\[RGF\]). The eigenvalues of the matrix $\Omega$ (which is not diagonal here) have the form $$\Omega_{1}=0, \quad \Omega_{2}=\eta\,(2+u_{*})/(1+u_{*}). \label{omegaV}$$ The vanishing of the element $\Omega_{1}$ reflects the existence of a marginal direction in the $g$–$u$ plane (along the line of the fixed points) and is therefore an exact fact. The coordinates of a point on the line (\[FPV\]) can also be expressed explicitly as functions of the dimensionless parameter $\rho\equiv g_{0}/u^{3}_{0}$ using the exact relation $g_{0}/u^{3}_{0}=g_{*}/u^{3}_{*}$. The actual expansion parameter appears to be $\sqrt\eta$ rather than $\eta$ itself, and the zeroth-order approximation has the form $$g_{*}=(\eta/a)^{3/2}\rho^{-1/2}, \qquad u_{*}=[\eta/(a\rho)]^{1/2}, \qquad \Omega_{2}=2\eta. \label{roots}$$ In Figure I, we show the regions of stability for the fixed points FPI–FPV in the $\eps$–$\eta$ plane, i.e., the regions for which the eigenvalues of the $\Omega$ matrix are positive. The boundaries of the regions are depicted by thick lines. We note that the regions adjoin each other without overlaps or gaps. This fact is exact for the ray $\eps=2\eta>0$, the boundary between the regions of stability for the points FPII and FPIV \[at the same time, this ray is the region of stability for the point FPV\]. On the contrary, the boundaries $\eps=\eta$, $\eta>0$ for the point FPII and $\eps=0$, $\eps>\eta$ for FPIV are only approximate, so that gaps or overlaps can appear in the two-loop approximation.
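The fixed-point data quoted in Eqs. (\[omegaII\]), (\[omegaIV\]), and (\[roots\]) can be verified symbolically from the one-loop $\gamma_{\nu}$ alone. The sympy sketch below (ours) checks the trace and determinant of $\Omega$ at FPII and FPIV, the exact constraint $\rho=g_{*}/u_{*}^{3}$ on the FPV line, and that the residual of the fixed-line condition starts at order $(\sqrt\eta)^{3}$, confirming $\sqrt\eta$ as the actual expansion parameter:

```python
import sympy as sp

a, eps, eta = sp.symbols('a epsilon eta', positive=True)

# u -> infinity variables (w = 1/u, g' = g/u^2); FPII is the rapid-change point
w, gp = sp.symbols('w gp', nonnegative=True)
gam2 = a*gp/(1 + w)
beta_w, beta_gp = w*(eta - gam2), gp*(eta - eps + gam2)
O2 = sp.Matrix([[sp.diff(b, x) for x in (w, gp)]
                for b in (beta_w, beta_gp)]).subs({w: 0, gp: (eps - eta)/a})

# u -> 0 variables (g'' = g/u, u); FPIV is the frozen point
u, gpp = sp.symbols('u gpp', nonnegative=True)
gam4 = a*gpp/(1 + u)
beta_u, beta_gpp = u*(-eta + gam4), gpp*(-eps + 2*gam4)
O4 = sp.Matrix([[sp.diff(b, x) for x in (u, gpp)]
                for b in (beta_u, beta_gpp)]).subs({u: 0, gpp: eps/(2*a)})

# FPV: zeroth-order roots, written in the expansion parameter x = sqrt(eta)
rho, x = sp.symbols('rho x', positive=True)
u_star = x/sp.sqrt(a*rho)                           # = sqrt(eta/(a rho))
g_star = x**3/(a**sp.Rational(3, 2)*sp.sqrt(rho))   # = (eta/a)^{3/2} rho^{-1/2}
line_resid = sp.series(a*g_star/(u_star*(1 + u_star)) - x**2, x, 0, 4).removeO()
```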
The regions denoted as FPIV[*a*]{} and FPIV[*b*]{} with the boundary $\eps=2$ both correspond to the same fixed point FPIV; the part FPIV[*b*]{} represents the region in which the velocity field has negative critical dimension; see Sec. \[sec:summ\]. Surprisingly, Fig. I bears some resemblance to the phase diagrams presented in Refs. [@AvelMaj; @Glimm], despite the essential difference between the models (in those papers, a strongly anisotropic velocity field has been studied). Indeed, the boundaries between the diffusive-type behavior (“homogenization regime” in the terminology of [@AvelMaj]) and the convective-type regimes (“superdiffusive behavior”) in the two models coincide (however, in our case they are not exact and will be affected by the $O(\eps)$ corrections). Furthermore, the Kolmogorov point ($\eps=8/3$, $\eta=4/3$) in our case and in [@AvelMaj] lies on a boundary between two nontrivial regimes. We also note that the boundary $2\eta=\eps$ between the rapid-change and frozen regimes was anticipated on phenomenological grounds in Ref. [@Falk3], see also [@Eyink]; their arguments can be linked directly to the RG analysis (see below). It is clear from the definition of the parameters $g'$, $g''$ that the critical regime governed by the point FPII corresponds to the rapid-change limit (\[RC1\]) of our model, while the point FPIV corresponds to the limit of the frozen velocity field; see Eq. (\[RC2\]). This shows that in the latter case, the temporal fluctuations of the velocity field are asymptotically irrelevant in determining the inertial-range behavior of the scalar, which is then completely determined by the equal-time velocity statistics. In the former case, spatial and temporal fluctuations are both relevant, but the effective correlation time of the scalar field becomes so large under renormalization that the correlation time of the velocity can be completely neglected.
The inertial-range behavior of the scalar is determined solely by the $\omega=0$ mode of the velocity field; this is the case of the rapid-change model. We then expect that all the critical dimensions at the point FPII \[FPIV\] depend only on the exponent $\zeta\equiv\eps-\eta$ \[$\eps$\] that survives in the limit in question, and coincide with the corresponding dimensions obtained directly for the models (\[RC1\]) \[(\[RC2\])\]. This is indeed the case; see Eqs. (\[DeltaOmega\]) and (\[Dnp\]) below. In the regimes governed by the trivial fixed points FPI and FPIII, the contribution of the convection dies out in the IR asymptotic region; the IR behavior has a purely diffusive character, while the convection can be treated within ordinary perturbation theory. The existence of the two fixed points, the frozen and the rapid-change ones, implies that for $\eta<0$ transport by [*small*]{} wavenumbers $k\to0$ is governed by the equal-time (spatial) velocity statistics, while for $\eta>0$ transport by small wavenumbers is determined by the $\omega=0$ mode, i.e., the time-decorrelated component of the velocity field. If there were IR singularities in the scalar correlations, they would be determined by the contributions of small momenta, and these two regimes would be genuinely different. However, in the regions of stability of the trivial fixed points there are no such singularities (see the discussion in Sec. \[sec:FT\]). Moreover, in these regimes all momenta $k$ contribute to the long-term, large-scale transport properties of the scalar field (we recall that for $\eta>0$, $\eps<0$ and $\eta<0$, $\eps<\eta$, the actual UV cutoff $\Lambda$ has to be introduced, see Sec. \[sec:FT\], and the main contribution to the perturbative diagrams then comes from the momenta of order $k\sim\Lambda$). The RG is not suitable for studying such “$\Lambda$ divergent” quantities, which are analytic in momenta and frequencies.
Therefore, the splitting of the homogenization regime into rapid-change and frozen parts is not meaningless, but neither is it of much practical use. Probably for this reason it was not mentioned in Refs. [@AvelMaj; @AvelMaj2; @Glimm]. In what follows, we shall focus our attention on the nontrivial (anomalous) regimes. The solution of the RG equations in models of stochastic hydrodynamics is discussed in detail in Refs. [@JETP; @UFN; @turbo]; see also [@RG; @RG1] for the case of the rapid-change models. Below we restrict ourselves to the information we actually need. Any solution of the RG equation (\[RG1\]) can be represented in terms of invariant variables $\bar g(k)$, $\bar u(k)$, and $\bar \nu(k)$, i.e., the first integrals normalized at $k=\mu$ to $g$, $u$, and $\nu$, respectively (we recall that $\mu $ is the renormalization mass in the MS scheme). The relation between the bare and invariant charges has the form $$g_{0}=k^{\eps+\eta}\, \bar g\, Z_{g}(\bar g,\bar u),\quad u_{0}=k^{\eta}\, \bar u\, Z_{u}(\bar g,\bar u), \quad \nu_0=\bar\nu Z_{\nu}(\bar g,\bar u), \label{exo1}$$ see, e.g., [@UFN; @turbo; @AV]. Equation (\[exo1\]) determines the invariant variables implicitly as functions of the bare parameters; it is valid because both sides of it satisfy the RG equation, and because Eq. (\[exo1\]) at $k=\mu$ coincides with (\[mult\]) owing to the normalization of the invariant variables. The correlation time of the velocity field at wavenumber $k$ is determined by the relation $t_{v}^{-1}(k) = R(k) = u_{0}\nu_0 k^{2-\eta}$; see Eqs. (\[Fin1\]) and (\[Fin\]). The correlation time of the free scalar field is given by $t_{\theta}^{-1}(k) = \nu_0 k^{2}$; in the presence of advection it is replaced by the exact expression $t_{\theta}^{-1}(k) =\bar \nu(k) k^{2}$.
The relations (\[svaz\]) and (\[exo1\]) allow the bare parameters and renormalization constants to be eliminated from the ratio $t_{\theta}(k) /t_{v}(k) $; this gives $$t_{\theta}(k)/t_{v}(k)=\bar u(k)\propto\const\, k^{-\eta+\gamma^{*}_{\nu}}. \label{times}$$ The last relation in Eq. (\[times\]) holds for $k\to0$. It follows from the RG equation ${\cal D}_{k}\bar u=\beta_{u}(\bar g,\bar u)$, which reduces to ${\cal D}_{k}\bar u= \bar u [-\eta+\gamma^{*}_{\nu}]$ near a fixed point; see Eq. (\[beta2\]). Equation (\[times\]) discloses the precise physical meaning of the invariant variable $\bar u$: it is the ratio of the scalar and velocity correlation times at the wavenumber $k$. Now we can complete the above discussion of the scaling regimes and relate it to the phenomenological arguments given in Refs. [@Falk3; @Eyink]. From (\[times\]) it follows that for the fixed points FPI and FPII the velocity correlation time $t_{v}(k)$ becomes very small in comparison with $t_{\theta}(k)$ for $k\to0$ and can be disregarded; we arrive at the time-decorrelated velocity field. For FPIII and FPIV, the opposite inequality, $t_{v}(k)\gg t_{\theta}(k)$, holds for small momenta; the temporal fluctuations of the velocity are “frozen in,” and its correlation time can be replaced by $t_{v}(k)=\infty$. \[Using the representation (\[times\]) and the exact expressions for $\gamma_{\nu}^{*}$ in Eqs. (\[FPI\]), (\[FPII\]), (\[FPIII\]) and (\[FPIV\]), one can easily check that $\bar u\to\infty$ for FPI and FPII and $\bar u\to0$ for FPIII and FPIV, in agreement with the analysis of the $\Omega$ matrix.\] However, these strong inequalities for the correlation times hold only asymptotically for $k\to0$, and therefore the exact correlator (\[Fin1\]) can be replaced with its limits (\[RC1\]) or (\[RC2\]) only in the calculation of quantities dominated by the small-$k$ modes of the velocity field.
Finally, for the point FPV one has $\gamma_{\nu}^{*}=\eta$ and the ratio (\[times\]) remains finite for $k\to0$; this is the case of the local turnover exponent, studied in [@Falk3]. Let $F$ be some multiplicatively renormalized quantity (a parameter, a field or a composite operator), i.e., $F=Z_{F}F_{\rm ren}$ with a certain renormalization constant $Z_{F}$. Then its critical dimension is given by the expression $$\Delta[F]\equiv\Delta_{F} = d_{F}^{k}+ \Delta_{\omega} d_{F}^{\omega}+\gamma_{F}^{*}, \label{32B}$$ see, e.g., [@JETP; @UFN; @turbo; @Pismak]. Here $d_{F}^{k}$ and $d_{F}^{\omega}$ are the corresponding canonical dimensions, $\gamma_{F}^{*}$ is the value of the anomalous dimension $\gamma_{F}(g)\equiv \widetilde{\cal D}_\mu \ln Z_{F}$ at the fixed point in question, and $\Delta_{\omega}=2-\gamma^{*}_{\nu}$ is the critical dimension of frequency. For the nontrivial fixed points we obtain $$\Delta_{\omega}= 2- \cases{ \zeta & for FPII, \cr \eps/2 & for FPIV, \cr \eta=\eps/2 & for FPV \cr } \label{DeltaOmega}$$ (we recall that $\zeta\equiv \eps-\eta$; see (\[RC1\])). The critical dimensions of the fields $\Phi$ in our model are also found exactly: $$\Delta_{\bf v}=1-\gamma^{*}_{\nu},\qquad \Delta_{\theta} = -1, \qquad \Delta_{\theta'} = d+1, \label{DimFi}$$ and for the IR scale we have $\Delta_{m}=1$ \[we recall that all these quantities in the model (\[action\]) are not renormalized, so that their anomalous dimensions vanish identically, $\gamma_{\Phi,m}\equiv 0$\]. It is also not difficult to show that the composite operator $\theta^{n}$ in the model (\[action\]) is not renormalized, and therefore its critical dimension is given simply by the relation $\Delta[\theta^{n}]= n \Delta[\theta]$; cf. [@RG] for the rapid-change case. We note that the canonical dimensions of the fields $\theta$, $\theta'$ in our model (see Table I) differ from their counterparts in the isotropic rapid-change model (see Table I in Ref. [@RG]).
As a result, the critical dimensions (\[DeltaOmega\]) and (\[DimFi\]) at the point FPIV differ from their analogs for the rapid-change model, despite the fact that the anomalous dimensions are identical. In principle, the canonical dimensions in the two models can be made equal by an appropriate rescaling of the scalar fields; we shall not dwell on this point here. Let $G(r)=\langle F_{1}(x)F_{2}(x')\rangle$ be an equal-time two-point quantity, for example, the pair correlation function of the primary fields $\Phi$ or of some multiplicatively renormalizable composite operators. The existence of a nontrivial IR stable fixed point implies that in the IR asymptotic region $\Lambda r\gg1$ and for any fixed $mr$ the function $G(r)$ takes on the form $$G(r) \simeq \nu_{0}^{d_{G}^{\omega}}\, \Lambda^{d_{G}} (\Lambda r)^{-\Delta_{G}}\, \xi(mr), \label{RGR}$$ with the values of the critical dimensions corresponding to the fixed point in question and a certain scaling function $\xi$ whose explicit form is not determined by the RG equation itself. The canonical dimensions $d_{G}^{\omega}$, $d_{G}$ and the critical dimension $\Delta_{G}$ of the function $G(r)$ are equal to the sums of the corresponding dimensions of the quantities $F_{i}$. Critical dimensions of the composite operators $\partial\theta\cdots\partial\theta$ {#sec:Operators} =================================================================================== In the following, an important role will be played by the composite operators of the form $$F[n,p]\equiv \partial_{i_{1}}\theta\cdots\partial_{i_{p}}\theta\, (\partial_{i}\theta\partial_{i}\theta)^{l}, \label{Fnp}$$ where $p$ is the number of free vector indices and $n=p+2l$ is the total number of fields $\theta$ entering into the operator; the vector indices of the symbol $F[n,p]$ are omitted. Coincidence of the field arguments in Green functions containing a composite operator $F$ gives rise to additional UV divergences.
They are removed by a special renormalization procedure, described in detail, e.g., in [@Zinn; @book3; @Collins]. The discussion of the renormalization of composite operators in turbulence models can be found in [@UFN; @turbo]; see also Ref. [@RG] for the case of Kraichnan’s model. Owing to the renormalization, the critical dimension $\Delta[F]$ associated with a given operator $F$ is not in general equal to the simple sum of the critical dimensions of the fields and derivatives entering into $F$. As a rule, the renormalization of composite operators involves mixing, i.e., a UV-finite renormalized operator is a linear combination of unrenormalized operators, and vice versa. The analysis of the UV divergences is related to the analysis of the corresponding canonical dimensions; cf. Sec. \[sec:RG\]. It shows that the operators $F[n,p]$ mix only with each other in renormalization, with the multiplicative matrix renormalization of the form (dropping the vector indices everywhere) $$F[n,p] = Z_{[n,p]\,[n',p']} F_{\rm ren}[n',p']. \label{Matrix}$$ Here $F_{\rm ren}$ is the renormalized analog of the operator $F$ and $Z$ is the matrix of renormalization constants. The corresponding matrix of anomalous dimensions is defined as $$\gamma_{[n,p]\,[n',p']} = Z_{[n,p]\,[n'',p'']}^{-1} \Dm Z_{[n'',p'']\,[n',p']}. \label{Ma2}$$ A simple analysis of the diagrams shows that the matrix element $Z_{[n,p]\,[n',p']}$ is proportional to $h^{n-n'}$, so that the elements with $n<n'$ vanish (the parameter $h\equiv|\h|$ appears only in the numerators of the diagrams; see Sec. \[sec:RG\]). The elements with $n=n'$ are independent of $h$ and can therefore be calculated directly in the isotropic model with $\h=0$.
The block $Z_{[n,p]\,[n,p']}$ can then be diagonalized by passing to irreducible operators (scalars, vectors, and traceless tensors); but for our purposes it is sufficient to note that the elements $Z_{[n,p]\,[n,p']}$ vanish for $p<p'$ \[the irreducible tensor of rank $p$ consists of the monomials with $p'\le p$ only, and therefore only these monomials can admix to the monomial of rank $p$ in renormalization\]. Therefore, the renormalization matrix in Eq. (\[Matrix\]) is triangular, and so is the matrix (\[Ma2\]). The isotropy is violated for $\h\ne0$, so that irreducible tensors can mix with each other even if their ranks and the numbers of fields $\theta$ they contain are different. In particular, the vector $\partial_{i}\theta$ admixes to the irreducible tensor $\partial_{i}\theta\partial_{j}\theta-\delta_{ij}(\partial_{s}\theta \partial_{s}\theta)/d$ in the form of the traceless combination $2\delta_{ij} (h_{s}\partial_{s}\theta)/d-h_{i}\partial_{j}\theta- h_{j}\partial_{i}\theta$. In the following, we shall be interested not in the precise form of the basis operators, i.e., those having definite anomalous dimensions, but rather in the anomalous dimensions themselves. The latter are given by the eigenvalues $\gamma[n,p]$ of the matrix (\[Ma2\]), and in our case they are completely determined by the diagonal elements of the renormalization matrix (\[Matrix\]): $\gamma[n,p]= \Dm \ln Z_{[n,p]\,[n,p]}$. Now let us turn to the one-loop calculation of the constant (\[Znp\]) in the MS scheme. Let $\Gamma(x;\theta)$ be the generating functional of the 1-irreducible Green functions with one composite operator $F[n,p]$ from Eq. (\[Fnp\]) and any number of fields $\theta$. Here $x\equiv (t,{\bf x})$ is the argument of the operator and $\theta(x)$ is the functional argument, the “classical analog” of the random field $\theta$.
We are interested in the $\theta^{n}$ term of the expansion of $\Gamma(x;\theta)$ in $\theta(x)$, which we denote $\Gamma_{n}(x;\theta)$; it has the form $$\Gamma_{n}(x;\theta) = \frac{1}{n!} \int dx_{1} \cdots \int dx_{n} \, \theta(x_{1})\cdots\theta(x_{n})\, \langle F[n,p](x) \theta(x_{1})\cdots\theta(x_{n})\rangle_{\rm 1-ir}. \label{Gamma1}$$ In the one-loop approximation the function (\[Gamma1\]) is represented diagrammatically in the following manner: $$\Gamma_{n}= F[n,p] +\frac{1}{2} \put(-20.00,-50.00){\makebox{\dA}} \hskip1.4cm . \label{Gamma2}$$ The first term is the “tree” approximation, and the black circle with two attached lines in the diagram denotes the variational derivative $\delta^{2} F[n,p] / {\delta\theta\delta\theta}$. In the momentum representation it has the form $$\begin{aligned} T ({\bf k},{\bf q})\equiv \frac{\delta^{2} F[n,p]}{\delta\theta({\bf k})\delta\theta({\bf q})} = - p\,(p-1)\, k_{i_{1}}q_{i_{2}}\, (\partial_{i_{3}}\theta\cdots\partial_{i_{p}}\theta)\, (\partial_{i}\theta\partial_{i}\theta)^{l} \nonumber \\ - 4pl\, k_{i_{1}}q_{s}\,(\partial_{i_{2}}\theta\cdots \partial_{i_{p}}\theta)\, \partial_{s}\theta \, (\partial_{i}\theta\partial_{i}\theta)^{l-1} \nonumber \\ - 2l ({\bf k}\cdot{\bf q})\, (\partial_{i_{1}}\theta\cdots\partial_{i_{p}}\theta)\, (\partial_{i}\theta\partial_{i}\theta)^{l-1} \nonumber \\ - 4l(l-1)\, k_{j}q_{s} (\partial_{j}\theta\partial_{s}\theta)\, (\partial_{i_{1}}\theta\cdots\partial_{i_{p}}\theta)\, (\partial_{i}\theta\partial_{i}\theta)^{l-2}. \label{variat}\end{aligned}$$ Strictly speaking, the right-hand side of Eq. (\[variat\]) should be symmetrized with respect to the indices $i_{1}\cdots i_{p}$ and the momenta ${\bf k}$, ${\bf q}$. However, the symmetry is restored automatically after the vertex $T ({\bf k},{\bf q})$ has been inserted into the diagram, which is why only one term of each type is displayed in Eq. (\[variat\]) and the required symmetry coefficients are introduced.
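The structure of the vertex (\[variat\]) can be verified numerically in the simplest case $n=2$, $p=0$, $l=1$, i.e., for the operator $F[2,0]=\partial_{i}\theta\partial_{i}\theta$, where only the third term survives and $T({\bf k},{\bf q})=-2({\bf k}\cdot{\bf q})$. The following sketch (illustrative only; the momenta and the evaluation point are chosen arbitrarily) evaluates $F[2,0]$ for a two-mode field in $d=2$ and compares a finite-difference mixed derivative with this formula:

```python
import cmath

# Check of Eq. (variat) for n = 2, p = 0, l = 1, i.e. F[2,0] = d_i theta d_i theta:
# only the third term survives, giving T(k, q) = -2 (k . q).
# We evaluate F at a point x for the two-mode field theta = a e^{i k.x} + b e^{i q.x}
# and compare d^2 F / da db with -2 (k . q) e^{i (k + q).x}.

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def F(a, b, k, q, x):
    """(grad theta).(grad theta) at x for theta = a e^{ik.x} + b e^{iq.x} (d = 2)."""
    grad = [1j * k[i] * a * cmath.exp(1j * dot(k, x))
            + 1j * q[i] * b * cmath.exp(1j * dot(q, x)) for i in range(2)]
    return grad[0] ** 2 + grad[1] ** 2

# arbitrary illustrative momenta and evaluation point
k, q, x = (1.3, -0.7), (0.4, 2.1), (0.2, 0.5)
a0 = b0 = 0.3
h = 1e-4

# central finite difference for the mixed second derivative (exact here up to
# roundoff, since F is quadratic in the mode amplitudes a, b)
mixed = (F(a0 + h, b0 + h, k, q, x) - F(a0 + h, b0 - h, k, q, x)
         - F(a0 - h, b0 + h, k, q, x) + F(a0 - h, b0 - h, k, q, x)) / (4 * h * h)
expected = -2 * dot(k, q) * cmath.exp(1j * (dot(k, x) + dot(q, x)))
assert abs(mixed - expected) < 1e-6
```

The same construction with more modes and higher $n$, $p$ would probe the remaining terms of (\[variat\]); the case above checks only the $-2l({\bf k}\cdot{\bf q})$ structure.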
The vertex (\[variat\]) contains $(n-2)$ factors of $\partial\theta$. The two remaining “tails” $\theta$ are attached to the vertices $\theta'({\bf v}\partt)\theta$ of the diagram (\[Gamma2\]). It follows from the explicit form of the vertices that these two fields $\theta$ are isolated from the diagram in the form of the overall factor $\partial\theta\partial\theta$; cf. Sec. \[sec:RG\]. In other words, the two external momenta corresponding to these fields $\theta$ occur as an overall factor in the diagram, and the UV divergence of the latter is logarithmic rather than quadratic; cf. the expressions (\[novaja2\]) and (\[Dyson4\]). Therefore, we can set all the external momenta and the “mass” $m$ equal to zero in the integrand; the IR regularization is provided by the cut-off of the integral at $q\simeq m$. Then the UV divergent part of the one-loop diagram (\[Gamma2\]) can be written in the form $$\partial_{p}\theta\partial_{l}\theta \int \frac{d\omega}{2\pi} \int \frac{d{\bf q}}{(2\pi)^{d}} \, T ({\bf q},-{\bf q}) \frac{P_{pl}({\bf q})\,D_{v}(\omega,q)} {\omega^{2}+\nu^2 q^{4}} . \label{diagr01}$$ The expression (\[diagr01\]) is a linear combination of the integrals $$T_{ij,pl}= \int \frac{d\omega}{2\pi} \int \frac{d{\bf q}}{(2\pi)^{d}} \, \frac{q_{i}q_{j}\, P_{pl}({\bf q})\,D_{v}(\omega,q)} {\omega^{2}+\nu^2 q^{4}}. \label{diagr1}$$ We perform the integration over $\omega$ and make use of the isotropy, namely, $$\int d{\bf q}\, f(q)\,q_{i}q_{j}\, P_{pl}({\bf q}) = \frac{(d+1)\delta_{pl}\delta_{ij}-\delta_{pi}\delta_{lj} - \delta_{pj}\delta_{li}}{d(d+2)} \int d{\bf q}\, f(q)\, q^{2}.$$ This gives $$T_{ij,pl}= \frac{(d+1)\delta_{pl}\delta_{ij}-\delta_{pi}\delta_{lj} - \delta_{pj}\delta_{li}}{2ud(d+2)}\, g\mu^{\eps} \,J \label{diagr2}$$ with the integral $J$ from Eq. (\[J\]). Substituting Eqs. (\[variat\]) and (\[diagr2\]) into Eq. (\[diagr01\]) gives the desired expression for the divergent part of the diagram (\[Gamma2\]).
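The isotropy relation used in the last step can be checked independently: in terms of the unit vector ${\bf n}={\bf q}/q$ it states that the angular average of $n_{i}n_{j}P_{pl}({\bf n})$ equals the displayed tensor combination divided by $d(d+2)$. A minimal numerical sketch for $d=2$ (the periodic trapezoid rule is exact for trigonometric polynomials, so the agreement is to machine precision):

```python
import math

# Angular average <n_i n_j P_pl(n)> over the unit circle (d = 2), compared with
#   [(d+1) delta_pl delta_ij - delta_pi delta_lj - delta_pj delta_li] / (d (d+2)),
# where P_pl(n) = delta_pl - n_p n_l is the transverse projector.

d, N = 2, 360

def delta(a, b):
    return 1.0 if a == b else 0.0

def sphere_avg(i, j, p, l):
    """Periodic trapezoid rule over the unit circle; exact for the degree-4
    trigonometric polynomial n_i n_j P_pl(n)."""
    s = 0.0
    for step in range(N):
        t = 2.0 * math.pi * step / N
        n = (math.cos(t), math.sin(t))
        s += n[i] * n[j] * (delta(p, l) - n[p] * n[l])
    return s / N

def rhs(i, j, p, l):
    return ((d + 1) * delta(p, l) * delta(i, j) - delta(p, i) * delta(l, j)
            - delta(p, j) * delta(l, i)) / (d * (d + 2))

for i in range(d):
    for j in range(d):
        for p in range(d):
            for l in range(d):
                assert abs(sphere_avg(i, j, p, l) - rhs(i, j, p, l)) < 1e-12
```

The same check for $d=3$ only requires averaging over the sphere instead of the circle; the tensor structure on the right-hand side is unchanged.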
In this expression we have to take into account all the terms proportional to the operator $F[n,p]$ and neglect all the others, namely, those containing factors of $\delta_{i_{1}i_{2}}$ etc. The latter determine the non-diagonal elements of the matrix (\[Matrix\]), which we are not interested in here. Finally we obtain $$\Gamma_{n}\simeq F[n,p] \left[1 - \frac {g\mu^{\eps} \,J\,Q[n,p]} {4ud(d+2)} \right] + \cdots, \label{diagr3}$$ where we have written $$\begin{aligned} Q[n,p] & \equiv & 2n\,(n-1) - (d+1)\, (n-p)\, (d+n+p-2) = \nonumber \\ &=& 2p\,(p-1) - (d-1)\, (n-p)\, (d+n+p) . \label{Qnp}\end{aligned}$$ The dots in Eq. (\[diagr3\]) stand for the $O(g^{2})$ terms and for structures different from $F[n,p]$; $\simeq$ denotes equality up to UV-finite parts; we also recall that $n=p+2l$. The constant $Z_{[n,p],[n,p]}$ is found from the requirement that the renormalized analog $\Gamma_{n}^{\rm ren}\equiv Z_{[n,p],[n,p]}^{-1}\Gamma_{n}$ of the function (\[diagr3\]) be UV finite (mind the minus sign in the exponent); along with the representation (\[J1\]) for the integral $J$ and the MS scheme this gives the following result: $$Z_{[n,p]\,[n,p]}=1- \frac{ag}{u}\, \frac{Q[n,p]}{2(d-1)(d+2)} \sum^{\infty}_{s=0} \frac{(-u)^{s}} {\eps+s\eta}\, , \label{Znp}$$ with the polynomial $Q[n,p]$ from Eq. (\[Qnp\]) and the constant $a$ from Eq. (\[a\]). For the anomalous dimension (\[Ma2\]) we then obtain: $$\gamma[n,p]= \frac{ag\,Q[n,p]}{2u(u+1)(d-1)(d+2)}\, ; \label{Q}$$ cf. Sec. \[sec:RG\] for the dimension $\gamma_{\nu}$. The critical dimension associated with the operator $F[n,p]$ has the form $\Delta[n,p]= \gamma^{*}[n,p]$; see Eq. (\[32B\]) and Table I ($\gamma^{*}$ denotes the value of $\gamma$ at the fixed point in question). For the nontrivial fixed points discussed in Sec.
\[sec:Fixed\] we then obtain $$\Delta[n,p] = \frac{Q[n,p]}{2(d-1)(d+2)}\times \cases{ \zeta\equiv\eps-\eta & for FPII, \cr \eps/2 & for FPIV, \cr \eta =\eps/2 & for FPV \cr } \label{Dnp}$$ with corrections of order $O(\eps^{2})$. The expression (\[Dnp\]) illustrates the general fact that the critical dimensions in the rapid-change and frozen regimes depend only on the exponents $\zeta$ and $\eps$, respectively. It turns out that the dimension $\Delta[n,p]$ at the point FPV is universal, i.e., it is independent of the free parameter $u_{*}$ or, equivalently, of the specific choice of a fixed point on the curve described by Eq. (\[FPV\]). This is a consequence of the explicit form of the RG functions in the one-loop approximation (the same combination $g/u(u+1)$ enters into the beta functions and into the anomalous dimension of the operator $F[n,p]$). We therefore expect that the exact dimension $\Delta[n,p]$ at the point FPV is nonuniversal, the dependence on $u_{*}$ appearing at the two-loop level. Another artifact of the one-loop approximation is the continuity of the dimension $\Delta[n,p]$, as a function of the exponents $\eps$, $\eta$, across the crossover line $\eps=2\eta$. The first-order result (\[Dnp\]) for the operator $F[2,0]$ (the local dissipation rate) is in fact exact. The proof is based on a certain Schwinger equation; it is almost identical to the analogous proof for the Kraichnan model, given in [@RG], and will not be discussed here. The above analysis also applies to the case of a nonsolenoidal velocity field (compressible fluid). The transverse projector in Eq. (\[f\]) is then replaced by $P_{ij}({\bf k})+\alpha Q_{ij}({\bf k})$, where $Q_{ij}({\bf k})\equiv k_{i}k_{j}/k^{2}$ is the longitudinal projector and $\alpha>0$ is an additional arbitrary parameter, the degree of compressibility.
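The equality of the two forms of $Q[n,p]$ in Eq. (\[Qnp\]) is a polynomial identity in $n$, $p$, $d$, so checking it on a grid of integers suffices in practice. The sketch below also confirms that $Q[2,0]=-2(d-1)(d+2)$, so that Eq. (\[Dnp\]) gives $\Delta[2,0]=\Delta_{\omega}-2$ at all three nontrivial points \[cf. Eq. (\[DeltaOmega\])\], consistent with the exactness claim for the dissipation-rate operator:

```python
# Check that the two forms of Q[n,p] quoted in Eq. (Qnp) coincide:
#   2 n (n-1) - (d+1)(n-p)(d+n+p-2)  ==  2 p (p-1) - (d-1)(n-p)(d+n+p).
# Both sides are polynomials, so agreement on an integer grid verifies the identity.

def Q_first(n, p, d):
    return 2 * n * (n - 1) - (d + 1) * (n - p) * (d + n + p - 2)

def Q_second(n, p, d):
    return 2 * p * (p - 1) - (d - 1) * (n - p) * (d + n + p)

for d in range(2, 8):
    for n in range(0, 12):
        for p in range(0, n + 1):
            assert Q_first(n, p, d) == Q_second(n, p, d)
    # Q[2,0] = -2 (d-1)(d+2): the prefactor 1/(2(d-1)(d+2)) in Eq. (Dnp) then
    # yields Delta[2,0] = -zeta, -eps/2, -eta, i.e. Delta[2,0] = Delta_omega - 2.
    assert Q_second(2, 0, d) == -2 * (d - 1) * (d + 2)
```

This is only an arithmetic cross-check of the quoted one-loop expressions, not a substitute for the Schwinger-equation proof mentioned above.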
For the rapid-change regime (\[RC1\]), the dimension $\Delta[n,p]$ takes on the form $$\Delta[n,p] = \frac{-\zeta}{(d+2)} \left[ \frac{(n-p)(d+n+p)}{2} + \frac{p(p-1)(\alpha-1)+\alpha(n-p)(n+p-2)}{(d-1+\alpha)} \right] + O(\zeta^{2}), \label{Dnp2}$$ in agreement with the $p=0$ results obtained in Refs. [@tracer] for the ‘tracer’ model and earlier in Ref. [@VM] for $d=1$. In the general case (\[Fin\]), an additional superficial UV divergence emerges in the 1-irreducible Green function $\langle\theta'\theta v\rangle_{\rm 1-ir}$, and a second independent renormalization constant should be introduced as the coefficient of the new counterterm $\theta'({\bf v}\partt)\theta$. This case requires special analysis and will be discussed elsewhere[^2] (in particular, the nontrivial fixed point moves to infinity for the purely potential frozen velocity field; cf. [@walks1; @walks3] for random walks in random environment). Operator product expansion and the anomalous scaling for the structure functions and other correlators {#sec:OPE} ====================================================================================================== The representation (\[RGR\]), with some scaling function $\xi(mr)$, describes the behavior of the Green function for $\Lambda r\gg1$ and any fixed value of $mr$. The inertial-convective range corresponds to the additional condition $mr\ll1$. The form of the function $\xi(mr)$ is not determined by the RG equations themselves; in the theory of critical phenomena, its behavior for $mr\to0$ is studied using the well-known Wilson operator product expansion (OPE); see, e.g., [@Zinn; @book3; @Collins]. This technique is also applicable to the theory of turbulence; see [@RG; @RG1; @LOMI; @JETP; @UFN; @turbo].
According to the OPE, the equal-time product $F_{1}(x)F_{2}(x')$ of two renormalized operators at ${\bf x}\equiv ({\bf x} + {\bf x'} )/2 = {\const}$ and ${\bf r}\equiv {\bf x} - {\bf x'}\to 0$ has the representation $$F_{1}(x)F_{2}(x')=\sum_{\alpha}C_{\alpha} ({\bf r}) F_{\alpha}({\bf x,t}) , \label{OPE}$$ where the functions $C_{\alpha}$ are the Wilson coefficients, regular in $m^{2}$, and $ F_{\alpha}$ are all possible renormalized local composite operators allowed by symmetry, with definite critical dimensions $\Delta_{\alpha}$. The renormalized correlator $\langle F_{1}(x)F_{2}(x') \rangle$ is obtained by averaging Eq. (\[OPE\]) with the weight $\exp S_{\rm ren}$; the quantities $\langle F_{\alpha}\rangle$ then appear on the right-hand side. Their asymptotic behavior for $m\to0$ is found from the corresponding RG equations and has the form $\langle F_{\alpha}\rangle \propto m^{\Delta_{\alpha}}$. From the operator product expansion (\[OPE\]) we therefore find the following expression for the scaling function $\xi(mr)$ in the representation (\[RGR\]) for the correlator $\langle F_{1}(x)F_{2}(x') \rangle$: $$\xi(mr)=\sum_{\alpha}A_{\alpha}\,(mr)^{\Delta_{\alpha}}, \label{OR}$$ where the coefficients $A_{\alpha}=A_{\alpha}(mr)$ are regular in $(mr)^{2}$. In the models of critical phenomena, the leading contribution to representations like (\[OR\]) comes from the simplest operator $F=1$ with the minimal dimension $\Delta_{\alpha}=0$, while the other operators determine only corrections that vanish for $mr\to0$. A feature characteristic of the turbulence models is the existence of the so-called “dangerous” composite operators with negative critical dimensions [@RG; @RG1; @LOMI; @JETP; @UFN; @turbo]. Their contributions to the operator product expansions determine the IR behavior of the scaling functions and lead to their singular dependence on $m$ for $mr\to0$.
If the spectrum of the dimensions $\Delta_{\alpha}$ for a given scaling function is bounded from below, the leading term of its behavior for $mr\to0$ is simply given by the contribution with the minimal dimension. This is the case for the rapid-change model (see [@RG; @RG1]) and, as we shall see below, for our model (\[action\]). \[The exception is provided by the non-rapid-change regimes when the values of the exponents $\eps$, $\eta$ are large enough; this case is discussed in the following section.\] Consider for definiteness the equal-time structure functions of the scalar field: $$S_{n}(r)\equiv\langle[\theta(t,{\bf x})-\theta(t,{\bf x'})]^{n}\rangle, \quad r\equiv|{\bf x}-{\bf x'}| . \label{struc}$$ For the functions (\[struc\]), the representation of the form (\[RGR\]) is valid with the dimensions $d_{G}^{\omega}=0$ and $d_{G}=\Delta_{G}=n \Delta_{\theta}=-n$. In general, the operators entering into the operator product expansions are those which appear in the corresponding Taylor expansions, and also all possible operators that admix to them in renormalization. The leading term of the Taylor expansion for the function (\[struc\]) is given by the $n$-th rank tensor $F[n,n]$ from Eq. (\[Fnp\]). The decomposition of $F[n,n]$ into irreducible tensors gives rise to the dimensions $\Delta[n,p]$ with all possible values of $p$; the admixture of junior operators gives rise to all the dimensions $\Delta[k,p]$ with $k<n$. Therefore, the asymptotic expression for the structure function has the form $$S_{n}(r) \simeq (hr)^n\, \sum_{k=0}^{n} \sum_{p=p_{k}}^{k} \left[C_{kp}\, (mr)^{\Delta[k,p]}+\cdots\right]. \label{struc2}$$ Here and below $p_{k}$ denotes the minimal possible value of $p$ for given $k$, i.e., $p_{k}=0$ for $k$ even and $p_{k}=1$ for $k$ odd; $C_{kp}$ are numerical coefficients dependent on $\eps$, $\eta$, $d$, and on the angle between the vectors ${\bf h}$ and ${\bf r}$. Some remarks are in order. The dots in Eq.
(\[struc2\]) stand for the contributions of order $(mr)^{2+O(\eps)}$ and higher, which arise from the senior operators, for example, $\partial^{2}\theta\partial^{2}\theta$ or ${\bf v}^2$. In the original Kraichnan model, only scalar operators contribute to representations like (\[struc2\]), because the mean values $\langle F_{\alpha}\rangle$ of all the other irreducible tensors vanish owing to the isotropy; see [@RG; @RG1]. In the model (\[action\]), the traceless irreducible tensors acquire nonzero mean values, and their dimensions appear on the right-hand side of Eq. (\[struc2\]). In particular, the mean value of the operator $\partial_{i}\theta\partial_{j} \theta-\delta_{ij} (\partial_{s}\theta \partial_{s}\theta)/d$ is proportional to the traceless tensor of the form $\delta_{ij}(h_{s}h_{s})/d-h_{i}h_{j}$; its tensor indices are contracted with the indices of the corresponding coefficient $C_{\alpha}$ in Eq. (\[OPE\]). The operators $F[k,p]$ with $k>n$ (whose contributions would be even more important) do not appear in Eq. (\[struc2\]), because they do not appear in the Taylor expansion of the function $S_{n}$ and do not admix in renormalization to the terms of that expansion. The leading term of the expression (\[struc2\]) for $mr\to0$ is obviously given by the contribution with the minimal possible dimension. A straightforward analysis of the explicit one-loop expression (\[Dnp\]) shows that for fixed $n$, any $d\ge2$, and any nontrivial fixed point, the dimension $\Delta[n,p]$ increases monotonically with $p$ and therefore reaches its minimum at the minimal possible value of $p$, i.e., $p=0$ if $n$ is even and $p=1$ if $n$ is odd. Furthermore, this minimal value $\Delta[n,p_{n}]$ decreases monotonically as $n$ increases, i.e., $$\Delta[2k,0]>\Delta[2k+1,1]>\Delta[2k+2,0].$$ \[A similar hierarchy has been established recently in Ref.
[@hi] for the magnetic field advected passively by the rapid-change velocity in the presence of large-scale anisotropy.\] Therefore, the desired leading term for the even (odd) structure function $S_n$ is determined by the scalar (vector) composite operator consisting of $n$ factors $\partial\theta$ and has the form $$S_{n}(r) \propto (hr)^n\, (mr)^{\Delta[n,p_{n}]} \label{struc3}$$ with the dimension $\Delta[n,p]$ given in Eq. (\[Dnp\]). For the rapid-change fixed point and even values of $n$, the total power of $r$ in Eq. (\[struc3\]) coincides with the exponent in the original isotropic Kraichnan model, calculated to the order $O(\zeta)$ in [@BGK] and $O(1/d)$ in [@Falk2] within the zero-mode approach, and to the order $O(\zeta^{2})$ in [@RG] within the framework of the RG. We also note that the anomalous dimensions associated with the operators $F[2k,2]$ were calculated in [@RG] to the order $O(\zeta^{2})$; the exact dimension of the operator $F[2,2]$ was found in [@Falk1]. \[It should be noted that the [*decomposition*]{} of the total exponent in Eq. (\[struc3\]) into the critical dimension of the composite operator and the critical dimension of the structure function itself differs from the analogous decomposition for Kraichnan’s model, as a result of the difference in canonical dimensions; see Sec. \[sec:RG\]\]. The result (\[struc3\]) for the third-order structure function in the rapid-change model coincides with the $O(\zeta)$ result obtained in [@Pumir2] within the zero-mode technique; see also the earlier paper [@Pumir] for the three-dimensional result. We note that the exponents $-7\zeta/5$ and $3\zeta/5$ from [@Pumir] should be identified with the anomalous dimensions $\Delta[3,1]$ and $\Delta[3,3]$, respectively. The result (\[struc3\]) for $n=3$ is also in agreement with the $O(1/d)$ result obtained in [@Gut], with the identification $\gamma+1-\Delta=3+\Delta[3,1]$. 
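Since $\Delta[n,p]$ in Eq. (\[Dnp\]) is $Q[n,p]$ times a positive, $p$-independent factor at each nontrivial fixed point, the hierarchy described above can be confirmed directly on the polynomial $Q[n,p]$ from Eq. (\[Qnp\]); a quick numerical sketch over a range of $n$ and $d$:

```python
# One-loop hierarchy check based on Eq. (Dnp): Delta[n,p] is Q[n,p] times a
# positive factor, so it suffices to compare
#   Q[n,p] = 2 p (p-1) - (d-1)(n-p)(d+n+p)
# from Eq. (Qnp). We verify (i) that Q[n,p] grows with p at fixed n, so the
# minimum sits at the minimal possible p = p_n, and (ii) the hierarchy
# Q[2k,0] > Q[2k+1,1] > Q[2k+2,0] of the leading (minimal) values.

def Q(n, p, d):
    return 2 * p * (p - 1) - (d - 1) * (n - p) * (d + n + p)

for d in range(2, 8):
    for n in range(2, 12):
        for p in range(n):                    # (i) monotonic growth with p
            assert Q(n, p, d) < Q(n, p + 1, d)
    for k in range(1, 6):                     # (ii) hierarchy in n
        assert Q(2 * k, 0, d) > Q(2 * k + 1, 1, d) > Q(2 * k + 2, 0, d)
```

This only probes the first-order expression; as noted above, the hierarchy is expected, but not proven here, to persist beyond one loop.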
For the case of the frozen velocity field (FPIV), the first-order results for the even structure functions were presented in [@RG]. We also note that they satisfy the exact inequalities obtained for the time-independent case in [@Eyink]. The analysis given above is extended directly to the case of other correlation functions. For example, the analog of the expression (\[struc3\]) for the equal-time pair correlation function of the operators (\[Fnp\]) has the form $$\langle F[n,p]\, F[n',p']\rangle \simeq h^{n+n'}\, (\Lambda r)^{-\Delta[n,p_{n}]-\Delta[n',p_{n'}]} (mr)^{\Delta[n+n',\, p_{n+n'}]} \, . \label{struc4}$$ Some special cases of the relation (\[struc4\]) for the rapid-change model were obtained earlier in Refs. [@Falk1; @Falk2; @GK; @BGK; @RG]. Another interesting example is the equal-time pair correlator $\langle\theta^{n}(t,{\bf x})\theta^{k}(t,{\bf x'})\rangle$. Substituting the relations $d^{\omega}_{G}=0$ and $d_{G}=\Delta_{G}=-(n+k)$ into the general expression (\[RGR\]) gives $\langle\theta^{n}\theta^{k}\rangle=r^{n+k}\xi(mr)$, and the small $mr$ behavior of the scaling function $\xi(mr)$ is found from Eq. (\[OR\]) (here and below, we do not display the obvious dependence on $h$). In contrast to the previous example, composite operators in the expansion (\[OPE\]) can involve the field $\theta$ [*without derivatives*]{}. The leading term in Eq. (\[OR\]) is then given simply by the operator $\theta^{n+k}$ with $\Delta_{F}=-(n+k)$, while the first correction is related to the monomial $(\partial_{i}\theta\partial_{i}\theta)\theta^{n+k-2}$ whose critical dimension is easily found to be $\Delta_{F}= -(n+k)+\Delta_{\omega}$ with $\Delta_{\omega}$ from Eq. (\[DeltaOmega\]). 
Therefore, in the inertial range our correlator has the form $\langle\theta^{n}\theta^{k}\rangle\simeq c_{1} m^{-(n+k)}-c_{2} m^{-(n+k)}(mr)^{\Delta_{\omega}}+\dots$, a large constant minus a powerlike correction (the signs of the constants $c_{i}$ are explained by the fact that the correlator is positive and decreases as $r$ grows). In the structure functions (\[struc\]) all the contributions related to operators containing fields without derivatives cancel out to give the behavior (\[struc2\]), determined by the operators constructed only of field derivatives. Finally, we note that the hierarchy of critical dimensions $\Delta[n,p]$, established in Sec. \[sec:Operators\], persists also for the nonsolenoidal velocity field; see Eq. (\[Dnp2\]). Therefore, the asymptotic expressions like (\[struc2\]), (\[struc3\]) and (\[struc4\]) remain valid for the compressible case (a ‘tracer’ in the terminology of [@tracer]) with the exponents $\Delta[n,p]$ given in Eq. (\[Dnp2\]). Summation of the dangerous contributions from the powers of the velocity field {#sec:summ} ============================================================================== A new and interesting problem emerges as the parameters $\eps$ and $\eta$ increase and the velocity field itself becomes dangerous; see Eq. (\[DimFi\]). Owing to the Gaussianity, all its powers then also become dangerous, with the dimensions $\Delta[v_{i_1}\cdots v_{i_n}]=n\Delta_{\bf v}$. The analysis of the diagrams shows that in the rapid-change regime these operators do not contribute to the operator product expansions of the equal-time correlators like (\[struc\]) or (\[struc4\]), but the contributions of the scalars $({\bf v}^2)^n$ do appear in these expansions for the non-rapid-change regimes. The spectrum of their dimensions is unbounded from below, and in order to find the small $mr$ behavior we have to sum up all their contributions $\propto (mr)^{n\Delta[{\bf v}]}$ in the representation (\[OR\]).
We have employed the infrared perturbation theory in the form developed in [@LOMI; @JETP] to perform the required summation for the structure function $S_2$ in the frozen regime and within the one-loop approximation for the Wilson coefficients. In this case, the velocity becomes dangerous for $\eps>2$ (the region FPIV[*b*]{} in Fig. I). The function $S_2$ is represented in the form $$S_{2} = \int \D\Phi\, [\theta(t,{\bf x})-\theta(t,{\bf x'})]^{2} \, \exp S(\Phi) \label{sum1}$$ with the action functional from Eq. (\[action\]). Following [@LOMI; @JETP], we split the velocity field in Eq. (\[sum1\]) into two components, ${\bf v}(x) = {\bf v}_{<} (x) +{\bf v}_{>} (x)$, referring to the “soft” component, ${\bf v}_{<}$, all the Fourier modes with $k\le k_{*}$, and to the “hard” component, ${\bf v}_{>}$, all the remaining modes with $k> k_{*}$. Here $k_{*}$ is a fixed arbitrary separating scale, which will not enter into the final expressions. Since we are interested only in the contributions of the operators $({\bf v}^2)^n$ to the OPE, we can neglect the spacetime inhomogeneity of the soft field. It then becomes a random variable (rather than a random field) with the statistics determined by the relation $$\langle{\bf v}_{<}\cdots{\bf v}_{<} \rangle\equiv \langle{\bf v} (x) \cdots{\bf v}(x)\rangle. \label{sum2}$$ Furthermore, we confine ourselves to the one-loop approximation for the corresponding Wilson coefficients, so that we can omit the contribution of the hard field in the vertex $\theta'({\bf v}\partt)\theta$. Then the integration over the fields $\theta$, $\theta'$, and ${\bf v}_{>}$ in Eq. (\[sum1\]) gives: $$S_{2} =2 \int \frac{d{\bf k}}{(2\pi)^{d}} \int \frac{d\omega}{(2\pi)} \biggl[ 1-\exp({\rm i}{\bf k}\cdot{\bf r}) \biggr] \frac{P({\bf k}) D_{v}(\omega,{\bf k})} {(\omega-{\bf v}_{<}\cdot{\bf k})^{2}+\nu_0^{2} k^{4}}\, , \label{sum3}$$ where $P({\bf k})\equiv h_{i}h_{j}P_{ij}({\bf k})$, the correlator $D_{v}$ is given by Eq.
(\[Fin\]), and the averaging over ${\bf v}_{<}$ with the statistics (\[sum2\]) is to be performed. The mean values (\[sum2\]) in Eq. (\[sum3\]) correspond to the contributions from $\langle{\bf v}(x)\cdots{\bf v}(x)\rangle$ in the representation (\[OR\]) for $S_{2}$. For the rapid-change model, the correlator $D_{v}$ is independent of the frequency; see Eq. (\[RC1\]). It then follows from the expression (\[sum3\]) that the dependence on ${\bf v}_{<}$ vanishes after the integration over $\omega$, which means that the operators $({\bf v}^2)^n$ give no contribution to the OPE for $S_{2}$. Now let us turn to the case of the frozen velocity field. We substitute the correlator (\[RC2\]) into Eq. (\[sum3\]) and perform the integration over $\omega$; this gives: $$S_{2} = g_{0}''\nu_0^{2}\,\int \frac{d{\bf k}}{(2\pi)^{d}} \biggl[ 1-\exp({\rm i}{\bf k}\cdot{\bf r}) \biggr] \frac{P({\bf k})\,(k^{2}+m^{2})^{-d/2+1-\eps/2} } {({\bf v}_{<}\cdot{\bf k})^{2}+\nu_0^{2} k^{4}}. \label{sum4}$$ A brief digression is required here. We are interested in the small $mr$ behavior of the scaling function $\xi(mr)$ from Eq. (\[OR\]), so that we have to combine the expression (\[sum4\]) with the representation (\[RGR\]) for the function $S_{2}$. In renormalized variables, the latter is written in the form $S_{2}=(hr)^{2} f(\mu r,g'', u, mr)$, where $f$ is some function of completely dimensionless arguments. The function $\xi(mr)$ is then given by the relation $\xi(mr)=f(1, g_{*}'', 0, mr)$ with $g_{*}''$ from Eq. (\[FPIV\]) (we recall that for the frozen regime, $u_{*}=0$). The renormalization of Eq. (\[sum4\]) in our approximation reduces to the replacement $g_{0}''\to g''\mu^{\eps}$, $\nu_0\to \nu$, and the changeover to the scaling function is given by the substitution $g''\to g_{*}''$, $\mu\to 1/r$. From now on, all these substitutions are implied. The expansion of the denominator in Eq. 
(\[sum4\]) in $({\bf v}_{<}\cdot{\bf k})^{2}$ gives: $$\frac{1}{({\bf v}_{<}\cdot{\bf k})^{2}+\nu^{2} k^{4}} = \frac{1}{\nu^{2} k^{4}} \sum_{n=0}^{\infty} (-1)^{n}\, \frac{({\bf v}_{<}\cdot{\bf k})^{2n}}{\nu^{2n} k^{4n}}. \label{sum5}$$ It follows from Eqs. (\[RC2\]) and (\[sum2\]) that the correlators of the soft field have the form $$\langle (v_{<})_{i_{1}}\cdots (v_{<})_{i_{2n}} \rangle = D\, [\delta_{i_{1}i_{2}}\delta_{i_{3}i_{4}}\cdots\delta_{i_{2n-1}i_{2n}}+ {\rm all\ possible\ permutations}\,] , \label{sum6}$$ where $$D\simeq r^{\eps} \nu^{2}\,\int \frac{d{\bf k}}{(2\pi)^{d}}\, (k^{2}+m^{2})^{-d/2+1-\eps/2} \simeq \nu^{2}\,m^{2} (mr)^{-\eps} \label{sum7}$$ (here and below $\simeq$ denotes equality up to a numerical factor). Strictly speaking, the integral (\[sum7\]) should be cut off from above at $k\sim k_{*}$. For $\eps>1$, the cut-off can be removed without changing the singularity at $m\to0$. The averaging of Eq. (\[sum5\]) over ${\bf v}_{<}$ gives: $$\frac{1}{\nu^{2}k^{4}}\sum_{n=0}^{\infty} \frac{(2n)!}{n!}\, (-z)^{n}, \quad z\equiv \frac {D} {2k^{2}\nu^2}\simeq \frac {m^{2}(mr)^{-\eps}}{k^{2}} \label{sum8}$$ \[we note that $(2n)!/2^{n}n!=(2n-1)!!$ is the number of terms in Eq. (\[sum6\])\]. The small $mr$ limit implies $z\to\infty$. The large $z$ behavior of the series in Eq. (\[sum8\]) is found from the following integral representation: $$\sum_{n=0}^{\infty} \frac{(2n)!}{n!} (-z)^{n}= \int^{\infty}_{0} dt\, \exp(-zt^{2}-t) = \sqrt{\pi/4z} \left[1+O(1/\sqrt{z}) \right]. \label{sum9}$$ Substituting Eq. (\[sum9\]) into Eq. (\[sum4\]) gives: $$S_{2}\simeq m^{-2} (mr)^{-\eps/2} \int d {\bf y} \biggl[ 1-\exp({\rm i}{\bf y}\cdot m{\bf r}) \biggr] P({\bf y}) y^{-3} (1+y^{2})^{-d/2+1-\eps/2}, \label{sum10}$$ where we have changed to the dimensionless integration variable ${\bf y}$ defined so that ${\bf k}=m{\bf y}$. 
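Two of the steps above are easy to verify numerically: (i) for a frequency-independent $D_{v}$, the $\omega$ integral in Eq. (\[sum3\]) is a shifted Lorentzian, $\int d\omega\,[(\omega-a)^{2}+b^{2}]^{-1}=\pi/b$, independent of the shift $a={\bf v}_{<}\cdot{\bf k}$, which is why the operators $({\bf v}^2)^n$ drop out in the rapid-change case; (ii) completing the square in the exponent of Eq. (\[sum9\]) gives the closed form $\sqrt{\pi}\,s\,{\rm e}^{s^{2}}{\rm erfc}(s)$ with $s=1/2\sqrt{z}$, which tends to $\sqrt{\pi/4z}$ as $z\to\infty$. A sketch of both checks (all numerical values and quadrature parameters are illustrative):

```python
import math

def lorentzian_integral(shift, width, cutoff=1000.0, steps=400_000):
    """Midpoint rule for the integral of 1/((w - shift)^2 + width^2) over |w| < cutoff."""
    h = 2.0 * cutoff / steps
    return sum(h / ((-cutoff + (i + 0.5) * h - shift) ** 2 + width ** 2)
               for i in range(steps))

# (i) the shift v_< . k does not affect the frequency integral
width = 2.0                                   # plays the role of nu k^2
assert abs(lorentzian_integral(0.0, width) - math.pi / width) < 1e-2
assert abs(lorentzian_integral(0.0, width) - lorentzian_integral(3.0, width)) < 1e-3

def closed_form(z):
    """exp(-z t^2 - t) integrated over t > 0, via completing the square."""
    s = 0.5 / math.sqrt(z)
    return math.sqrt(math.pi) * s * math.exp(s * s) * math.erfc(s)

def midpoint_quad(z, tmax=20.0, steps=200_000):
    """Direct midpoint-rule evaluation of the same integral."""
    h = tmax / steps
    return sum(math.exp(-z * ((i + 0.5) * h) ** 2 - (i + 0.5) * h)
               for i in range(steps)) * h

# (ii) the closed form matches direct quadrature and the large-z asymptote sqrt(pi/4z)
assert abs(closed_form(2.0) - midpoint_quad(2.0)) < 1e-6
assert abs(closed_form(1e6) / math.sqrt(math.pi / 4e6) - 1.0) < 1e-3
```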
Expansion of the quantity in the square brackets for small $mr$ gives $$S_{2}\simeq (mr)^{-\eps/2} r_{i}r_{j} \int d {\bf y} P({\bf y}) y_{i}y_{j} y^{-3} (1+y^{2})^{-d/2+1-\eps/2} \label{sum11}$$ \[the first term of the expansion gives no contribution to Eq. (\[sum11\]), owing to the evenness of the rest of the integrand\]. The vector indices can be isolated from the integral (\[sum11\]); this gives rise to the two structures, $\delta_{ij}h^{2}$ and $h_{i}h_{j}$, with the scalar coefficients proportional to the integral $\int d {\bf y} y^{-1} (1+y^{2})^{-d/2+1-\eps/2}$. One can easily check that for $\eps+\eta>1$, it is both IR and UV convergent, so that the leading terms of the small $mr$ behavior of the integral (\[sum10\]) are indeed obtained simply by the expansion of the integrand and have the form $h^{2}\,r^{2}\,(mr)^{-\eps/2}$ and $({\bf h}\cdot{\bf r})^{2}(mr)^{-\eps/2}$. Therefore, it turns out that the contributions of the operators $({\bf v}^2)^n$ sum up to the powerlike expression $r^2 (mr)^{-\eps/2}$. In other words, the infinite sum of these dangerous operators gives the function $S_2$ a contribution of the same form as the single operator $F[2,0]$, and therefore the IR behavior of $S_2$ is given by the same expression (\[struc3\]) for all values of the exponent $\eps$ \[of course, the corresponding amplitudes for $\eps>2$ acquire an additional contribution from the operators $({\bf v}^2)^n$\]. The infinite family of the dangerous operators $({\bf v}^2)^n$ also occurs in the RG approach to the stochastic NS equation; see [@LOMI; @JETP; @UFN; @turbo]. In that case, their summation in the OPE for different-time correlators leads to expressions analogous to those of [@sweep], which describe the well-known sweeping effects. The contributions of these operators vanish when one changes to the equal-time Galilean invariant functions, for example, the structure functions of the velocity field. 
In this connection, it should be stressed that the appearance of the operators $({\bf v}^2)^n$ in the structure functions of the model (\[action\]) is not an artifact of the synthetic velocity statistics. One can check that in the presence of a mean gradient, the same effect occurs even when the velocity is generated by the Galilean covariant stochastic NS equation. On the contrary, if the effective scalar noise in the diffusion equation is proportional to the delta function in time (as in the original Kraichnan model), the operators $({\bf v}^2)^n$ are absent from the equal-time correlators whatever the velocity statistics. One can probably consider this effect as an additional source of the breakdown of the Kolmogorov–Obukhov theory for the passive scalar advection. The summation given above can be viewed as a possible model of the origin of the anomalous scaling in the structure functions of the velocity field: it was argued in [@AV] that the singular dependence on $mr$ of the equal-time correlators for the stochastic NS equation can be related to [*infinite*]{} families of dangerous operators. The summation can also be performed in a more formal way, without referring to the infrared perturbation theory, by using the operator product expansion for the quantity $[\theta(t,{\bf x})- \theta(t,{\bf x'})]^{2}$ itself, rather than only for its average (\[sum1\]). It is also possible to take into account all the composite operators constructed of the velocity and its time derivatives; see [@Kim]. “Exotic” scaling regimes {#sec:Exo} ======================== So far, we have considered the standard formulation of the asymptotic problem, for which the dimensional bare charges entering into the velocity correlator (\[Fin\]) depend on the UV scale only; see Eq. (\[gg\]). In a number of papers, e.g., [@synth; @Pumir2; @ShS; @ShS2; @PDF; @Falk3; @Eyink], a different version of the problem was considered, in which the velocity correlator depends also on the IR scale $L\equiv m^{-1}$. 
The RG method is also applicable to this “exotic” (from the viewpoint of the theory of critical behavior) situation. No additional calculation of the fixed points or anomalous exponents is required; only the regions of stability of the fixed points FPI–FPV can change.[^3] As already said in Sec. \[sec:Fixed\], solutions of the RG equation (\[RG1\]) can be represented in terms of the invariant charges $\bar g$, $\bar u$. The relations (\[exo1\]) determine implicitly the invariant charges as functions of the bare parameters; the former depend on the scales $\Lambda$, $m$ only through their dependence on the latter. The solution of the equations (\[exo1\]) for any given $g_{0}$, $u_{0}$ would show which of the fixed points is “chosen by the RG flow.” However, the complete solution of this problem is too difficult to achieve; the main obstacle is that the renormalization constants in Eq. (\[exo1\]) must be taken in the one-loop order of the RG, while we know them only in the one-loop order of the ordinary perturbation theory; see Eq. (\[Z\]). In other words, we know the renormalization constants only up to the terms of order $O(g)$, while here we would need to take into account all the terms of the form $(g/\eps)^{n}$ etc. We shall not try to solve the problem completely; rather, we give a simple recipe that allows one to distinguish between the [*nontrivial*]{} scaling regimes. The renormalization constants can be eliminated from Eq. (\[exo1\]) using the relation (\[svaz\]); this gives $$\chi(k)\equiv\frac{\bar g}{\bar u^{3}} = \frac{g_{0}}{u^{3}_{0}}\, k^{\eta-\zeta}, \label{exo2}$$ with $\zeta\equiv \eps-\eta$, see Eq. (\[RC1\]). Of course, the identity (\[exo2\]) contains less information than the full system (\[exo1\]); in particular, we cannot judge from Eq. (\[exo2\]) whether or not the invariant charges in the IR regime are attracted by one of the nontrivial fixed points FPII, FPIV or FPV. 
Fortunately, if we know (or assume) that this is the case, we can definitely say which point is involved. Indeed, for FPII we have $\bar u\to\infty$, $\bar g/\bar u^{2}=\const$ (see Sec. \[sec:Fixed\]) and therefore $\chi(k) \to 0$ at this fixed point. For FPIV ($\bar u\to0$, $\bar g/\bar u=\const$) we have $\chi(k) \to \infty$ and for FPV the function $\chi(k)$ remains $O(1)$. We also note that for this case $\chi(k)$ is independent of $k$ owing to the equality $\zeta=\eta$. Therefore, taking the limit $k\to0$ in (\[exo2\]) leads to the exact relation $g_{0}/u^{3}_{0}=g_{*}/u^{3}_{*}$ for FPV, which was used in Sec. \[sec:Fixed\] when deriving Eq. (\[roots\]). For the standard regime (\[gg\]) it follows from Eq. (\[exo2\]) that $\chi(k) \propto (k/\Lambda)^{\eta-\zeta}$. Then the above considerations show that in the IR asymptotic region, $k<<\Lambda $, the invariant charges approach the point FPII for $\zeta-\eta<0$, FPIV for $\zeta-\eta>0$, and FPV for $\zeta=\eta$, in agreement with the analysis given in Sec. \[sec:Fixed\]. Now let us turn to the example of the “exotic” regime for which the characteristic frequency of the velocity field depends only on the IR scale, $\omega\propto {(Wk^{2})}^{1/3} (k/m)^{4/3-\eta}$ \[the dependence on the mean energy dissipation rate, $W$, is implied\], while the equal-time second-order structure function of the velocity is expressed only through the UV scale, $S_{2}(r)\propto (Wr)^{2/3} (\Lambda r)^{-8/3+\zeta+\eta}$. The comparison with Eq. (\[Fin\]) gives $u_{0}\nu_0 \propto {W}^{1/3} (k/m)^{4/3-\eta}$ and $g_{0}\nu_0^{2}/u_{0} \propto {W}^{2/3} \Lambda ^{-8/3+\zeta+\eta}$. From Eq. (\[exo2\]) we then obtain $$\chi(k)\propto (\Lambda /k)^{\alpha} \, (k/m)^{\beta}\, , \label{exo3}$$ with the exponents $$\alpha\equiv -8/3+\zeta+\eta, \qquad \beta \equiv -8/3 +2\eta. \label{exo4}$$ It follows from Eqs. 
(\[exo3\]), (\[exo4\]) that the regions of stability of the fixed points have changed: the point FPII is approached if $\alpha$, $\beta<0$, FPIV is approached if $\alpha$, $\beta>0$, and the nonuniversal regime FPV takes place for the unique choice $\alpha=\beta=0$, which exactly corresponds to the Kolmogorov velocity correlator ($\zeta=\eta=4/3$). The most interesting scaling regime arises when the signs of the exponents $\alpha$ and $\beta$ are opposite, say, $\alpha>0$, $\beta<0$. Then near the upper bound of the inertial range, $k>>m$, $\Lambda \sim k$, we have $\chi(k) <<1$, and the critical dimensions are given by the rapid-change fixed point FPII; near the lower bound of the inertial range, $\Lambda >>k$, $k\sim m$, we have $\chi(k) >>1$, and the “frozen” point FPIV works. Therefore, the same critical regime is described by two different fixed points and, consequently, two sets of critical dimensions! It is tempting to relate this fact to the effect observed experimentally in the boundary layer: the second-order structure function exhibits two power-law regimes with a pronounced break in the exponent [@break]. Of course, one should not insist too much on this interpretation. Conclusion {#sec:Con} ========== We have applied the RG and OPE methods to the simple model describing the advection of a passive scalar by a synthetic velocity field in the presence of an imposed linear mean gradient. The statistics of the velocity is Gaussian, with a given self-similar correlator with finite correlation time. We have shown that the model possesses the RG symmetry, and the corresponding RG equations have several fixed points. 
As a result, the correlation functions of the scalar field in the inertial-convective range exhibit various types of scaling behavior: diffusive-type regimes, for which the advection can be treated within the ordinary perturbation theory, and three nontrivial convection-type regimes, for which the correlation functions of the model reveal anomalous scaling behavior. The stability of the fixed points (and, therefore, the choice of the scaling regime) depends on the values of the two exponents $\eps$ and $\eta$ entering into the velocity correlator. The explicit asymptotic expressions for the structure functions and other correlation functions in any space dimension are obtained; the anomalous exponents are calculated to the first order of the corresponding $\eps$ expansions. For the first nontrivial regime the anomalous exponents are the same as in the rapid-change version of the model; for the second they are the same as in the model with time-independent (frozen) velocity field. In these regimes, the anomalous exponents are universal in the sense that they depend only on the exponents entering into the velocity correlator; what is more, they depend only on the single exponent ($\zeta\equiv\eps-\eta$ or $\eps$, respectively) that remains in the corresponding limit. For the last regime the exponents are nonuniversal: in principle, they depend also on the values of the coupling constants. It turns out, however, that they can reveal the nonuniversality only in the second order of the $\eps$ expansion. A serious question is the validity of the $\eps$ expansion and the possibility of extrapolating the results, obtained within the $\eps$ expansions, to finite values $\eps=O(1)$. In Refs. [@Gat] and [@Pumir2], the agreement between the nonperturbative results and the $\eps$ expansion has been established for the example of the triple correlation function in the rapid-change model. In particular, in [@Pumir2] the exponent $\Delta[3,1]$ (we use the notation introduced in Sec. 
\[sec:Operators\]) has been calculated numerically for all $0\le\zeta\le2$ within the zero-mode approach. It was shown that for small $\zeta$, this nonperturbative result agrees with the expansion in $\zeta$, while for $\zeta=2$ it coincides with the exact analytic result $\Delta[3,1]=-2$ obtained previously in [@SSP]. In the paper [@VM], the one-dimensional version of the rapid-change model has been studied both numerically and analytically within the zero-mode approach; the analytic expressions for the anomalous exponents obtained within the $\eps$ expansion have also been found to agree with the nonperturbative numerical results. Finally, in Ref. [@VMF] the analytic $O(\eps)$ result has been confirmed by a numerical experiment for the example of the fourth-order structure function in three dimensions. In this connection, we also note that a number of exact analytic results appear to be in agreement with the corresponding $\eps$ expansions: the exponent $\Delta[2,2]$, calculated exactly in [@Falk1], the exponent for the second-order structure function of a passively advected magnetic field [@MHD], and the second-order exponent for a scalar advected by the nonsolenoidal (“compressible”) velocity field [@RG1]; the corresponding expansions in $\zeta$ (to the order $\zeta^{2}$) have been calculated within the RG approach in [@RG; @RG1]. These facts strongly support the applicability of the $\eps$ expansion, at least for low-order correlation functions. In the paper [@Kraich2], a closure-type approximation for the rapid-change model, the so-called linear ansatz, was used to derive a simple explicit expression for the anomalous exponents for any $0\le\zeta\le2$, $d$, and order $n$ of the structure function. For $\zeta=1$, the predictions of the linear ansatz appear to be consistent with the numerical simulations [@Kraich3; @Galanti; @VMF; @Zeitak]; they are also in agreement with some exact relations [@Kraich4; @Lvov; @Galanti]. 
However, they do not agree with the results obtained within the zero-mode and RG approaches in the ranges of small $\zeta$, $2-\zeta$ or $1/d$. In fact, there is no [*formal*]{} contradiction between the perturbative results and the linear ansatz: the violation of the latter in the aforementioned limits can be related to the strongly nonlocal character of the dynamics in momentum space in those limits; see, e.g., the discussion in Refs. [@Kraich4; @VMF]. On the other hand, the [*numerical*]{} divergence of the predictions given by the linear ansatz and the $\eps$ expansion for the fourth-order structure function at $\zeta \simeq 1$ is, roughly speaking, of the same order of magnitude as the difference between the nonperturbative numerical results and the perturbative small-$\zeta$ results for the triple correlator, as one can see from the figures presented in [@Gat]. One can think that the series in $\zeta$, obtained within the RG or zero-mode approaches, give correct formal expansions of the (unknown) exact exponents, while the linear ansatz gives a good approximate expression for the same quantities near $\zeta=1$. We also note that the numerical agreement between the expansion in $\zeta$ and the exact results is expected to worsen as $n$ increases, because the actual expansion parameter is $n\zeta$ rather than $\zeta$ itself; see [@RG; @RG1]. Of course, it is not impossible that new dangerous operators arise for some finite value of the RG expansion parameter. An example is provided by the frozen regime of the model in question (see Sec. \[sec:summ\]). In principle, this effect can lead to a qualitative changeover in the IR behavior of the correlators as $\eps$ increases, but in our model this is not the case, at least for the second-order structure function. We also note that the first-order expressions for the anomalous exponents in Eq. (\[Dnp\]) look alike for the rapid-change and frozen regimes, but we expect that the analytic properties of the series are different. 
The aforementioned exact results suggest that in the rapid-change models, the series in $\zeta$ have finite radii of convergence. In the language of field theory, this is related to the fact that in the rapid-change models, there is no factorial growth of the number of diagrams in higher orders of the perturbation theory: a great many diagrams vanish owing to retardation and the fact that the velocity correlator contains the delta function in time. This is no longer so for the regimes in which the correlation time remains finite, and we expect that the series in $\eps$ for these regimes are only asymptotic, as in most models of critical behavior. Let us conclude with a brief discussion of the passive advection in the non-Gaussian velocity field governed by the nonlinear stochastic NS equation. In this case, one has to add the nonlinear term $(v_j \partial_{j})v_{i}$ to the left-hand side of the equation (\[NS\]) and set $\eta=0$ in Eq. (\[Fin1\]). The RG approach is also applicable to this model; the analysis of the UV divergences shows that the basic RG functions are the same as for the model with $\h=0$. The RG analysis of the latter has been accomplished in [@AVH]. It shows that the model possesses a nontrivial IR stable fixed point; its coordinate has been calculated in [@AVH] in the first order of the $\eps$ expansion.[^4] The inclusion of a nonzero mean gradient $\h\ne0$ gives rise to anomalous scaling; the analysis given in Sec. \[sec:OPE\] can also be extended to this case. For small $\eps$, the anomalous exponents are given by the relation (\[Q\]), in which one should take $g/u(u+1) = \eps/3a$ at the fixed point, with the coefficient $a$ from Eq. (\[a\]) (we use the notation introduced above; the definition of the parameters $\eps$, $a$, and $u$ in [@AVH] is slightly different). 
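The difference between a convergent and a merely asymptotic series, invoked above, can be made concrete with the zero-radius series from Eq. (\[sum8\]): its partial sums first approach the value of the integral in Eq. (\[sum9\]), with an error of the order of the first omitted term, and then diverge factorially. A numerical sketch (the value $z=0.01$ and the truncation orders are illustrative choices):

```python
import math

def term(n, z):
    """n-th term (2n)!/n! (-z)^n of the zero-radius series in Eq. (sum8)."""
    return math.factorial(2 * n) / math.factorial(n) * (-z) ** n

def borel_value(z):
    """The integral of exp(-z t^2 - t) over t > 0, in closed form via erfc."""
    s = 0.5 / math.sqrt(z)
    return math.sqrt(math.pi) * s * math.exp(s * s) * math.erfc(s)

z = 0.01
s25 = sum(term(n, z) for n in range(26))   # truncation near the smallest term, n ~ 1/4z
assert abs(s25 - borel_value(z)) < 1e-8    # the optimally truncated sum is very accurate
assert abs(term(25, z)) < 1e-9             # terms first shrink to ~1e-11 ...
assert abs(term(70, z)) > 1.0              # ... and then blow up factorially
```

This is the usual picture for asymptotic expansions: the best achievable accuracy at fixed $z$ is exponentially small but nonzero, and adding further terms only makes things worse.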
Despite the non-Gaussianity, the critical dimensions of the powers of the velocity field are given by the simple linear relation $\Delta[v_{i_1}\cdots v_{i_n}]=n\Delta_{\bf v}=n(1-\eps/3)$; see [@LOMI; @JETP; @UFN; @turbo; @She] (in the notation of the papers [@LOMI; @JETP; @UFN; @turbo], $\eps$ should be replaced with $2\eps$). Therefore, all these operators are dangerous for $\eps>3$, and the summation of their contributions is required. For the different-time correlators, it has been accomplished in [@LOMI; @JETP]; for the structure functions it can be performed in the one-loop approximation as in Sec. \[sec:summ\] above and leads to an analogous conclusion: the behavior of the second-order structure function does not change for $\eps>3$. For $\eps>4$, the composite operator of the local energy dissipation rate also becomes dangerous [@Pismak], possibly along with all of its powers [@She]; some other dangerous operators arise for $\eps>6$ and further [@Kim; @further]. The identification of all the other dangerous operators and summation of their contributions in the operator product expansions remains an open problem. I have benefited from discussions with L. Ts. Adzhemyan, M. Hnatič, J. Honkonen, M. Yu. Nalimov and A. N. Vasil’ev. I am thankful to V. S. L’vov, A. Mazzino and M. Vergassola for useful comments on the paper subject. The work was supported by the Russian Foundation for Fundamental Research (Grant No. 99-02-16783) and by the Grant Center for Natural Sciences of the Russian State Committee for Higher Education (Grant No. 97-0-14.1-30). R. Antonia, E. Hopfinger, Y. Gagne and F. Anselmet, Phys. Rev. [**A 30**]{}, 2704 (1984). K. R. Sreenivasan, Proc. Roy. Soc. London A [**434**]{}, 165 (1991). M. Holzer and E. D. Siggia, Phys. Fluids [**6**]{}, 1820 (1994). A. Pumir, Phys. Fluids [**6**]{}, 2118 (1994). C. Tong and Z. Warhaft, Phys. Fluids [**6**]{}, 2165 (1994). T. Elperin, N. Kleeorin, and I. Rogachevskii, Phys. Rev. Lett. 
[**76**]{}, 224 (1996); Phys. Rev E [**52**]{}, 2617 (1995); [**53**]{}, 3431 (1996). M. Avellaneda and A. Majda, Commun. Math. Phys. [**131**]{}, 381 (1990). M. Avellaneda and A. Majda, Commun. Math. Phys. [**146**]{}, 139 (1992); A. Majda, J. Stat. Phys. [**73**]{}, 515 (1993); D. Horntrop and A. Majda, J. Math. Sci. Univ. Tokyo [**1**]{}, 23 (1994). Q. Zhang and J. Glimm, Commun. Math. Phys. [**146**]{}, 217 (1992). R. H. Kraichnan, Phys. Fluids [**11**]{}, 945 (1968). R. H. Kraichnan, Phys. Rev. Lett. [**72**]{}, 1016 (1994). R. H. Kraichnan, V. Yakhot, and S. Chen, Phys. Rev. Lett. [**75**]{}, 240 (1995); V. Yakhot, Phys. Rev. E [**55**]{}, 329 (1997). R. H. Kraichnan, Phys. Rev. Lett. [**78**]{}, 4922 (1997). M. Chertkov, G. Falkovich, I. Kolokolov, and V. Lebedev, Phys. Rev. E [**52**]{}, 4924 (1995). M. Chertkov and G. Falkovich, Phys. Rev. Lett [**76**]{}, 2706 (1996). K. Gawȩdzki and A. Kupiainen, Phys. Rev. Lett. [**75**]{}, 3834 (1995). D. Bernard, K. Gawȩdzki, and A. Kupiainen, Phys. Rev. E [**54**]{}, 2564 (1996). A. Pumir, Europhys. Lett. [**34**]{}, 25 (1996). D. Gutman and E. Balkovsky, Phys. Rev. E [**54**]{}, 4435 (1996). A. Pumir, B. I. Shraiman, and E. D. Siggia, Phys. Rev. E [**55**]{}, R1263 (1997). A. L. Fairhall, O. Gat, V. L’vov, and I. Procaccia, Phys. Rev. E [**53**]{}, 3518 (1996). O. Gat, V. S. L’vov, E. Podivilov, and I. Procaccia, Phys. Rev. E [**55**]{}, R3836 (1997);\ O. Gat, V. S. L’vov, and I. Procaccia, Phys. Rev. E [**56**]{}, 406 (1997). A. L. Fairhall, B. Galanti, V. S. L’vov, and I. Procaccia, Phys. Rev. Lett. [**79**]{}, 4166 (1997). E. Balkovsky, G. Falkovich and V. Lebedev, Phys. Rev. E [**55**]{}, R4881 (1997). M. Vergassola and A. Mazzino, Phys. Rev. Lett. [**79**]{}, 1849 (1997). G. Eyink and J. Xin, Phys. Rev. Lett. [**77**]{}, 2674 (1996); [*chao-dyn/9605008.*]{} M. Vergassola, Phys. Rev. E [**53**]{}, R3021 (1996). A. Pumir, Europhys. Lett. [**37**]{}, 529 (1997); Phys. Rev. E [**57**]{}, 2914 (1998). U. Frisch, A. 
Mazzino, and M. Vergassola, Phys. Rev. Lett. [**80**]{}, 5532 (1998). O. Gat, I. Procaccia, and R. Zeitak, Phys. Rev. Lett. [**80**]{}, 5536 (1998). K. Gawȩdzki and M. Vergassola, cond-mat/9811399; A. Celani, A. Lanotte and A. Mazzino, chao-dyn/9902011. L. Ts. Adzhemyan, N. V. Antonov, and A. N. Vasil’ev, Phys. Rev. E [**58**]{}, 1823 (1998). L. Ts. Adzhemyan and N. V. Antonov, Phys. Rev. E [**58**]{}, 7381 (1998). B. I. Shraiman and E. D. Siggia, C. R. Acad. Sci., Ser. II, [**321**]{}, 279 (1995). B. I. Shraiman and E. D. Siggia, Phys. Rev. Lett. [**77**]{}, 2463 (1996). B. I. Shraiman and E. D. Siggia, Phys. Rev. E [**49**]{}, 2912 (1994). M. Chertkov, G. Falkovich, and V. Lebedev, Phys. Rev. Lett. [**76**]{}, 3707 (1996). G. Eyink, Phys. Rev. E [**54**]{}, 1497 (1996). N. V. Antonov, Zapiski Nauchnykh Seminarov LOMI [**169**]{}, 18 (1988) \[English translation: Sov. Math. Zapiski LOMI\]. L. Ts. Adzhemyan, N. V. Antonov, and A. N. Vasil’ev, Zh. [É]{}ksp. Teor. Fiz. [**95**]{}, 1272 (1989) \[Sov. Phys. JETP [**68**]{}, 733 (1989)\]. L. Ts. Adzhemyan, N. V. Antonov, and A. N. Vasil’ev, Usp. Fiz. Nauk, [**166**]{}, 1257 (1996) \[Phys. Usp. [**39**]{}, 1193 (1996)\]. L. Ts. Adzhemyan, N. V. Antonov, and A. N. Vasiliev, [*The Field Theoretic Renormalization Group in Fully Developed Turbulence*]{} (Gordon & Breach, London, 1999). B. Duplantier, A. Ludwig, Phys. Rev. Lett. [**66**]{}, 247 (1991). G. L. Eyink, Phys. Lett. A [**172**]{}, 355 (1993). J. Zinn-Justin, [*Quantum Field Theory and Critical Phenomena*]{} (Clarendon, Oxford, 1989). A. N. Vasil’ev, [*Quantum-Field Renormalization Group in the Theory of Critical Phenomena and Stochastic Dynamics*]{} (St Petersburg Institute of Nuclear Physics, St Petersburg, 1998) \[in Russian; English translation: Gordon & Breach, in preparation\]. R. H. Kraichnan, Phys. Fluids [**7**]{}, 1723 (1964); [**8**]{}, 575 (1965); S. Chen and R. H. Kraichnan, Phys.Fluids A [**1**]{}, 2019 (1989); V. S. L’vov, Phys. Rep. 
[**207**]{}, 1 (1991). J. P. Bouchaud, A. Comtet, A. Georges, and P. Le Doussal, J. Phys. (Paris) [**48**]{}, 1445 (1987); [**49**]{}, 369 (1988); J. P. Bouchaud and A. Georges, Phys. Rep. [**195**]{}, 127 (1990). J. Honkonen and E. Karjalainen, J. Phys. A: Math. Gen. [**21**]{}, 4217 (1988); J. Honkonen, Yu. M. Pis’mak, and A. N. Vasil’ev, J. Phys. A: Math. Gen. [**21**]{}, L835 (1989); J. Honkonen and Yu. M. Pis’mak, J. Phys. A: Math. Gen. [**22**]{}, L899 (1989). N. V. Antonov and A. N. Vasil’ev, Theor. Math. Phys. [**110**]{}, 97 (1997). J. C. Collins, [*Renormalization. An Introduction to Renormalization, the Renormalization Group, and the Operator Product Expansion*]{} (Cambridge University Press, Cambridge, 1984). L. Ts. Adzhemyan, A. N. Vasil’ev, and Yu. M. Pis’mak, Theor. Math. Phys. [**57**]{}, 1131 (1983); L. Ts. Adzhemyan, A. N. Vasil’ev, and M. Hnatich, Theor. Math. Phys. [**74**]{}, 115 (1988). A. Lanotte and A. Mazzino, chao-dyn/9903026, to appear in Phys. Rev. E. L. Ts. Adzhemyan, N. V. Antonov, and T. L. Kim, Theor. Math. Phys. [**100**]{}, 1086 (1994). L. Ts. Adzhemyan, A. N. Vasil’ev, and M. Hnatich, Theor. Math. Phys. [**58**]{}, 47 (1984). V. Yakhot, S. A. Orszag, Phys. Rev. Lett. [**57**]{}, 1722 (1986); Journ. Sci. Comp. [**1**]{}, 3 (1986); W. P. Dannevik, V. Yakhot, S. A. Orszag, Phys. Fluids [**30**]{}, 2021 (1987); É. V. Teodorovich, Prikladnaja Matematika i Mehanika [**52**]{}, 218 (1988) \[in Russian\]. V. Yakhot, Z.-S. She, and S. A. Orszag, Phys. Fluids A [**1**]{}(2), 289 (1989). N. V. Antonov, S. V. Borisenok, and V. I. Girina, Theor. Math. Phys. [**106**]{}, 75 (1996). P. G. Mestayer, J. Fluid. Mech. [**125**]{}, 475 (1982). 
  ------------------- ----------- ------------ ------------- ------------------- ---------------------------- -------------- --------- ---------------------
  $F$                 $\theta $   $\theta '$   $ {\bf v} $   $\nu$, $\nu _{0}$   $m$, $M$, $\mu$, $\Lambda$   $g_{0}$        $u_{0}$   $g$, $u$, ${\bf h}$
  $d_{F}^{k}$         $-1$        $d+1$        $-1$          $-2$                1                            $\eps+\eta $   $\eta $   0
  $d_{F}^{\omega }$   0           0            1             1                   0                            0              0         0
  $d_{F}$             $-1$        $d+1$        1             0                   1                            $\eps+\eta $   $\eta $   0
  ------------------- ----------- ------------ ------------- ------------------- ---------------------------- -------------- --------- ---------------------

  : Canonical dimensions of the fields and parameters in the model (\[action\]).[]{data-label="table1"}

[^1]: For the rapid-change model, the relationship between the anomalous exponents and dimensions of composite operators was anticipated in [@Falk2; @BGK; @Eyink] within a certain phenomenological formulation of the OPE, the so-called “additive fusion rules,” typical of models with multifractal behavior; see also [@DL; @Ey]. The RG analysis of Ref. [@RG] shows that such fusion rules are indeed obeyed by the powers of the local dissipation rate in the model [@Kraich1], and all these operators are dangerous. [^2]: See: N. V. Antonov, [*Anomalous scaling of a passive scalar advected by the synthetic compressible flow,*]{} chao-dyn/9907018. [^3]: This Section is not included in the journal version. [^4]: The results obtained in [@AVH] were also rederived later in Refs. [@Yakhot2].
--- abstract: 'We discuss the concept of natural Poisson bivectors, which allows us to consider the overwhelming majority of known integrable systems on the sphere in the framework of bi-Hamiltonian geometry.' author: - | A V Tsiganov\ *St.Petersburg State University, St.Petersburg, Russia\ *e–mail: andrey.tsiganov@gmail.com** title: On natural Poisson bivectors on the sphere --- Introduction ============ The Hamilton-Jacobi theory seems to be one of the most powerful methods of investigating the dynamics of mechanical (holonomic and nonholonomic) and control systems. Besides its fundamental aspects, such as its relation to the action integral and generating functions of symplectic maps, the theory is known to be very useful in integrating the Hamilton equations using the variables separation technique. The milestones of this technique include the works of Stäckel, Levi-Civita, Eisenhart, Woodhouse, Kalnins, Miller, Benenti and others. The majority of results were obtained for a very special class of integrable systems, important from the physical point of view, namely for the systems with quadratic in momenta integrals of motion. The Kowalevski, Chaplygin and Goryachev results on separation of variables for the systems with higher order integrals of motion fell outside this scheme. Bi-Hamiltonian structures can be seen as a dual formulation of integrability and separability, in the sense that they substitute a hierarchy of compatible Poisson structures for the hierarchy of functions in involution, which may be treated either as integrals of motion or as variables of separation for some dynamical system. The Eisenhart-Benenti theory was embedded into the bi-Hamiltonian set-up using the lifting of the conformal Killing tensor that lies at the heart of Benenti’s construction [@sar00; @imm00]. 
The concept of natural Poisson bivectors allows us to generalize this construction and to study systems with quadratic and higher order integrals of motion in the framework of a single theory [@ts10d]. The aim of this note is to bring together all the known examples of natural Poisson bivectors on the sphere, because a good example is the best sermon. Some of these Poisson bivectors have been obtained and presented earlier in different coordinate systems and notations. Here we propose a unified description of these known and a few new bivectors using the so-called geodesic $\Pi$ and potential $\Lambda$ matrices [@ts10d]. In some sense we propose a new form for the old content and believe that this unification is a first step to the geometric analysis of various natural systems on the sphere, which reveals what they have in common and indicates the most suitable strategy to obtain and to analyze their solutions. The corresponding integrable natural systems on the two-dimensional unit sphere $\mathbb S^2$ are related to rigid body dynamics. In order to describe these systems we will use the angular momentum vector $J=(J_1,J_2,J_3)$ and the Poisson vector $x=(x_1,x_2,x_3)$ in a moving frame of coordinates attached to the principal axes of inertia [@bm05]. The Poisson brackets between these variables $$\label{e3} \,\qquad \bigl\{J_i\,,J_j\,\bigr\}=\varepsilon_{ijk}J_k\,, \qquad \bigl\{J_i\,,x_j\,\bigr\}=\varepsilon_{ijk}x_k \,, \qquad \bigl\{x_i\,,x_j\,\bigr\}=0\,,$$ may be associated with the Lie-Poisson algebra of the three-dimensional Euclidean algebra $e(3)$ with two Casimir elements $$\label{caz-e3} C_1=|x|^2\equiv\sum_{k=1}^3 x_k^2\,,\qquad C_2=(x,J)\equiv\sum_{k=1}^3 x_kJ_k\,.$$ Below we always put $C_2=0$. 
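The brackets (\[e3\]) and the constraint $C_2=0$ can be verified numerically for a concrete realization on $T^*\mathbb S^2$. The trigonometric expressions below are one standard spherical parametrization, chosen here as an assumption of this sketch so that the canonical bracket in $(\phi,\theta,p_\phi,p_\theta)$ reproduces the $e(3)$ relations; derivatives are taken by central differences:

```python
import math

def fields(q):
    """x_i and J_i on T*S^2; q = (phi, theta, p_phi, p_theta). One standard choice."""
    phi, theta, pphi, ptheta = q
    x = (math.sin(phi) * math.sin(theta),
         math.cos(phi) * math.sin(theta),
         math.cos(theta))
    cot = math.cos(theta) / math.sin(theta)
    J = (math.sin(phi) * cot * pphi - math.cos(phi) * ptheta,
         math.cos(phi) * cot * pphi + math.sin(phi) * ptheta,
         -pphi)
    return x, J

def bracket(f, g, q, h=1e-6):
    """Canonical bracket {f,g} with pairs (phi, p_phi) and (theta, p_theta)."""
    def grad(fn):
        out = []
        for i in range(4):
            qp, qm = list(q), list(q)
            qp[i] += h
            qm[i] -= h
            out.append((fn(qp) - fn(qm)) / (2.0 * h))
        return out
    df, dg = grad(f), grad(g)
    return df[0] * dg[2] - df[2] * dg[0] + df[1] * dg[3] - df[3] * dg[1]

q0 = [0.3, 0.7, -0.5, 1.1]                     # an arbitrary test point
x0, J0 = fields(q0)
xf = [lambda q, i=i: fields(q)[0][i] for i in range(3)]
Jf = [lambda q, i=i: fields(q)[1][i] for i in range(3)]

# {J_1, J_2} = J_3,  {J_1, x_2} = x_3,  {x_1, x_2} = 0,  C_2 = (x, J) = 0
assert abs(bracket(Jf[0], Jf[1], q0) - J0[2]) < 1e-6
assert abs(bracket(Jf[0], xf[1], q0) - x0[2]) < 1e-6
assert abs(bracket(xf[0], xf[1], q0)) < 1e-9
assert abs(sum(a * b for a, b in zip(x0, J0))) < 1e-12
```

Any other parametrization related to this one by the rotations and shifts admitted below would pass the same checks.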
As usual, all the results are presented up to the linear canonical transformations, which consist of rotations $$x\to \alpha\, {U}\, x\,,\qquad J\to {U}\, J\,,$$ where $\alpha$ is an arbitrary parameter and $U$ is a constant orthogonal matrix, and shifts $$x\to x \,,\qquad J\to J+ {S}\, x\,,\label{shiftE3}$$ where ${ S}$ is an arbitrary $3\!\times\!3$ skew-symmetric constant matrix [@bm05; @kst03]. If the integral of motion $C_2=(x,J)$ is equal to zero, rigid body dynamics may be restricted to the unit sphere $\mathbb S^2$ and we can use the standard spherical coordinate system on its cotangent bundle $T^*{\mathbb S}^2$: $$\label{sph-coord} \begin{array}{lll} x_1 =\sin\theta\sin\phi\,,& x_2 =\sin\theta\cos\phi\,,& x_3 =\cos\theta\,,\\ \\ J_1 =\dfrac{\sin\phi\cos\theta}{\sin\theta}\,p_\phi-\cos\phi\,p_\theta\,,& J_2 =\dfrac{\cos\phi\cos\theta}{\sin\theta}\,p_\phi+\sin\phi\,p_\theta\,, & J_3 = -p_\phi\,. \end{array}$$ We use these variables in order to determine and classify the natural Poisson bivectors on $T^*\mathbb S^2$ up to point canonical transformations. As far as the organization of this paper is concerned, in Section 2 we briefly introduce the notions of bi-Hamiltonian geometry relevant for the subsequent sections. In particular, we discuss the concept of natural Poisson bivectors on cotangent bundles of Riemannian manifolds, which allows us to generalize the classical Eisenhart-Benenti theory. In Section 3 we discuss the bi-Hamiltonian classification of bi-integrable systems on the sphere. Section 4 is devoted to the separable natural systems coming from auxiliary bi-Hamiltonian systems. Some issues in the geometry of bi-Hamiltonian manifolds ======================================================= A bi-Hamiltonian manifold $M$ is a smooth manifold endowed with a pair of compatible Poisson bivectors $P$ and $P'$ such that $$\label{m-eq1} [P,P']=0\,,\qquad [P',P']=0\,,$$ where $[.,.]$ is the Schouten bracket. This means that every linear combination of $P$ and $P'$ is still a Poisson bivector. If $P$ is an invertible Poisson bivector on $M$, one can introduce the so-called Nijenhuis operator (or hereditary, or recursion operator) $$\label{rec-op} N=P'\, P^{-1}\,.$$
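Since the printed formulas (\[sph-coord\]) fix the coefficients only partially, the snippet below adopts one consistent sign convention (an assumption on our part, with $J_3=-p_\phi$) and verifies symbolically that it realizes the $e(3)$ brackets (\[e3\]) on $T^*\mathbb S^2$, with $|x|^2=1$ and $(x,J)=0$:

```python
import sympy as sp

theta, phi, p_theta, p_phi = sp.symbols('theta phi p_theta p_phi')

# one consistent sign convention for (sph-coord); the paper's own signs may differ
x = [sp.sin(theta)*sp.sin(phi), sp.sin(theta)*sp.cos(phi), sp.cos(theta)]
J = [sp.cos(theta)*sp.sin(phi)/sp.sin(theta)*p_phi - sp.cos(phi)*p_theta,
     sp.cos(theta)*sp.cos(phi)/sp.sin(theta)*p_phi + sp.sin(phi)*p_theta,
     -p_phi]

def pb(F, G):
    # canonical bracket on T*S^2 in the chart (phi, theta, p_phi, p_theta)
    return sp.simplify(sum(sp.diff(F, q)*sp.diff(G, p) - sp.diff(F, p)*sp.diff(G, q)
                           for q, p in [(phi, p_phi), (theta, p_theta)]))

for i in range(3):
    for j in range(3):
        JJ = sum(sp.LeviCivita(i, j, k)*J[k] for k in range(3))
        Jx = sum(sp.LeviCivita(i, j, k)*x[k] for k in range(3))
        assert sp.simplify(pb(J[i], J[j]) - JJ) == 0   # {J_i, J_j} = eps_ijk J_k
        assert sp.simplify(pb(J[i], x[j]) - Jx) == 0   # {J_i, x_j} = eps_ijk x_k
        assert sp.simplify(pb(x[i], x[j])) == 0        # {x_i, x_j} = 0

assert sp.simplify(sum(xi**2 for xi in x)) == 1                 # C_1 = 1 on the sphere
assert sp.simplify(sum(xi*Ji for xi, Ji in zip(x, J))) == 0     # C_2 = (x, J) = 0
```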
If $N$ has, at every point, the maximal number of functionally independent eigenvalues $u_1,\ldots,u_n$, then $M$ is said to be a regular bi-Hamiltonian manifold. Bi-integrable systems ---------------------- Let us consider a family of bi-integrable systems, for which there are functionally independent integrals of motion $H_1,\ldots,H_n$ in bi-involution $$\label{bi-inv} \{H_i,H_j\}=\{H_i,H_j\}'=0\,,\qquad i,j=1,\ldots,n\,,$$ with respect to a pair of compatible Poisson brackets $\{.,.\}$ and $\{.,.\}'$ defined by $P$ and $P'$. There are three known distinct constructions of bi-integrable systems, see [@ts10d]. Firstly, if $M$ is a regular bi-Hamiltonian manifold endowed with an invertible Poisson bivector $P$, then we can construct the recursion operator $N$ (\[rec-op\]) and, as usual, the functions $$\label{aux-int} \mathcal H_k=\frac1{2k}\,\mathrm{tr}\,N^k$$ form a bi-Hamiltonian hierarchy on $M$, i.e. the Lenard relations hold $$P' d{\mathcal H}_k=P d{\mathcal H}_{k+1}\,,\qquad\mbox{for all}\qquad k\ge 1\,.$$ Using these relations we can get all the integrals of motion starting from the Hamilton function $H_1$. The natural obstruction to the existence of bi-Hamiltonian systems is discussed in [@br]. Fortunately, we can use these rare bi-Hamiltonian systems (natural or non-natural) as *auxiliary* systems for the construction of an infinite family of non-bi-Hamiltonian separable systems. Namely, the second, special but more fundamental, construction of integrable systems was originally formulated by Jacobi, when he invented elliptic coordinates and successfully applied them to solve several important mechanical problems: *“The main difficulty in integrating a given differential equation lies in introducing convenient variables, which there is no rule for finding.
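A toy illustration of the Lenard relations (ours, not from the paper): take the canonical bivector $P$ on $T^*\mathbb R^2$ and the simplest Turiel-type deformation $P'$ built from $L=\mathrm{diag}(q_1,q_2)$, anticipating Section 2.2; the traces $\mathcal H_k=\frac{1}{2k}\mathrm{tr}\,N^k$ then form a chain:

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
y = sp.Matrix([q1, q2, p1, p2])

I2 = sp.eye(2)
P = sp.Matrix(sp.BlockMatrix([[sp.zeros(2), I2], [-I2, sp.zeros(2)]]))
# toy second bivector: Turiel deformation by L = diag(q1, q2);
# its eigenvalues u_1 = q1, u_2 = q2 are the candidate separation variables
L = sp.diag(q1, q2)
Pp = sp.Matrix(sp.BlockMatrix([[sp.zeros(2), L], [-L, sp.zeros(2)]]))

N = Pp * P.inv()                                   # recursion operator N = P' P^{-1}
H = [(N**k).trace() / (2*k) for k in range(1, 5)]  # H_1, ..., H_4

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in y])

# Lenard relations P' dH_k = P dH_{k+1}
for k in range(3):
    assert sp.simplify(Pp*grad(H[k]) - P*grad(H[k+1])) == sp.zeros(4, 1)
```

Here $\mathcal H_k=(q_1^k+q_2^k)/k$, so the chain is degenerate (no momenta); genuinely dynamical examples require the natural bivectors discussed below.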
Therefore, we must travel the reverse path and, after finding some notable substitution, look for problems to which it can be successfully applied”.* In the framework of the Jacobi method we consider $\mathcal H_i$ (\[aux-int\]) as constants of motion for an *auxiliary* bi-Hamiltonian system on the regular bi-Hamiltonian manifold $M$ and treat the functionally independent eigenvalues $u_j$ of $N$, $$\label{dn-var} B(\lambda)=\Bigl(\det(N-\lambda\,\mathrm I)\Bigr)^{1/2}=(\lambda-u_1)(\lambda-u_2)\cdots(\lambda-u_n)\,,$$ as “convenient variables” for an infinite family of *separable* bi-integrable systems associated with various separated relations $$\label{seprel} \Phi_i(u_i,p_{u_i},H_1,\dots,H_n)=0\ ,\quad i=1,\dots,n\ , \qquad\mbox{with }\det\left[\frac{\partial \Phi_i}{\partial H_j}\right] \not=0\> .$$ Here $u=(u_1,\ldots,u_n)$ and $p_u=(p_{u_1},\ldots,p_{u_n})$ are canonical variables of separation $$\label{poi-br12} \{u_i,p_{u_j}\}=\delta_{ij}\,,\qquad \{u_i,p_{u_j}\}'=\delta_{ij}\,u_i\,.$$ The Poisson brackets (\[poi-br12\]) entail that the solutions $H_1,\ldots,H_n$ of the separated relations (\[seprel\]) are functionally independent integrals of motion in bi-involution (\[bi-inv\]), see [@ts07a]. Of course, this construction is justified only if we are capable of obtaining separable Hamilton functions $H=H_i$ which have the natural form (\[nat-h\]) in the initial $(p,q)$ variables. The third construction of integrals of motion in bi-involution on irregular bi-Hamiltonian manifolds is discussed in [@mpt; @ts10d]. In this case polynomial integrals of motion $H_2,\ldots,H_n$ are solutions of the following equations for the given Hamiltonian $H_1$ $$P'dH_1=\varkappa_k P\,d\ln H_k,\qquad k> 1\,, \quad \varkappa_k\in\mathbb R\,,$$ which replace the usual Lenard relations. If these equations have many different functionally independent solutions labeled by different $\varkappa_k$, then we obtain so-called superintegrable systems [@mpt; @ts10d].
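For $n=2$ the mechanism behind (\[seprel\]) and (\[poi-br12\]) can be checked directly. Below we pick a Stäckel-type pair of separated relations, linear in $H_{1,2}$ with an arbitrary potential $U$ (our illustrative choice, not the paper's), solve them for $H_{1,2}$, and verify bi-involution with respect to both brackets in (\[poi-br12\]):

```python
import sympy as sp

u1, u2, pu1, pu2 = sp.symbols('u1 u2 p_u1 p_u2')
H1s, H2s = sp.symbols('H1 H2')
U = sp.Function('U')   # arbitrary potential entering each separated relation

# separated relations Phi_i(u_i, p_{u_i}, H_1, H_2) = 0, linear in H_1, H_2
Phi1 = pu1**2 + U(u1) - u1*H1s - H2s
Phi2 = pu2**2 + U(u2) - u2*H1s - H2s
sol = sp.solve([Phi1, Phi2], [H1s, H2s], dict=True)[0]
H1, H2 = sol[H1s], sol[H2s]

def pb(F, G, w=(1, 1)):
    # w = (1,1): canonical bracket; w = (u1,u2): second bracket {u_i,p_{u_j}}' = delta_ij u_i
    return sp.simplify(
        w[0]*(sp.diff(F, u1)*sp.diff(G, pu1) - sp.diff(F, pu1)*sp.diff(G, u1)) +
        w[1]*(sp.diff(F, u2)*sp.diff(G, pu2) - sp.diff(F, pu2)*sp.diff(G, u2)))

assert pb(H1, H2) == 0                # involution
assert pb(H1, H2, w=(u1, u2)) == 0    # bi-involution, as claimed after (poi-br12)
```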
Bi-Hamiltonian structures on cotangent bundles ---------------------------------------------- According to [@Turiel] a torsionless (1,1) tensor field $L$ on a smooth manifold $Q$ gives rise to a (second) Poisson structure on the cotangent space $M=T^*Q$, compatible with the canonical one. Let $\theta$ be the Liouville $1$-form on $T^*Q$ and $\omega= d\theta$ the standard symplectic $2$-form on $T^\ast Q$, whose associated Poisson bivector will be denoted by $P$. If we choose some local coordinates $q=(q_1,\dots,q_n)$ on $Q$ and the corresponding symplectic coordinates $(q,p)=(q_1,\dots,q_n,p_1,\dots,p_n)$ on $T^\ast Q$, then we get the following local expressions $$\label{poi-0} \theta=p_1dq_1+\ldots+p_ndq_n\,,\qquad P=\left( \begin{array}{cc} 0 & \mathrm I\\ -\mathrm I & 0 \end{array}\right).$$ Using a torsionless tensor field $L$ one can deform $\theta$ to a $1$-form $\theta'$ and $P$ to a bivector $P'$: $$\label{p2-ben} \theta' =\sum_{i,j=1}^n L_{ij}\, p_i\, dq_j\,,\qquad P'= \left( \begin{array}{cc} 0 & L_{ij}\\ \\ -L_{ji}&\displaystyle\sum_{k=1}^n\left(\frac{\partial L_{ki}}{\partial q_j}-\frac{\partial L_{kj}}{\partial q_i}\right)p_k \end{array}\right).$$ The vanishing of the torsion of $L$ entails that $P'$ (\[p2-ben\]) is a Poisson bivector compatible with $P$. Let us consider a natural Liouville-integrable system on $Q$. The natural Hamilton function $$\label{nat-h} H_1=T+V=\sum_{i,j=1}^n \mathrm g_{ij}\,p_ip_j+V(q_1,\ldots, q_n)$$ is the sum of the geodesic Hamiltonian $T$, defined by the metric tensor $\mathrm g(q_1,\ldots,q_n)$, and the potential energy $V(q_1,\ldots,q_n)$ on $Q$. If the corresponding Hamilton-Jacobi equation is separable in an orthogonal coordinate system $(u,p_u)$ on the configuration space $Q$, then, in the framework of the Eisenhart-Benenti theory, the second Poisson bivector $P'$ (\[p2-ben\]) is defined by a conformal Killing tensor $L$ of gradient type on $Q$ with pointwise simple eigenvalues associated with the metric $\mathrm g(q_1,\ldots,q_n)$, see [@ben97; @ben05; @bm03; @sar00; @imm00].
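Turiel's statement can be tested on a concrete torsionless $L$. The sketch below (ours) takes a diagonal $L$ with $L_{ii}=L_{ii}(q_i)$, for which the momentum block of the deformed bivector vanishes, builds $P'$ by the recipe above, and checks the Jacobi identity for $P'$ and for an arbitrary combination $P+tP'$ (compatibility):

```python
import sympy as sp

q1, q2, p1, p2, t = sp.symbols('q1 q2 p1 p2 t')
y = [q1, q2, p1, p2]
p = [p1, p2]

P = sp.Matrix(sp.BlockMatrix([[sp.zeros(2), sp.eye(2)], [-sp.eye(2), sp.zeros(2)]]))

# a diagonal L with L_ii = L_ii(q_i) is automatically torsionless
L = sp.diag(q1, q2**2)
pp = sp.zeros(2)
for i in range(2):
    for j in range(2):
        # momentum block of the Turiel deformation (zero for diagonal L)
        pp[i, j] = sum((sp.diff(L[k, i], y[j]) - sp.diff(L[k, j], y[i]))*p[k]
                       for k in range(2))
Pp = sp.Matrix(sp.BlockMatrix([[sp.zeros(2), L], [-L.T, pp]]))

def jacobiator(B):
    # all cyclic sums below vanish iff the Schouten bracket [B, B] is zero
    return [sp.expand(sum(B[i, l]*sp.diff(B[j, k], y[l]) +
                          B[j, l]*sp.diff(B[k, i], y[l]) +
                          B[k, l]*sp.diff(B[i, j], y[l]) for l in range(4)))
            for i in range(4) for j in range(4) for k in range(4)]

assert all(v == 0 for v in jacobiator(Pp))         # P' is Poisson
assert all(v == 0 for v in jacobiator(P + t*Pp))   # any combination is Poisson: compatibility
```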
According to Kowalevski [@kow89] and Chaplygin [@chap04], separation of variables for integrable systems with higher-order integrals of motion involves a generic canonical transformation of the whole phase space. The definition (\[nat-h\]) of the natural Hamiltonian and of the metric tensor $\mathrm g(q_1,\ldots,q_n)$ is not invariant with respect to arbitrary canonical transformations of coordinates on $T^*Q$ $$q_i\to q'_i=f_i(p,q)\,,\qquad p_i\to p'_i=g_i(p,q)\,.$$ In this situation, when the habitual objects (geodesics, metric, potential) lose their geometric meaning and the remaining invariant equation (\[m-eq1\]) has a priori infinitely many solutions, the notion of natural Poisson bivectors on $T^*Q$ has become a de facto very useful practical tool for the calculation of variables of separation [@gts10; @mpt; @ts10a; @ts10k; @ts10d; @ts09v]. The natural Poisson bivector $P'$ on $T^*Q$ is a sum of the geodesic Poisson bivector $P'_T$ compatible with $P$, $$\label{w-eq} [P,P'_T]=[P'_T,P'_T]=0\,,$$ and the potential part defined by a torsionless (1,1) tensor field $\Lambda(q_1,\ldots, q_n)$ on $Q$ $$\label{n-p} P'= P'_T+\left( \begin{array}{cc} 0 & \Lambda_{ij}\\ \\ -\Lambda_{ji}&\displaystyle\sum_{k=1}^n\left(\frac{\partial \Lambda_{ki}}{\partial q_j}-\frac{\partial \Lambda_{kj}}{\partial q_i}\right)p_k \end{array}\right).$$ In fact, here we simply assume that bi-integrability of the geodesic motion is a necessary condition for bi-integrability in the generic case with $V\neq 0$. Throughout this paper the geodesic bivector $P'_T$ is defined, up to point transformations, by an $n\times n $ matrix $\Pi(q_1,\ldots, q_n,p_1,\ldots,p_n)$ and functions $\mathrm{x,y}$ and $\mathrm z$ on $T^*Q$ \[p2-sph2\] P’\_T= ( [cc]{} \_[k=1]{}\^n( \_[jk]{}(q)-\_[ik]{}(q)) & \_[ij]{}\ \ -\_[ji]{}&\_[k=1]{}\^n(-)z\_[k]{}(p)\ ). In this case the corresponding Poisson bracket $\{.,.\}'$ looks like &&{q\_i,p\_j}’=\_[ij]{}+\_[ij]{},{q\_i,q\_j}’ = \_[k=1]{}\^n( \_[jk]{}(q)-\_[ik]{}(q)) ,\ &&{p\_i,p\_j}’=\_[k=1]{}\^n(-)p\_k+\_[k=1]{}\^n(-)z\_[k]{}(p). In fact, the functions $\mathrm{x,y}$ and $\mathrm z$ are completely determined by the matrix $\Pi$ via the compatibility conditions (\[w-eq\]).
We can add various integrable potentials $V$ to a given geodesic Hamiltonian $T$ in order to get integrable natural Hamiltonians (\[nat-h\]). In a similar manner we can add different compatible potential matrices $\Lambda$ to a given geodesic matrix $\Pi$ in order to get natural Poisson bivectors $P'$ (\[n-p\]) compatible with the canonical bivector $P$. We have to underline that this definition of natural Poisson bivectors is a useful ansatz rather than a rigorous mathematical definition. It is an obvious consequence of the non-invariance of the definition of the natural Hamiltonian with respect to transformations of the whole phase space. We hope that further inquiry into the geometric relations between the $n\times n$ metric matrix $\mathrm g$, the potential matrix $\Lambda$ and the geodesic matrix $\Pi$ on $T^*Q$ will allow us to get a more invariant and rigorous definition of these objects. In terms of the variables of separation $\Pi=0$ and $\Lambda=\mbox{diag}(u_1,\ldots, u_n)$, so we recover the usual invariant construction of Turiel [@Turiel]. The main problem is how to rewrite this invariant theory in terms of the initial physical variables. We suppose that (\[p2-sph2\]) is a special form of $P'_T$. Another form of $P'_T$ on the generic symplectic leaves of $e^*(3)$, for the Steklov-Lyapunov system at $C_2\neq 0$, will be presented in a forthcoming publication. Special natural Poisson bivectors on the sphere =============================================== The standard Laplace method for the direct search of integrable systems may be applied to the search of the natural bivectors $P'$ too. Firstly we restrict ourselves to a family of natural Poisson bivectors (\[n-p\]) with geodesic part (\[p2-sph2\]).
Then, it is easy to see that the geodesic Hamiltonian $$T= \sum_{i,j=1}^n \mathrm g_{ij}(q)\,p_i\,p_j\,$$ on the cotangent bundle $T^*Q$ is a second-order homogeneous polynomial in momenta, so we assume that the entries of $\Pi$ are similar homogeneous polynomials $$\label{pi2-sph} \Pi_{ij}=\sum_{k,m=1}^n c_{ij}^{km}(q)\, p_kp_m$$ up to canonical transformations $p_k\to p_k+f_k(q_k)$. On the two-dimensional unit sphere $Q=\mathbb S^2$ we use spherical coordinates (\[sph-coord\]) such that $$\label{p12-coord} q=(q_{1},q_{2})=(\phi,\theta)\,,\qquad p=(p_1,p_2)=(p_\phi,p_\theta)\,.$$ At the third step we introduce a family of partial solutions for which all the entries of $P'_T$ (\[p2-sph2\]) are independent of the variable $\phi$, i.e. at $$\label{assum2} c_{ij}^{km}(q)=c_{ij}^{km}(\theta)\,,\qquad \mathrm x_{jk}(\phi,\theta)=\mathrm x_{jk}(\theta)\,,\qquad \mathrm y_{ik}(\phi,\theta)=\mathrm y_{ik}(\theta)\,.$$ It looks like a reasonable assumption because the geodesic Hamiltonian $T$ T&=&a\_1J\_1\^2+a\_2J\_2\^2+a\_3J\_3\^2=(a\_3-\^2(a\_1\^2+a\_2\^2))p\_\^2,\ \[n-hams\]\ &-&2(a\_1-a\_2)p\_p\_+(a\_1\^2+a\_2\^2)p\_\^2 is independent of the variable $\phi$ at $a_1=a_2$. If the $a_k$ are constants, this means that two diagonal elements of the inertia tensor of the body, $a_1^{-1}=a_2^{-1}$, are equal to each other and we deal with a *symmetric* rigid body [@bm05]. Due to the special form of $P'_T$ (\[p2-sph2\]) and the additional assumptions (\[pi2-sph\]-\[assum2\]), the equations (\[w-eq\]) decompose into a subsystem of equations for $c_{ij}^{km}(q)$, a subsystem of equations for $\mathrm z_k(p),c_{ij}^{km}(q)$ and a third subsystem of equations for $\mathrm x_k(q), \mathrm y_k(q), c_{ij}^{km}(q)$, which can be partially solved independently of each other. If the assumptions (\[pi2-sph\]-\[assum2\]) hold, then the subsystem of equations for the functions $z_k(p)$ coming from (\[w-eq\]) has three families of solutions $$\label{3-fam} \begin{array}{lll} 1.& \Pi_{ij}=0\,;\\ \\ 2.& \mathrm z_{1}=0\,,&\mathrm z_2=0\,;\\ \\ 3.& \mathrm z_{1}=\dfrac{p_\phi}{3}\,,& \mathrm z_{2}=\dfrac{p_\theta}{3}\,. \end{array}$$ This proposition gives only the necessary conditions.
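The claim that $T$ is independent of $\phi$ precisely for a symmetric body is easy to test in the spherical chart. The realization of $J$ below is one consistent sign convention with $J_3=-p_\phi$ (our assumption, since (\[sph-coord\]) is only partially printed):

```python
import sympy as sp

theta, phi, p_theta, p_phi = sp.symbols('theta phi p_theta p_phi')
a1, a2, a3 = sp.symbols('a1 a2 a3')

# assumed realization of the angular momenta on T*S^2 (one sign convention)
J1 = sp.cos(theta)*sp.sin(phi)/sp.sin(theta)*p_phi - sp.cos(phi)*p_theta
J2 = sp.cos(theta)*sp.cos(phi)/sp.sin(theta)*p_phi + sp.sin(phi)*p_theta
J3 = -p_phi

T = a1*J1**2 + a2*J2**2 + a3*J3**2

# generically the geodesic Hamiltonian depends on phi ...
assert sp.simplify(sp.diff(T, phi)) != 0
# ... but for a symmetric body (a1 = a2) the phi-dependence drops out
assert sp.simplify(sp.diff(T.subs(a2, a1), phi)) == 0
# and T collapses to a1*(p_theta^2 + cot(theta)^2 p_phi^2) + a3*p_phi^2
Tsym = a1*(p_theta**2 + sp.cot(theta)**2*p_phi**2) + a3*p_phi**2
assert sp.simplify(T.subs(a2, a1) - Tsym) == 0
```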
Of course, there remain complementary equations on the other functions $c_{ij}^{km}(\theta)$, $\mathrm x_{jk}(\theta)$ and $\mathrm y_{jk}(\theta)$, which have to be solved in the sequel. In the first case $P'_T=0$ and we can immediately look for a compatible potential part $\Lambda(\phi,\theta)$ and the variables of separation $u_{1,2}$ (\[dn-var\]), which are related to the initial variables by the point canonical transformations $$\label{point-trans} u_i=f_i(\phi,\theta)\,,\qquad p_{u_i}=g_i(\phi,\theta)\,p_\phi+h_i(\phi,\theta)\,p_\theta\,.$$ As a consequence, the geodesic Hamiltonian is a second-order homogeneous polynomial in both the physical and the separated momenta, and the theory of projectively equivalent metrics in classical differential geometry studies essentially the same object [@bm03]. In the second case the generic solution of (\[w-eq\]) is parameterized by six functions $g,h$ and one parameter $\gamma=0,1$: $$\label{a-z} \Pi=\left( \begin{array}{cc} \gamma\,p_\phi^2 & g_1(\theta)p_\phi^2+ g_2(\theta)p_\phi p_\theta+ g_3(\theta)p_\theta^2\\ \\ 0 & h_1(\theta)p_\phi^2+ h_2(\theta)p_\phi p_\theta+ h_3(\theta)p_\theta^2 \end{array}\right)$$ up to the point transformations $p_k\to \alpha_k p_1+\beta_k p_2$. As above, this is only a necessary condition, and the functions $g,h$ from (\[a-z\]), together with the functions $\mathrm x,\mathrm y$ from (\[p2-sph2\]), are solutions of the remaining six non-linear differential equations in (\[w-eq\]). In the third case the generic solution of (\[w-eq\]) is parameterized by nine functions $f,g,h$ and one parameter $\gamma=0,1$: $$\label{a-nz} \Pi=\left( \begin{array}{cc} f_1(\theta)p_\phi^2+ f_2(\theta)p_\phi p_\theta+ f_3(\theta)p_\theta^2 &g_1(\theta)p_\phi^2+ g_2(\theta)p_\phi p_\theta+ g_3(\theta)p_\theta^2\\ \\ \frac12 f_2(\theta)p_\phi^2+2f_3(\theta)p_\phi p_\theta+\gamma\,(f_3(\theta)+h_3(\theta))^{3/2}p_\theta^2 & h_1(\theta)p_\phi^2+ h_2(\theta)p_\phi p_\theta+ h_3(\theta)p_\theta^2 \end{array}\right)$$ up to the point transformations $p_k\to \alpha_k p_1+\beta_k p_2$. The functions $f,g,h$ from (\[a-nz\]), together with the functions $\mathrm x,\mathrm y$ from (\[p2-sph2\]), are solutions of the remaining 19 non-linear differential equations in (\[w-eq\]).
Matrices (\[a-z\]) and (\[a-nz\]) were obtained as solutions of the subsystem of algebraic and linear differential equations for $c_{ij}^{km}(\theta)$, which has an unambiguous solution. The remaining functions satisfy the complementary overdetermined subsystem of nonlinear PDE’s, which has many distinct particular solutions. In both cases (\[a-z\]) and (\[a-nz\]) we can get a complete classification of these particular solutions and of the corresponding bi-Hamiltonian systems (\[aux-int\]). The classification of separable bi-integrable systems demands additional assumptions on the form of the separated relations. Case 2 - classification of natural bi-Hamiltonian systems --------------------------------------------------------- Let us briefly discuss the procedure of classification of the natural bi-Hamiltonian systems associated with the natural Poisson bivector (\[n-p\]-\[p2-sph2\]) defined by the geodesic matrix $\Pi$ (\[a-z\]). If $h_2(\theta)=0$ in (\[a-z\]), then the six differential equations coming from (\[w-eq\]) have four distinct solutions; among them we pick out the solution defined by the following matrix $$\Pi=\left( \begin{array}{cc} \gamma\,p_\phi^2 &\gamma\left(1-\dfrac{h'_3(\theta)\,F}{\alpha\sqrt{h_3(\theta)}}+F^2\right) p_\phi\,p_\theta \\ \\ 0 & \gamma\left(1+F^2\right)p_\phi^2+h_3(\theta)\,p_\theta^2 \\ \end{array} \right)\,,\qquad F=\tan\left(\alpha\int{\dfrac{d\theta}{\sqrt{h_3(\theta)}}}+\beta\right)$$ If $a_1=a_2=const$, then we can put $h_3(\theta)=\gamma=1$ without loss of generality and obtain $$\label{nph-pi} \Pi=\left( \begin{array}{cc} p_\phi^2 & (1+\tan^2\alpha\theta)\,p_\phi p_\theta\\ \\ 0 & (1+\tan^2\alpha\theta)\,p_\phi^2+p_\theta^2 \end{array}\right),\qquad \mathrm y_{12}(\theta)=\dots$$
The corresponding geodesic Hamiltonian (\[aux-int\]) is equal to $$\mathcal T=\dfrac{1}{2}\,\mathrm{tr}\,N=\mathrm{tr}\,\Pi=(2+\tan^2\alpha\theta)p_\phi^2+p_\theta^2\,.$$ At $\alpha=1$ the matrix $\Pi$ (\[nph-pi\]) is consistent only with the following potential matrix \[nph-l\] =( [cc]{} f() &g(,)\ \ -g(,)& ++a\^2 ), where f()&=&a\^2+++,\ g(,)&=& . So, the bi-Hamiltonian system associated with $\Pi$ (\[nph-pi\]) and $\Lambda$ (\[nph-l\]) has the following Hamilton function (\[aux-int\]) H\_1&=&T++\ &-&. The second integral of motion $\mathcal H_2$ (\[aux-int\]) is a fourth-order polynomial in momenta. This integrable system has, to the best of our knowledge, not been considered in the literature yet. In a similar manner we can get a complete classification of the natural bi-Hamiltonian systems associated with the matrices (\[a-z\]) and (\[a-nz\]). Case 3 - one possible generalization ------------------------------------ The non-invariant assumptions (\[pi2-sph\],\[assum2\]) depend on a choice of coordinate system, and we miss many other solutions of (\[m-eq1\]), which may be interesting for applications. One of the possible generalizations consists in using multiplicatively separable functions in (\[pi2-sph\]) $$c_{ij}^{km}(\phi,\theta)=a_{ij}^{km}(\phi)\,b_{ij}^{km}(\theta)\,,$$ and similarly for $\mathrm{x,y}$. For instance, the geodesic matrix \[pi-lagr2\] = e\^[2]{}( [cc]{} (p\_+p\_)\^2 &\ \ 0& 0 ),=,C, gives rise to a natural Poisson bivector $P'$ at $$\mathrm y_{11} =- \dfrac{\mathrm i}{2}\,,\qquad\mathrm z_{1}=\dfrac{p_{\phi}}{3}\,,\qquad \mathrm z_{2}=\dfrac{p_{\theta}}{3}\,.$$ It is easy to prove that the integrals of motion for the Lagrange top (\[int-lagr\]) are in involution with respect to the corresponding Poisson bracket $\{.,.\}'$. The bivector $P'_T$ (\[p2-sph2\]) associated with $\Pi$ (\[pi-lagr2\]) has a natural counterpart on the generic symplectic leaves of the Lie algebra $e^*(3)$ at $(x,J)\neq 0$.
Case 3 - three-dimensional sphere ---------------------------------- On the three- and four-dimensional spheres endowed with the standard spherical coordinates there are the same three families of solutions (\[3-fam\]). This means that the factor $1/3$ in (\[3-fam\]) is independent of the dimension of the sphere. For instance, if $q=(\phi,\psi,\theta)$ and $p=(p_\phi,p_\psi,p_\theta)$ are the standard spherical coordinates on $T^*\mathbb S^3$, then at $z_k=\dfrac{p_k}{3}$ the matrices \_1&=&( [ccc]{} p\_\^2 &2p\_p\_&( 4-)p\_p\_\ 0 & p\_\^2+fp\_\^2+ p\_\^2& (2-)fp\_p\_\ 0 & p\_p\_& p\_\^2-p\_\^2+p\_\^2 ),f=f(),\ \[pi-sph3\]\ \_2&=& ( [ccc]{} p\_\^2 & 2p\_p\_&2 (e\^+1)p\_p\_\ 0 & F&2e\^[-]{}(e\^+1)\^2p\_p\_\ 0 &-2e\^[-]{}p\_p\_&F-4(e\^[-]{}+1)p\_\^2 ),where $F=(e^{\alpha\psi}+1)p_\phi^2+\beta e^{-\alpha\psi}(e^{\alpha\psi}+1)^2\,p_\psi^2+\gamma e^{-\alpha\psi}\,p_\theta^2$, determine geodesic Poisson bivectors (\[p2-sph2\]) and geodesic Hamiltonians (\[aux-int\]) $$\mathcal T_1=3p_\phi^2+\dfrac{2f}{3}\,p_\psi^2+\dfrac{2\alpha f^3}{f'^2}\,p_\theta^2\,,\qquad \mathcal T_2=(2e^{\alpha\psi}+3)p_\phi^2+\dfrac{2\beta(e^{\alpha\psi}+1)^2}{e^{\alpha\psi}}\,p_\psi^2 -2\gamma\left(2+\dfrac{1}{e^{\alpha\psi}}\right)\,p_\theta^2\,.$$ Then we can calculate compatible potential matrices $\Lambda_{1,2}$ depending on the coordinates $(\phi,\psi,\theta)$ and the corresponding integrable potentials $V_{1,2}$. The corresponding integrals of motion $\mathcal H_{2,3}$ (\[aux-int\]) are fourth- and sixth-order polynomials in momenta, respectively. So, using the notion of natural Poisson bivectors we can produce a lot of abstract mathematical examples of bi-Hamiltonian systems on the sphere. The main problems are how to select physically interesting bi-Hamiltonian systems and how to construct significant separable systems from the non-physical auxiliary bi-Hamiltonian systems.
Separable bi-integrable systems ================================ In this Section we present the matrices $\Pi$ and $\Lambda$ for the following well-known separable systems on the sphere - Case 1 - Lagrange top, Neumann system and systems separable in the elliptic coordinates; - Case 2 - Goryachev system, Matveev-Dullin system, Kowalevski top, Chaplygin system; - Case 3 - Goryachev-Chaplygin top, Sokolov system, Kowalevski-Goryachev-Chaplygin gyrostat; which may be naturally embedded into the proposed scheme as separable bi-integrable systems. Some new mathematical generalizations of these systems and new separations of known systems are collateral results of this activity. In the framework of the Jacobi method one gets the integrals of motion $H_1,\ldots, H_n$ as solutions of the separated relations (\[seprel\]). Of course, the variables of separation and the separated relations may have singular points. So, the standard problem is the rigorous determination of the domain where the variables of separation and the integrals of motion are well defined, see the Jacobi definition of the elliptic coordinates. Our main purpose is to discuss natural Poisson bivectors and, therefore, we do not comment on this huge and complicated part of the work here, see for example [@bm05; @chap04; @dull04; @gaff; @kow89; @yeh] and references therein. Case 1 - Lagrange top --------------------- If the spherical coordinates $\phi,\theta$ (\[sph-coord\]) are the variables of separation, one gets the simplest natural Poisson bivector $P'$ (\[p2-sph2\]) at \[p2-lagr\] =0,=( [cc]{} & 0\ 0 & ).
The auxiliary bi-Hamiltonian system is trivial $$\mathcal H_1=\phi+\theta\,,\qquad \mathcal H_2=\dfrac12\,(\phi^2+\theta^2).$$ On the other hand, substituting the variables of separation $u_1=\phi$ and $u_2=\theta$ into the separated relations $$\Phi_1=\left(a+\dfrac{\cos^2\theta}{\sin^2\theta}\right)H_2-H_1+p_{\theta}^2+b \cos\theta=0\,,\qquad \Phi_2=p_\phi^2-H_2=0\,,$$ one gets the integrals of motion for the Lagrange top in a rotating frame $$\label{int-lagr} H_1=J_1^2+J_2^2+aJ_3^2+bx_3\,,\qquad H_2=J_3^2\,,\qquad a,b\in\mathbb R\,.$$ A more complicated natural bivector $P'$, obtained from the matrix (\[pi-lagr2\]), gives rise to other variables of separation for this system. According to [@ts08], the bivector $P'$ (\[n-p\]) associated with $\Lambda$ (\[p2-lagr\]) admits an extension from the cotangent bundle $T^*\mathbb S^2$ to the symplectic leaves of the Lie algebra $e^*(3)$ at $(x,J)\neq 0$. Case 1 - Neumann system ------------------------- Let us put $P'_T=0$ in (\[n-p\]) and consider a particular solution $P'$ of the equations (\[m-eq1\]) defined by the following non-symmetric matrix \[p2-neu\] =( [cc]{} a\_1 \^2+a\_2\^2& 2\ \ 2& a\_3\^2+(a\_1\^2+a\_2\^2)\^2 ) with three arbitrary parameters $a_k\in\mathbb R$. As above, the auxiliary bi-Hamiltonian system has trivial integrals of motion $\mathcal H_{k}$ (\[aux-int\]), which are functions on the configuration space $\mathbb S^2$ only. On the other hand, the coordinates of separation $u_{j}$ (\[dn-var\]) are the standard elliptic coordinates on the sphere $$\label{ell-q} \frac{x_1^2}{a_1-\lambda}+\frac{x_2^2}{a_2-\lambda}+\frac{x_3^2}{a_3-\lambda} =\frac{(u_1-\lambda)(u_2-\lambda)}{(a_1-\lambda)(a_2-\lambda)(a_3-\lambda)}\,.$$ By substituting these variables into the separated relations $$u_i H_1-H_2-4(a_1-u_i)(a_2-u_i)(a_3-u_i)\,p_{u_i}^2+U_i(u_i)=0,\qquad i=1,2\,,$$ one gets bi-integrable systems with integrals of motion quadratic in momenta $$H_1=J_1^2+J_2^2+J_3^2+V(x)\,,\qquad H_2=a_1J_1^2+a_2J_2^2+a_3J_3^2+W(x) \,,$$ which are in bi-involution (\[bi-inv\]) with respect to both Poisson brackets. Here $V(x)$ and $W(x)$ are easily calculated from the potentials $U_{1,2}$.
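A quick numerical illustration of the elliptic coordinates (\[ell-q\]), with example data of our own choosing: for a point on the unit sphere, $u_{1,2}$ are the roots of a monic quadratic and interlace the parameters $a_k$:

```python
import sympy as sp

lam = sp.symbols('lambda')
a = [1, 2, 4]                                                  # parameters a_1 < a_2 < a_3
x = [sp.Rational(1, 3), sp.Rational(2, 3), sp.Rational(2, 3)]  # a point with |x| = 1
assert sum(xi**2 for xi in x) == 1

# clearing denominators in sum_i x_i^2/(a_i - lambda) leaves a quadratic
# (monic, since |x| = 1) whose roots are the elliptic coordinates u_1, u_2
num = sp.expand(sum(x[i]**2 * sp.prod(a[j] - lam for j in range(3) if j != i)
                    for i in range(3)))
u1, u2 = sorted(float(r) for r in sp.solve(num, lam))
# interlacing property: a_1 < u_1 < a_2 < u_2 < a_3
assert a[0] < u1 < a[1] < u2 < a[2]
```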
For instance, if $$U(u)=u(u-a_1-a_2-a_3)\,,$$ then one gets the Neumann system with the following integrals of motion $$\label{h-neu} \begin{array}{l} H_1=J_1^2+J_2^2+J_3^2+a_1x_1^2+a_2x_2^2+a_3x_3^2\,,\\ \\ H_2=a_1J_1^2+a_2J_2^2+a_3J_3^2-a_2a_3x_1^2-a_1a_3x_2^2-a_1a_2x_3^2 \,. \end{array}$$ The bivector $P'$ (\[n-p\]) associated with $\Lambda$ (\[p2-neu\]) also satisfies the equations (\[m-eq1\]) at $(x,J)\neq 0$, but in this case we lose the bi-involutivity (\[bi-inv\]) of the integrals of motion $H_{1,2}$ (\[h-neu\]) for the Clebsch system on the whole phase space $e^*(3)$. Of course, the corresponding elliptic coordinates on $e^*(3)$ remain variables of separation, but we cannot get interesting natural Hamiltonians using these variables [@ts06]. Case 2 - systems with cubic integral of motion ----------------------------------------------- At $\gamma=0$ in (\[a-z\]) we have a particular solution of the equations (\[w-eq\]) defined by the geodesic matrix $$\label{gor-pi} \Pi=\left( \begin{array}{cc} 0 & -(\dots)\,F\\ \\ 0 & F \end{array}\right),\qquad F=\bigl(g(\theta)\,p_\phi-\mathrm i\, h(\theta)\,p_\theta\bigr)^2\,,\quad \mathrm i=\sqrt{-1}\,,$$ depending on arbitrary functions $g(\theta)$ and $h(\theta)$, and by the functions $$\mathrm x_{22}=-\dfrac{g(\theta)}{2h(\theta)}\,,\qquad \mathrm y_{12}=0,\qquad \mathrm z_k=0\,.$$ This matrix $\Pi$ is consistent with the diagonal potential matrix $$\label{gor-l} \Lambda=(\dots)\left( \begin{array}{cc} 1 & 0\\ \\ 0 & 1 \end{array}\right).$$ The corresponding bi-Hamiltonian systems (\[aux-int\]) are non-physical, $\mathcal T=F$, and, therefore, we immediately proceed to the consideration of the coordinates of separation $v_{1,2}=\sqrt{\,u_{1,2}}$, following [@ts09v].
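Involutivity of the Neumann-type integrals (\[h-neu\]) holds on the whole of $e^*(3)$ (it is the bi-involutivity, not the involutivity, that fails at $(x,J)\neq0$) and can be confirmed symbolically. The quadratic form of the potentials below is the standard Clebsch/Neumann one (a sketch, not the paper's computation):

```python
import sympy as sp

J1, J2, J3, x1, x2, x3, a1, a2, a3 = sp.symbols('J1 J2 J3 x1 x2 x3 a1 a2 a3')
y = [J1, J2, J3, x1, x2, x3]

def skew(v1, v2, v3):
    # matrix with entries eps_{ijk} v_k
    return sp.Matrix([[0, v3, -v2], [-v3, 0, v1], [v2, -v1, 0]])

# Lie-Poisson bivector of e(3)
P = sp.Matrix(sp.BlockMatrix([[skew(J1, J2, J3), skew(x1, x2, x3)],
                              [skew(x1, x2, x3), sp.zeros(3)]]))

def pb(F, G):
    return sp.expand(sum(P[i, j]*sp.diff(F, y[i])*sp.diff(G, y[j])
                         for i in range(6) for j in range(6)))

H1 = J1**2 + J2**2 + J3**2 + a1*x1**2 + a2*x2**2 + a3*x3**2
H2 = a1*J1**2 + a2*J2**2 + a3*J3**2 - a2*a3*x1**2 - a1*a3*x2**2 - a1*a2*x3**2
assert pb(H1, H2) == 0    # involution on all of e(3)*, without restricting (x, J)
```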
If we introduce the polynomial $$\mathcal B(\lambda)=(\lambda-v_1)(\lambda-v_2)= \lambda^2-\mathrm i\sqrt{F}\lambda+\Lambda_{1,1}\,.$$ instead of the characteristic polynomial $B(\lambda)=(\lambda-u_1)(\lambda-u_2)=(\lambda-v_1^2)(\lambda-v_2^2)$ (\[dn-var\]) of the recursion operator $N$, then it is easy to prove that $$\{\mathcal B(\lambda),\mathcal A(\mu)\}=\dfrac{\lambda}{\mu-\lambda} \left(\dfrac{\mathcal B(\lambda)}{\lambda}-\dfrac{\mathcal B(\mu)}{\mu}\right)\,, \qquad \{\mathcal A(\lambda),\mathcal A(\mu)\}=0\,,$$ where $$\mathcal A(\lambda)=\int\dfrac{\mathrm id\theta}{{g(\theta)}}-\dfrac{\mathrm ip_\phi}{\lambda}\,.$$ This entails that $$p_{v_j}=\mathcal A(\lambda=v_j)\,,\qquad j=1,2,$$ are momenta canonically conjugate to the $v_j$ and that the corresponding Poisson brackets read as $$\{v_i,p_{v_j}\}=\delta_{ij},\qquad \{v_i,p_{v_j}\}'=\delta_{ij} v_i^2\,.$$ Now we have to substitute this family of variables of separation into the separated relations and try to get natural Hamiltonians. For instance, let us take \[hg-gor\] g()=f(),h()= f(), substitute $$\lambda=v_j\qquad \mu=\dfrac{2\mathrm i}{3}\, v_{j}\,p_{j},\qquad j=1,2,$$ into the equation \[gor-eq\] (,)=H\_1+H\_2-\^3-\^3-b+=0, and solve the pair of resulting equations with respect to $H_{1,2}$. If $a_1=a_2$ in the geodesic Hamiltonian (\[n-hams\]), then in this solution we have to put $$f(\theta)=\dfrac{\cos^{1/3}\theta}{\sin^2\theta} \,,$$ and we obtain the integrals of motion for the Goryachev system on the sphere [@gor15] H\_1&=&J\_1\^2+J\_2\^2+J\_3\^2+-,\ \[gor-int\]\ H\_2&=&(J\_1\^2+J\_2\^2+J\_3\^2-)-2ix\_3\^[1/3]{}J\_1+x\_1J\_3. For the other separable natural bi-integrable systems from [@ts05; @ts09v] we present only the Hamiltonians and the functions $g$ and $h$.
So, for the Goryachev-Chaplygin top [@gor00; @chap04] we have $$H_1=J_1^2+J_2^2+4J_3^2+ax_1+\dfrac{b}{x_3^2}\,,\qquad g(\theta) = \dfrac{1}{\cos\theta\sin\theta}\,,\qquad h(\theta) =\dfrac{3\cos^2\theta-2}{\cos^2\theta\sin^2\theta}\,.$$ For the Dullin-Matveev system [@dull04] with Hamiltonian $$H_1=J_1^2+J_2^2+\left(1+\dfrac{x_3}{x_3+c}-\dfrac{x_3^2-|x|^2}{4(x_3+c)^2}\right) J_3^2+\dfrac{ax_1}{(x_3+c)^{1/2}}+\dfrac{b}{x_3+c}$$ the geodesic matrix $\Pi$ (\[gor-pi\]) and the potential matrix $\Lambda$ (\[gor-l\]) are defined by the functions $$g(\theta) = \dfrac{1}{\sin\theta}\,,\qquad \qquad h(\theta)=-\dfrac{1-2c\cos\theta-3\cos^2\theta}{2\sin^2\theta\,(\cos\theta+c)}\,.$$ For the system with the Hamiltonian $$H_1=J_1^2+J_2^2+\left(\dfrac{7}{12}+\dfrac{x_3}{2(x_3+|x|)}\right) J_3^2 +\dfrac{2\mathrm i\alpha x_1}{(x_3+|x|)^{5/6}}-\dfrac{b}{(x_3+|x|)^{1/3}}\,,$$ the bi-Hamiltonian structure is defined by the functions $$g(\theta) = \dfrac{(\cos\theta+1)^{2/3}}{\sin\theta}\,,\qquad\qquad h(\theta) = -\dfrac{(\cos\theta+1)^{2/3}}{2(\cos\theta-1)}\,.$$ For the last system from [@ts05] we have $$H=J_1^2+J_2^2+\left(\dfrac{13}{16}+\dfrac{3x_3}{8(x_3+|x|)}\right) J_3^2 +\dfrac{ax_1}{(x_3+|x|)^{3/4}}+\dfrac{b}{(x_3+|x|)^{1/2}}$$ and $$g(\theta) =\dfrac{ (\cos\theta+1)^{1/2}}{\sin\theta}\,,\qquad\qquad h(\theta)=\dfrac{ (3\cos\theta+1)(\cos\theta+1)^{1/2}}{4\sin^2\theta}\,.$$ If $\widetilde{P}'$ is the Poisson bivector linear in momenta from [@ts09v], then our natural Poisson bivector is equal to $P'=\widetilde{P}'P^{-1}\widetilde{P}'$. According to [@val10], the Goryachev-Chaplygin, Chaplygin and Dullin-Matveev systems can be embedded into a family of integrable systems with a cubic integral of motion. We suppose that the bi-Hamiltonian structures for the Valent systems may be described by a suitable choice of the functions $g(\theta)$ and $h(\theta)$ in (\[gor-pi\]) and (\[gor-l\]).
Another possible generalization consists in multiplying the matrix $\Pi$ (\[gor-pi\]) by functions depending on $\phi$, similarly to (\[pi-lagr2\]). Case 2 - Kowalevski top and Chaplygin system -------------------------------------------- Let us consider a geodesic bivector $P'_T$ (\[p2-sph2\]) determined by the matrix $\Pi$ \[kow-pi\] =( [cc]{} 0 &\ \ 0 & \^2p\_\^2+\^2p\_\^2 ),R, and by the functions $$\mathrm y_{12}(\theta)=\cos\theta\Bigl(\sin\theta+\alpha \mathrm x_{22}(\theta)\cos\theta\Bigr)\,,\qquad \mathrm z_{1,2}=0\,.$$ There is only one potential matrix consistent with $\Pi$ (\[kow-pi\]) \[kow-l\] =( [cc]{} a-b& (a-b)\ \ (a-b) & -a+b ),a,bR. The corresponding coordinates of separation $u_{1,2}$ (\[dn-var\]) are the roots of the polynomial B()&=&\^2--\ \[kow-var\]\ &-&-a\^2-b\^2. Following [@ts10a; @ts10k] we can introduce the auxiliary polynomial $$A(\lambda)=\dfrac{\sin\theta p_\theta}{\alpha\cos\theta}\,\lambda +\dfrac{a\sin\alpha\phi+b\cos\alpha\phi}{\alpha}\,p_\phi - \dfrac{\sin\theta(a\cos\alpha\phi-b\sin\alpha\phi)}{\alpha\cos\theta}\,p_\theta\,,$$ such that $$\{B(\lambda), A(\mu)\}=\dfrac{1}{\lambda-\mu}\,\Bigl((\mu^2-a^2-b^2)B(\lambda)-(\lambda^2-a^2-b^2)B(\mu)\Bigr)\,,\qquad \{A(\lambda),A(\mu)\}=0\,.$$ This entails that $$p_{u_j}=-\dfrac{1}{u_j^2-a^2-b^2}\, A(\lambda=u_j)\,,\qquad j=1,2,$$ are the canonically conjugate momenta satisfying the Poisson brackets (\[poi-br12\]). At $\alpha=2$ these variables were considered by Chaplygin [@chap03]. By substituting these variables of separation into the pair of separated relations $$\Phi_1=(u_1^2-a^2-b^2)p_{u_1}^2+H_1-H_2=0\,,\qquad \Phi_2= (u_2^2-a^2-b^2)p_{u_2}^2+H_1+H_2=0\,,$$ one gets a separable bi-integrable system with the Hamilton function \[g-kgch\] 2\^2H\_1=p\_\^2-\^2 p\_\^2+2(a+b)\^,R.
According to [@ts10a; @ts10k], at $\alpha=1$, using the separated relations $$\label{kow-sep} \Phi(u,p_u)=\bigl((u^2-a^2-b^2)p_u^2+H_1-H_2\bigr)\bigl((u^2-a^2-b^2)p_u^2+H_1+H_2\bigr)+cu^2+du=0$$ one gets the Hamilton function of the generalized Kowalevski top [@kow89] \[kow-hg\] H\^[kow]{}=2H\_1=(1-)(J\_1\^2+J\_2\^2)+2J\_3\^2+2ax\_2+2bx\_1-. At $\alpha=2$ we can use the other separated relations $$\label{chap-sep} \Phi(u,p_u)=\bigl((u^2-a^2-b^2)p_u^2+cu+H_1-H_2\bigr)\bigl((u^2-a^2-b^2)p_u^2+cu+H_1+H_2\bigr)+du=0$$ in order to get the Hamiltonian of the generalized Chaplygin system [@chap03; @gor16] \[chap-hg\] H\^[ch]{}=8H\_1=(1-)(J\_1\^2+J\_2\^2)+2J\_3\^2-2a(x\_1\^2-x\_2\^2)-2bx\_1x\_2-. At $c=-\alpha^{-2}$ we have the geodesic Hamiltonian $T=J_1^2+J_2^2+2J_3^2$ with a constant inertia tensor. By substituting these variables of separation into other separated relations we can obtain various mathematical generalizations of the bi-integrable Hamiltonians (\[g-kgch\],\[kow-hg\],\[chap-hg\]). Case 2 - spherical top and Chaplygin system ------------------------------------------- At $\gamma=0$ in (\[a-z\]) we have a particular solution of the equations (\[w-eq\]) defined by the matrix \[p-tr-ex\] =( [cc]{} p\_\^2 & p\_ p\_\ \ 0 & p\_\^2+p\_\^2 ),R, and the functions $$\mathrm y_{12}= \sin\theta\cos\theta+\dfrac{2\alpha\cos^2\theta}{\sin^2\theta-\alpha}\,\mathrm x_{22}\,,\qquad \mathrm z_k=0\,.$$ In this case the coordinates of separation $u_{1,2}$ (\[dn-var\]) are equal to $$u_1=p_\phi^2\,,\qquad\qquad u_2=\dfrac{\alpha p_\phi^2}{\sin^2\theta}-\dfrac{(\sin^2\theta-\alpha)p_\theta^2}{\cos^2\theta}\,,$$ so that the conjugate momenta read as $$p_{u_1}=\dfrac{\arctan\left(\dfrac{p_\theta\tan\theta}{p_\phi}\right)-\phi}{2p_\phi}\,,\qquad p_{u_2}=\dfrac{\mathrm \sin\theta\cos\theta\arctan\left(\dfrac{\sin^2\theta\,p_\theta}{\sqrt{\alpha\cos^2\theta p_\phi^2-\sin^2\theta(\sin^2\theta-\alpha)p_\theta^2}}\right)} {2\sqrt{\alpha\cos^2\theta p_\phi^2-\sin^2\theta(\sin^2\theta-\alpha)p_\theta^2}}\,.$$ By substituting these variables of separation
into the separated relations $$\Phi_1=\sqrt{u_1}-H_2=0\,,\qquad\Phi_2=\alpha H_1-u_2\Bigl(1-(\alpha-1)\tan^2(2p_{u_2}\sqrt{u_2})\Bigr)+\alpha f(\theta)=0\,,$$ where $$\theta=\arccos\sqrt{\dfrac{u_2-\alpha H_2^2}{u_2}+\dfrac{\alpha(H_2^2-u_2)(1-\cos4p_{u_2}\sqrt{u_2})}{2u_2}}$$ one gets generalized Lagrange top with integrals of motion \[eq-glag\] H\_1=J\_1\^2+J\_2\^2+J\_3\^2+f(x\_3),H\_2=J\_3. Other separated relations \_1(u\_1,p\_[u\_1]{})&=&H\_2-H\_1+u\_1=0,\ \[seprel-sph\]\ \_2(u\_1,p\_[u\_1]{})&=&H\_1-u\_2(1-(-1)\^2(2p\_[u\_2]{}))=0.give rise to integrals of motion for the spherical top \[sph-top\] H\_1=T=J\_1\^2+J\_2\^2+J\_3\^2,H\_2=J\_1J\_2J\_3. There are only two potential matrices compatible with $\Pi$ (\[p-tr-ex\]) $$\Lambda^{(1)}=\left( \begin{array}{cc} f(\phi) & 0 \\ \\ \dfrac{f'(\phi)(\sin^2\theta-\alpha)}{2\sin\theta\cos\theta} &\dfrac{ \alpha f(\phi) }{\sin^2\theta} \end{array} \right)$$ and $$\Lambda^{(2)}=\left( \begin{array}{cc} a \sin2\phi+b \cos2\phi & -\dfrac{\cos\theta}{\alpha\sin\theta}(\alpha-\sin^2\theta)(a\cos2\phi-b \sin2\phi) \\ \\ -\dfrac{\sin\theta}{\alpha\cos\theta}(\alpha-\sin^2\theta)(a\cos2\phi-b \sin2\phi) & -\dfrac{(\alpha-2\sin^2\theta)(a\sin2\phi+b \cos2\phi)}{\alpha} \end{array} \right)\,.$$ In the first case the auxiliary bi-Hamiltonian system with the Hamilton function (\[aux-int\]) \[def-kow1\] H\_1\^[(1)]{}=(1+)(J\_1\^2+J\_2\^2)+2J\_3\^2 +f()(1+), is a deformation of the geodesic Hamiltonian for the Kowalevski top at $\alpha=1$ and $f=0$. 
By substituting the corresponding coordinates of separation (\[dn-var\]) $$\hat{u}_1=u_1+f(\phi)\,, \quad\hat{p}_{u_1}=p_{u_1}-\dfrac{1}{2}\int^\phi\dfrac{dx}{p_\phi^2+f(\phi)-f(x)}\,, \qquad \hat{u}_2=u_2+\dfrac{\alpha f(\phi)}{\sin^2\theta}\,, \quad \hat{p}_{u_2}=p_{u_2}\,,$$ into $\Phi_1=\hat{u}_1-\widehat{H}_2=0 $ and the second separated relation $\Phi_2$ in (\[seprel-sph\]), one gets a generalization of the spherical top defined by the following integrals of motion $$\widehat{H}_1=J_1^2+J_2^2+J_3^2+\dfrac{f\left(\dfrac{x_1}{x_2}\right)}{x_1^2+x_2^2}\,,\qquad \widehat{H}_2=J_3^2+\dfrac{f\left(\dfrac{x_1}{x_2}\right)}{x_1^2+x_2^2+x_3^2}.$$ In the second case matrices $\Pi$ (\[p-tr-ex\]) and $\Lambda^{(2)}$ give rise to the auxiliary bi-Hamiltonian system with the Hamilton function \[def-chap1\] H\_1\^[(2)]{}=(1+)(J\_1\^2+J\_2\^2)+2J\_3\^2+ 4\^[-1]{}ax\_1x\_2-2\^[-1]{}b(x\_1\^2-x\_2\^2). It is a new deformation of the well-known Chaplygin system [@chap03]. According to [@ts99b], there is a non-canonical map, which relates integrals of motion (\[sph-top\]) with integrals of motion for the Gaffet system [@gaff] $$H_1=J_1^2+J_2^2+J_3^2-\dfrac{1}{(x_1 x_2 x_3)^{2/3}}\,,\qquad H_2=J_1J_2J_3+\dfrac{x_2 x_3 J_1+x_1 x_3 J_2+x_1 x_2 J_3}{ (x_1 x_2 x_3)^{2/3} }\,.$$ In order to describe the bi-Hamiltonian structure for the Gaffet system we have to use additional non-point transformation of the standard spherical coordinates, which changes the form of $P'_T$ (\[p2-sph2\]) in initial variables. This bi-Hamiltonian structure will be discussed in the forthcoming publication. 
Case 3 - Goryachev-Chaplygin top and Sokolov system --------------------------------------------------- At $\gamma=0$ in (\[a-nz\]) equations (\[w-eq\]) have a particular solution $P'_T$ (\[p2-sph2\]) defined by the following symmetric matrix \[p-gch1\] =( [cc]{} p\_\^2 +p\_\^2(4+3\^2) & 2p\_p\_\ \ 2p\_p\_&p\_\^2-p\_\^2\^2 ),R, and by the functions $$\mathrm x_{22}=\mathrm y_{12}=-\dfrac{\cos\alpha\theta\,\sin\alpha\theta}{\alpha}\,,\qquad\qquad \mathrm z_{k}=\dfrac{p_k}{3}\,.$$ There is only one potential matrix compatible with $\Pi$ (\[p-gch1\]) $$\Lambda= \dfrac{a}{\cos^2\alpha\theta}\left( \begin{array}{cc} 1 &0 \\ 0 & 1 \end{array} \right)\,.$$ The corresponding auxiliary bi-Hamiltonian system is defined by the Hamilton function (\[aux-int\]) $$\dfrac{1}{2}\mathcal H_1=(2+\cot^2\alpha\theta)p_\phi^2+p_\theta^2+\dfrac{a}{\cos^2\alpha\theta}\,.$$ If $\alpha=1$, we have a deformation of the geodesic Hamiltonian for the Kowalevski top [@kow89] H\_1=J\_1\^2+J\_2\^2+2J\_3\^2+. This auxiliary bi-Hamiltonian system gives rise to the variables of separation $u_{1,2}$ (\[dn-var\]) $$u_{1,2}=\left(p_\phi\pm\sqrt{\dfrac{p_\phi^2}{\sin^2\theta}+p_\theta^2+\dfrac{a}{\cos^2\theta}}\right)^2= \left( J_3\pm\sqrt{J_1^2+J_2^2+J_3^2+\dfrac{a}{x_3^2}} \right)^2\,,\qquad \alpha=1.$$ At $a=0$ these coordinates were found in [@chap04]. 
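The fact that $u_{1,2}$ at $\alpha=1$ are integrals of motion of the deformed geodesic Hamiltonian above follows from $u_{1,2}=\bigl(p_\phi\pm\sqrt{\tfrac{1}{2}\mathcal H_1-p_\phi^2}\bigr)^2$, i.e., they are functions of two commuting integrals. This can be checked numerically with finite-difference Poisson brackets; the parameter $a$ and the phase-space point below are arbitrary test values:

```python
import math

A = 0.2  # the parameter a of the potential (test value)

def H(th, ph, pth, pph):
    # (1/2) H_1 at alpha = 1: (2 + cot^2 th) p_phi^2 + p_theta^2 + a/cos^2 th
    return (2 + 1/math.tan(th)**2)*pph**2 + pth**2 + A/math.cos(th)**2

def u(sign):
    # u_{1,2} = (p_phi +/- sqrt(p_phi^2/sin^2 th + p_theta^2 + a/cos^2 th))^2
    def f(th, ph, pth, pph):
        r = math.sqrt(pph**2/math.sin(th)**2 + pth**2 + A/math.cos(th)**2)
        return (pph + sign*r)**2
    return f

def d(f, pt, i, h=1e-6):
    p, m = list(pt), list(pt)
    p[i] += h; m[i] -= h
    return (f(*p) - f(*m))/(2*h)

def pbracket(f, g, pt):
    # canonical Poisson bracket in (theta, phi, p_theta, p_phi)
    return (d(f, pt, 0)*d(g, pt, 2) - d(f, pt, 2)*d(g, pt, 0)
            + d(f, pt, 1)*d(g, pt, 3) - d(f, pt, 3)*d(g, pt, 1))

up, um = u(+1), u(-1)
pt = (0.8, 0.5, 0.6, 0.9)
print(all(abs(pbracket(f, g, pt)) < 1e-5
          for f, g in [(H, up), (H, um), (up, um)]))  # True
```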
By substituting the generalized Chaplygin variables \[fch-var\] v\_[1,2]{}=,p\_[v\_[1,2]{}]{}=- (v\_[1,2]{}(i x\_1-x\_2)-(i J\_1-J\_2)x\_3)+, into the separated relations $$\Phi_{1,2}(v,p_v)=H_1v+H_2+b\sqrt{v^2-a}\sin 2 p_v-v^3-cv^2=0\,,\qquad v=v_{1,2},\quad p_v=p_{v_{1,2}}\,,$$ one gets integrals of motion for the generalized Goryachev-Chaplygin gyrostat [@gor00; @chap04] H\_1&=&J\_1\^2+J\_2\^2+4J\_3\^2+2cJ\_3+bx\_1+\ \ H\_2&=&(2J\_3+c)(J\_1\^2+J\_2\^2+)-bx\_3J\_1 .By substituting the same variables (\[fch-var\]) into the following separated relations \[sok-rel\] \_[1,2]{}(v\_[1,2]{},p\_[v\_[1,2]{}]{})=\_1\_2+b2p\_[v\_[1,2]{}]{}-v\_[1,2]{}\^2-cv\_[1,2]{}=0, we obtain the generalized Sokolov system [@sok] defined by integrals of motion \_1&=&J\_1\^2+J\_2\^2+2J\_3\^2+c J\_3+b(J\_3x\_1-x\_3J\_1)+,\ \ \_2&=&(2J\_3+c+bx\_1),up to the canonical transformation discussed in [@kst03]. Case 3 - Kowalevski-Goryachev-Chaplygin gyrostat ------------------------------------------------ The geodesic matrix $\Pi$ (\[p-gch1\]) for the Goryachev-Chaplygin top may be deformed \[p-kowg\] =+( [cc]{} 0 & p\_\^2\ \ 0 & 0 ), if $$\mathrm y_{11}(\theta)=\mathrm x_{21}(\theta)-\dfrac{\beta}{2\alpha}\,,\qquad y_{12}(\theta)=-\dfrac{\cos^2\alpha\theta}{\sin^2\alpha\theta}\,\mathrm x_{22}(\theta) -\dfrac{\cos\alpha\theta}{\alpha \sin\alpha\theta}\,,\qquad \mathrm z_{k}=\dfrac{p_k}{3}.$$ In the $r$-matrix formalism transition from matrix (\[p-gch1\]) to the matrix (\[p-kowg\]) generates transition from the quadratic Sklyanin bracket to the so-called reflection equation algebra [@ts02; @ts08t]. 
In the generic case the matrix $\widehat{\Pi}$ (\[p-kowg\]) is compatible with the potential matrix \[lg-kowg\] \^[(1)]{}= ( [cc]{} \^2-4 & -\ \ & \^2 )+ ( [cc]{} 1 & -\ \ 0 & 1 ) The corresponding auxiliary bi-Hamiltonian system is defined by the Hamiltonian $$\dfrac{1}{2}\mathcal H_1=(2+\cot^2\alpha\theta)p_\phi^2+p_\theta^2 +\dfrac{a(\cos^2\alpha\theta-2)\e^{-\frac{4\alpha\phi}{\beta}}}{\sin^2\alpha\theta}+\dfrac{b\sin^2\alpha\theta}{\cos^2\alpha\theta}\,.$$ So, at $\alpha=1$ we have another deformation of the geodesic Hamiltonian for the Kowalevski top [@kow89] $$\dfrac{1}{2}\mathcal H_1=J_1^2+J_2^2+2J_3^2-\dfrac{ a(x_1^2+x_2^2+1)}{x_1^2+x_2^2}\,\e^{-\frac{4\arctan(x_1/x_2)}{\beta}}+\dfrac{b(x_1^2+x_2^2)}{x_3^2}\,.$$ In this case the description of the variables of separation and the corresponding bi-integrable system is an open problem. At $\beta=\pm2\,\mathrm i$ there is one more particular potential matrix compatible with $\widehat{\Pi}$ (\[p-kowg\]) \[l-kowg\] \^[(2)]{}= \^[i]{} ( [cc]{} i &\ \ 0 & 0 ). In this particular case we can substitute the coordinates of separation $u_{1,2}$ (\[dn-var\]) and the corresponding momenta $p_{u_{1,2}}$ into the separated relations defined by \[kgyr-curve\] (u,p\_u)= u\^6+H\_1u\^4+H\_2u\^2+a+2p\_u=0, and obtain integrals of motion for the Kowalevski-Goryachev-Chaplygin gyrostat with the following Hamilton function \[kgyr-h\] H\_1=J\_1\^2+J\_2\^2+2J\_3\^2+2c\_1 J\_3+c\_2x\_1+c\_3(x\_1\^2-x\_2\^2) +, see [@chap03; @gor16; @kow89; @yeh]. Here $b(u)$ (\[kgyr-curve\]) is a special polynomial of eighth order in $u$ with coefficients depending on $a$ and $c_k$, see details in [@ts02]. In this case, in order to get the conjugated momenta $p_{u_{1,2}}$ and the separated relation, we used the Lax matrices and the reflection equation algebra, which drastically simplifies the calculations.
For the systems with quartic integral of motion from [@ts05a] the natural Poisson bivector may be obtained using deformation of the matrix (\[p-kowg\]) similar to (\[pi-lagr2\]). Case 2 - deformations of the Kowalevski top and Chaplygin systems ----------------------------------------------------------------- Let us consider trivial canonical transformation \[ptheta-tr\] p\_p\_+f(), which preserved canonical Poisson bivector $P$ (\[poi-0\]). This mapping shifts the natural Poisson bivector $P'$ (\[n-p\]) associated with matrices $\Pi$ (\[kow-pi\]) and $\Lambda$ (\[kow-l\]) by the rule $$\widehat{P}'=P'+g(\theta)\left( \begin{array}{cccc} 0 & 0 & 0 &\left(\dfrac{\alpha\cos^2\theta-1}{\alpha\sin^2\theta}+\dfrac{\cot\theta}{\alpha}\,\ln'g\right) \,p_\phi \\ \\ * & 0 & 0 &p_\theta+\dfrac{\cos^2\theta\,\sin^{\alpha-2}\theta}{4}\,g(\theta) \\ \\ * & * & 0 & \dfrac{\sin^{\alpha-2}\theta(a\sin\alpha\phi+b\cos\alpha\phi)}{2}\Bigl(\sin\theta\cos\theta \ln'g-1\Bigr)\\ \\ * & * & * & 0 \end{array} \right)\,,$$ where $$g(\theta)=-\dfrac{2f(\theta)\sin^{2-\alpha}\theta}{\cos^2\theta}\, \qquad\mbox{and}\qquad \ln'g=\dfrac{1}{g(\theta)}\dfrac{dg(\theta)}{d\theta}\,.$$ The Poisson bivector $\widehat{P}'$ gives rise to the “shifted” variables of separation \[sh-sepvar\] =. u|\_[p\_p\_+f()]{},=. p\_u |\_[p\_p\_+f()]{}. If we substitute these variables of separation into the old separated relations (\[kow-sep\]) and (\[chap-sep\]) one gets non-natural Hamiltonians, which are related to the old Hamiltonians (\[kow-hg\]) and (\[chap-hg\]) by canonical transformation (\[ptheta-tr\]). In order to get new natural Hamiltonians we have to appropriately modify the separated relations. 
For instance, let us take $$f(\theta)=\dfrac{\sqrt{\beta}\,\tan^{\alpha-1}\theta}{\cos^\alpha\theta}\,.$$ At $\alpha=1$, by substituting the variables of separation (\[sh-sepvar\]) into the new separated relations \[dkow-sep\] =-H\_1+[\^2]{}+(\^2-a\^2-b\^2)\_[u]{},=\_[1,2]{},\_u=\_[u\_[1,2]{}]{}, where $\Phi$ is given by (\[kow-sep\]), one gets a generalization of the Hamilton function (\[kow-hg\]) $$\widehat{H}^{kow}=\left(1-\dfrac{c+1}{x_3^2}\right)(J_1^2+J_2^2)+2J_3^2+2ax_2+2bx_1-\dfrac{d}{\sqrt{x_1^2+x_2^2}}-\dfrac{\beta}{x_3}\,.$$ At $\alpha=2$ the “shifted” separated relations \[dchap-sep\] =+(\^2-a\^2-b\^2)\_u,=\_[1,2]{},\_u=\_[u\_[1,2]{}]{}, where $\Phi$ is given by (\[chap-sep\]), yield a similar generalization of the Hamiltonian (\[chap-hg\]) $$\widehat{H}^{ch}=\left(1-\dfrac{4c+1}{x_3^2}\right)(J_1^2+J_2^2)+2J_3^2-2a(x_1^2-x_2^2)-2bx_1x_2-\dfrac{2d}{1+4c-x_3^2}+\beta\left(\dfrac{1}{x_3^4}-\dfrac{1}{x_3^6}\right)\,.$$ These Hamiltonians at $c=-\alpha^{-2}$, and other Hamiltonians associated with various functions $f(\theta)$, may be found in [@yeh]. The separability of these systems, to the best of our knowledge, has not been considered in the literature yet. In both cases the equations of motion are linearized on two copies of the non-hyperelliptic curves of genus three defined by (\[dkow-sep\]) and (\[dchap-sep\]). We do not know how to solve the corresponding Abel-Jacobi equations as yet. Other natural Poisson bivectors studied in the previous Sections may be shifted by similar terms linear in momenta. As above, this allows us to get various generalizations of the considered bi-integrable systems.

Conclusion
==========

We proved that almost all known integrable systems on the two-dimensional unit sphere $\mathbb S$ may be studied in the framework of a single theory of natural Poisson bivectors. It is an experimental fact supported by all the known constructions of the variables of separation on the sphere.
We wish to draw attention to this experimental fact in order to find a suitable geometric explanation of this phenomenon. This collection of examples may thus be helpful for investigations of the invariant geometric properties of the metric $\mathrm g$, geodesic $\Pi$, and potential $\Lambda$ matrices as objects on the whole phase space, which would allow us to obviate the necessity of directly solving the equations (\[m-eq1\],\[w-eq\]) and (\[bi-inv\]). Moreover, it can possibly be a suitable step towards the construction of Poisson bivectors on more generic symplectic and Poisson manifolds.

[10]{}

S. Benenti, , [J. Math. Phys.]{}, v.38, p. 6578-6602, 1997.

S. Benenti, , [Acta Applicandae Mathematicae]{}, v.87, p. 33-91, 2005.

A. V. Bolsinov, V. S. Matveev, , J. Geom. Phys. v.44, p.489-506, 2003.

A.V. Borisov, I.S. Mamaev, , Moscow-Izhevsk, RCD, 2005.

R. Brouzet, , Jour. Math. Phys. 34, 1309-1313, 1993.

S.A. Chaplygin, , Trudy otdel. Fiz. Nauk Obsh. Liub. Est., v.11, p.7-10, 1903.

S.A. Chaplygin, , Trudy otd. fiz. nauk Mosk. obshch. lyub. estest., v. 12, no. 1, p. 1-4, 1904.

M. Crampin, W. Sarlet, G. Thompson, , J. Phys. A: Math. Gen. v.33, p.8755-8770, 2000.

H.R. Dullin, V.S. Matveev, , Mathematical Research Letters, v.11, p.715-722, 2004.

B. Gaffet, , J. Phys. A: Math. Gen., v.31, p. 1581-1596, 1998.

D.N. Goryachev, , Mat. sbonik kruzhka lyub. mat. nauk, vol. 21, no. 3, pp. 431-438, 1900.

D.N. Goryachev, Warshav. Univ. Izv., v.3, p.1-11, 1915.

D.N. Goryachev, , Warshav. Univ. Izv., v.3, p.1-15, 1916.

Yu. A. Grigoryev, A. V. Tsiganov, , arXiv:1012.0468, 2010.

A. Ibort, F. Magri, G. Marmo, , [J. Geometry and Physics]{}, v.33, p.210-228, 2000.

I.V. Komarov, V.V. Sokolov, A.V. Tsiganov, , , v.36, p. 8035-8048, 2003.

S. Kowalevski, , , v.[12]{}, p.177-232, 1889.

A. J. Maciejewski, M. Przybylska, A.V. Tsiganov, , arXiv:1011.3249, 2010.

V. S. Matveev, V.V. Shevchishin, , Journal of Geometry and Physics, v. 60, pp. 833-856, 2010.\ , arXiv:1010.4699, 2010.

V.V.
Sokolov, , Theor. Math. Phys., v.129, p. 1335-1340, 2001. A.V. Tsiganov, , J.Phys.A., v.32, p.8355-8363, 1999 A.V. Tsiganov, , J. Phys. A, Math. Gen. 35, No.26, L309-L318, 2002 A.V. Tsiganov, , J. Phys. A, Math. Gen. v.38, p.921-927, 2005. A.V. Tsiganov, , J. Phys. A, Math. Gen. v.38, p.3547-3553, 2005. A.V. Tsiganov, , J. Phys. A, Math. Gen. v.39, p.L571-L574, 2006. A.V. Tsiganov, , Journal of Physics A: Math. Theor. v.40, pp. 6395-6406, 2007. A.V. Tsiganov, , J. Phys. A: Math. Theor., v.41, 315212 (12pp), 2008. A. V. Tsiganov, , Regular and Chaotic Dynamics, v.13(3), p.191-203, 2008. A.V. Tsiganov, , Journal of Mathematical Sciences, v.168, n.8, p.901-911, 2010. A.V. Tsiganov, , Regular and Chaotic Dynamics, v.15, n.6, p. 657-667, 2010. A.V. Tsiganov, , arXiv:1006.3914, 2010. F. Turiel, [*Structures bihamiltoniennes sur le fibré cotangent*]{}, C. R. Acad. Sci. Paris Sér. I Math. [**315**]{} (1992), 1085–1088. G. Valent, , Commun. Math. Phys., v.299, p.631-649, 2010. A.V. Vershilov, A.V. Tsiganov, , J. Phys. A: Math. Theor. v.42, 105203 (12pp), 2009. H.M. Yehia, A.A. Elmandouh,, Regular and Chaotic Dynamics, v.13(1), pp. 56 - 69, 2008.
---
abstract: |
  An exact solution of the nuclear spherical mean-field plus orbit-dependent non-separable pairing model with two non-degenerate $j$-orbits is presented. The extended one-variable Heine-Stieltjes polynomials associated with the Bethe ansatz equations of the solution are determined, whose sets of zeros give the solution of the model and can be determined relatively easily. A comparison of the solution to that of the standard pairing interaction, with constant interaction strength among pairs in any orbit, is made. It is shown that the overlaps of eigenstates of the model with those of the standard pairing model are always large, especially for the ground and the first excited state. However, the quantum phase crossover in the non-separable pairing model cannot be accounted for by the standard pairing interaction.

  [**Keywords:**]{} Non-separable pairing interaction; exactly solvable models; Bethe ansatz.
author:
- 'Feng Pan[^1]'
- Shuli Yuan
- Yingwen He
- Yunfeng Zhang
- Siyu Yang
- 'J. P. Draayer'
title: |
  An exact solution of the spherical mean-field plus orbit-dependent\
  non-separable pairing model with two non-degenerate $j$-orbits
---

[**1. Introduction:**]{}  Pairing correlations seem evident in various quantum many-body systems [@bar; @ran; @coop; @gomes]. It has been shown that pairing interactions are key to elucidating ground-state and low-energy spectroscopic properties of nuclei [@Belyaev; @Ring; @Hasegawa]. Though the Bardeen-Cooper-Schrieffer (BCS) [@bar] and the Hartree-Fock-Bogolyubov (HFB) approximations provide simple and clear pictures [@Belyaev; @PN; @ma], tremendous efforts have been made in finding exact solutions to the problem [@dans; @cov; @bi; @zeng; @mol; @Volya].
It is known that the spherical or deformed mean-field plus the standard (equal-strength) pairing interaction can be solved exactly by using the Gaudin-Richardson method [@gau; @Ri; @duk], which can now be done more easily by using the extended Heine-Stieltjes polynomial approach [@pan0; @guan1; @guan2; @qi]. The separable pairing problem was studied in [@pan3], in which the single-particle energies are all degenerate. The separable pairing interaction with two non-degenerate levels was analyzed in [@ba]; the solution of a special case with multiple non-degenerate levels was given in [@Rom; @claeys; @pan4], while the general case was analyzed in [@pan5]. In this work, it will be shown that the orbit-dependent non-separable pairing interaction among valence nucleons over two non-degenerate orbits can also be solved analytically.

[**2. The model and exact solution:**]{}  The Hamiltonian of a spherical mean-field plus orbit-dependent non-separable pairing model (NSPM) with two non-degenerate $j$-orbits can be written as $$\begin{aligned} \label{H} \hat{H} =\sum_{t}^{p}\epsilon_{{t}}\,\hat{N}_{j_{t}}+\hat{H}_{\rm P}= \sum_{t}^{p}\epsilon_{{t}}\,\hat{N}_{j_{t}}-\sum_{1\leq t,t'\leq p}g_{{t},{t'}}\,S^{+}_{j_{t}}S^{-}_{j_{t'}},\end{aligned}$$ where $p=2$ is the total number of orbits considered above a closed or sub-closed shell, $\{\epsilon_{{t}}\}$ ($t=1,~2$) are the single-particle energies generated from a mean-field theory with $\epsilon_{{1}}\neq \epsilon_{{2}}$, $\hat{N}_{j}=\sum_{m}a^{\dagger}_{jm}a_{jm}$ and $S_{j}^{+}=\sum_{m>0}(-1)^{j-m}a^{\dagger}_{jm}a^{\dagger}_{j-m}$, in which $a^{\dagger}_{jm}$ ($a_{jm}$) is the creation (annihilation) operator for a nucleon with angular momentum quantum number $j$ and projection $m$, and $\{g_{{t},{t'}}\}$ ($t,t'=1,2$) are the non-separable pairing interaction parameters, which are all assumed to be real and must be symmetric with $g_{{1},{2}}=g_{{2},{1}}$.
The set of local operators $\{{S}^{-}_{j_{t}},~{S}^{+}_{j_{t}},~ \hat{N}_{j_{t}}\}$ ($t=1,2$), where $S^{-}_{j_{t}}=(S^{+}_{j_{t}})^{\dag}$, generate two copies of [an SU(2) algebra that satisfies]{} the commutation relations $ [\hat{N}_{j_{t}}/2,~{S}^{-}_{j_{t'}}]=-\delta_{tt'}{S}^{-}_{j_{t}},~ [\hat{N}_{j_{t}}/2,~{S}^{+}_{j_{t'}}]=\delta_{tt'}{S}^{+}_{j_{t}},~ [{S}^{+}_{j_{t}},~{S}^{-}_{j_{t'}}]=2\delta_{tt'}S^{0}_{j_{t}}, $ where $S^{0}_{j_{t}}=(\hat{N}_{j_{t}}-\Omega_{{t}})/2$ with $\Omega_{{t}}=j_{t}+1/2$. As adopted in the Gaudin-Richardson approach  [@gau; @Ri; @duk] for the standard pairing model (SPM), let $$\label{2} S^{+}(x)=\sum_{t}^{2}{1\over{2\epsilon_{{t}}-x}}S_{j_{t}}^{+},$$ where $x$ is the spectral parameter to be determined. According to the commutation relations of the generators of the two copies of [the]{} SU(2) algebra, we have $$\label{3} [\sum_{t}\epsilon_{{t}}\hat{N}_{j_{t}},~S^{+}(x)]= \sum_{t}{2\epsilon_{{t}}\over{2\epsilon_{{t}}-x}}S^{+}_{j_{t}}= S^{+}+x\,S^{+}(x),$$ where $S^{+}=\sum_{t} S^{+}_{j_{t}}$, and $$\label{4} [\hat{H}_{\rm P},~S^{+}(x)]= \sum_{t',t}g_{{t'}, {t}}S^{+}_{j_{t'}}{2S_{j_{t}}^{0}\over{2\epsilon_{{t}}-x}},$$ $$\label{5} [[\hat{H}_{\rm P},~S^{+}(x)], ~S^{+}(y)]= 2\sum_{t',t}g_{{t'},{t}}{1\over{(2\epsilon_{{t}}-x) (2\epsilon_{{t}}-y)}}S^{+}_{j_{t'}}S_{j_{t}}^{+}.$$ The $k$-pair eigenvectors of (\[H\]) [can]{} be still written as the Gaudin-Richardson form with $$\label{6} \vert \zeta,~k;JM\rangle=\prod_{\rho}^{k}S^{+}(x^{(\zeta)}_{\rho}) \vert JM\rangle,$$ where $\zeta$ labels the $\zeta$-th set of solution $\{x^{(\zeta)}_{1},\cdots,x^{(\zeta)}_{k}\}$. 
If the seniority number of the $t$-th orbit is $\nu_{{t}}$, the pairing vacuum states of these two orbits are denoted as $\vert \nu_{{t}}\eta_{{t}}J_{t}M_{t}\rangle$ satisfying $S^{-}_{j_{t}}\vert \nu_{j_{t}}\eta_{{t}}J_{t}M_{t}\rangle=0$, where $J_{{t}}$ and $M_{{{t}}}$ are the angular momentum quantum number and that of its third component, respectively, and $\eta_{{t}}$ is the multiplicity label needed to distinguish different possible ways of coupling $\nu_{{t}}$ particles to the angular momentum $J_{{t}}$. Thus, a pairing vacuum state of a two $j$-orbit system with the total seniority number $\nu=\nu_{{1}}+\nu_{{2}}$ and the total angular momentum $J$ can be expressed as $\vert JM\rangle\equiv\vert \nu_{{1}}\eta_{{1}}, \nu_{{2}}\eta_{{2}};(J_{{{1}}}\otimes J_{{{2}}})JM\rangle$. Hence, $\vert JM\rangle$ satisfies $S^{-}_{j_{t}}\vert JM\rangle=0$ for $t=1,2$, which is used in (\[6\]). To solve the eigen-equation of (\[H\]) with the ansatz (\[6\]), one can calculate commutators of $\hat{H}$ with the pairing operators $S^{+}(x^{(\zeta)}_{\rho})$, as was done in Richardson’s work on solving the SPM [@Ri; @duk]. Since (\[H\]) only contains one- and two-body interaction terms, the $q$-fold commutators $[\cdots[\hat{H},S^{+}(x^{(\zeta)}_{\rho_{1}})],\cdots,S^{+}(x^{(\zeta)}_{\rho_{q-1}})], S^{+}(x^{(\zeta)}_{\rho_{q}})]$ vanish when $q\geq3$. Namely, one only needs to calculate single and double commutators of $\hat{H}$ with the operators $S^{+}(x^{(\zeta)}_{\rho})$.
Since we use the pairing operator (\[2\]) to construct the eigen-vectors (\[6\]), the commutator of the one-body mean-field term of (\[H\]) with $S^{+}(x^{(\zeta)}_{\rho})$ is given by (\[3\]), while (\[4\]) [can]{} be expressed in terms of the collective operators $S^{+}(x)$ and $S^{+}$ appearing on the right-hand-side of (\[3\]) when the commutator is applied to the vacuum state $\vert JM\rangle$ with $$\begin{aligned} \label{7} [\hat{H}_{\rm P},~S^{+}(x)]\vert JM\rangle= \sum_{t',t}g_{{t'},{t}}\,S^{+}_{j_{t'}}{2S_{j_{t}}^{0}\over{2\epsilon_{{t}}-x}}\vert JM\rangle= (\alpha(x)\, S^{+}+\beta(x)\, S^{+}(x))\vert JM\rangle.\end{aligned}$$ After solving the above binomial equations of the local operators $S^{+}_{j_{1}}$ and $S^{+}_{j_{2}}$, one obtains $$\begin{aligned} \label{9}\nonumber &\alpha(x)=-{(x-2\epsilon_{{2}})\left((x-2\epsilon_{{1}})g_{1,1}-(x-2\epsilon_{{2}})g_{1,2}\right) (\Omega_{{1}}-\nu_{{1}})+ (x-2\epsilon_{{1}})\left((x-2\epsilon_{{1}})g_{1,2}-(x-2\epsilon_{{2}})g_{2,2}\right)(\Omega_{{2}}-\nu_{{2}}) \over{ 2(x-2\epsilon_{{1}})(x-2\epsilon_{{2}})(\epsilon_{{1}}-\epsilon_{{2}})}},\\ &\beta(x)=-{(x-2\epsilon_{{2}})(g_{1,1}-g_{1,2})(\Omega_{{1}}-\nu_{{1}})+ (x-2\epsilon_{{1}})(g_{1,2}-g_{2,2})(\Omega_{{2}}-\nu_{{2}})\over{ 2(\epsilon_{{1}}-\epsilon_{{2}})}},\end{aligned}$$ where the condition $g_{2,1}=g_{1,2}$ is used. It is obvious that (\[9\]) also assumes $\epsilon_{{1}}\neq\epsilon_{{2}}$, which is valid for non-degenerate cases. It is clear that the expression shown on the right-hand-side of (\[7\]) is impossible when the number of orbits $p\geq3$. For the standard pairing interaction with $g_{t,t'}=G$ $\forall~t,t'$, (\[7\]) becomes the commutators shown in Richardson’s work [@Ri; @duk] with $\beta(x)=0$. 
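Acting with (\[7\]) on $\vert JM\rangle$ and matching the coefficients of $S^{+}_{j_{1}}\vert JM\rangle$ and $S^{+}_{j_{2}}\vert JM\rangle$ gives the $2\times2$ linear system $-\sum_{t}g_{t',t}(\Omega_{t}-\nu_{t})/(2\epsilon_{t}-x)=\alpha(x)+\beta(x)/(2\epsilon_{t'}-x)$ for $t'=1,2$. As a numerical sketch (all parameter values below are arbitrary test choices, with seniority zero), one can confirm that the closed forms (\[9\]) solve this system:

```python
# Arbitrary test parameters for the two-orbit NSPM (seniority nu_1 = nu_2 = 0)
E1, E2 = 1.0, 2.0          # epsilon_1, epsilon_2
O1, O2 = 10.0, 11.0        # Omega_1 - nu_1, Omega_2 - nu_2
g11, g22, g12 = 1.0, 0.8, 0.6

def alpha(x):
    # alpha(x) of Eq. (9) with nu_1 = nu_2 = 0
    num = ((x - 2*E2)*((x - 2*E1)*g11 - (x - 2*E2)*g12)*O1
         + (x - 2*E1)*((x - 2*E1)*g12 - (x - 2*E2)*g22)*O2)
    return -num/(2*(x - 2*E1)*(x - 2*E2)*(E1 - E2))

def beta(x):
    # beta(x) of Eq. (9) with nu_1 = nu_2 = 0
    return -((x - 2*E2)*(g11 - g12)*O1 + (x - 2*E1)*(g12 - g22)*O2)/(2*(E1 - E2))

x = 0.37  # arbitrary spectral parameter, away from 2*epsilon_t
lhs1 = -(g11*O1/(2*E1 - x) + g12*O2/(2*E2 - x))  # coefficient of S^+_{j_1}|JM>
lhs2 = -(g12*O1/(2*E1 - x) + g22*O2/(2*E2 - x))  # coefficient of S^+_{j_2}|JM>
ok = (abs(lhs1 - (alpha(x) + beta(x)/(2*E1 - x))) < 1e-9
  and abs(lhs2 - (alpha(x) + beta(x)/(2*E2 - x))) < 1e-9)
print(ok)  # True
```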
Similarly, the double commutator $S^{+}(x,y)=[[\hat{H}_{\rm P},~S^{+}(x)], ~S^{+}(y)]$ given in (\[5\]) is a homogenous binomial of degree $2$ in $S^{+}_{j_{t}}$ with $t=1,2$ for the two $j$-orbit case, which, therefore, can be expressed in terms of $3$ independent terms. Hence, similar to the commutators shown in the SPM, one [can]{} write (\[5\]) as $$\label{10} S^{+}(x,y)= 2\sum_{t',t}g_{{t'},{t}}\,{1\over{(2\epsilon_{{t}}-x) (2\epsilon_{{t}}-y)}}S^{+}_{j_{t'}}S_{j_{t}}^{+}=a(x,y)\, S^{+}S^{+}(x)+b(x,y)\,S^{+}S^{+}(y)+ c(x,y)\,S^{+}(x)S^{+}(y),$$ which expressed in terms of $S^{+}$, $S^{+}(x)$, and $S^{+}(y)$ is only possible for two $j$-orbit case. For a system with $p$ $j$-orbits, $p(p+1)/2$ terms are needed on the right-hand-side of (\[10\]). For example, six terms on the right-hand-side of (\[10\]) are needed for [the]{} three $j$-orbit case. Hence, though it is possible to solve a multi $j$-orbit system by using this procedure, the results will be very complicated with $p$ variables for a two-pair state. After comparing the coefficients of $S^{+}_{j_{t}}S^{+}_{j_{t'}}$ with the same $t$ and $t'$ on both sides of (\[10\]), one gets $$\begin{aligned} \label{12}\nonumber &a(x,y)={1\over{2(x-y)(\epsilon_{{1}}-\epsilon_{{2}})^{2}}}F(x,y),~~b(x,y)=a(y,x),\\ \nonumber &c(x,y)={1\over{2(\epsilon_{{1}}-\epsilon_{{2}})^{2}}}\left( (x+y)(2\epsilon_{{2}}(g_{1,2}-g_{1,1})+2\epsilon_{{1}}(g_{1,2}-g_{2,2}))\right.+\\ &\left. x\,y\,(g_{1,1}+g_{2,2}-2g_{1,2})+4\epsilon_{{2}}^{2}(g_{1,1}-g_{1,2})+4\epsilon_{{1}}^{2}(g_{2,2}-g_{1,2})\right),\end{aligned}$$ where $$\begin{aligned} \label{13}\nonumber &F(x,y)=x\,( 2\epsilon_{{1}}(g_{1,1}-g_{1,2})+2\epsilon_{{2}}(g_{2,2}-g_{1,2}))+ y\,(2\epsilon_{{2}}(g_{1,1}-g_{1,2})+2\epsilon_{{1}}(g_{2,2}-g_{1,2}))+\\ &x\,y\,(2g_{1,2}-g_{1,1}-g_{2,2})+ 4g_{1,2}(\epsilon_{{1}}^{2}+\epsilon_{{2}}^{2})-4\epsilon_{{1}}\epsilon_{{2}} (g_{1,1}+g_{2,2}),\end{aligned}$$ and $c(x,y)$ is obviously symmetric in $x$ and $y$. Using Eqs. 
(\[3\]) ,(\[7\]), and (\[10\]), one can directly check that $$\begin{aligned} \label{14} &\sum_{t}\epsilon_{t}\,\hat{N}_{j_{t}}\vert \zeta,~k;JM\rangle= \sum_{i}^{k}S^{+}\prod_{\rho\,(\neq i)}^{k}S^{+}(x^{(\zeta)}_{\rho}) \vert JM\rangle+ \sum_{i}^{k}x_{i}^{(\zeta)}\prod_{\rho}^{k}S^{+}(x^{(\zeta)}_{\rho}) \vert JM\rangle\end{aligned}$$ and $$\begin{aligned} \label{15}\nonumber &\hat{H}_{\rm P}\vert \zeta,~k;JM\rangle=\sum^{k}_{i} \alpha(x^{(\zeta)}_{i})\,S^{+}\prod_{\rho\,(\neq i)}^{k}S^{+}(x^{(\zeta)}_{\rho}) \vert JM\rangle +\sum^{k}_{i}\beta(x^{(\zeta)}_{i})\prod_{\rho}^{k}S^{+}(x^{(\zeta)}_{\rho}) \vert JM\rangle +\\ &\sum_{i}^{k}\sum^{k}_{i'\,(\neq i)}a(x_{i'}^{(\zeta)},x_{i}^{(\zeta)} )\,S^{+}\prod_{\rho\,(\neq i)}^{k}S^{+}(x^{(\zeta)}_{\rho}) \vert JM\rangle+ \sum^{k}_{i}\sum^{k}_{i'=i+1}c(x_{i}^{(\zeta)},x_{i'}^{(\zeta)} )\prod_{\rho}^{k}S^{+}(x^{(\zeta)}_{\rho}) \vert JM\rangle.\end{aligned}$$ With (\[14\]) and (\[15\]), one can prove that the eigen-equation $\hat{H}\vert \zeta, k;JM\rangle=E^{(\zeta)}_{k}\vert \zeta,k;JM\rangle$ is fulfilled if and only if $$\label{12} 1+\alpha(x^{(\zeta)}_{i})+\sum^{k}_{i'\,(\neq i)}a(x_{i'}^{(\zeta)},x_{i}^{(\zeta)} )=0 ~~{\rm for}~~i=1,2,\cdots k,$$ with the corresponding eigen-energy $$\begin{aligned} \label{17}\nonumber &E^{(\zeta)}_{k}=\sum_{t=1}^{p}\epsilon_{{t}}\nu_{{t}}+ \sum_{i=1}^{k}\left(x_{i}^{(\zeta)}+ \beta(x^{(\zeta)}_{i})+\sum^{k}_{i'=i+1}c(x_{i}^{(\zeta)},x_{i'}^{(\zeta)} )\right)=\\\nonumber & \sum_{t=1}^{p}\epsilon_{{t}}\nu_{{t}}+\left( {(g_{1,1}-g_{1,2})(\Omega_{{1}}-\nu_{{1}})\epsilon_{2}\over{\epsilon_{1}-\epsilon_{2}}}+ {(g_{1,2}-g_{2,2})(\Omega_{{2}}-\nu_{{2}})\epsilon_{1}\over{\epsilon_{1}-\epsilon_{2}}} +{\epsilon_{1}^2(g_{2,2}-g_{1,2})+\epsilon_{2}^2(g_{1,1}-g_{1,2})\over{ (\epsilon_{1}-\epsilon_{2})^{2}}}(k-1)\right)k+\\\nonumber &\left( 1-{(g_{1,1}-g_{1,2})(\Omega_{{1}}-\nu_{{1}})\over{2\epsilon_{1}-2\epsilon_{2}}}- 
{(g_{1,2}-g_{2,2})(\Omega_{{2}}-\nu_{{2}})\over{2\epsilon_{1}-2\epsilon_{2}}} +{\epsilon_{2}(g_{1,2}-g_{1,1})+\epsilon_{1}(g_{1,2}-g_{2,2})\over{ (\epsilon_{1}-\epsilon_{2})^{2}}}(k-1)\right)\sum_{i=1}^{k}x_{i}^{(\zeta)}+\\ &{g_{1,1}+g_{2,2}-2g_{1,2}\over{4(\epsilon_{1}-\epsilon_{2})^{2}}}\left( (\sum_{i=1}^{k}x_{i}^{(\zeta)})^{2}-\sum_{i=1}^{k}(x_{i}^{(\zeta)})^{2}\right),\end{aligned}$$ where $\sum_{t=1}^{p}\epsilon_{{t}}\nu_{{t}}$ is contributed from particles in the pairing vacuum. One can easily check that, when $g_{t,t'}=G$ $\forall~t,t'$, $\alpha(x)=-G\sum_{t}(\Omega_{{t}}-\nu_{{t}})/({2\epsilon_{{t}}-x})$, $a(x,y)=2G{/{(x-y)}}$, $\beta(x)=c(x,y)=0$, with which (\[12\]) and (\[17\]) become the Bethe ansatz equations and the corresponding eigen-energy of the SPM with $E^{(\zeta)}_{k}=\sum_{i=1}^{k}x_{i}^{(\zeta)}$ known previously [@Ri; @duk]. Thus, the solution provided by (\[6\]), (\[12\]), and (\[17\]) includes the standard and separable pairing models with two non-degenerate $j$-orbits as special cases, though the form of the eigenstates shown in (\[6\]) for the separable pairing case with $g_{t,t'}=g_{t}g_{t'}$, where $g_{t}$ ($t=1,\cdots,p$) is a set of real parameters, looks quite different from that used previously [@ba; @Rom; @claeys; @pan4]. It should be pointed out that (\[12\]) also implies $g_{1,2}\neq0$: there will be no solution of (\[12\]) when $g_{1,2}=0$. Actually, similar to the case with no pairing interaction, a product of the single-particle states is an eigen-state of (\[H\]) when $g_{1,2}=0$. Hence, $g_{1,2}\neq0$ is assumed. According to the Heine-Stieltjes correspondence [@pan0; @guan1], the zeros $\{x_{i}^{(\zeta)}\}$ of the extended Heine-Stieltjes polynomials $y_{k}(x)$ of degree $k$ are roots of Eq.
(\[12\]), which should satisfy the following second-order Fuchsian equation: $$\label{18} A(x)y_{k}^{\prime\prime}(x)+B(x,k)y^{\prime}_{k}(x)-V(x,k)y_{k}(x)=0.$$ Here, $$\label{19} A(x)={1\over{2}}(x^{2}F_{12}+x\,(F_{1}+F_{2})+F_{0})\prod_{t=1}^{2}(2\epsilon_{{t}}-x)$$ is a polynomial of degree $4$, in which $$\begin{aligned} \label{20} &F_{1}={\epsilon_{1}(g_{1,1}-g_{1,2})+\epsilon_{2}(g_{2,2}-g_{1,2})\over{(\epsilon_{1}-\epsilon_{2})^{2}}}, ~F_{2}={\epsilon_{2}(g_{1,1}-g_{1,2})+\epsilon_{1}(g_{2,2}-g_{1,2})\over{(\epsilon_{1}-\epsilon_{2})^{2}}}, ~F_{12}={2g_{1,2}-g_{1,1}-g_{2,2}\over{2(\epsilon_{1}-\epsilon_{2})^{2}}}, ~F_{0}={2g_{1,2}(\epsilon_{1}^{2}+\epsilon_{2}^{2}) -2\epsilon_{1}\epsilon_{2}(g_{1,1}+g_{2,2})\over{(\epsilon_{1}-\epsilon_{2})^{2}}},\end{aligned}$$ the polynomial $B(x,k)$ of degree $3$ is given as $$\label{21} B(x,k)/A(x)={2\over{x^{2}F_{12}+x\,(F_{1}+F_{2})+F_{0}}}\left( \sum_{t=1}^{2} {\alpha_{t}^{(1)}+ \alpha_{t}^{(2)}\,x\over{2\epsilon_{{t}}-x}}-(F_{1}+F_{12}\,x)\,(k-1)-1\right),$$ where $$\begin{aligned} \label{22}\nonumber &\alpha_{1}^{(1)}={(\epsilon_{1}g_{1,1}-\epsilon_{2}g_{1,2})(\Omega_{{1}}-\nu_{{1}}) \over{\epsilon_{1}-\epsilon_{2}}}, ~\alpha_{1}^{(2)}={(g_{1,2}-g_{1,1})(\Omega_{{1}}-\nu_{{1}}) \over{2\epsilon_{1}-2\epsilon_{2}}},\\ &\alpha_{2}^{(1)}={(\epsilon_{1}g_{1,2}-\epsilon_{2}g_{2,2})(\Omega_{{2}}-\nu_{{2}}) \over{\epsilon_{1}-\epsilon_{2}}}, ~\alpha_{2}^{(2)}={(g_{2,2}-g_{1,2})(\Omega_{{2}}-\nu_{{2}}) \over{2\epsilon_{1}-2\epsilon_{2}}},\end{aligned}$$ and $V(x,k)$ is a Van Vleck polynomial of degree $2$, which is determined according to Eq. (\[18\]). Therefore, the polynomial approach for the SPM proposed in [@guan1; @guan2] applies to this case as well. For a given number of pairs $k$, the $k$ zeros $\{x_{i}^{(\zeta)}\}$ of $y_{k}(x)$ provide a solution of (\[12\]) with the corresponding eigen-energy given by (\[17\]).

[**3.
A simple analysis of the model:**]{}  To demonstrate the use of the solution, the validity of the SPM, in which only one overall pairing interaction strength can be adjusted, is analyzed. We consider $5$ pairs in the NSPM with $\epsilon_{1}=1$ MeV and $\epsilon_{2}=2$ MeV, $j_{1}=19/2$ and $j_{2}=21/2$, with which each orbit can accommodate $5$ pairs. The on-site pairing interaction parameters $g_{1,1}=g_{2,2}=1$ MeV are fixed. We calculated the pair excitation energies of the NSPM for several values of $g_{1,2}={g}$, which are presented in Table \[t1\]. Then, the overall pairing interaction strength of the SPM is adjusted according to the ground-state energy of the NSPM for each case. Though the pairing excitation energies of the SPM are about $2$ MeV different from the corresponding ones of the NSPM, as shown in Table \[t1\], the overlap-square of each eigenstate of the NSPM with the corresponding one of the SPM, $\eta(\zeta)=\vert\langle\zeta\vert\zeta\rangle_{\rm SP}\vert^2$, calculated in this way is always greater than $94\%$, where $\vert\zeta\rangle\equiv\vert\zeta,k=5;\,0\,0\rangle$ is obtained according to (\[6\]) for each case, while $\vert\zeta\rangle_{\rm SP}$ is the corresponding eigen-state of the SPM. The results of the overlaps show that the SPM seems to be a good approximation to the NSPM. In fact, with increasing pairing interaction strength $g$ between nucleon pairs from different orbits, the system undergoes a phase crossover from the localized normal phase, mainly determined by the pure mean field and the on-site pairing interaction strengths $g_{t,t}$ ($t=1,2$) among nucleon pairs within the same orbits, to the delocalized superconducting phase, for which there are a few effective order parameters.
Here we calculate the occupation probability of nucleon pairs in the $j_{1}$ orbit at the $\zeta$-th excited state defined by $$\label{23} \rho(j_{1},\zeta)={1\over{k}}\langle\zeta\vert S_{j_{1}}^{+}\frac{\partial}{\partial S_{j_{1}}^{+}} \vert\zeta\rangle$$ for $\zeta=1$ and $\zeta=2$. As clearly shown in Fig. \[f1\], the ground-state (first-excited-state) occupation probability of the NSPM decreases (increases) noticeably with increasing $g$ around $g\sim0.05$–$0.1$ MeV, and there is a crossing point around $g\sim0.21$ MeV. In the SPM, however, the occupation probability of the ground state is always slightly smaller than that of the first excited state, which is opposite to the result of the NSPM when $g$ is smaller than the value at the crossing point. With the overall pairing interaction strength fitted to the ground-state energy of the NSPM, both occupation probabilities of the SPM gradually decrease with increasing $g$ and approach those of the NSPM in the strong-$g$ limit. Therefore, the SPM is a good approximation to the NSPM only when the pairing interaction among nucleon pairs in different orbits is sufficiently strong. Nevertheless, the SPM [cannot]{} account for the actual quantum phase crossover when the pairing interaction strengths between different orbits are relatively weak and differ from those within the same orbit, as required, for example, in the $ds$- and $fp$-shell nuclei [@35; @36]. Moreover, the on-site pairing interaction strengths $g_{t,t}$ can also change the actual ordering of the single-particle energies. For example, when $g_{2,2}$ is sufficiently greater than $g_{1,1}$, the ground state of the system may be dominated by the nucleon pairs of the $j_{2}$-orbit even though $\epsilon_{2}$ is greater than $\epsilon_{1}$, which may be used to elucidate the inversion of the single-particle energy ordering of a shell model. Obviously, these phase-transition-related issues [cannot]{} be described by the SPM, for which the NSPM should be adopted. 
                                     $0^{+}_{1}$   $0^{+}_{2}$   $0^{+}_{3}$   $0^{+}_{4}$   $0^{+}_{5}$   $0^{+}_{6}$
  ----------------- --------------- ------------- ------------- ------------- ------------- ------------- -------------
  $\delta g=-0.50$   $E^{(\zeta)}$   $-48.95$      $-36.66$      $-26.23$      $-17.30$      $-9.90$       $-4.96$
                     $\eta(\zeta)$   99.600%       98.989%       97.968%       96.600%       95.370%       94.548%
  $\delta g=-0.25$   $E^{(\zeta)}$   $-59.35$      $-42.61$      $-28.02$      $-15.10$      $-3.86$       $4.93$
                     $\eta(\zeta)$   99.949%       99.870%       99.756%       99.653%       99.714%       99.714%
  $\delta g=0.25$    $E^{(\zeta)}$   $-80.28$      $-54.81$      $-31.93$      $-10.96$      $8.34$        $25.66$
                     $\eta(\zeta)$   99.977%       99.946%       99.905%       99.883%       99.9287%      99.929%
  $\delta g=0.50$    $E^{(\zeta)}$   $-90.78$      $-60.98$      $-33.94$      $-8.91$       $14.49$       $36.12$
                     $\eta(\zeta)$   99.934%       99.842%       99.728%       99.674%       99.806%       99.818%

  : Excited level energies $E^{(\zeta)}$ (in MeV) of the NSPM and the overlap-square $\eta(\zeta)=\vert\langle\zeta\vert\zeta\rangle_{\rm SP}\vert^2$ of the pairing excited states with the corresponding ones of the SPM for $k=5$ pairs over the $j_{1}=19/2$ and $j_{2}=21/2$ orbits with single-particle energies $\epsilon_{1}=1$ MeV, $\epsilon_{2}=2$ MeV, and $g_{1,1}=g_{2,2}=1$ MeV, where $g_{1,2}=g$ and $\delta g=g-g_{1,1}$ (in MeV). The overall pairing strength in the SPM is adjusted to reproduce the ground-state energy of the NSPM for each case, with which the corresponding overlap $\eta(\zeta)$ is obtained. 
\[t1\] ![(Color online) The occupation probability of nucleon pairs in the $j_{1}$-orbit at the $\zeta$-th excited state for $\zeta=1$ and $\zeta=2$ as a function of $g_{1,2}=g$ (in MeV), with the other model parameters the same as those shown in the caption of Table \[t1\]. The solid curve represents the occupation probability at the ground state ($\zeta=1$) of the NSPM, the dashed curve is that of the first excited state ($\zeta=2$) of the NSPM, and the dotted lines, from bottom (red) to top (blue), are those of the ground state and the first excited state, respectively, in the SPM.[]{data-label="f1"}](occup.eps) [**4. Summary:**]{} In this work, it is shown that the nuclear spherical mean-field plus orbit-dependent non-separable pairing model with two non-degenerate $j$-orbits, like the standard and separable pairing models, is also exactly solvable. The solution of the model by the Bethe ansatz method is presented. The extended one-variable Heine–Stieltjes polynomials associated with the Bethe ansatz equations of the solution are determined. As an application of the solution, a comparison with the standard pairing interaction, which has a constant interaction strength among pairs in any orbit, is made via a concrete example. It is shown that the overlaps of eigenstates of the model with those of the standard pairing model are [always large]{}, especially for the ground and first excited states. However, the quantum phase crossover in the non-separable pairing model [cannot]{} be accounted for by the standard pairing interaction, for which the NSPM should be adopted. [**Acknowledgement:**]{}  [Support from the National Natural Science Foundation of China (11675071, 11747318), the U. S. National Science Foundation (OIA-1738287 and ACI-1713690), [U. S. Department of Energy (DE-SC0005248)]{}, the Southeastern Universities Research Association, the China-U. S. 
Theory Institute for Physics with Exotic Nuclei (CUSTIPEN) (DE-SC0009971), and the LSU–LNNU joint research program (9961) is acknowledged.]{} J. Bardeen, L. N. Cooper, J. R. Schrieffer, Phys. Rev. [**108**]{}, 1175 (1957). M. Randeria, J. M. Duan, L. Y. Shieh, Phys. Rev. Lett. [**62**]{}, 981 (1989). D. W. Cooper, J. S. Batchelder, M. A. Taubenblatt, J. Coll. Int. Sci. [**144**]{}, 201 (1991). K. K. Gomes, A. N. Pasupathy, A. Pushp, S. Ono, Y. Ando, and A. Yazdani, Nature [**447**]{}, 569-572 (2007). P. Ring and P. Schuck, *The Nuclear Many-Body Problem* (Springer-Verlag, Berlin, 1980). A. Bohr, B. R. Mottelson, and D. Pines, Phys. Rev. [**110**]{}, 936 (1958); S. T. Belyaev, Mat. Fys. Medd. K. Dan. Vidensk. Selsk. [**31**]{}, 11 (1959). M. Hasegawa and S. Tazaki, Phys. Rev. C [**47**]{}, 188 (1993). H. C. Pradhan, Y. Nogami, and J. Law, Nucl. Phys. A [**201**]{}, 357 (1973). H. J. Mang, Phys. Rep. [**18**]{}, 325 (1975). G. D. Dans and A. Klein, Phys. Rev. [**143**]{}, 735 (1966). A. Covello and E. Salusti, Phys. Rev. [**162**]{}, 859 (1967). M. Bishari, I. Unna, and A. Mann, Phys. Rev. C [**3**]{}, 1715 (1971). J. Y. Zeng, C. S. Cheng, Nucl. Phys. A [**405**]{}, 1 (1983); [**411**]{}, 49 (1984); [**414**]{}, 253 (1984). H. Molique and J. Dudek, Phys. Rev. C [**56**]{}, 1795 (1997). A. Volya, B. A. Brown, and V. Zelevinsky, Phys. Lett. B [**509**]{}, 37 (2001). A. K. Kerman and R. D. Lawson, Phys. Rev. [**124**]{}, 162 (1961). V. Zelevinsky and A. Volya, Physics of Atomic Nuclei [**66**]{}, 1781 (2003). M. Gaudin, J. Physique [**37**]{}, 1087 (1976). R. W. Richardson, Phys. Lett. [**3**]{}, 277 (1963); [**5**]{}, 82 (1963); R. W. Richardson and N. Sherman, Nucl. Phys. [**52**]{}, 221 (1964); [**52**]{}, 253 (1964). J. Dukelsky, S. Pittel, and G. Sierra, Rev. Mod. Phys. [**76**]{}, 643 (2004). F. Pan, L. Bao, L. Zhai, X. Cui, and J. P. Draayer, J. Phys. A: Math. Theor. [**44**]{}, 395305 (2011). X. Guan, K. D. Launey, M. Xie, L. Bao, F. Pan, J. P. Draayer, Phys. 
Rev. C [**86**]{}, 024313 (2012). X. Guan, K. D. Launey, M. Xie, L. Bao, F. Pan, J. P. Draayer, Comp. Phys. Commun. [**185**]{}, 2714 (2014). C. Qi and T. Chen, Phys. Rev. C [**92**]{}, 051304(R) (2015). F. Pan, J. P. Draayer, and W. E. Ormand, Phys. Lett. B [**422**]{}, 1 (1998). A. B. Balantekin and Y. Pehlivan, Phys. Rev. C [**76**]{}, 051001(R) (2007). S. M. A. Rombouts, J. Dukelsky, and G. Ortiz, Phys. Rev. B [**82**]{}, 224510 (2010). P. W. Claeys, S. De Baerdemacker, M. Van Raemdonck, and D. Van Neck, Phys. Rev. B [**91**]{}, 155102 (2015). L. Dai, F. Pan, and J. P. Draayer, Nucl. Phys. A [**957**]{}, 51 (2017). F. Pan, D. Zhou, L. Dai, and J. P. Draayer, Phys. Rev. C [**95**]{}, 034308 (2017). F. Nowacki, A. Poves, Phys. Rev. C [**79**]{}, 014310 (2009). M. Honma, T. Otsuka, B. A. Brown, T. Mizusaki, Phys. Rev. C [**69**]{}, 034335 (2004). [^1]: The corresponding author’s e-mail: daipan@dlut.edu.cn
--- abstract: 'We develop Cresson’s non-differentiable embedding to quantum problems of the calculus of variations and optimal control with time delay. Main results show that the dynamics of non-differentiable Lagrangian and Hamiltonian systems with time delays can be determined, in a coherent way, by the least-action principle.' author: - | [**Gastão S. F. Frederico${}^{a, b}$**]{} and [**Delfim F. M. Torres${}^{b}$**]{}\ [{gastao.frederico, delfim}@ua.pt]{}\ ${}^{a}$ Gregório Semedo University, Luanda, Angola\ ${}^{b}$ Center for Research and Development in Mathematics and Applications\ Department of Mathematics, University of Aveiro\ 3810-193 Aveiro, Portugal title: | This is a preprint of a paper whose final and definite form will be published in:\ Int. J. Difference Equ. ([http://campus.mst.edu/ijde]{}).\ Submitted Aug 24, 2012; Revised Nov 19, 2012; Accepted Nov 19, 2012.\ ------------------------------------------------------------------------ **A Non-Differentiable Quantum Variational Embedding in Presence of Time Delays** --- [**AMS Subject Classifications:** ]{}49K05, 49S05, 26A24. [**Keywords:** ]{}non-differentiability, scale calculus of variations, scale optimal control, Euler–Lagrange equations, embedding, coherence, time delay, Hamiltonian systems. Introduction ============ Lagrangian systems play a fundamental role in describing motion in mechanics. The importance of such systems is related to the fact that they can be derived via the least-action principle using differentiable manifolds [@ar]. Nevertheless, some important physical systems involve functions that are non-differentiable. A non-differentiable calculus was introduced in 1992 by Nottale [@Nottale:1992; @Nottale:1999]. A rigorous foundation for Nottale’s scale-relativity theory was recently given by Cresson [@CD:Cresson:2005; @CD:Cresson:2006; @Cresson:2011]. 
The calculus of variations developed in [@Cresson:2011] covers sets of non-differentiable curves by replacing the classical derivative with a new complex operator, known as the *scale derivative*. In [@alto; @alto1], Almeida and Torres obtain several Euler–Lagrange equations for variational functionals and isoperimetric problems of the calculus of variations defined on a set of Hölder curves. The embedding procedure, introduced for stochastic processes in [@MR2337677], can always be applied to Lagrangian systems [@MyID:228; @MyID:226]. In this work we prove that the embedding of Lagrangian and Hamiltonian systems with time delays, via the least-action principle, respects the principle of coherence. For the importance of variational and control systems with delays we refer the reader to [@MyID:231] and references therein. The article is organized as follows. A brief review of the quantum calculus of [@Cresson:2011], which extends the classical differential calculus to non-differentiable functions, is given in Section \[sec:2\]. In Section \[sec:cNT\] we discuss the non-differentiable embedding within the time delay formalism: Section \[sec:CV\] is devoted to the development of the non-differentiable embedding for variational problems with time delay, where a causal and coherent embedding is obtained by restricting the set of variations; in Section \[sec:OP\] we prove that the non-differentiable embedding of the corresponding Hamiltonian formalism is also coherent. The Quantum Calculus of Cresson {#sec:2} =============================== Let $\mathbb{X}^d$ denote the set $\mathbb{R}^{d}$ or $\mathbb{C}^{d}$, $d \in \mathbb{N}$, and $I$ be an open set in $\mathbb{R}$ with $[t_1,t_2]\subset I$, $t_1<t_2$. We denote by $\mathcal{G}\left(I,\mathbb{X}^d\right)$ the set of functions $f:I \rightarrow \mathbb{X}^d$ and by $\mathcal{C}^{0}\left(I,\mathbb{X}^d\right)$ the subset of functions of $\mathcal{G}\left(I,\mathbb{X}^d\right)$ that are continuous. 
Let $f\in \mathcal{C}^{0}\left(I, \mathbb{R}^{d}\right)$. For all $\epsilon>0$, the $\epsilon$-left and $\epsilon$-right quantum derivatives of $f$, denoted respectively by $\Delta_{\epsilon}^{-}f$ and $\Delta_{\epsilon}^{+}f$, are defined by $$\Delta_{\epsilon}^{-}f(t)=\frac{f(t)-f(t-\epsilon)}{\epsilon} \quad \text{ and } \quad \Delta_{\epsilon}^{+}f(t)=\frac{f(t+\epsilon)-f(t)}{\epsilon} \, .$$ The $\epsilon$-left and $\epsilon$-right quantum derivatives of a continuous function $f$ correspond to the classical derivative of the $\epsilon$-mean function $f_{\epsilon}^{\sigma}$ defined by $$f_{\epsilon}^{\sigma}(t)=\frac{\sigma}{\epsilon}\int_{t}^{t+\sigma\epsilon}f(s)ds\, , \quad \sigma=\pm \, .$$ Next we define an operator which generalizes the classical derivative. \[def:qd\] Let $f\in \mathcal{C}^{0}\left(I,\mathbb{R}^{d}\right)$. For all $\epsilon>0$, the $\epsilon$-scale derivative of $f$, denoted by $\frac{\square_{\epsilon}f}{\square t}$, is defined by $$\begin{gathered} \frac{\square_{\epsilon}f}{\square t} =\frac{1}{2}\left[\left(\Delta_{\epsilon}^{+}f +\Delta_{\epsilon}^{-}f\right) -i\left(\Delta_{\epsilon}^{+}f -\Delta_{\epsilon}^{-}f\right)\right],\end{gathered}$$ where $i$ is the imaginary unit. If $f$ is differentiable, we can take the limit of the scale derivative when $\epsilon$ goes to zero. We then obtain the classical derivative $\frac{df}{dt}$ of $f$. We also need to extend the scale derivative to complex-valued functions. Let $f\in \mathcal{C}^{0}\left(I,\mathbb{C}^{d}\right)$ be a continuous complex-valued function. For all $\epsilon>0$, the $\epsilon$-scale derivative of $f$, denoted by $\frac{\square_{\epsilon}f}{\square t}$, is defined by $$\begin{gathered} \frac{{\square}_{\epsilon}f}{{\square}t} =\frac{{\square}_{\epsilon}\textrm{Re}(f)}{\square t} +i\frac{\square_{\epsilon}\textrm{Im}(f)}{\square t} \, ,\end{gathered}$$ where $\textrm{Re}(f)$ and $\textrm{Im}(f)$ denote the real and imaginary part of $f$, respectively. 
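A minimal numerical sketch of these definitions (the code and function names are ours, not from the paper); it evaluates the finite-$\epsilon$ scale derivative of Definition \[def:qd\] before any extraction of the $\epsilon$-independent part:

```python
# Finite-eps quantum derivatives and the complex eps-scale derivative
# of Definition [def:qd] (illustrative numerics only).

def delta_minus(f, t, eps):
    """eps-left quantum derivative."""
    return (f(t) - f(t - eps)) / eps

def delta_plus(f, t, eps):
    """eps-right quantum derivative."""
    return (f(t + eps) - f(t)) / eps

def scale_derivative(f, t, eps):
    """(1/2)[(D+ + D-) - i(D+ - D-)], cf. Definition [def:qd]."""
    dp, dm = delta_plus(f, t, eps), delta_minus(f, t, eps)
    return 0.5 * ((dp + dm) - 1j * (dp - dm))

# For the differentiable f(t) = t^2 one gets exactly 2 - i*eps at t = 1,
# which tends to the classical derivative f'(1) = 2 as eps -> 0:
f = lambda t: t ** 2
for eps in (0.1, 0.01, 0.001):
    print(eps, scale_derivative(f, 1.0, eps))
```

As expected, the imaginary part, which measures the left/right asymmetry of $f$, vanishes in the limit, recovering $\frac{df}{dt}$.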
In Definition \[def:qd\], the $\epsilon$-scale derivative depends on $\epsilon$, which is a free parameter related to the smoothing order of the function. This causes difficulties in applications to physics, where one is interested in equations that do not depend on an extra parameter. To solve this problem, the authors of [@Cresson:2011] introduced a procedure to extract information independent of $\epsilon$ but related to the mean behavior of the function. Let ${C^0_{conv}}\left(I\times (0,1),\mathbb{R}^{d}\right)\subseteq {C^0}\left(I\times (0,1),\mathbb{R}^{d}\right)$ be such that for any function $f \in {C^0_{conv}}\left(I\times (0,1),\mathbb{R}^{d}\right)$ the limit $\lim_{\epsilon\to 0}f(t,\epsilon)$ exists for any $t\in I$, and let $E$ be a complementary space of ${C^0_{conv}}\left(I\times (0,1),\mathbb{R}^{d}\right)$ in ${C^0}\left(I\times (0,1),\mathbb{R}^{d}\right)$. We define the projection map $\pi$ by $$\begin{array}{lcll} \pi: & {C^0_{conv}}\left(I\times (0,1),\mathbb{R}^{d}\right) \oplus E & \to & {C^0_{conv}}(I\times \left(0,1),\mathbb{R}^{d}\right)\\ & f_{conv}+f_E & \mapsto & f_{conv} \end{array}$$ and the operator $\left< \cdot \right>$ by $$\begin{array}{lcll} \left< \cdot \right>: & {C^0}\left(I\times (0,1),\mathbb{R}^{d}\right) & \to & {C^0}\left(I,\mathbb{R}^{d}\right)\\ & f & \mapsto & \left< f \right>: t\mapsto \displaystyle\lim_{\epsilon\to 0}\pi(f)(t,\epsilon)\,. \end{array}$$ We now introduce the quantum derivative of $f$ without dependence on $\epsilon$ [@Cresson:2011]. \[def:ourHD\] The quantum derivative of $f$ in the space $\mathcal{C}^{0}\left(I, \mathbb{R}^{d}\right)$ is given by the rule $$\label{eq:scaleDer} \frac{\Box f}{\Box t}=\left< \frac{{\Box_{\epsilon}}f}{\Box t} \right>.$$ The scale derivative has some nice properties. Namely, it satisfies a Leibniz rule and a Barrow rule. First let us recall the definition of an $\alpha$-Hölderian function. Let $f\in C^0\left(I, \mathbb{R}^{d}\right)$. 
We say that $f$ is $\alpha$-Hölderian, $0<\alpha<1$, if for all $\epsilon>0$ and all $t$, $t'\in I$ there exists a constant $c>0$ such that $|t-t'|\leqslant\epsilon$ implies $\|f(t)-f(t')\|\leqslant c\epsilon^{\alpha}$, where $\|\cdot\|$ is a norm in $\mathbb{R}^{d}$. The set of Hölderian functions of Hölder exponent $\alpha$ is denoted by $H^\alpha(I,\mathbb{R}^{d})$. In what follows, we will frequently use $\square$ to denote the scale derivative operator $\frac{{\square}}{{\square}t}$. \[theo:mult\] For $f\in H^\alpha\left(I, \mathbb{R}^{d}\right)$ and $g\in H^\beta\left(I, \mathbb{R}^{d}\right)$, with $\alpha+\beta>1$, one has $$\label{eq:mult} \square(f\cdot g)(t)=\square f(t) \cdot g(t)+f(t)\cdot\square g(t)\,.$$ For $f, g\in \mathcal{C}^1\left(I, \mathbb{R}^{d}\right)$, one recovers the classical Leibniz rule $(f\cdot g)'=f'\cdot g+f\cdot g'$. For convenience of notation, we sometimes write $(f\cdot g)^\square(t)= f^\square(t) \cdot g(t)+f(t)\cdot g^\square(t)$. \[Barrow\] Let $f\in \mathcal{C}^0([t_1,t_2],\mathbb{R})$ be such that $\Box f / \Box t$ is continuous and $$\label{nec_condition} \lim_{\epsilon\to0} \int_{t_{1}}^{t_{2}} \left(\frac{\Box_\epsilon f}{\Box t}\right)_E(t)dt=0.$$ Then, $$\int^{t_{2}}_{t_{1}} \frac{\Box f}{\Box t}(t)\, dt=f(t_{2})-f(t_{1}).$$ 
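A numerical sanity check of the Leibniz rule of Theorem \[theo:mult\] (our own illustration with smooth test functions; at finite $\epsilon$ the rule holds only up to terms that vanish as $\epsilon\to 0$, consistent with the extraction procedure above):

```python
# Check that the finite-eps scale derivative satisfies the Leibniz rule
# of Theorem [theo:mult] up to a residual that shrinks with eps.

def scale_derivative(f, t, eps):
    dp = (f(t + eps) - f(t)) / eps
    dm = (f(t) - f(t - eps)) / eps
    return 0.5 * ((dp + dm) - 1j * (dp - dm))

f = lambda t: t ** 2
g = lambda t: t ** 3
fg = lambda t: f(t) * g(t)

def leibniz_residual(t, eps):
    """|square(fg) - (square(f)*g + f*square(g))| at finite eps."""
    lhs = scale_derivative(fg, t, eps)
    rhs = scale_derivative(f, t, eps) * g(t) + f(t) * scale_derivative(g, t, eps)
    return abs(lhs - rhs)

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, leibniz_residual(1.0, eps))  # residual decreases with eps
```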
Let $\tau>0$, $k \in \mathbb{N}$. For convenience, we introduce the operators ${[\cdot]^{\square^k}_{\tau}}$ and ${[\cdot]^{k}_{\tau}}$ by $$[y]^{\square^k}_\tau(t)=\left(t, y(t),\square y(t),\ldots,\square^{k}y(t),(y \circ \rho^\tau)(t), (\square y\circ \rho^\tau)(t),\ldots, (\square^{k}y\circ \rho^\tau)(t) \right)$$ and $$[y]^{k}_\tau(t)=\left(t, y(t),y'(t),\ldots,y^{(k)}(t),(y\circ \rho^\tau)(t), (y'\circ \rho^\tau)(t), \ldots,(y^{(k)}\circ \rho^\tau)(t) \right).$$ When $k = 1$, we omit $k$ and use $\dot{y}$ to denote the derivative $y'$, that is: $$[y]_{\tau}^{\square}(t)=(t,y(t),\square y(t),y(t-\tau),\square y(t-\tau))$$ and $$[y]_{\tau}(t)=(t,y(t),\dot{y}(t),y(t-\tau),\dot{y}(t-\tau)).$$ Given a function $f :\mathbb{R} \times \left(\mathbb{C}^d\right)^{2(k+1)} \rightarrow \mathbb{C}$, we denote by $F^{k,\tau}$ the corresponding *evaluation operator* defined by $F^{k,\tau} = f[\cdot]^{k}_\tau$, that is, $$F^{k,\tau} : \begin{array}{lll} \mathcal{C}^{0}\left(I, \mathbb{C}^{d}\right) & \longrightarrow & \mathcal{C}^{0}\left(I, \mathbb{C}\right) \\ y & \longmapsto & t \mapsto f[y]^{k}_\tau(t)\, . \end{array}$$ Let $\mathbf{f}=\{ f_i \}_{i=0,\ldots ,n}$ and $\mathbf{g}=\{ g_i \}_{i=0,\ldots ,n}$ be a finite family of functions $f_i, g_i :\mathbb{R} \times \left(\mathbb{C}^d\right)^{2(k+1)} \rightarrow \mathbb{C}$, and $F^{k,\tau}_i$ and $G^{k,\tau}_i$, $i=1,\ldots ,n$, be the corresponding family of evaluation operators. We denote by $\mathrm{O}^{k,\tau}_{\mathbf{f}, \mathbf{g}}$ the differential operator $$\label{formop} \mathrm{O}^{k,\tau}_{\mathbf{f},\mathbf{g}} =\sum_{i=0}^n F^{k,\tau}_i \cdot \left({d^i \over dt^i } \circ G^{k,\tau}_i\right),$$ with the convention that $ \left ( {d\over dt} \right )^0$ is the identity mapping on $\mathbb{C}$. As before, we omit $k$ when $k = 1$: $\mathrm{O}^{\tau}_{\mathbf{f},\mathbf{g}} = \mathrm{O}^{1,\tau}_{\mathbf{f},\mathbf{g}}$. 
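The shift and evaluation operators above can be sketched directly (an illustrative implementation for $k=1$; the sample $f$ is our own choice):

```python
import math

def shift(tau):
    """Backward shift operator rho^tau: t -> t - tau."""
    return lambda t: t - tau

def evaluation_operator(f, tau, y, dy):
    """F^{tau}(y): t -> f[y]_tau(t) = f(t, y(t), y'(t), y(t-tau), y'(t-tau))."""
    rho = shift(tau)
    return lambda t: f(t, y(t), dy(t), y(rho(t)), dy(rho(t)))

# Sample f (our choice) evaluated along y = sin, with y' = cos supplied:
f = lambda t, q, dq, q_tau, dq_tau: dq ** 2 + q * q_tau
F = evaluation_operator(f, 0.5, math.sin, math.cos)
print(F(1.0))  # cos(1)^2 + sin(1)*sin(0.5)
```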
\[ndeo\] The non-differentiable embedding of , denoted by $\mbox{\rm Emb}_{\square} \left(\mathrm{O}^{k,\tau}_{\mathbf{f},\mathbf{g}}\right)$, is given by $$\mbox{\rm Emb}_{\square}\left(\mathrm{O}^{k,\tau}_{\mathbf{f},\mathbf{g}}\right) = \sum_{i=0}^n F^{\square^k,\tau}_i \cdot \left({\square^{i} \over \square t^i } \circ G^{\square^k,\tau}_i\right),$$ $F^{\square^k,\tau}_i = \mbox{\rm Emb}_{\square}(F^{k,\tau}_i) = f_i[\cdot]^{\square^k}_\tau$, ${\square^{i} \over \square t^i } = \mbox{\rm Emb}_{\square}\left({d^i \over dt^i }\right)$, $G^{\square^k,\tau}_i = \mbox{\rm Emb}_{\square}(G^{k,\tau}_i) = g_i[\cdot]^{\square^k}_\tau$. Embedding of Variational Problems with Time Delay {#sec:CV} ------------------------------------------------- The fundamental problem of the calculus of variations with time delay is to minimize $$\label{P} I^{\tau}[q] = \int_{t_{1}}^{t_{2}} L\left(t,q(t),\dot{q}(t),q(t-\tau),\dot{q}(t-\tau)\right) dt$$ subject to $q(t)=\delta(t)$, $t\in[t_{1}-\tau,t_{1}]$, and $q(t_2) = q_2$, where $t_{1}< t_{2}$ are fixed in $\mathbb{R}$, $\tau$ is a given positive real number such that $\tau<t_{2}-t_{1}$, $\delta$ is a given piecewise smooth function, and $q_2 \in \mathbb{R}^d$. We assume that admissible functions $q$ are such that both $q$ and $q\circ \rho^\tau$ belong to $\mathcal{C}^{1}\left(I,\mathbb{R}^{d}\right)$. Note that, with our notations, can be written as $$I^{\tau}[q] = \int_{t_{1}}^{t_{2}} L[q]_\tau(t) dt.$$ A variation of $q\in \mathcal{C}^{1}\left(I, \mathbb{R}^{d}\right)$ is another function of $ \mathcal{C}^{1}\left(I, \mathbb{R}^{d}\right)$ of the form $q + \varepsilon h$ with $\varepsilon$ a small number and $h \in \mathcal{C}^{1}\left(I,\mathbb{R}^{d}\right)$ such that $h(t)=0$ for $t \in[t_{1}-\tau,t_{1}] \cup \{t_2\}$. We say that $q$ is an *extremal* of functional if $$\frac{d}{d\varepsilon}\left.I^{\tau}[q + \varepsilon h]\right|_{\varepsilon = 0}=0$$ for any $h\in \mathcal{C}^{1}\left(I,\mathbb{R}^{d}\right)$. 
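Numerically, the delayed functional can be approximated by a simple quadrature once the history $\delta$ is supplied on $[t_{1}-\tau,t_{1}]$ (a sketch under an assumed midpoint discretization; the sample Lagrangian and data are ours, not from the paper):

```python
import math

def delayed_action(L, q, dq, delta, ddelta, t1, t2, tau, n=1000):
    """Midpoint-rule approximation of the delayed functional
    I^tau[q] = int L(t, q, q', q_tau, q'_tau) dt,
    with q(t) = delta(t) prescribed on [t1 - tau, t1]."""
    h = (t2 - t1) / n
    total = 0.0
    for i in range(n):
        t = t1 + (i + 0.5) * h
        td = t - tau
        q_tau = q(td) if td >= t1 else delta(td)
        dq_tau = dq(td) if td >= t1 else ddelta(td)
        total += L(t, q(t), dq(t), q_tau, dq_tau) * h
    return total

# Sample quadratic Lagrangian with a delay coupling (illustrative):
L = lambda t, q, dq, q_tau, dq_tau: 0.5 * dq ** 2 + q * q_tau
value = delayed_action(L, math.sin, math.cos, math.sin, math.cos,
                       t1=0.0, t2=1.0, tau=0.25)
print(value)  # about 0.5403
```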
A first idea to obtain a non-differentiable Lagrangian system with time delays is to embed directly the classical Euler–Lagrange equations with time delays. A function $q\in \mathcal{C}^{1}\left(I, \mathbb{R}^{d}\right)$ is an extremal of if and only if $$\label{EL1} \begin{cases} \frac{d}{dt}\left[L_{\dot{q}}[q]_{\tau}(t)+ L_{\dot{q}_{\tau}}[q]_{\tau}(t+\tau)\right]\\ \qquad=L_{q}[q]_{\tau}(t)+L_{q_{\tau}}[q]_{\tau}(t+\tau)\,, \quad t_{1}\leq t\leq t_{2}-\tau,\\ \tag{EL} \frac{d}{dt}L_{\dot{q}}[q]_{\tau}(t) =L_{q}[q]_{\tau}(t)\,, \quad t_{2}-\tau\leq t\leq t_{2}, \end{cases}$$ where $L_{\xi}(t,q,\dot{q},q_\tau,\dot{q}_\tau)$ denotes the partial derivative of $L(t,q,\dot{q},q_\tau,\dot{q}_\tau)$ with respect to $\xi \in \{q,\dot{q},q_\tau,\dot{q}_\tau\}$. The following theorem gives the non-differentiable embedding of the Euler–Lagrange equations with time delay . By $\mathcal{C}^{n}_{\square}\left(I, \mathbb{X}^{d}\right)$ we denote the set of functions $q$ such that both $q$ and $q \circ \rho^\tau$ belong to $\mathcal{C}^{0}\left(I, \mathbb{X}^{d}\right)$ as well as $\square^{i}q$ and $(\square^{i}q) \circ \rho^\tau$ for all $i=1,\ldots,n$. Let the Lagrangian $L$ be a $\mathcal{C}^{1}_{\square}\left(I, \mathbb{R}^{d}\right)$-function with respect to all its arguments, holomorphic with respect to $\dot{q}(t)$ and $\dot{q}(t-\tau)$, and real when $\dot{q}(t)$ and $\dot{q}(t-\tau)$ belong to $\mathbb{R}^{d}$. The non-differentiable embedded *Euler–Lagrange equations with time delay* associated to $L$ are given by $$\label{EL2} \begin{cases} \frac{\square}{\square t}\left[L_{\square q}[q]_{\tau}^{\square}(t)+ L_{\square q_{\tau}}[q]_{\tau}^{\square}(t+\tau)\right]\\ \qquad=L_{q}[q]_{\tau}^{\square}(t) +L_{q_{\tau}}[q]_{\tau}^{\square}(t+\tau),\,\,\, t_{1}\leq t \leq t_{2}-\tau,\\ \tag{$\square EL$} \frac{\square}{\square t} L_{\square q}[q]_{\tau}^{\square}(t) =L_{q}[q]_{\tau}^{\square}(t),\,\,\, t_{2}-\tau\leq t\leq t_{2}\,. 
\end{cases}$$ The Euler–Lagrange equations can be written in the equivalent form $$\mathrm{O}^{\tau}_{\mathbf{f},\mathbf{g}}(q)(t)=0, \quad t\in [t_{1}, t_{2}],$$ with $\mathbf{f}$ and $\mathbf{g}$ given by $$\mathbf{f}[q]_{\tau}(t) = \begin{cases} \left(-L_{q}[q]_{\tau}(t)-L_{q_{\tau}}[q]_{\tau}(t+\tau), 1\right) & \text{ if } t\in [t_{1},t_{2}-\tau]\\ \left(-L_{q}[q]_{\tau}(t), 1\right) & \text{ if } t\in [t_{2}-\tau, t_{2}] \end{cases}$$ and $$\mathbf{g}[q]_{\tau}(t) = \begin{cases} \left(1,L_{\dot{q}}[q]_{\tau}(t)+ L_{\dot{q}_{\tau}}[q]_{\tau}(t+\tau) \right) & \text{ if } t\in [t_{1},t_{2}-\tau]\\ \left(1,L_{\dot{q}}[q]_{\tau}(t) \right) & \text{ if } t\in [t_{2}-\tau, t_{2}]. \end{cases}$$ The intended conclusion follows now by a direct application of Definition \[ndeo\]. The Euler–Lagrange equations reduce to when the functions are differentiable. Another approach to obtain a non-differentiable Lagrangian system with time delays is to embed the Lagrangian, and then to apply the least-action principle. The non-differentiable embedding of functional is given by $$\label{P1} I_{\square}^{\tau}[q] = \int_{t_{1}}^{t_{2}} L\left(t,q(t),\square q(t),q(t-\tau),\square q(t-\tau)\right) dt = \int_{t_{1}}^{t_{2}} L[q]_{\tau}^{\square}(t) dt.$$ In contrast with the original problem, the admissible functions $q$ are now not necessarily differentiable: admissible functions $q$ for are those such that $q \in \mathcal{C}^{1}_{\square}\left(I,\mathbb{R}^{d}\right)$. Let $\alpha,\beta,\varepsilon \in \mathbb{R}$ be such that $0<\alpha,\beta<1$, $\alpha+\beta>1$ and $|\varepsilon| \ll 1$. A variation of $q\in H^{\alpha}\left(I, \mathbb{R}^{d}\right)$ is another function of $ H^{\alpha}\left(I, \mathbb{R}^{d}\right)$ of the form $q+ \varepsilon h$, with $h \in H^{\beta}\left(I, \mathbb{R}^{d}\right)$, such that $h(t)=0$ for $t \in[t_{1}-\tau,t_{1}] \cup \{t_2\}$. 
\[nde\] We say that $q$ is a *non-differentiable extremal* of functional if $$\label{diff} \frac{d}{d\varepsilon}\left.I_{\square}^{\tau}[q + \varepsilon h]\right|_{\varepsilon = 0}=0$$ for any $h\in H^{\beta}\left(I,\mathbb{R}^{d}\right)$. As in the classical case, the least-action principle has here its own meaning, i.e., we seek the non-differentiable extremals of functional to determine the dynamics of a non-differentiable dynamical system. The next theorem gives the Euler–Lagrange equations with time delay obtained from the non-differentiable least-action principle. \[ELDT\] Let $0<\alpha,\beta<1$ with $\alpha+\beta>1$. If $q\in H^{\alpha}\left(I,\mathbb{R}^{d}\right)$ satisfies $\square q\in H^{\alpha}\left(I, \mathbb{R}^{d}\right)$ and $\left(L_{\square q}[q]_{\tau}^{\square}(t)+L_{\square q_{\tau}}[q]_{\tau}^{\square}(t)\right)\cdot h(t)$ satisfies condition for all $h\in H^{\beta}\left(I, \mathbb{R}^{d}\right)$, then the function $q$ is a non-differentiable extremal of $I_{\square}^{\tau}$ if and only if $$\label{EL3} \begin{cases} \frac{\square}{\square t}\left(L_{\square q}[q]_{\tau}^{\square}(t)+ L_{\square q_{\tau}}[q]_{\tau}^{\square}(t+\tau)\right)\\ \qquad\quad = L_{q}[q]_{\tau}^{\square}(t)+L_{q_{\tau}}[q]_{\tau}^{\square}(t+\tau), \quad t_{1}\leq t\leq t_{2}-\tau,\\ \tag{$EL_{\square LAP}$} \frac{\square}{\square t} L_{\square q}[q]_{\tau}^{\square}(t) =L_{q}[q]_{\tau}^{\square}(t),\quad t_{2}-\tau\leq t\leq t_{2}. 
\end{cases}$$ Condition gives $$\begin{gathered} \label{II} \int_{t_{1}}^{t_{2}} \left(L_{q}[q]_{\tau}^{\square}(t)\cdot h(t) + L_{\square q}[q]_{\tau}^{\square}(t)\cdot \square h(t)\right)dt\\ +\int_{t_{1}}^{t_{2}} \left(L_{q_{\tau}}[q]_{\tau}^{\square}(t)\cdot h(t-\tau)+L_{\square q_{\tau}}[q]_{\tau}^{\square}(t)\cdot \square h(t-\tau)\right) dt=0\,.\end{gathered}$$ By the linear change of variables $t=s+\tau$ in the last integral of , and having in mind the fact that $h(t)=0$ on $[t_1-\tau,t_1]$, equation becomes $$\begin{gathered} \label{I} \int_{t_{1}}^{t_{2}-\tau}\left[\left(L_{q}[q]_{\tau}^{\square}(t) +L_{q_{\tau}}[q]_{\tau}^{\square}(t+\tau)\right)\cdot h(t)\right. \\ \left. +\left(L_{\square q}[q]_{\tau}^{\square}(t)+L_{\square q_{\tau}}[q]_{\tau}^{\square}(t+\tau)\right)\cdot \square h(t)\right]dt\\ +\int_{t_{2}-\tau}^{t_{2}}\left(L_{q}[q]_{\tau}^{\square}(t)\cdot h(t)+L_{\square q}[q]_{\tau}^{\square}(t)\cdot \square h(t)\right)dt=0\,.\end{gathered}$$ Using the hypotheses of the theorem, we obtain from Theorem \[theo:mult\] that $$\begin{gathered} \label{III} \int_{t_{1}}^{t_{2}-\tau}\left(L_{\square q}[q]_{\tau}^{\square}(t)+L_{\square q_{\tau}}[q]_{\tau}^{\square}(t+\tau)\right)\cdot \square h(t)dt\\=\int_{t_{1}}^{t_{2}-\tau}\frac{\square}{\square t}\left\{\left(L_{\square q}[q]_{\tau}^{\square}(t)+L_{\square q_{\tau}}[q]_{\tau}^{\square}(t+\tau)\right)\cdot h(t)\right\}dt\\ -\int_{t_{1}}^{t_{2}-\tau}\frac{\square}{\square t}\left(L_{\square q}[q]_{\tau}^{\square}(t)+L_{\square q_{\tau}}[q]_{\tau}^{\square}(t+\tau)\right)\cdot h(t)dt\end{gathered}$$ and $$\begin{gathered} \label{IIII} \int_{t_{2}-\tau}^{t_{2}}L_{\square q}[q]_{\tau}^{\square}(t)\cdot \square h(t) dt\\=\int_{t_{2}-\tau}^{t_{2}}\frac{\square}{\square t}\left(L_{\square q}[q]_{\tau}^{\square}(t)\cdot h(t)\right)dt-\int_{t_{2}-\tau}^{t_{2}}\frac{\square}{\square t}\left(L_{\square q}[q]_{\tau}^{\square}(t)\right)\cdot h(t)dt\,.\end{gathered}$$ Because $\left(L_{\square 
q}[q]_{\tau}^{\square}(t)+L_{\square q_{\tau}}[q]_{\tau}^{\square}(t+\tau)\right)\cdot h(t)$ satisfies for all $h\in H^{\beta}\left(I, \mathbb{R}^{d}\right)$, using Theorem \[Barrow\] and replacing the quantities of and into , we obtain $$\begin{split} \int_{t_{1}}^{t_{2}-\tau}&\left[L_{q}[q]_{\tau}^{\square}(t) +L_{q_{\tau}}[q]_{\tau}^{\square}(t+\tau) -\frac{\square}{\square t}\left(L_{\square q}[q]_{\tau}^{\square}(t) + L_{\square q_{\tau}}[q]_{\tau}^{\square}(t+\tau)\right)\right]\cdot h(t)dt\\ & +\left.\left(L_{\square q}[q]_{\tau}^{\square}(t)+L_{\square q_{\tau}} [q]_{\tau}^{\square}(t+\tau)\right)\cdot h(t)\right|^{t_2-\tau}_{t_1}\\ &+ \int_{t_{2}-\tau}^{t_{2}}\left[L_{q}[q]_{\tau}^{\square}(t) -\frac{\square}{\square t}\left(L_{\square q}[q]_{\tau}^{\square}(t)\right)\right] \cdot h(t)dt+\left.L_{\square q}[q]_{\tau}^{\square}(t)\cdot h(t)\right|^{t_2}_{t_2-\tau}=0\,. \end{split}$$ The Euler–Lagrange equations with time delay are obtained using the fundamental lemma of the calculus of variations (see, e.g., [@MR0160139]). To summarize, the dynamics of a non-differentiable Lagrangian system with time delay are determined by the Euler–Lagrange equations, via the $\square$-least-action principle, respecting the principle of coherence: the equations obtained from the non-differentiable least-action principle coincide with the embedded ones. Embedding of Optimal Control Problems with Time Delay {#sec:OP} ----------------------------------------------------- In Section \[sec:CV\] we studied the non-differentiable variational embedding in the presence of time delays. Now, we give a scale Hamiltonian embedding for more general scale problems of optimal control with time delay. 
Following [@DH:1968; @MyID:231], we define the optimal control problem with time delay as follows: $$\begin{gathered} \label{Pond1} I^{\tau}[q,u] = \int_{t_1}^{t_2} L\left(t,q(t),q(t-\tau),u(t),u(t-\tau)\right) dt \longrightarrow \min \, , \\ \dot{q}(t)=\varphi\left(t,q(t),q(t-\tau),u(t),u(t-\tau)\right) \label{ci}\, ,\end{gathered}$$ under given boundary conditions $$\label{ic} q(t)=\delta(t), \quad t\in[t_{1}-\tau,t_{1}]\,, \quad q(t_2) = q_2,$$ where $q \in \mathcal{C}^{1}\left(I, \mathbb{R}^{d}\right)$, $u \in \mathcal{C}^{0}\left(I, \mathbb{R}^{d}\right)$, the Lagrangian $L:I \times \mathbb{R}^{2d} \times \mathbb{R}^{2m} \rightarrow \mathbb{R}$ and the velocity vector $\varphi : I \times \mathbb{R}^{2d} \times \mathbb{R}^{2m} \rightarrow \mathbb{R}^d$ are assumed to be $C^{1}$-functions with respect to all their arguments. As before, we assume that $\delta$ is a given piecewise smooth function and $q_2$ a given vector in $\mathbb{R}^d$. We introduce the operators $[\cdot,\cdot]_{\tau}$, $[\cdot,\cdot,\cdot]_{\tau}$, $[\cdot,\cdot]_{\tau}^{\square}$ and $[\cdot,\cdot,\cdot]_{\tau}^{\square}$ by: 1. $[q,u]_{\tau}(t)=(t,q(t),q(t-\tau),u(t),u(t-\tau))$, where $q \in \mathcal{C}^{1}\left(I, \mathbb{R}^{d}\right)$ and $u \in \mathcal{C}^{0}\left(I,\mathbb{R}^{d}\right)$; 2. $[q,u,p]_{\tau}(t)=(t,q(t),q(t-\tau),u(t),u(t-\tau),p(t))$, where $q, p \in \mathcal{C}^{1}\left(I, \mathbb{R}^{d}\right)$ and $u \in \mathcal{C}^{0}\left(I,\mathbb{R}^{d}\right)$; 3. $[q,u]_{\tau}^{\square}(t)=(t,q(t),q(t-\tau),u(t),u(t-\tau))$, where $q \in H^{\alpha}\left(I, \mathbb{R}^{d}\right)$ and $u \in H^{\alpha}\left(I, \mathbb{C}^{d}\right)$; 4. $[q,u,p]_{\tau}^{\square}(t)=(t,q(t),q(t-\tau),u(t),u(t-\tau),p(t))$, where $q \in H^{\alpha}\left(I, \mathbb{R}^{d}\right)$ and $u, p \in H^{\alpha}\left(I, \mathbb{C}^{d}\right)$. 
\[theo:pmpnd1\] If $(q,u)$ is a minimizer to problem –, then there exists a co-vector function $p \in \mathcal{C}^{1}\left(I, \mathbb{R}^{d}\right)$ such that the following conditions hold: - *the Hamiltonian systems* $$\label{eq:Hamnd1} \begin{cases} \dot{q}(t) = H_{p}[q,u,p]_{\tau}(t) \, , \\ \dot{p}(t) = -H_{q}[q,u,p]_{\tau}(t)-H_{q_{\tau}}[q,u,p]_{\tau}(t+\tau)\,, \end{cases}$$ for $t_{1}\leq t\leq t_{2}-\tau$, and $$\label{eq:Hamnd2} \begin{cases} \dot{q}(t) = H_{p}[q,u,p]_{\tau}(t) \, , \\ \dot{p}(t) = -H_{q}[q,u,p]_{\tau}(t) \, , \end{cases}$$ for $t_{2}-\tau\leq t\leq t_{2}$; - *the stationary conditions* $$\label{eq:CE1} H_{u}[q,u,p]_{\tau}(t)+H_{u_{\tau}}[q,u,p]_{\tau}(t+\tau)= 0\,,$$ for $t_{1}\leq t\leq t_{2}-\tau$, and $$\label{eq:CE12} H_{u}[q,u,p]_{\tau}(t)= 0,$$ for $t_{2}-\tau\leq t\leq t_{2}$; where the Hamiltonian $H$ is defined by $H[q,u,p]_{\tau}(t)= L[q,u]_{\tau}(t)+p(t) \cdot \varphi[q,u]_{\tau}(t)$. Let $H[q,u,p]_{\tau}^{\square}(t)= L[q,u]_{\tau}^{\square}(t)+p(t) \cdot \varphi[q,u]_{\tau}^{\square}(t)$. The embeddings of the Hamiltonian systems and are given, respectively, by $$\label{eq:Hamnd3} \begin{cases} \square q(t)=H_{p}[q,u,p]_{\tau}^{\square}(t) \, , \\ \square p(t)=-H_{q}[q,u,p]_{\tau}^{\square}(t) -H_{q_{\tau}}[q,u,p]_{\tau}^{\square}(t+\tau), \end{cases}$$ for $t_{1}\leq t\leq t_{2}-\tau$, and by $$\label{eq:Hamnd4} \begin{cases} \square q(t)= H_{p}[q,u,p]_{\tau}^{\square}(t) \, , \\ \square p(t) = -H_{q}[q,u,p]_{\tau}^{\square}(t) \, , \end{cases}$$ for $t_{2}-\tau\leq t\leq t_{2}$; the embeddings of the stationary conditions and are given, respectively, by $$\label{eq:CE2} H_{u}[q,u,p]_{\tau}^{\square}(t) +H_{u_{\tau}}[q,u,p]_{\tau}^{\square}(t+\tau)= 0,$$ for $t_{1}\leq t\leq t_{2}-\tau$, and by $$\label{eq:CE22} H_{u}[q,u,p]_{\tau}^{\square}(t)= 0\, ,$$ for $t_{2}-\tau\leq t\leq t_{2}$. We call systems and *the scale Hamiltonian systems with delay*, while and are called *the stationary conditions with delay*.
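As a simple illustration of these definitions (an example of our own, not taken from the cited references), consider a scalar problem with Lagrangian $L(t,q,q_\tau,u,u_\tau)=\tfrac{1}{2}u^2-\tfrac{1}{2}qq_\tau$ and dynamics $\varphi=u$:

```latex
% Scale Hamiltonian for L = u^2/2 - q q_\tau with \varphi = u:
H[q,u,p]_{\tau}^{\square}(t)
  = \tfrac{1}{2}u(t)^2 - \tfrac{1}{2}\,q(t)\,q(t-\tau) + p(t)\,u(t)\,.
% On t_1 \le t \le t_2 - \tau the scale Hamiltonian system with delay reads
\square q(t) = H_p = u(t)\,, \qquad
\square p(t) = -H_q(t) - H_{q_\tau}(t+\tau)
             = \tfrac{1}{2}\,q(t-\tau) + \tfrac{1}{2}\,q(t+\tau)\,,
% while the stationary condition with delay, H_u(t) + H_{u_\tau}(t+\tau) = 0,
% gives u(t) = -p(t).  Eliminating u and p yields the mixed
% retarded--advanced equation for the extremals:
\square^2 q(t) = -\tfrac{1}{2}\left[\,q(t-\tau) + q(t+\tau)\,\right]\,.
```

Note how the delayed dependence of $L$ on $q_\tau$ produces, through the term $H_{q_{\tau}}(t+\tau)$, an advanced argument $q(t+\tau)$ in the adjoint equation.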
The embedding of – is given by $$\begin{gathered} \label{Pond11} I^{\tau}_{\square}[q, u] = \int_{t_1}^{t_2} L\left(t,q(t),q(t-\tau),u(t),u(t-\tau)\right) dt \longrightarrow \min \, , \\ \square q(t)=\varphi\left(t,q(t),q(t-\tau),u(t),u(t-\tau)\right), \label{ci1}\end{gathered}$$ where $q, q \circ \rho^\tau \in H^{\alpha}\left(I, \mathbb{R}^{d}\right)$ and $u, u \circ \rho^\tau \in H^{\alpha}\left(I,\mathbb{C}^{d}\right)$. Theorem \[th:pmp\] generalizes Theorem \[theo:pmpnd1\] to non-differentiable optimal control problems with time delay. \[th:pmp\] Let $0<\alpha$, $\beta<1$ with $\alpha+\beta>1$. Assume that $q\in H^{\alpha}\left(I, \mathbb{R}^{d}\right)$ satisfies $\square q\in H^{\alpha}\left(I, \mathbb{R}^{d}\right)$ and $\left(L_{\square q}[q]_{\tau}^{\square}(t) +L_{\square q_{\tau}}[q]_{\tau}^{\square}(t+\tau)\right)\cdot h(t)$ satisfies condition for all $h\in H^{\beta}\left(I, \mathbb{R}^{d}\right)$. If $(q, u)$ is a minimizer to problem – subject to given boundary conditions , then there exists a co-vector function $p \in H^{\alpha}\left(I, \mathbb{C}^{d}\right)$ such that the following conditions hold: - the scale Hamiltonian systems with delay and ; - the stationary conditions with delay and . We prove the theorem only in the interval $t_{1}\leq t\leq t_{2}-\tau$ (the reasoning is similar in the interval $t_{2}-\tau\leq t\leq t_{2}$).
Using the Lagrange multiplier rule, problem – is equivalent to minimizing the augmented functional $J^{\tau}_{\square}[q,u,p]$ defined by $$\label{eq:pcond} J^{\tau}_{\square}[q,u,p] = \int_{t_{1}}^{t_{2}} \left[ H\left(t,q(t),q(t-\tau),u(t),u(t-\tau),p(t)\right)-p(t)\cdot\square q(t)\right]dt.$$ The necessary optimality conditions and are obtained from the Euler–Lagrange equations applied to functional : $$\begin{gathered} \begin{cases} \frac{\square}{\square t} \left(\mathbb{L}_{\square q}[q,u,p]_{\tau}^{\square}(t)+\mathbb{L}_{\square q_{\tau}}[q,u,p]_{\tau}^{\square}(t+\tau)\right)\\ \qquad\qquad=\mathbb{L}_{q}[q,u,p]_{\tau}^{\square}(t) +\mathbb{L}_{ q_{\tau}}[q,u,p]_{\tau}^{\square}(t+\tau)\\ \frac{\square}{\square t} \left(\mathbb{L}_{\square u}[q,u,p]_{\tau}^{\square}(t)+\mathbb{L}_{\square u_{\tau}}[q,u,p]_{\tau}^{\square}(t+\tau)\right)\\ \qquad\qquad =\mathbb{L}_{u}[q,u,p]_{\tau}^{\square}(t) +\mathbb{L}_{ u_{\tau}}[q,u,p]_{\tau}^{\square}(t+\tau)\; \\ \frac{\square}{\square t} \left(\mathbb{L}_{\square p}[q,u,p]_{\tau}^{\square}(t) +\mathbb{L}_{\square p_{\tau}}[q,u,p]_{\tau}^{\square}(t+\tau)\right)\\ \qquad\qquad = \mathbb{L}_{ p}[q,u,p]_{\tau}^{\square}(t) +\mathbb{L}_{p_{\tau}}[q,u,p]_{\tau}^{\square}(t+\tau) \end{cases}\\ \Leftrightarrow\; \begin{cases} \square p(t)=-H_{q}[q,u,p]_{\tau}^{\square}(t)-H_{q_{\tau}}[q,u,p]_{\tau}^{\square}(t+\tau)\\ 0=H_{u}[q,u,p]_{\tau}^{\square}(t)+H_{u_{\tau}}[q,u,p]_{\tau}^{\square}(t+\tau)\\ 0=-\square q(t)+H_{p}[q,u,p]_{\tau}^{\square}(t), \end{cases}\end{gathered}$$ where $\mathbb{L}[q,u,p]_{\tau}^{\square}(t)=H[q,u,p]_{\tau}^{\square}(t)-p(t) \cdot\square q(t)$. In the differentiable case Theorem \[th:pmp\] reduces to Theorem \[theo:pmpnd1\]. The first equations in the scale Hamiltonian systems with delay and are precisely the scale control system . In classical mechanics, $p$ is called the *generalized momentum*. In the language of optimal control, $p$ is known as the adjoint variable [@CD:MR29:3316b].
\[scale:Pont:Ext\] A triplet $(q,u,p)$ satisfying the conditions of Theorem \[th:pmp\] will be called a *scale Pontryagin extremal*. \[rem:cp:CV:EP:EEL\] If $\varphi\left(t,q,q_\tau,u,u_\tau\right) = u$, then Theorem \[th:pmp\] reduces to Theorem \[ELDT\]. Let us verify this in the interval $t_{1}\leq t\leq t_{2}-\tau$ (the procedure is similar for $t_{2}-\tau\leq t\leq t_{2}$). The stationary condition gives $p(t) = L_{u}[q]_{\tau}^{\square}(t)+L_{u_{\tau}}[q]_{\tau}^{\square}(t+\tau)$ and the second equation in the scale Hamiltonian system with delay gives $\square p(t) = L_{q}[q]_{\tau}^{\square}(t)+L_{q_{\tau}}[q]_{\tau}^{\square}(t+\tau)$. Comparing both equalities, one obtains the non-differentiable Euler–Lagrange equations with time delay . In other words, the scale Pontryagin extremals (Definition \[scale:Pont:Ext\]) are a generalization of the non-differentiable Euler–Lagrange extremals (Definition \[nde\]). We conclude from Theorem \[th:pmp\] that the coherence principle is also respected for non-differentiable optimal control problems with time delay. Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported by [*FEDER*]{} funds through [*COMPETE*]{} — Operational Programme Factors of Competitiveness (“Programa Operacional Factores de Competitividade”) and by Portuguese funds through the [*Center for Research and Development in Mathematics and Applications*]{} (University of Aveiro) and the Portuguese Foundation for Science and Technology (“FCT — Fundação para a Ciência e a Tecnologia”), within project PEst-C/MAT/UI4106/2011 with COMPETE number FCOMP-01-0124-FEDER-022690. The first author was also supported by the FCT post-doc fellowship SFRH/BPD/51455/2011, from program [*Ciência Global*]{}. [99]{} O. P. Agrawal, J. Gregory and K. Pericak-Spector, A Bliss-type multiplier rule for constrained variational problems with time delay, J. Math. Anal. Appl. [**210**]{} (1997), no. 2, 702–711. R. Almeida and D. F. M.
Torres, Hölderian variational problems subject to integral constraints, J. Math. Anal. Appl. [**359**]{} (2009), no. 2, 674–681. [arXiv:0807.3076]{} R. Almeida and D. F. M. Torres, Generalized Euler-Lagrange equations for variational problems with scale derivatives, Lett. Math. Phys. [**92**]{} (2010), no. 3, 221–229. [arXiv:1003.3133]{} V. I. Arnold, [*Mathematical methods of classical mechanics*]{}, translated from the Russian by K. Vogtmann and A. Weinstein, second edition, Graduate Texts in Mathematics, 60, Springer, New York, 1989. J. Cresson, Non-differentiable variational principles, J. Math. Anal. Appl. [**307**]{} (2005), no. 1, 48–64. [arXiv:math/0410377]{} J. Cresson, Non-differentiable deformations of $\Bbb R\sp n$, Int. J. Geom. Methods Mod. Phys. [**3**]{} (2006), no. 7, 1395–1415. J. Cresson and S. Darses, Stochastic embedding of dynamical systems, J. Math. Phys. [**48**]{} (2007), no. 7, 072703, 54 pp. [arXiv:math/0509713]{} J. Cresson and I. Greff, Non-differentiable embedding of Lagrangian systems and partial differential equations, J. Math. Anal. Appl. [**384**]{} (2011), no. 2, 626–646. J. Cresson, A. B. Malinowska and D. F. M. Torres, Time scale differential, integral, and variational embeddings of Lagrangian systems, Comput. Math. Appl. [**64**]{} (2012), no. 7, 2294–2301. [arXiv:1203.0264]{} G. S. F. Frederico and D. F. M. Torres, Noether’s symmetry theorem for variational and optimal control problems with time delay, Numer. Algebra Control Optim. [**2**]{} (2012), no. 3, 619–630. [arXiv:1203.3656]{} I. M. Gelfand and S. V. Fomin, [*Calculus of variations*]{}, Revised English edition translated and edited by Richard A. Silverman Prentice Hall, Englewood Cliffs, NJ, 1963. D. K. Hughes, Variational and optimal control problems with delayed argument, J. Optimization Theory Appl. [**2**]{} (1968), 1–14. L. Nottale, The theory of scale relativity, Internat. J. Modern Phys. A [**7**]{} (1992), no. 20, 4899–4936. L. 
Nottale, The scale-relativity program, Chaos Solitons Fractals [**10**]{} (1999), no. 2-3, 459–468. T. Odzijewicz, A. B. Malinowska and D. F. M. Torres, Generalized fractional calculus with applications to the calculus of variations, Comput. Math. Appl. [**64**]{} (2012), no. 10, 3351–3366. [arXiv:1201.5747]{} L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze and E. F. Mishchenko, [*Selected works. Vol. 4*]{}, translated from the Russian by K. N. Trirogoff, translation edited by L. W. Neustadt, reprint of the 1962 English translation, Classics of Soviet Mathematics, Gordon & Breach, New York, 1986.
--- abstract: 'Multi-Person Pose Estimation is an interesting yet challenging task in computer vision. In this paper, we conduct a series of refinements with the MSPN and PoseFix Networks, and empirically evaluate their impact on the final model performance through ablation studies. By taking all the refinements, we achieve 78.7 on the COCO test-dev dataset and 76.3 on the COCO test-challenge dataset.' author: - | Dongdong Yu$^{\dag}$, Kai Su$^{\dag}$, Changhu Wang\ ByteDance AI Lab, China\ bibliography: - 'egbib.bib' title: 'Towards Good Practices for Multi-Person Pose Estimation' --- Introduction ============ Multi-Person Pose Estimation is a fundamental yet challenging problem in computer vision. The goal is to locate the body parts of all persons in an image, such as keypoints on the arms, the torso, and the face. It is important for many applications such as human re-identification, human-computer interaction, and activity recognition. The rapid development of deep convolutional neural networks [@he2016deep] has brought huge progress to multi-person pose estimation. Existing approaches can be roughly classified into two frameworks, i.e., the top-down framework [@li2019rethinking; @su2019multi; @moon2019posefix; @yu2018multi] and the bottom-up framework [@newell2017associative]. The former first detects all human bounding boxes in the image and then estimates the pose within each box independently. For example, the Multi-Stage Pose estimation Network (MSPN) [@li2019rethinking] adopts the ResNet-50 across multiple stages, based on repeated down- and up-sampling steps. The PoseFix network [@moon2019posefix] is a human pose refinement network that refines an estimated pose given a tuple of an input image and a pose. The latter first detects all body keypoints independently and then assembles the detected body joints to form multiple human poses.
For example, Associative Embedding [@newell2017associative] designs a network that simultaneously estimates keypoint detection and grouping heatmaps, instead of using a multi-stage pipeline. In this paper, we follow the top-down pipeline, conduct a series of refinements based on the MSPN and PoseFix networks, and evaluate their impact on the final model performance through ablation studies. Finally, we achieve 78.7 on the COCO test-dev dataset and 76.3 on the COCO test-challenge dataset. Method ====== To handle multi-person pose estimation, we follow the top-down pipeline. First, a human detector is applied to generate all human bounding boxes in the image. Then a pose estimation network is applied to each box to estimate the corresponding human pose. The MSPN network adopts ResNet-50 as the backbone of its encoder and decoder. In our work, we propose a new backbone, named Refine-50, which handles scale variation well. Experiments =========== Datasets -------- The training datasets include the COCO train2017 dataset [@lin2014microsoft] (57K images and 150K person instances) and the AI Challenge dataset [@ai.challenge] (400K person instances). For the AI Challenge dataset, we use only the keypoints annotated in the same way as in the COCO train2017 dataset for training. The final results are reported on the COCO test-dev dataset and the COCO test-challenge dataset. Results ------- ### Ablation Study In this subsection, we decompose our model step by step to reveal the effect of each component. In the following experiments, we evaluate all comparisons on the COCO val2017 dataset. We use four stages for both the MSPN network and our network. [**Effect of Backbone**]{}  Unlike MSPN, we replace the ResNet-50 with our Refine-50. As shown in Table \[table:table1\], when we run the official MSPN code, the AP is 74.7, which is obviously lower than the result reported in the MSPN paper.
After replacing the ResNet-50 with our Refine-50, the AP is improved from 74.7 to 76.0. The detection boxes are those provided by the MSPN paper. [**Effect of Image Resolution**]{}  By using a larger input image resolution, the AP is improved from 76.0 to 77.5, as shown in Table \[table:table2\]. [**Effect of Extra Dataset**]{}  Besides, we also use an extra dataset (the AI Challenge dataset) for training. As shown in Table \[table:table3\], the AP is improved from 77.5 to 78.5 by using the extra dataset. ### Development and Challenge Results In this subsection, we ensemble three Refine-50 models for pose estimation. As shown in Table \[table:table4\], the AP on test-dev is 78.0, and the AP on test-challenge is 76.0. After applying PoseFix, the AP on test-dev is improved to 78.7 and the AP on test-challenge to 76.3. Conclusion ========== In this work, we conduct a series of refinements with the MSPN and PoseFix networks, and empirically evaluate their impact on the final model performance through ablation studies. By taking all the refinements, we achieve 78.7 on the COCO test-dev dataset and 76.3 on the COCO test-challenge dataset.
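The top-down pipeline described in the Method section, detector first and then a single-person pose network per box, can be sketched as follows. This is a minimal illustration of the inference flow only; `detect_persons` and `estimate_heatmaps` are hypothetical placeholders standing in for the human detector and the (M)SPN-style pose network, not the paper's actual API.

```python
import numpy as np

def topdown_pose(image, detect_persons, estimate_heatmaps):
    """Top-down multi-person pose estimation sketch.

    detect_persons(image)        -> list of (x0, y0, x1, y1) person boxes
    estimate_heatmaps(crop)      -> array (num_keypoints, H, W), one
                                    heatmap channel per body joint
    Returns one list of (x, y) keypoints per detected person.
    """
    poses = []
    for (x0, y0, x1, y1) in detect_persons(image):
        crop = image[y0:y1, x0:x1]          # single-person sub-image
        heatmaps = estimate_heatmaps(crop)
        keypoints = []
        for hm in heatmaps:
            # take the heatmap argmax as the joint location ...
            py, px = np.unravel_index(np.argmax(hm), hm.shape)
            # ... and map it back to full-image coordinates
            keypoints.append((x0 + px * (x1 - x0) / hm.shape[1],
                              y0 + py * (y1 - y0) / hm.shape[0]))
        poses.append(keypoints)
    return poses
```

In a real system the crop would be resized to the network input resolution and the argmax refined to sub-pixel accuracy, but the detect-crop-estimate structure is the same.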
--- abstract: 'A thermodynamical minimum principle valid for photon radiation is shown to hold for arbitrary geometries. It is successfully extended to neutrinos, in the zero mass and chemical potential case, following a parallel development of photon and neutrino statistics. This minimum principle stems more from that of Planck than that of classical Onsager-Prigogine irreversible thermodynamics. Its extension from bosons to fermions suggests that it may have a still wider validity.' --- UF-IFT-HEP-97-7\ August 1997\ Revised: April 1998 **MINIMUM ENTROPY PRODUCTION OF NEUTRINO RADIATION** **IN THE STEADY STATE** **Christopher Essex** Department of Applied Mathematics, The University of Western Ontario, London, Canada N6A 5B7 **Dallas C. Kennedy** Department of Physics, University of Florida, Gainesville FL 32611 USA Keywords: radiation, neutrinos, photons, fermions, bosons, minimum entropy production Introduction ============ Photon radiation has a distinctive quality in that it interacts little enough with matter that it is typically far from thermal equilibrium even when matter may be close to equilibrium. This is the origin of classical radiative transfer [@Sch; @Ch], which represents a link between the far-from-equilibrium properties of radiation and the near-equilibrium thermodynamics of matter, leading to some curious thermodynamical consequences [@Essexa; @Essexb]. These in particular concern minimum entropy production in a non-equilibrium steady state (NESS). This is a property distinct from the classical theory of irreversible thermodynamics [@DeG], as few of the requirements for that theory hold: one does not even have a local thermodynamical bilinear form of entropy production [@Essexa; @Essexad]. Is this distinctive linkage one that may only be found in the domain of bosons? The answer is surely not, as neutrinos, which are fermions, are even more tenuously linked to matter than photons.
However, a beam of neutrinos, unlike one of photons, is not at all the norm. In the case of fermions, in contrast, we must turn to the atypical domains of a supernova and the early Universe to have macroscopic beams interacting meaningfully at a thermodynamical level with matter. These physical situations involve a boundary in space or time between neutrino equilibrium and non-equilibrium. If we allow neutrino production without effective absorption, ordinary stars provide another example domain. However, the question is to what extent the thermodynamical properties that hold for photons also hold for neutrinos. That is, is there a similar minimum principle for entropy production in the case of neutrino radiation? This paper addresses this question by generalizing “radiation” to include any exactly or nearly massless particles whose number is not conserved. We consider only the case of matter in local thermal equilibrium (LTE) and where macroscopic thermodynamics is valid, not the more general case when LTE breaks down and kinetic theory is necessary [@BoerGroot]. We proceed here with the assumption that the rest mass and chemical potential of neutrinos are zero. Clearly a more comprehensive treatment must include the possibility that neither is zero [@EaK]. However there are advantages to proceeding in this manner in the first instance. By making this choice we ensure that the neutrinos are as much like the photons as possible. The rest mass is not so much a problem here as is the chemical potential. A non-zero chemical potential presents a qualitative departure from the thermodynamic properties of photons in that it implies an additional independent thermodynamical variable. However, what is learned in this case can be a guide to future work. We find that neutrinos have an entropy production minimum principle in the steady state similar to that of photons, which also manifests itself as a conservation principle for energy. 
Implicit in weak reactions involving neutrinos is the conservation, not only of electric charge $Q,$ but of lepton number $L$ and baryon number $B$ as well. $Q,$ $B,$ and $L$ conservation in weak reactions play a non-trivial role, unlike $Q$ in purely electromagnetic processes. These quantities are assumed to be exactly conserved in the microscopic sense. But conservation in this paper may also have a secondary thermodynamical meaning: absence of sources or sinks of these numbers in the form of macroscopic, thermodynamic reservoirs. We proceed in a parallel manner between photons and neutrinos in order to highlight differences and similarities and to link with the previous work on photons. General Definitions and Relations ================================= Phase space for both neutrinos and photons is defined by position ${\bf r},$ momentum ${\bf p},$ implying energy $\epsilon $. In the case of photons it is often customary to introduce frequency $\nu$ as a proxy for energy and wavenumber ${\bf k}$ instead of momentum. The energy of unpolarised particles in the phase space volume $d^3p~d^3r$ is $$2 n \epsilon \frac{d^3p~d^3r}{h^3}\quad ,$$ where $n$ is the mean occupation number for either photons or neutrinos. $h$ is Planck’s constant. The entropy of that same volume is $$2 \k [ \mp (1 \mp n) \ln (1\mp n) - n \ln n] \frac{d^3p~d^3r}{h^3}\quad ,$$ where $\k$ is Boltzmann’s constant. The upper signs correspond to neutrinos and the lower to photons. A momentum-dependent temperature $T_{\bf p}$ may be introduced for this small phase volume, by forming the derivative of entropy with respect to energy, $$\frac{1}{T_{\bf p}} = \frac{\k}{\epsilon}\ln (n^{-1}\mp 1)\quad ,$$ which takes its physical meaning from the time-independent steady state of noninteracting quanta. This makes the cell indistinguishable from one that shares the same temperature with all other cells. 
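The inversion behind equation (3), recovering a temperature from a mode's occupation number, is easy to check numerically. The sketch below (our own illustration, in units where Boltzmann's constant is set to 1) implements equations (3) and (5) and confirms that the equilibrium occupation number returns the equilibrium temperature for both statistics.

```python
import math

K_BOLTZ = 1.0  # Boltzmann constant in illustrative natural units (an assumption)

def mode_temperature(n, eps, fermion=True):
    """Momentum-dependent temperature of Eq. (3):
    1/T_p = (k/eps) * ln(1/n -+ 1),
    upper (minus) sign for neutrinos, lower (plus) sign for photons."""
    delta = 1.0 if fermion else -1.0
    return eps / (K_BOLTZ * math.log(1.0 / n - delta))

def equilibrium_occupation(eps, T, fermion=True):
    """Equilibrium mean occupation number of Eq. (5): n = 1/(e^x +- 1)."""
    delta = 1.0 if fermion else -1.0
    return 1.0 / (math.exp(eps / (K_BOLTZ * T)) + delta)
```

Plugging the Fermi-Dirac or Bose-Einstein occupation at temperature $T$ into `mode_temperature` gives back $T$ for every energy, which is precisely the sense in which the cell is "indistinguishable" from one sharing a common temperature.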
We extend this naturally to neutrinos given the assumptions of the paper: the rest mass $m_\nu$ and the chemical potential $\mu_\nu$ are both zero. One finds $$\frac{d^3p~d^3r}{h^3} = \left(\frac{\k T_{\bf p}}{hc}\right)^3~x^2~dx~d\Omega~d^3r\quad ,$$ given the new variable $x=\epsilon/{\k T_{\bf p}}$, where $\epsilon$ = $pc$ and $d^3p$ = $p^2 dp d\Omega$, with $d\Omega$ an element of solid angle. In thermal equilibrium, $$n_{\bf p} = \frac{1}{e^x \pm 1}\quad ,$$ and we may drop the subscript ${\bf p}$ on the temperature. The energy in the phase volume may then be written $$\frac{2(\k T)^4}{(hc) ^3} \frac{x^3}{(e^x \pm 1)}~dx~d\Omega~d^3r \quad .$$ From this we note that, when the integration over $x$ is carried out, the fourth-power law follows for both photons [*and*]{} neutrinos. The only difference between the two is in the numerical factor of the integral due to the “$\pm$” in the denominator in the integrand of the $x$ integration. The entropy of the phase volume is treated similarly. After some simple manipulations it becomes $$\frac{2\k (\k T)^3}{(hc)^3}\Big[ \frac{x^3}{(e^x \pm 1)} \pm x^2 \ln (1 \pm e^{-x}) \Big] dx~d\Omega~d^3r\quad .$$ This implies the expected third-power behaviour for entropy in equilibrium, for neutrinos as well as photons. The four integrals: $$\int^\infty_0\ \frac{x^3}{(e^x \pm 1)} \ dx = \frac{15 \mp 1}{16} \Big(\frac{\pi^4}{15}\Big)\quad ,$$ and $$\int^\infty_0\ \pm x^2 \ln (1 \pm e^{-x}) \ dx = \frac{15 \mp 1}{16} \Big(\frac{1}{3}\Big)\Big(\frac{\pi^4}{15}\Big)\quad ,$$ are easily deduced by simple series expansions. From these we find the energy per unit volume into solid angle $d\Omega$, $$\frac{15 \mp 1}{16}\Big(\frac{\sigma}{\pi c}\Big)T^4d\Omega\quad,$$ where $\sigma$ = $2\k^4\pi^5/(15c^2h^3),$ the Stefan-Boltzmann constant.
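The integral in equation (8), and with it the fermion factor of $7/8$, can be verified numerically. The sketch below (our own check, not part of the original derivation) evaluates $\int_0^\infty x^3/(e^x \pm 1)\,dx$ by a simple midpoint rule, truncated at $x=60$ where the integrand is negligible.

```python
import math

def phase_space_integral(sign, upper=60.0, steps=200_000):
    """Midpoint-rule evaluation of the integral of x^3/(e^x + sign) over
    x in (0, inf); sign=+1 is the fermion (neutrino) case of Eq. (8),
    sign=-1 the boson (photon) case.  The tail beyond `upper` is
    exponentially suppressed and safely dropped."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += x**3 / (math.exp(x) + sign) * h
    return total

boson = phase_space_integral(-1.0)    # should approach pi^4/15
fermion = phase_space_integral(+1.0)  # should approach (7/8) * pi^4/15
```

The boson result reproduces $\pi^4/15$ and the ratio of the two reproduces $(15-1)/16 = 7/8$, in line with equation (8).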
Similarly for the entropy, $$\Big(\frac{4}{3}\Big)\frac{15 \mp 1}{16}\Big(\frac{\sigma}{\pi c}\Big)T^3 d\Omega\quad .$$ The vector flux density of energy into solid angle $d\Omega$ with direction $\hat{\bf m}$ is $$\frac{15 \mp 1}{16}\Big(\frac{\sigma}{\pi}\Big)T^4\hat{\bf m}~d\Omega \quad ,$$ and for entropy, $$\Big(\frac{4}{3}\Big)\frac{15 \mp 1}{16}\Big(\frac{\sigma}{\pi}\Big)T^3 \hat{\bf m}~d\Omega\quad .$$ The integrals (8,9) give the canonical fermion factor of $7/8$ relative to bosons. We conclude that the flux density per solid angle, known variously as the specific intensity or radiance, is for energy, $$\frac{15 \mp 1}{16}\Big(\frac{\sigma}{\pi}\Big)T^4\quad ,$$ and for entropy, $$\Big(\frac{4}{3}\Big)\frac{15 \mp 1}{16}\Big(\frac{\sigma}{\pi}\Big)T^3\quad .$$ Even out of equilibrium, we may relate the specific intensity $I_\epsilon$, for a given $\epsilon$, to $n$ from equations (1) and (4) and the [*flux density*]{} into a solid angle, $$I_\epsilon = \frac{2n_\epsilon\epsilon^3}{h^3c^2}\quad .$$ The specific entropy intensity is $$\begin{split} J_\epsilon = \frac{2\k\epsilon^2}{h^3c^2}\Big[ \mp \Big(1 \mp \frac{c^2h^3I_\epsilon}{2\epsilon^3}\Big) \ln\Big(1\mp \frac{c^2h^3 I_\epsilon}{2\epsilon^3}\Big) \\ - \Big(\frac{c^2h^3 I_\epsilon}{2\epsilon^3}\Big) \ln\Big(\frac{c^2h^3 I_\epsilon}{2\epsilon^3}\Big)\Big]\quad . \end{split}$$ The fundamental extensive quantities are the specific entropy flux $J_\epsilon$ and specific energy flux $I_\epsilon$. Note that expression (3) is recovered by forming $dJ_\epsilon /dI_\epsilon$. Although neutrino number is not conserved, lepton number is, and it is thus physically important to define a specific [*number*]{} flux for neutrinos $N_\epsilon$ corresponding to $I_\epsilon$: $$N_\epsilon = \frac{2n_{\bf p}\epsilon^2_{\bf p}}{h^3c^2}\quad ,$$ where $$I_\epsilon = \epsilon N_\epsilon\quad .$$ In the case of photons, number is not interesting, as photon number is not conserved.
In the case of fermions, some number conservation law always holds (because of the half-integral spins). With zero chemical potential, however, the neutrino number is a purely auxiliary quantity and depends on the energy flux. Conversely, we could take neutrino number as fundamental and energy as derived; in either case, only one variable is independent. In the case of non-zero chemical potential, not treated in this paper, neutrino number and energy flux become independent variables, with an energy-dependent chemical potential $\mu_\epsilon$. Entropy Production and Entropy Flows ==================================== Entropy is an extensive thermodynamic property and can be localised and integrated to determine a global amount. Localization is also possible for entropy production itself. This result agrees with the general principle of the locality of physical interactions, so long as a thermodynamical picture is valid. It also means that thermodynamics is not restricted to a macroscopic box. If we divide space into distinct regions, boundary surfaces are defined; entropy can be moved between regions and the notion of entropy flux across a surface follows. The entropy production rate can be expressed in a balance or conservation equation, $$\epsilon_s = {{\partial s} \over {\partial t}} + {\bf\nabla\cdot\bf{\cal F}}~,$$ where $\epsilon_s $ is the entropy production rate per unit volume, $s$ is the volume density of entropy, and ${\bf\cal F}$ is the entropy flux density. It is only by convention that this equation is called a [*conservation*]{} equation. It is really an accounting equation, with no implication of conservation. In fact, this equation allows a general statement of the second law of thermodynamics: $\epsilon_s \geq 0$. A somewhat artificial distinction may be made now in equation (20), between radiative and matter processes. 
In the former case we must always consider the full phase space, while in the latter we assume a near-equilibrium distribution in energy, nearly identical in all directions (so-called “local thermal equilibrium” or LTE). In that case we find $$\epsilon_s = \epsilon_s^m + \epsilon_s^\gamma~,$$ where the superscript $m$ denotes non-radiative components, which shall be termed “matter”. $\gamma$ denotes radiative components. Writing these entropy sources out explicitly, we have $$\label{Bal} \epsilon_s = {{\partial s_m} \over {\partial t}}+ {{\partial s_\gamma } \over {\partial t}} + \nabla \cdot {\bf Y}_s + \nabla \cdot {\bf H}~,$$ where $s_m$ is the volume density of entropy in matter, and $s_\gamma$ is the volume density of entropy in radiation. ${\bf Y}_s$ and [**H**]{} denote non-radiative and radiative entropy flux densities, respectively. By radiation we mean here, of course, photons or neutrinos, while other particles take the role of being non-radiative and in LTE. A mixture of near-equilibrium (LTE) matter and far-from-equilibrium radiation is a typical one in the Universe. It is the mixture in which you are immersed while reading this page: you are warm, and yet you can read these words radiatively. By using the equations of state and balance equations for extensive variables, we re-express the entropy production rate in the form [@Essexg] $$\epsilon_s = \sum_{k} \{ a_k \epsilon_k + \nabla a_k \cdot {\bf Y}'_k \} + {{\partial s_\gamma } \over {\partial t}} + \nabla \cdot {\bf H}~,$$ where the sum is over contributions from extensive variables with index $k$. $a_k$ is the conjugate intensive variable divided by the temperature. $\epsilon_k$ is the creation rate of variable $k$ (for example, the rate that internal energy is created from the radiation field, nuclear reactions, or viscous dissipation). ${\bf Y}'_k$ is the flux density of variable $k$ (for example, the flux density of internal energy in the case of diffusion).
The prime denotes the value in the rest frame of the medium. As massless radiation is without a rest frame, the separation of the entropy production into these two parts thus turns out to be not at all artificial. If we assume in (\[Bal\]) a steady radiation field, and integrate over a finite volume $V$ bounded by a surface $S,$ with element $d{\bf S},$ containing all of the matter, then the overall entropy production rate $\Sigma$ is $$\Sigma = \int_V {{\partial s_m} \over {\partial t}} dV + \int_S {\bf H}\cdot d{\bf S}~,$$ because matter fluxes must vanish across $S$. If we ignore matter transport processes, equation (23) becomes $$\label{Glob} \Sigma = \int_V \sum_{k} \{ a_k \epsilon_k \}\ dV + \int_S {\bf H}\cdot d{\bf S}~.$$ We may arrive at this result, alternatively, by imagining that the process is steady and that the entropy change of matter (the first integral) is reversibly drained off to a heat or another type of reservoir. That is, for this steady case, we deal only with a subsystem, thermodynamically speaking, and so conservation laws do [*not*]{} hold [*macroscopically*]{} (i.e. integrated) in the subsystem alone. Of course this has no bearing on [*microscopic*]{} conservation laws. It is worth noting that the terms under the first integral of (\[Glob\]) are all due to processes in matter and are not a part of the entropy production for photons or neutrinos. It is a common misconception to interpret the radiation heating rate over the temperature, which is a possible term under the first integral, as the entropy production of radiation. It should be clear from this construction that $\int_V \epsilon_s^\gamma dV$ is all accounted for through the second integral of (\[Glob\]). Minimum Entropy Production ========================== Equation (\[Glob\]) provides a structure for computing the entropy production rate due to the interaction of matter and radiation for many finite bodies locally in equilibrium. 
The first term represents whatever changes are manifest in the entropy of the body, while the second term accounts for changes in the radiation field itself due to the interaction. If photon radiation is impinging on a body of temperature $T$ in a vacuum, the entropy production rate is from (25) just $$\Sigma = \int_V \left(-\frac{\nabla \cdot {\bf F}}{T}\right) dV + \int_S {\bf H}\cdot d{\bf S}~,$$ where the volume $V$ is any volume containing the body, and ${\bf F}$ is the volume flux density of energy radiation. The latter is the integral over all solid angles and energy of $I_\epsilon {\bf \hat{n}}$, where ${\bf \hat{n}}$ is the unit vector defining the direction that $I_\epsilon$ (16) flows in. If the temperature is (artificially held) uniform over the body, then $$\Sigma = - \frac{1}{T} \int_S {\bf F} \cdot d{\bf S} + \int_S {\bf H}\cdot d{\bf S}~.$$ If the surface area of the body is $A$, and it emits as a black body, then $$\Sigma = \frac{1}{T} \left\{ \left| \int_S {\bf F}^i \cdot d{\bf S} \right| - \sigma T^4 A \right\} - \left| \int_S {\bf H}^i \cdot d{\bf S} \right| + \frac{4}{3} \sigma T^3 A~,$$ where the remaining integrals represent impinging photon radiation, and the superscript $i$ denotes an impinging flow only, which is independent of the state of the body. It follows that $$\frac{d\Sigma}{dT} = -\frac{1}{T^2} \left\{ \left| \int_S {\bf F}^i \cdot d{\bf S} \right| - \sigma T^4 A \right\} ~,$$ or for a minimum $$\left\{ \left| \int_S {\bf F}^i \cdot d{\bf S} \right| - \sigma T^4 A \right\} = 0~.$$ That is, the entropy production rate is a minimum in the steady state, implying energy conservation [@Essexa; @Essexc], but for an arbitrary geometry and impinging field. Equations (26) to (30) could represent the entropy production of a (black) planet irradiated by photons originating elsewhere from a star, or could represent a gas cloud under similar photon radiation.
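The coincidence of the entropy-production minimum (29) with the radiative-balance condition (30) is simple to confirm numerically. The sketch below (our own illustration; the flux, area, and grid values are arbitrary) implements $\Sigma(T)$ of equation (28) and locates its minimizer by grid search.

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def entropy_production(T, F_in, H_in, A):
    """Sigma(T) of Eq. (28) for an isothermal black body of surface area A,
    with impinging energy flux F_in and entropy flux H_in (both positive)."""
    return (F_in - SIGMA * T**4 * A) / T - H_in + (4.0 / 3.0) * SIGMA * T**3 * A

# Hypothetical numbers: a 1 m^2 black body under 1 kW of impinging radiation.
F_in, A = 1000.0, 1.0

# Radiative-balance temperature predicted by Eq. (30): sigma T^4 A = F_in.
T_balance = (F_in / (SIGMA * A)) ** 0.25

# Grid search for the minimizer of Sigma over 200..500 K in 0.01 K steps;
# the impinging entropy flux H_in is a constant offset and cannot move
# the minimizer, so it is set to zero here.
grid = [200.0 + 0.01 * i for i in range(30_000)]
T_min = min(grid, key=lambda T: entropy_production(T, F_in, 0.0, A))
```

The grid minimizer lands on the radiative-balance temperature (about 364 K for these numbers), which is the content of equations (29)-(30): minimum entropy production in the steady state is equivalent to energy balance.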
They could also represent the entropy production of a target of matter irradiated by a laser. One can envisage matter in the laboratory as being connected to a heat bath, or circumstances where the heat capacity is very high, to justify the steady, isothermal construction of the rate of entropy change in matter (first term in (26)). Obviously the isothermal construction may be relaxed [@Essexc] using the equations of Section III, but that is beyond the scope of this paper. While these equations represent an extension of previous work [@Essexa; @Essexc] to arbitrary matter and radiation geometries, the remarkable thing about them is that they embody a minimum entropy production principle that is foreign to the classical minimum entropy production principle of Prigogine [@DeG]. There are no Onsager reciprocity relations, no “linear" empirical flux-force laws, and no meaningful thermodynamical fluxes and forces. In the latter regard, there need only be one temperature defined for the entire problem. The question of whether this remarkable property is restricted to boson radiation in the form of photons has not been posed previously, so we now consider the corresponding problem for fermions in the form of neutrinos. As in the case of photons we turn to equation (\[Glob\]), but at this point the differences between photons and neutrinos emerge, not in the second (radiation) term, but in the first (matter) term. That is because of the different manner in which neutrinos interact with matter. While photons do not conserve their number and trivially conserve electric charge, neutrinos are linked to the conservation of charge, lepton number, and baryon number through the structure of weak interactions. Here we consider only first-generation fermions and only nucleons for hadrons: $$\label{Reac} \nu + n \rightarrow p + e^- ~,$$ and all related reactions.
Thus for neutrinos (\[Glob\]) becomes $$\Sigma = \int_V \left(-\frac{\nabla \cdot {\bf F}}{T} + \frac{\mu_e \dot{n}_e + \mu_n \dot{n}_n +\mu_p \dot{n}_p}{T} \right) dV + \int_S {\bf H}\cdot d{\bf S}~.$$ Recall that $\mu_\nu = 0$ is assumed for neutrinos. $\dot{n}_e$, $\dot{n}_n$, and $\dot{n}_p$ represent rates of change of number densities for electrons, neutrons and protons respectively, each of which is multiplied by its corresponding chemical potential. Assuming chemical equilibrium within the matter, $$\mu_e +\mu_p - \mu_n = 0 ~.$$ This, together with an isothermal and black body (neutrino) assumption, leads to $$\Sigma = \frac{1}{T} \left\{ \left| \int_S {\bf F}^i \cdot d{\bf S} \right| - \frac{7}{8}\sigma T^4 A \right\} + \int_V \frac{1}{T} \left\{ \mu_n (\dot{n}_n +\dot{n}_p) + \mu_e (\dot{n}_e - \dot{n}_p) \right\} dV- \left| \int_S {\bf H}^i \cdot d{\bf S} \right| + \frac{7}{8}\cdot \frac{4}{3} \sigma T^3 A~.$$ Conservation of baryon number and charge produce $$\Sigma = \frac{1}{T} \left\{ \left| \int_S {\bf F}^i \cdot d{\bf S} \right| - \frac{7}{8}\sigma T^4 A \right\} - \left| \int_S {\bf H}^i \cdot d{\bf S} \right| + \frac{7}{8}\cdot \frac{4}{3} \sigma T^3 A~.$$ Except for the factors of $\frac{7}{8}$, this equation is identical to (28). Thus in minimum entropy production, the energy balance steady state $$\left\{ \left| \int_S {\bf F}^i \cdot d{\bf S} \right|- \frac{7}{8} \sigma T^4 A \right\} = 0 ,$$ must hold as well. Thus we find that this distinctive minimum entropy production result is extended to neutrinos, and thereby beyond the limitation to bosons to at least some fermions. Local and Nonlocal Regimes ========================== Generally, the interactions of neutrinos and photons with matter are most simply viewed as purely local. For statistical physics, however, we also need to count momentum states. 
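The vanishing of the chemical-potential term between (34) and (35) can be made explicit; the following regrouping is a sketch consistent with (32)-(34): $$\mu_e \dot{n}_e + \mu_n \dot{n}_n + \mu_p \dot{n}_p = \mu_n (\dot{n}_n + \dot{n}_p) + \mu_e (\dot{n}_e - \dot{n}_p)~,$$ using $\mu_p = \mu_n - \mu_e$ from (33). Conservation of baryon number in reactions (31) gives $\dot{n}_n + \dot{n}_p = 0$, while conservation of charge gives $\dot{n}_e = \dot{n}_p$, since electrons and protons are created or destroyed in pairs; both brackets therefore vanish.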
If we make the matter-radiation separation of Section III and further assume LTE for matter, the matter momentum states can be integrated out, leaving the full phase space only for radiation. This condition allows us to avoid a full kinetic theory calculation [@BoerGroot]. At this juncture, we have a choice of local versus action-at-a-distance representation for the radiation. While the radiation exists in its own right, in the event that the overall entropy of the radiation field is not changing, we need only be concerned with matter-matter interactions mediated by radiation, as the radiation terms can then be integrated out. Radiation then becomes merely a special kind of nonlocal heat transport, and $\Sigma$ may be represented in a multilocal form. This multilocal form is at least true for the first term in (\[Glob\]) even when separate changes do take place in the radiation field, such as in conservative scattering processes [@Essexad]. In all cases, it is the [*total*]{} $\Sigma$ that is minimised. In the case of quanta in an extended, continuous medium, a common matter-radiation LTE is often valid, with a common matter-radiation temperature $T({\bf r})$. This temperature in general is not constant in space. LTE holds to extremely high accuracy for [*photons*]{} inside the photospheric surface of a star, for example, but not for neutrinos. The entropy production associated with the production and transport (diffusion) of photons is $$\begin{aligned} \label{Froin} \Sigma_\gamma & = & \int~dV~\Big\{ (1/2)[4acT^5/(3\kappa_\gamma )] [{\bf\nabla}(1/T)]^2 + \varepsilon_\gamma /T\Big\}~,\end{aligned}$$ where $\varepsilon_\gamma$ is the photon energy production rate density and $\kappa_\gamma$ is the opacity (inverse mean free path) of the matter against photon diffusion [@KaB]. Equation (\[Froin\]) corresponds to the second term in equation (\[Glob\]), but written as a volume integral of a divergence, up to but [*not*]{} including the photosphere.
This is in contrast to photon entropy production discussed previously in this paper, in that the photons here are virtually in equilibrium with matter and so are diffusive, not radiative. At the photospheric surface, a single LTE ceases to hold (see below), and the diffusive approximation of equation (\[Froin\]) breaks down. Nonetheless the second term of (\[Glob\]) still represents the entropy production in the radiation field, but at the photosphere and outside, the photons become radiative. The complete photon entropy production of a star of radius $R$ (including its photosphere) is $$\label{all} \Sigma_{\gamma ,\rm sur} = \frac{4}{3}\sigma T^3_{\rm sur}(4\pi R^2)$$ for the photon radiation released into empty space at an idealised sharp surface. ($T_{\rm sur}$ is the photospheric surface temperature.) As the volume of integration is increased, $\Sigma_\gamma$ picks up additional contributions from interactions with more matter, for example, with a planet [@Essexa; @Essexb; @Essexc; @Les]. Neutrinos emitted by ordinary stars are quite different from photons: as the interior temperatures are not high enough for weak interactions to be in LTE, the neutrinos are not emitted in anything like a blackbody distribution, and are not subsequently thermalised. Their spectra are instead determined almost exactly by the microscopic reaction spectra and emerge essentially unaffected by the neutrinos’ subsequent travel through stellar matter to empty space.
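As an order-of-magnitude illustration of (\[all\]), one can evaluate $\Sigma_{\gamma ,\rm sur}$ for the Sun. The sketch below uses standard solar parameters, which are assumptions for illustration and not values taken from this paper:

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_SUR = 5772.0     # solar photospheric temperature, K (assumed value)
R_SUN = 6.957e8    # solar radius, m (assumed value)

def photon_entropy_production(t_sur, radius):
    """Entropy production rate (4/3) sigma T_sur^3 (4 pi R^2) of eq. (38)."""
    area = 4.0 * math.pi * radius**2
    return (4.0 / 3.0) * SIGMA * t_sur**3 * area

sigma_sun = photon_entropy_production(T_SUR, R_SUN)  # roughly 9e22 W/K
```

This is simply the radiated luminosity $\sigma T_{\rm sur}^4 A$ divided by $T_{\rm sur}$, times the photon-gas factor $4/3$.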
If the emitting star does not absorb neutrinos and the receiving Earth-bound detector does not emit neutrinos, the total neutrino entropy production is $$\begin{aligned} \Sigma_\nu & = & \int_{\rm emitter}~dV~\int~d^3p\ \frac{\epsilon_{\bf p}\ {\dot n}_{\bf p}}{T_{\bf p}} - \nonumber \\ & & \int_{\rm receiver}~dV^\prime~\int~d^3p^\prime\ {\epsilon_{\bf p^\prime}\ {\dot n}_{\bf p^\prime}\over T_{\bf p^\prime}}~,\end{aligned}$$ where ${\dot n}_{\bf p}$ $({\dot n}_{\bf p^\prime})$ is the neutrino production (absorption) rate density in real and momentum space. In a NESS, ${\dot n}_{\bf p}$ depends only on $\epsilon_{\bf p},$ not on emission direction ${\bf\hat p}.$ In an equilibrated supernova or the early Universe, on the other hand, the neutrinos are emitted and absorbed in LTE. The entropy production below the supernova neutrinosphere is a function of a single local temperature: $$\begin{aligned} \Sigma_\nu & = & \int~dV~\Big\{ (1/2)[7acT^5/(6\kappa_\nu )] [{\bf\nabla}(1/T)]^2 + \varepsilon_\nu /T\Big\}\quad ,\end{aligned}$$ like (\[Froin\]), with a neutrino mean free path $1/\kappa_\nu$ and an extra factor of $7/8$ in the diffusion part. The total $\Sigma_\nu$ [*including*]{} the neutrinosphere is analogous to (\[all\]). An analogous expression can be constructed for photon and neutrino entropy production in the early Universe before their respective decouplings, but this would require inclusion of general relativity. Even if the neutrinos or photons are emitted and absorbed locally as a gas, the system in general is not in equilibrium with a single temperature $T$ or $T({\bf r}).$ Multiple temperatures can be defined if each system component retains its own LTE, with each component spectrum thermal. 
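The recurring factor of $7/8$ that distinguishes the neutrino (fermion) expressions such as (35) and (40) from the photon (boson) expressions (28) and (37) is the ratio of the Fermi-Dirac to the Bose-Einstein energy integrals, $\int_0^\infty x^3 (e^x+1)^{-1} dx = \frac{7}{8} \int_0^\infty x^3 (e^x-1)^{-1} dx$. A quick numerical check of this ratio (an illustration, not part of the paper):

```python
import math

def energy_integral(sign, upper=60.0, steps=200_000):
    """Trapezoidal estimate of integral_0^infty x^3 / (exp(x) + sign) dx.

    The integrand vanishes at x = 0 and is negligible beyond x = 60,
    so summing over the open interval (0, upper) suffices.
    """
    h = upper / steps
    return h * sum(
        (i * h) ** 3 / (math.exp(i * h) + sign) for i in range(1, steps)
    )

bose = energy_integral(-1.0)   # analytic value: pi^4 / 15
fermi = energy_integral(+1.0)  # analytic value: 7 * pi^4 / 120
ratio = fermi / bose           # -> 7/8
```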
For example, a photon gas with a frequency-dependent temperature $T_\gamma$ may interact with matter of temperature $T.$ Then $$\begin{aligned} \Sigma_\gamma = \int dV \int d\epsilon \int d\Omega\ I_\epsilon ({\bf r},\epsilon ,\Omega ) \Bigl[\frac{1}{T_\gamma({\bf r},\epsilon ,\Omega )} - \frac{1}{T({\bf r})}\Bigr]\quad .\end{aligned}$$ $I_\epsilon$ is the local specific energy intensity of photons emitted [*by*]{} the matter. If $T_\gamma >$ $T,$ then $I_\epsilon\le$ 0; if $T_\gamma <$ $T,$ then $I_\epsilon\ge$ 0. Thus $\Sigma_\gamma$ is always $\ge$ 0. Stellar atmospheres provide a related example. The radiation has a temperature $T_\gamma ({\bf r})$, while the various chemical species $X_l$ each can have their own $T_l({\bf r}).$ Thus $$\begin{aligned} \Sigma_\gamma = \sum_l \int dV \int d\epsilon \int d\Omega\ I_\epsilon ({\bf r},\epsilon ,\Omega ) \Bigl[\frac{1}{T_\gamma({\bf r},\epsilon ,\Omega )} - \frac{1}{T_l({\bf r})}\Bigr]\quad .\end{aligned}$$ Again $I_\epsilon$ is the local specific radiation intensity emitted by the matter. Contributions such as (41) or (42) occur in addition to such gradient terms as (37) or (40). In a supernova [@Smit], the matter and the neutrinos can have different temperatures $T$ and $T_\nu$, and $\Sigma_\nu$ picks up a contribution analogous to (41). The total $\Sigma$ again has other contributions, e.g., from gradients of $T$, $T_\gamma$, and $T_\nu$ such as (37) and (40). Neutrinos can interact among themselves by weak neutral currents and change their own phase space distribution without any ordinary matter present. 
The associated entropy production is $$\begin{aligned} \Sigma^{\rm NC}_\nu & = & \int dV \int d\epsilon \int d\Omega \int d\epsilon^\prime \int d\Omega^\prime\ I_{\epsilon\epsilon^\prime}({\bf r}, \epsilon ,\epsilon^\prime , \Omega ,\Omega^\prime )\ \times \nonumber \\ & & \Bigl[\frac{1}{T_\nu({\bf r},\epsilon ,\Omega )} - \frac{1}{T_\nu({\bf r}, \epsilon^\prime ,\Omega^\prime )} \Bigr]~,\end{aligned}$$ which is local in form. $I_{\epsilon\epsilon^\prime}({\bf r},\epsilon, \epsilon^\prime,\Omega,\Omega^\prime)$ is the local doubly-specific radiation intensity of the neutrinos “shining” on themselves and is proportional to the neutrino-neutrino weak neutral current reaction cross section. $\Sigma^{\rm NC}_\nu$ vanishes if thermal equilibrium obtains and there is only a single temperature, $T_{\bf p}$ = $T_{{\bf p^\prime}}$ for all ${\bf p},$ ${\bf p^\prime},$ at each point ${\bf r}.$ Summary and Conclusion ====================== A non-zero density of entropy production, $\Sigma$, indicates a local process of an irreversible nature. It is distinct from the local density of entropy, and it has no direct implications for processes elsewhere. For any given process, one may measure local nearness to equilibrium by its magnitude, although relative nearness of two different processes may have no meaning by this measure. If a single process is free, one expects that the entropy production rate will relax toward zero. If some exterior conditions prevent this dynamical relaxation, the process will instead simply relax as far as the exterior arrangements permit. This dynamical argument anticipates minimum entropy production, at least for sufficiently simple systems where thermodynamic quantities have meaning. But it also anticipates, as thermodynamics often does, an analogous static construction of the same principle, as the relaxation may be imagined to be constrained to quasi-steady conditions in whatever state it may be. 
Thus the principle of minimum entropy production found here is not unexpected. However in many respects it is very different from the classical minimum entropy production principle of Prigogine [@DeG]. It does not depend on Onsager’s principle; there are no “linear" empirical flux-force laws, nor is there even a meaningful bilinear construction. Under the LTE-NESS assumption, systems of both matter [@DeG] and photons [@Essexa; @Essexb; @Essexc; @KaB] exhibit some type of minimum entropy production. A photon-like principle holds, under the same assumptions, for neutrinos, as seen in examples given in sections IV and V. These examples can be generalised to many local or continuously varying temperatures. Conservation laws play identical roles in all three cases, by constraining the microscopic interactions of the quanta. Thus, microscopic energy and momentum are always conserved, with all the appropriate macroscopic consequences. Because neutrinos and photons are both taken here as massless and not conserved in number, the analogy between these two particles can be carried through in most aspects. However, neutrinos are fermions, which always have some associated conservation law; in this case, lepton number $L.$ Because the weak interactions conserve $B - L,$ baryon number $B$ is also conserved. Electromagnetic interactions conserve charge, but since photons themselves do not carry charge, this conservation law is trivial in radiative transfer, in contrast to the situation for neutrinos, which do carry $L.$ The exact masslessness of neutrinos is not proven experimentally [@Gel], and a logical generalization of our results here is to extend the treatment to massive neutrinos. Although we have used an electron chemical potential $\mu_e$ in matter, another generalization left open is to include a neutrino chemical potential $\mu_\nu$. These two extensions will be presented in a subsequent publication [@EaK]. 
We thank the Telluride Summer Research Center, where part of this work was done. D. C. Kennedy acknowledges the support of the University of Florida/Institute for Fundamental Theory, the U.S. Department of Energy under contract DE-FG05-86-ER40272 (U. Florida) and the NASA/Fermilab Theoretical Astrophysics group under DOE/NASA contract NAG5-2788. Schwarzschild, K. 1906, [*Göttinger Nachrichten*]{}, [**195**]{}, 41. Chandrasekhar, S. 1950, [*Radiative Transfer*]{} (New York: Dover Publications, 1960). Essex, C. 1984, [*J. Planet. Space Sci.*]{} [**32**]{}, 1035. Essex, C. 1984, [*J. Atmos. Sci.*]{} [**41**]{}, 1985. De Groot, S. R. and Mazur, P. 1962, [*Non-Equilibrium Thermodynamics*]{} (New York: Dover Publications, 1984). Essex, C. 1990, in [*Advances in Thermodynamics, Vol. 3: Nonequilibrium Theory and Extremum Principles,*]{} S. Sieniutycz and P. Salamon, eds. (New York: Taylor and Francis) 435. De Boer, W. P. H. and Van Weert, Ch. G. 1977, [*Physica*]{} [**87A**]{}, 67, 80. De Groot, S. R. 1979, [*Ann. Inst. Henri Poincaré*]{} [**A31**]{}, 377. Essex, C. and Kennedy, D. C. 1998, in preparation. Essex, C. 1987, [*Geophys. Astrophys. Fluid Dynam.*]{} [**38**]{}, 1. Essex, C. 1984, [*Astrophys. J.*]{} [**285**]{}, 279. Kennedy, D. C. and Bludman, S. A. 1997, [*Astrophys. J.*]{} [**484**]{}, 329. Lesins, G. B. 1991, in [*Scientists on Gaia,*]{} S. H. Schneider and P. J. Boston, eds. (Cambridge, Massachusetts: MIT Press) 121. Smit, J. M. [*et al.*]{} 1996, [*Astrophys. J.*]{} [**460**]{}, 895. Barnett, R. M. [*et al.*]{} (Particle Data Group) 1996, [*Phys. Rev.*]{} [**D54**]{}, 1, and 1998, World Wide Web URL http://pdg.lbl.gov/ .
--- abstract: 'Contextualized embeddings use unsupervised language model pretraining to compute word representations depending on their context. This is intuitively useful for generalization, especially in Named-Entity Recognition where it is crucial to detect mentions never seen during training. However, standard English benchmarks overestimate the importance of lexical over contextual features because of an unrealistic lexical overlap between train and test mentions. In this paper, we perform an empirical analysis of the generalization capabilities of state-of-the-art contextualized embeddings by separating mentions by novelty and with out-of-domain evaluation. We show that they are particularly beneficial for unseen mentions detection, especially out-of-domain. For models trained on CoNLL03, language model contextualization leads to a +1.2% maximal relative micro-F1 score increase in-domain against +13% out-of-domain on the WNUT dataset.' author: - Bruno Taillé - Vincent Guigue - Patrick Gallinari bibliography: - 'mendeley.bib' title: 'Contextualized Embeddings in Named-Entity Recognition: An Empirical Study on Generalization' --- Introduction ============ Named-Entity Recognition (NER) consists in detecting textual mentions of entities and classifying them into predefined types. It is modeled as sequence labeling, the standard neural architecture of which is BiLSTM-CRF [@Huang2015BidirectionalTagging]. Recent improvements mainly stem from using new types of representations: learned character-level word embeddings [@Lample2016NeuralRecognition] and contextualized embeddings derived from a language model (LM) [@Peters2018DeepRepresentations; @Akbik2018ContextualLabeling; @Devlin2019BERT:Understanding]. LM pretraining makes it possible to obtain contextual word representations and to reduce the dependency of neural networks on hand-labeled data specific to tasks or domains [@Howard2018UniversalClassification; @Radford2018ImprovingPre-Training].
This contextualization ability can particularly benefit NER domain adaptation, which is often limited to training a network on source data and either feeding its predictions to a new classifier or finetuning it on target data [@Lee2018TransferNetworks; @Rodriguez2018TransferClasses]. This is all the more relevant as classical NER models have been shown to generalize poorly to unseen mentions or domains [@Augenstein2017GeneralisationAnalysis]. In this paper, we quantify the impact of ELMo [@Peters2018DeepRepresentations], Flair [@Akbik2018ContextualLabeling] and BERT [@Devlin2019BERT:Understanding] representations on generalization to unseen mentions and new domains in NER. To better understand their effectiveness, we propose a set of experiments to distinguish the effect of unsupervised LM contextualization ($C_{LM}$) from task supervised contextualization ($C_{NER}$). We show that the former mainly benefits unseen mention detection, especially out-of-domain, where it is even more beneficial than the latter. Lexical Overlap =============== Neural NER models mainly rely on lexical features in the form of word embeddings, either learned at the character-level or not. Yet, standard NER benchmarks present a large lexical overlap between mentions in the train set and dev / test sets, which leads to a poor evaluation of generalization to unseen mentions, as shown by Augenstein et al. [@Augenstein2017GeneralisationAnalysis]. They separate seen from unseen mentions and evaluate out-of-domain to focus on generalization, but only study models designed before 2011 and no longer in use. We propose to use a similar setting to analyze the impact of state-of-the-art LM pretraining methods on generalization in NER. We introduce a slightly more fine-grained novelty partition by separating unseen mentions into *partial match* and *new* categories. A mention is an *exact match* (EM) if it appears in the exact same case-sensitive form in the train set, tagged with the same type.
It is a *partial match* (PM) if at least one of its non-stop words appears in a mention of the same type. All other mentions are *new*. We study lexical overlap in CoNLL03 [@Sang2003IntroductionTask] and OntoNotes [@Weischedel2013OntoNotesLDC2013T19], the two main English NER datasets, as well as WNUT17 [@Derczynski2017ResultsRecognition], which is smaller, specific to user generated content (tweets, comments) and was designed without exact overlap. For out-of-domain evaluation, we train on CoNLL03 (news articles) and test on the larger and more diverse OntoNotes (see Table \[table:genres\] for genres) and the very specific WNUT. We remap OntoNotes and WNUT entity types to match CoNLL03’s and denote the obtained dataset with $^\ast$. [@ll\*[5]{}[Y]{}rYr\*[5]{}[Y]{}rYr\*[4]{}[Y]{}@]{} & & & & ON & & & & WNUT & &\ & & LOC & MISC & ORG & PER & ALL & & ALL& & LOC & MISC & ORG & PER & ALL & & ALL & & LOC & ORG & PER & ALL\ & EM & 82% & 67% & 54% & 14% & 52% & & 67% & & 87% & 93% & 54% & 49% & 69% & & - & & - & - &- &-\ & PM & 4% & 11% & 17% & 43% & 20% & & 24% & & 6% & 2% & 32% & 36% & 20% & & 12% & & 11% & 5% & 13% & 12%\ & New & 14% & 22% & 29% & 43% & 28% & & 9% & & 7% & 5% & 14% & 15% & 11% & & 88% & & 89% & 95% & 87% & 88%\ & EM &- & - & -& - &- & &- & & 70% & 78% & 18% & 16% & 42% & & - & & 26% & 8% & 1% & 7%\ & PM &- & - &- & -& - & & - & & 7% & 10% & 45% & 46% & 28% & & - & & 9% & 15% & 16% & 14%\ & New & - & - & - & - & - & & - & & 23% & 12% & 38% & 38% & 30%& & - & & 65% & 77% & 83% & 78%\ \[table:overlap\] As reported in Table \[table:overlap\], the two main benchmarks for English NER mainly evaluate performance on occurrences of mentions already seen during training, although they appear in different sentences. Such lexical overlap proportions are unrealistic in real life, where the model must process orders of magnitude more documents in the inference phase than it has been trained on, to amortize the annotation cost.
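The novelty partition described above can be sketched in a few lines. The helper name, the representation of train mentions as (surface form, type) pairs, and the tiny stop-word list are illustrative assumptions, not details from the paper:

```python
def novelty(mention, mention_type, train_mentions,
            stop_words=frozenset({"of", "the", "and"})):
    """Classify a test mention as exact match (EM), partial match (PM) or new.

    train_mentions: set of (surface_form, type) pairs seen in the train set.
    """
    # Exact match: same case-sensitive surface form, tagged with the same type.
    if (mention, mention_type) in train_mentions:
        return "EM"
    # Partial match: at least one non-stop word occurs in a train mention
    # of the same type.
    train_words = {
        w for m, t in train_mentions if t == mention_type for w in m.split()
    }
    if any(w in train_words for w in mention.split() if w not in stop_words):
        return "PM"
    return "new"

train = {("European Union", "ORG"), ("Paris", "LOC")}
novelty("European Union", "ORG", train)   # -> "EM"
novelty("Union Bank", "ORG", train)       # -> "PM" (shares "Union")
novelty("Berlin", "LOC", train)           # -> "new"
```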
On the contrary, WNUT proposes a particularly challenging low-resource setting with no exact overlap. Furthermore, the overlap depends on the entity types: Location and Miscellaneous are the most overlapping types, even out-of-domain, whereas Person and Organization present a more varied vocabulary, more likely to evolve with time and domain. Word Representations ==================== ### Word embeddings map each word to a single vector, which results in a lexical representation. We take **GloVe 840B** embeddings [@Pennington2014GloVe:Representation] trained on Common Crawl as the pretrained word embeddings baseline and fine-tune them as done in related work. ### Character-level word embeddings are learned by a word-level neural network from character embeddings to incorporate orthographic and morphological features. We reproduce the **Char-BiLSTM** from [@Lample2016NeuralRecognition]. It is trained jointly with the NER model and its outputs are concatenated to GloVe embeddings. We also experiment with the Char-CNN layer from ELMo to isolate the effect of LM contextualization and denote it **ELMo\[0\]**. ### Contextualized word embeddings take into account the context of a word in its representation, contrary to previous representations. An LM is pretrained and used to predict the representation of a word given its context. **ELMo** [@Peters2018DeepRepresentations] uses a Char-CNN to obtain a context-independent word embedding and the concatenation of a forward and backward two-layer LSTM LM for contextualization. These representations are summed with weights learned for each task as the LM is frozen after pretraining. **BERT** [@Devlin2019BERT:Understanding] uses WordPiece subword embeddings [@Wu2016GooglesTranslation] and learns a representation modeling both left and right contexts by training a Transformer encoder [@Vaswani2017AttentionNeed] for Masked LM and next sentence prediction.
For a fairer comparison, we use the BERT~LARGE~ feature-based approach where the LM is not fine-tuned and its last four hidden layers are concatenated. **Flair** [@Akbik2018ContextualLabeling] uses a character-level LM for contextualization. As in ELMo, they train two opposite LSTM LMs, freeze them and concatenate the predicted states of the first and last characters of each word. Flair and ELMo are pretrained on the 1 Billion Word Benchmark [@Chelba2013OneModeling] while BERT uses Book Corpus [@Zhu2015AligningBooks] and English Wikipedia. Experiments =========== In order to compare the different embeddings, we feed them as input to a classifier. We first use the state-of-the-art BiLSTM-CRF [@Huang2015BidirectionalTagging] with hidden size 100 in each direction and present in-domain results on all datasets in Table \[table:in-domain\]. We then report out-of-domain performance in Table \[table:results\]. To better capture the intrinsic effect of LM contextualization, we introduce the Map-CRF baseline from [@Akbik2018ContextualLabeling] where the BiLSTM is replaced by a simple linear projection of each word embedding. We only consider domain adaptation from CoNLL03 to OntoNotes$^\ast$ and WNUT$^\ast$ assuming that labeled data is scarcer, less varied and more generic than target data in real use cases. We use the IOBES tagging scheme for NER and no preprocessing. We fix a batch size of 64, a learning rate of 0.001 and a 0.5 dropout rate at the embedding layer and after the BiLSTM or linear projection. The maximum number of epochs is set to 100 and we use early stopping with patience 5 on validation global micro-F1. For each configuration, we use the best performing optimization method between SGD and Adam with $\beta_1=0.9$ and $\beta_2=0.999$. We report the mean and standard deviation of five runs. 
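The reported scores are entity-level micro-F1 over exact (span, type) matches, computed globally or restricted to one novelty subset of the gold mentions. A minimal sketch of the metric follows; representing mentions as (span, type) tuples is our illustrative assumption, not the paper's code:

```python
def micro_f1(gold, pred):
    """Micro-averaged F1 over sets of (span, type) mention tuples.

    A prediction counts as a true positive only if both its span
    and its entity type exactly match a gold mention.
    """
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

gold = {((0, 2), "ORG"), ((5, 6), "PER"), ((8, 9), "LOC")}
pred = {((0, 2), "ORG"), ((5, 6), "LOC")}
micro_f1(gold, pred)   # tp=1, precision=1/2, recall=1/3 -> 0.4
```

Restricting `gold` to the EM, PM, or new subset before calling the function yields the per-novelty scores of the tables.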
[@lr@\*[4]{}[Y]{}r\*[4]{}[Y]{}r\*[3]{}[Y]{}@]{} & & & & & &\ Embedding & Dim & EM & PM & New & All & & EM & PM & New & All & & PM & New & All\ BERT & 4096 & 95.7$_{{\scalebox{\s}}{.1}}$ & 88.8$_{{\scalebox{\s}}{.3}}$ & 82.2$_{{\scalebox{\s}}{.3}}$ & 90.5$_{{\scalebox{\s}}{.1}}$ & & 96.9$_{{\scalebox{\s}}{.2}}$ & 88.6$_{{\scalebox{\s}}{.3}}$ & 81.1$_{{\scalebox{\s}}{.5}}$ & **93.5**$_{{\scalebox{\s}}{.2}}$ & & 77.0$_{{\scalebox{\s}}{4.6}}$ & 53.9$_{{\scalebox{\s}}{.9}}$ & **57.0**$_{{\scalebox{\s}}{1.0}}$\ ELMo & 1024 & 95.9$_{{\scalebox{\s}}{.1}}$ & 89.2$_{{\scalebox{\s}}{.5}}$ & 85.8$_{{\scalebox{\s}}{.7}}$ & **91.8**$_{{\scalebox{\s}}{.3 }}$ & & 97.1$_{{\scalebox{\s}}{.2}}$ & 88.0$_{{\scalebox{\s}}{.2}}$ & 79.9$_{{\scalebox{\s}}{.7}}$ & **93.4**$_{{\scalebox{\s}}{.2}}$ & & 67.7$_{{\scalebox{\s}}{3.2}}$ & 49.5$_{{\scalebox{\s}}{.9}}$ & 52.1$_{{\scalebox{\s}}{1.0}}$\ Flair & 4096 & 95.4$_{{\scalebox{\s}}{.1}}$ & 88.1$_{{\scalebox{\s}}{.6}}$ & 83.5$_{{\scalebox{\s}}{.5}}$ & 90.6$_{{\scalebox{\s}}{.2}}$ & & 96.7$_{{\scalebox{\s}}{.1}}$ & 85.8$_{{\scalebox{\s}}{.5}}$ & 75.0$_{{\scalebox{\s}}{.6}}$ & 92.1$_{{\scalebox{\s}}{.2}}$ & & 64.9$_{{\scalebox{\s}}{.7}}$ & 48.2$_{{\scalebox{\s}}{2.0}}$ & 50.4$_{{\scalebox{\s}}{1.8}}$\ ELMo\[0\] & 1024 & 95.8$_{{\scalebox{\s}}{.1}}$ & 87.2$_{{\scalebox{\s}}{.2}}$ & 83.5$_{{\scalebox{\s}}{.4}}$ & 90.7$_{{\scalebox{\s}}{.1}}$ & & 96.9$_{{\scalebox{\s}}{.1}}$ & 85.9$_{{\scalebox{\s}}{.3}}$ & 75.5$_{{\scalebox{\s}}{.6}}$ & 92.4$_{{\scalebox{\s}}{.1}}$ & & 72.8$_{{\scalebox{\s}}{1.3}}$ & 45.4$_{{\scalebox{\s}}{2.8}}$ & 49.1$_{{\scalebox{\s}}{2.3}}$\ GloVe + char & 350 & 95.3$_{{\scalebox{\s}}{.3}}$ & 85.5$_{{\scalebox{\s}}{.7}}$ & 83.1$_{{\scalebox{\s}}{.7}}$ & 89.9$_{{\scalebox{\s}}{.5}}$ & & 96.3$_{{\scalebox{\s}}{.1}}$ & 83.3$_{{\scalebox{\s}}{.2}}$ & 69.9$_{{\scalebox{\s}}{.6}}$ & 91.0$_{{\scalebox{\s}}{.1}}$ & & 63.2$_{{\scalebox{\s}}{4.6}}$ & 33.4$_{{\scalebox{\s}}{1.5}}$ & 38.0$_{{\scalebox{\s}}{1.7}}$\ GloVe & 300 & 
95.1$_{{\scalebox{\s}}{.4}}$ & 85.3$_{{\scalebox{\s}}{.5}}$ & 81.1$_{{\scalebox{\s}}{.5}}$ & 89.3$_{{\scalebox{\s}}{.4}}$ & & 96.2$_{{\scalebox{\s}}{.2}}$ & 82.9$_{{\scalebox{\s}}{.2}}$ & 63.8$_{{\scalebox{\s}}{.5}}$ & 90.4$_{{\scalebox{\s}}{.2}}$ & & 59.1$_{{\scalebox{\s}}{2.9}}$ & 28.1$_{{\scalebox{\s}}{1.5}}$ & 32.9$_{{\scalebox{\s}}{1.2}}$\ \[table:in-domain\] [@l@lr\*[4]{}[Y]{}r\*[4]{}[Y]{}r\*[4]{}[Y]{}@]{} & & & & & & &\ & Emb & & EM & PM & New & All & & EM & PM & New & All & & EM & PM & New & All\ & BERT & & 95.7$_{{\scalebox{\s}}{.1}}$ & 88.8$_{{\scalebox{\s}}{.3}}$ & 82.2$_{{\scalebox{\s}}{.3}}$ & 90.5$_{{\scalebox{\s}}{.1}}$ & & 95.1$_{{\scalebox{\s}}{.1}}$ & 82.9$_{{\scalebox{\s}}{.5}}$ & 73.5$_{{\scalebox{\s}}{.4}}$ & **85.0**$_{{\scalebox{\s}}{.3}}$ & & 57.4$_{{\scalebox{\s}}{1.0}}$ & 56.3$_{{\scalebox{\s}}{1.2}}$ & 32.4$_{{\scalebox{\s}}{.8}}$ & 37.6$_{{\scalebox{\s}}{.8}}$\ & ELMo & & 95.9$_{{\scalebox{\s}}{.1}}$ & 89.2$_{{\scalebox{\s}}{.5}}$ & 85.8$_{{\scalebox{\s}}{.7}}$ & **91.8**$_{{\scalebox{\s}}{.3}}$ & & 94.3$_{{\scalebox{\s}}{.1}}$ & 79.2$_{{\scalebox{\s}}{.2}}$ & 72.4$_{{\scalebox{\s}}{.4}}$ & 83.4$_{{\scalebox{\s}}{.2}}$ & & 55.8$_{{\scalebox{\s}}{1.2}}$ & 52.7$_{{\scalebox{\s}}{1.1}}$ & 36.5$_{{\scalebox{\s}}{1.5}}$ & **41.0**$_{{\scalebox{\s}}{1.2}}$\ & Flair & & 95.4$_{{\scalebox{\s}}{.1}}$ & 88.1$_{{\scalebox{\s}}{.6}}$ & 83.5$_{{\scalebox{\s}}{.5}}$ & 90.6$_{{\scalebox{\s}}{.2}}$ & & 94.0$_{{\scalebox{\s}}{.3}}$ & 76.1$_{{\scalebox{\s}}{1.1}}$ & 62.1$_{{\scalebox{\s}}{.5}}$ & 79.0$_{{\scalebox{\s}}{.5}}$ & & 56.2$_{{\scalebox{\s}}{2.2}}$ & 49.4$_{{\scalebox{\s}}{3.4}}$ & 29.1$_{{\scalebox{\s}}{3.3}}$ & 34.9$_{{\scalebox{\s}}{2.9}}$\ & ELMo\[0\] & & 95.8$_{{\scalebox{\s}}{.1}}$ & 87.2$_{{\scalebox{\s}}{.2}}$ & 83.5$_{{\scalebox{\s}}{.4}}$ & 90.7$_{{\scalebox{\s}}{.1}}$ & & 93.6$_{{\scalebox{\s}}{.1}}$ & 76.8$_{{\scalebox{\s}}{.6}}$ & 66.1$_{{\scalebox{\s}}{.3}}$ & 80.5$_{{\scalebox{\s}}{.2}}$ & & 52.3$_{{\scalebox{\s}}{1.2}}$ & 
50.8$_{{\scalebox{\s}}{1.5}}$ & 32.6$_{{\scalebox{\s}}{2.2}}$ & 37.6$_{{\scalebox{\s}}{1.8}}$\ & G + char & & 95.3$_{{\scalebox{\s}}{.3}}$ & 85.5$_{{\scalebox{\s}}{.7}}$ & 83.1$_{{\scalebox{\s}}{.7}}$ & 89.9$_{{\scalebox{\s}}{.5}}$ & & 93.9$_{{\scalebox{\s}}{.2}}$ & 73.9$_{{\scalebox{\s}}{1.1}}$ & 60.4$_{{\scalebox{\s}}{.7}}$ & 77.9$_{{\scalebox{\s}}{.5}}$ & & 55.9$_{{\scalebox{\s}}{.8}}$ & 46.8$_{{\scalebox{\s}}{1.8}}$ & 19.6$_{{\scalebox{\s}}{1.6}}$ & 27.2$_{{\scalebox{\s}}{1.3}}$\ & GloVe & & 95.1$_{{\scalebox{\s}}{.4}}$ & 85.3$_{{\scalebox{\s}}{.5}}$ & 81.1$_{{\scalebox{\s}}{.5}}$ & 89.3$_{{\scalebox{\s}}{.4}}$ & & 93.7$_{{\scalebox{\s}}{.2}}$ & 73.0$_{{\scalebox{\s}}{1.2}}$ & 57.4$_{{\scalebox{\s}}{1.8}}$ & 76.9$_{{\scalebox{\s}}{.9}}$ & & 53.9$_{{\scalebox{\s}}{1.2}}$ & 46.3$_{{\scalebox{\s}}{1.5}}$ & 13.3$_{{\scalebox{\s}}{1.4}}$ & 27.1$_{{\scalebox{\s}}{1.0}}$\ & BERT & & 93.2$_{{\scalebox{\s}}{.3}}$ & 85.8$_{{\scalebox{\s}}{.4}}$ & 73.7$_{{\scalebox{\s}}{.8}}$ & 86.2$_{{\scalebox{\s}}{.4}}$ & & 93.5$_{{\scalebox{\s}}{.2}}$ & 77.8$_{{\scalebox{\s}}{.5}}$ & 67.8$_{{\scalebox{\s}}{.9}}$ & 80.9$_{{\scalebox{\s}}{.4}}$ & & 57.4$_{{\scalebox{\s}}{.3}}$ & 53.5$_{{\scalebox{\s}}{2.6}}$ & 33.9$_{{\scalebox{\s}}{.6}}$ & 38.4$_{{\scalebox{\s}}{.4}}$\ & ELMo & & 93.7$_{{\scalebox{\s}}{.2}}$ & 87.2$_{{\scalebox{\s}}{.6}}$ & 80.1$_{{\scalebox{\s}}{.3}}$ & **88.7**$_{{\scalebox{\s}}{.2}}$ & & 93.6$_{{\scalebox{\s}}{.1}}$ & 79.1$_{{\scalebox{\s}}{.5}}$ & 69.5$_{{\scalebox{\s}}{.4}}$ & **82.2**$_{{\scalebox{\s}}{.3}}$ & & 61.1$_{{\scalebox{\s}}{.7}}$ & 53.0$_{{\scalebox{\s}}{.9}}$ & 37.5$_{{\scalebox{\s}}{.7}}$ & **42.4**$_{{\scalebox{\s}}{.6}}$\ & Flair & & 94.3$_{{\scalebox{\s}}{.1}}$ & 85.1$_{{\scalebox{\s}}{.3}}$ & 78.6$_{{\scalebox{\s}}{.3}}$ & 88.1$_{{\scalebox{\s}}{.03}}$ & & 93.2$_{{\scalebox{\s}}{.1}}$ & 74.0$_{{\scalebox{\s}}{.3}}$ & 59.6$_{{\scalebox{\s}}{.2}}$ & 77.5$_{{\scalebox{\s}}{.2}}$ & & 52.5$_{{\scalebox{\s}}{1.2}}$ & 50.6$_{{\scalebox{\s}}{.4}}$ & 
28.8$_{{\scalebox{\s}}{.5}}$ & 33.7$_{{\scalebox{\s}}{.5}}$\ & ELMo\[0\] & & 92.2$_{{\scalebox{\s}}{.3}}$ & 80.5$_{{\scalebox{\s}}{1.0}}$ & 68.6$_{{\scalebox{\s}}{.4}}$ & 83.4$_{{\scalebox{\s}}{.4}}$ & & 91.6$_{{\scalebox{\s}}{.4}}$ & 69.6$_{{\scalebox{\s}}{1.0}}$ & 56.8$_{{\scalebox{\s}}{1.5}}$ & 75.0$_{{\scalebox{\s}}{1.0}}$ & & 51.9$_{{\scalebox{\s}}{1.1}}$ & 42.6$_{{\scalebox{\s}}{.9}}$ & 32.4$_{{\scalebox{\s}}{.3}}$ & 35.8$_{{\scalebox{\s}}{.4}}$\ & G + char & & 93.1$_{{\scalebox{\s}}{.3}}$ & 80.7$_{{\scalebox{\s}}{.9}}$ & 69.8$_{{\scalebox{\s}}{.7}}$ & 84.4$_{{\scalebox{\s}}{.4}}$ & & 91.8$_{{\scalebox{\s}}{.3}}$ & 69.3$_{{\scalebox{\s}}{.3}}$ & 55.6$_{{\scalebox{\s}}{1.1}}$ & 74.8$_{{\scalebox{\s}}{.5}}$ & & 50.6$_{{\scalebox{\s}}{.9}}$ & 42.5$_{{\scalebox{\s}}{1.4}}$ & 20.6$_{{\scalebox{\s}}{2.8}}$ & 28.7$_{{\scalebox{\s}}{2.5}}$\ & GloVe & & 92.2$_{{\scalebox{\s}}{.1}}$ & 77.0$_{{\scalebox{\s}}{.4}}$ & 61.7$_{{\scalebox{\s}}{.3}}$ & 81.5$_{{\scalebox{\s}}{.05}}$ & & 89.6$_{{\scalebox{\s}}{.3}}$ & 62.8$_{{\scalebox{\s}}{.6}}$ & 38.5$_{{\scalebox{\s}}{.4}}$ & 68.1$_{{\scalebox{\s}}{.4}}$ & & 46.8$_{{\scalebox{\s}}{.8}}$ & 41.3$_{{\scalebox{\s}}{.5}}$ & 3.2$_{{\scalebox{\s}}{.2}}$ & 18.9$_{{\scalebox{\s}}{.7}}$\ \[table:results\] General Observations {#general} -------------------- ### ELMo, BERT and Flair Drawing conclusions from the comparison of ELMo, BERT and Flair is difficult because there is no clear hierarchy across datasets and they differ in dimensions, tokenization, contextualization levels and pretraining corpora. However, although BERT is particularly effective on the WNUT dataset in-domain, probably due to its subword tokenization, ELMo yields the most stable results in and out-of-domain. Furthermore, Flair globally underperforms ELMo and BERT, particularly for unseen mentions and out-of-domain. This suggests that LM pretraining at a lexical level (word or subword) is more robust for generalization than at a character level.
In fact, Flair only beats the non-contextual ELMo\[0\] baseline with Map-CRF, which indicates that character-level contextualization is less beneficial than word-level contextualization over character-level representations.

### ELMo\[0\] vs GloVe+char

Overall, ELMo\[0\] outperforms the GloVe+char baseline, particularly on unseen mentions, out-of-domain and on WNUT$^\ast$. The main difference is the incorporation of morphological features: in ELMo\[0\] they are learned jointly with the LM on a huge dataset, whereas the char-BiLSTM is only trained on the source NER training set. Yet morphology is crucial for representing words never encountered during pretraining: in WNUT$^\ast$ around 20% of words in test mentions are outside the GloVe vocabulary, against 5% in CoNLL03 and 3% in OntoNotes$^\ast$. This explains the poor performance of the GloVe baselines on WNUT$^\ast$, all the more so out-of-domain, and why a model trained on CoNLL03 with ELMo outperforms one trained on WNUT$^\ast$ with GloVe+char. Thus, ELMo’s improvement over the previous state of the art stems not only from contextualization but also from an effective non-contextual word representation.

### Seen Mentions Bias

In every configuration, $F1_{exact} > F1_{partial} > F1_{new}$, with more than 10 points of difference. This gap is wider out-of-domain, where the context differs more from the training data than in-domain. NER models thus generalize poorly to unseen mentions, and datasets with high lexical overlap only encourage this behavior. However, this generalization gap is reduced by the two types of contextualization described hereafter.

LM and NER Contextualizations {#analysis}
-----------------------------

The ELMo\[0\] and Map-CRF baselines make it possible to strictly distinguish contextualization due to LM pretraining ($C_{LM}$: ELMo\[0\] to ELMo) from task-supervised contextualization induced by the BiLSTM network ($C_{NER}$: Map to BiLSTM).
In both cases, a BiLSTM incorporates syntactic information which improves generalization to unseen mentions, for which context is decisive, as shown in Table \[table:results\].

### Comparison

However, because ${C_{NER}}$ is specific to the source dataset, it is more effective in-domain, whereas $C_{LM}$ is particularly helpful out-of-domain. In the latter setting, the benefits from $C_{LM}$ even surpass those from ${C_{NER}}$, specifically on domains further from the source data such as web text in OntoNotes$^\ast$ (see Table \[table:genres\]) or WNUT$^\ast$. This is again explained by the difference in quantity and quality of the corpora on which these contextualizations are learned. The much larger and more generic unlabeled corpora on which LMs are pretrained lead to contextual representations more robust to domain adaptation than ${C_{NER}}$, which is learned on a small and specific NER corpus. Similar behaviors can be observed when comparing BERT and Flair to the GloVe baselines, although there we cannot separate the effects of representation and contextualization.

### Complementarity

Both in-domain and out-of-domain on OntoNotes$^\ast$, the two types of contextualization transfer complementary syntactic features, leading to the best configuration. However, in the most difficult case of zero-shot domain adaptation from CoNLL03 to WNUT$^\ast$, ${C_{NER}}$ is detrimental with ELMo and BERT. This is probably due to the specificity of the target domain, which is excessively different from the source data.
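The exact / partial / new split used for these scores can be made concrete with a short sketch. This is our own illustration, not the paper's exact implementation (tokenization, lowercasing and the treatment of entity types here are assumptions): a test mention counts as *exact* if the same (text, type) pair occurs in the training set, *partial* if it shares at least one word with a training mention of the same type, and *new* otherwise.

```python
def split_mentions(train_mentions, test_mentions):
    """Partition test mentions into exact / partial / new w.r.t. training.

    A mention is a (text, entity_type) pair, e.g. ("New York", "LOC").
    """
    train_set = set(train_mentions)
    # words occurring inside training mentions, grouped by entity type
    train_words = {}
    for text, etype in train_mentions:
        train_words.setdefault(etype, set()).update(text.lower().split())

    buckets = {"exact": [], "partial": [], "new": []}
    for text, etype in test_mentions:
        if (text, etype) in train_set:
            buckets["exact"].append((text, etype))
        elif any(w in train_words.get(etype, set())
                 for w in text.lower().split()):
            buckets["partial"].append((text, etype))
        else:
            buckets["new"].append((text, etype))
    return buckets


train = [("New York", "LOC"), ("Barack Obama", "PER")]
test = [("New York", "LOC"),       # exact: seen verbatim with the same type
        ("York County", "LOC"),    # partial: shares "york" with a LOC mention
        ("Angela Merkel", "PER")]  # new: no lexical overlap with PER mentions
buckets = split_mentions(train, test)
```

Per-bucket scores such as $F1_{exact}$, $F1_{partial}$ and $F1_{new}$ are then obtained by restricting the usual mention-level evaluation to each bucket.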
|              | bc       | bn      | nw       | mz       | tc       | wb       | All         |
|--------------|----------|---------|----------|----------|----------|----------|-------------|
| BERT         | 87.2±.5  | 88.4±.4 | 84.7±.2  | 82.4±1.2 | 84.5±1.1 | 79.5±1.0 | **85.0**±.3 |
| ELMo         | 85.0±.6  | 88.6±.3 | 82.9±.3  | 78.1±.7  | 84.0±.8  | 79.9±.5  | 83.4±.2     |
| Flair        | 78.0±1.1 | 86.5±.4 | 80.4±.6  | 71.1±.4  | 73.5±1.8 | 72.1±.8  | 79.0±.5     |
| ELMo\[0\]    | 82.6±.5  | 88.0±.3 | 79.6±.5  | 73.4±.6  | 79.2±1.2 | 75.1±.3  | 80.5±.2     |
| GloVe + char | 80.4±.8  | 86.3±.4 | 77.0±1.0 | 70.7±.4  | 79.7±1.8 | 69.2±.8  | 77.9±.5     |

\[table:genres\]

Related Work
============

Augenstein et al. [@Augenstein2017GeneralisationAnalysis] perform a quantitative study of two CRF-based models and a CNN with classical word embeddings [@Collobert2011NaturalScratch] over seven NER datasets, including CoNLL03 and OntoNotes. They separate performance on seen (*exact match*) and unseen mentions and show a drop in F1 on unseen mentions and out-of-domain. Although comprehensive in its experiments, this analysis is limited to models dating from 2005 to 2011. We use a similar experimental setting to draw new insights on state-of-the-art architectures and word representations. We restrict ourselves to the two main English NER benchmarks, as well as WNUT, which was specifically designed to tackle this generalization problem in the Twitter domain.
These three datasets cover all the domains studied in [@Augenstein2017GeneralisationAnalysis]. Moosavi and Strube raise a similar lexical overlap issue in Coreference Resolution on the CoNLL2012 dataset. They first show that for out-of-domain evaluation the performance gap between Deep Learning models and a rule-based system fades away [@Moosavi2017LexicalCaution]. They then add linguistic features (such as gender, NER, POS...) to improve out-of-domain generalization [@Moosavi2018UsingResolvers]. Nevertheless, such features are obtained using models that are in turn based on lexical features, and at least for NER the same lexical overlap issue arises. Finally, Pires et al. [@Pires2019HowBERT] concurrently evaluate the cross-lingual generalization capability of Multilingual BERT for NER and POS tagging. Our work on monolingual generalization to unseen mentions and domains naturally complements this study.

Conclusion
==========

NER benchmarks are biased towards seen mentions, in contrast to real-life applications. Hence the necessity to disentangle performance on seen and unseen mentions and to test out-of-domain. In such a setting, we show that contextualization from LM pretraining is particularly beneficial for generalization to unseen mentions, all the more so out-of-domain, where it surpasses supervised contextualization. Despite this improvement, unseen mention detection remains challenging, and further work could explore attention or regularization mechanisms to better incorporate context and improve generalization. Furthermore, we could investigate how to best incorporate target data to improve this LM pretraining zero-shot domain adaptation baseline.
--- abstract: | We consider the proton decay in supersymmetric models with a gravitino or axino lighter than the proton. This consideration leads to a stringent limit on the $R$ parity and $B$ violating Yukawa coupling of the superpotential operator $U^c_iD^c_jD^c_k$ as $\lambda^{\prime\prime}_{112}\leq 10^{-15}(m_{3/2}/{\rm eV})$ for a light gravitino, and $\lambda^{\prime\prime}_{112}\leq 10^{-15} (F_a/ 10^{10} \, {\rm GeV})$ for a light Dine-Fischler-Srednicki-Zhitnitskii axino. For a hadronic axino, the constraint is weakened by a factor of $10^{3}$. address: - 'Department of Physics, Korea Advanced Institute of Science and Technology, Taejon 305-701, Korea${}^{\dagger}$' - 'Department of Physics, Chungbuk National University, Cheongju 360-763, Korea${}^*$' author: - 'Kiwoon Choi${}^{\dagger}$ , Eung Jin Chun${}^*$, and Jae Sik Lee${}^{\dagger}$' title: '**Proton Decay with a Light Gravitino or Axino** ' --- Proton stability strongly constrains the baryon ($B$) and lepton ($L$) number violating couplings. Since all known fermions lighter than the proton carry a nonzero lepton number, the couplings (or the combinations of couplings) relevant for the proton decay should conserve $B-L$. However, if there is a lighter fermion which does [*not*]{} carry any lepton number, proton decay may be induced by a $B$ violating but $L$ conserving interaction alone [@chang]. There is in fact a very interesting class of models which predicts such a light fermion. In supersymmetric models in which supersymmetry (SUSY) breaking is mediated by gauge interactions, the squark and/or gaugino masses, i.e. the soft masses in the supersymmetric standard model (SSM) sector, are given by $m_{\rm soft}\simeq ({\alpha\over \pi})^n \Lambda_S$ where $n$ is a model-dependent positive integer and $\Lambda_S$ corresponds to the scale of spontaneous SUSY breaking [@nelson].
In such models, in order for $m_{\rm soft}$ to be of order the weak scale, $\Lambda_S$ is assumed to be $10\sim 1000$ TeV, leading to the gravitino mass $m_{3/2}=\Lambda_S^2/M_P\leq 1$ keV, far below the proton mass. If a global $U(1)_{PQ}$ symmetry is introduced in a gauge-mediated SUSY breaking model to solve the strong CP problem by the axion mechanism [@pq], SUSY breaking in the axion sector is also mediated by some gauge interactions. The axino mass in such models is given by $m_{\tilde{a}}\simeq (\alpha/\pi)^m\Lambda_S^2/F_a$ where $m$ is again a model-dependent (but typically bigger than $n$) positive integer and $F_a$ denotes the scale of spontaneous $U(1)_{PQ}$ breaking [@axino]. Obviously then the axino is lighter than the proton for a phenomenologically allowed $F_a\geq 10^{10}$ GeV. In another type of models, in which SUSY breaking is transmitted by supergravity interactions, the gravitino mass is fixed to be of order the weak scale; however, there is still room for an axino lighter than the proton [@chun]. As was pointed out in Ref. [@chun], some supergravity-mediated models lead to $m_{\tilde{a}}\simeq m_{3/2}(m_{3/2}/M_P)^{1/2}\simeq 1$ keV, for which the axino would be a good warm dark matter candidate [@turner]. In this paper, we wish to examine the proton decay involving a light gravitino or axino to derive a constraint on the superpotential interaction $\lambda^{\prime\prime}_{ijk} U^c_iD^c_jD^c_k$ which violates $R$ parity and $B$, while conserving $L$. Let us first consider the proton decay involving a light gravitino, more precisely the helicity $\pm 1/2$ Goldstino component.
Our starting point is the effective lagrangian below the scale $\Lambda_S=\sqrt{m_{3/2}M_P}$ but above the weak scale soft mass $m_{\rm soft}$: $${\cal L}={\cal L}_{\rm SSM}+{\cal L}_{G},$$ where ${\cal L}_{\rm SSM}$ denotes the lagrangian density of the SSM fields and the Goldstino lagrangian ${\cal L}_{G}$ is given by [@fayet] $${\cal L}_{G}=i\bar{G}\gamma^{\mu}\partial_{\mu}G+ \frac{1}{\Lambda_S^2}\left( \bar{\lambda}^a\sigma^{\mu\nu}\gamma^{\rho}\partial_{\rho}G\,F^a_{\mu\nu}+ \sqrt{2}\,\bar{\psi}_{I}(1-\gamma_5)\gamma^{\mu}\gamma^{\nu}\partial_{\nu}G\,D_{\mu}\phi^{*}_{I}+ {\rm h.c.}\right),$$ where $G$ denotes the four-component Majorana Goldstino field. Here ${\cal L}_{\rm SSM}$ includes the terms associated with the $B$ violating superpotential interaction, $$W_{\rm SSM}\ni \lambda^{\prime\prime}_{ijk}U^c_iD^c_jD^c_k,$$ and $(\phi_I,\psi_I)$ and $(\lambda^a, F^a_{\mu\nu})$ stand for the left-handed chiral matter and gauge multiplets in the SSM sector. Note that the above form of the Goldstino lagrangian is enough for the study of the process involving a single on-shell Goldstino obeying $i\gamma^{\mu}\partial_{\mu}G=m_{3/2}G$. Integrating out all fields heavier than the scale of the QCD chiral symmetry breaking, i.e. $\Lambda_{\chi}\simeq 1$ GeV, we are left with an effective lagrangian of the light quarks, $q_{\alpha}$ ($\alpha = (u, d, s)$), and gluons together with the light Goldstino (of course also the light leptons and the photon, which are not relevant for our discussion). The operators responsible for the proton decay in this effective lagrangian at $\Lambda_{\chi}$ are induced by the exchange of the $SU(2)_L$ singlet squarks as $${\cal O}_{\rm eff}= \frac{\lambda^{\prime\prime}_{112}\,y_{\alpha\beta\gamma}}{m_0^2\,\Lambda_S^2}\, (\bar{q}_{\alpha}(1-\gamma_5)q^c_{\beta})\,(\partial_{\mu}\bar{q}_{\gamma}(1-\gamma_5)\gamma^{\mu}G).$$ Here $m_0^2$ denotes the squark masses which are assumed to be (approximately) universal, $$y_{dsu}=y_{uds}=y_{usd}=1,$$ and all other components of $y_{\alpha\beta\gamma}$ vanish. Note that the above operator has $B=S=-1$, and thus the relevant proton decay mode is $p\rightarrow G+K^+$. For a generic non-universal squark mass matrix, an $S=0$ operator can also be induced to give rise to $p\rightarrow G+\pi^+$; however, it is suppressed by a small squark mixing.
To arrive at the above interaction operator, we have used the equation of motion of the on-shell Goldstino field and ignored the piece suppressed by the small $m_{3/2}$. Also ignored are the renormalization effects between the weak scale and $\Lambda_{\chi}$. The hadronic matrix elements of the above $B=S=-1$ operator would be described by an effective chiral lagrangian including the Goldstino field. Let us consider a chiral operator ${\cal O}_{\chi}$ which would induce $p\rightarrow G+K^+$ as a low energy realization of the light quark operator ${\cal O}_{\rm eff}$ below $\Lambda_{\chi}$. Obviously it can be written as ${\cal O}_{\chi}=Z^{\mu}(1-\gamma_5) \partial^{\mu}G$ where $Z^{\mu}$ is a fermionic $B=S=-1$ operator including $\bar{P}$ and $K^{+}$. If $Z^{\mu}$ does not include any spacetime derivative, ${\cal O}_{\chi}$ is suppressed by the small factor $m_{3/2}/m_p$ (for an on-shell Goldstino) where $m_p$ denotes the proton mass. For $Z^{\mu}$ containing a single spacetime derivative, we have $${\cal O}_{\chi}= \frac{\lambda^{\prime\prime}_{112}\,\xi}{m_0^2\,\Lambda_S^2}\, (\bar{P}(1-\gamma_5)\gamma^{\mu}G)\,\partial_{\mu}K^+,$$ where again the equations of motion are used together with $m_{3/2}\ll m_p$. To estimate the size of the hadronic coefficient $\xi$, we use the naive dimensional analysis (NDA) rule of Ref. [@manohar], yielding $$|\xi|\simeq 4\pi f_{\pi}^2,$$ where $f_{\pi}=93$ MeV is the pion decay constant. In fact, the NDA rule gives $\Lambda_{\chi}=4\pi f_{\pi}$, and then the typical energy in the proton decay, i.e. $m_p$, is comparable to $\Lambda_{\chi}$. This means that, within the NDA rule, chiral operators with more spacetime derivatives are as important as the operator of Eq. (6). However, for an order of magnitude estimate of the hadronic matrix element, the consideration of $Z^{\mu}$ with a single derivative would be enough. Then applying the experimental limit on $p\rightarrow K^++\nu$ for $p\rightarrow K^++G$ induced by the interaction of Eq.
(6), we find the following constraint on the $R$ parity and $B$ violating coupling: $$\lambda^{\prime\prime}_{112}\leq 5\times 10^{-16} \left(\frac{m_0}{1\,{\rm TeV}}\right)^{2} \left(\frac{m_{3/2}}{1\,{\rm eV}}\right) \left(\frac{4\pi f_{\pi}^2}{|\xi|}\right),$$ which is one of the main results of this paper. Let us now consider the proton decay involving a light axino. Similarly to the case of a light gravitino, we start from the effective lagrangian at scales below the scale $F_a$ of $U(1)_{PQ}$ breaking but above $m_{\rm soft}$: $${\cal L}={\cal L}_{\rm SSM}+{\cal L}_A,$$ where the axino lagrangian ${\cal L}_A$ can be read off from $$\int d^2\theta d^2\bar{\theta}\, \sum_I\frac{c_{_I}}{F_a}\,(A+A^{\dagger})\,\Phi_I^{\dagger}\Phi_I+ \left\{\int d^2\theta\, \frac{c_a}{16\pi^2 F_a}\, A\,W^aW^a+{\rm h.c.}\right\},$$ where $A=(s+ia)+\sqrt{2}\theta\tilde{a}+F_A\theta^2$ is the axion superfield containing the axion $a$, the saxion $s$ and the axino $\tilde{a}$, while $\Phi_I$ and $W^a$ are the chiral superfields for the SSM matter and gauge multiplets $(\phi_I,\psi_I)$ and $(\lambda^a, F^a_{\mu\nu})$, respectively. Here $c_{_I}$ and $c_a$ are dimensionless real coefficients. For $F_a$ defined as the scale of spontaneous $U(1)_{PQ}$ breaking, the coefficients $c_a$ of the axion coupling to the gauge multiplets are of order unity in general. However, as we will discuss later, the size of the coefficients $c_{_I}$ of the axion coupling to the matter multiplets is somewhat model-dependent. Note that the above lagrangian corresponds to the supersymmetric generalization of the conventional axion effective lagrangian [@kim1]: $${\cal L}_a= \frac{\partial_{\mu}a}{F_a}\sum_I c_{_I}\,\bar{\psi}_{I}\gamma^{\mu}\gamma_5\psi_{I} +\frac{c_a}{16\pi^2}\,\frac{a}{F_a}\,F^{a\mu\nu}\tilde{F}^a_{\mu\nu}.$$ Obviously it is manifestly invariant under the nonlinear $U(1)_{PQ}$ transformation, $A\rightarrow A+ic$ ($c=$ real constant), up to the PQ anomaly. At any rate, the relevant axino lagrangian is given by $${\cal L}_A= \frac{1}{2}i\,\bar{\tilde{a}}\gamma^{\mu}\partial_{\mu}\tilde{a} -\left(i\sum_I\frac{c_{_I}}{F_a}\,\partial_{\mu}\bar{\psi}_{I}\gamma^{\mu}(1+\gamma_5)\tilde{a}\,\phi^{*}_{I} +{\rm h.c.}\right) +\frac{c_a}{16\pi^2 F_a}\left(\bar{\lambda}^a\sigma^{\mu\nu}(1-\gamma_5)\tilde{a}\,F^a_{\mu\nu}+ {\rm h.c.}\right),$$ where $\tilde{a}$ denotes the four-component Majorana axino field.
Again the exchange of the $SU(2)_L$ singlet squarks leads to the following $B=S=-1$ interaction in the effective lagrangian at $\Lambda_{\chi}$: $${\cal O}_{\rm eff}= \frac{\lambda^{\prime\prime}_{112}\,y_{\alpha\beta\gamma}\,c_{\gamma}}{m_0^2\,F_a}\, (\bar{q}_{\alpha}(1-\gamma_5)q^c_{\beta})\,\partial_{\mu}\bar{q}_{\gamma}\gamma^{\mu}(1+\gamma_5)\tilde{a},$$ where $c_{\gamma}$ ($\gamma=u,d,s$) denotes the axino coupling to the supermultiplet containing the $SU(2)_L$ singlet right-handed light quark $q_{\gamma R}$ in Eq. (12), and the squark degeneracy is also assumed. Similarly to the gravitino case, in order to estimate the proton decay rate from the above effective interaction, we consider a chiral operator of the form ${\cal O}_{\chi}=X^{\mu}\gamma_{\mu}(1+\gamma_5) \tilde{a}$ where $X^{\mu}$ is a $B=S=-1$ fermionic current made of $\bar{P}$ and $K^+$ which are on mass-shell. For $X^{\mu}\propto K^+\bar{P}\gamma^{\mu}$, the chiral operator ${\cal O}_{\chi}$ with the smallest number of spacetime derivatives is given by $$\frac{\lambda^{\prime\prime}_{112}\,c_{\gamma}\,\xi_{\gamma}}{m_0^2\,F_a}\, (\bar{P}\gamma^{\mu}(1+\gamma_5)\tilde{a})\,\partial_{\mu}K^+,$$ where the hadronic coefficients $\xi_{\gamma}$ are again determined by the NDA rule as $$|\xi_{\gamma}|\simeq 16\pi^2 f_{\pi}^3.$$ This then leads to the experimental bound on the $R$ parity and $B$ violating coupling as $$\lambda^{\prime\prime}_{112}\leq 7\times 10^{-16} \left(\frac{m_0}{1\,{\rm TeV}}\right)^{2} \left(\frac{F_a}{10^{10}\,{\rm GeV}}\right) \left(\frac{1}{c_{\gamma}}\right),$$ which is another result of this paper. The above constraint from the proton decay involving a light axino depends upon the dimensionless coefficients $c_{\gamma}$ describing the axino coupling to the supermultiplets of the $SU(2)_L$ singlet quarks \[see Eq. (12)\], as well as the axion scale $F_a$. In fact, the size of $c_{\gamma}$ has a certain model-dependence. If the quark superfields carry a nonzero $U(1)_{PQ}$ charge, which would be the case for the supersymmetric extension of the Dine-Fischler-Srednicki-Zhitnitskii (DFSZ) axion model [@dfsz], the coefficients $c_{\gamma}$ would be of order unity in general. However, in hadronic axion models [@kim] in which all SSM fields have a vanishing $U(1)_{PQ}$ charge, the coefficients $c_{\gamma}$ are zero at tree level.
However, the axino-quark couplings are radiatively generated through the axino coupling to the gluon multiplet, yielding $c_{\gamma}\simeq (\alpha_c/\pi)^2\ln(F_a/m_{\rm soft})\simeq 10^{-3}\sim 10^{-2}$ [@kim1]. Thus the constraint for hadronic axion models becomes weaker than that for DFSZ models by a factor of $10^2\sim 10^3$. To conclude, we have considered the proton decay involving a gravitino or axino lighter than the proton. Generic models in which supersymmetry breaking is mediated by gauge interactions contain such a light gravitino. Then the $R$ parity and $B$ violating coupling $\lambda^{\prime\prime}_{112}$ is strongly constrained by the proton stability \[see Eq. (8)\] to be less than about $10^{-15}(m_{3/2}/{\rm eV})$. As for the possibility of a light axino, gauge-mediated supersymmetry breaking models endowed with a global $U(1)_{PQ}$ symmetry generically predict an axino lighter than the proton. Also some supergravity-mediated models can give rise to a light axino, while the gravitino mass in such models is fixed to be of order the weak scale. We find that $\lambda^{\prime\prime}_{112}$ in models with a light axino is constrained \[see Eq. (16)\] to be less than about $10^{-15}(F_a/10^{10} \, {\rm GeV})$ and $10^{-12}(F_a/10^{10} \, {\rm GeV})$ for Dine-Fischler-Srednicki-Zhitnitskii axion models and hadronic axion models, respectively. This work is supported in part by KOSEF Grant 951-0207-002-2 (KC, JSL), KOSEF through CTP of Seoul National University (KC), Programs of Ministry of Education BSRI-96-2434 (KC), and Non Directed Research Fund of KRF (EJC). EJC is a Brain-Pool fellow. D. Chang and W.-Y. Keung, preprint FERMILAB-PUB-961200-T, hep-ph/9608313. For recent gauge-mediated supersymmetry breaking models, see M. Dine and A. E. Nelson, Phys. Rev. [**D48**]{}, 1277 (1993); M. Dine, A. E. Nelson and Y. Shirman, Phys. Rev. [**D51**]{}, 1362 (1995); M. Dine, A. E. Nelson, Y. Nir and Y. Shirman, Phys. Rev. [**D53**]{}, 2658 (1996). R. D. Peccei and H. R.
Quinn, Phys. Rev. Lett. [**38**]{}, 1440 (1977). K. Tamvakis and D. Wyler, Phys. Lett. [**B112**]{}, 451 (1982); J. F. Nieves, Phys. Rev. [**D33**]{}, 1762 (1986). T. Goto and M. Yamaguchi, Phys. Lett. [**B276**]{}, 103 (1992); E. J. Chun, J. E. Kim and H. P. Nilles, Phys. Lett. [**B287**]{}, 123 (1992); E. J. Chun and A. Lukas, Phys. Lett. [**B357**]{}, 43 (1995). K. Rajagopal, M. S. Turner and F. Wilczek, Nucl. Phys. [**B358**]{}, 447 (1991). P. Fayet, Phys. Lett. [**B70**]{}, 461 (1977); Phys. Lett. [**B86**]{}, 272 (1979); Phys. Lett. [**B175**]{}, 471 (1986). A. Manohar and H. Georgi, Nucl. Phys. [**B234**]{}, 239 (1984); H. Georgi and L. Randall, Nucl. Phys. [**B276**]{}, 41 (1986). See for instance, J. E. Kim, Phys. Rep. [**150**]{}, 1 (1987). A. P. Zhitnitskii, Sov. J. Nucl. Phys. [**31**]{}, 260 (1980); M. Dine, W. Fischler and M. Srednicki, Phys. Lett. [**B104**]{}, 199 (1981). J. E. Kim, Phys. Rev. Lett. [**43**]{}, 103 (1979); M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, Nucl. Phys. [**B166**]{}, 493 (1980).
--- abstract: 'Let $F$ be a self-similar set on ${\mathbb{R}}$ associated to contractions $f_j(x) = r_j x + b_j$, $j \in {\mathcal{A}}$, for some finite ${\mathcal{A}}$, such that $F$ is not a singleton. We prove that if $\log r_i / \log r_j$ is irrational for some $i \neq j$, then $F$ is a set of multiplicity, that is, trigonometric series are not in general unique in the complement of $F$. No separation conditions are assumed on $F$. We establish our result by showing that every self-similar measure $\mu$ on $F$ is a Rajchman measure: the Fourier transform $\widehat{\mu}(\xi) \to 0$ as $|\xi| \to \infty$. The rate of $\widehat{\mu}(\xi) \to 0$ is also shown to be logarithmic if $\log r_i / \log r_j$ is diophantine for some $i \neq j$. The proof is based on quantitative renewal theorems for random walks on ${\mathbb{R}}$.' address: - 'Institut de Mathématiques de Bordeaux, Université Bordeaux 1, 351 cours de la Libération, 33 405 Talence Cedex, France' - 'School of Mathematics, Alan Turing Building, University of Manchester, Oxford Road, Manchester, UK' author: - Jialun Li - Tuomas Sahlsten title: 'Trigonometric series and self-similar sets' --- Introduction and the main result ================================ The *uniqueness problem* in Fourier analysis that goes back to Riemann [@Riemann] in 1868 concerns the following question: suppose we have two converging trigonometric series $\sum a_n e^{2\pi i nx}$ and $\sum b_n e^{2\pi i nx}$ with coefficients $a_n,b_n \in {\mathbb{C}}$ such that for “many” $x \in [0,1]$ they agree: $$\begin{aligned} \label{eq:uniq}\sum_{n \in {\mathbb{Z}}} a_n e^{2\pi i n x} = \sum_{n \in {\mathbb{Z}}} b_n e^{2\pi i n x},\end{aligned}$$ then are the coefficients $a_n = b_n$ for all $n \in {\mathbb{Z}}$? For how “many” $x \in [0,1]$ does the agreement need to hold so that $a_n = b_n$ for all $n \in {\mathbb{Z}}$?
If we assume \[eq:uniq\] holds *for all* $x \in [0,1]$, then using Toeplitz operators Riemann [@Riemann] proved that indeed $a_n = b_n$ for all $n \in {\mathbb{Z}}$. However, it would be interesting to see how small the set of $x \in [0,1]$ where \[eq:uniq\] holds can be so that we still have $a_n = b_n$ for all $n \in {\mathbb{Z}}$. Motivated by this, one defines that a subset $F \subset [0,1]$ is a *set of uniqueness* if whenever we have coefficients $a_n,b_n \in {\mathbb{C}}$, $n \in {\mathbb{Z}}$, such that \[eq:uniq\] holds for all $x \in [0,1] \setminus F$, then $a_n = b_n$ for all $n \in {\mathbb{Z}}$. One also defines that if $F$ is not a set of uniqueness, then it is called a *set of multiplicity*. In particular, by Riemann’s result this shows that the empty set ${\varnothing}$ is a set of uniqueness and so $[0,1]$ is a set of multiplicity. Cantor [@Cantor] proved that every closed countable set is a set of uniqueness, and later Young [@Young] generalised this to every countable set. In the uncountable case, however, even if assuming $F$ is very small, uniqueness of $F$ may fail: Menshov [@Menshov] constructed a set $F$ of Lebesgue measure $0$ which is a set of multiplicity, that is, the uniqueness problem fails if we only assume \[eq:uniq\] holds for all $x \in [0,1] \setminus F$. This can be proved using the following criterion, which goes back to Salem [@Salem]: if a set $F$ supports a Borel probability measure $\mu$ such that the *Fourier transform* $$\widehat{\mu}(\xi) := \int e^{-2\pi i \xi x} \, d\mu(x), \quad \xi \in {\mathbb{R}},$$ satisfies $\widehat{\mu}(n) \to 0$ as $|n| \to \infty$, $n \in {\mathbb{Z}}$, then $F$ is a set of multiplicity. Such measures $\mu$ are called *Rajchman measures* in the literature. Hence constructing measures $\mu$ with decaying Fourier coefficients provides a way to check whether $F$ is of multiplicity.
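Salem's criterion can be probed numerically in the model case. The sketch below is our own illustration (not from the paper): for the middle-third Cantor measure, the self-similarity relation gives the product formula $|\widehat{\mu}(\xi)| = \prod_{k \geq 1}|\cos(2\pi\xi/3^k)|$, and along the frequencies $\xi = 3^m$ the product stabilises at a nonzero constant, so $\widehat{\mu}(n) \not\to 0$ and this route cannot certify multiplicity for $C_{1/3}$.

```python
import math

def cantor_ft_abs(xi, depth=60):
    """|mu-hat(xi)| for the middle-third Cantor measure via the truncated
    product formula |mu-hat(xi)| = prod_{k>=1} |cos(2*pi*xi / 3**k)|."""
    prod = 1.0
    for k in range(1, depth + 1):
        prod *= abs(math.cos(2.0 * math.pi * xi / 3**k))
    return prod

# Along xi = 3^m the first m factors are |cos(2*pi*integer)| = 1, so the
# value equals the same tail product for every m: no decay along this sequence.
vals = [cantor_ft_abs(3**m) for m in range(1, 9)]   # all approximately 0.37
```

This non-decay along a lacunary sequence is exactly the obstruction that disappears once two contraction ratios are multiplicatively incommensurable.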
It remains an open problem to classify which uncountable sets $F$ are of multiplicity and which $F$ are of uniqueness, and much work has been done in many examples of $F$ on trying to establish their uniqueness or multiplicity. In a series of works Salem [@Salem] proved that the middle third Cantor set $C_{1/3}$ is a set of uniqueness. More generally, Salem established that if $C_\lambda$ is the middle $\lambda$-Cantor set with $0 < \lambda < 1/2$, that is, an interval of length $1-2\lambda$ is removed from the center of $[0,1]$ at every construction stage, then $C_\lambda$ is a set of uniqueness when $\lambda^{-1}$ is a Pisot number. In the opposite case, if $\lambda^{-1}$ is not a Pisot number, by constructing a Rajchman measure on $C_\lambda$, Piatetski-Shapiro [@PS], Salem and Zygmund [@SZ] established that $C_\lambda$ is a set of multiplicity. The Cantor set $C_\lambda$ is an example of a *self-similar set*. Recall that a subset $F \subset [0,1]$ is self-similar if there exist similitudes $f_j : [0,1] \to [0,1]$, that is, $f_j(x) = r_j x + b_j$, $j \in {\mathcal{A}}$, for some finite set ${\mathcal{A}}$, translations $b_j \in {\mathbb{R}}$ and contractions $0 < r_j < 1$ such that $$F = \bigcup_{j \in {\mathcal{A}}} f_j (F).$$ As far as we know, nothing is known about the uniqueness or multiplicity of self-similar sets beyond the case of $C_\lambda$, or the case obtained by adding finitely many more similitudes with the same contraction ratio $\lambda$ to the definition, which was done by Salem [@Salem]. For example, if we have two different contractions $r_0 = 1/2$ and $r_1 = 1/3$ for the iterated function system, do we expect $F$ to be of multiplicity or of uniqueness? Due to the common contraction ratio $\lambda$, the case $C_\lambda$ has a convolution structure, which is helpful when connecting to the algebraic properties of the number $\lambda$. In the general case, however, we would need to find a way out of this.
It turns out that the algebraic properties of the additive subgroup $\Gamma$ generated by the log-contraction ratios $\{-\log r_j : j \in {\mathcal{A}}\}$ in ${\mathbb{R}}$ are important in the study of the multiplicity of a self-similar set $F$ with contraction ratios $r_j$. In particular, if this subgroup $\Gamma$ is dense, which happens when $\log r_j / \log r_\ell$ is irrational for some $j \neq \ell$ (e.g. $r_j = 1/2$ and $r_\ell = 1/3$), we can establish that $F$ is a set of multiplicity. \[thm:multi\] Let $F \subset [0,1]$ be a self-similar set associated to contractions $f_j(x) = r_j x + b_j$, $j \in {\mathcal{A}}$, such that $F$ is not a singleton. If $\log r_j / \log r_\ell$ is irrational for some $j \neq \ell$, then $F$ is a set of multiplicity. Notice that by assuming $\log r_j / \log r_\ell$ is irrational we exclude the case of $C_\lambda$, as in that case every ratio of logarithms of the contractions is just $1$. It remains an open problem to study the case when $\log r_j / \log r_\ell \in {\mathbb{Q}}$ for all $j \neq \ell$. We predict that here typically $F$ should be a set of uniqueness unless all the contraction ratios are equal, like the case $C_\lambda$, and then an algebraic number theoretic condition like $\lambda^{-1}$ being Pisot needs to be imposed. In order to prove the multiplicity of a self-similar set $F$, it is enough, by Salem’s criterion [@Salem] for multiplicity, to find a Rajchman measure supported on $F$. Hence Theorem \[thm:multi\] follows by establishing that all positive dimensional self-similar measures on $F$ are Rajchman measures. Recall that a probability measure $\mu$ on ${\mathbb{R}}$ is called *self-similar* if there exists a finite collection $\{f_j : j \in {\mathcal{A}}\}$ of similitudes of ${\mathbb{R}}$ with at least two maps and weights $0 < p_j < 1$, $j \in {\mathcal{A}}$, with $\sum_{j \in {\mathcal{A}}} p_j = 1$ such that $\mu = \sum_{j \in {\mathcal{A}}} p_j f_j \mu$.
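The relation $\mu = \sum_j p_j f_j \mu$ immediately gives $\widehat{\mu}(\xi) = \sum_j p_j e^{-2\pi i \xi b_j}\, \widehat{\mu}(r_j \xi)$, which can be iterated to evaluate $\widehat{\mu}$ numerically. The sketch below is our own illustration; the system $f_0(x) = x/2$, $f_1(x) = x/3 + 2/3$ with equal weights is an arbitrary example with $\log r_0 / \log r_1 = \log 2/\log 3$ irrational.

```python
import cmath

MAPS = [(0.5, 0.0), (1.0 / 3.0, 2.0 / 3.0)]  # (r_j, b_j), an arbitrary example
WEIGHTS = [0.5, 0.5]

def mu_hat(xi, eps=1e-3):
    """Fourier transform of the self-similar measure mu = sum_j p_j f_j mu,
    iterating mu-hat(xi) = sum_j p_j exp(-2*pi*i*xi*b_j) * mu-hat(r_j*xi)
    until the frequency is small, where mu-hat(xi) ~ mu-hat(0) = 1."""
    if abs(xi) < eps:
        return 1.0 + 0.0j
    return sum(p * cmath.exp(-2j * cmath.pi * xi * b) * mu_hat(r * xi, eps)
               for p, (r, b) in zip(WEIGHTS, MAPS))

values = [abs(mu_hat(xi)) for xi in (10.0, 100.0, 1000.0)]
```

Since $\mu$ is a probability measure, $|\widehat{\mu}(\xi)| \leq 1$ always, and the recursion preserves the symmetry $\widehat{\mu}(-\xi) = \overline{\widehat{\mu}(\xi)}$; note that no separation condition on the maps is needed for this evaluation, matching the absence of separation assumptions in the theorems.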
\[thm:main\] Let $F \subset [0,1]$ be a self-similar set associated to contractions $f_j(x) = r_j x + b_j$, $j \in {\mathcal{A}}$, such that $F$ is not a singleton. If $\log r_j / \log r_\ell$ is irrational for some $j \neq \ell$, then the Fourier transform $\widehat{\mu}(\xi) \to 0$ as $|\xi| \to \infty$ for every self-similar measure $\mu$ on $F$. Theorem \[thm:main\] is closely related to a currently active problem in the community of fractal geometry, where we would like to understand the Fourier transforms of fractal measures; see the book [@Mattila] by Mattila for a history and overview. In particular there are various past and recent works on random fractals by Kahane [@KahaneImage; @KahaneLevel], Shmerkin and Suomala [@ShmerkinSuomala] and others [@FOS; @FS], connections to Diophantine approximation by Kaufman *et al.* [@Kaufman1; @Kaufman2], dynamical systems [@JordanSahlsten; @SahlstenStevens] and additive combinatorics [@Bourgain2010; @LP09]. Analysing the spectrum of fractal measures has been particularly important in finding normal numbers from the support of fractals [@HochmanShmerkinEquidistribution; @QetR; @DEL] and in the study of harmonic analysis defined by fractal measures; see for example applications to the spectrum of convolution operators defined by fractal measures in the work of Sarnak [@Sarnak] and later by Sidorov and Solomyak [@SidorovSolomyak], and more recently applications to quantum resonances in quantum chaos by Bourgain and Dyatlov [@BourgainDyatlov]. The study of Fourier transforms of self-similar measures in general goes back to the works of Strichartz [@Strichartz1; @Strichartz2], where an average decay of the Fourier transform $\widehat{\mu}(\xi)$ of self-similar measures $\mu$ is obtained, with a proportion of frequencies $\xi \in {\mathbb{R}}$ excluded. More recently a large deviation estimate for these average decays was proved by Tsujii [@Tsujii].
However, the methods here cannot be used to obtain a full decay over all $|\xi| \to \infty$. Before Theorem \[thm:main\], the only self-similar measures $\mu$ for which $\widehat{\mu}(\xi) \to 0$ as $|\xi| \to \infty$ was known to hold were the self-similar measures on the middle $\lambda$-Cantor sets $C_\lambda$, $0 < \lambda < 1/2$, by Salem [@Salem], Piatetski-Shapiro [@PS], Salem and Zygmund [@SZ], and, in the overlapping case, the *Bernoulli convolutions* $\mu_\beta$, $1 < \beta < 2$, which are the distributions of the random sums $\sum \pm \beta^{-k}$ with i.i.d. chosen signs. For Bernoulli convolutions Fourier transforms play an important role, as proving that $\widehat{\mu}_\beta(\xi)$ has power decay as $|\xi| \to \infty$ implies $\mu_\beta$ is absolutely continuous, which is a well-known open problem in the field, see Shmerkin [@ShmerkinGAFA]. It is known by the results of Erdös [@Erdos] and Kahane [@Kahane] that the set of $1 < \beta < 2$ such that $\mu_\beta$ does not have a power decay has Hausdorff dimension zero. Moreover, by Erdös [@Erdos] and Salem [@Salem], $\widehat{\mu}_\beta(\xi) \to 0$ as $|\xi| \to \infty$ if and only if $\beta$ is not a Pisot number. In the non-Pisot case the rate of convergence was later shown to be logarithmic for rational numbers $\beta$ or some algebraic numbers $\beta$ by Dai [@Dai1] and Bufetov and Solomyak [@Bufetov], and some power decay for algebraic numbers $\beta$ has been obtained by Dai, Feng and Wang [@Dai2]. Notice that in Theorem \[thm:multi\] and Theorem \[thm:main\] there can be any type of overlaps for the maps $f_j$ and no separation conditions are assumed. Typically in the overlapping case the analysis of self-similar sets and measures can be notoriously difficult to carry out; for instance, computing their Hausdorff dimension has required some deep connections to additive combinatorics, see for example the recent works of Hochman [@Hochman], Breuillard-Varju [@BV] and Varju [@V].
The reason overlaps do not cause us any issues is the fact that the main contribution to the Fourier decay comes from controlling the distribution of the lengths of the construction intervals, and not their relative positions. Understanding the distribution of the lengths of the construction intervals can then be reduced to studying renewal theory for the random walk $X_1,X_2,\dots$ on ${\mathbb{R}}$ driven by $\lambda = \sum_{j \in {\mathcal{A}}} p_j \delta_{-\log r_j}$. This strategy to establish Fourier decay is similar to what was done in the case of the stationary measure for group actions by the first author in [@Li1]. In the self-similar case we consider, however, the proof is much more straightforward and we can see the idea governing the Fourier decay more clearly. The irrationality of $\log r_i / \log r_j$ is the key to proving that the random walk is *non-lattice*, that is, not concentrated on an arithmetic progression, which is a key assumption for the renewal theorem we use. To obtain a rate of convergence in Theorem \[thm:main\] with the strategy we present in this paper, one needs a rate of convergence for the renewal theorems we use. Here it is well known that the diophantine properties of the random walk become essential, in particular, how well $\log r_i / \log r_j$ is approximated by rationals. Following the terminology of Diophantine approximation, an irrational real number $a \in {\mathbb{R}}$ is called *diophantine* if for some $c> 0$ and $l>2$ we have $$\Big|a - \frac{p}{q}\Big| \geq \frac{c}{q^l}$$ for all $p\in {\mathbb{Z}}$ and $q\in{\mathbb{N}}^*$. This happens for example when $a = \log 2 / \log 3$ or in general for $a = \log p / \log q$ for coprime $p,q$, see Baker [@Baker]. Having some diophantine $\log r_i / \log r_j$ in the iterated function system forces the random walk generated by the contractions to quantitatively avoid lattices, which gives quantitative rates for the renewal theorem.
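The diophantine property of $a = \log 2/\log 3$ can be probed numerically through its continued-fraction convergents $p/q$, which are the best rational approximations: the quantity $q^2|a - p/q|$ stays bounded below in the range accessible to double precision. A small sketch (the lower bound $10^{-4}$ and the cutoff $q \leq 10^5$ are illustrative choices, not the constants of Baker's theorem):

```python
import math
from fractions import Fraction

def convergents(x, n):
    """First n continued-fraction convergents of x > 0 as Fractions."""
    out, quotients, y = [], [], x
    for _ in range(n):
        quotients.append(math.floor(y))
        # evaluate [a0; a1, ..., ak] from the bottom up
        value = Fraction(quotients[-1])
        for a in reversed(quotients[:-1]):
            value = a + 1 / value
        out.append(value)
        if y - quotients[-1] < 1e-12:
            break
        y = 1 / (y - quotients[-1])
    return out

a = math.log(2) / math.log(3)
# q^2 * |a - p/q| for convergents with moderate denominators: bounded below,
# reflecting that a is badly approximable only to a limited degree.
gaps = [c.denominator ** 2 * abs(a - float(c))
        for c in convergents(a, 14) if c.denominator <= 10 ** 5]
```

The smallest values of `gaps` correspond to the large partial quotients of $\log 2/\log 3$, which is exactly what a quantitative renewal theorem has to pay for.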
Under this condition, we can improve Theorem \[thm:main\] in the following way: \[thm:mainquantitative\] Let $F \subset [0,1]$ be a self-similar set associated to contractions $f_j(x) = r_j x + b_j$, $j \in {\mathcal{A}}$, such that $F$ is not a singleton. If $\log r_j / \log r_\ell$ is diophantine for some $j \neq \ell$, then for some $\alpha > 0$ we have $$|\widehat{\mu}(\xi)| = O\Big(\frac{1}{|\log |\xi||^{\alpha}}\Big), \quad |\xi| \to \infty$$ for every self-similar measure $\mu$ on $F$. Removing the irrationality of the ratios of the logarithms of the contraction ratios makes the random walk $X_1,X_2,\dots$ on ${\mathbb{R}}$ driven by $\lambda = \sum_{j \in {\mathcal{A}}} p_j \delta_{-\log r_j}$ *lattice*, that is, concentrated on an arithmetic progression. Then the renewal theorems no longer hold in the same form, and in fact the Fourier transform may not even decay at infinity, as shown by the middle-$1/3$ Cantor measure. However, when $\beta$ is not Pisot, the Bernoulli convolution $\mu_\beta$ associated to $\beta$ provides an example of a measure whose Fourier transform does decay at infinity, even with polynomial rate for some algebraic $\beta$, while the additive random walk on ${\mathbb{R}}$ generated by $\log \beta$ is lattice. Hence it would be interesting to develop the connection to renewal theory further and find a full classification of self-similar sets $F$ which are of uniqueness and which are of multiplicity. In this paper we considered the self-similar case, but if we require the maps $f_j$ to be suitably nonlinear, such as the inverse branches of the Gauss map $x \mapsto 1/x {\,\,\mathrm{mod\,}}1$, and study the Fourier transforms of self-conformal measures $\mu$, then the rates in Theorem \[thm:mainquantitative\] can be improved to power decay, see for example the works [@JordanSahlsten; @BourgainDyatlov; @SahlstenStevens; @Li2].
Here the non-lattice condition on the contractions $-\log r_j$ is replaced by a non-concentration condition on the log-derivatives of the iterates $-\log (f_{j_1} \circ \dots \circ f_{j_n})'(x)$ as $n \to \infty$, and these types of conditions appear in the Fourier decay properties of multiplicative convolutions in the discretised sum-product theory developed by Bourgain [@Bourgain2010]. What about the higher dimensional case? Here the analogue of Theorem \[thm:main\] and Theorem \[thm:mainquantitative\] would be to understand Fourier transforms $\widehat{\mu}$ of *self-affine measures* $\mu$ on ${\mathbb{R}}^d$. They are measures on ${\mathbb{R}}^d$ associated to affine contractions $f_j(x) = A_j x + b_j$ of ${\mathbb{R}}^d$, $j \in {\mathcal{A}}$, for some finite set ${\mathcal{A}}$, where $b_j \in {\mathbb{R}}^d$ and $A_j \in \mathrm{GL}(d,{\mathbb{R}})$ such that $$\mu = \sum_{j \in {\mathcal{A}}} p_j f_j \mu$$ for some weights $0 < p_j < 1$, $j \in {\mathcal{A}}$, with $\sum_{j \in {\mathcal{A}}} p_j = 1$. In a follow-up paper [@LSaffine], we apply a strategy similar to the one in this paper by considering renewal theory for random walks on the group $\mathrm{GL}(d,{\mathbb{R}})$ coming from $\{A_j : j \in {\mathcal{A}}\}$ to establish Fourier decay for self-affine measures. The renewal theory we need has been developed recently by the first author in [@Li2]. Here the non-lattice condition can be replaced by an irreducibility and proximality assumption on the subgroup $\Gamma$ generated by $\{A_j : j \in {\mathcal{A}}\}$, as in the recent work of Bárány, Hochman and Rapaport [@BHR] on the computation of the Hausdorff dimension of self-affine measures on ${\mathbb{R}}^2$.
Moreover, due to the better rates for quantitative renewal theorems for random walks in real split groups [@Li2], that is, when the Zariski closure of $\Gamma$ is ${\mathbb{R}}$-split, we can improve the rates for the Fourier decay of $\mu$ to power decay.\ **Organisation of the paper.** The article is organised as follows. In Section \[sec:renewal\] we give the quantitative renewal theorems we need for our results and then prove them in Section \[secrentheory\]. Then in Section \[sec:proofs\] we give the proof of Theorem \[thm:main\], which implies Theorem \[thm:multi\] on the multiplicity of self-similar sets, and also in Section \[sec:proofs\] we prove the quantitative Theorem \[thm:mainquantitative\] using the quantitative estimates for the renewal theorem established in Section \[sec:renewal\]. Quantitative renewal theorems for random walks in ${\mathbb{R}}$ {#sec:renewal} ================================================================ The proof of Theorem \[thm:main\] and the quantitative version Theorem \[thm:mainquantitative\] rely on quantitative renewal theorems for random walks on ${\mathbb{R}}$, which we will give in this section. We will first fix some notation: for two real functions $f$ and $g$, we write $f=O(g), f\ll g$ or $g\gg f$ if there exists a constant $C>0$ such that $|f|\leq Cg$, where $C$ only depends on the measure $\mu$. We write $f=O_{\varepsilon}(g)$ or $ f\ll_{\varepsilon}g$ if the constant $C$ depends on an extra parameter ${\varepsilon}$. Let $\lambda$ be a probability measure on ${\mathbb{R}}^+$ with finite support and let $|\operatorname{supp}\lambda|$ be the maximum of the support of $\lambda$. Let $\sigma$ be the expectation of $\lambda$. We call $\lambda$ *non-lattice* if the support of $\lambda$ generates a dense additive subgroup of ${\mathbb{R}}$.
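The lattice/non-lattice dichotomy just defined is easy to observe numerically: with steps $\log 2$ and $\log 3$ the sums $a\log 2 + b\log 3$ fill the line densely, while with steps $\log 2$ and $\log 4$ everything stays on the lattice $(\log 2){\mathbb{Z}}$. A small sketch (the window and the gap threshold are arbitrary choices):

```python
import math

def subgroup_points(x1, x2, n):
    """Fractional parts (a*x1 + b*x2) mod x1 for 0 <= b <= n; a*x1 vanishes
    mod x1, and the group generated by x1, x2 is dense iff x2/x1 is irrational."""
    return sorted((b * x2) % x1 for b in range(n + 1))

dense = subgroup_points(math.log(2), math.log(3), 500)   # non-lattice case
flat  = subgroup_points(math.log(2), math.log(4), 500)   # lattice case

# In the non-lattice case the points leave no large gap in [0, log 2);
# in the lattice case every point is (numerically) 0 mod log 2.
max_gap = max(b - a for a, b in zip(dense, dense[1:]))
```
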
In our case of self-similar measures associated to an iterated function system $f_j (x) = r_j x + b_j$ and weights $\sum_{j \in {\mathcal{A}}} p_j = 1$, we will apply this in the case of $$\lambda = \sum_{j \in {\mathcal{A}}} p_j \delta_{-\log r_j}$$ which is non-lattice as long as the additive subgroup generated by $-\log r_j$, $j \in {\mathcal{A}}$, is dense in ${\mathbb{R}}$. Let now $X_1,X_2,\dots$ be an i.i.d. sequence of random variables with distribution $\lambda$. Write for $n \in {\mathbb{N}}$ the sum $$S_n := X_1+X_2+\dots+X_n.$$ Thus $S_n$ has the distribution $\lambda^{\ast n}$, where $\lambda^{\ast n} = \lambda^{\ast (n-1)} \ast \lambda$, $n \geq 2$, is the iterated convolution with $\lambda^{\ast 1} := \lambda$. We define for $t > 0$ the *stopping time* $$n_t := \inf\{n \in {\mathbb{N}}: S_n \geq t\},$$ and we define $S_t := S_{n_t}$. The main intuition is that if $\lambda$ is non-lattice, then the residue distribution $S_t-t$ will converge to a distribution absolutely continuous with respect to the Lebesgue measure when $t$ tends to infinity. \[prop:renewal\] If $\lambda$ is non-lattice, then we have for $t>|\operatorname{supp}\lambda|+1>0$ and any $C^1$ function $g$ on ${\mathbb{R}}$, $${\mathbb{E}}_t (g(S_t-t))=\frac{1}{\sigma}\int_{{\mathbb{R}}^+}g(x)p(x)d x+o_t| g|_{C^1},$$ where $o_t$ tends to zero as $t$ tends to $\infty$ and where $p(x)=\int_{y>x}d\lambda(y)$ is a piecewise constant function which vanishes beyond the support of $\lambda$. Let ${\varrho}$ be a smooth cutoff such that ${\varrho}_{[-|\operatorname{supp}\lambda|,|\operatorname{supp}\lambda|]}=1$ and which vanishes outside of $[-|\operatorname{supp}\lambda|-1,|\operatorname{supp}\lambda|+1]$. Take $f(v,u)=g(v+u){\varrho}(v){\varrho}(u)$. Then $f(v,u)=g(v+u)$ when $v,u$ are in the interval $[-|\operatorname{supp}\lambda|,|\operatorname{supp}\lambda|]$.
By definition, we have $${\mathbb{E}}_t(g(S_t-t))=\Cut f(t).$$ This function $f$ satisfies the conditions in Proposition \[prop:rescut\], and the proof is complete by using Proposition \[prop:rescut\]. If we want to apply the Diophantine condition on the ratios of the logarithms, we need a rate. This can be obtained as follows. For a probability measure $\lambda$ on ${\mathbb{R}}$ and $l$ in ${\mathbb{R}}^+$, we say that $\lambda$ is *$l$-weakly diophantine* if $$\liminf_{|b|\rightarrow \infty}|b|^l|1-{\mathcal{L}}\lambda(ib)|>0,$$ where ${\mathcal{L}}\lambda$ is the *Laplace transform* of $\lambda$, defined for $z \in {\mathbb{C}}$ by the formula $${\mathcal{L}}\lambda(z)=\int e^{zx}d\lambda(x).$$ More generally, we say that $\lambda$ is weakly diophantine if it is $l$-weakly diophantine for some $l \in {\mathbb{R}}^+$. This definition can be found in [@Boyer]. If there exist $r_j, r_k$ for $j,k\in{\mathcal{A}}$ such that $\log r_j/\log r_k$ is diophantine, then the measure $\lambda$ is weakly diophantine. We have that $$\begin{aligned} &|1-{\mathcal{L}}\lambda(ib)|\geq |{\operatorname{Re}}(p_j(1-e^{-ib\log r_j})+p_k(1-e^{-ib\log r_k}))|\\ &\gg \max\{d(b\log r_j ,2\pi{\mathbb{Z}})^2, d(b\log r_k,2\pi {\mathbb{Z}})^2 \}\gg \max\{d(b_1,{\mathbb{Z}})^2,d(b_1\frac{\log r_k}{\log r_j},{\mathbb{Z}})^2 \}, \end{aligned}$$ with $b_1=b\log r_j/2\pi$. By the definition of a diophantine number, we obtain that for some $l\in{\mathbb{N}}$ $$\max\{d(b_1,{\mathbb{Z}})^2,d(b_1\frac{\log r_k}{\log r_j},{\mathbb{Z}})^2 \}\gg |b_1|^{-2l}.$$ Combining the above two inequalities, we know that the measure $\lambda$ is weakly diophantine. \[prop:renewalquantitative\] If the measure $\lambda$ is $l$-weakly diophantine, then for $t>|\operatorname{supp}\lambda|+1$ we have $${\mathbb{E}}_t (g(S_t-t))=\frac{1}{\sigma}\int_{{\mathbb{R}}^+}g(x)p(x)d x+O(t^{-1/(4l+1)})|g|_{C^1}.$$ We need to use the weakly diophantine condition to give an estimate of the error term in Proposition \[prop:rescut\].
Consider the supremum of the norm of $\frac{1}{1-{\mathcal{L}}\lambda(i\xi)}+\frac{1}{\sigma}\frac{1}{i\xi}$ and of its derivative $$\partial_\xi(\frac{1}{1-{\mathcal{L}}\lambda(i\xi)}+\frac{1}{\sigma}\frac{1}{i\xi})=\frac{-\partial_\xi{\mathcal{L}}\lambda(i\xi)}{(1-{\mathcal{L}}\lambda(i\xi))^2}-\frac{1}{\sigma}\frac{1}{i\xi^2},$$ on the interval $[-C_\psi\delta^{-2},C_\psi\delta^{-2}]$. By the definition of $l$-weakly diophantine, this supremum is less than $C\delta^{-4l}$. Then by Proposition \[prop:rescut\] $$O_\delta\leq C\delta^{-4l}.$$ Then take $\delta=t^{-1/(4l+1)}$. The proof is complete. Proof of the Fourier decay {#sec:proofs} ========================== Dimension theory and symbolic notations --------------------------------------- Let us write ${\mathcal{A}}^*$ for the space of all words $w$ with entries in ${\mathcal{A}}$ of finite length. Moreover, ${\mathcal{A}}^n$ is the space of all words of length $n$ with entries in ${\mathcal{A}}$. If $w = w_1w_2\dots w_n \in {\mathcal{A}}^n$, define the composition $$f_w := f_{w_1} \circ \dots \circ f_{w_n}.$$ Then $f_w$ is again a similitude with contraction ratio $$r_w := r_{w_1} \dots r_{w_n}.$$ Using this notation the self-similarity of $\mu$ implies that $$\mu = \sum_{w \in {\mathcal{A}}^n} p_w f_w \mu,$$ where $$p_w := p_{w_1}\dots p_{w_n} > 0$$ is the product of the weights $p_j$, $j \in {\mathcal{A}}$, over the entries of the word $w$. See the book by Falconer [@Falconer1] for more details, notation and history on self-similar sets and measures. Reduction to exponential sums ----------------------------- Given $\xi \in {\mathbb{R}}$ and $t > 0$, the first step is to bound the Fourier transform of $\mu$ by double $\mu$-integrals of exponential sums determined by the stopping time $n_t$. Recall that we defined in Section \[sec:renewal\] for $t > 0$ the stopping time $$n_t := \inf\{n \in {\mathbb{N}}: S_n \geq t\},$$ where $S_n = X_1 + \dots + X_n$ and $X_j$ are i.i.d.
distributed according to $$\lambda = \sum_{j \in {\mathcal{A}}} p_j \delta_{\log(1/r_j)}.$$ Let ${\mathbb{P}}_t$ be the probability distribution on ${\mathcal{A}}^*$ associated to the stopping time and write $${\mathcal{W}}_t := \operatorname{spt}{\mathbb{P}}_t$$ for the support of ${\mathbb{P}}_t$. The reason to use the stopping time here is that we want to use the equidistribution phenomenon of the renewal theorem (Proposition \[prop:renewal\]), which combined with high oscillation can give decay of exponential sums. Later we will make $t$ depend on $\xi$ and let $|\xi| \to \infty$, but for now we keep everything fixed. For every $\xi \in {\mathbb{R}}$ and $t > 0$ we have $$|\widehat{\mu}(\xi)|^2 \leq {\int\hspace{-0.1in}\int}\sum_{w \in {\mathcal{W}}_t} p_w e^{-2\pi i \xi (f_w(x)-f_w(y))} \, d\mu(x) \, d\mu(y).$$ Firstly, by $$\mu = \sum_{j \in {\mathcal{A}}} p_j f_j \mu$$ we see that for any $t > 0$ we can write $$\mu = \sum_{w \in {\mathcal{W}}_t} p_w f_w \mu.$$ The proof of this is similar to [@Li1 Proposition 3.5]. Hence we obtain $$\widehat{\mu}(\xi) = \sum_{w \in {\mathcal{W}}_t} p_w \int e^{-2\pi i \xi f_w(x)} \, d\mu(x).$$ Thus by Cauchy-Schwarz, we have $$|\widehat{\mu}(\xi)|^2 \leq \sum_{w \in {\mathcal{W}}_t} p_w \Big|\int e^{-2\pi i \xi f_w(x)} \, d\mu(x)\Big|^2.$$ Expanding the square, we see that $$\begin{aligned} \sum_{w \in {\mathcal{W}}_t} p_w \Big|\int e^{-2\pi i \xi f_w(x)} \, d\mu(x)\Big|^2 &= {\int\hspace{-0.1in}\int}\sum_{w \in {\mathcal{W}}_t} p_w e^{-2\pi i \xi (f_w(x)-f_w(y))} \, d\mu(x) \, d\mu(y). \end{aligned}$$ Thus to prove Fourier decay, we would need to prove $$\begin{aligned} \label{eq:needed}{\int\hspace{-0.1in}\int}\sum_{w \in {\mathcal{W}}_t} p_w e^{-2\pi i \xi (f_w(x)-f_w(y))} \, d\mu(x) \, d\mu(y) \to 0\end{aligned}$$ as $|\xi| \to \infty$ for a suitable $t = t(\xi) \to \infty$, and if we want a rate for the Fourier decay, we need to control the speed of convergence in \[eq:needed\].
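The fact that ${\mathcal{W}}_t$ carries total weight one, and the equidistribution of the residue $S_t - t$ in Proposition \[prop:renewal\], can both be checked exactly for a concrete system. The sketch below (assuming ratios $r_1 = 1/2$, $r_2 = 1/3$ with equal weights, so steps $\log 2$ and $\log 3$) computes the distribution of $S_t - t$ over the cut ${\mathcal{W}}_t$ by dynamic programming: a word lies in ${\mathcal{W}}_t$ exactly when its full sum crosses $t$ but the sum without the last letter does not, and since the partial sums are increasing, the probability of ever visiting a state with $a$ steps $\log 2$ and $b$ steps $\log 3$ satisfying $a\log 2 + b\log 3 < t$ is $\binom{a+b}{a}2^{-(a+b)}$.

```python
import math
from math import comb

t = 40.0
steps = [(math.log(2), 0.5), (math.log(3), 0.5)]
sigma = sum(p * s for s, p in steps)            # expectation of lambda

residue = {}                                     # value of S_t - t -> probability
a = 0
while a * math.log(2) < t:
    b = 0
    while a * math.log(2) + b * math.log(3) < t:
        s = a * math.log(2) + b * math.log(3)
        # probability the walk ever visits this state (no earlier crossing
        # is possible, since all partial sums are below s < t)
        visit = comb(a + b, a) * 0.5 ** (a + b)
        for step, p in steps:
            if s + step >= t:                    # the next letter crosses t
                r = s + step - t
                residue[r] = residue.get(r, 0.0) + visit * p
        b += 1
    a += 1

total = sum(residue.values())                    # total weight of the cut W_t
below = sum(p for r, p in residue.items() if r < math.log(2))
# renewal prediction: P(S_t - t < log 2) ~ log(2)/sigma as t -> infinity
```

The exact total weight is $1$, and already at this moderate $t$ the residue distribution is close to the limiting density $p(x)/\sigma$.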
In order to do this, we first write $\xi$ in the form $$\xi=se^t$$ with $s\in{\mathbb{R}}$ and $t>0$, and later we will first take $|s|$ large, then take $t > 0$ large enough depending on $s$. Using these parameters, write $$\delta = 1/\sqrt{|s|} > 0.$$ Then define the tube $$A_\delta = \{(x,y) \in {\mathbb{R}}^2: |x-y| \leq \delta\}.$$ We will split into two cases depending on how close $x$ and $y$ are in terms of the $\delta > 0$ defined above. We will prove the following two propositions, Proposition \[prop:nearby\] and Proposition \[prop:far\], which together imply Theorem \[thm:main\]. For the quantitative part, we also need Proposition \[prop:farquant\] to control the rate in \[eq:needed\]. Controlling nearby points {#sec:HY} ------------------------- The first one is on the nearby points $x,y \in {\mathbb{R}}$, that is, those with $|x-y| \leq \delta$, and here is where we use the fact that $F$ is not a singleton. By [@FengLau Proposition 2.2], since $F$ is not a singleton, there exist $r_0 > 0$, $\alpha > 0$ and $C > 0$ such that for all $0 < r < r_0$ and $x\in F$ we have $$\begin{aligned} \label{eq:decay} \mu(B(x,r)) \leq Cr^{\alpha}.\end{aligned}$$ A measure which satisfies this condition is sometimes called a Frostman measure. Using the decay of the $\mu$ measure on balls, we can control the nearby points in the following lemma: \[prop:nearby\] For any $|\xi| = se^{t}$, we have $$\Big|{\int\hspace{-0.1in}\int}_{A_\delta} \sum_{w \in {\mathcal{W}}_t} p_w e^{-2\pi i \xi (f_w(x)-f_w(y))} \, d\mu(x) \, d\mu(y) \Big| \to 0$$ as $|\xi| \to \infty$. First of all, since for all $t > 0$ we have that $$\sum_{w \in {\mathcal{W}}_t} p_w = 1,$$ we can directly bound using the triangle inequality $$\Big|{\int\hspace{-0.1in}\int}_{A_\delta} \sum_{w \in {\mathcal{W}}_t} p_w e^{-2\pi i \xi (f_w(x)-f_w(y))} \, d\mu(x) \, d\mu(y)\Big| \leq (\mu \times \mu) (A_\delta)$$ as $|e^{i\theta}| = 1$ for all $\theta \in {\mathbb{R}}$.
Using Fubini’s theorem we see that $$\begin{aligned} \label{eq:fubini}(\mu \times \mu) (A_\delta) = \int \mu(B(x,\delta)) \, d\mu(x),\end{aligned}$$ thus by \[eq:decay\] the right-hand side converges to $0$ as $\delta \to 0$. Application of the renewal theorem and high oscillations -------------------------------------------------------- In the case when $x,y \in {\mathbb{R}}$ are chosen such that $|x-y| > \delta$, we will use renewal theory to prove the following convergence. \[prop:far\] Suppose $\log r_j / \log r_\ell$ is irrational for some $j \neq \ell$. Then $${\int\hspace{-0.1in}\int}_{{\mathbb{R}}^2 \setminus A_\delta} \sum_{w \in {\mathcal{W}}_t} p_w e^{-2\pi i \xi (f_w(x)-f_w(y))} \, d\mu(x) \, d\mu(y) \to 0$$ as $|\xi| \to \infty$. This convergence is not quantitative; in a later section, adding an extra assumption ($\log r_j / \log r_\ell$ is diophantine) to the renewal theory gives us a quantitative version (Proposition \[prop:farquant\]). By the definition of $f_j$ we have that for all $x,y \in [0,1]$ and $w \in {\mathcal{W}}_t$ the difference $$f_w(x)-f_w(y) = r_w (x-y).$$ Therefore we can write $$e^{-2\pi i \xi (f_w(x)-f_w(y))} = e^{-2\pi i \xi (x-y) r_w}.$$ Recall that we have fixed $s \in {\mathbb{R}}$ and $t > 0$ such that $\xi$ has the form $\xi = se^{t}$. With this $s \in {\mathbb{R}}$, we can define a function $g_{s} : {\mathbb{R}}\to {\mathbb{C}}$ by $$g_s(r) := \exp(-2\pi i s e^{-r}), \quad r \in {\mathbb{R}}.$$ Using $g_s$ we can write for any pair $x,y \in {\mathbb{R}}$ that $$\sum_{w \in {\mathcal{W}}_t} p_w e^{-2\pi i \xi (f_w(x)-f_w(y))} = {\mathbb{E}}_t(g_{s(x-y)}(S_t - t)),$$ where the expectation is with respect to the probability measure ${\mathbb{P}}_t$ determined by the stopping time $n_t$. We will apply the renewal theorem (Proposition \[prop:renewal\]) with the function $g_{s(x-y)}$ in place of the function $g$ there.
Here is where we need to invoke the condition that $\log r_j / \log r_\ell$ is irrational for some $j \neq \ell$. It implies that the i.i.d. random walk $X_1,X_2,\dots$ on ${\mathbb{R}}$ with the distribution $$\lambda = \sum_{j \in {\mathcal{A}}} p_j \delta_{\log(1/r_j)},$$ is non-lattice. Hence we can apply the renewal theorem (Proposition \[prop:renewal\]) to obtain some piecewise continuous function $r \mapsto p(r)$ on ${\mathbb{R}}$ such that for any $C^1$ function $h : {\mathbb{R}}\to {\mathbb{C}}$ we have the following convergence of the expectations: $$\lim_{t \to \infty} {\mathbb{E}}_t(h(S_t - t)) = \int_{\mathbb{R}}h(r) p(r) \, dr.$$ Applying this with $h = g_{s_1}$ gives us $$\lim_{t \to \infty} {\mathbb{E}}_t(g_{s_1}(S_t - t)) = \int_{\mathbb{R}}g_{s_1}(r) p(r) \, dr.$$ If we now look at the right-hand side, since $p(r)$ is a piecewise continuous function, or just integrable, the Riemann-Lebesgue lemma implies that as $|s_1|$ tends to infinity $$\begin{aligned} \label{eq:lim}\int_{\mathbb{R}}g_{s_1}(r) p(r) \, dr \to 0.\end{aligned}$$ However, in our case we only know that $(x,y) \in {\mathbb{R}}^2 \setminus A_\delta$, so $|x-y| > \delta$ and $|s_1|=|s(x-y)|\in[|s|^{1/2},C|s|]$, where $C$ depends on the support of $\lambda$. Thus to be able to use the above convergence \[eq:lim\], we need uniformity for $|s_1|$ in the interval $[|s|^{1/2},C|s|]$, and to make it more effective we use the error term in the renewal theorem, Proposition \[prop:renewal\]. Let us fix ${\varepsilon}> 0$ small enough. Then first choose $s_0 \in {\mathbb{R}}$ such that for all $s_1 \in {\mathbb{R}}$ with $|s_1| \geq |s_0|$ we have $$\Big|\int_{\mathbb{R}}g_{s_1}(r)p(r) \, dr\Big| \leq \frac{{\varepsilon}}{2}.$$ Then we take $t_0$ large enough such that for all $|s_1|\in [|s_0|,C|s_0|^2]$ and $t > t_0$ the error term $$o_t|g_{s_1}|_{C^1}$$ in Proposition \[prop:renewal\] is also less than ${\varepsilon}/2$.
Then for all $g_{s_1}=g_{s(x-y)}$ with $s$ equal to $|s_0|^2$ and $|x-y| > |s|^{-1/2}=|s_0|^{-1}$, we have $|s_1|=|s(x-y)|\in[|s_0|,C|s_0|^2]$. Therefore for all $|\xi|=|s_0|^2e^t>|s_0|^2e^{t_0}$, we will have that $$|{\mathbb{E}}_t(g_{s(x-y)}(S_t - t))| \leq {\varepsilon}$$ for all $(x,y) \in {\mathbb{R}}^2 \setminus A_\delta$. Quantitative rate for Fourier decay ----------------------------------- In order to prove a quantitative rate (Theorem \[thm:mainquantitative\]), we need a rate in Proposition \[prop:far\], for which we give the following. \[prop:farquant\] Suppose $\log r_j / \log r_\ell$ is diophantine for some $j \neq \ell$. Then there exists $\alpha>0$ such that $$\Big|{\int\hspace{-0.1in}\int}_{{\mathbb{R}}^2 \setminus A_\delta} \sum_{w \in {\mathcal{W}}_t} p_w e^{-2\pi i \xi (f_w(x)-f_w(y))} \, d\mu(x) \, d\mu(y)\Big| = O\Big(\frac{1}{|\log |\xi||^{\alpha}}\Big),$$ as $|\xi| \to \infty$. By the quantitative renewal theorem (see Proposition \[prop:renewalquantitative\]), we obtain for some $\alpha > 0$ that $$\Big|{\mathbb{E}}_{t}(g_{s_1}(S_{t} - t)) - \int_{\mathbb{R}}g_{s_1}(r) p(r) \, dr\Big| = O\Big(\frac{s_1}{t^\alpha}\Big).$$ Because the function $p(r)=\int_{x>r}d\lambda(x)$ ($r\geq 0$) is piecewise constant with finitely many discontinuities, the decay rate of the main term, an oscillatory integral, is given by the oscillation (see [@Li1 Lemma 3.8] for more details): $$\Big|\int_{{\mathbb{R}}^+} g_{s_1}(r) p(r)\, dr\Big| =O\Big(\frac{1}{s_1}\Big).$$ Then we take $s=t^{\alpha/2}$, which implies $|s_1|=|s(x-y)|\in [t^{\alpha/4},Ct^{\alpha/2}]$ for $(x,y)\in {\mathbb{R}}^2\backslash A_\delta$. Since $|\xi|=t^{\alpha/2}e^t$, after taking logarithms the rate is $O(\frac{1}{|\log|\xi||^{\alpha/4}})$. Proofs of the renewal theorems {#secrentheory} ============================== Let us now finish the paper by giving the proofs of the renewal theorems Proposition \[prop:renewal\] and Proposition \[prop:renewalquantitative\].
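Before turning to those proofs, the oscillatory-integral bound $|\int_{{\mathbb{R}}^+} g_{s_1}(r)p(r)\,dr| = O(1/s_1)$ used in the proof of Proposition \[prop:farquant\] can be checked numerically. A sketch for the walk with steps $\log 2$ and $\log 3$ and equal weights, where $p(r) = \lambda((r,\infty))$ (the grid size is chosen merely to resolve the oscillation):

```python
import cmath, math

def p(r):
    """p(r) = lambda((r, infinity)) for lambda = (delta_log2 + delta_log3)/2."""
    return (0.5 if r < math.log(2) else 0.0) + (0.5 if r < math.log(3) else 0.0)

def osc_integral(s, n=200_000):
    """Midpoint-rule value of |int_0^{log 3} exp(-2*pi*i*s*e^{-r}) p(r) dr|."""
    h = math.log(3) / n
    return abs(sum(cmath.exp(-2j * math.pi * s * math.exp(-(k + 0.5) * h))
                   * p((k + 0.5) * h) for k in range(n)) * h)

# the modulus shrinks roughly like 1/s as the frequency s grows
v10, v1000 = osc_integral(10), osc_integral(1000)
```
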
This follows proofs similar to those in [@Li1], but we give the proofs for completeness. Recall that $\lambda$ is a finitely supported probability measure on ${\mathbb{R}}^+$. We define a renewal operator $R$ as follows. For a positive bounded Borel function $f$ on $ {\mathbb{R}}$ and a real number $t$, we set $$\begin{aligned} Rf(t)=\sum_{n=0}^{+\infty}\int f(x-t){\mathrm{d}}\lambda^{*n}(x).\end{aligned}$$ Because of the positivity of $f$, this sum is well defined. The classical theory of Blackwell gives us a limit, but a uniform speed of convergence is needed in our application. We will give a proof using the Laplace transform, which fulfills our demands. The renewal theorem will give us an equidistribution phenomenon, where the key input is the non-lattice property. First we give a proof of the renewal theorem for good functions. Then we prove some regularity properties. These will imply a version for the residue process. Laplace transform ----------------- The Laplace transform of a compactly supported probability measure on ${\mathbb{R}}$ is defined by $$\begin{aligned} {\mathcal{L}}\lambda(z)=\int e^{-z x}{\mathrm{d}}\lambda(x).\end{aligned}$$ By the definition of non-lattice, we have \[prop:invtran\] If $\lambda$ is non-lattice, then for any nonzero pure imaginary number $i\xi$, the Laplace transform of $\lambda$ is different from $1$ and $$\label{equ:i-pz} u(i\xi):=\frac{1}{1- {\mathcal{L}}\lambda(i\xi)}-\frac{1}{\sigma i\xi}$$ is holomorphic. Renewal theory for regular functions ------------------------------------ We start by computing the renewal operator; a result for the renewal operator on “good" functions will be proved. Let $f$ be a function on $ {\mathbb{R}}$. Define a norm by $|f|_{L^\infty}=\sup_{\xi\in{\mathbb{R}}}|f(\xi)|$. Define another norm $|f|_{W^{1,\infty}}=|f|_{L^\infty}+|\partial_\xi f|_{L^\infty}$. Write the Fourier transform ${\widehat}f(\xi)=\int e^{i\xi u}f(u){\mathrm{d}}u$.
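The Blackwell-type limit behind the renewal operator can be illustrated by Monte Carlo for $f = {\mathbf{1}}_{[0,1)}$: then $Rf(t)$ is the expected number of indices $n$ with $S_n \in [t, t+1)$, which should approach $1/\sigma$ for large $t$. A sketch (steps $\log 2$, $\log 3$ with equal weights; the seed, $t$ and sample size are arbitrary choices):

```python
import math, random

random.seed(2)
atoms = [math.log(2), math.log(3)]
sigma = sum(atoms) / 2               # expectation of lambda (equal weights)

def visits(t):
    """Number of n >= 0 with S_n - t in [0, 1), i.e. the sum of f(S_n - t)
    for f = 1_{[0,1)} along one trajectory of the walk (S_0 = 0)."""
    s, c = 0.0, 0
    while s < t + 1.0:
        if s >= t:
            c += 1
        s += atoms[0] if random.random() < 0.5 else atoms[1]
    return c

t, trials = 30.0, 50_000
Rf = sum(visits(t) for _ in range(trials)) / trials   # estimates R 1_{[0,1)}(t)
```
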
\[prop:renreg\] Let $f$ be a positive bounded continuous function in $L^1( {\mathbb{R}}, Leb )$ such that its Fourier transform satisfies ${\widehat}f\in L^{\infty}$ and $\partial_\xi {\widehat}f\in L^\infty$. Assume that the projection of $\operatorname{supp}{\widehat}{f}$ onto ${\mathbb{R}}$ is in a compact set $K$. Then for all $t>0$, we have $$Rf(t)=\frac{1}{\sigma}\int_{-t}^{\infty}f(u){\mathrm{d}}u+ \frac{1}{t}O_K(|{\widehat}f|_{W^{1,\infty}}),$$ where $O_K$ is controlled by the norm of $u(i\xi)$ and $\partial_\xi u(i\xi)$ on $K$. The proof combines the following two lemmas. \[lem:rencon\] Under the same assumption as in Proposition \[prop:renreg\], we have $$\begin{aligned} Rf(t)=\frac{1}{\sigma}\int_{t}^{\infty}f(u){\mathrm{d}}u+\frac{1}{2\pi}\int e^{it\xi}u(i\xi){\widehat}{f}(\xi){\mathrm{d}}\xi, \end{aligned}$$ where $u$ is defined in Proposition \[prop:invtran\]. This is a classical computation; for more details please see Lemma 4.6 in [@Li1]. \[lem:renres\] Under the same assumption as in Proposition \[prop:renreg\], we have $$|\int e^{-it\xi}u(i\xi){\widehat}{f}(\xi){\mathrm{d}}\xi|\leq \frac{1}{t}O_K\left(|{\widehat}f|_{L^\infty }+|\partial_\xi{\widehat}f|_{L^\infty }\right).$$ We use the fact that ${\widehat}f(\xi)$ is compactly supported and $|{\widehat}{f}(\xi)|_{L^\infty},\,|\partial_{\xi}{\widehat}{f}(\xi)|_{L^\infty}<\infty$. Then applying integration by parts, we have $$\begin{aligned} \int e^{-it\xi}u(i\xi){\widehat}{f}(\xi){\mathrm{d}}\xi&=\frac{1}{it}\int e^{-it\xi}\partial_{\xi}(u(i\xi){\widehat}{f}(\xi)){\mathrm{d}}\xi\\ &=\frac{1}{it}\int e^{-it\xi}\left(\partial_{\xi}(u(i\xi)){\widehat}{f}(\xi)+u(i\xi)\partial_{\xi}{\widehat}{f}(\xi)\right){\mathrm{d}}\xi. \end{aligned}$$ Since the operator norms of $u(i\xi)$ and $\partial_\xi u(i\xi)$ are uniformly bounded on compact regions, the result follows. Regularity properties of renewal measures {#sec:regular} ----------------------------------------- We want to use convolution to smooth out the target function.
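A standard concrete choice for such a smoothing kernel (our assumption for illustration; the argument only uses its existence) is the Fejér kernel $\psi(x) = \frac{1}{2\pi}\big(\frac{\sin(x/2)}{x/2}\big)^2$, a probability density whose Fourier transform is the triangle function $\widehat{\psi}(\xi) = \max(1-|\xi|,0)$, supported in $[-1,1]$. A numerical check of the compact Fourier support:

```python
import cmath, math

def psi(x):
    """Fejer kernel: a probability density with Fourier support in [-1, 1]."""
    if abs(x) < 1e-12:
        return 1 / (2 * math.pi)
    return (math.sin(x / 2) / (x / 2)) ** 2 / (2 * math.pi)

def psi_hat(xi, R=500.0, n=200_000):
    """Riemann sum for |int psi(x) e^{i xi x} dx| over [-R, R]; the exact
    value is the triangle function max(1 - |xi|, 0)."""
    h = 2 * R / n
    return abs(sum(psi(-R + (k + 0.5) * h)
                   * cmath.exp(1j * xi * (-R + (k + 0.5) * h))
                   for k in range(n)) * h)
```

Inside the support the transform matches the triangle function, and beyond $|\xi| = 1$ it vanishes up to truncation error of the numerical integral.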
There exists an even function $\psi$ which is a probability density and whose Fourier transform ${\widehat}{\psi}$ is compactly supported. Let $\psi_{\delta}(t)=\frac{1}{\delta^2}\psi(\frac{t}{\delta^2})$. Then $\int_{-\delta}^{\delta}\psi_{\delta}(t){\mathrm{d}}t=\int_{-1/\delta}^{1/\delta}\psi(t){\mathrm{d}}t>1-C\delta$. \[prop:renint\] Let $\delta\leq 1/3$ and $b_2\geq b_1$. If $b_2-b_1\geq 2\delta$, then for $t>0$ we have $$\label{ineq:renint} R({\mathbf{1}}_{[b_1,b_2]})(t)\ll_\psi (b_2-b_1)(1/\sigma+O_{\delta}(1+|b_2|+|b_1|)/t),$$ where $O_\delta\leq \sup_{\xi\in[-C_\psi\delta^{-2},C_\psi\delta^{-2}]}(|u(i\xi)|+|\partial_\xi u(i\xi)|)$. If $u$ is in $[b_1,b_2]$, then $[u-b_2,u-b_1]$ contains at least one of $[0,\delta]$ or $[-\delta,0]$. Therefore $$\psi_{\delta}*{\mathbf{1}}_{[b_1,b_2]}(u)=\int_{b_1}^{b_2}\psi_{\delta}(u-v){\mathrm{d}}v\geq \int_{0}^{\delta}\psi_{\delta}(v){\mathrm{d}}v\geq (1-C\delta)/2.$$ Then $$\label{ineq:psidellarg} {\mathbf{1}}_{[b_1,b_2]}\leq 3\psi_{\delta}*{\mathbf{1}}_{[b_1,b_2]}.$$ It is sufficient to bound $R(\psi_{\delta}*{\mathbf{1}}_{[b_1,b_2]})$. Proposition \[prop:renreg\] implies that $$R(\psi_{\delta}*{\mathbf{1}}_{[b_1,b_2]})=\frac{1}{\sigma}\int_{-t}^{\infty}\psi_{\delta}*{\mathbf{1}}_{[b_1,b_2]}+\frac{O_\delta}{t}|{\widehat}{\psi_\delta}{\widehat}{{\mathbf{1}}}_{[b_1,b_2]}|_{W^{1,\infty} }.$$ The first term is less than $\int\psi_{\delta}*{\mathbf{1}}_{[b_1,b_2]}=(b_2-b_1)$. For the second term, we have $$\begin{aligned} |{\widehat}{\psi_\delta}{\widehat}{{\mathbf{1}}}_{[b_1,b_2]}|_{W^{1,\infty} }&=|{\widehat}{\psi_\delta}{\widehat}{{\mathbf{1}}}_{[b_1,b_2]}|_{L^\infty }+|\partial_\xi{\widehat}{\psi_\delta}{\widehat}{{\mathbf{1}}}_{[b_1,b_2]}|_{L^{\infty} }\\ &\leq C_\psi(|{\mathbf{1}}_{[b_1,b_2]}(u)|_{L^1}+|u{\mathbf{1}}_{[b_1,b_2]}(u)|_{L^1})\leq C_\psi(b_2-b_1)(1+|b_1|+|b_2|).
\end{aligned}$$ Because every step of the random walk is positive, every trajectory can stay at most $Cs$ times in the interval $[t,t+s]$, with $C$ depending on $\lambda$. \[lem:renintts\] For real numbers $s,t$, we have $$\label{ineq:renintts} R({\mathbf{1}}_{[0,s]})(t)\ll \max\{1,s \}.$$ Residue process {#subsecrespro} --------------- We introduce the residue process, which not only deals with $X_1+\cdots +X_n$ but also takes into account the next step $X_{n+1}$. Let $f$ be a positive bounded Borel function on $ {\mathbb{R}}^2$. For $t\in {\mathbb{R}}$, we define the residue operator by $$\begin{split} \Res f(t)&=\sum_{n\geq 0}\int f(y, x-t){\mathrm{d}}\lambda^{*n}(x){\mathrm{d}}\lambda(y). \end{split}$$ Let $\cal F_uf(v,\xi)=\int f(v,u)e^{iu\xi}{\mathrm{d}}u$ be the Fourier transform on ${\mathbb{R}}_u$. Let $F$ be a function on $ {\mathbb{R}}_{v}\times{\mathbb{R}}_{\xi}$. Define the norm by $$|F|_\operatorname{Lip}=\sup_{v,\xi\in{\mathbb{R}}}|F(v,\xi)|.$$ \[prop:residue\] Let $f$ be a positive bounded continuous function on $ {\mathbb{R}}^2$. Assume that the projection of $\operatorname{supp}\cal F_u(f)$ onto ${\mathbb{R}}_\xi$ is contained in a compact set $K$, and that $|\cal F_u(f)|_\operatorname{Lip},|\partial_\xi\cal F_u(f)|_\operatorname{Lip}$ are finite. Then for $t>0$, we have $$\begin{split} \Res f(t)=&\frac{1}{\sigma}\int_{-t}^\infty\int_{{\mathbb{R}}^+} f(y,u){\mathrm{d}}\lambda(y){\mathrm{d}}u+\frac{1}{t}O_K\left(|\cal F_u(f)|_\operatorname{Lip}+|\partial_\xi\cal F_u(f)|_\operatorname{Lip}\right). \end{split}$$ For a bounded continuous function $f$ on $ {\mathbb{R}}^2$ and $u\in {\mathbb{R}}$, we define an operator $Q$ by $$Qf(u)=\int f(y,u){\mathrm{d}}\lambda(y).$$ Then $$\Res f(t)=\sum_{n\geq 0}\int Qf( x-t){\mathrm{d}}\lambda^{*n}(x)=R(Qf)(t).$$ We want to use Proposition \[prop:renreg\], so we need to verify the hypotheses. The function $Qf$ is bounded and integrable by the hypotheses on $f$.
Then $$\begin{aligned} \widehat{Qf}(\xi)&=\int Qf(u)e^{iu\xi}{\mathrm{d}}u=\int f(y,u)e^{iu\xi}{\mathrm{d}}u{\mathrm{d}}\lambda(y)=\int\cal F_uf(y,\xi){\mathrm{d}}\lambda(y). \end{aligned}$$ Thus $\widehat{Qf}$ is also compactly supported in $\xi$. Under the assumptions of Proposition \[prop:residue\], we have $$|\widehat{Qf}|_{\linf}\ll|\cal F_u(f)|_\operatorname{Lip},\ |\partial_\xi\widehat{Qf}|_{\linf}\ll|\partial_\xi\cal F_uf|_\operatorname{Lip}.$$ The second inequality follows by the same computation as for $\widehat{Qf}$. By Proposition \[prop:renreg\], we have $$\begin{aligned} R(Qf)(t)&=\frac{1}{\sigma}\int_{-t}^{\infty}Qf(u){\mathrm{d}}u+\frac{1}{t}O_K\left(|\widehat{Qf}|_{\linf}+|\partial_\xi \widehat{Qf}|_{\linf}\right)\\ &=\frac{1}{\sigma}\int_{-t}^{\infty}Qf(u){\mathrm{d}}u+\frac{1}{t}O_K\left(|\cal F_u(f)|_\operatorname{Lip}+|\partial_\xi \cal F_u(f)|_\operatorname{Lip}\right). \end{aligned}$$ The proof is complete. Residue process with cutoff --------------------------- In this section, we restrict the residue process to the sequences $(X_{n+1},X_n,\dots,X_1)$ such that $X_n+\cdots+ X_1<t\leq X_{n+1}+\cdots+ X_1$. For a $C^1$ function $f$ on ${\mathbb{R}}_v\times {\mathbb{R}}_u$, define a norm by $$\begin{aligned} | f|_{\lf}=| f|_\infty+|\partial_uf|_\infty.\end{aligned}$$ Define an operator from bounded Borel functions on $ {\mathbb{R}}^2$ to functions on $ {\mathbb{R}}$ by $$\Cut f(t)=\sum_{n\geq 0}\int_{ x<t\leq y+x} f(y, x-t){\mathrm{d}}\lambda(y){\mathrm{d}}\lambda^{*n}(x).$$ By Lemma \[lem:resfin\], which will be proved later, this operator is well defined. Let $K$ be a compact set in ${\mathbb{R}}$. We denote by $|K|$ the supremum of the distances between points of $K$ and $0$. \[prop:rescut\] Let $f$ be a continuous function on $ {\mathbb{R}}^2$ with $| f|_\lf$ finite. Assume that the projection of $\operatorname{supp}f$ on ${\mathbb{R}}_v$ is contained in a compact set $K$.
For all $\delta>0$ and $t>|K|+\delta$, we have $$\begin{split} \Cut f(t)=\int_{{\mathbb{R}}^+}\int_{-y}^{0} f(y,u){\mathrm{d}}u{\mathrm{d}}\lambda(y) +O_K(\delta +O_\delta/t)| f|_\lf, \end{split}$$ where $O_\delta$ is the same as in Proposition \[prop:renint\]. We decompose $f$ into real and imaginary parts, then decompose these two parts into positive and negative parts. Each part satisfies the hypotheses of Proposition \[prop:rescut\], with the support and the Lipschitz norm bounded by the original ones. Thus, it is sufficient to prove this proposition for $f$ positive. The following lemma connects the operator $\Cut$ with $\Res$. Under the assumptions of Proposition \[prop:rescut\], let $f_o(v,u)={\mathbf{1}}_{-v\leq u<0} f(v,u)$. Then $$\Cut f(t)=\Res f_o(t).$$ Before proving this proposition, we describe some regularity properties. They are corollaries of analogous properties for the renewal process. The idea is to decompose the integral according to the last letter. \[lem:resfin\] There exists $C>0$ such that for all $t\in{\mathbb{R}}$ and $x\in X$, we have $$\label{ineq:resfin} \Cut({\mathbf{1}})(t)=\Res({\mathbf{1}}_{-v\leq u<0})(t)\leq C.$$ By Lemma \[lem:renintts\], we have $$\begin{aligned} \quad\sum_{n\geq 0}\lambda\otimes\lambda^{*n}\{(y,x)| x-t\in[- y,0], y\geq 0\}=\int R({\mathbf{1}}_{[-y,0]})(t){\mathrm{d}}\lambda(y)\ll \int\max\{1,y\}{\mathrm{d}}\lambda(y). \end{aligned}$$ The proof is complete. Using $\psi_\delta$ to regularize these functions, we write $ f_\delta(v,u)=\int f_o(v,u-u_1)\psi_\delta(u_1){\mathrm{d}}u_1=\psi_\delta*f_o(v,u)$. \[lem:rescut\] Under the same hypotheses as in Proposition \[prop:rescut\], we have $$\begin{aligned} \Res( f_\delta)(t)=\int_{{\mathbb{R}}^+}\int_{-y}^{0} f(y,u){\mathrm{d}}u{\mathrm{d}}\lambda(y) +O(\delta+\frac{O_\delta}{t}(|K|+|K|^2))| f|_\infty. \end{aligned}$$ We want to verify the conditions in Proposition \[prop:residue\] and then use this proposition.
For the Fourier transform, we have $$\begin{aligned} \cal F_u f_\delta=\cal F_u(\psi_\delta*f_o)={\widehat}{\psi}_\delta\cal F_u f_o. \end{aligned}$$ We need to estimate the supremum norm of $\cal F_u f_o$. This function equals $$\int f_o(v,u)e^{i\xi u}{\mathrm{d}}u=\int_{-v}^{0} f(v,u)e^{i\xi u}{\mathrm{d}}u.$$ Under the same hypotheses as in Proposition \[prop:rescut\], we have $$|\cal F_u f_\delta|_\operatorname{Lip}\leq |K||f|_\infty,\ |\partial_\xi\cal F_u f_{\delta}|_\operatorname{Lip}\leq |K|^2|f|_\infty.$$ Noting that in the integration $|u|\leq |v|$, we get the second inequality by the same computation. The projection of the support of $\cal F_uf_\delta$ onto ${\mathbb{R}}_\xi$ is contained in $[-C_\psi\delta^{-2},C_\psi\delta^{-2}]$. Therefore by Proposition \[prop:residue\], we have $$\begin{aligned} \Res( f_\delta)(t)=\frac{1}{\sigma}\int_{-t}^\infty\int_{{\mathbb{R}}^+} f_\delta( y,u) {\mathrm{d}}\lambda(y){\mathrm{d}}u+\frac{O_\delta}{t}\left(|f|_\infty(|K|+|K|^2)\right). \end{aligned}$$ Then $$\begin{aligned} \int_{-t}^{\infty} f_{\delta} (v,u){\mathrm{d}}u&=\int_{-t}^{\infty}\int_{-v}^{0} f(v,u_1)\psi_\delta(u-u_1){\mathrm{d}}u_1{\mathrm{d}}u=\int_{-v}^{0} f(v,u_1)\int_{-t}^{\infty}\psi_{\delta}(u-u_1){\mathrm{d}}u{\mathrm{d}}u_1\\ &=\int_{-v}^{0} f (v,u_1){\mathrm{d}}u_1-\int_{-v}^{0} f (v,u_1)\int_{-\infty}^{-t-u_1}\psi_{\delta}(u){\mathrm{d}}u{\mathrm{d}}u_1. \end{aligned}$$ Since $t-\delta> |K|$, we have $-t-u_1\leq -t+v\leq -\delta$. By $\int_{-\infty}^{-\delta}\psi_\delta\leq C_\psi\delta$, this implies that $\int_{-t}^{\infty} f_{\delta} (v,u){\mathrm{d}}u=\int_{-v}^{0} f (v,u){\mathrm{d}}u\,(1+O(\delta))$.
Using Lemma \[lem:resfin\], we have $$|\int_{{\mathbb{R}}^+}\int_{-y}^{0} f(y,u){\mathrm{d}}u{\mathrm{d}}\lambda(y) |\leq | f|_\infty \Cut({\mathbf{1}})=O(| f|_\infty).$$ Therefore $$\begin{aligned} &\int_{-t}^\infty\int_{{\mathbb{R}}^+} f_\delta( y,u) {\mathrm{d}}\lambda(y){\mathrm{d}}u=\int_{{\mathbb{R}}^+}\int_{-y}^{0} f( y,u){\mathrm{d}}u{\mathrm{d}}\lambda(y) +O(\delta|f|_\infty). \end{aligned}$$ The proof is complete. To simplify the notation, we normalize $ f$ in such a way that $| f|_\infty=1$. By Lemma \[lem:rescut\], we only need to give an estimate of $\Res(| f_\delta- f_o|)(t)$. Due to $ f_o(v,u)={\mathbf{1}}_{-v\leq u<0}(u) f(v,u)$, elementary computation (Lemma 4.26 in [@Li1]) implies that (to simplify notation, we omit the variable $v$ in the following computation) $$| f_\delta- f_o|(u)\leq \begin{cases} (|\partial_u f|_\infty+2)\delta & u\in[-v+\delta,-\delta],\\ 2 & u\in[-v-\delta,-v+\delta]\cup[-\delta,\delta],\\ \psi_\delta*{\mathbf{1}}_{[-v,0]}(u) & u\in[-v-\delta,\delta]^c. \end{cases}$$ By definition of $|K|$, the first term is less than $(|\partial_u f|_\infty+2)\delta{\mathbf{1}}_{[-|K|+\delta,-\delta]}$. The third term equals $$\begin{aligned} {\mathbf{1}}_{[-\infty,-v-\delta]\cup[\delta,\infty]}&\psi_\delta*{\mathbf{1}}_{[-v,0]}(u)={\mathbf{1}}_{[-\infty,-v-\delta]\cup[\delta,\infty]}(u)\int_{-v}^0\psi_\delta(u-u_1){\mathrm{d}}u_1\\ &={\mathbf{1}}_{[-\infty,-v-\delta]\cup[\delta,\infty]}(u)\int_{u}^{u+v} \psi_\delta(u_1){\mathrm{d}}u_1.
\end{aligned}$$ By definition and the above arguments, we have $$\begin{aligned} \Res(| f_\delta- f_o|)(t)&=\sum_{n\geq 0}\int | f_\delta- f_o|(y, x-t){\mathrm{d}}\lambda^{*n}(x){\mathrm{d}}\lambda(y)\\ & \leq\sum_{n\geq 0}\int\Big((|\partial_u f|_\infty+2)\delta{\mathbf{1}}_{[-|K|,-\delta]}( x-t) +2{\mathbf{1}}_{[- y-\delta,- y+\delta]\cup[-\delta,\delta]}( x-t)\\ &+{\mathbf{1}}_{[-\infty,- y-\delta]\cup[\delta,\infty]}( x-t)\int_{ x-t}^{x+y-t}\psi_\delta(u_1){\mathrm{d}}u_1\Big) {\mathrm{d}}\lambda^{*n}(x){\mathrm{d}}\lambda(y). \end{aligned}$$ By Lemma \[lem:renintts\], the first term is controlled by $(|\partial_u f|_\infty+2)\delta |K|$. The second term is less than $R({\mathbf{1}}_{[-\delta,\delta]})(t)$. Due to Proposition \[prop:renint\], it is controlled by $C_\psi\delta(1/\sigma+O_\delta(1+2\delta)/t)$. For the third term, we need to change the order of integration. Since $ x-t>\delta$ or $ x-t<- y-\delta$, we have $u_1\geq x-t>\delta$ or $u_1\leq x+y-t\leq -\delta$. We integrate first with respect to $u_1$, then the third term is less than $$\begin{aligned} &\int_{[-\infty,-\delta]\cup[\delta,\infty]}\psi_\delta(u_1)\sum_{n\geq 0}\lambda\otimes\lambda^{*n}\{(y,x)|x+y\geq u_1+t\geq x \}{\mathrm{d}}u_1\\ &=\int_{[-\infty,-\delta]\cup[\delta,\infty]}\psi_\delta(u_1)\Cut({\mathbf{1}})(u_1+t){\mathrm{d}}u_1. \end{aligned}$$ By Lemma \[lem:resfin\], the above quantity is less than $C\int_{[-\infty,-\delta]\cup[\delta,\infty]}\psi_\delta(u_1){\mathrm{d}}u_1\ll_\psi \delta$. Therefore, we have $$\begin{aligned} \Res(| f_\delta- f_o|)(t)=O(\delta|K|+O_\delta/t)| f|_\lf. \end{aligned}$$ The proof is complete. Acknowledgements {#acknowledgements .unnumbered} ================ The first author would like to thank Jean-François Quint for inspiring discussions on the regularity of self-similar measures. The second author thanks Pablo Shmerkin and Boris Solomyak for useful discussions.
This work was completed while the second author was visiting Institut de Mathématiques de Bordeaux, and the authors would like to thank the institution for its hospitality. [10]{} A. Baker: *Transcendental Number Theory*, Cambridge University Press, 1st ed., 1975. B. Bárány, M. Hochman, A. Rapaport: Hausdorff dimension of planar self-affine sets and measures. *Invent. Math.* (2018), to appear, https://arxiv.org/abs/1712.07353 J. Bourgain: The discretized sum-product and projection theorems. *J. Anal. Math.* 112 (2010), 193–236. J. Bourgain, S. Dyatlov: Fourier dimension and spectral gaps for hyperbolic surfaces. *Geom. Funct. Anal.* 27 (2017), no. 4, 744-771. J.-B. Boyer: The speed of convergence in the renewal theorem. *Preprint* (2015). E. Breuillard, P. Varjú: On the dimension of Bernoulli convolutions. *Ann. Prob.*, to appear (2018), https://arxiv.org/abs/1610.09154 A. Bufetov, B. Solomyak: On the modulus of continuity for spectral measures in substitution dynamics. *Adv. Math.* 260 (2014), 84–129. G. Cantor: *Gesammelte Abhandlungen mathematischen und philosophischen Inhalts*. Springer, 1932. X.-R. Dai: When does a Bernoulli convolution admit a spectrum? *Adv. Math.* 231 (2012), no. 3-4, 1681–1693. X.-R. Dai, D.-J. Feng, Y. Wang: Refinable functions with non-integer dilations. *J. Funct. Anal.* 250 (2007), no. 1, 1–20. H. Davenport, P. Erdös, W. LeVeque: On Weyl's criterion for uniform distribution. *Michigan Math. J.* 10 (1963), 311-314. D.-J. Feng, K.-S. Lau: Multifractal formalism for self-similar measures with weak separation condition. *J. Math. Pures Appl.* 92 (2009), 407–428. J. Fraser, T. Orponen, T. Sahlsten: On the Fourier analytic properties of graphs. *Int. Math. Res. Not.* 2014:10 (2014), 2730–2745. J. Fraser, T. Sahlsten: On the Fourier analytic structure of the Brownian graph. *Analysis & PDE* 11 (2018), no. 1, 115-132. P. Erdös: On the smoothness properties of a family of Bernoulli convolutions. *Amer. J. Math.* 62 (1940), 180–186. M.
Hochman: On self-similar sets with overlaps and inverse theorems for entropy. *Ann. of Math.* 180 (2014), no. 2, 773-822. M. Hochman, P. Shmerkin: Equidistribution from fractal measures. *Invent. Math.* 202 (2015), no. 1, 427–479. T. Jordan, T. Sahlsten: Fourier transforms of Gibbs measures for the Gauss map. . 983-1023, 2015. J.-P. Kahane: Sur la distribution de certaines séries aléatoires. In Colloque de Théorie des Nombres (Univ. Bordeaux, Bordeaux, 1969), pages 119–122. Bull. Soc. Math. France, Mém. No. 25, Soc. Math. France, Paris, 1971. J.-P. Kahane: *Some random series of functions* (2nd ed.). . 1985. J.-P. Kahane: Ensembles aléatoires et dimensions. . 65-121, 1983. R. Kaufman: Continued fractions and Fourier transforms. . 262-267, 1980. R. Kaufman: On the theorem of Jarník and Besicovitch. . 39(3):265-267, 1981. I. Laba, M. Pramanik: Arithmetic progressions in sets of fractional dimension. *Geom. Funct. Anal.* 19 (2009), no. 2, 429–456. J. Li: Fourier decay, Renewal theorem and Spectral gaps for random walks on split semisimple Lie groups. *Preprint* (2018), <https://www.math.u-bordeaux.fr/~jli004/publications/sl3r_final.pdf> J. Li, T. Sahlsten: Fourier transform of self-affine measures. In preparation, 2019. P. Mattila: *Fourier Analysis and Hausdorff Dimension*, Cambridge University Press, 2015. D. Mensov: Sur l'unicité du développement trigonométrique. *CRASP* 163 (1916), 433-436. Piatetski-Shapiro. *Moscov. Gos. Univ. Uc. Zap.*, 165, Mat 7:79-97, 1954. M. Queffélec, O. Ramaré: Analyse de Fourier des fractions continues à quotients restreints. *Enseign. Math.* (2) 49 (2003), no. 3-4, 335-356. B. Riemann: *Habilitationsschrift*. Abh. der Ges. der Wiss. zu Gött., 13:87-132, 1868. T. Sahlsten, C. Stevens: . R. Salem: Sets of uniqueness and sets of multiplicity. *Trans. Amer. Math. Soc.* 54 (1943), 218-228; corrected on pp. 595-598, *Trans. Amer. Math. Soc.* 63 (1948). R. Salem, A. Zygmund: Sur un théorème de Piatetski-Shapiro. *CRASP* 240 (1954), 2040-2042. P.
Sarnak: Spectra of singular measures as multipliers on $L^p$. *J. Funct. Anal.* 37 (1980), 302–317. P. Shmerkin: On the exceptional set for absolute continuity of Bernoulli convolutions. *Geom. Funct. Anal.* 24 (2014), no. 3, 946–958. P. Shmerkin, V. Suomala: Spatially independent martingales, intersections, and applications. *Mem. Amer. Math. Soc.* 251 (2018), no. 1195. N. Sidorov, B. Solomyak: Spectra of Bernoulli convolutions as multipliers in $L^p$ on the circle. *Duke Math. J.* 120 (2003), no. 2, 353-370. R. Strichartz: Self-similar measures and their Fourier transforms I. *Indiana Univ. Math. J.* 39 (1990), no. 3, 797-817. R. Strichartz: Self-similar measures and their Fourier transforms II. *Trans. Amer. Math. Soc.* 336 (1993), no. 1. M. Tsujii: On the Fourier transforms of self-similar measures. *Dynamical Systems: An International Journal* 30 (2015), no. 4, 468-484. P. Varjú: On the dimension of Bernoulli convolutions for all transcendental parameters. *Preprint* (2018), https://arxiv.org/abs/1810.08905 W. Young: A note on trigonometrical series. *Messenger Math.* 38 (1909), 44-48.
--- abstract: 'Let $R$ be a noetherian ring, ${\mathfrak{a}}$ an ideal of $R$ such that $\dim R/{\mathfrak{a}}=1$ and $M$ a finite $R$–module. We will study cofiniteness and some other properties of the local cohomology modules $\operatorname{H}^{i}_{{\mathfrak{a}}}(M)$. For an arbitrary ideal ${\mathfrak{a}}$ and an $R$–module $M$ (not necessarily finite), we will characterize ${\mathfrak{a}}$–cofinite artinian local cohomology modules. Certain sets of coassociated primes of top local cohomology modules over local rings are characterized.' address: - | Moharram Aghapournahr\ Arak University\ Beheshti St, P.O. Box:879, Arak, Iran - | Leif Melkersson\ Department of Mathematics\ Linköping University\ SE–581 83 Linköping, Sweden author: - Moharram Aghapournahr - Leif Melkersson title: 'Cofiniteness and coassociated primes of local cohomology modules\' --- Introduction ============ Throughout $R$ is a commutative noetherian ring. By a finite module we mean a finitely generated module. For basic facts about commutative algebra see [@BH] and [@Mat] and for local cohomology we refer to [@BSh]. Grothendieck [@SGA2] made the following conjecture: For every ideal ${\mathfrak{a}}$ and every finite $R$–module $M$, the module $\operatorname{Hom}_{R}(R/{\mathfrak{a}},\operatorname{H}^{n}_{{\mathfrak{a}}}(M))$ is finite for all $n$. Hartshorne [@Ha] showed that this is false in general. However, he defined an $R$–module $M$ to be [*${\mathfrak{a}}$–cofinite*]{} if $\operatorname{Supp}_R(M)\subset {\mbox{V}}{({\mathfrak{a}})}$ and $\operatorname{Ext}^i_{R}(R/{\mathfrak{a}},M)$ is finite (finitely generated) for each $i$, and he asked the following question: Let ${\mathfrak{a}}$ be an ideal of $R$ and $M$ a finite $R$–module. When is $\operatorname{Ext}^i_{R}(R/{\mathfrak{a}},\operatorname{H}^{j}_{{\mathfrak{a}}}(M))$ finite for every $i$ and $j$?
Hartshorne [@Ha] showed that if $(R,{\mathfrak{m}})$ is a complete regular local ring and $M$ a finite $R$–module, then $\operatorname{H}^{i}_{{\mathfrak{a}}}(M)$ is ${\mathfrak{a}}$–cofinite in two cases: \(a) If ${\mathfrak{a}}$ is a nonzero principal ideal, and \(b) If ${\mathfrak{a}}$ is a prime ideal with $\dim R/{\mathfrak{a}}=1$. Yoshida [@Yo] and Delfino and Marley [@DM] extended (b) to all dimension one ideals ${\mathfrak{a}}$ of an arbitrary local ring $R$. In \[C:cofmin2\], we give a characterization of the ${\mathfrak{a}}$–cofiniteness of these local cohomology modules when ${\mathfrak{a}}$ is a one-dimensional ideal in a non-local ring. In this situation we also prove, in \[T:Hinz\], that these local cohomology modules always belong to a class introduced by Zöschinger in [@Zrmm]. Our main result in this paper is \[T:artcof\], where, for an arbitrary ideal ${\mathfrak{a}}$ and an $R$–module $M$ (not necessarily finite), we characterize the artinian ${\mathfrak{a}}$–cofinite local cohomology modules (in the range $i< n$). With the additional assumption that $M$ is finitely generated, the characterization is also given by the existence of certain filter-regular sequences. The second author has in [@Mel Theorem 5.5] previously characterized artinian local cohomology modules (in the same range). In case the module $M$ is not supposed to be finite, the two notions differ. For example let ${\mathfrak{a}}$ be an ideal of a local ring $R$, such that $\dim(R/{\mathfrak{a}})>0$ and let $M$ be the injective hull of the residue field of $R$. The module $\operatorname{H}^0_{\mathfrak{a}}(M)$, which is equal to $M$, is artinian. However it is not ${\mathfrak{a}}$–cofinite, since $0:_M{\mathfrak{a}}$ does not have finite length. An $R$–module $M$ has [*finite Goldie dimension*]{} if $M$ contains no infinite direct sum of submodules.
For a commutative noetherian ring this can be expressed in two other ways, namely that the injective hull ${\mbox{E}}(M)$ of $M$ decomposes as a finite direct sum of indecomposable injective modules or that $M$ is an essential extension of a finite submodule. A prime ideal ${\mathfrak{p}}$ is said to be [*coassociated*]{} to $M$ if ${\mathfrak{p}}=\operatorname{Ann}_R({M/N})$ for some $N\subset M$ such that $M/N$ is artinian and is said to be [*attached*]{} to $M$ if ${\mathfrak{p}}=\operatorname{Ann}_R({M/N})$ for some arbitrary submodule $N$ of $M$, equivalently ${\mathfrak{p}}=\operatorname{Ann}_R({M/{{\mathfrak{p}}}M})$. The sets of these prime ideals are denoted by $\operatorname{Coass}_R(M)$ and $\operatorname{Att}_R(M)$ respectively. Thus $\operatorname{Coass}_R(M)\subset \operatorname{Att}_R(M)$ and the two sets are equal when $M$ is an artinian module. The two sets behave well with respect to exact sequences. If $0\rightarrow M^{\prime}\rightarrow M\rightarrow M^{\prime\prime} \rightarrow 0$ is an exact sequence, then $$\operatorname{Coass}_R(M^{\prime\prime})\subset \operatorname{Coass}_R(M)\subset{\operatorname{Coass}_R(M^{\prime})\cup \operatorname{Coass}_R(M^{\prime\prime})}$$ and $\operatorname{Att}_R(M^{\prime\prime})\subset \operatorname{Att}_R(M)\subset{\operatorname{Att}_R(M^{\prime}) \cup \operatorname{Att}_R(M^{\prime\prime})}.$ There are equalities $\operatorname{Coass}_R(M\otimes_R{N})= \operatorname{Coass}_R(M)\cap \operatorname{Supp}_R(N)$ and $\operatorname{Att}_R(M\otimes_R{N})= \operatorname{Att}_R(M)\cap \operatorname{Supp}_R(N)$, whenever the module $N$ is finite. We prove the second equality in \[L:att\]. In particular $\operatorname{Coass}_R(M/{{\mathfrak{a}}}M)=\operatorname{Coass}_R(M)\cap {\mbox{V}}({\mathfrak{a}})$ and $\operatorname{Att}_R(M/{{\mathfrak{a}}}M)=\operatorname{Att}_R(M)\cap {\mbox{V}}({\mathfrak{a}})$ for every ideal ${\mathfrak{a}}$.
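As a simple illustration (an example of ours, not drawn from the cited sources), take the Prüfer group $M=\mathbb{Z}(p^{\infty})$ over $R=\mathbb{Z}$. Every prime $q$ satisfies $qM=M$ by divisibility, so no nonzero prime ideal can be attached, while the zero ideal is:

```latex
\[
  \operatorname{Ann}_{\mathbb{Z}}(M/qM)=\operatorname{Ann}_{\mathbb{Z}}(0)=\mathbb{Z}\neq(q),
  \qquad
  \operatorname{Ann}_{\mathbb{Z}}(M)=(0),
\]
\[
  \text{hence}\qquad
  \operatorname{Att}_{\mathbb{Z}}(M)=\operatorname{Coass}_{\mathbb{Z}}(M)=\{(0)\},
\]
```

the two sets being equal because $M$ is artinian. Note the contrast with $\operatorname{Ass}_{\mathbb{Z}}(M)=\{(p)\}$, which illustrates the dual behaviour of attached and associated primes.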
Coassociated and attached prime ideals have been studied in particular by Zöschinger, [@Zrkoass] and [@Zrlk]. In \[C:attcoass\] we give a characterization of certain sets of coassociated primes of the highest nonvanishing local cohomology module $\operatorname{H}_{{\mathfrak{a}}}^t(M)$, where $M$ is a finitely generated module over a complete local ring. In case it happens that $t=\dim M$, the characterization is given in [@DM Lemma 3]. In that case the top local cohomology module is always artinian, but in general the top local cohomology module is not artinian if $t<\dim M$. Main results ============ First we extend a result by Zöschinger [@Zrko Lemma 1.3] with a much weaker condition. Our method of proof is also quite different. \[P:fgwl\] Let $M$ be a module over the noetherian ring $R$. The following statements are equivalent: 1. $M$ is a finite $R$–module. 2. $M_{\mathfrak{m}}$ is a finite $R_{\mathfrak{m}}$–module for all ${\mathfrak{m}}{\in}\operatorname{Max}{R}$ and\ ${\mbox{Min}\,}_R(M/N)$ is a finite set for all finite submodules $N\subset M$. The only nontrivial part is (ii)$\Rightarrow$ (i). Let $\mathcal F$ be the set of finite submodules of $M$. For each $N \in\mathcal F$ the set $\operatorname{Supp}_R(M/N)$ is closed in $\operatorname{Spec}(R)$, since ${\mbox{Min}\,}_R(M/N)$ is a finite set. Also it follows from the hypothesis that, for each ${\mathfrak{p}}\in \operatorname{Spec}(R)$ there is $N\in \mathcal F$ such that $M_{{\mathfrak{p}}}=N_{{\mathfrak{p}}}$, that is ${\mathfrak{p}}\notin \operatorname{Supp}_R(M/N)$. This means that ${\bigcap}_{N\in \mathcal F}{\operatorname{Supp}_R(M/N)}=\varnothing$. Now $\operatorname{Spec}(R)$ is a quasi-compact topological space. Consequently $\bigcap_{i=1}^r\operatorname{Supp}_R(M/N_i)=\varnothing$ for some $N_1,...,N_r\in \mathcal F$. We claim that $M=N$, where $N=\sum_{i=1}^r N_i$. 
Just observe that $\operatorname{Supp}_R(M/N)\subset \operatorname{Supp}_R(M/N_i)$ for each $i$, and therefore $\operatorname{Supp}_R(M/N)=\varnothing$. \[C:cofmin1\] Let $M$ be an $R$–module such that $\operatorname{Supp}M\subset {\mbox{V}}({\mathfrak{a}})$ and $M_{\mathfrak{m}}$ is ${{\mathfrak{a}}}R_{{\mathfrak{m}}}$–cofinite for each maximal ideal ${\mathfrak{m}}$. The following statements are equivalent: 1. $M$ is ${\mathfrak{a}}$–cofinite. 2. For all $j$, ${\mbox{Min}\,}_R(\operatorname{Ext}^{j}_{R}(R/{\mathfrak{a}},M)/T)$ is a finite set for each finite submodule $T$ of $\operatorname{Ext}^{j}_{R}(R/{\mathfrak{a}},M)$. The only nontrivial part is (ii)$\Rightarrow$ (i). Suppose ${\mathfrak{m}}$ is a maximal ideal of $R$. By hypothesis $M_{\mathfrak{m}}$ is ${{\mathfrak{a}}}R_{{\mathfrak{m}}}$–cofinite. Therefore $\operatorname{Ext}^{j}_{R}(R/{\mathfrak{a}}, M)_{{\mathfrak{m}}}$ is a finite $R_{\mathfrak{m}}$–module for all $j$. Hence by \[P:fgwl\], $\operatorname{Ext}^{j}_{R}(R/{\mathfrak{a}},M)$ is finite for all $j$. Thus $M$ is ${\mathfrak{a}}$–cofinite. \[C:cofmin2\] Let ${\mathfrak{a}}$ be an ideal of $R$ such that $\dim R/{\mathfrak{a}}=1$, $M$ a finite $R$–module and $i\geq 0$. The following statements are equivalent: 1. $\operatorname{H}^{i}_{{\mathfrak{a}}}(M)$ is ${\mathfrak{a}}$–cofinite. 2. For all $j$, ${\mbox{Min}\,}_R(\operatorname{Ext}^{j}_{R}(R/{\mathfrak{a}}, \operatorname{H}^{i}_{{\mathfrak{a}}}(M))/T)$ is a finite set for each finite submodule $T$ of $\operatorname{Ext}^{j}_{R}(R/{\mathfrak{a}}, \operatorname{H}^{i}_{{\mathfrak{a}}}(M))$. For all maximal ideals ${\mathfrak{m}}$, $\operatorname{H}^{i}_{{\mathfrak{a}}}(M)_{\mathfrak{m}}\cong \operatorname{H}^{i}_{{{\mathfrak{a}}}R_{{\mathfrak{m}}}}(M_{\mathfrak{m}})$. By [@DM Theorem 1] $\operatorname{H}^{i}_{{{\mathfrak{a}}}R_{{\mathfrak{m}}}}(M_{\mathfrak{m}})$ is ${{\mathfrak{a}}}R_{{\mathfrak{m}}}$–cofinite.
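The finiteness hypothesis on minimal primes in \[P:fgwl\] cannot be dropped: local finiteness alone does not force finiteness. A standard example (ours, for illustration):

```latex
\[
  R=\mathbb{Z},\qquad M=\bigoplus_{p\ \mathrm{prime}}\mathbb{Z}/p\mathbb{Z}:
  \qquad M_{(p)}\cong \mathbb{Z}/p\mathbb{Z}
  \ \text{is a finite } \mathbb{Z}_{(p)}\text{-module for every } p,
\]
\[
  \text{but}\qquad
  {\mbox{Min}\,}_R(M/0)=\{(p) : p\ \mathrm{prime}\}
  \ \text{is infinite, and } M \text{ is not a finite } R\text{-module}.
\]
```

Each localization is finite because the summand $\mathbb{Z}/p\mathbb{Z}$ vanishes after localization at $(q)$ whenever $q\neq p$; it is exactly the second condition of \[P:fgwl\] that fails.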
A module $M$ is [*weakly Laskerian*]{} if for each submodule $N$ of $M$ the quotient $M/N$ has just finitely many associated primes, see [@DiM]. A module $M$ is [*${\mathfrak{a}}$–weakly cofinite*]{} if $\operatorname{Supp}_R(M)\subset {\mbox{V}}({\mathfrak{a}})$ and $\operatorname{Ext}^{i}_{R}(R/{\mathfrak{a}}, M)$ is weakly Laskerian for all $i$. Clearly each ${\mathfrak{a}}$–cofinite module is ${\mathfrak{a}}$–weakly cofinite but the converse is not true in general, see [@DiM2 Example 3.5 (i) and (ii)]. \[c:wlcof\] If $\operatorname{H}^{i}_{{\mathfrak{a}}}(M)$ [(]{}with $\dim R/{\mathfrak{a}}=1$[)]{} is an ${\mathfrak{a}}$–weakly cofinite module, then it is also ${\mathfrak{a}}$–cofinite. Next we will introduce a subcategory of the category of $R$–modules that has been studied by Zöschinger in [@Zrmm Satz 1.6]. \[T:classz\]**[(]{}Zöschinger[)]{}** For any $R$–module $M$ the following are equivalent: 1. $M$ satisfies the minimal condition for submodules $N$ such that $M/N$ is soclefree. 2. For any descending chain $N_1\supset N_2\supset N_3\supset \dots$ of submodules of $M$, there is $n$ such that the quotients $N_{i}/N_{i+1}$ have support in $\operatorname{Max}R$ for all $i\geq n$. 3. With $L(M)=\underset{{\mathfrak{m}}\in \operatorname{Max}R}{\bigoplus}{\Gamma}_{{\mathfrak{m}}}(M)$, the module $M/L(M)$ has finite Goldie dimension, and $\dim R/{\mathfrak{p}}\leq 1$ for all $ {\mathfrak{p}}\in \operatorname{Ass}_R(M)$. If they are fulfilled, then for each monomorphism $f:M{\longrightarrow}M$, $$\operatorname{Supp}_R(\operatorname{Coker}f)\subset \operatorname{Max}R.$$ We will say that $M$ is in the class $\mathcal Z$ if $M$ satisfies the equivalent conditions in \[T:classz\]. A module [M]{} is [*soclefree*]{} if it has no simple submodules, or in other terms $\operatorname{Ass}M\cap\operatorname{Max}R=\varnothing$.
For example if $M$ is a module over the local ring $(R,{\mathfrak{m}})$ then the module $M/{{\Gamma}_{{\mathfrak{m}}}(M)}$, where ${\Gamma}_{{\mathfrak{m}}}(M)$ is the submodule of $M$ consisting of all elements of $M$ annihilated by some high power ${{\mathfrak{m}}}^n$ of the maximal ideal ${\mathfrak{m}}$, is always soclefree. \[T:serrez\] The class $\mathcal Z$ is a Serre subcategory of the category of $R$–modules, that is $\mathcal Z$ is closed under taking submodules, quotients and extensions. The only difficult part is to show that $\mathcal Z$ is closed under taking extensions. To this end let $0{\longrightarrow}M^{\prime}\overset f{\longrightarrow}M\overset g{\longrightarrow}M^{\prime\prime}{\longrightarrow}0$ be an exact sequence with $M^{\prime},M^{\prime\prime}\in\mathcal Z$ and let $N_1\supset N_2\supset ...$ be a descending chain of submodules of $M$. Consider the descending chains $f^{-1}(N_1)\supset f^{-1}(N_2)\supset ...$ and $g(N_1)\supset g(N_2)\supset ...$ of submodules of $M^{\prime}$ and $M^{\prime\prime}$ respectively. By (ii) there is $n$ such that $\operatorname{Supp}_R(f^{-1}(N_i)/f^{-1}(N_{i+1}))\subset \operatorname{Max}R$ and $\operatorname{Supp}_R(g(N_i)/g(N_{i+1}))\subset \operatorname{Max}R$ for all $i\geq n$. We use the exact sequence $$0{\longrightarrow}f^{-1}(N_i)/f^{-1}(N_{i+1}){\longrightarrow}N_i/N_{i+1}{\longrightarrow}g(N_i)/g(N_{i+1}){\longrightarrow}0.$$ to conclude that $\operatorname{Supp}_R(N_i/N_{i+1})\subset \operatorname{Max}R$ for all $i\geq n$. \[T:Hinz\] Let $N$ be a module over a noetherian ring $R$ and ${\mathfrak{a}}$ an ideal of $R$ such that $\dim{R/{\mathfrak{a}}}=1$. If $N_{\mathfrak{m}}$ is ${{\mathfrak{a}}}R_{\mathfrak{m}}$–cofinite for all ${\mathfrak{m}}\in \operatorname{Max}R$, then $N$ is in the class $\mathcal Z$. In particular, if $M$ is a finite $R$–module then $\operatorname{H}^{i}_{{\mathfrak{a}}}(M)$ is in the class $\mathcal Z$ for all $i$. Let $X=N/L(N)$. 
Note that $\operatorname{Ass}_R(X)\subset {\mbox{Min}\,}{{\mathfrak{a}}}$ and is therefore a finite set. Since $${\mbox{E}}(X)=\underset{{\mathfrak{p}}\in \operatorname{Ass}_R(X)}\bigoplus {\mbox{E}}(R/{\mathfrak{p}})^{\mu^{0}({\mathfrak{p}},X)},$$ it is enough to prove that $\mu^{0}({\mathfrak{p}},X)$ is finite for all ${\mathfrak{p}}\in \operatorname{Ass}_R(X)$. This is clear, since each ${\mathfrak{p}}\in \operatorname{Ass}_R(X)$ is minimal over ${\mathfrak{a}}$ and therefore $X_{\mathfrak{p}}\cong N_{{\mathfrak{p}}}$, which is ${{\mathfrak{a}}}R_{{\mathfrak{p}}}$–cofinite, i.e. artinian over $R_{{\mathfrak{p}}}$. Given elements $x_1,\dots,x_r$ in $R$, we denote by $\operatorname{H}^{i}(x_1,\dots,x_r;M)$ the $i$’th Koszul cohomology module of the $R$–module $M$. The following lemma is used in the proof of \[T:artcof\]. \[L:inj\] Let $E$ be an injective module. If $\operatorname{H}^0(x_1,\dots,x_r ; E)=0$, then $\operatorname{H}^i(x_1,\dots,x_r ; E)=0$ for all $i$. We may assume that $E={\mbox{E}}(R/{\mathfrak{p}})$ for some prime ideal ${\mathfrak{p}}$, since $E$ is a direct sum of modules of this form, and Koszul cohomology preserves (arbitrary) direct sums. Put ${\mathfrak{a}}=(x_1,\dots,x_r)$. By hypothesis $0:_E{{\mathfrak{a}}}=0$, which means that ${\mathfrak{a}}\not\subset {\mathfrak{p}}$. Take an element $s\in {\mathfrak{a}}\setminus {\mathfrak{p}}$. It acts bijectively on $E$, hence also on $\operatorname{H}^i(x_1,\dots,x_r ; E)$ for each $i$. But ${\mathfrak{a}}\subset \operatorname{Ann}_R({\operatorname{H}^i(x_1,\dots,x_r ; E)})$ for all $i$, so the element $s$ acts as the zero homomorphism on each $\operatorname{H}^i(x_1,\dots,x_r ; E)$. The conclusion follows. First we state the definition, given in [@Mel], of the notion of filter-regularity on modules (not necessarily finite) over any noetherian ring. When $(R,{\mathfrak{m}})$ is local and $M$ is finite, it yields the ordinary notion of filter-regularity, see [@CST].
Let $M$ be a module over the noetherian ring $R$. An element $x$ of $R$ is called filter-regular on $M$ if the module $0:_M{x}$ has finite length. A sequence $x_1,...,x_s$ is said to be filter-regular on $M$ if $x_j$ is filter-regular on $M/(x_1,...,x_{j-1})M$ for $j=1,...,s$. The following theorem yields a characterization of artinian cofinite local cohomology modules. \[T:artcof\] Let ${\mathfrak{a}}=(x_1,...,x_r)$ be an ideal of a noetherian ring $R$ and let $n$ be a positive integer. For each $R$–module $M$ the following conditions are equivalent: 1. $\operatorname{H}^{i}_{{\mathfrak{a}}}(M)$ is artinian and ${\mathfrak{a}}$–cofinite for all $i<n$. 2. $\operatorname{Ext}^{i}_{R}(R/{\mathfrak{a}},M)$ has finite length for all $i<n$. 3. The Koszul cohomology modules $\operatorname{H}^i(x_1,\dots,x_r ; M)$ have finite length for all $i<n$. When $M$ is finite, these conditions are also equivalent to: 4. $\operatorname{H}^{i}_{{\mathfrak{a}}}(M)$ is artinian for all $i<n$. 5. There is a sequence of length $n$ in ${\mathfrak{a}}$ that is filter-regular on $M$. We use induction on $n$. When $n=1$, the conditions (ii) and (iii) both say that $0:_M{{\mathfrak{a}}}$ has finite length, and they are therefore equivalent to (i) [@Mel Proposition 4.1]. Let $n> 1$ and assume that the conditions are equivalent when $n$ is replaced by $n-1$. Put $L={\Gamma}_{{\mathfrak{a}}}(M)$ and $\overline{M}=M/L$ and form the exact sequence $0{\longrightarrow}L{\longrightarrow}M{\longrightarrow}\overline{M}{\longrightarrow}0$. We have ${\Gamma}_{{\mathfrak{a}}}(\overline{M})=0$ and $\operatorname{H}^{i}_{{\mathfrak{a}}}(\overline{M})\cong \operatorname{H}^{i}_{{\mathfrak{a}}}(M)$ for all $i> 0$.
There are exact sequences $$\operatorname{Ext}^{i}_{R}(R/{\mathfrak{a}},L)\rightarrow \operatorname{Ext}^{i}_{R}(R/{\mathfrak{a}},M)\rightarrow \operatorname{Ext}^{i}_{R}(R/{\mathfrak{a}},\overline{M})\rightarrow \operatorname{Ext}^{i+1}_{R}(R/{\mathfrak{a}},L)$$ and $\operatorname{H}^i(x_1,\dots,x_r ; L)\rightarrow \operatorname{H}^i(x_1,\dots,x_r ; M) \rightarrow \operatorname{H}^i(x_1,\dots,x_r ; \overline{M}) \rightarrow \operatorname{H}^{i+1}(x_1,\dots,x_r ; L)$. Because $L$ is artinian and ${\mathfrak{a}}$–cofinite, the outer terms of both exact sequences have finite length. Hence $M$ satisfies one of the conditions if and only if $\overline{M}$ satisfies the same condition. We may therefore assume that ${\Gamma}_{{\mathfrak{a}}}(M)=0$. Let $E$ be the injective hull of $M$ and put $N=E/M$. Consider the exact sequence $0{\longrightarrow}M{\longrightarrow}E{\longrightarrow}N{\longrightarrow}0$. We know that $0:_M{{\mathfrak{a}}}=0$. Therefore $0:_E{{\mathfrak{a}}}=0$ and ${\Gamma}_{{\mathfrak{a}}}(E)=0$. Consequently there are isomorphisms for all $i\geq 0$: $$\operatorname{H}^{i+1}_{{\mathfrak{a}}}(M)\cong \operatorname{H}^{i}_{{\mathfrak{a}}}(N),$$ $$\operatorname{Ext}^{i+1}_{R}(R/{\mathfrak{a}},M)\cong \operatorname{Ext}^{i}_{R}(R/{\mathfrak{a}},N)$$ and $\operatorname{H}^{i+1}(x_1,\dots,x_r ; M)\cong \operatorname{H}^i(x_1,\dots,x_r ; N).$ In order to get the third isomorphism, we used that $\operatorname{H}^i(x_1,\dots,x_r ; E)=0$ for all $i\geq 0$ (\[L:inj\]). Hence $M$ satisfies one of the three conditions if and only if $N$ satisfies the same condition, with $n$ replaced by $n-1$. By induction, we may therefore conclude that the module $M$ satisfies all three conditions if it satisfies one of them. Let now $M$ be a finite module. (ii)$\Leftrightarrow $(iv) Use [@Mel Theorem 5.5 (i) $\Leftrightarrow $(ii)]. (v)$\Rightarrow $(i) Use [@Mel Theorem 6.4]. (i)$\Rightarrow $(v) We give a proof by induction on $n$.
Put $L={\Gamma}_{{\mathfrak{a}}}(M)$ and $\overline{M}=M/L$. Then $\operatorname{Ass}_R L=\operatorname{Ass}_R M\cap{\mbox{V}}({\mathfrak{a}})$ and $\operatorname{Ass}_R\overline M=\operatorname{Ass}_R M\setminus{\mbox{V}}({\mathfrak{a}})$. The module $L$ has finite length and therefore $\operatorname{Ass}_R L\subset\operatorname{Max}R$. By prime avoidance take an element $y_1\in{\mathfrak{a}}\setminus\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}_R(\overline M)}{{\mathfrak{p}}}$. Then $\operatorname{Ass}_R(0:_M{y_1})=\operatorname{Ass}_R(M)\cap{\mbox{V}}(y_1) = (\operatorname{Ass}_R L\cap{\mbox{V}}(y_1))\cup(\operatorname{Ass}_R\overline M \cap{\mbox{V}}(y_1)) \subset\operatorname{Max}R$. Hence $0:_M{y_1}$ has finite length, so the element $y_1\in{\mathfrak{a}}$ is filter-regular on $M$. Suppose $n> 1$ and take $y_1$ as above. Note that $\operatorname{H}^{i}_{{\mathfrak{a}}}(M)\cong \operatorname{H}^{i}_{{\mathfrak{a}}}(\overline{M})$ for all $i\geq 1$. Thus we may replace $M$ by $\overline{M}$ [@Mel Proposition 6.3 (b)], and we may assume that $y_1$ is a non-zerodivisor on $M$. The exact sequence $0\rightarrow M\overset{y_1}\rightarrow M\rightarrow M/{y_1}M\rightarrow 0$ yields the long exact sequence $$\dots{\longrightarrow}\operatorname{H}^{i-1}_{{\mathfrak{a}}}(M){\longrightarrow}\operatorname{H}^{i-1}_{{\mathfrak{a}}}(M/{y_1}M) {\longrightarrow}\operatorname{H}^{i}_{{\mathfrak{a}}}(M){\longrightarrow}\dots.$$ Hence $\operatorname{H}^{i}_{{\mathfrak{a}}}(M/{y_1}M)$ is ${\mathfrak{a}}$–cofinite and artinian for all $i< n-1$, by [@LMcof Corollary 1.7]. Therefore by the induction hypothesis there exists a sequence $y_2,\dots,y_n$ in ${\mathfrak{a}}$ which is filter-regular on $M/{y_1}M$. Thus $y_1,\dots,y_n$ is filter-regular on $M$.
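The filter-regular elements produced in this proof need not be non-zerodivisors. For instance (an illustrative example of ours), over a local ring $(R,{\mathfrak{m}})$ with a nonmaximal prime ${\mathfrak{p}}$, put $M=R/{\mathfrak{p}}\oplus R/{\mathfrak{m}}$ and pick $x\in{\mathfrak{m}}\setminus{\mathfrak{p}}$; then

```latex
\[
  0:_{M}x
  \;=\;\bigl(0:_{R/\mathfrak{p}}x\bigr)\oplus\bigl(0:_{R/\mathfrak{m}}x\bigr)
  \;=\;0\oplus R/\mathfrak{m},
\]
```

which has finite length, so $x$ is filter-regular on $M$ even though it annihilates the whole summand $R/{\mathfrak{m}}$.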
Applying the criterion of \[T:artcof\] we get that if $\operatorname{Ext}^{t-j}_{R}(R/{\mathfrak{a}}, \operatorname{H}^{j}_{{\mathfrak{a}}}(M))$ has finite length for $t=n,n+1$ and for all $j<n$, then $\operatorname{Ext}^n_{R}(R/{\mathfrak{a}},M)$ has finite length if and only if $\operatorname{H}^{n}_{{\mathfrak{a}}}(M)$ is ${\mathfrak{a}}$–cofinite artinian. Next we will study attached and coassociated prime ideals for the last nonvanishing local cohomology module. First we prove a lemma used in \[C:attcoass\]. \[L:att\] For all $R$–modules $M$ and for every finite $R$–module $N$, $$\operatorname{Att}_R(M\otimes_R{N})=\operatorname{Att}_R(M)\cap \operatorname{Supp}_R(N).$$ Let ${\mathfrak{p}}\in \operatorname{Att}_R(M\otimes_R{N})$, so $ {\mathfrak{p}}=\operatorname{Ann}_R((M\otimes_R{N})\otimes_R{R/{\mathfrak{p}}})$. However, this ideal contains both $\operatorname{Ann}_R (M/{{\mathfrak{p}}M})$ and $\operatorname{Ann}_R (N)$ and therefore ${\mathfrak{p}}= \operatorname{Ann}_R(M/{{\mathfrak{p}}M})$ and ${\mathfrak{p}}\in\operatorname{Supp}_R(N)$. Conversely let ${\mathfrak{p}}\in \operatorname{Att}_R(M)\cap \operatorname{Supp}_R(N)$. Then ${\mathfrak{p}}=\operatorname{Ann}_R(M/{{\mathfrak{p}}}M)$ and we want to show that $ {\mathfrak{p}}=\operatorname{Ann}_R((M\otimes_R{N})\otimes_R{R/{\mathfrak{p}}})$. Since $(M\otimes_R{N})\otimes_R{R/{\mathfrak{p}}}\cong M/{{\mathfrak{p}}}M\otimes_{R/{\mathfrak{p}}}{N/{{\mathfrak{p}}}N}$, we may assume that $R$ is a domain and ${\mathfrak{p}}=(0)$. Let $K$ be the field of fractions of $R$. Then $\operatorname{Ann}M=0$ and $N\otimes_R{K}\neq 0$. Therefore the natural homomorphism $f: R{\longrightarrow}\operatorname{End}_R(M)$ is injective and we have the following exact sequence $$0{\longrightarrow}\operatorname{Hom}_R(N,R){\longrightarrow}\operatorname{Hom}_R(N,\operatorname{End}_R(M)).$$ But $\operatorname{Hom}_R(N,\operatorname{End}_R(M))\cong \operatorname{Hom}_R(M\otimes_R{N},M)$.
Hence we get $\operatorname{Ann}_R({M\otimes_R{N}})\subset \operatorname{Ann}_R{\operatorname{Hom}_R(M\otimes_R{N},M)} \subset \operatorname{Ann}_R{\operatorname{Hom}_R(N,R)}\subset \operatorname{Ann}_R({\operatorname{Hom}_R(N,R)\otimes_R{K}}).$ On the other hand $\operatorname{Hom}_R(N,R)\otimes_R{K}\cong \operatorname{Hom}_R(N\otimes_R{K},K)$, which is a nonzero vector space over $K$. Consequently $\operatorname{Ann}_R({M\otimes_R{N}})=0$. \[T:attcoass\] Let $(R,{\mathfrak{m}})$ be a complete local ring and let ${\mathfrak{a}}$ be an ideal of $R$. Let $t$ be a nonnegative integer such that $\operatorname{H}^{i}_{{\mathfrak{a}}}(R)=0$ for all $i>t$. 1. If ${\mathfrak{p}}\in \operatorname{Att}_R(\operatorname{H}^{t}_{{\mathfrak{a}}}(R))$ then $\dim R/{{\mathfrak{p}}}\geq t.$ 2. If ${\mathfrak{p}}$ is a prime ideal such that $\dim R/{{\mathfrak{p}}}=t$, then the following conditions are equivalent: 1. ${\mathfrak{p}}\in \operatorname{Coass}_R(\operatorname{H}^{t}_{{\mathfrak{a}}}(R))$. 2. ${\mathfrak{p}}\in \operatorname{Att}_R(\operatorname{H}^{t}_{{\mathfrak{a}}}(R))$. 3. $\operatorname{H}^{t}_{{\mathfrak{a}}}(R/{\mathfrak{p}})\neq 0$. 4. $\sqrt{{\mathfrak{a}}+{\mathfrak{p}}}={\mathfrak{m}}$. \(a) By the right exactness of the functor $\operatorname{H}^{t}_{{\mathfrak{a}}}(-)$ we have $$\label{E:iso} \operatorname{H}^{t}_{{\mathfrak{a}}}(R/{\mathfrak{p}})\cong \operatorname{H}^{t}_{{\mathfrak{a}}}(R)/{{\mathfrak{p}}}\operatorname{H}^{t}_{{\mathfrak{a}}}(R)$$ If ${\mathfrak{p}}\in \operatorname{Att}_R(\operatorname{H}^{t}_{{\mathfrak{a}}}(R))$, then $\operatorname{H}^{t}_{{\mathfrak{a}}}(R)/{{\mathfrak{p}}}\operatorname{H}^{t}_{{\mathfrak{a}}}(R)\neq 0$. Hence $\operatorname{H}^{t}_{{\mathfrak{a}}}(R/{\mathfrak{p}})\neq 0$ and $\dim R/{{\mathfrak{p}}}\geq t.$ \(b) Since $R/{\mathfrak{p}}$ is a complete local domain of dimension $t$, the equivalence of (iii) and (iv) follows from the local Lichtenbaum Hartshorne vanishing theorem. 
If $\operatorname{H}^{t}_{{\mathfrak{a}}}(R/{\mathfrak{p}})\neq 0$, then by (\[E:iso\]) $\operatorname{H}^{t}_{{\mathfrak{a}}}(R)/{{\mathfrak{p}}}\operatorname{H}^{t}_{{\mathfrak{a}}}(R)\neq 0$. Therefore ${\mathfrak{p}}\subset {\mathfrak{q}}$ for some ${\mathfrak{q}}\in \operatorname{Coass}_R(\operatorname{H}^{t}_{{\mathfrak{a}}}(R))\subset \operatorname{Att}_R(\operatorname{H}^{t}_{{\mathfrak{a}}}(R))$. By (a), $\dim R/{{\mathfrak{q}}}\geq t= \dim R/{{\mathfrak{p}}}$, so we must have ${\mathfrak{p}}={\mathfrak{q}}$. Thus (iii) implies (i) and since always $\operatorname{Coass}_R(\operatorname{H}^{t}_{{\mathfrak{a}}}(R))\subset \operatorname{Att}_R(\operatorname{H}^{t}_{{\mathfrak{a}}}(R))$, (i) implies (ii). If (ii) holds, then the module $\operatorname{H}^{t}_{{\mathfrak{a}}}(R)/{{\mathfrak{p}}}\operatorname{H}^{t}_{{\mathfrak{a}}}(R)\neq 0$, since its annihilator is the proper ideal ${\mathfrak{p}}$. Hence, using again the isomorphism (\[E:iso\]), (ii) implies (iii). \[C:attcoass\] Let $(R,{\mathfrak{m}})$ be a complete local ring, ${\mathfrak{a}}$ an ideal of $R$, $M$ a finite $R$–module, and $t$ a nonnegative integer such that $\operatorname{H}^{i}_{{\mathfrak{a}}}(M)=0$ for all $i>t$. 1. If ${\mathfrak{p}}\in \operatorname{Att}_R(\operatorname{H}^{t}_{{\mathfrak{a}}}(M))$ then $\dim R/{{\mathfrak{p}}}\geq t.$ 2. If ${\mathfrak{p}}$ is a prime ideal in $\operatorname{Supp}_R(M)$ such that $\dim R/{{\mathfrak{p}}}=t$, then the following conditions are equivalent: 1. ${\mathfrak{p}}\in \operatorname{Coass}_R(\operatorname{H}^{t}_{{\mathfrak{a}}}(M))$. 2. ${\mathfrak{p}}\in \operatorname{Att}_R(\operatorname{H}^{t}_{{\mathfrak{a}}}(M))$. 3. $\operatorname{H}^{t}_{{\mathfrak{a}}}(R/{\mathfrak{p}})\neq 0$. 4. $\sqrt{{\mathfrak{a}}+{\mathfrak{p}}}={\mathfrak{m}}$. Passing from $R$ to $R/\operatorname{Ann}M$, we may assume that $\operatorname{Ann}M=0$ and therefore, using Gruson’s theorem (see [@V Theorem 4.1]), $\operatorname{H}^{i}_{{\mathfrak{a}}}(N)=0$ for all $i>t$ and every $R$–module $N$.
Hence the functor $\operatorname{H}^{t}_{{\mathfrak{a}}}(-)$ is right exact and therefore, since it preserves direct limits, we get $$\operatorname{H}^{t}_{{\mathfrak{a}}}(M)\cong M\otimes_R{\operatorname{H}^{t}_{{\mathfrak{a}}}(R)}.$$ The claims follow from \[T:attcoass\] using the following equalities: $$\operatorname{Coass}_R(\operatorname{H}^{t}_{{\mathfrak{a}}}(M))=\operatorname{Coass}_R(\operatorname{H}^{t}_{{\mathfrak{a}}}(R))\cap \operatorname{Supp}_R(M)$$ by [@Zrmm Folgerung 3.2] and $$\operatorname{Att}_R(\operatorname{H}^{t}_{{\mathfrak{a}}}(M))=\operatorname{Att}_R(\operatorname{H}^{t}_{{\mathfrak{a}}}(R))\cap \operatorname{Supp}_R(M)$$ by \[L:att\]. [99]{} M. Aghapournahr, L. Melkersson, *A natural map in local cohomology*, preprint. M.P. Brodmann, R.Y. Sharp, *Local cohomology: an algebraic introduction with geometric applications*, Cambridge University Press, 1998. W. Bruns, J. Herzog, *Cohen-Macaulay rings*, Cambridge University Press, revised ed., 1998. D. Delfino and T. Marley, *Cofinite modules and local cohomology*, J. Pure Appl. Alg. [**121**]{} (1997), 45–52. K. Divaani-Aazar, A. Mafi, *Associated primes of local cohomology modules of weakly Laskerian modules*, Comm. Algebra [**34**]{} (2006), 681–690. K. Divaani-Aazar, A. Mafi, *Associated primes of local cohomology modules*, Proc. Amer. Math. Soc. [**133**]{} (2005), 655–660. A. Grothendieck, *Cohomologie locale des faisceaux cohérents et théorèmes de Lefschetz locaux et globaux (SGA 2)*, North-Holland, Amsterdam, 1968. R. Hartshorne, *Affine duality and cofiniteness*, Invent. Math. **9** (1970), 145–164. H. Matsumura, *Commutative ring theory*, Cambridge University Press, 1986. L. Melkersson, *Modules cofinite with respect to an ideal*, J. Algebra [**285**]{} (2005), 649–668. L. Melkersson, *Properties of cofinite modules and applications to local cohomology*, Math. Proc. Cambridge Phil. Soc. [**125**]{} (1999), 417–423. W.
Vasconcelos, *Divisor theory in module categories*, North-Holland, Amsterdam, 1974. P. Schenzel, N. V. Trung, N. T. Cuong, *Verallgemeinerte Cohen-Macaulay-Moduln*, Math. Nachr. [**85**]{} (1978), 57–73. K. I. Yoshida, *Cofiniteness of local cohomology modules for ideals of dimension one*, Nagoya Math. J. [**147**]{} (1997), 179–191. H. Zöschinger, *Koatomare Moduln*, Math. Z. [**170**]{} (1980), 221–232. H. Zöschinger, *Minimax Moduln*, J. Algebra [**102**]{} (1986), 1–32. H. Zöschinger, *Über koassoziierte Primideale*, Math. Scand. [**63**]{} (1988), 196–211. H. Zöschinger, *Linear-kompakte Moduln über noetherschen Ringen*, Arch. Math. [**41**]{} (1983), 121–130.
--- author: - 'Joachim Janz[^1] [^2] & Thorsten Lisker' title: 'A continuum of structure and stellar content from Virgo cluster early-type dwarfs to giants?' --- Introduction ============ Early-type dwarf (dE) galaxies are commonly expected to play a key role in understanding galaxy cluster evolution. Their importance stems from their abundance – they outnumber all other galaxy types in dense cluster environments by far – and from the fact that they provide not too massive, not too dense test particles to probe the processes by which the cluster environment alters the appearance of galaxies. At the same time, dEs are predicted to form in models of a $\Lambda$CDM universe as the descendants of building blocks in hierarchical structure formation and to be in that sense close relatives to their giant counterparts, sharing a cosmological origin. A better understanding of dEs is therefore linked not only to our knowledge of the formation and evolution of galaxy clusters but also to that of structure formation itself. Once believed to be systems of simple appearance and well-defined properties, dEs were recently shown to exhibit a puzzling variety among their structures and stellar populations (see e.g. T. Lisker, this issue). This diversity opens the door widely for different formation scenarios. And indeed there are different suggestions, for example the transformation of other galaxy types by the cluster environment via ram pressure stripping or harassment, which are partly able to explain the appearance of dEs and also reproduce, with some success, fundamental scaling relations of early-type galaxies. But still today, it remains an open question to what extent these different processes play a role and whether some of the early-type dwarf galaxies share the same origin and formation mechanisms as their more massive relatives.
The above-mentioned scaling relations have long been an important tool not only to study galaxy properties but also to link those properties to their formation and evolution, and thus to answer this question. Very well studied examples are the relations between surface brightness and size (“Kormendy relation”, @1985ApJ...295...73K) and between surface brightness and luminosity (e.g. @binggeli_cameron). In combination with velocity dispersion, the Faber-Jackson relation [@1976ApJ...204..668F] and the extension to the Fundamental Plane (@1987ApJ...313...42D, @1987ApJ...313...59D) became famous. Every time these relations were analyzed for dwarfs and giants in conjunction, it was discussed whether or not they show a common behavior and what causes it. Any dwarf formation scenario has to reproduce the observationally found relations. In addition to these morphological and kinematical relations, the color magnitude relation (CMR), connecting a global parameter, the total brightness of a galaxy, to its stellar population, was extensively studied, e.g. @1959PASP...71..106B [@1992MNRAS.254..601B; @1973ApJ...179..731F; @1978ApJ...223..707S; @1978ApJ...225..742S]; @1977ApJ...216..214V. The CMR is typically explained by an increase of mean stellar metallicity (and age; see e.g. @cmr_age) with increasing galaxy mass as the dominant effect. The common underlying idea is that more massive galaxies have deeper potential wells, which can retain metal-enriched stellar ejecta more effectively and subsequently recycle the enriched gas into new stars. Here, too, it was explored whether and, if so, how much these processes shape the CMR of giants and dwarfs in a similar way. We made use of a very homogeneous data set of the early types in the Virgo cluster to investigate these questions via the scaling relation of size and brightness (which is a relative of the aforementioned morphological scaling relations) and the color magnitude relation [@2008ApJ...689L..25J; @2009ApJ...696L..102J].
Sample Selection and Imaging Data {#sec:imagingdata} ================================= Our sample is based on the Virgo Cluster Catalog (VCC; @bst). All early-type galaxies therein with a certain cluster member status and $m_B<18.0$ mag are taken into account, which is the magnitude limit up to which the VCC was found to be complete. This translates into $M_B<-13.09$ mag with our adopted distance modulus of $m-M=31.09$ mag ($d=16.5$ Mpc, @2007ApJ...655..144M). Uncertain classifications are treated as follows: galaxies listed as “S0:”, “E/S0”, “S0/Sa”, and “SB0/SBa” are taken as S0, and one S0 (VCC1902) is excluded, since it shows clear spiral arm structure. For the dwarfs, we selected galaxies classified as dE, dS0, and “dE:”, whereas “dE/Im” as well as possible irregulars based on visual inspection are excluded [@lisker_etal]. We exclude 37 galaxies for the following reasons: the Petrosian aperture (see below) could not be obtained, the objects were too strongly contaminated by the light of close neighbour objects, or the $S/N$ in either the $u$ or the $z$ band was too low. Our working sample thus consists of 468 galaxies. The Sloan Digital Sky Survey (SDSS) Data Release Five (DR5) [@2007ApJS..172..634A] covers all but six early-type dwarf galaxies of the VCC. Since the quality of the sky-level subtraction of the SDSS pipeline is insufficient, we use sky-subtracted images as provided by @lisker_etal, based on a careful subtraction method. The images were flux-calibrated and corrected for galactic extinction [@1998ApJ...500..525S]. For each galaxy, we determined a “Petrosian semimajor axis” $a_p$ [@1976ApJ...209L...1P], i.e. we use ellipses instead of circles in the calculation of the Petrosian radius (see, e.g., @2004AJ....128..163L). The total flux in the $r$-band was measured within $2 a_p$; the semimajor axis containing half of this flux yields the half-light semimajor axis (SMA), $a_{hl,r,uncorr}$.
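The elliptical Petrosian measurement outlined above can be sketched in a few lines of Python. This is an illustrative simplification, not the code actually used for the paper: the function names are invented for the sketch, the Petrosian threshold $\eta=0.2$ is an assumption (the usual SDSS choice), and fluxes are simply summed over pixels.

```python
import numpy as np

def elliptical_sma(shape, x0, y0, q, pa):
    """Semimajor-axis coordinate of every pixel for ellipses of axis
    ratio q and position angle pa (radians), centred on (x0, y0)."""
    y, x = np.indices(shape)
    dx, dy = x - x0, y - y0
    major = dx * np.cos(pa) + dy * np.sin(pa)
    minor = -dx * np.sin(pa) + dy * np.cos(pa)
    return np.hypot(major, minor / q)

def petrosian_sma(img, x0, y0, q=1.0, pa=0.0, eta=0.2, step=1.0):
    """Semimajor axis a_p at which the mean surface brightness in a
    narrow elliptical annulus drops to eta times the mean surface
    brightness interior to it."""
    a = elliptical_sma(img.shape, x0, y0, q, pa)
    for r in np.arange(2.0, a.max(), step):
        annulus = (a >= r - 0.5 * step) & (a < r + 0.5 * step)
        if img[annulus].mean() < eta * img[a < r].mean():
            return r
    return np.nan

def half_light_sma(img, x0, y0, a_p, q=1.0, pa=0.0):
    """Semimajor axis enclosing half of the flux measured within
    2 a_p (the uncorrected a_hl of the text)."""
    a = elliptical_sma(img.shape, x0, y0, q, pa)
    sel = a < 2.0 * a_p
    order = np.argsort(a[sel])
    growth = np.cumsum(img[sel][order])          # curve of growth
    return a[sel][order][np.searchsorted(growth, 0.5 * growth[-1])]
```

Working in semimajor-axis coordinates rather than circular radii is what makes the aperture elliptical; everything else is the standard Petrosian construction.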
This Petrosian aperture still misses some flux, which is of particular relevance for the giant galaxies [@2001MNRAS.326..869T]. The brightness and the half-light SMA were corrected for this missing flux according to @2005AJ....130.1535G. Axial ratio and position angle were then determined through an isophotal fit at $2 a_{hl,r}$. The effective radius is then given by $r_{\textit{eff}}=a_{\textit{hl,r}}\sqrt{b/a}$ with the axis ratio $b/a$. Additionally, we fitted Sérsic profiles to the radial intensity profiles, keeping the half-light radius fixed and using an implementation of the nonlinear least-squares Levenberg-Marquardt algorithm. For the fits we used the intensities at $r/a_{\textit{hl,r}}=2^x$ with $x=-2 + j/4$ and $j=0, \dots, 16$. We omitted intensities at radii $r < 2^{\prime\prime}$ in order to avoid seeing effects. Colors were measured within the elliptical $r$-band half-light aperture for each filter. Errors were estimated from the $S/N$ and calibration uncertainties (which we estimate to have a *relative* effect of 0.01 mag in each band, smaller than the absolute values given by SDSS), as described in @2008AJ....135..380L. Sizes of early-type galaxies ============================ Introduction ------------ The scaling relation of size and brightness of early types was not as widely studied as its relatives, like the Kormendy relation and the relation between brightness and effective surface brightness. Examples of studies of the sizes are @1992ApJ...399..462B [@1993MNRAS.265..731G] and @1977ApJ...218..333K; for the Virgo cluster in particular, dwarfs were studied by @binggeli_cameron and giants by @1993MNRAS.265.1013C. But all of them share a similar history: early studies until the 1990s came to the conclusion that giant and dwarf early-type galaxies show a distinct behavior in scalings such as the relation between size and brightness. The dwarfs were seen to show less change of size with luminosity than the giants.
This together with the other scaling relations was interpreted as evidence for a different origin of dwarf and giant early-type galaxies. Towards the turn of the millennium, however, it became more widely realized that the light profile shapes of early types vary continuously with luminosity. Neither do dwarf galaxies simply follow exponential profiles, nor do all giants exhibit de Vaucouleurs profiles. Instead, all early types are well described by the generalized Sérsic profile [@1963BAAA....6...41S] with different Sérsic indices $n$ [@1994MNRAS.268L..11Y; @2006ApJS..164..334F]. Several authors reasoned that the scaling relations naturally follow what is predicted by $n$ changing linearly with magnitude, and that all these galaxies can indeed be of the same kind (@jerjen_binggeli; ; ). In @2008ApJ...689L..25J we studied the size brightness relation of early types in Virgo and analyzed it in the light of a continuous variation of profile shapes. Results ------- In Fig. 1 (bottom panel) we present the size luminosity diagram for our sample. At first glance the sequence from dwarf to giant early-type galaxies does not look very continuous: the giants follow a steep relation with a well-defined edge on the bright end of their distribution. The dwarfs, in contrast, lie with a larger scatter around an effective radius of $r_{\textit{eff}}=1$ kpc, their sizes showing weak to no dependence on luminosity. ![*Bottom panel:* Absolute magnitude in $r$ versus logarithm of half-light radius. Filled squares - E, gray stars - S0, open squares - M32 candidates, open triangles - dE,N, open pentagons - dE,nN and gray open circles for dwarf galaxies with probable disk-like structure (dE(di) or dE(bc)). The gray line is calculated with the linear fits in the *top panels* (see text).](f1.eps) A similar impression can also be obtained from other previous studies, e.g. @binggeli_cameron, Fig. 1b; @1992ApJ...399..462B, Table 1; @kormendy08, Fig. 37.
It is, however, not as clearly seen in the compilation of sizes of elliptical galaxies from several different studies presented by @2008MNRAS.tmp..752G (Fig. 10). In this more heterogeneous data set, the relative number of small low-luminosity giants as well as that of large bright dwarfs appears to be somewhat smaller. ### Varying profile shapes? {#varying-profile-shapes .unnumbered} @graham_guzman suggested that the apparent dichotomy between dwarfs and giants in scaling relations can be explained just by the fact that the profile shape of a galaxy scales with magnitude. They describe the light profiles with Sérsic profiles and show the effect of a linear relation between magnitude and logarithm of the Sérsic index $n$ on the other scaling relations. As a result, the dependence of effective radius on magnitude becomes stronger at higher luminosities and the brightest galaxies are naturally larger (Fig. 11 in @2008MNRAS.tmp..752G). To investigate whether our galaxies display the predicted behavior, we use the Sérsic indices $n$ and central surface brightnesses $\mu_0$ to obtain linear fits to the $\mu_0/M_r$ and $n/M_r$ relations, using a least-squares fitting algorithm (Fig. 1, top panels). For those fits we exclude systems with a (probable) disk component, namely galaxies classified as S0, dEs with disk features [@2006AJ....132..497L], and dEs with blue centers [@2006AJ....132.2432L]. This is to ensure that the light profiles can be well parametrized by Sérsic profiles. Our fits (Fig. 1, top panels) together with equation (16) of @2008MNRAS.tmp..752G predict a non-linear sequence in the $r_{\textit{eff}}/M_r$ diagram. The predicted relation is shown together with the observed galaxies in the bottom panel of Fig. 1. With the visual guidance of the line, it appears more likely that the data points follow one common continuous relation. And the gross trend in the diagram can indeed be explained by varying profile shapes.
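A minimal sketch of such a prediction, in Python: it inverts the total magnitude of a Sérsic model, $m=\mu_0-2.5\log_{10}\!\big(2\pi n\,\Gamma(2n)\,b^{-2n}R_e^2\big)$ with $R_e$ in arcsec, taking $n$ and $\mu_0$ from linear fits versus magnitude. The fit coefficients below are placeholders chosen for illustration, not the values derived from our data, and the function names are ours.

```python
from math import lgamma, log, log10, pi, radians, tan

DMOD = 31.09                                         # Virgo distance modulus
KPC_PER_ARCSEC = 16.5e3 * tan(radians(1.0 / 3600.0)) # ~0.080 kpc at 16.5 Mpc

# Placeholder linear fits to the n(M_r) and mu_0(M_r) relations;
# the coefficients are illustrative only.
def log10_n(M_r):
    return -0.10 * (M_r + 18.0) + log10(1.5)

def mu_0(M_r):                                       # mag arcsec^-2
    return 1.5 * (M_r + 18.0) + 20.0

def r_eff_kpc(M_r):
    """Effective radius implied by a Sersic model whose index and
    central surface brightness vary linearly with magnitude."""
    n = 10.0 ** log10_n(M_r)
    b = 1.9992 * n - 0.3271                          # approximation to b_n
    m = M_r + DMOD                                   # apparent magnitude
    # invert m = mu_0 - 2.5 log10(2 pi n Gamma(2n) b^-2n Re^2) for Re
    log10_re2 = ((mu_0(M_r) - m) / 2.5 - log10(2.0 * pi * n)
                 - lgamma(2.0 * n) / log(10.0) + 2.0 * n * log10(b))
    return 10.0 ** (0.5 * log10_re2) * KPC_PER_ARCSEC
```

Because $n$ enters through $\Gamma(2n)$ and $b^{-2n}$, the resulting $r_{\textit{eff}}(M_r)$ curve is non-linear even though both input relations are linear.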
However, at luminosities brightwards of the transition between dwarfs and giants, a substantial number of galaxies fall below the relation, while faintwards most of the dwarfs lie above it. As we showed in @2008ApJ...689L..25J, this finding holds also if all objects with signs of disk components are omitted in order to have a purer sample of dynamically hot systems not biased by systems with more complex kinematics. Furthermore, we analyzed the deviations statistically and found them to be significant (Fig. 2 therein). We will discuss the implications in Sect. \[sec:sam\] and Sect. \[sec:disc\]. Our analysis showed two things. First, the distribution of data points does not resemble merely a large random scatter around the relation. Therefore the size luminosity relation cannot be fully explained by varying profile shapes. Second, the abrupt change in the behavior of faint and bright galaxies is even emphasized through the above examination, and this break is a real discontinuity of the sequence from lowest to highest luminosities. Color magnitude relation ======================== Introduction ------------ From early on, it was discussed whether the universality of the CMR also holds over the whole range of galaxy masses, i.e. whether dwarf and giant early-type galaxies follow the same CMR. Studies of different clusters show consistency with one common CMR for dwarfs and giants, albeit with a significant increase in the scatter at low luminosities (@1997PASP..109.1377S for Coma, @2002AJ....123...2246C for Perseus, @2003MNRAS.344..188K and for Fornax, @2008MNRAS.386.2311S for Antlia, and for Hydra I). More explicitly, @1983AJ.....88..804C stated that there is a common linear relation. But his Fig. 3 might hint at a change of slope from high to low luminosities, similar to what @1961ApJS....5..233D suggested.
Interestingly, visual examination of the diagrams presented by most of the above-mentioned studies indicates consistency also with a change of slope – yet linear relations were fitted in most cases (see, however, @2006ApJS..164..334F; our colors are consistent with the ones in their Virgo cluster study in the range of brightness common to both). In @2009ApJ...696L..102J we revisited the question of the universality of the CMR for dwarfs and giants. Results ------- We showed the CMRs for four different representative colors in @2009ApJ...696L..102J. Here we choose to show the CMR in $u-z$, the color with the longest wavelength baseline available to SDSS. It looks remarkably similar to the CMRs in the other colors (Fig. 1 in @2009ApJ...696L..102J). First of all, the impression one can get by examining just the *black points* in Fig. 2 is that there is not one common linear relation from the faint to the bright galaxies. The overall shape appears more “S”-shaped. The brightest ($M_r<-21$) galaxies have almost constant color, i.e. no correlation between color and brightness; the very brightest galaxies show a larger scatter. These were reported before to be *morphologically* different from the other galaxies in more detailed studies of the inner light profiles (e.g. @2006ApJS..164..334F [@kormendy08; @2007ApJ...664..226L]; @2001AJ....122..653R [@2004AJ....127.1917T]). For the remaining galaxies several descriptions seem to be plausible, ranging from just an offset between two relations with similar slopes up to a curved relation. ![Color Magnitude Relation in $u-z$. Filled circles show our sample’s galaxies. The gray line indicates the “running histogram” as found in successive magnitude bins with a width of $1$ mag and steps of $0.25$ mag, clipped one time at $3 \sigma$. We limit the drawing range for the line to the region with at least three galaxies in a bin.
The white histograms show the distributions in bins of the same width, normalized to the square root of the number of galaxies in the bin, shown for every fourth step. The white errorbars indicate typical photometric errors at the respective brightness. In the panel right to the color magnitude diagram a measure for the intrinsic scatter is given by the RMS difference $\delta$ of the observed scatter and the photometric error (see text) in continuous bins. ](f2.eps) Given the non-linear shape, it does not seem favorable to fit a straight line: this would not describe the data well, and there is no theoretical prediction of what other function to expect. So at first, we want to make the overall shape more clearly visible, using continuous, overlapping magnitude bins, in which mean and scatter are calculated. In Fig. 2 these derived relations are shown with gray lines. The first impression is confirmed: one common linear relation for dwarfs and giants cannot be seen. Moreover, the white histograms showing the galaxy distributions in the bins are clearly peaked towards the bright and the faint end, while they are rather flat at intermediate luminosity. The scatter about the relation is greatly influenced by increasing photometric errors towards fainter magnitudes. In order to measure the *intrinsic* scatter, we calculate the RMS of the scatter around the mean in running bins (clipping one time at 3$\sigma$) and subtract the RMS of the photometric errors $$\delta\equiv\textrm{RMS difference}\equiv\sqrt{\textrm{rms}^2_\textrm{scat}-\textrm{rms}^2_\textrm{err}}$$ $$=\sqrt{\frac{1}{N}\sum_i (c_i-\left<c\right>_i)^2-\frac{1}{N}\sum_i \sigma_i^2},$$ with color $c$ and mean color $\left<c\right>$, averaging over the $N$ galaxies in the respective bin. Here we exclude dEs with blue cores, since they are known to have different colors [@2008AJ....135..380L]. This RMS difference should be zero if the scatter is only due to the measurement errors and larger if there is an intrinsic scatter. In Fig.
2 we show the CMR along with the RMS difference $\delta$. Indeed, the RMS difference is enhanced for the dwarfs and peaks around $M_r\approx -18$, indicating an intrinsically increased scatter, consistent with the increased intrinsic scatter for dwarfs found by [@1997PASP..109.1377S] in the Coma cluster. One can argue about the significance of the RMS difference increase for the brightest galaxies, since it is just a handful of them – nevertheless, this larger scatter might be related to the absence of a well-defined CMR at the brightest magnitudes. Comparison to a Semi-Analytic Model {#sec:sam} =================================== ![image](f3a.eps) ![image](f3b.eps) Dark-halo merger trees of a high resolution $N$-body simulation of $\Lambda$CDM structure formation were taken as input for a semi-analytic model (SAM) of the physical processes governing galaxy formation and evolution in order to produce the Numerical Galaxy Catalog [@2005ApJ...634...26N]. In particular, the dynamical response to starburst-induced gas removal after gas-rich mergers (also for cases intermediate between a purely baryonic cloud and a baryonic cloud fully supported by surrounding dark matter as in ) was taken into account. This process plays a crucial role for the sizes of early-type dwarf galaxies, since the subsequent variation of the potential results in an increase in size. We identify model galaxies as early-type galaxies if they are bulge dominated (bulge-to-total ratio $>0.6$). To compare our data with the model we transformed SDSS $g$ magnitudes into $B$ according to [@2002AJ....123.2121S], using the galaxies’ $g-r$ color measured within $a_{hl,r}$. $B-V$ was obtained likewise. In Fig. 3 one can see how the model galaxies compare with our observed Virgo early types. In the left panel the comparison is displayed for the size brightness relation. The model galaxies show a bimodality similar to what we observe, with low galaxy density between the two regions.
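As an aside, the RMS-difference statistic $\delta$ used for Fig. 2 (overlapping bins of 1 mag stepped by 0.25 mag, a single clip at $3\sigma$, bins with at least three galaxies) can be sketched in Python as follows. This is an illustrative reimplementation of the described procedure, not our analysis code; the function name and interface are invented for the sketch.

```python
import numpy as np

def intrinsic_scatter(M_r, color, err, width=1.0, step=0.25):
    """RMS difference delta = sqrt(rms_scat^2 - rms_err^2) of the color
    scatter about the running mean, in overlapping magnitude bins with
    one 3-sigma clip. Returns bin centers and delta values."""
    centers, delta = [], []
    for m0 in np.arange(M_r.min(), M_r.max(), step):
        sel = np.abs(M_r - m0) < 0.5 * width
        if sel.sum() < 3:                 # require >= 3 galaxies per bin
            continue
        c, s = color[sel], err[sel]
        keep = np.abs(c - c.mean()) < 3.0 * c.std()   # single 3-sigma clip
        c, s = c[keep], s[keep]
        rms_scat2 = np.mean((c - c.mean()) ** 2)
        rms_err2 = np.mean(s ** 2)
        if rms_scat2 > rms_err2:          # delta defined only if positive
            centers.append(m0)
            delta.append(np.sqrt(rms_scat2 - rms_err2))
    return np.array(centers), np.array(delta)
```

With photometric errors alone, $\delta$ scatters around zero; any excess traces the intrinsic color spread in the bin.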
Note that @2005ApJ...634...26N assume de Vaucouleurs profiles to calculate projected half-light radii from half-mass radii. For exponential profiles, which would be more appropriate for dwarfs, the model galaxies would shift upwards in the diagram by 0.11 dex [@2003MNRAS.340..509N]. In the model, a starburst follows in those dwarfs that form by gas-rich major mergers, and the dwarfs are enlarged by the dynamic response to the subsequent gas loss. This mechanism is not at work in gas-deficient mergers, and the resulting galaxies stay smaller. Note, though, that this appears to be in contrast with the SAM of @khochfar_sam, where gas-rich mergers lead to more compact early-type galaxies than gas-poor mergers. In the right panel of Fig. 3 the comparison is displayed for the CMR. The shapes of the distribution of Virgo galaxies and the model CMR are indeed not well represented by linear relations. Neither the CMR nor the relation between size and brightness shows a linear or a common behavior of dwarfs and giants. Nevertheless, it needs to be emphasized that there is a *qualitative* similarity in the shapes of the observed and model CMR, in the sense that both show a similar bend at intermediate luminosities. This is noteworthy, since in the framework of the SAM, both dwarfs and giants form by the same physical processes, which govern $\Lambda$CDM structure formation, and thus both can be of cosmological origin (see also @2008arXiv0812.3272C). Besides the similarity in the overall shape, an offset is observed. This offset could partly be due to uncertainties of the adopted synthetic stellar population model. Furthermore, the relative number of bright galaxies exceeds the observed one and the luminosity function is clearly different, which could possibly be attributed to the model input physics. Discussion {#sec:disc} ========== We studied two scaling relations of the Virgo cluster early-type galaxies, based on model-independent size measurements from SDSS imaging data.
In both, the relation of size and brightness and the CMR, we find noticeable discontinuities between dwarfs and giants. In the former relation the dwarfs do not fall on the extension of the rather steep sequence of the giants. While the gross trend in the size luminosity relation can be explained by light profile shapes becoming steeper for more luminous galaxies, a closer look reveals a statistically significant difference in the behavior of faint and bright galaxies. The CMR is continuous over the whole range. Yet the observed change in slope and the variation of the scatter might hint at more complex reasons for this particular behavior than what might be naively expected from having one common origin, with the very same processes shaping the CMR in the same way. But with the comparison to the semianalytic model (notwithstanding our rather crude way of selecting early types in the SAM) we show that neither of the two findings necessarily implies a formation by substantially distinct processes. Instead, the qualitatively similar distributions of the model galaxies in the two scaling relations might hint at a formation in a cosmological context for the dwarf galaxies, hence an origin common to the giant ellipticals. This is in accordance with previous claims of no distinction between them (; ). It must be mentioned that different approaches can also successfully explain the dwarf behavior. For example, ram pressure stripping can reproduce the radius brightness relation (@2008arXiv0807.3282B, see also A. Boselli, this issue). Given the newly appreciated diversity of dwarf appearances, we see that as an advantage rather than a shortcoming. For future studies, the following procedure seems promising: both relations seem to be related in one way or another to the galaxies’ dynamics. @2005AJ....129...61B concluded that the CMR is a result of two more fundamental relations: the Faber-Jackson relation and a relation between color and velocity dispersion.
Given the slope change of the Faber-Jackson relation from giants to dwarfs, a change of slope of the CMR would indeed be expected. Moreover, in the semianalytic model the sizes are strongly influenced by the dynamic feedback, which in the model’s context should be closely related to the internal dynamics of the galaxies. It is therefore desirable to obtain central velocity dispersions and kinematics for early-type dwarfs in greater numbers. We thank the organizers for financial support to participate in the conference. J.J. is supported by the Gottlieb Daimler and Karl Benz Foundation. J.J. and T.L. are supported by the Excellence Initiative within the German Research Foundation (DFG) through the Heidelberg Graduate School of Fundamental Physics (grant number GSC 129/1). The study is based on SDSS (http://www.sdss.org/). , J. K., et al. 2007, , 172, 634 , W. A. 1959, , 71, 106 , R., [Burstein]{}, D., & [Faber]{}, S. M. 1992, , 399, 462 , M., [Sheth]{}, R. K., [Nichol]{}, R. C., [Schneider]{}, D. P., & [Brinkmann]{}, J. 2005, , 129, 61 , B. & [Cameron]{}, L. M. 1991, A&A, 252, 27 , B. & [Jerjen]{}, H. 1998, A&A, 333, 17 , B., [Sandage]{}, A., & [Tammann]{}, G. A. 1985, , 90, 1681 , A., [Boissier]{}, S., [Cortese]{}, L., & [Gavazzi]{}, G. 2008, A&A, 489, 1015 , R. G., [Lucey]{}, J. R., & [Ellis]{}, R. S. 1992, , 254, 601 , N. 1983, , 88, 804 , N., [Capaccioli]{}, M., & [D’Onofrio]{}, M. 1993, , 265, 1013 , R., et al. 2006, , 366, 717 , I. 2009, , 394, 1229 , C. J., [Gallagher]{}, III, J. S., & [Wyse]{}, R. F. G. 2002, , 123, 2246 , S., [Michielsen]{}, D., [Dejonghe]{}, H., [Zeilinger]{}, W. W., & [Hau]{}, G. K. T. 2005, A&A, 438, 491 , G. 1961, , 5, 233 , S. & [Davis]{}, M. 1987, , 313, 59 , A., et al. 1987, , 313, 42 , S. M. 1973, , 179, 731 , S. M. & [Jackson]{}, R. E. 1976, , 204, 668 , L., et al. 2006, , 164, 334 , I., [Charlot]{}, S., & [Silk]{}, J. 1999, , 521, 81 , A., [Charlot]{}, S., [Brinchmann]{}, J., & [White]{}, S. D. M. 2006, , 370, 1106 , G., et al. 2002, , 576, 135 , G., et al. 2005, A&A, 430, 411 , A. W., et al. 2005, , 130, 1535 , A. W. & [Guzm[á]{}n]{}, R. 2003, , 125, 2936 , A. W. & [Worley]{}, C. C. 2008, , 388, 1708 , R., [Lucey]{}, J. R., & [Bower]{}, R. G. 1993, , 265, 731 , J. & [Lisker]{}, T. 2008, , 689, L25 , J. & [Lisker]{}, T. 2009, , 696, L102 , H. & [Binggeli]{}, B. 1997, in ASP Conf. Ser. 116, The Nature of Elliptical Galaxies; ed. M. [Arnaboldi]{}, G. S. [Da Costa]{}, & P. [Saha]{}, 239 , A. M., [Drinkwater]{}, M. J., & [Gregg]{}, M. D. 2003, , 344, 188 , S. & [Silk]{}, J. 2006, , 648, L21 , T. & [Arimoto]{}, N. 1997, A&A, 320, 41 , J., [Weidner]{}, C., & [Kroupa]{}, P. 2007, , 375, 673 , J. 1977, , 218, 333 —. 1985, , 295, 73 , J., [Fisher]{}, D. B., [Cornell]{}, M. E., & [Bender]{}, R. 2009, , 182, 216 , T. R., et al. 2007, , 664, 226 , T., [Grebel]{}, E. K., & [Binggeli]{}, B. 2006, , 132, 497 , T., [Glatt]{}, K., [Westera]{}, P., & [Grebel]{}, E. K. 2006, , 132, 2432 —. 2008, , 135, 380 , T., [Grebel]{}, E. K., [Binggeli]{}, B., & [Glatt]{}, K. 2007, , 660, 1186 , T. & [Han]{}, Z. 2008, , 680, 1042 , J. M., [Primack]{}, J., & [Madau]{}, P. 2004, , 128, 163 , A. & [Guzm[á]{}n]{}, R. 2005, , 362, 289 , S., [Blakeslee]{}, J. P., et al. 2007, , 655, 144 , S., [Hilker]{}, M., [Infante]{}, L., & [Mendes de Oliveira]{}, C. 2007, A&A, 463, 503 , I., [Mieske]{}, S., & [Hilker]{}, M. 2008, A&A, 486, 697 , M., [Yahagi]{}, H., [Enoki]{}, M., [Yoshii]{}, Y., & [Gouda]{}, N. 2005, , 634, 26 , M. & [Yoshii]{}, Y. 2003, , 340, 509 —. 2004, , 610, 23 , V. 1976, , 209, L1 , S., [Ho]{}, L. C., [Peng]{}, C. Y., [Filippenko]{}, A. V., & [Sargent]{}, W. L. W. 2001, , 122, 653 , A. & [Visvanathan]{}, N. 1978, , 225, 742 —. 1978, , 223, 707 , D. J., [Finkbeiner]{}, D. P., & [Davis]{}, M. 1998, , 500, 525 , J., [Harris]{}, W. E., & [Plummer]{}, J. D. 1997, , 109, 1377 , J. L. 1963, Boletin de la Asociacion Argentina de Astronomia La Plata Argentina, 6, 41 , et al. 2002, , 123, 2121 , A. V., et al.
2008, , 386, 2311 , I., [Erwin]{}, P., [Asensio Ramos]{}, A., & [Graham]{}, A. W. 2004, , 127, 1917 , I., [Graham]{}, A. W., & [Caon]{}, N. 2001, , 326, 869 , N. & [Sandage]{}, A. 1977, , 216, 214 , Y. & [Arimoto]{}, N. 1987, A&A, 188, 13 , C. K. & [Currie]{}, M. J. 1994, , 268, L11 [^1]: Fellow of the Gottlieb Daimler and Karl Benz Foundation.
--- abstract: 'We review recent progress on calculations of inclusive forward hadron production within the saturation formalism. After introducing the concept of perturbative parton saturation and nonlinear evolution, we discuss the formalism for forward hadron production at high energy at leading and next-to-leading order. Numerical results are presented and compared with the experimental data on forward hadron production in $dA$ and $pA$ collisions. We discuss the problem of the negativity of the NLO cross section at high transverse momenta, study its origin in detail, and present possible improvements, which include corrected kinematics and a suitable choice of the rapidity cutoff.' bibliography: - 'nlosatrev.bib' --- [**Saturation in inclusive production beyond leading logarithm accuracy**]{} $^a$ [*Physics Department, Pennsylvania State University, 104 Davey Laboratory\ University Park, Pennsylvania 16802, USA\ *]{}[astasto@phys.psu.edu]{} $^b$ [*Institute of Particle Physics, Central China Normal University\ Wuhan, Hubei, China\ *]{}[david.zaslavsky@mailaps.org]{} Introduction {#sec:introduction} ============ Modern particle colliders like the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) allow physicists to probe the dynamics of strongly interacting matter under extreme conditions of high energy and density. One of the main goals of these experiments is to create and study a new form of strongly interacting matter, the quark-gluon plasma. A plethora of experimental data, for example on jet quenching and elliptic flow [@Arsene:2004fa; @Back:2004je; @Adams:2005dq; @Adcox:2004mh; @ATLAS:2011ah; @ATLAS:2012at; @Aamodt:2010pa], confirms the existence of this strongly coupled system with collective effects, whose subsequent evolution is tractable within hydrodynamical approaches [@Kolb:2003dz].
One of the main questions that arises when trying to describe the properties and dynamics of such a complex system is the role of the initial state in the highly energetic collisions of hadrons and nuclei. The standard approaches to describe the wide variety of processes in hadronic collisions are based on the collinear approximation [@Collins:1989gx] and are grounded in the assumption of the presence of high momentum scales, which justifies the use of perturbation theory and allows for the factorization of the cross section into perturbative matrix elements, which are process-specific, and universal parton distributions and fragmentation functions. This framework is highly successful in the description of phenomena that involve large scales and relatively small numbers of partons. On the other hand, it has been well known since the deep inelastic scattering experiments at the HERA collider that at high energy, or equivalently at small values of Bjorken $\bjorkenx$, the structure function of the proton grows very fast with decreasing values of $\bjorkenx$. This is related to the self-interaction of gluons in QCD, which leads to a strong increase of the density of gluons in the low-$\bjorkenx$ regime, and subsequently of the observable cross sections in deep inelastic scattering. The famous Balitsky-Fadin-Kuraev-Lipatov (BFKL) [@Balitsky:1978ic; @Kuraev:1977fs] evolution equation, which sums gluon emissions in the Regge limit — that is, when $\mandelstams \gg \abs{\mandelstamt}$ and $\alphas \ln\mandelstams$ is large — leads to a strong, power-like growth of the scattering amplitude. This multiplication of gluons with increasing energy can eventually lead to a regime where the densities are high and a new computational framework has to be developed in order to account for the complexities of the multiparton state.
Various calculations within Quantum Chromodynamics predict that in such a case, the phenomenon of parton saturation occurs [@Gribov:1984tu; @Mueller:1985wy]: in addition to the gluon splitting which leads to the growth of the density, a competing process emerges which tames the growth of the gluon densities and consequently of the observable cross section. The onset of this behavior depends not only on the energy of the process in question, but also on the type of the particle involved in the collision. Changing the initial particle from a proton to a heavy nucleus enhances the process of gluon recombination by a factor proportional to a certain power of the mass number, and lowers the energy at which it can occur with respect to the smaller particle. The calculations in the high-energy and high-density limit predict the emergence of an energy-dependent scale which characterizes the onset of parton saturation, namely the saturation scale $\satscale(\xtarget, \massnumber)$. It is predicted that this scale increases with the energy and the mass number $\massnumber$, though its absolute normalization is not predicted by these calculations and has to be deduced from comparisons with the experimental data. In that context, knowledge of the partonic initial state and the possible onset of parton saturation is essential for the complete description of both proton-proton and proton-ion collisions, and it will shed more light on the initial state for heavy ion collisions. There have been numerous tests of parton saturation in a variety of processes which involve both protons and nuclei. Among them are proton-nucleus collisions, which serve as one of the important benchmark processes for the heavy ion collisions. For a review see e.g. [@Albacete:2013tpa]. In a proton-nucleus collision, the proton is relatively dilute and is probing the dense system of the heavy nucleus.
Of particular importance is the study of the forward single inclusive production of hadrons in proton (or deuteron) collisions with nuclei. By observing the hadron which is produced in the forward direction of the proton (or deuteron), the kinematics are such that the participating parton from the proton side has a large value of fractional momentum $\xprojectile$, and as such is usually a valence parton. On the other hand, the parton from the nucleus side has a small value of $\xtarget \ll 1$. Therefore, the process is sensitive to the nuclear parton density at small $\bjorkenx$, and is dominated by the gluon density. It was observed in the data from BRAHMS at RHIC that the ratio of the single inclusive production in deuteron-nucleus collisions to that in proton-proton collisions was suppressed at forward rapidity. This phenomenon had a natural explanation within the parton saturation framework, which predicted enhanced suppression with increasing rapidity [@Kharzeev:2003wz; @Kharzeev:2004yx]. Nevertheless, the interpretation of these data was not unique, as the process occurred near the kinematic boundary, and other effects not necessarily related to parton saturation could also be responsible for the observed suppression. On the other hand, the accuracy of calculations which include parton saturation in the description of the experimental data, from deep inelastic scattering to nuclear collisions, still has to be better quantified by incorporating higher order corrections. Most of the calculations so far, which were compared to experimental data, were performed at leading logarithmic order in $\ln 1/\bjorkenx$, sometimes including only part of the next-to-leading order corrections; for example, in the form of the running coupling in the nonlinear evolution equations. Clearly, in order to make more robust predictions with smaller theoretical uncertainties, one needs to go beyond the lowest order calculations.
During the last decade, there have been a number of theoretical derivations which aim to go beyond the lowest order approximation in the small-$\bjorkenx$ limit, for example the derivation of the next-to-leading order corrections to the nonlinear evolution equations [@Camici:1997ij; @Ciafaloni:1998gs; @Fadin:1998py; @Balitsky:2008zza], the higher order corrections to the impact factors, and the corrections to the single inclusive hadron production [@Chirilli:2012jd; @Xiao:2014uba; @Watanabe:2016gws]. There has thus been an increased interest in incorporating these higher order calculations into phenomenological studies. Recently, the forward single inclusive hadron production in proton (or deuteron)-nucleus collisions has been analyzed up to next-to-leading order accuracy [@Xiao:2014uba] based on the analytical calculation presented in [@Chirilli:2012jd]. It was shown that the NLO corrections are large and can overtake the leading order term at higher values of transverse momenta. In this review, we discuss this calculation as well as possible ways to remedy the negativity problem, including the kinematic corrections that go beyond the high energy limit. The structure of this document is as follows. In the next section we shall briefly recap the idea of parton saturation and the nonlinear evolution equations which aim to include this effect. In Sec. \[sec:forward inclusive LO\], we shall introduce the framework for the calculation of forward hadron production in the high energy limit, which includes rescattering corrections. In Sec. \[sec:forward inclusive NLO\], we will present the outline of the NLO calculation, and the comparison with experimental data. We will also discuss the origin of the negativity and present different ways to improve the calculation, among them the inclusion of kinematical effects. 
Parton saturation and nonlinear evolution equation {#sec:evolution} ================================================== At small values of the Bjorken variable $\bjorkenx$, integrated parton densities exhibit a very strong growth as $\bjorkenx$ decreases. This phenomenon is characteristic of a massless non-abelian theory like QCD, and is due to the multiple splitting of gluons. It can be explained qualitatively in the following way. Consider an initial state which contains a single quark or gluon. The subsequent emission of a gluon occurs with the probability $$\dd\probability \simeq \frac{C\alphas}{\pi^2} \frac{\dd[2]\kperp}{\kperp^2}\frac{\dd\gluonsplitting}{\gluonsplitting} \; , \label{eq:fac}$$ where $\kperp$ is the transverse momentum of the emitted gluon with respect to the initial particle and $\gluonsplitting$ is the fraction of the longitudinal momentum of the initial particle carried by the emitted gluon. The color factor $C$ depends on the type of the parent particle. Now, each subsequent emission can proceed from the quark or the gluon, bringing in another copy of the factor in Eq. \eqref{eq:fac}. Let us consider here the situation in the high energy limit in which the transverse momenta of the emitted gluons are comparable but the longitudinal momenta are strongly ordered. Even if $\alphas$ is small, it can be compensated by the large logarithm $\ln 1/\gluonsplitting$. The subsequent emissions carry ever smaller values of $\gluonsplitting$, i.e. emissions with strong ordering in the longitudinal momentum, $$\gluonsplitting_n \ll \gluonsplitting_{n-1} \ll \dots \ll \gluonsplitting_2 \ll \gluonsplitting_1 \; \; ,$$ will result in terms proportional to $(\alphas \ln 1/\gluonsplitting)^n$. For small values of $\gluonsplitting$, these are large and need to be resummed. This resummation is accomplished in the Regge limit by the BFKL equation [@Balitsky:1978ic; @Kuraev:1977fs], which is the evolution equation for the unintegrated gluon density derived in the high energy limit.
It can be cast in the following form $$\pdv{f(\gluonsplitting,\kperp)}{\ln 1/\gluonsplitting} = \int \frac{\dd \kperp'^2}{\kperp'^2} \, \bfklkernel(\kperp,\kperp') \, f(\gluonsplitting,\kperp') \; , \label{eq:bfkl_LO}$$ where $f(\gluonsplitting, \kperp)$ is the unintegrated gluon density, which depends on the gluon transverse momentum $\kperp$. In the small-$\gluonsplitting$ approximation, the unintegrated gluon density $f(\gluonsplitting, \kperp)$ is related to the more standard integrated gluon density through the following relation $${\ensuremath{\gluonsplitting g\left(\gluonsplitting, Q^2\right)}} = \int^{Q^2} \frac{\dd \kperp^2}{\kperp^2} \, f(\gluonsplitting, \kperp) \, .$$ The function $\bfklkernel(\kperp, \kperp')$ is the evolution kernel of the BFKL equation, which gives the branching probability for the gluons in the small-$\gluonsplitting$ limit, and which has the following expansion in terms of powers of $\alphasbar$: $$\bfklkernel(\kperp, \kperp') = \alphasbar \bfklkernel_{0}(\kperp, \kperp') + \alphasbar^2 \bfklkernel_{1}(\kperp, \kperp') + \order{\alphasbar^3}\, , \label{eq:kernel_alphas}$$ where $\alphasbar = \alphas \Nc/\pi$, and $\bfklkernel_0$ and $\bfklkernel_1$ are the leading logarithmic (LL) and next-to-leading logarithmic (NLL) kernels, computed respectively in Refs. [@Balitsky:1978ic; @Kuraev:1977fs] and [@Camici:1997ij; @Ciafaloni:1998gs; @Fadin:1998py]. The evolution kernels contain real and virtual parts, which are divergent as $\kperp'\to \kperp$, but when combined together ensure the infrared safety of the evolution. The solution to this equation has been constructed in Ref. [@Lipatov:1985uk] (for a recent derivation of the solution in the NLL case see [@Chirilli:2013kca]).
It can be shown that the solution to the LL BFKL equation behaves like a power with decreasing $\gluonsplitting$, $$f(\gluonsplitting, \kperp) \sim \gluonsplitting^{-\lambda}, \qquad \lambda = 4 \ln 2 \, \alphasbar \quad (\text{LL result}) \; . \label{eq:pomeron_LO}$$ This behavior is known as the hard Pomeron behavior, as opposed to the soft Pomeron, which has a power $\lambda \sim 0.08$ and was used in the phenomenology of hadronic collisions. This result poses two immediate problems. The first is that such a strong power growth is not phenomenologically supported: the result would give about $\lambda \sim 0.5$ for typical values of the strong coupling, whereas the power extracted from the experimental data, primarily on deep inelastic scattering of leptons off protons, is approximately $0.25-0.3$. This is the power seen in the gluon distribution function extracted from the proton structure function. The second problem concerns the unitarity of the scattering amplitude. The gluon density itself and the corresponding cross section can grow without bound; however, the scattering amplitude at fixed impact parameter has to obey a unitarity bound. This poses a constraint on the possible functional form of the growth of the gluon density itself, when the scattering amplitude is integrated over the impact parameter. The Pomeron solution in the form of the power, as in Eq. \eqref{eq:pomeron_LO}, thus violates the unitarity bound. The NLL corrections to the evolution kernel have been computed in Refs. [@Camici:1997ij; @Ciafaloni:1998gs; @Fadin:1998py], and more recently in the context of the dipole evolution in [@Balitsky:2008zza].
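The numerical statements above are easy to reproduce. The short sketch below (all numbers are assumptions chosen for illustration, not results from the text) first checks the toy double-logarithmic counting behind Eq. \eqref{eq:fac}, where $n$ strongly ordered emissions contribute $(\alphasbar L)^n/n!$ and the series exponentiates into a power of $1/\gluonsplitting$, and then evaluates the LL hard-Pomeron intercept $\lambda = 4\ln 2\,\alphasbar$:

```python
import math

Nc = 3
alpha_s = 0.2                        # assumed "typical" coupling, illustration only
alpha_bar = alpha_s * Nc / math.pi

# Toy double-log counting: n strongly ordered emissions contribute
# (alpha_bar * L)^n / n!, and the series exponentiates into a power of 1/z.
z = 1e-3
L = math.log(1.0 / z)
series = sum((alpha_bar * L) ** n / math.factorial(n) for n in range(60))
print(series, z ** (-alpha_bar))     # the two agree

# The full LL BFKL resummation gives the larger, hard-Pomeron power:
lam_LL = 4.0 * math.log(2.0) * alpha_bar
print(f"LL intercept lambda = {lam_LL:.2f}")   # ~0.5, vs ~0.25-0.3 from data
```

Note that the naive exponentiation into $\gluonsplitting^{-\alphasbar}$ is only a counting exercise; the actual BFKL diagonalization replaces $\alphasbar$ by the larger intercept $4\ln 2\,\alphasbar$ quoted in the text.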
There are several physical sources of the corrections, which we can classify as follows:

- running coupling corrections
- corrections from the kinematics
- corrections from the non-singular (in $1/\gluonsplitting$) terms of the DGLAP splitting function

At LL order the BFKL kernel has fixed coupling, and is in fact identical to the kernel of the $N=4$ SYM theory at this level, see for example Ref. [@Balitsky:2009xg]. It is only at the NLL level that the coupling starts to run in the small-$\bjorkenx$ formalism in QCD. Clearly, the running of the coupling is one of the most important corrections to be incorporated into phenomenological calculations. The second class of corrections stems from the kinematics. For example, at LL order the working assumption is that the rapidity difference between two subsequent gluon emissions is large enough that the leading logarithm in energy is picked up. At NLL there is a large correction from the kinematical situation in which the two gluons are not necessarily very distant in rapidity. This leads to corrections from the choice of scale at this order, see Ref. [@Ciafaloni:1998gs]. The third class of corrections stems from the non-singular parts of the DGLAP splitting function. In the double logarithmic limit, the BFKL and DGLAP equations actually coincide, with the $1/\gluonsplitting$ part of the gluon splitting function being the dominant one and common to both equations. At the NLL level, the BFKL equation contains subleading (in $1/\gluonsplitting$) terms of the DGLAP splitting function. All these corrections are numerically very large compared with the LL calculation. In Fig. \[fig:bfklllnll\] we show the intercept of the solution to the BFKL equation at LL and NLL orders. It is evident that the NLL corrections are large, even at very small values of the coupling constant, and can lead to a negative intercept. Also shown are different resummation schemes which stabilize the result.
![The effective intercept of the BFKL solution in the leading logarithmic (yellow dashed) and next-to-leading logarithmic (green dotted-dashed) approximation. Also shown are different resummation schemes (red dotted-dashed, black dashed, and blue solid). Figure reproduced from [@Ciafaloni:2003rd].[]{data-label="fig:bfklllnll"}](omegasbfkl.pdf){width="8cm"} Apart from the NLO corrections described above, there are also other classes of corrections that arise due to the presence of the high gluon density. It is expected from QCD that when the energy is very high, or correspondingly $\gluonsplitting$ is very small, the gluon distribution described by Eq. \eqref{eq:pomeron_LO} becomes so large due to the gluon splitting that the competing mechanism of gluon recombination becomes significant. The high density of gluons leads to the screening of gluons in transverse space, and as a result the growth is tamed by the presence of additional terms in the evolution equation. These additional terms are nonlinear in the density and enter the evolution equation with a negative sign, leading to the phenomenon known as parton saturation [@Gribov:1984tu; @Mueller:1985wy]. The basic equation that incorporates these effects is the Balitsky-Kovchegov (BK) equation, which was independently derived in Ref. [@Balitsky:1995ub] using the operator product expansion for high energy scattering and in Ref. [@Kovchegov:1999yj] from the Mueller dipole formalism [@Mueller:1993rr] at small $\gluonsplitting$. To be precise, the Balitsky formalism gives an infinite hierarchy of coupled equations for the correlators of Wilson lines. In the large-$\Nc$ limit, the first equation decouples from the rest of the hierarchy and can be solved independently. In this approximation it coincides with the single equation derived in Ref. [@Kovchegov:1999yj].
The BK equation is an equation for the dipole scattering amplitude $\dipoleN$, the amplitude for the scattering of a quark-antiquark dipole off a potentially dense target. It can be related to the dipole unintegrated gluon density in momentum space as follows $${\mathcal{F}_{\xtarget}}(\kperp) = \int \frac{\dd[2]\vec\xperp \dd[2]\vec\yperp}{2\pi^2} e^{-i \vec\kperp \cdot (\vec\xperp - \vec\yperp)} \, \bigl(1 - \dipoleN(\vec\xperp, \vec\yperp)\bigr) \, .$$ The gluon branching is then described as the evolution of this amplitude, i.e. the splitting of the dipole into daughter dipoles. The BK equation for the dipole evolution can be cast in the following form $$\pdv{\dipoleN(\vec\xperp, \vec\yperp)}{\evolutionrapidity} = \int\frac{\dd[2] \vec\bperp}{2\pi} \bkkernel\bigl[\dipoleN(\vec\xperp, \vec\bperp) + \dipoleN(\vec\bperp, \vec\yperp) - \dipoleN(\vec\xperp, \vec\yperp) - \dipoleN(\vec\xperp, \vec\bperp)\,\dipoleN(\vec\bperp, \vec\yperp)\bigr] \, . \label{eq:BK}$$ The two vectors $\vec\xperp$ and $\vec\yperp$ denote the positions of the dipole endpoints in the two-dimensional transverse coordinate space. The branching kernel $\bkkernel$ depends on the dipole sizes involved and contains all information about the splitting of the dipoles. In addition, it also depends on the running coupling $\alphas$. It can be demonstrated that the linear part of Eq. \eqref{eq:BK}, when transformed into momentum space, is equivalent to the linear BFKL equation. The additional nonlinear term in Eq. \eqref{eq:BK} is responsible for parton saturation and tames the growth of the gluon density. As is clear from the form of Eq. \eqref{eq:BK}, the nonlinear term will reduce the growth of the dipole amplitude. In fact $\dipoleN = 1$ is a fixed point of this equation, and the solution will saturate to that value after a sufficiently long rapidity ($\evolutionrapidity = \ln 1/\gluonsplitting$) interval.
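The approach to the fixed point $\dipoleN = 1$ can already be seen in a zero-dimensional caricature of Eq. \eqref{eq:BK}, in which the transverse structure is discarded and only the competition between the linear growth term and the quadratic damping term is kept. This is a sketch, not the BK equation itself; the growth rate and step size below are illustrative assumptions:

```python
# Zero-dimensional caricature of the BK equation: dN/dY = omega * (N - N^2).
# The linear term drives exponential growth of a dilute amplitude, while the
# quadratic term enters with a minus sign and tames it, so N = 1 is a fixed point.
omega = 0.5      # assumed growth rate, standing in for the linear (BFKL) intercept
N, dY = 1e-3, 0.01

for _ in range(3000):                  # evolve 30 units of rapidity
    N += dY * omega * (N - N * N)      # forward-Euler step in Y

print(f"N(Y = 30) = {N:.4f}")          # close to the saturated value N = 1
```

For small $N$ the quadratic term is negligible and the amplitude grows like $e^{\omega \evolutionrapidity}$; once $N$ becomes of order one, the nonlinear term takes over and the evolution flattens out at the unitarity limit.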
The BK equation is now known up to NLL order [@Balitsky:2008zza], and at this higher order its form is more complicated than shown in Eq. \eqref{eq:BK} (see the discussion later in this section). At LL order, the branching kernel has the form $$\bkkernel(\vec\xperp, \vec\yperp, \vec\bperp; \alphas) = \frac{\alphas\Nc}{\pi} \frac{(\vec\xperp - \vec\yperp)^2}{(\vec\xperp - \vec\bperp)^2(\vec\bperp - \vec\yperp)^2} \; . \label{eq:BKkernel}$$ Note that the kernel depends only on the differences between the dipole endpoints and not on the absolute coordinate positions (unlike the dipole amplitudes, which depend on both the coordinate differences, i.e. the dipole sizes, and the coordinate sums, i.e. the dipole impact parameter). The BK equation is usually solved under the simplifying assumption that the impact parameter dependence is neglected, i.e. that the dipole amplitude depends only on the dipole size $\vec\rperp = \vec\xperp - \vec\yperp$. The solution for that case is plotted in Fig. \[fig:bksol\], where the dipole amplitude is shown as a function of the dipole size $\rperp$ for fixed values of the rapidity $\evolutionrapidity$. The dipole amplitude is small for small values of the dipole size, reflecting the phenomenon of color transparency, and saturates to unity for large values of the dipole size. With increasing rapidity $\evolutionrapidity$ the amplitude grows, and the point at which it becomes substantial moves to smaller values of the dipole size. We observe that the solution exhibits a front in $\rperp$ which moves towards smaller values of $\rperp$ as the rapidity increases. This can be quantified by introducing the saturation scale $\satscale(\evolutionrapidity)$, the characteristic scale at which the amplitude becomes large and the nonlinear effects become important. The saturation scale can be defined by $$\dipoleN(\rperp = 1/\satscale, \evolutionrapidity) = \kappa \; , \label{eq:satscale}$$ where $\kappa$ is some constant number, for example $1/2$.
![The solution to the nonlinear BK equation as a function of the dipole size $r$ for different values of the rapidity. The different curves (solid blue), from right to left, denote increasing values of the rapidity $\evolutionrapidity = 1.5,\dots,15.6$. The dotted magenta line denotes the initial condition for the evolution equation. The horizontal axis is in arbitrary units.[]{data-label="fig:bksol"}](bk2r_diffy.pdf){width="8cm"} Applying Eq. \eqref{eq:satscale} to the evolved solution results in a rapidity-dependent saturation scale $\satscale(\evolutionrapidity)$. The functional dependence of the saturation scale on the rapidity is approximately exponential, $\satscale \sim \exp(\lambda_s \evolutionrapidity) \sim \gluonsplitting^{-\lambda_s}$. We stress that the growth of the saturation scale with rapidity can be computed from the evolution equation and thus is derived from perturbative QCD. On the other hand, the normalization cannot be extracted from the evolution itself; it depends on the initial conditions, which include the non-perturbative physics, and also on the type of the target, i.e. proton versus nucleus. The saturation scale thus divides two regions of different parton densities: the dilute regime, $\rperp < 1/\satscale$, with scales higher than the saturation scale, and the dense regime, $\rperp > 1/\satscale$, where the nonlinear effects need to be taken into account. This is illustrated in Fig. \[fig:saturation\]. The horizontal axis is related to the momentum scale $Q$, which can be roughly related to the inverse of the transverse coordinate (dipole size, $\rperp \sim 1/Q$); it characterizes the resolution of the process. The vertical axis is given by $\ln 1/\gluonsplitting$, which is more related to the available energy. We see that the saturation scale is denoted by a curve in this $(Q^2, \gluonsplitting)$ plane, and divides the dilute and dense regimes.
In the dilute regime, the linear evolution is applicable; either the BFKL evolution which predicts changes along the $\gluonsplitting$ axis, or the more standard DGLAP evolution which predicts changes along the $Q^2$ axis. In the dense regime, nonlinear effects in the density need to be taken into account and nonlinear evolution equations are required. ![Schematic illustration of the different types of evolution in the $(\gluonsplitting, Q)$ plane. The diagonal line is the saturation scale which divides the dilute and dense partonic regime. The plot is taken from Ref. [@AbelleiraFernandez:2012cc].[]{data-label="fig:saturation"}](saturation.pdf){width="8cm"} We stress that the transition in this diagram is not an abrupt one, but rather is smooth, with saturation being defined up to a normalization factor. It is sometimes also useful to introduce the geometrical scaling region, which is the region where the amplitude depends only on the ratio $\satscale \rperp$, rather than $\rperp, \evolutionrapidity$ separately. This regime encloses the deeply saturated regime and also part of the transition regime. Geometric scaling is a property of the solution to the nonlinear equation in the leading logarithmic approximation in $\ln 1/\gluonsplitting$. However, it may be violated if higher order corrections are included. In particular, it was shown from the analysis of the DGLAP evolution with running coupling above the saturation scale [@Kwiecinski:2002ep] that the geometrical scaling is indeed violated. The violation, however, can be factored out and its size is controlled by the parameter $\alphasbar(\satscale^2)\ln Q^2/\satscale^2(\gluonsplitting)$. Therefore, in the region where $\gluonsplitting \ll 1$ and $\ln Q^2/\satscale^2 \ll \ln \satscale^2/\LambdaQCD^2$ the geometrical scaling is preserved. Similar conclusions were also reached in Ref. [@Iancu:2002tr], where this observation was referred to as an extended geometrical scaling window. 
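Both the definition in Eq. \eqref{eq:satscale} and the geometric scaling property are straightforward to illustrate numerically. The sketch below uses an assumed GBW-type model amplitude as a stand-in for an actual BK solution (the model form, $Q_0$, and $\lambda_s$ are all illustrative assumptions): it extracts $1/\satscale$ by bisection, verifies the exponential growth of the extracted scale with rapidity, and checks that the amplitude depends only on the combination $\rperp \satscale$:

```python
import math

# Illustrative GBW-type model amplitude, a stand-in for an actual BK solution:
#   N(r, Y) = 1 - exp(-r^2 Qs^2(Y) / 4),  with  Qs(Y) = Q0 exp(lambda_s Y / 2).
# Q0 and lambda_s are assumed values in arbitrary units.
Q0, lam_s = 1.0, 0.3

def Qs(Y):
    return Q0 * math.exp(0.5 * lam_s * Y)

def N(r, Y):
    return 1.0 - math.exp(-0.25 * (r * Qs(Y)) ** 2)

def r_sat(Y, kappa=0.5):
    """Solve N(r, Y) = kappa for r (the definition of the saturation scale)."""
    lo, hi = 1e-9, 1e3
    for _ in range(200):               # bisection in log r; N is monotonic in r
        mid = math.sqrt(lo * hi)
        lo, hi = (mid, hi) if N(mid, Y) < kappa else (lo, mid)
    return 0.5 * (lo + hi)

# The extracted scale 1/r_sat grows exponentially with rapidity ...
for Y in (0.0, 5.0, 10.0):
    print(Y, 1.0 / r_sat(Y))

# ... and the amplitude depends only on the combination r * Qs(Y):
tau = 1.3
print(N(tau / Qs(0.0), 0.0), N(tau / Qs(8.0), 8.0))   # geometric scaling
```

In this model the scaling is exact by construction; for a genuine BK solution it emerges dynamically in the leading logarithmic approximation, as discussed above.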
If the impact parameter is not neglected in the evolution equation, the complete solution becomes rather complicated, as it starts to depend on five variables: rapidity, dipole size, impact parameter and two angles. At first it would seem that, since the kernel \eqref{eq:BKkernel} does not depend on the impact parameter, this variable would not play a major role in the evolution. However, the impact parameter comes into the evolution because the dipole amplitudes $\dipoleN$ depend on it. As a matter of fact, the dipole size and the impact parameter are closely interconnected. As shown in Refs. [@GolecBiernat:2003ym] and [@Berger:2010sh], once the initial condition includes a profile in the impact parameter dependence, the subsequent evolution typically changes the initial form quite rapidly. For example, when the initial profile is exponential in the impact parameter, the evolution will modify it into power-like behavior, since the kernel is scale invariant and therefore the interaction is long-range. This is unphysical, because QCD exhibits confinement, and therefore there is a mass gap in the theory which results in the finite range of the strong interaction. Therefore, as it stands at the moment, the BK equation is incomplete, as it does not include this vital information, and the kernel has to be regulated by a mass parameter. This mass parameter needs to be included essentially by hand. In other words, the perturbative BK evolution does indeed preserve the unitarity condition of the dipole amplitude through perturbative saturation, but the resulting growth in energy of the cross section integrated over the impact parameter will still take the form of a power law [@Kovner:2001bh; @Kovner:2002yt]. This violates the Froissart bound, which was derived under the assumption of a finite range of the interaction [@Froissart:1961ux].
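The scale invariance invoked above can be verified directly on the LL kernel of Eq. \eqref{eq:BKkernel}: under a dilation of all transverse coordinates the kernel scales like $1/\lambda^2$, which is exactly compensated by the measure $\dd[2]\vec\bperp$. The test points and dilation factor in this sketch are arbitrary:

```python
def K(x, y, b):
    """LL dipole branching kernel of Eq. (BKkernel), without alpha_s Nc / pi."""
    d2 = lambda u, v: (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2
    return d2(x, y) / (d2(x, b) * d2(b, y))

# arbitrary test points in the transverse plane and an arbitrary dilation factor
x, y, b = (0.1, 0.4), (1.2, -0.3), (0.5, 0.8)
lam = 3.7
scale = lambda u: (lam * u[0], lam * u[1])

# K(lam x, lam y, lam b) = K(x, y, b) / lam^2, while d^2(lam b) = lam^2 d^2 b,
# so the combination K d^2 b contains no intrinsic length scale
print(K(scale(x), scale(y), scale(b)) * lam ** 2, K(x, y, b))   # equal
```

It is exactly this absence of an intrinsic length scale that makes the perturbative dipole interaction long-range and motivates the mass regulator discussed above.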
In the following we will not discuss the impact parameter dependence, since the observables that we study are sufficiently inclusive and not sensitive to the impact parameter profile. Nevertheless, the proper modeling of the impact parameter dependence is essential for many phenomenological applications, in particular for more exclusive reactions, though not only restricted to those (see Refs. [@Berger:2011ew; @Berger:2012wx] for phenomenological studies using the full impact parameter dependence). Recently, there has been a lot of research activity concerning the nonlinear evolution at NLL order and beyond. The original calculation of the nonlinear evolution at NLL was performed in Ref. [@Balitsky:2008zza], where the linear limit of this calculation coincided with the linear BFKL evolution at NLL order derived earlier [@Fadin:1998py; @Ciafaloni:1998gs]. The numerical analysis of this nonlinear evolution equation at NLL was first performed in Ref. [@Lappi:2015fma], where it was demonstrated that the NLL corrections are large and lead to instability of the solution. Following this work, a resummation procedure based on collinear improvements was proposed [@Iancu:2015joa; @Iancu:2015vea] to stabilize the solution; it is essentially analogous to the resummation proposed earlier in Refs. [@Andersson:1995ju; @Kwiecinski:1996td; @Kwiecinski:1997ee; @Salam:1998tj; @Ciafaloni:2003rd; @Ciafaloni:2003ek; @Vera:2005jt] for the linear case. The solution with the resummation was shown to be numerically stable [@Lappi:2016fmu; @Iancu:2015joa; @Iancu:2015vea]. Forward inclusive hadron production at LO {#sec:forward inclusive LO} ========================================= In the previous section, we introduced the concept of parton saturation and how it can be described through the nonlinear evolution equations derived in QCD.
The major question is whether this phenomenon is present in hadron collisions at currently attained collider energies, how to observe it in experimental data, and how best to quantify it. There have been many phenomenological applications of the small-$\bjorkenx$ formalism which include parton saturation effects. Among them are calculations of the inclusive structure function at HERA [@GolecBiernat:1998js; @GolecBiernat:1999qd; @Stasto:2000er; @Gotsman:2002yy; @Albacete:2009fh], diffraction and vector meson production [@Munier:2001nr; @Gotsman:2003br; @Rogers:2003vi; @Kowalski:2006hc; @Berger:2012wx; @Lappi:2013am], and multiplicities at RHIC and the LHC in proton-proton and heavy-nucleus collisions [@Albacete:2007sm], to name just a few. In this review we shall focus on the inclusive forward production of single hadrons in proton-nucleus collisions. In this section we describe the formalism for the calculation of this process in the high-energy limit and its extension beyond the lowest order of accuracy. ![Kinematics of the leading order process for the forward inclusive hadron production. In this illustration the quark from the incoming projectile interacts with the dense gluon field of the nucleus and emerges with additional transverse momentum. Finally, it hadronizes into the hadron which is detected experimentally. A similar process exists for an initial-state gluon.[]{data-label="fig:lo-kinematics"}](lo-kinematics) We start the small-$\bjorkenx$ description of forward production in $\pA$ collisions by considering the scattering of a quark on a nucleus, which is illustrated in Fig. \[fig:lo-kinematics\].
Multiple scattering of the quark off the gluons in the field of the nucleus can be encompassed in the Wilson line $$\Uwilson(\xperp) = \pathordered \exp\biggl(i \strongcoupling \int_{-\infty}^{+\infty} \dd x^+ \, T^a A_a^{-}(x^+,\xperp)\biggr) \; , \label{eq:uwilson}$$ where the integral is over the path of the quark, which travels along the $x^+$ direction, and $A_a^{-}(x^+, \xperp)$ is the gluon field of the nucleus, the solution of the classical Yang-Mills equations. Here, $T^a$ is a generator of $SU(3)$ in the fundamental representation, and $\strongcoupling$ is the strong coupling. We shall be working in the high energy or small $\bjorkenx$ approximation, which assumes a large center-of-mass energy of the quark-nucleus system. We also neglect the recoil of the nucleus. The lowest order differential cross section for inclusive quark production, $\pA\to qX$, is then $$\frac{\dd[3]\quarkxsec}{\dd\rapidity \dd[2]\vec\kperp} = \sum_f \xprojectile q_f(\xprojectile) \int \frac{\dd[2]\vec\xperp \dd[2]\vec\yperp}{(2\pi)^2} \; e^{-i\vec\kperp \cdot (\vec\xperp - \vec\yperp)} \frac{1}{\Nc} \expval{\trace \Uwilson(\vec\xperp) \Uwilson^{\dagger}(\vec\yperp)}_{\evolutionrapidity} \; , \label{eq:quarklo}$$ where $\xprojectile = \frac{\kperp}{\sqs}e^{\quarkrapidity}$ is the fraction of the longitudinal momentum of the proton carried by the incoming quark and $\xtarget = \frac{\kperp}{\sqs} e^{-\quarkrapidity}$ is the fraction of the longitudinal momentum of the nucleus carried by the gluon. The final quark is produced with transverse momentum $\kperp$ and at rapidity $\quarkrapidity$. The color average $\expval{\cdots}_{\evolutionrapidity}$ is understood to be taken over the color sources of the nucleus, as is usually done in the framework of the Color Glass Condensate. The rapidity $\evolutionrapidity \simeq \ln(1/\xtarget)$ is the difference between the rapidity of the gluon and the rapidity of the nucleus.
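In simple saturation models the color-averaged correlator entering the cross section above has a closed form. For instance, in the Golec-Biernat-Wusthoff (GBW) model used later in this review, the dipole correlator is $\exp(-r^2\satscale^2/4)$, with $r$ the transverse size of the dipole, and its two-dimensional Fourier transform per unit transverse area is $e^{-\kperp^2/\satscale^2}/(\pi\satscale^2)$. A minimal numerical check of this transform, with illustrative parameter values and the standard library only:

```python
import math

# Check that the 2D Fourier transform of the GBW dipole
# S(r) = exp(-r^2 Qs^2 / 4) reproduces the closed form
# F(k) = exp(-k^2/Qs^2) / (pi Qs^2) (per unit transverse area).
# Qs is the saturation momentum; the value Qs = 1 is illustrative.

def gbw_dipole(r, Qs=1.0):
    return math.exp(-r * r * Qs * Qs / 4.0)

def dipole_ugd(k, Qs=1.0, rmax=8.0, n=256):
    # F(k) = (1/(2 pi)^2) \int d^2r e^{-i k.r} S(r), evaluated by the
    # midpoint rule on a Cartesian grid; the sine part is odd in x and
    # integrates to zero, so only cos(k*x) is kept.
    h = 2.0 * rmax / n
    total = 0.0
    for i in range(n):
        x = -rmax + (i + 0.5) * h
        for j in range(n):
            y = -rmax + (j + 0.5) * h
            total += math.cos(k * x) * gbw_dipole(math.hypot(x, y), Qs)
    return total * h * h / (2.0 * math.pi) ** 2

for k in (0.0, 1.0, 2.0):
    # numeric transform vs the analytic result for Qs = 1
    print(k, dipole_ugd(k), math.exp(-k * k) / math.pi)
```

The Gaussian in dipole size maps onto a Gaussian in transverse momentum of width $\satscale$: the simplest illustration of how the saturation scale sets the typical transverse momentum the quark acquires from the nucleus.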
Performing the CGC color average over the correlator yields the dipole gluon distribution, $${S^{(2)}_{\xtarget}}(\vec\xperp, \vec\yperp) \defn \frac{1}{\Nc} \expval{\trace \Uwilson(\vec\xperp) \Uwilson^{\dagger}(\vec\yperp)}_{\evolutionrapidity} \, .$$ Also, ${\ensuremath{\xprojectile q_f\left(\xprojectile\right)}}$ is the quark distribution in the incoming proton, where the label $f$ denotes the flavor of the incoming quark. For a complete description of the hadronic cross section it is necessary to include the gluon-initiated channel. To this end, one has to define the correlator of Wilson lines in the adjoint representation, $${\tilde{S}^{(2)}_{\xtarget}}(\vec\xperp, \vec\yperp) \defn \frac{1}{\Nc^2 - 1} \expval{\trace \Wwilson(\vec\xperp)\Wwilson^{\dagger}(\vec\yperp)}_{\evolutionrapidity} \; . \label{eq:wwilson}$$ One can introduce the Fourier transforms of these spatial correlators into momentum space, $${\mathcal{F}_{\xtarget}}(\kperp) \defn \int \frac{\dd[2]\vec\xperp \dd[2]\vec\yperp}{(2\pi)^2} \, e^{-i \vec\kperp \cdot (\vec\xperp - \vec\yperp)} {S^{(2)}_{\xtarget}}(\vec\xperp, \vec\yperp) \; , \label{eq:fk}$$ and $${\tilde{\mathcal{F}}_{\xtarget}}(\kperp) \defn \int \frac{\dd[2]\vec\xperp \dd[2]\vec\yperp}{(2\pi)^2} \, e^{-i \vec\kperp \cdot (\vec\xperp - \vec\yperp) } {\tilde{S}^{(2)}_{\xtarget}}(\vec\xperp, \vec\yperp) \; . \label{eq:fktilde}$$ In the large-$\Nc$ limit, one can rewrite the unintegrated gluon distribution in the adjoint representation in terms of the dipole distributions ${S^{(2)}_{\xtarget}}$, built from Wilson lines in the fundamental representation (thanks to the identity relating Wilson lines in the two representations).
One thus arrives at the following simplified expression for the unintegrated gluon distribution in the adjoint representation, $${\tilde{\mathcal{F}}_{\xtarget}}(\kperp) = \int \frac{\dd[2]\vec\xperp \dd[2]\vec\yperp}{(2\pi)^2} \, e^{-i \vec\kperp \cdot (\vec\xperp - \vec\yperp) } {S^{(2)}_{\xtarget}}(\vec\xperp, \vec\yperp) {S^{(2)}_{\xtarget}}(\vec\yperp, \vec\xperp) \; ,$$ which is expressed only through the dipole amplitudes in the fundamental representation. In order to write the cross section for the production of a hadron one needs to convolute the quark and gluon production cross sections with the appropriate fragmentation functions. The final expression for the cross section for the production of a hadron at forward rapidity, at lowest order in the saturation formalism, reads $$\frac{\dd[3]\sigmainclusive}{\dd\hadronrapidity \dd[2]\vec\pperp} = \int_{\tau}^1 \frac{\dd\fragfrac}{\fragfrac^2} \biggl[\sum_f {\ensuremath{\xprojectile q_f\left(\xprojectile\right)}} {\mathcal{F}_{\xtarget}}(\kperp) {\ensuremath{{\ensuremath{D_{h/f}}}\left(\fragfrac\right)}} + {\ensuremath{\xprojectile g\left(\xprojectile\right)}} {\tilde{\mathcal{F}}_{\xtarget}}(\kperp) {\ensuremath{{\ensuremath{D_{h/g}}}\left(\fragfrac\right)}} \biggr] \, . \label{eq:sigmapt}$$ The other kinematical variables (at the parton level) are expressed in terms of the hadron transverse momentum $\pperp$ as $\kperp = \pperp/\fragfrac$, $\xprojectile = \frac{\pperp}{\fragfrac\sqs}e^{\hadronrapidity}$, $\tau = \fragfrac\xprojectile$, and $\xtarget = \frac{\pperp}{\fragfrac\sqs}e^{-\hadronrapidity}$. In the above formula ${\ensuremath{D_{h/f}}}$ and ${\ensuremath{D_{h/g}}}$ are the fragmentation functions of the quark and the gluon into the hadron, respectively. There are several important points about this formula.
The transverse momentum dependence of the produced hadron is generated exclusively by the transverse momentum dependence of the unintegrated gluon distributions in the nucleus, ${\mathcal{F}_{\xtarget}}(\kperp)$ and ${\tilde{\mathcal{F}}_{\xtarget}}(\kperp)$. The lowest order process here is a $2\to 1$ process. This is in contrast with the collinear approach, where the hard scattering process is $2\to 2$ (at the lowest order) and the transverse momentum dependence is generated through the hard scattering only. The formalism includes two types of distributions: the unintegrated gluon distribution on the nucleus side and the collinear parton distribution on the proton side. As such it is highly asymmetric and only applicable at very forward rapidities. Also, formally at this order neither the parton distributions nor the fragmentation functions possess any scale dependence. For phenomenological applications, however, the scale-dependent parton distribution ${\ensuremath{\xprojectile q\left(\xprojectile, \facscale^2\right)}}$ and fragmentation function ${\ensuremath{{\ensuremath{D_{h/q}}}\left(\fragfrac, \facscale^2\right)}}$ have commonly been used. Finally, at this lowest order, the correlators ${S^{(2)}_{\xtarget}}$ are rapidity independent. The rapidity dependence can be incorporated through the BK evolution equation, which formally enters at higher order, as we shall see in the next section. Formula  has been used extensively for phenomenology, in particular for the description of the nuclear modification ratios $$R_{p(d)A}(\pperp, \hadronrapidity) = \frac{\frac{\dd[2]\sigmainclusive}{\dd\hadronrapidity\dd[2]\pperp}}{\Ncoll\frac{\dd[2]\sigmainclusivepp}{\dd\hadronrapidity\dd[2]\pperp}} \; , \label{eq:nuclear_ratio}$$ where $\Ncoll$ is the number of binary collisions.
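The qualitative behavior of the nuclear modification ratio defined above can be sketched numerically. Everything below is an illustrative assumption rather than a fit: a GBW-like Gaussian for the unintegrated gluon distributions, schematic shapes for the collinear quark distribution and fragmentation function, and the scaling $\satscale^2_A = \Ncoll\, \satscale^2_p$ with $\Ncoll = A^{1/3}$.

```python
import math

# Toy sketch of the LO convolution and of the nuclear ratio R_pA.
# All inputs are illustrative stand-ins: a Gaussian (GBW-like)
# unintegrated gluon distribution, schematic power-law shapes for the
# collinear PDF and fragmentation function, and Qs_A^2 = Ncoll * Qs_p^2.

def ugd(k, Qs2):
    # GBW-like dipole gluon distribution, normalized per unit area
    return math.exp(-k * k / Qs2) / (math.pi * Qs2)

def pdf(x):
    # schematic valence-like shape for x*q(x); hypothetical, not a fit
    return x ** 0.5 * (1.0 - x) ** 3 if 0.0 < x < 1.0 else 0.0

def frag(z):
    # schematic fragmentation function D(z); hypothetical, not a fit
    return (1.0 - z) ** 2 / z if 0.0 < z < 1.0 else 0.0

def lo_spectrum(pT, y, sqrts, Qs2, n=200):
    # dN/dy d^2pT ~ int_tau^1 dz/z^2  x q(x)  F(pT/z)  D(z)
    tau = pT / sqrts * math.exp(y)
    if tau >= 1.0:
        return 0.0
    h = (1.0 - tau) / n
    total = 0.0
    for i in range(n):
        z = tau + (i + 0.5) * h
        total += h / (z * z) * pdf(tau / z) * ugd(pT / z, Qs2) * frag(z)
    return total

A, Qs2_p = 197, 0.5              # gold-like A; hypothetical proton Qs^2
ncoll = A ** (1.0 / 3.0)
Qs2_A = ncoll * Qs2_p            # enhanced saturation scale in the nucleus
for pT in (0.5, 1.0, 2.0, 4.0):
    num = lo_spectrum(pT, 3.0, 200.0, Qs2_A)
    den = ncoll * lo_spectrum(pT, 3.0, 200.0, Qs2_p)
    print(pT, num / den)         # toy R_pA at forward rapidity y = 3
```

Even this crude sketch reproduces the expected pattern: $R_{pA}$ is suppressed below unity at low $\pperp$, where the enhanced nuclear saturation scale depletes the gluon distribution, and rises above unity at larger $\pperp$.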
Forward inclusive production at NLO {#sec:forward inclusive NLO} =================================== Moving beyond leading order, there are two main sources of subleading corrections to the $\pA\to hX$ cross section. The first consists of corrections to the BK evolution discussed previously in section \[sec:evolution\]. The next-to-leading corrections to the BK evolution were computed in Ref. [@Balitsky:2008zza], and more recently the next-to-leading order form of the more general JIMWLK equation was obtained in Ref. [@Altinoluk:2014eka]. The dilute limit of the BK equation is the famous BFKL equation, and both calculations, i.e. NLL BK and NLL JIMWLK, reduce to the NLL BFKL equation [@Fadin:1998py; @Ciafaloni:1998gs] in the regime of low density. The NLL BK equation is computationally complex, and it has only recently been solved numerically in Refs. [@Lappi:2015fma; @Lappi:2016fmu]. Here we shall focus our attention mostly on the next-to-leading order (NLO) terms in the cross section itself, which result from diagrams in which an unobserved quark or gluon is emitted. The general kinematics of the process is shown in Fig. \[fig:nlo-kinematics\]. In this review we will describe the contributions resulting from the one-loop diagrams. ![Kinematics of the next-to-leading order process. The initial quark from the incoming projectile splits into a quark and a gluon before hadronizing into the final state particle. The interaction with the gluon field of the nucleus can occur before the splitting, as shown, or after it.[]{data-label="fig:nlo-kinematics"}](nlo-kinematics) The next-to-leading order calculation of single inclusive hadron production requires the evaluation of several contributions. There are four channels to consider.
These include two “diagonal” channels in which the projectile parton is the same species (quark or gluon) as the one that fragments into a hadron, the quark-quark (qq) channel and the gluon-gluon (gg) channel, as well as two non-diagonal channels, quark-gluon (qg) and gluon-quark (gq), in which the projectile parton and the progenitor of the hadron are different species. Fig. \[fig:nlo-kinematics\] shows one of the real diagrams for the quark-quark channel with the kinematics labeled. Several of the real diagrams for the qq channel are shown in Fig. \[fig:qtoqreal\]. The quark is the observed particle, and the emitted gluon has to be integrated over. In addition to the real diagrams, one needs to include the virtual contributions, as shown in Fig. \[fig:qtoqvirtual\]. ![Example of real diagrams for the next-to-leading order quark production $qA\rightarrow qX$. The elliptic blobs denote the interaction of the gluons from the nucleus with the $qg$ system of the initial state. The lower (round) blobs and the vertical gluons symbolize multiple interactions of these projectile partons with the target nucleus.[]{data-label="fig:qtoqreal"}](real1.pdf "fig:"){width="4.5cm"} ![Example of real diagrams for the next-to-leading order quark production $qA\rightarrow qX$. The elliptic blobs denote the interaction of the gluons from the nucleus with the $qg$ system of the initial state. The lower (round) blobs and the vertical gluons symbolize multiple interactions of these projectile partons with the target nucleus.[]{data-label="fig:qtoqreal"}](real2.pdf "fig:"){width="4.5cm"} ![Example of real diagrams for the next-to-leading order quark production $qA\rightarrow qX$. The elliptic blobs denote the interaction of the gluons from the nucleus with the $qg$ system of the initial state. 
The lower (round) blobs and the vertical gluons symbolize multiple interactions of these projectile partons with the target nucleus.[]{data-label="fig:qtoqreal"}](real3.pdf "fig:"){width="4.5cm"} ![Example of real diagrams for the next-to-leading order quark production $qA\rightarrow qX$. The elliptic blobs denote the interaction of the gluons from the nucleus with the $qg$ system of the initial state. The lower (round) blobs and the vertical gluons symbolize multiple interactions of these projectile partons with the target nucleus.[]{data-label="fig:qtoqreal"}](real4.pdf "fig:"){width="4.5cm"} ![Example of virtual diagrams for the next-to-leading order quark production $qA\rightarrow qX$. The blobs and the vertical gluons symbolize multiple interactions with the target nucleus.[]{data-label="fig:qtoqvirtual"}](virtual1.pdf "fig:"){width="4.5cm"} ![Example of virtual diagrams for the next-to-leading order quark production $qA\rightarrow qX$. The blobs and the vertical gluons symbolize multiple interactions with the target nucleus.[]{data-label="fig:qtoqvirtual"}](virtual2.pdf "fig:"){width="4.5cm"} The real and virtual terms need to be combined to produce the full cross section. These expressions contain different types of divergences, which need to be appropriately subtracted in order to yield a finite result. There are rapidity divergences, as well as collinear divergences in the initial and final state, which are absorbed into the corresponding distributions. Specifically, the rapidity divergence is absorbed into the unintegrated gluon distribution, while the initial and final state collinear divergences are absorbed into the integrated parton distributions and fragmentation functions, respectively. The remaining finite contributions are collected in the hard factors. We shall discuss the subtractions in more detail in Sec. \[sec:divergences\].
As mentioned above, in addition to the diagonal channels with both real and virtual contributions, there are also non-diagonal channels, for which only real contributions exist. Correspondingly, the contribution to the single inclusive cross section from the non-diagonal quark-to-gluon channel is obtained by integrating over the final state quark, instead of the gluon, in the diagrams of Fig. \[fig:qtoqreal\]. When all the NLO contributions are included, one arrives at an expression considerably more complicated than the leading order formula; it has the form $$\begin{gathered} \frac{\dd[2]\sigmainclusive}{\dd\hadronrapidity\dd[2]\pperp} = \int_{\tau}^1 \frac{\dd\fragfrac}{\fragfrac^2} \int_{\tau/\fragfrac}^1 \dd\emitfrac \Biggl[ \sum_f {\ensuremath{\xprojectile q_f\left(\xprojectile, \facscale^2\right)}} S_{qq} {\ensuremath{{\ensuremath{D_{h/f}}}\left(\fragfrac, \facscale^2\right)}} \\ + \sum_f {\ensuremath{\xprojectile q_f\left(\xprojectile, \facscale^2\right)}} S_{qg} {\ensuremath{{\ensuremath{D_{h/g}}}\left(\fragfrac, \facscale^2\right)}} + \sum_f {\ensuremath{\xprojectile g\left(\xprojectile, \facscale^2\right)}} S_{gq} {\ensuremath{{\ensuremath{D_{h/f}}}\left(\fragfrac, \facscale^2\right)}} \\ + {\ensuremath{\xprojectile g\left(\xprojectile, \facscale^2\right)}} S_{gg} {\ensuremath{{\ensuremath{D_{h/g}}}\left(\fragfrac, \facscale^2\right)}} \Biggr] \; . \label{eq:sigmaptnlo}\end{gathered}$$ There are two key differences, visible already in this (still simplified) expression, compared to the leading order result . First, the unobserved particle, which is integrated over, introduces an additional kinematic degree of freedom. It is parametrized by $\emitfrac$, the fraction of the plus-component momentum retained by the final-state particle which fragments into the hadron. For the process shown in Fig. \[fig:nlo-kinematics\], $\emitfrac = \frac{k^+}{k^+ + q^+} = \frac{k^+}{\xprojectile p_p^+}$.
Also, the NLO equation includes “off-diagonal” channels, the terms in the second line, where the initial-state particle is a quark and the particle that fragments is a gluon, or vice versa. These off-diagonal channels do not exist at leading order, because without the emission of the undetected particle the initial-state and final-state particles are necessarily of the same species. The full complexity of the one-loop corrections lies in the expressions $S_{ij}$, which incorporate both the Wilson-line correlators representing the interaction with the target and the perturbative hard factors representing the emission of the unobserved particle. The one-loop contributions have been derived and investigated only in the past few years. In 2011, building on the leading order result [@Dumitru:2005gt; @Albacete:2010bs], Altinoluk and Kovner [@Altinoluk:2011qy] began the investigation of the NLO cross section by incorporating the “inelastic terms” which result from projectile partons acquiring high transverse momentum. They found the following result for the multiplicity, translated from the notation of Ref.
[@Altinoluk:2011qy]: $$\begin{gathered} \frac{\dd[3] N_h}{\dd\hadronrapidity\dd[2]\vec\pperp} = \frac{1}{(2\pi)^2} \int_{x_F}^1 \frac{\dd\fragfrac}{\fragfrac^2} \biggl[{\ensuremath{\xprojectile g\left(\xprojectile, \facscale^2\right)}} {\tilde{\mathcal{F}}_{\xtarget}}(\kperp) {\ensuremath{{\ensuremath{D_{h/g}}}\left(\fragfrac, \facscale^2\right)}} \\ \shoveright{+ \sum_f {\ensuremath{\xprojectile q_f\left(\xprojectile, \facscale^2\right)}} {\mathcal{F}_{\xtarget}}(\kperp) {\ensuremath{{\ensuremath{D_{h/q}}}\left(\fragfrac, \facscale^2\right)}}\biggr]\qquad} \\ + \int_{x_F}^1 \frac{\dd\fragfrac}{\fragfrac^2} \frac{\alphas}{(2\pi)^2} \frac{\fragfrac^4}{\pperp^2} \int \frac{\dd[2]\vec\kperp}{(2\pi)^2}\kperp^2 {\tilde{\mathcal{F}}_{\xtarget}}(\kperp) \xprojectile \int_{\xprojectile}^1 \frac{\dd\emitfrac}{\emitfrac}\sum_j w_{i/j}(\emitfrac) P_{ij}(\emitfrac) f_{j}\biggl(\frac{\xprojectile}{\emitfrac}, \facscale^2\biggr) {\ensuremath{{\ensuremath{D_{h/i}}}\left(\fragfrac, \facscale^2\right)}}\; .\end{gathered}$$ Here $P_{ij}$ are splitting functions and $w_{i/j}$ are the inelastic weight functions defined in Ref. [@Altinoluk:2011qy]. The first two lines in the above equation are the elastic LO terms, the same as in Eq. . The inelastic terms are in the third line. These terms were derived under the assumption that the projectile partons enter with low transverse momentum and acquire large transverse momentum from hard collisions with gluons in the target nucleus. The elastic terms are accounted for by the leading order formula , and the inelastic terms represent the simplest NLO contribution. The first numerical calculation to include these results was presented in Ref. [@Albacete:2012xq]. Already at this level the inelastic terms displayed several interesting features: they were found to be negative and to have a steeper dependence on transverse momentum than the elastic terms.
In fact, the inelastic contributions completely overwhelm the elastic contributions above some cutoff momentum, which depends on the model chosen for the unintegrated gluon distribution. This could be expected because the inelastic terms are roughly proportional to $\ln(\pperp/\satscale)$ while the elastic terms are roughly proportional to $-\ln(\pperp/\LambdaQCD)$ [@Altinoluk:2011qy; @Albacete:2012xq], making the ratio $$r = \frac{\text{elastic}+\text{inelastic}}{\text{elastic}} \sim \frac{\ln(\satscale/\LambdaQCD)}{\ln(\pperp/\LambdaQCD)} \; .$$ This form of the expression helps justify the observation that $r$ drops with increasing hadron transverse momentum $\pperp$. Later on, the complete corrections to the cross section up to one-loop order were derived in Ref. [@Chirilli:2012jd]. In this calculation, the expressions $S_{ij}$ correspond to a total of eleven terms, each a convolution of a hard factor and a multipole gluon distribution (a correlator of Wilson lines): $$\begin{aligned} S_{qq} &= \int \frac{\dd[2] \vec\xperp\dd[2] \vec\yperp}{(2\pi)^2}{S^{(2)}_{\xtarget}}(\vec\xperp, \vec\yperp) \Bigl[{\mathcal{H}_{2qq}^{(0)}} + \frac{\alphas}{2\pi}{\mathcal{H}_{2qq}^{(1)}}\Bigr] \notag\\ &+ \int \frac{\dd[2]\vec\xperp\dd[2]\vec\yperp\dd[2]\vec\bperp}{(2\pi)^4}{S^{(4)}_{\xtarget}}(\vec\xperp, \vec\bperp, \vec\yperp)\frac{\alphas}{2\pi}{\mathcal{H}_{4qq}^{(1)}} \\ S_{qg} &= \frac{\alphas}{2\pi}\int \frac{\dd[2]\vec\xperp\dd[2]\vec\yperp}{(2\pi)^2} {S^{(2)}_{\xtarget}}(\vec\xperp, \vec\yperp)\Bigl[{\mathcal{H}_{2qg}^{(1,1)}} + {S^{(2)}_{\xtarget}}(\vec\yperp, \vec\xperp){\mathcal{H}_{2qg}^{(1,2)}}\Bigr] \notag\\ &+ \frac{\alphas}{2\pi}\int \frac{\dd[2]\vec\xperp\dd[2]\vec\yperp\dd[2]\vec\bperp}{(2\pi)^4} {S^{(4)}_{\xtarget}}(\vec\xperp, \vec\bperp, \vec\yperp){\mathcal{H}_{4qg}^{(1)}} \\ S_{gq} &= \frac{\alphas}{2\pi}\int \frac{\dd[2]\vec\xperp\dd[2]\vec\yperp}{(2\pi)^2} {S^{(2)}_{\xtarget}}(\vec\xperp, \vec\yperp)\Bigl[{\mathcal{H}_{2gq}^{(1,1)}} +
{S^{(2)}_{\xtarget}}(\vec\yperp, \vec\xperp){\mathcal{H}_{2gq}^{(1,2)}}\Bigr] \notag\\ &+ \frac{\alphas}{2\pi}\int \frac{\dd[2]\vec\xperp\dd[2]\vec\yperp\dd[2]\vec\bperp}{(2\pi)^4} {S^{(4)}_{\xtarget}}(\vec\xperp, \vec\bperp, \vec\yperp){\mathcal{H}_{4gq}^{(1)}} \\ S_{gg} &= \int \frac{\dd[2] \vec\xperp\dd[2] \vec\yperp}{(2\pi)^2}{S^{(2)}_{\xtarget}}(\vec\xperp, \vec\yperp) {S^{(2)}_{\xtarget}}(\vec\yperp, \vec\xperp) \Bigl[{\mathcal{H}_{2gg}^{(0)}} + \frac{\alphas}{2\pi}{\mathcal{H}_{2gg}^{(1)}}\Bigr] \notag\\ &+ \int \frac{\dd[2]\vec\xperp\dd[2]\vec\yperp\dd[2]\vec\bperp}{(2\pi)^4}{S^{(2)}_{\xtarget}}(\vec\xperp, \vec\bperp){S^{(2)}_{\xtarget}}(\vec\bperp, \vec\yperp)\frac{\alphas}{2\pi}{\mathcal{H}_{2q\bar q}^{(1)}} \notag\\ &+ \int \frac{\dd[2]\vec\xperp\dd[2]\vec\yperp\dd[2]\vec\bperp}{(2\pi)^4}{S^{(2)}_{\xtarget}}(\vec\xperp, \vec\bperp){S^{(2)}_{\xtarget}}(\vec\bperp, \vec\yperp){S^{(2)}_{\xtarget}}(\vec\yperp, \vec\xperp)\frac{\alphas}{2\pi}{\mathcal{H}_{6gg}^{(1)}}\end{aligned}$$ \[eq:allhardfactors\] In the above formulae there appears another correlator, which is defined as $${S^{(4)}_{\xtarget}}(\vec\xperp, \vec\bperp, \vec\yperp) = \frac{1}{\Nc^2} \expval{\trace[\Uwilson(\vec\xperp)\Uwilson^{\dagger}(\vec\bperp)]\trace[\Uwilson(\vec\bperp)\Uwilson^{\dagger}(\vec\yperp)]}_{\evolutionrapidity} \; .$$ The leading-order hard factors are proportional to a delta function, ${\mathcal{H}_{2qq}^{(0)}} = {\mathcal{H}_{2gg}^{(0)}} = e^{-i\vec\kperp\cdot\vec\rperp}\delta(1 - \emitfrac)$, as required to reproduce equation , but the remaining hard factors are considerably more complicated. In this review, we do not reproduce the full definitions of the hard factors, instead referring interested readers to the original paper [@Chirilli:2012jd] for the expressions. We will, however, highlight some of their important features as a guide to the complexity of the next-to-leading order calculation. It is important to note that the expressions  together with Eqs.
exhibit a factorized form with the appropriate divergences factored out into the corresponding distribution functions. This is by no means a trivial feature, as the divergences which arise in this calculation are of different origins. We shall discuss these divergences and their subtractions in some detail below. Divergences {#sec:divergences} ----------- Most importantly, unlike the leading-order result, the next-to-leading order terms contain divergences, which need to be properly regulated to obtain the finite results represented by equations . There are two classes of divergences: rapidity and collinear ones. ### Rapidity divergences {#sec:rapidity divergences} Rapidity divergences arise from integrals of the form $$\int^1\frac{\dd\emitfrac}{1 - \emitfrac}\times\text{finite}(\emitfrac) \; ,$$ specifically from the upper endpoint $\emitfrac\to 1$. Here, $\text{finite}(\emitfrac)$ denotes some function which remains finite as $\emitfrac\to 1$. The limit $\emitfrac\to 1$ is kinematically equivalent to the gluon rapidity $\gluonrapidity = \ln\frac{1}{1 - \emitfrac}$ going to $\infty$. The physical interpretation of this divergence is that of the projectile parton emitting a gluon with large longitudinal momentum in the *opposite* direction. Such a gluon is indistinguishable from a gluon in the target nucleus; it therefore makes sense to absorb this singularity into the target gluon distribution. This divergence vanishes if we integrate over the transverse momentum $\kperp$. Let us consider the expression for the quark-quark channel only, as an example of how rapidity divergences are removed.
Prior to the subtractions, the quark-quark channel cross section can be expressed as $$\begin{gathered} \label{eq:unsubtracted qq cross section} \frac{\dd[3]\sigmainclusive}{\dd[2]\pperp \dd\rapidity} = \int_{\tau}^1 \frac{\dd\fragfrac}{\fragfrac^2} \xprojectile q(\xprojectile, \facscale)D_{h/q}(\fragfrac, \facscale) {{\mathcal{F}_{}}^{\star}}(\kperp) \\ + \frac{\alphas}{2\pi^2}\int_{\tau}^1 \frac{\dd\fragfrac}{\fragfrac^2} \int_{\tau/\fragfrac}^1 \frac{\dd\emitfrac}{1 - \emitfrac} \frac{\xprojectile}{\emitfrac} q\biggl(\frac{\xprojectile}{\emitfrac}, \facscale\biggr)D_{h/q}(\fragfrac, \facscale) S_{qq}^{\text{real}} \\ - \frac{\alphas}{2\pi^2}\int_{\tau}^1 \frac{\dd\fragfrac}{\fragfrac^2} \int_{0}^1 \frac{\dd\emitfrac}{1 - \emitfrac} \xprojectile q(\xprojectile, \facscale)D_{h/q}(\fragfrac, \facscale) S_{qq}^{\text{virt}}\; ,\end{gathered}$$ where ${{\mathcal{F}_{}}^{\star}}$ is the unrenormalized dipole gluon distribution as defined in equation , and $S_{qq}^{\text{real}}$, $S_{qq}^{\text{virt}}$ represent the real and virtual NLO terms (which of course depend on ${{\mathcal{F}_{}}^{\star}}$). We use ${{\mathcal{F}_{}}^{\star}}$ instead of the conventional ${\mathcal{F}_{}}^{(0)}$ to denote the bare distribution for consistency with ${S^{\star(2)}}$. In the notation of Ref. [@Ducloue:2016shw],[^1] $$\begin{aligned} S_{qq}^{\text{real}} &= (1 + \emitfrac^2)\biggl[\CF\mathcal{I}(\vec\kperp, \emitfrac) + \frac{\Nc}{2}\mathcal{J}(\vec\kperp, \emitfrac)\biggr]\;, \\ S_{qq}^{\text{virt}} &= (1 + \emitfrac^2)\biggl[\CF\mathcal{I}_v(\vec\kperp, \emitfrac) + \frac{\Nc}{2}\mathcal{J}_v(\vec\kperp, \emitfrac)\biggr]\; . 
\end{aligned}$$ To regulate the divergence, we rewrite the $\emitfrac$ integral in the NLO terms as $$\label{eq:separate subtraction term} \int^1 \frac{\dd\emitfrac}{1 - \emitfrac}f(\emitfrac) = \underbrace{\int^1 \frac{\dd\emitfrac}{(1 - \emitfrac)_+}f(\emitfrac)}_{\text{finite term}} + \underbrace{\int_0^1 \frac{\dd\emitfrac}{1 - \emitfrac}f(1)}_{\text{subtraction term}}\; ,$$ which follows directly from the definition of the plus prescription. We then define the renormalized gluon distribution ${\mathcal{F}_{}}$ to be the sum of ${{\mathcal{F}_{}}^{\star}}$ and the subtraction terms from the real and virtual NLO contributions. $$\label{eq:rapidity subtraction} {\mathcal{F}_{}}(\qperp) \defn {{\mathcal{F}_{}}^{\star}}(\qperp) + \frac{\alphas}{2\pi^2}\int_0^1\frac{\dd\emitfrac}{1 - \emitfrac}\Bigl[S_{qq}^{\text{real}} - S_{qq}^{\text{virt}}\Bigr]_{\emitfrac=1}\; .$$ After combining the bare dipole gluon distribution with the subtraction terms, we can write the cross section entirely in terms of finite expressions: ${\mathcal{F}_{}}$ and plus-regulated NLO contributions. 
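The effect of the plus prescription can be illustrated numerically for the weight $f(\emitfrac) = 1+\emitfrac^2$ that multiplies the quark-quark kernels: with an explicit cutoff $\emitfrac < 1-\varepsilon$ the unsubtracted integral grows like $f(1)\ln(1/\varepsilon)$, while the plus-regulated integral is finite and equals $\int_0^1 \dd\emitfrac\, \bigl[(1+\emitfrac^2)-2\bigr]/(1-\emitfrac) = -3/2$. A standard-library sketch:

```python
import math

def f(xi):
    # the (1 + xi^2) weight that multiplies the qq-channel kernels
    return 1.0 + xi * xi

def cutoff_integral(eps, n=20000):
    # int_0^{1-eps} dxi f(xi)/(1-xi), via the substitution u = -ln(1-xi),
    # which makes the integrand smooth; diverges like f(1)*ln(1/eps)
    umax = math.log(1.0 / eps)
    h = umax / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        total += f(1.0 - math.exp(-u)) * h
    return total

def plus_integral(n=20000):
    # int_0^1 dxi [f(xi) - f(1)]/(1-xi) = int_0^1 -(1+xi) dxi = -3/2
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        xi = (i + 0.5) * h
        total += (f(xi) - f(1.0)) / (1.0 - xi) * h
    return total

for eps in (1e-2, 1e-4, 1e-6):
    # subtracting the divergent piece isolates the finite part
    print(eps, cutoff_integral(eps) - f(1.0) * math.log(1.0 / eps))
print(plus_integral())
```

The divergent logarithm is precisely the piece moved into the renormalized gluon distribution; what remains in the hard factors is the finite, plus-regulated part.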
$$\begin{gathered} \frac{\dd[3]\sigmainclusive}{\dd[2]\pperp \dd\rapidity} = \int_{\tau}^1 \frac{\dd\fragfrac}{\fragfrac^2} \xprojectile q(\xprojectile, \facscale)D_{h/q}(\fragfrac, \facscale) {\mathcal{F}_{\xtarget}}(\kperp) \\ + \frac{\alphas}{2\pi^2}\int_{\tau}^1 \frac{\dd\fragfrac}{\fragfrac^2} \int_{\tau/\fragfrac}^1 \frac{\dd\emitfrac}{(1 - \emitfrac)_+} \frac{\xprojectile}{\emitfrac} q\biggl(\frac{\xprojectile}{\emitfrac}, \facscale\biggr)D_{h/q}(\fragfrac, \facscale) S_{qq}^{\text{real}} \\ - \frac{\alphas}{2\pi^2}\int_{\tau}^1 \frac{\dd\fragfrac}{\fragfrac^2} \int_{0}^1 \frac{\dd\emitfrac}{(1 - \emitfrac)_+} \xprojectile q(\xprojectile, \facscale)D_{h/q}(\fragfrac, \facscale) S_{qq}^{\text{virt}}\end{gathered}$$ The NLO terms $S_{qq}^{\text{real}}$ and $S_{qq}^{\text{virt}}$ also depend on ${{\mathcal{F}_{}}^{\star}}$, but the difference between ${{\mathcal{F}_{}}^{\star}}$ and ${\mathcal{F}_{}}$ is one order of $\alphas$ higher — in this case, that means $\order{\alphas^2}$, which we assume to be negligible in this calculation. So we can freely replace ${{\mathcal{F}_{}}^{\star}}\to{\mathcal{F}_{}}$ within the NLO terms without making any additional changes. One can take the definition of the renormalized gluon distribution , plug in the full expressions for $S_{qq}^{\text{real}}$ and $S_{qq}^{\text{virt}}$, and transform to coordinate space. The equation becomes [@Chirilli:2012jd; @Kang:2014lha; @Ducloue:2016shw] $$\begin{gathered} \label{eq:position space subtraction} {S^{(2)}_{}}(\vec\xperp, \vec\yperp) = {S^{\star(2)}}(\vec\xperp, \vec\yperp) \\ - \frac{\alphas\Nc}{2\pi^2}\int_0^1\frac{\dd\emitfrac}{1 - \emitfrac}\int\dd[2]\vec\bperp\frac{(\vec\xperp - \vec\yperp)^2}{(\vec\xperp - \vec\bperp)^2(\vec\yperp - \vec\bperp)^2}\Bigl[{S^{(2)}_{}}(\vec\xperp, \vec\yperp) - {S^{(4)}_{}}(\vec\xperp, \vec\bperp, \vec\yperp)\Bigr]\end{gathered}$$ which looks very similar to the integral form of the BK evolution equation. 
However, the similarity is deceptive, since there is no evolution in this expression. Ref. [@Chirilli:2012jd] offers two procedures for artificially introducing the rapidity evolution required to obtain the BK equation. One can either shift the upper limit of the integral to $1 - e^{-\rapidity}$, where $\rapidity$ is the rapidity difference between the projectile proton and the target nucleus, or shift the denominator of the integrand to $1 - \emitfrac + e^{-\rapidity}$. Either way, this corresponds to dropping the approximation that the projectile and target move at the speed of light, taking them off the light cone. Taking the derivative with respect to $\rapidity$ then yields the BK equation (in the first procedure, differentiating the $\rapidity$-dependent upper limit brings down the integrand at $\emitfrac = 1 - e^{-\rapidity}$ with unit weight, which reproduces the BK kernel). However, introducing the rapidity gap between the proton and the nucleus as the evolution variable is somewhat unsatisfying, because the BK equation governs the evolution of a gluon field, a parton-level construct, which should not be sensitive to hadron-level kinematics like the rapidity gap [@Xiao:2014uba]. To resolve this issue, we need to carefully consider the physical significance of Eqs.  and . In the NLO kinematics, the projectile parton undergoes two types of gluon interactions: the scattering off the dense gluon field of the target nucleus, represented by ${{\mathcal{F}_{}}^{\star}}$ or ${S^{\star(2)}}$, and the initial- or final-state emission, represented by the NLO terms in Eq. . One might naively consider these two processes to be well separated, since the NLO emission involves a large momentum transfer while the interaction with the gluon field involves a small one, making them easily distinguishable. But in fact the emitted gluon can carry any momentum allowed by kinematics. Gluons emitted with very small $q^+$ and small $\qperp$ are actually collinear with the target nucleus, and kinematically indistinguishable from the gluon field of the nucleus itself.
So it makes sense to take the part of the NLO term corresponding to “slow” and soft (small $q^+$, small $\qperp$) gluon emission, separate it from the “fast” emissions with large $q^+$ (Ref. [@Balitsky:1998kc] justifies this split), and *reinterpret* the emission as scattering off an external gluon field. This external field should be considered part of the target. This procedure introduces a scale separating the fast and slow gluon fields: essentially, a cutoff on how much of the phase space for gluon emissions is absorbed into the renormalized gluon distribution. The cutoff enters Eq.  or  through the limits of the $\emitfrac$ integral. We will return to this issue in Section \[sec:rapidity subtraction\]. ### Collinear divergences Collinear divergences arise from integrals of the form $\int\frac{\dd[2]\kperp'}{(\vec\kperp - \vec\kperp')^2}$ or similar, which diverge logarithmically as $\vec\kperp'\to\vec\kperp$. Physically, these contributions correspond to quarks or gluons emitted either in the initial state with momentum parallel to that of the incoming parton, or in the final state with momentum parallel to that of the outgoing parton. These divergences should therefore be absorbed by, respectively, the parton distribution function or the fragmentation function. When regulating the collinear divergences, one has to maintain consistency with the other ingredients of the calculation, namely the parton distributions and fragmentation functions. The commonly used fits for these functions come from expressions derived using dimensional regularization in the ${\ensuremath{\overline{\mathrm{MS}}}}$ scheme, so the regularization must be done in the same scheme.
For example, applying the ${\ensuremath{\overline{\mathrm{MS}}}}$ scheme to perform the collinear subtractions, one can remove the collinear divergences in the quark-to-quark channel by redefining the quark distribution and fragmentation function as follows: $$\begin{aligned} q(\xprojectile, \facscale) &= q^{(0)}(\xprojectile) - \frac{1}{\hat{\epsilon}} \frac{\alphas(\facscale)}{2\pi} \int_{\xprojectile}^1 \frac{\dd\emitfrac}{\emitfrac} \CF P_{qq}(\emitfrac) q\biggl(\frac{\xprojectile}{\emitfrac}\biggr)\; , \nonumber \\ D_{h/q}(\fragfrac,\facscale) & = D_{h/q}^{(0)}(\fragfrac)-\frac{1}{\hat{\epsilon}} \frac{\alphas(\facscale)}{2\pi} \int_{\fragfrac}^1 \frac{\dd\emitfrac}{\emitfrac} \CF P_{qq}(\emitfrac) D_{h/q}\biggl(\frac{\fragfrac}{\emitfrac}\biggr) \; ,\end{aligned}$$ where $1/\hat{\epsilon}=1/\epsilon-\gamma_E+\ln 4\pi$ and $\epsilon$ is the parameter of dimensional regularization ($D=4-2\epsilon$). Above, $P_{qq}$ is the leading-order DGLAP splitting function $$P_{qq}(\emitfrac) = \biggl[\frac{1+\emitfrac^2}{1-\emitfrac}\biggr]_+ + \frac{3}{2}\delta(1-\emitfrac) \; .$$ In this manner, the parton distribution and fragmentation functions acquire their scale dependence at this order of the calculation. Note that the scale and rapidity dependence which shows up at NLO is formally due to leading-order logarithms: leading in $\ln 1/\xtarget$ and leading in $\ln \facscale$. This is because the leading-order calculation of inclusive production is free from any singularities. ![Results of the full NLO cross section, from Ref. [@Stasto:2013cha].[]{data-label="fig:completenloresults"}](fig1a-BRAHMS.pdf "fig:"){width=".48\linewidth"} ![Results of the full NLO cross section, from Ref. 
[@Stasto:2013cha].[]{data-label="fig:completenloresults"}](fig1b-STAR.pdf "fig:"){width=".48\linewidth"} Numerical results at NLO ------------------------ Numerical results from this calculation, using the MSTW 2008 NLO parton distributions[@Martin:2009iq] and DSS NLO fragmentation functions[@deFlorian:2007hc; @deFlorian:2007aj], were first presented in 2013 by Staśto et al. [@Stasto:2013cha]. The results, shown in Fig. \[fig:completenloresults\], are compared with the experimental data on deuteron-gold collisions measured by BRAHMS [@Arsene:2004ux] and by STAR [@Adams:2006uz]. Three different models for the unintegrated gluon distribution ${\mathcal{F}_{\xtarget}}(\kperp)$ were used: two phenomenological models, the McLerran-Venugopalan (MV) model [@McLerran:1993ni] and the Golec-Biernat–Wüsthoff (GBW) model [@GolecBiernat:1998js], as well as the direct solution of the leading-order BK equation with running coupling. In general, the agreement between the data and the calculation is very good at low transverse momenta, up to values of about $\pperp\sim \satscale$, and extends down to $\pperp \sim \SI{0.5}{GeV}$, where nonperturbative QCD effects start to dominate. Within the region of validity, the NLO correction terms reduce the theoretical uncertainty resulting from the choice of the factorization and renormalization scales. The reduction of the scale dependence at NLO is illustrated in Fig. \[fig:scaledependence\]. As seen from this figure, the leading-order result is quite sensitive to the choice of the factorization scale $\facscale$. This is understandable, as both the parton distribution and fragmentation functions depend quite sharply on $\facscale$ for large values of $\xprojectile > 0.1$ and $\fragfrac > 0.2$. In the NLO calculation, on the other hand, the scale dependence is canceled (up to one-loop order), as shown in the same figure. 
The calculation also demonstrates that the best choice of the factorization scale lies in the region $\facscale \sim (2\text{--}3)\,\pperp$. ![Results of the full NLO calculation, from Ref. [@Stasto:2013cha], showing the scale dependence of the LO (solid lines) and NLO calculations (dotted and dashed lines). The NLO calculations are performed for fixed coupling, $\alphas=0.2$, and for running coupling in the hard coefficients. Two sets of curves are plotted, for the LHC $\sqrt{s_{NN}}=\SI{5.02}{TeV}$ and RHIC $\sqrt{s_{NN}}=\SI{200}{GeV}$ energies. Figure from Ref. [@Stasto:2013cha].[]{data-label="fig:scaledependence"}](fig4-vsmu.pdf){width="60.00000%"} The most dramatic feature of the NLO calculation, however, is that it turns negative at moderate to high transverse momentum, depending on rapidity. This confirms previous results [@Altinoluk:2011qy] based on a partial calculation of the NLO contributions. The NLO correction becomes negative and eventually dominates over the LO term above some value of the transverse momentum. This critical value, at which the cross section becomes negative, depends on the rapidity, as can be seen from Fig. \[fig:completenloresults\]: the higher the rapidity, the larger the critical value at which the calculation turns negative. ![Scaling of the cutoff momentum with the saturation scale, using the LO+NLO cross section with the $L_q$ and $L_g$ corrections included (see Section \[sec:implkc\]). Since each calculation of the cross section incorporates a range of values of the saturation scale, we cannot assign a specific value of $\satscale$ to each point. The number on the horizontal axis is an overall fixed scaling factor applied to $\satscale$, such that a scaling factor of $1$ corresponds to minimum bias collisions. []{data-label="fig:cutoff"}](cutoff) The existence of this negativity is independent of the form of the gluon distribution. 
This fact is illustrated in Fig. \[fig:gluoncomparison\], where we show the calculations performed with different forms of the unintegrated gluon distribution: the GBW model, the MV model, and two versions of the BK equation with running coupling corrections. All these calculations agree with the data at low $\pperp$ but then turn negative at high $\pperp$. The exact cutoff momentum where the negativity sets in depends somewhat on the form of the gluon distribution, but the feature persists in all cases. As shown in Fig. \[fig:cutoff\], the cutoff momentum bears an approximately linear relationship to the saturation scale. ![Results of the full NLO calculation (solid bands), from Ref. [@Stasto:2013cha], showing the comparison of the calculations with the experimental data from BRAHMS for four different choices of the unintegrated gluon distribution: the GBW and MV models, and two solutions of the BK equation, with fixed coupling $\alphas=0.1$ and with running coupling. The bands correspond to the variation of the scale from $\facscale^2=\SI{10}{GeV^2}$ to $\SI{50}{GeV^2}$. The crosshatch fill denotes the LO calculation.[]{data-label="fig:gluoncomparison"}](fig2-BRAHMS.pdf){width="85.00000%"} In principle, the fact that the cross section is negative does not *necessarily* indicate a problem with the small-$\bjorkenx$ formalism. This result reflects only the first two terms in a perturbation series, and it’s entirely possible that higher-order terms compensate for the negativity, giving a positive, finite result. However, the result is still somewhat disconcerting. The fact that the NLO correction is larger than the LO result at high $\pperp$ leads one to wonder whether higher-order terms will continue to grow larger and larger as diagrams with more and more loops are incorporated. 
If they do, then even though it’s not practical to calculate these higher-order contributions (barring some automated method for computing them), their growth would indicate that the perturbation series is divergent and that results up to any finite order cannot be trusted. In other words, even if we did somehow manage to incorporate the NNLO contribution, it’s not clear that it would provide any better predictive value at large $\pperp$ than the current results. This would leave us largely unable to constrain the gluon distribution at high $\kperp$ using hadron production in $\pA$ collisions. Rapidity Subtraction {#sec:rapidity subtraction} -------------------- Following the discovery of the negative results, it was quickly established that the negative contribution originates from the plus prescription used in the subtraction of the rapidity divergence, as described in section \[sec:divergences\]. Using the definition of the plus distribution, we can write the finite term from Eq.  as $$\int_a^1 \frac{f(\emitfrac)}{(1 - \emitfrac)_+}\dd\emitfrac = \int_a^1 \frac{f(\emitfrac) - f(1)}{1 - \emitfrac}\dd\emitfrac - \int_0^a\frac{f(1)}{1 - \emitfrac}\dd\emitfrac \; .$$ For the relevant functions in the NLO diagonal channels, $f(\emitfrac)$ achieves its maximum value on the range $[0,1]$ at $\emitfrac = 1$, so $f(\emitfrac) - f(1) \leq 0$. The negativity in the region near $\emitfrac = 1$ is amplified by the denominator going to zero. With this in mind, it’s natural to consider modifying the subtraction procedure in an attempt to mitigate the negativity of the cross section. Two recent papers [@Kang:2014lha; @Ducloue:2016shw] have proposed introducing a cutoff on the momentum fraction, $\rapfac$ (in the notation of Ref. [@Ducloue:2016shw]), which alters the subtraction procedure from Eq.  by separating the high- and low-momentum gluon emissions as described in section \[sec:rapidity divergences\]. 
$$\int^1 \frac{\dd\emitfrac}{1 - \emitfrac}f(\emitfrac) = \int^1 \frac{\dd\emitfrac}{(1 - \emitfrac)_+}f(\emitfrac) + \int_0^{\rapfac} \frac{\dd\emitfrac}{1 - \emitfrac}f(1) + \int_{\rapfac}^1 \frac{\dd\emitfrac}{1 - \emitfrac}f(1)\; .$$ With this done, only the last term, representing the low-momentum emissions, is absorbed into the renormalized gluon distribution. $$\label{eq:momentum space modified subtraction} {\mathcal{F}_{\rapfac}}(\qperp) \defn {{\mathcal{F}_{}}^{\star}}(\qperp) + \frac{\alphas}{2\pi^2}\int_{\rapfac}^1\frac{\dd\emitfrac}{1 - \emitfrac}\Bigl[S_{qq}^{\text{real}} - S_{qq}^{\text{virt}}\Bigr]_{\emitfrac=1}\; ,$$ or, in position space (and using the fact that ${S^{\star(2)}}= {S^{(2)}_{\rapfac}}$ to leading order in $\alphas$), $$\begin{gathered} \label{eq:position space modified subtraction} {S^{(2)}_{\rapfac}}(\vec\xperp, \vec\yperp) = {S^{\star(2)}}(\vec\xperp, \vec\yperp) \\ - \frac{\alphas\Nc}{2\pi^2}\int_{\rapfac}^1\frac{\dd\emitfrac}{1 - \emitfrac}\int\dd[2]\vec\bperp\frac{(\vec\xperp - \vec\yperp)^2}{(\vec\xperp - \vec\bperp)^2(\vec\yperp - \vec\bperp)^2}\Bigl[{S^{(2)}_{\rapfac}}(\vec\xperp, \vec\yperp) - {S^{(4)}_{\rapfac}}(\vec\xperp, \vec\bperp, \vec\yperp)\Bigr] \; .\end{gathered}$$ The cutoff $\rapfac$, or more precisely the logarithm $\ln\frac{1}{1 - \rapfac}$ which is the minimum rapidity of emitted (slow) gluons represented by the subtraction term, provides the evolution variable we need to transform this into the BK equation. ![Dependence of the cross section on the rapidity factorization scale using the cutoff chosen in Ref. [@Kang:2014lha]. Reprinted figure with permission from Kang *et al* Phys. Rev. Lett. 113 (2014) 062002; http://dx.doi.org/10.1103/PhysRevLett.113.062002. Copyright 2014 by American Physical Society.[]{data-label="fig:rapidity factorization variation"}](y-dep){width="65.00000%"} ![Variation of the cross section with different choices of the rapidity factorization scale $\rapfac$ as computed in Ref. 
[@Ducloue:2016shw]. The plot on the left shows results using a fixed factorization scale cutoff, while the plot on the right shows the results for momentum-dependent $\rapfac$ as in Eq. . Reprinted figure with permission from Ducloué *et al* Phys. Rev. D93, (2016) 114016; http://dx.doi.org/10.1103/PhysRevD.93.114016. Copyright 2016 by The American Physical Society.[]{data-label="fig:rapidity factorization cross section"}](dN_cutoff "fig:"){width="48.00000%"} ![Variation of the cross section with different choices of the rapidity factorization scale $\rapfac$ as computed in Ref. [@Ducloue:2016shw]. The plot on the left shows results using a fixed factorization scale cutoff, while the plot on the right shows the results for momentum-dependent $\rapfac$ as in Eq. . Reprinted figure with permission from Ducloué *et al* Phys. Rev. D93, (2016) 114016; http://dx.doi.org/10.1103/PhysRevD.93.114016. Copyright 2016 by The American Physical Society.[]{data-label="fig:rapidity factorization cross section"}](dN_xifcontinuous "fig:"){width="48.00000%"} The remaining term, $\int_0^{\rapfac} \frac{f(1)}{1 - \emitfrac}\dd\emitfrac$, is grouped into the finite part of the cross section, unlike Eq.  where it was considered part of the subtraction term. This additional finite term becomes a new $\order{\alphas}$ contribution to the cross section, one which is positive at high $\pperp$ and could potentially cancel out the negativity in the original NLO results. 
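Before writing the new contribution out, the bookkeeping can be checked numerically. The sketch below is a toy illustration (not tied to any actual hard factor): the function `f` is a hypothetical smooth stand-in whose maximum on $[0,1]$ sits at $\emitfrac = 1$, as for the diagonal channels, and the integrator is a plain midpoint rule.

```python
import math

def midpoint(g, lo, hi, n=200_000):
    """Simple midpoint rule; adequate for the smooth integrands below."""
    h = (hi - lo) / n
    return h * sum(g(lo + (i + 0.5) * h) for i in range(n))

def f(xi):
    # Toy stand-in for the smooth part of the NLO integrand.
    return 1.0 + xi ** 2

a, xf = 0.3, 0.9   # a plays the role of tau/z; xf is the rapidity cutoff

# Plus-prescription integral, using the identity quoted earlier:
#   int_a^1 f(xi)/(1-xi)_+ dxi
#     = int_a^1 (f(xi) - f(1))/(1-xi) dxi - int_0^a f(1)/(1-xi) dxi
plus_integral = (midpoint(lambda xi: (f(xi) - f(1.0)) / (1.0 - xi), a, 1.0)
                 - midpoint(lambda xi: f(1.0) / (1.0 - xi), 0.0, a))

# Both pieces are known analytically for this f, since
# f(xi) - f(1) = -(1 - xi)(1 + xi):
exact = -((1.0 - a) + (1.0 - a ** 2) / 2.0) + f(1.0) * math.log(1.0 - a)
assert abs(plus_integral - exact) < 1e-6

# The finite term moved out of the subtraction by the cutoff xf:
#   int_0^{xf} f(1)/(1-xi) dxi = f(1) * ln(1/(1-xf))
extra = midpoint(lambda xi: f(1.0) / (1.0 - xi), 0.0, xf)
assert abs(extra - f(1.0) * math.log(1.0 / (1.0 - xf))) < 1e-6
```

The last check makes explicit how the finite piece grows only logarithmically as the cutoff $\rapfac$ approaches $1$.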
The new contribution takes the form $$\begin{aligned} \frac{\dd[3]\sigmainclusive}{\dd\rapidity\dd[2]\pperp} &= \frac{\alphas}{2\pi^2}\int_\tau^1 \frac{\dd\fragfrac}{\fragfrac^2}D_{h/q}(\fragfrac) \xprojectile q(\xprojectile) \int_{0}^{\rapfac}\frac{\dd\emitfrac}{1 - \emitfrac} \Bigl[S_{qq}^{\text{real}} - S_{qq}^{\text{virt}}\Bigr]_{\emitfrac=1} \\ &= \frac{\alphas}{2\pi^2}\int_\tau^1 \frac{\dd\fragfrac}{\fragfrac^2}D_{h/q}(\fragfrac) \xprojectile q(\xprojectile) \ln\biggl(\frac{1}{1 - \rapfac}\biggr) \Bigl[S_{qq}^{\text{real}} - S_{qq}^{\text{virt}}\Bigr]_{\emitfrac=1} \; .\end{aligned}$$ We can see that the value of the cutoff $\rapfac$ affects the calculated NLO cross section. In fact, the effect can be quite significant, and even brings the cross section from negative to positive over large ranges of $\pperp$. Given the strong dependence on $\rapfac$, it is important to choose the most sensible value. Choosing $\rapfac = 0$ reproduces the original result of Ref. [@Chirilli:2012jd], but more recent work takes varying views on how the value should be fixed. Kang et al.[@Kang:2014lha] argued that $\rapfac$ should be chosen similar to the value of $\xtarget$ to which the gluon distribution ${S^{(2)}_{\xtarget}}$ is evolved. Since our renormalization of ${S^{(2)}_{\xtarget}}$ has incorporated the evolution into the leading order term, we use the leading order kinematics, $\rapfac = \xtarget = \frac{\pperp}{\fragfrac\sqs}e^{-y}$. However, their results suggest that $\rapfac$ can vary by a factor of order $1$ without changing the cross section too much, as shown in figure \[fig:rapidity factorization variation\]. In a response, Xiao and Yuan [@Xiao:2014uba] have claimed that this rapidity subtraction term is not quite correct because $\rapidity$ should actually be the rapidity difference between the radiated gluon and the projectile *parton* (e.g. quark), which is $\ln\frac{1}{x_g}$, not the projectile hadron (e.g. proton or deuteron). 
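For a sense of scale, the leading-order target momentum fraction entering this choice of $\rapfac$ is easy to evaluate; the sketch below uses hypothetical BRAHMS-like sample values, not numbers taken from the references.

```python
import math

def x_target_lo(pperp, z, sqrt_s, y):
    """Leading-order kinematics: x_target = pperp/(z*sqrt(s)) * exp(-y)."""
    return pperp / (z * sqrt_s) * math.exp(-y)

# Hypothetical forward kinematics: sqrt(s) = 200 GeV, y = 3.2, pperp = 2.5 GeV.
xg = x_target_lo(pperp=2.5, z=1.0, sqrt_s=200.0, y=3.2)
assert 1e-4 < xg < 1e-3   # deep in the small-x regime

# Moving from y = 3.2 to midrapidity raises x_target by a factor e^{3.2} ~ 25:
assert abs(x_target_lo(2.5, 1.0, 200.0, 0.0) / xg - math.exp(3.2)) < 1e-9
```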
More recently, Ducloué et al.[@Ducloue:2016shw] performed a more detailed analysis, showing the effect of various choices for the cutoff $\rapfac$ over a wider range of possible values. As shown in the left panel of Fig. \[fig:rapidity factorization cross section\], the effect is very pronounced at high hadron momenta, and the cutoff momentum, at which the LO+NLO cross section becomes negative, varies over nearly the entire kinematically allowed range as $\rapfac$ varies. Instead of a fixed cutoff value, they propose a momentum-dependent cutoff $\rapfac(\kperp)$, motivated by ordering of the emitted gluons in $k^-$. Under this scheme, the subtraction term in Eqs.  and  should include gluon emissions in which the fluctuation of the light-cone energy, $\Delta k^-$ (in the notation of our Fig. \[fig:nlo-kinematics\]), is at least $x_{\mathrm{f}}p_a^{-}$ for some value $x_{\mathrm{f}}$, which would likely be close to $\xtarget$. This results in a formula like the following for the cutoff: $$\label{eq:k-dependent rapidity cutoff} \rapfac(\kperp) = \frac{\kperp^2}{\kperp^2 + (\xtarget / x_{\mathrm{f}})\satscale^2}\; .$$ The associated results are shown in Fig. \[fig:rapidity factorization cross section\] on the right. Each of these methods enhances the cross section at moderate $\pperp$, thus increasing the region in which the LO+NLO result is positive. However, at sufficiently high momenta, the negativity always comes back. Section \[sec:expansion\] will address the question of whether any prescription can completely cure the negativity at arbitrarily high $\kperp$. Exact kinematics and matching to the collinear calculation ---------------------------------------------------------- The NLO calculation incorporates the $2\to 2$ processes with an off-shell gluon from the target side. In principle, in the collinear approximation, or when the transverse momentum of this gluon is relatively small, this calculation should match into the collinear calculation. 
We should emphasize that in collinear factorization the transverse momentum of the final-state hadron originates solely from the hard scattering subprocess, while in the hybrid approach at NLO it comes from both the transverse momentum of the unintegrated gluon and the hard process. This is schematically illustrated in Fig. \[fig:collvshybrid\]. ![Left: Hybrid calculation at NLO. Right: collinear calculation at LO.[]{data-label="fig:collvshybrid"}](cgcnlo.pdf "fig:"){width="6.2cm"} ![Left: Hybrid calculation at NLO. Right: collinear calculation at LO.[]{data-label="fig:collvshybrid"}](coll_lo.pdf "fig:"){width="4.7cm"} The matching to collinear factorization can be shown by expanding the exact NLO formulae (before any subtractions are performed) in powers of $\satscale^2/\kperp^2$ in the limit $\kperp^2 \gg \satscale^2$. In principle, a systematic expansion of this kind leads to the twist expansion, in the spirit of the calculations presented in Refs. [@Bartels:2000hv; @Bartels:2009tu]. To match the collinear calculation, only the leading power in the expansion is retained. In Ref. 
[@Stasto:2014sea] this expansion was performed both for the $q\to q$ and $g \to g$ channels, with the following formulae for the leading terms: $$\begin{gathered} \frac{\dd[3]\sigmainclusive_{q\to q}}{\dd\quarkrapidity \dd[2]\vec\pperp} = \frac{\alphas}{2\pi^{2}}\int_{\tau}^1 \frac{\dd\fragfrac}{\fragfrac^{2}} {\ensuremath{{\ensuremath{D_{h/q}}}\left(\fragfrac, \facscale^2\right)}}\int_{\tau/\fragfrac}^{1}\dd\emitfrac \frac{1+\emitfrac ^{2}}{1-\emitfrac } {\ensuremath{\frac{\xprojectile}{\emitfrac} q\left(\frac{\xprojectile}{\emitfrac}, \facscale^2\right)}} \\ \times \biggl\{ \CF\frac{(1-\emitfrac)^2}{\kperp^4} + \Nc\frac{\emitfrac}{\kperp^4}\biggr\} \int_{\mathcal{R}} \dd[2] \vec\qperp \qperp^2{\mathcal{F}_{\xtarget}}(\qperp)\, , \end{gathered}$$ $$\begin{gathered} \frac{\dd[3]\sigmainclusive_{g\to g}}{\dd\gluonrapidity \dd[2]\vec\pperp} = \frac{ \Nc }{2\pi^2}\int^1_{\tau} \frac{\dd\fragfrac}{\fragfrac^2} {\ensuremath{{\ensuremath{D_{h/g}}}\left(\fragfrac, \facscale^2\right)}}\int_{\tau/\fragfrac}^{1}\dd\emitfrac {\ensuremath{\frac{\xprojectile}{\emitfrac} g\left(\frac{\xprojectile}{\emitfrac}, \facscale^2\right)}} \\ \times \frac{2 [1-\emitfrac (1-\emitfrac)]^2 [1+\emitfrac^2+(1-\emitfrac)^2]}{ \emitfrac (1-\emitfrac)} \frac{1}{\kperp^4} \int_{\mathcal{R}} \dd[2] \vec\qperp \qperp^2{\mathcal{F}_{\xtarget}}(\qperp).\label{eq:collinearexpansion}\end{gathered}$$ The integrals over the unintegrated gluon distributions can be then simplified to the integrated, collinear densities in the following way $$\int_{\mathcal{R}} \dd[2] \vec\qperp \qperp^2{\mathcal{F}_{\xtarget}}(\qperp) \simeq \frac{2\pi^2}{\Nc} {\ensuremath{x' g_A\left(x', \facscale^2\right)}} \; , \label{eq:unintegrated}$$ where $\facscale$ is the scale of the integrated distribution of the nucleus. In this formalism it is set to be equal to $\satscale$. In order to match to the collinear calculation one needs to carefully evaluate the kinematics. 
The exact kinematics for the $2\to 2$ process with energy-momentum conservation is defined by $$\begin{aligned} \xprojectile &= \frac{\kperp}{\sqs\emitfrac}e^{\hadronrapidity}\, , \notag \\ \xtarget &= \frac{\kperp}{\sqs}e^{-\hadronrapidity} + \frac{(\vec{k}_{g\perp}-\vec{k}_{\perp})^2}{\sqrt{s} k_{\perp}} \frac{\emitfrac}{1-\emitfrac} e^{-y}\label{eq:xa}\, .\end{aligned}$$ The small-$\bjorkenx$ limit requires the center-of-mass energy to be very large, $\mandelstams\rightarrow \infty$, while at the same time $\xprojectile$ is kept large, which corresponds to the forward limit of hadron production. However, in any practical calculation, even in the LHC kinematics, the energy is not that large, and one has to keep the kinematics exact. In the small-$x$ limit, one takes the gluon transverse momentum to be of the order of $k_T$, which results in the approximation $x_a\simeq x_{g0}$, because the second term in the above equation is small. However, in the collinear limit this is no longer the case; we have $k_T \gg k_{gT}$, which leads to the following approximation for the gluon longitudinal momentum fraction $$x' \; = \; \frac{k_{\perp}}{\sqrt{s}}e^{-y} + \frac{k_{\perp}}{\sqrt{s}} \frac{\emitfrac}{1-\emitfrac} e^{-y}\, .\label{eq:xprim}$$ As a result, Ref. [@Stasto:2014sea], using “exact kinematics” (i.e. $x_a<1$ with definition ), reveals that the largest kinematically allowed value of $\emitfrac$ is $$\label{eq:ximax} \emitfrac_\text{max} = \frac{1 - \xglo}{1 - \xglo + \xglo(\vec\kgperp - \vec\kperp)^2/\kperp^2} \; ,$$ where $\xglo = \frac{\kperp}{\sqs}e^{-\rapidity}$ is the definition of the gluon momentum fraction in leading-order kinematics (although in this formula it is simply a convenient abbreviation and does not represent a physical momentum fraction). This is strictly less than $1$ except when $\vec\kgperp = \vec\kperp$, i.e. when the emitted gluon has zero transverse momentum. 
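The behavior of this bound is easy to check numerically. The sketch below uses hypothetical sample values and replaces the vector difference $\vec\kgperp - \vec\kperp$ by a scalar magnitude `dk`; it is an illustration of the formula, not a calculation from the references.

```python
import math

def xi_max(kperp, dk, sqrt_s, y):
    """Largest allowed xi from the exact-kinematics bound; dk is the
    magnitude |k_gperp - k_perp| (hypothetical scalar stand-in)."""
    xg0 = kperp / sqrt_s * math.exp(-y)   # LO gluon momentum fraction
    return (1.0 - xg0) / (1.0 - xg0 + xg0 * (dk / kperp) ** 2)

# Hypothetical values: sqrt(s) = 200 GeV, y = 3.2, kperp = 3 GeV.
# The bound reaches 1 only when the emitted gluon carries away
# no transverse momentum:
assert xi_max(3.0, 0.0, 200.0, 3.2) == 1.0
# and is strictly below 1 otherwise:
assert 0.0 < xi_max(3.0, 2.0, 200.0, 3.2) < 1.0
```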
Going back to formula and using together with the collinear kinematics, one can demonstrate that it indeed coincides with the leading-order collinear formula. One can extract the dominant contributions to the cross section at high $\kperp$ in momentum space and evaluate them with the constraint in force. The result [@Stasto:2014sea] is positive and matches the experimental data fairly well at high $\pperp$. The comparison of the LO and NLO calculations is shown in Fig. \[fig:collinearmatching\]. One can see that the calculation with the expansion of the kinematics agrees well with the data at large $\pperp$ and stays positive. It also matches the NLO calculation for intermediate values of $\pperp$, of the order of the saturation scale. For lower values of $\pperp$, the expansion overshoots the NLO calculation, which includes more of the higher-twist effects in this region and better matches the experimental data. ![Comparison of the LO and NLO small-$x$ calculations [@Stasto:2014sea] together with the leading-power expansion with exact kinematics, at rapidities $\hadronrapidity=2.2,3.2$ and $\sqsnn=\SI{200}{GeV}$. The data are from BRAHMS [@Arsene:2004ux]; the calculations use the running coupling BK equation. The figure is taken from Ref. [@Stasto:2014sea].[]{data-label="fig:collinearmatching"}](figure-brahms-matching.pdf){width="85.00000%"} Furthermore, at high transverse momentum $\kperp\gg\satscale$, so saturation effects are generally negligible. Therefore the perturbative description (which does not account for the nonlinear phenomenon of saturation) should be accurate, and indeed the high-$\kperp$ approximation can be analytically shown to coincide with the result from collinear factorization and perturbative QCD at large $\pperp$. 
Implementation of the kinematical constraint {#sec:implkc} -------------------------------------------- ![Contribution of $L_q$ and $L_g$ to the cross section, equations  and , along with the leading and original next-to-leading contributions for reference. The contribution from $L_q$ and $L_g$ is able to restore the cross section to a positive result up to moderate $\pperp$. Data are from ATLAS at a center-of-mass energy $\sqsnn=\SI{5.02}{TeV}$, compared with the SOLO results for the GBW model and the running coupling BK equation. Figure from Ref. [@Watanabe:2015tja].[]{data-label="fig:lqlgresults"}](nlowxyz.pdf){width="85.00000%"} There has been recent progress[@Altinoluk:2014eka; @Watanabe:2015tja] towards evaluating the cross section subject to the constraint . The two groups take different approaches. The work of Altinoluk et al.[@Altinoluk:2014eka] is based on the Ioffe time restriction for the split pair. Their argument is as follows: when a projectile parton emits a gluon prior to interacting with the target nucleus, the resulting pair has a coherence time $$\coherencetime \sim \frac{2\emitfrac(1 - \emitfrac)\xprojectile P^+}{\kperp^2} \; ,$$ where it is assumed that the final-state gluon and progenitor parton have equal and opposite transverse momenta $\pm\vec\kperp$. If $\coherencetime$ for a given $qg$ pair is less than the time $\targettime$ it takes to traverse the target, the pair behaves as a single dressed quark, not as a resolved parton pair. Therefore, when accounting for gluon emission at NLO, we should omit the region of phase space in which $\coherencetime < \targettime$. 
This leads to the kinematic constraint $$\frac{\emitfrac(1 - \emitfrac)\xprojectile}{\kperp^2} > \frac{1}{\mandelstams} \; .$$ Since this constraint is relevant only for $\emitfrac \approx 1$, we can approximate it as $$\emitfrac \lesssim 1 - \frac{\kperp^2}{\xprojectile \mandelstams} \; .$$ Application of the constraint then leads[@Altinoluk:2014eka] to a modification of the Weizsäcker-Williams field; specifically, it introduces a factor of $$\label{eq:ioffe time constraint} 1 - \besselJzero\Biggl(\uperp\sqrt{2\emitfrac(1 - \emitfrac)\frac{\xprojectile P^+}{\targettime}}\Biggr)\,,$$ where $\vec\uperp = \vec\xperp - \vec\bperp$ is the transverse separation of the original dipole, due to the restriction imposed by the constraint on the Fourier transform. Alternatively, Watanabe et al.[@Watanabe:2015tja] justify the constraint using conservation of the minus component of four-momentum: $$\xtarget = \frac{\qperp^2}{(1 - \emitfrac)\xprojectile \mandelstams} + \frac{\kperp^2}{\emitfrac \xprojectile \mandelstams} \leq 1\, .$$ Here $\qperp$ is the gluon momentum and $\kperp$ is that of the progenitor parton. As $\emitfrac\to 1$, the first term becomes dominant and the condition becomes $$\emitfrac \lesssim 1 - \frac{\qperp^2}{\xprojectile \mandelstams} \; .$$ This constraint can be applied to the dipole splitting function, which introduces a factor of $$1 - \besselJzero\Biggl(\uperp\sqrt{\emitfrac(1 - \emitfrac)\xprojectile \mandelstams}\Biggr) \; ,$$ again arising from the constraint acting on the Fourier transform. This is equivalent to equation  if one takes $\frac{2P^+}{\targettime} = \mandelstams$. The final result of the constraint is an additional contribution to the cross section, which can be broken down into two terms, one from the quark-quark channel and another from the gluon-gluon channel. 
Respectively, $$\begin{aligned} \frac{\dd[3]\sigma_{L_q}}{\dd\rapidity\dd[2]\pperp} &= \int_\tau^1 \frac{\dd\fragfrac}{\fragfrac^2}\sum_f {\ensuremath{\xprojectile q_f\left(\xprojectile, \facscale^2\right)}}{\ensuremath{{\ensuremath{D_{h/f}}}\left(\fragfrac, \facscale^2\right)}} L_q(\kperp) \; ,\label{eq:Lq cross section} \\ \frac{\dd[3]\sigma_{L_g}}{\dd\rapidity\dd[2]\pperp} &= \int_\tau^1 \frac{\dd\fragfrac}{\fragfrac^2} {\ensuremath{\xprojectile g\left(\xprojectile, \facscale^2\right)}}{\ensuremath{{\ensuremath{D_{h/g}}}\left(\fragfrac, \facscale^2\right)}} L_g(\kperp) \; . \label{eq:Lg cross section} \end{aligned}$$ Explicit expressions for the functions $L_q$ and $L_g$ can be found in Watanabe et al.[@Watanabe:2015tja] Numerically, one finds that these kinematical correction terms are positive and large enough to restore the positivity of the cross section at intermediate $\pperp$. Figure \[fig:lqlgresults\] shows that the cutoff momentum at which the cross section becomes negative is larger with the correction. Expansion and identification of negativity {#sec:expansion} ------------------------------------------ ![The lowest curve, labeled “NLO”, shows the negative coefficient in the high-$\kperp$ expansion of the cross section, $\ln(1 - \xprojectile) - (1 - \xprojectile)$ from Eq. . The higher curves show how this changes with the addition of the $L_q$ term  or the rapidity subtraction correction .[]{data-label="fig:asymptotic-coefficient"}](asymptotic-coefficient) As previously discussed, there have been several proposals [@Kang:2014lha; @Ducloue:2016shw; @Watanabe:2015tja; @Altinoluk:2014eka] for modifications to the NLO cross section which address the negativity, but so far none appear to cure it entirely. They simply push the cutoff momentum at which the cross section turns negative up to higher values. A better solution would truly remove the negativity from the result at all values of $\pperp$. 
In theory, this should be doable with an all-order resummation, but developing a suitable resummation procedure is quite difficult and there has not yet been much useful progress in this area. If the negativity can be truly cured with fixed-order terms, it could represent a significant improvement in the predictive value of the formula. Since the negativity is strongest at high momenta, let’s examine the high-momentum limit of the original LO+NLO cross section derived by Chirilli et al.[@Chirilli:2012jd]. We will limit our calculation to the quark-quark channel, which is expected to be representative of the full cross section. Expanding the integrand in powers of $\frac{\satscale^2}{\kperp^2}$, we find that the leading contribution is of order $\order{\kperp^{-4}}$ and takes the form $$\label{eq:lo+nlo asymptotic} \frac{\alphas}{2\pi^2} \int_\tau^1\dd\fragfrac \int_{\tau/\fragfrac}^1\dd\emitfrac\ \frac{{\ensuremath{\frac{\xprojectile}{\emitfrac} q\left(\frac{\xprojectile}{\emitfrac}, \facscale^2\right)}}{\ensuremath{{\ensuremath{D_{h/q}}}\left(\fragfrac, \facscale^2\right)}}}{\fragfrac^2}\frac{1 + \emitfrac^2}{(1 - \emitfrac)_+}\Bigl(\CF(1 - \emitfrac)^2 + \Nc\emitfrac\Bigr) \frac{2\pi}{\kperp^4}\int\dd[2]\vec\qperp\,\qperp^2{\mathcal{F}_{\xtarget}}(\qperp) \; .$$ To compare this to the other contributions, we need to integrate over $\emitfrac$. Although we can’t do the integral exactly without an explicit form for ${\ensuremath{\frac{\xprojectile}{\emitfrac} q\left(\frac{\xprojectile}{\emitfrac}, \facscale^2\right)}}$, we can take the leading term in a series expansion around $\frac{\tau}{\fragfrac} = \xprojectile \approx 1$. We expect this to be a reasonable approximation because $\tau \leq \emitfrac \leq 1$, and $\tau \propto \pperp$, so when $\pperp$ becomes large, the range of allowed values for $\emitfrac$ is small. 
When doing the $\emitfrac$ integral, the term proportional to $\CF$ vanishes, and we obtain $$\begin{gathered} \label{eq:lo+nlo asymptotic xp expansion} \frac{\alphas\Nc}{2\pi^2} \int_\tau^1\dd\fragfrac \frac{{\ensuremath{\xprojectile q\left(\xprojectile, \facscale^2\right)}}{\ensuremath{{\ensuremath{D_{h/q}}}\left(\fragfrac, \facscale^2\right)}}}{\fragfrac^2} \bigl(\ln(1 - \xprojectile) - (1 - \xprojectile)\bigr) \frac{4\pi}{\kperp^4}\int\dd[2]\vec\qperp\,\qperp^2{\mathcal{F}_{\xtarget}}(\qperp) \\ + \frac{\alphas\Nc}{2\pi^2} \int_\tau^1\dd\fragfrac \frac{\xprojectile^2 q'\left(\xprojectile, \facscale^2\right){\ensuremath{{\ensuremath{D_{h/q}}}\left(\fragfrac, \facscale^2\right)}}}{\fragfrac^2} (1 - \xprojectile) \frac{4\pi}{\kperp^4}\int\dd[2]\vec\qperp\,\qperp^2{\mathcal{F}_{\xtarget}}(\qperp) \; .\end{gathered}$$ Within the first line, the negativity comes from the factor $\ln(1 - \xprojectile) - (1 - \xprojectile)$, in particular the logarithm, which dominates over the other terms as $\xprojectile\to 1$.[^2] Any correction term which is to cure the negativity will have to cancel this logarithm. Meanwhile, on the second line, we see the derivative of the quark distribution $q'(\xprojectile, \facscale^2) = \pdv{\xprojectile}q(\xprojectile, \facscale^2)$. In the kinematic region we’re looking at, the parton distribution decreases as $\xprojectile \to 1$ from below, so we can expect that this contribution is also negative, although not divergent. 
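As a quick numerical check (an illustrative sketch, not a calculation from the references), the factor $\ln(1 - \xprojectile) - (1 - \xprojectile)$ is negative over the whole physical range and diverges logarithmically as $\xprojectile \to 1$, so no fixed additive constant can keep it positive all the way to the endpoint:

```python
import math

def coeff(xp):
    """The factor ln(1 - xp) - (1 - xp) from the first line of the expansion."""
    return math.log(1.0 - xp) - (1.0 - xp)

# Negative over the whole physical range ...
assert all(coeff(xp) < 0.0 for xp in [0.1, 0.5, 0.9, 0.99])

# ... and any fixed constant added to it is eventually overwhelmed
# as xp -> 1, since the logarithm diverges:
const = 10.0
assert coeff(1.0 - 1e-5) + const < 0.0
```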
Moving on to the $L_q$ term arising from the kinematic constraint[@Altinoluk:2014eka; @Watanabe:2015tja], we can expand it in $\kperp$ and find that it makes the following contribution to the cross section in the high-$\kperp$ limit: $$\label{eq:Lq asymptotic} \frac{\alphas\Nc}{2\pi^2} \int_\tau^1 \dd\fragfrac \frac{{\ensuremath{\xprojectile q\left(\xprojectile, \facscale^2\right)}}{\ensuremath{{\ensuremath{D_{h/q}}}\left(\fragfrac, \facscale^2\right)}}}{\fragfrac^2} \frac{4\pi}{\kperp^4} \int\dd[2]\vec\qperp\,\qperp^2{\mathcal{F}_{\xtarget}}(\qperp)$$ This affects the negative factor from the LO+NLO term  only by adding a constant. $$\underbrace{\ln(1 - \xprojectile) - (1 - \xprojectile)}_{\text{LO+NLO}} + \underbrace{1}_{L_q} = \ln(1 - \xprojectile) + \xprojectile$$ As $\xprojectile\to 1$, this is still dominated by the negative logarithm, as shown in Fig. \[fig:asymptotic-coefficient\]. ![This figure shows the high-$\kperp$ approximation to the differential yield resulting from the original LO+NLO calculation [@Chirilli:2012jd; @Stasto:2014sea], the same quantity with the $L_q$ addition [@Altinoluk:2014eka; @Watanabe:2015tja], and the LO+NLO result with the rapidity subtraction correction [@Ducloue:2016shw; @Kang:2014lha] for two different fixed values of $\rapfac$. Except for the latter, all the results are actually negative, so what is plotted here is the absolute value of the yield. The exception is the $\rapfac = 0.999$ curve, which is positive up to $\pperp\approx \SI{4.5}{GeV}$ and negative above, so this result is plotted using two colors, green for the positive part and red for the negative part. The results show clearly the dominance of the negative logarithm at the highest values of $\pperp$. 
The kinematic limit for these conditions, namely BRAHMS $\sqs = \SI{200}{GeV}$ and $\hadronrapidity = 3.2$, is $\pperp < \SI{8.15}{GeV}$.[]{data-label="fig:nlo-xsec-correction"}](nlo-xsec-corrections){width="75.00000%"} If we expand the rapidity subtraction correction [@Ducloue:2016shw; @Kang:2014lha], denoted $\Delta H_Y$, in the same way, we get the contribution $$\label{eq:HY asymptotic} \frac{\alphas \Nc}{2\pi^2}\int_{\tau}^1 \dd\fragfrac \frac{{\ensuremath{\xprojectile q\left(\xprojectile, \facscale^2\right)}}{\ensuremath{{\ensuremath{D_{h/q}}}\left(\fragfrac, \facscale^2\right)}}}{\fragfrac^2} \ln\biggl(\frac{1}{1 - \rapfac}\biggr) \frac{4\pi}{\kperp^4} \int\dd[2]\vec\qperp\,\qperp^2{\mathcal{F}_{\xtarget}}(\qperp)$$ This in turn affects the negative factor from Eq.  as $$\underbrace{\ln(1 - \xprojectile) - (1 - \xprojectile)}_{\text{LO+NLO}} + \underbrace{\ln\biggl(\frac{1}{1 - \rapfac}\biggr)}_{\Delta H_Y} = \ln(1 - \xprojectile) + \text{constant}$$ The additional term is, again, just a constant with respect to $\xprojectile$. Although the constant can be made as large as desired by adjusting $\rapfac$, there will always be a small range of $\xprojectile$ close to $1$ where the $\ln(1 - \xprojectile)$ term still dominates, as shown in Fig. \[fig:asymptotic-coefficient\]. Thus there will always be some value of $\pperp$ which puts $\tau$ close enough to $1$ to make the cross section negative even with the $\Delta H_Y$ correction. Fig. \[fig:nlo-xsec-correction\] shows a sample calculation, illustrating how even $\rapfac=0.999$ is not sufficient to cancel the negativity at very high $\pperp$, close to the kinematic limit of $\SI{8.15}{GeV}$. Of course, one could go to larger and larger values of $\rapfac$, and eventually bring the cross section positive up to some $\pperp$ that is practically indistinguishable from the kinematic limit, so the rapidity subtraction correction is at least a useful phenomenological tool. Refs. 
[@Ducloue:2016shw; @Kang:2014lha] additionally propose methods of setting the cutoff $\rapfac$ to a momentum-dependent value, which is more general than the constant cutoff considered here. However, this does not seem likely to change the qualitative result that the negativity persists at very high $\pperp$. We leave a detailed verification of this fact, as well as the investigation of other modification schemes which might be able to cancel the negative logarithm, to future work.

Summary
=======

In this review we have briefly summarized recent progress in the calculation of single inclusive hadron production at forward rapidities within the saturation formalism, as well as its application to phenomenology. This process is particularly useful for testing the small-$\bjorkenx$ dynamics, due to the low values of the longitudinal momentum fraction being probed in the target hadron or nucleus. Thus, it has been applied not only to proton-proton collisions, but also to proton-nucleus collisions in the proton’s forward rapidity range. This formalism, sometimes called the hybrid formalism, employs a combination of the collinear parton distribution functions from the projectile side and the unintegrated parton distribution functions on the target side. At lowest order, the formalism has been very successful in describing experimental data. The addition of NLO corrections was a significant step forward in extending the saturation formalism for this process, and it was demonstrated that the factorization still holds at this level. The appropriate divergences, i.e. the collinear and rapidity divergences, have been incorporated into the integrated parton distribution functions of the projectile, the fragmentation functions of the produced hadron, and the unintegrated gluon distribution of the target.
Numerical evaluation for this process showed that at this order the differential distribution fits the experimental data better at very low transverse momenta and has a smaller scale dependence. However, the calculation turns negative at larger values of the transverse momentum. Although the precise nature of the negativity, including the transverse momentum at which it sets in, depends on the kinematics, e.g. being more prevalent at lower rapidity, its existence is “universal”: it is independent of the form of the unintegrated distribution used and of the other parameters. The negativity can be traced to the subtraction of the rapidity divergence through the plus prescription. Recent work has focused on several paths to remedy this problem. Improvements of the kinematics, essentially based on the Ioffe time constraint, have been considered. This improvement generates additional terms in the NLO formalism, which shrink the kinematic range in which the results are negative. Another approach proposes a modification of the rapidity subtraction; by varying this cutoff one can push the negativity to yet higher values of the transverse momentum. However, we have shown that none of these approaches appears capable of eliminating the negativity entirely. In the future, further improvements to this formalism could be made, among them a calculation using the solution of the nonlinear evolution equation at NLL accuracy or, better yet, of its resummed version. It would also be interesting to see whether the hybrid form of the factorization can be extended beyond the NLO level.

Acknowledgments {#acknowledgments .unnumbered}
===============

This work was supported by the Department of Energy Grant No. DE-SC-0002145 and by the National Science Center, Poland, Grant No. 2015/17/B/ST2/01838. We would like to thank Francois Arleo, Bertrand Ducloué, Zefang (Jimmy) Jiang, Tuomas Lappi, Lech Szymanowski, Bowen Xiao, and Yan Zhu for useful discussions.
[^1]: This is easily translated to the notation of Ref. [@Chirilli:2012jd] by comparing Eqs. (2)–(5) of Ref. [@Ducloue:2016shw] to Eqs. (16) and (20) of Ref. [@Chirilli:2012jd]. [^2]: Note that the integral is not divergent. One can show this by expressing the rest of the integrand as a power series in $\fragfrac$, then using $$\int_\tau^1 \fragfrac^n [\ln(1 - \tau/\fragfrac) - (1 - \tau/\fragfrac)]\dd \fragfrac = (1 - \tau)[\ln(1 - \tau) - 1] + \order{(1 - \tau)^2}\;,$$ which is finite and negative and goes to zero as $\tau\to 1$.
--- author: - 'A. J. Muñoz-Arjonilla' - 'J. Martí' - 'J. A. Combi' - 'P. Luque-Escamilla' - 'J. R. Sánchez-Sutil' - 'V. Zabalza' - 'J. M. Paredes' date: 'Received / Accepted' title: 'Candidate counterparts to the soft gamma-ray flare in the direction of [LS I +61 303]{}' --- [A short duration burst reminiscent of a soft gamma-ray repeater/anomalous X-ray pulsar behaviour was detected in the direction of [LS I +61 303]{} by the *Swift* satellite. While the association with this well known gamma-ray binary is likely, a different origin cannot be excluded.]{} [We explore the error box of this unexpected flaring event and establish the radio, near-infrared and X-ray sources in our search for any peculiar alternative counterpart.]{} [We carried out a combined analysis of archive Very Large Array radio data of [LS I +61 303]{} sensitive to both compact and extended emission. We also reanalysed previous near infrared observations with the 3.5 m telescope of the Centro Astronómico Hispano Alemán and X-ray observations with the *Chandra* satellite.]{} [Our deep radio maps of the [LS I +61 303]{} environment represent a significant advancement on previous work and 16 compact radio sources in the [LS I +61 303]{} vicinity are detected. For some detections, we also identify near infrared and X-ray counterparts. Extended emission features in the field are also detected and confirmed. The possible connection of some of these sources with the observed flaring event is considered. Based on these data, we are unable to claim a clear association between the [*Swift*]{}–BAT flare and any of the sources reported here. However, this study represents the most sophisticated attempt to determine possible alternative counterparts other than [LS I +61 303]{}.]{} Introduction ============ [LS I +61 303]{} (V615 Cas) is a gamma-ray binary originally discovered in 1977 in the radio during a survey for variable sources in the Galactic plane ([@gregory-78]; [@gregory-79]). 
The orbital period is about 26.5 d ([@hutchings-81]) and the source has been detected at frequencies ranging from radio ([@taylor-82], 1984) to high energy gamma-rays ([@albert-08]). The physical interpretation of the [LS I +61 303]{} emission across the complete electromagnetic spectrum still remains a matter of debate ([@romero-07]). Two different models have been proposed to explain the full spectral energy distribution: (i) a microquasar binary ([@bosch-ramon-06]), and (ii) a non-accreting pulsar interacting with the envelope of the rapidly rotating Be star ([@dubus-06]). On 2008 September 10, the [*Swift*]{} Burst Alert Telescope (BAT) detected a burst in the direction of [LS I +61 303]{} within its 15–150 keV energy range ([@depas-08]). The location of this soft gamma-ray, short-duration burst event was determined with an error of 2[${\rlap.}^{\prime}$]{}2, and the position of [LS I +61 303]{} was found to be clearly consistent with this event ([@bart-08]). Given this coincidence, the burst was associated with magnetar-like activity linked to a young, highly magnetized pulsar in the binary system ([@dubus-atel-08]). Unusual X-ray activity (a high flux in hard X-rays and QPOs) had also been reported by [@ray-08] just a few weeks before, based on PCA data from the [*RXTE*]{} satellite. If the QPOs originate in [LS I +61 303]{}, an accretion disk would be necessary to explain the nature of this source. However, the PCA field of view of $\sim 1$ degree includes many additional sources that could be responsible for the QPO behaviour. Despite these facts, one cannot exclude the possibility of another, unrelated source being responsible for the observed gamma-ray flare. A population of faint X-ray sources in the vicinity of [LS I +61 303]{} was reported by [@rea-torres-08], who suggested that one of these sources might be the quiescent counterpart of a new transient magnetar.
[@ma-08] also reported additional X-ray and radio sources coincident with the [*Swift*]{}–BAT error circle, some of them with stellar-like counterparts. We present extensive radio, X-ray, and near infrared observations of the [*Swift*]{}–BAT error circle at the location of this 2008 September 10 event. Both archival data and observations conducted by the authors were used in this paper, as described in the log in Table \[table-obs\]. Different populations of sources were detected and their main observational properties are reported. A few interesting objects in the direction of the magnetar-like flare are highlighted, and the possibility that they are alternative candidate counterparts is assessed. The census of radio/X-ray sources reported here represents the most complete study to date of alternative counterpart candidates to the [*Swift*]{}–BAT event.

  ---------- --------------- --------------- --------------------- ---------
             Instrument      Date            Band                  Integ.
  Radio      VLA CnD         08-Jun-1992     C (6 cm)              9.8 h
                             27-Jun-1992     L (20 cm)             3.0 h
                             09-Sep-1993\*   C (6 cm)              7.8 h
                             13-Sep-1993\*   C (6 cm)              7.9 h
  Infrared   CAHA 3.5m       25-Sep-2007     $J$   (1.25 $\mu$m)   905 s
                             25-Sep-2007     $H$  (1.65 $\mu$m)    905 s
                             25-Sep-2007     $K_s$ (2.18 $\mu$m)   1811 s
  X-rays     [*Chandra*]{}   08-Apr-2006     0.5–7.0 keV           49.9 ks
  ---------- --------------- --------------- --------------------- ---------

  : \[table-obs\] Radio, near infrared, and X-ray observations of the [LS I +61 303]{} field used in this paper. (\*) Already used by M98.

Radio sources within the [*Swift*]{}–BAT error circle
=====================================================

The magnetar-like event that renewed interest in studying [LS I +61 303]{} is probably related to a compact object formed as a result of a supernova event. Therefore, both compact and extended radio features could play a role in our understanding of this phenomenon.
The environment of [LS I +61 303]{} in the radio was studied by [@marti-98] (hereafter M98) at 6 cm wavelength using the Very Large Array (VLA) of the National Radio Astronomy Observatory (NRAO), with the array in its CnD configuration providing appropriate sensitivity to both compact and extended sources. We built on the M98 approach, improving its sensitivity by combining additional 6 cm VLA archive data acquired with the same compact array configuration (see Table \[table-obs\]), for a total of 25.5 h of on-source time. The AIPS package of NRAO was used for the standard interferometer data processing and self-calibration. We removed the variable [LS I +61 303]{} core to avoid artifacts and replaced it, for cosmetic reasons, with a constant point-source component with the observed average flux density. We also removed a nearby bright radio source whose presence significantly affected the dynamic range of the maps. The final result is presented in Fig. \[vla6cm\_taper\], where extended emission is enhanced with a slight taper. Compact sources are more clearly evident in the non-tapered radio map of Fig. \[circle\_maps\], with an rms noise of 13 $\mu$Jy beam$^{-1}$, significantly lower than that of M98. The observational properties of these compact radio sources are listed in Table \[table-sources\], where J2000.0 positions are given.

Near infrared observations of the [*Swift*]{}–BAT error circle
==============================================================

The field of [LS I +61 303]{} was observed at near infrared wavelengths with the 3.5 m telescope at the Centro Astronómico Hispano Alemán (CAHA) in Almería (Spain), one year before the [*Swift*]{} event. The OMEGA2000 camera was used and the images were taken through the $J$, $H$ and $K_s$ filters. CAHA observations were processed following the standard procedures for sky background subtraction, flat-fielding and median-combining of individual frames using the IRAF[^1] software package.
Astrometry in the final frames was determined by identifying about twenty stars in the field for which positions were retrieved from the 2MASS catalog. The relevant part of the resulting image is shown in the right panel of Fig. \[circle\_maps\]. Photometric and astrometric data derived from this image are included in Table \[table-sources\] for VLA radio sources with a near infrared counterpart. The errors in the photometric measurements are dominated by the uncertainty in the zero point, which is estimated to be about 0.1 mag in all filters. The existence of a candidate counterpart was assessed based on the statistical parameter $r$ ([@allington-smith-82]), defined as: $$r = \sqrt{\frac{(\Delta \alpha \cos \delta)^2}{\sigma_{\alpha \rm{,IR}}^2 + \sigma_{\alpha \rm{,rad}}^2} + \frac{(\Delta \delta)^2}{\sigma_{\delta \rm{,IR}}^2 + \sigma_{\delta \rm{,rad}}^2}}$$ where $\Delta \alpha$ and $\Delta \delta$ are the differences between the measured near infrared and radio positions, and $\sigma_{\alpha \rm{,IR}}$, $\sigma_{\alpha \rm{,rad}}$, $\sigma_{\delta \rm{,IR}}$ and $\sigma_{\delta \rm{,rad}}$ are the infrared and radio uncertainties in right ascension and declination. The offsets between radio and near infrared positions are also listed in Table \[table-sources\]. A value of $r\leq 3$ is taken to be indicative of astrometric coincidence within the combined radio and near infrared position errors.

  ------------------- ------------------- -------------------------------------------- ------------------- ------------------ ----------------- ----------------- ------ ------ ------ ------- -----
  VLA                 Right Ascension     Declination                                  Peak flux dens.     Total flux dens.   $\Delta\alpha$    $\Delta\delta$    $r$    $J$    $H$    $K_s$   Id.
  \#                  (hms)               ($^{\circ}$ $^{\prime}$ $^{\prime\prime}$)   (mJy beam$^{-1}$)   (mJy)              ($^{\prime\prime}$)   ($^{\prime\prime}$)
  01\*$^{\rm (a)}$    02 39 59.44(0.04)   +61 17 03.3(0.2)    0.87(0.04)   1.00(0.07)   +0.43   +0.00   0.71   20.1   18.8   18.1
  02                  02 40 05.63(0.13)   +61 15 13.5(1.0)    0.23(0.04)   0.41(0.09)   $-$     $-$     $-$    $-$    $-$    $-$
  03\*                02 40 12.22(0.11)   +61 14 22.4(0.7)    0.20(0.04)   0.15(0.05)   -1.01   +0.70   1.17   11.4   10.3   10.0
  04                  02 40 12.77(0.24)   +61 17 25.9(1.4)    0.11(0.04)   0.10(0.06)   -0.94   -0.10   0.27   $-$    18.8   17.7
  05                  02 40 26.32(0.47)   +61 15 43.2(2.4)    0.08(0.04)   0.17(0.10)   +0.50   -0.90   0.38   18.0   17.1   16.0
  06                  02 40 27.34(0.34)   +61 15 18.6(1.9)    0.07(0.04)   0.06(0.06)   +0.43   -0.10   0.10   $-$    $-$    18.2
  07                  02 40 28.13(0.43)   +61 15 50.8(2.6)    0.07(0.04)   0.12(0.09)   +4.47   +3.80   1.62   15.0   14.5   14.3
  08                  02 40 33.33(0.26)   +61 18 05.8(1.9)    0.12(0.04)   0.24(0.10)   $-$     $-$     $-$    $-$    $-$    $-$
  09\*                02 40 33.74(0.20)   +61 12 00.8(1.4)    0.14(0.04)   0.17(0.07)   -0.36   -0.70   0.51   19.2   17.8   16.7
  10\*$^{\rm (a, b)}$ 02 40 44.94(0.05)   +61 16 55.8(0.3)    0.54(0.04)   0.49(0.06)   +0.07   +0.20   0.65   6.4    6.2    6.2
  11$^{\rm (a)}$      02 40 45.32(0.33)   +61 12 42.3(2.0)    0.10(0.04)   0.17(0.09)   $-$     $-$     $-$    $-$    $-$    $-$
  12\*                02 40 52.15(0.06)   +61 16 26.1(0.4)    0.39(0.04)   0.35(0.06)   +0.43   +0.10   0.54   $-$    $-$    18.1
  13\*$^{\rm (a)}$    02 41 07.76(0.06)   +61 14 54.2(0.4)    0.46(0.04)   0.49(0.07)   +0.79   +0.30   1.14   20.5   19.6   18.2
  14\*                02 41 09.36(0.07)   +61 14 34.5(0.5)    0.34(0.04)   0.34(0.07)   $-$     $-$     $-$    $-$    $-$    $-$
  15\*                02 41 09.94(0.03)   +61 09 23.0(0.2)    1.16(0.04)   1.00(0.06)   $-$     $-$     $-$    $-$    $-$    $-$
  16\*                02 41 10.97(0.14)   +61 12 58.1(1.2)    0.18(0.04)   0.23(0.08)   $-$     $-$     $-$    $-$    $-$    $-$
  ------------------- ------------------- -------------------------------------------- ------------------- ------------------ ----------------- ----------------- ------ ------ ------ ------- -----

(\*) Sources already detected by M98.\
$^{\rm (a)}$ Candidate X-ray counterpart found for this source.\
$^{\rm (b)}$ Near infrared magnitudes taken from the 2MASS All-Sky Catalog of point sources.
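The matching statistic of Eq. (1) is straightforward to evaluate. A minimal sketch follows (the offsets and uncertainties below are purely hypothetical, all in arcseconds, chosen only to illustrate the $r \leq 3$ coincidence criterion):

```python
import math

def r_statistic(d_alpha, d_delta, dec_deg,
                sig_a_ir, sig_a_rad, sig_d_ir, sig_d_rad):
    """Astrometric match statistic of Allington-Smith et al. (1982).

    d_alpha, d_delta: IR-minus-radio position offsets (arcsec);
    dec_deg: declination (deg); sig_*: 1-sigma position errors (arcsec).
    """
    cosd = math.cos(math.radians(dec_deg))
    term_ra = (d_alpha * cosd) ** 2 / (sig_a_ir ** 2 + sig_a_rad ** 2)
    term_dec = d_delta ** 2 / (sig_d_ir ** 2 + sig_d_rad ** 2)
    return math.sqrt(term_ra + term_dec)

# Hypothetical offsets/errors: r <= 3 counts as an astrometric coincidence.
r = r_statistic(0.4, 0.3, 61.28, 0.2, 0.3, 0.2, 0.3)
assert 0 < r <= 3
```

Note that the factor $\cos\delta$ converts the right-ascension offset into a great-circle distance before weighting by the combined errors.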
X-ray sources within the [*Swift*]{}–BAT error circle
=====================================================

  --------------- --------------------- -------------------------------------------- ----------------------------------- ------------- -----
  [*Chandra*]{}   Right Ascension       Declination                                  Flux                                Hardness      Id.
  \#              (hms)                 ($^{\circ}$ $^{\prime}$ $^{\prime\prime}$)   ($10^{-6}$ ph cm$^{-2}$ s$^{-1}$)   Ratio
  01              02 39 54.899(0.024)   +61 12 39.80(0.24)   0.66(0.27)    0.82(0.08)
  02              02 39 58.962(0.019)   +61 15 19.46(0.13)   2.44(0.43)    -0.93(0.35)
  03              02 39 59.445(0.016)   +61 17 03.53(0.10)   3.98(0.53)    0.33(0.09)
  04              02 40 01.019(0.024)   +61 16 45.73(0.14)   2.43(0.43)    -0.49(0.28)
  05              02 40 02.592(0.023)   +61 17 28.46(0.16)   2.69(0.48)    0.21(0.15)
  06              02 40 10.328(0.036)   +61 17 11.64(0.10)   1.09(0.31)    -0.48(0.46)
  07              02 40 10.521(0.017)   +61 11 44.04(0.13)   0.77(0.27)    -0.86(0.70)
  08              02 40 10.804(0.014)   +61 12 16.98(0.13)   1.01(0.29)    -0.87(0.57)
  09              02 40 16.604(0.018)   +61 17 37.79(0.18)   1.22(0.34)    -0.44(0.43)
  10              02 40 22.166(0.030)   +61 17 57.30(0.16)   1.09(0.33)    -0.27(0.42)
  11              02 40 22.802(0.018)   +61 08 47.61(0.26)   0.83(0.29)    -1.13(0.80)
  12              02 40 22.821(0.007)   +61 15 30.59(0.05)   2.42(0.40)    0.02(0.17)
  13              02 40 24.507(0.015)   +61 17 20.90(0.13)   1.75(0.36)    0.59(0.09)
  14              02 40 26.729(0.011)   +61 16 20.20(0.13)   1.37(0.33)    -0.54(0.39)
  15              02 40 26.859(0.016)   +61 16 29.57(0.15)   1.36(0.33)    0.28(0.19)
  16              02 40 28.861(0.011)   +61 16 43.71(0.09)   3.06(0.48)    -0.44(0.24)
  17              02 40 29.471(0.011)   +61 10 59.21(0.11)   0.66(0.24)    0.91(0.04)
  18              02 40 36.234(0.017)   +61 10 25.18(0.14)   0.70(0.25)    -0.66(0.65)
  19              02 40 36.754(0.015)   +61 14 26.11(0.14)   0.77(0.30)    0.44(0.24)
  20              02 40 37.526(0.009)   +61 12 54.33(0.19)   0.53(0.22)    -0.25(0.58)
  21              02 40 38.956(0.006)   +61 15 27.01(0.04)   6.22(0.62)    0.24(0.08)
  22              02 40 40.505(0.028)   +61 15 55.76(0.14)   0.48(0.25)    -0.06(0.61)
  23              02 40 44.029(0.019)   +61 16 54.28(0.13)   1.82(0.41)    -0.59(0.38)
  24              02 40 44.233(0.040)   +61 18 16.45(0.19)   0.97(0.33)    0.02(0.37)
  25              02 40 44.384(0.022)   +61 17 28.50(0.09)   4.70(0.58)    -0.45(0.18)
  26              02 40 44.944(0.004)   +61 16 56.10(0.02)   36.61(1.51)   -0.81(0.08)
  27              02 40 45.346(0.019)   +61 12 41.61(0.12)   0.51(0.22)    0.45(0.26)
  28              02 40 47.666(0.013)   +61 16 17.56(0.06)   3.05(0.76)    -0.24(0.33)
  29              02 40 51.254(0.007)   +61 14 28.08(0.05)   4.21(0.52)    -0.61(0.21)
  30              02 40 53.383(0.024)   +61 15 55.31(0.09)   0.88(0.28)    0.43(0.20)
  31              02 40 53.754(0.014)   +61 14 29.60(0.11)   0.61(0.24)    -0.11(0.48)
  32              02 40 59.824(0.032)   +61 09 31.73(0.25)   0.82(0.30)    -0.63(0.63)
  33              02 41 00.953(0.018)   +61 11 14.55(0.16)   2.56(0.51)    -0.15(0.24)
  34              02 41 02.787(0.010)   +61 14 00.39(0.07)   4.14(0.52)    0.58(0.06)
  35              02 41 04.715(0.036)   +61 15 32.62(0.23)   1.85(0.43)    0.08(0.23)
  36              02 41 07.025(0.033)   +61 16 28.63(0.20)   1.28(0.36)    -0.11(0.33)
  37              02 41 07.835(0.021)   +61 14 54.69(0.14)   3.98(0.55)    0.24(0.11)
  38              02 41 08.254(0.027)   +61 08 06.78(0.23)   4.82(0.65)    -0.86(0.26)
  39              02 41 10.223(0.019)   +61 16 08.34(0.13)   5.51(0.63)    0.55(0.05)
  40              02 41 14.310(0.043)   +61 16 30.62(0.21)   1.29(0.36)    0.58(0.12)
  --------------- --------------------- -------------------------------------------- ----------------------------------- ------------- -----

We also used X-ray data from an observation of the [LS I +61 303]{} field obtained with [*Chandra*]{} about two years before the [*Swift*]{} event, using the standard ACIS-I setup in VF mode during a total of 49.9 ks. The data were reduced using the [*Chandra*]{} Interactive Analysis of Observations software package (CIAO v4.0) and CALDB v3.4.5. The analysis of the [LS I +61 303]{} data itself was discussed and its results reported in detail by [@paredes-07]. We used the CIAO tool `wavdetect` to obtain a list of candidate sources with a signal-to-noise ratio above $3\sigma$. Most of the sources were very faint and the counts insufficient for a robust spectral analysis, so we obtained an exposure map of the observation and derived the photon fluxes and hardness ratios of the detected sources.
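As an aside, the hardness-ratio convention used for these sources, $(H-S)/(H+S)$ with $H$ the hard-band and $S$ the soft-band photon flux, can be sketched in a few lines (the flux values below are invented for illustration and are not taken from the table):

```python
def hardness_ratio(flux_hard, flux_soft):
    """Hardness ratio (H - S)/(H + S): -1 for a purely soft source,
    +1 for a purely hard one. Fluxes must share the same units."""
    return (flux_hard - flux_soft) / (flux_hard + flux_soft)

assert hardness_ratio(1.0, 1.0) == 0.0    # equal bands -> 0
assert hardness_ratio(3.0, 1.0) == 0.5    # hard-dominated -> positive
assert -1.0 <= hardness_ratio(0.1, 5.0) <= 0.0  # soft-dominated -> negative
```

Being a flux ratio, this quantity is independent of the exposure and of the overall normalization, which is why it is usable even for sources too faint for spectral fitting.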
In Table \[table-x-rays\], we present the photon fluxes in the full 0.5–7.0 keV band and the hardness ratio, defined as $(H-S)/(H+S)$, where $H$ is the flux in the 2.0–7.0 keV band and $S$ is the flux in the 0.5–2.0 keV band. The X-ray observation completely covers the area observed in radio with the VLA. In this region, we found a total of 40 sources, excluding [LS I +61 303]{} itself, 39 of which were previously unknown.

Discussion and conclusions
==========================

Our deep radio maps in Figs. \[vla6cm\_taper\] and \[circle\_maps\] exhibit 16 compact radio sources in addition to [LS I +61 303]{}, with peak flux densities at least a factor of four above the rms noise. When we restricted ourselves to the [*Swift*]{}–BAT error circle, three compact radio sources (labelled \# 03, 09 and 11 in Table \[table-sources\]) appear to be located within or close to the refined BAT location. We measured their flux densities at 6 cm at the three epochs quoted in Table \[table-obs\] separately, and no evidence of variability was found in any source. The VLA sources \# 03 and 09 are consistent with point-like near infrared counterparts (see Table \[table-sources\]). Source \# 03 coincides with a particularly bright object ($K_s=10.0$) and its observed colours (e.g., $J-K_s \simeq 1.4$) are reminiscent of a giant star of late spectral type, provided that the interstellar extinction is not too high. Source \# 09 appears to be a highly reddened object with $J-K_s \simeq 2.5$ mag, and its stellar or extragalactic nature cannot be established based on our photometry alone. Neither of them is detected in our [*Chandra*]{} observations. On the other hand, using the same criterion as in Eq. (1), VLA source \# 11 is coincident with a [*Chandra*]{} X-ray source (labelled \# 27 in Table \[table-x-rays\]), but no infrared counterpart is detected.
If either the magnetar-like burst behaviour or the X-ray activity presenting QPOs is unrelated to [LS I +61 303]{}, then these radio sources appear to be potential alternative counterparts for these detections. We note that VLA source \# 11 is located at the centre of the extended radio source D ($\alpha = 02^h 40^m 45^s$, $\delta = +61^{\circ} 12{\rlap.}^{\prime}7$) reported by M98, which was proposed to be a possible large-scale lobe powered by [LS I +61 303]{}. In our new map (Fig. \[vla6cm\_taper\]), this object more closely resembles a background radio galaxy with bent bipolar jets. The X-ray detection might then originate in the central core of the radio galaxy. In this interpretation, the connection of this source to a magnetar-like flare/QPOs appears unlikely, since this kind of behaviour is not typical of an extragalactic object. The new VLA maps confirm all the extended radio emission features reported by M98. A possible supernova remnant origin was tentatively considered by these authors, but the low surface brightness and absence of extended X-rays rendered it unlikely. Revisiting this scenario seems timely because the magnetar-like emitter, even if it is not [LS I +61 303]{}, is naturally expected to be a galactic compact object created during a past supernova event. Thus, we attempted to estimate the spectral index of the two brightest extended radio features located west ($\alpha = 02^h 40^m 01^s$, $\delta = +61^{\circ} 13{\rlap.}^{\prime}9$) and south ($\alpha = 02^h 40^m 47^s$, $\delta = +61^{\circ} 09{\rlap.}^{\prime}5$) of [LS I +61 303]{} in Fig. \[vla6cm\_taper\] (sources A and C in M98) by combining our 6 cm map with the 20 cm archive data also quoted in Table \[table-obs\]. In this process, the visibility data were appropriately constrained to match the angular resolutions of the two data sets. The resulting spectral index map is not conclusive enough due to the poor quality of the 20 cm data.
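The two-point spectral index underlying such estimates can be sketched as follows (a minimal illustration with hypothetical flux densities; 4.86 and 1.49 GHz are the approximate VLA frequencies of the 6 cm and 20 cm bands, and the $S_\nu \propto \nu^{\alpha}$ sign convention is assumed):

```python
import math

def spectral_index(s1, s2, nu1_ghz, nu2_ghz):
    """Two-point spectral index alpha in the S_nu ~ nu**alpha convention;
    alpha around -0.7 indicates non-thermal (synchrotron) emission."""
    return math.log(s1 / s2) / math.log(nu1_ghz / nu2_ghz)

# Hypothetical source twice as bright at 20 cm (1.49 GHz) as at
# 6 cm (4.86 GHz): a clearly non-thermal (steep) spectrum.
alpha = spectral_index(1.0, 2.0, 4.86, 1.49)
assert alpha < -0.5
```

Only the flux-density ratio enters, so the absolute calibration scale cancels; the uncertainty in $\alpha$ is instead dominated by the noise in the fainter map.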
Only the brightest parts of the western radio feature have a spectral index error below 0.1, suggesting a possible non-thermal origin ($S_\nu \propto \nu^{-0.7}$). However, [*Chandra*]{} observations reveal no X-ray counterpart, even for this western extended radio feature. Improved long-wavelength radio and X-ray observations are required to constrain more accurately the possibility of a faint supernova remnant associated with [LS I +61 303]{} or any other source in the field. Concerning compact sources, only one source, labelled \# 15, was detected at 20 cm. Its spectral index ($-0.6 \pm 0.2$) may suggest a non-thermal emission mechanism for this source. In conclusion, we have reported a handful of X-ray and radio sources within or close to the improved [*Swift*]{}–BAT location of the magnetar-like and QPO events towards [LS I +61 303]{}. No object displays any peculiar signature that could reveal a possible connection with these unusual phenomena. Although we cannot exclude them being alternative counterpart candidates, the most likely possibility is that [LS I +61 303]{} is actually behind the observed flaring event.

[The authors acknowledge support by grants AYA2007-68034-C03-02 and AYA2007-68034-C03-01 from the Spanish government, and FEDER funds. This work has also been supported by the Plan Andaluz de Investigación of Junta de Andalucía as research group FQM322. The NRAO is a facility of the NSF operated under cooperative agreement by Associated Universities, Inc. This paper is also based on observations collected at the Centro Astronómico Hispano Alemán (CAHA) at Calar Alto, operated jointly by the Max-Planck Institut für Astronomie and the Instituto de Astrofísica de Andalucía (CSIC). This research made use of the SIMBAD database, operated at the CDS, Strasbourg, France. ]{}

Albert, J., Aliu, E., Anderhub, H., et al. 2008, astro-ph 0806.1865\
Allington-Smith, J. R., Perryman, M. A. C., Longair, M. S., Gunn, J. E., & Westphal, J. A. 1982, MNRAS, 201, 331\
Barthelmy, S. D., et al. 2008, GCN 8215\
Bond, H. E., Pollacco, D. L., & Webbink, R. F. 2003, AJ, 125, 260\
Bosch-Ramon, V., Paredes, J. M., Romero, G. E., & Torres, D. F. 2006, A&A, 446, 1081\
De Pasquale, M., et al. 2008, GCN 8209\
Dubus, G. 2006, A&A, 456, 801\
Dubus, G., & Giebels, B. 2008, ATel 1715\
Gregory, P. C., & Taylor, A. R. 1978, Nat, 272, 704\
Gregory, P. C., Taylor, A. R., Crampton, D., et al. 1979, AJ, 84, 1030\
Hutchings, J. B., & Crampton, D. 1981, PASP, 93, 486\
Martí, J., Peracaula, M., Paredes, J. M., Massi, M., & Estalella, R. 1998, A&A, 329, 951 (M98)\
Muñoz-Arjonilla, A. J., et al. 2008, ATel 1740\
Paredes, J. M., Ribó, M., Bosch-Ramon, V., et al. 2007, ApJ, 664, L39\
Ray, P. S., & Hartman, J. M. 2008, ATel 1730\
Rea, N., & Torres, D. F. 2008, ATel 1731\
Romero, G. E., Okazaki, A. T., Orellana, M., & Owocki, S. P. 2007, A&A, 474, 15\
Taylor, A. R., & Gregory, P. C. 1982, ApJ, 255, 210\
Taylor, A. R., & Gregory, P. C. 1984, ApJ, 283, 273

[^1]: $<$http://iraf.noao.edu/$>$
---
abstract: 'We investigate the possibility of bistable lasing in microcavity lasers as opposed to bulk lasers. To that end, the dynamic behavior of a microlaser featuring two coupled, interacting modes is analytically investigated within the framework of a semiclassical laser model, suitable for a wide range of cavity shapes and mode geometries. Closed-form coupled mode equations are obtained for all classes of laser dynamics. We show that bistable operation is possible in all of these classes. In the simplest case (class-A lasers) bistability is shown to result from an interplay between coherent (population-pulsation) and incoherent (hole-burning) processes of mode interaction. We find that microcavities offer better conditions to achieve bistable lasing than bulk cavities, especially if the modes are not degenerate in frequency. This results from better matching of the spatial intensity distribution of microcavity modes. In more complicated laser models (class-B and class-C) bistability is shown to persist for modes even further apart in frequency than in the class-A case. The predictions of the coupled mode theory have been confirmed using numerical finite-difference time-domain calculations.'
author:
- 'Sergei V. Zhukovsky'
- 'Dmitry N. Chigrin'
- Johann Kroha
title: Bistability and mode interaction in microlasers
---

Introduction\[sec:INTRO\]
=========================

In recent years, microlasers have been an object of growing interest in the photonics community because of their remarkable promise in both basic and applied research. Modern technology has facilitated fabrication of high-$Q$ micro- and nanosized cavities (microresonators) in a vast variety of designs (microdisks, -rings, -gears, -toroids, nanowires, nanoposts, and so on [@VahalaReview]). Lasers can be based on many of these set-ups as well as on different materials, e.g., semiconductors, impurity ions, or dye molecules.
In addition, periodic nanostructures (photonic crystals, PhCs) can provide both cavity-based and distributed feedback resonators suitable for laser design [@PBGdefect; @PBGlaser]. The cavity size, which becomes so small as to be comparable to the operating wavelength, is what makes a microlaser physically distinct from conventional (“bulk”) cavities whose size is far larger. The small size limits the number of cavity modes that could take part in lasing, and at the same time greatly increases the influence of the cavity shape on the character of the modes. As a result, the mode structure becomes more complicated and heavily dependent on the specific cavity design. One is no longer able to describe the modes universally in an analytical manner. The variety of laser dynamics becomes much richer, which considerably complicates the study of microlasers but at the same time can harbor interesting new effects. For example, one could look for new possibilities of bistable or multistable lasing [@BabaDisks], which would prove useful in many applications such as multiple-wavelength light sources, optical flip-flop devices or optical memory cells [@NatRings]. In the simplest case when two modes coexist in the same laser cavity (competing for the same saturable gain medium), three lasing regimes are usually considered [@Siegman]. First, when one of the modes has an advantage (e.g., larger $Q$-factor or better coupling to the gain), it simply dominates, becoming the only lasing mode (*single mode lasing*). Second, when the modes are well balanced (i.e., similar $Q$-factors and equally good coupling to the gain), they can both lase simultaneously. Such a coexistence becomes possible because modes with different frequencies and/or spatial field patterns preferentially interact with different gain centers.
As a consequence, spectral and spatial hole burning cause each mode to saturate largely independently, allowing the mode that happens to be weaker to catch up with the stronger one. Each mode saturates itself more readily than it does the other mode; in this sense, the coexisting modes are said to be *weakly coupled* (*simultaneous multimode lasing*). Third, if the reverse is true, i.e., if each mode saturates the other mode before coming to its own saturation (the modes are *strongly coupled*), the weaker mode is quenched by the stronger one before it has any chance to catch up. Whichever mode has an initial advantage wins the competition and becomes the only lasing mode (*bistable multimode lasing*). The system can lase in either mode and is in this sense bistable. Trying to understand the physical origin of strong mode coupling brings about certain problems. It was pointed out from the beginning [@LambLaser] that harmonic modes (such as longitudinal modes in bulk cavities) must always be coupled weakly because the antinodes of the field (the regions where the light-matter interaction is maximized) are spatially mismatched for different modes. Spatial hole burning would work similarly for any two modes with mismatched intensity distribution (such as transverse modes in bulk cavities). One of the ways to circumvent this limitation is to use degenerate modes with identical spatial intensity profiles, e.g., polarization degenerate modes or counterpropagating modes in ring lasers. This can make the lasing bistable due to additional mode coupling through population pulsations [@diRing; @Siegman]. Alternatively, one can place a saturable absorber in addition to the saturable gain medium into the cavity [@mcSatAbsorb; @mcSatAbsorb2; @LambLaser]. Such an absorber can be naturally realized when only a part of the active medium is pumped. 
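The three regimes above are commonly summarized by the cubic two-mode competition model [@Siegman], in which the modal intensities obey $\dot{I}_{j}=I_{j}(a_{j}-b_{j}I_{j}-\theta_{jk}I_{k})$ and the coupling constant $C=\theta_{12}\theta_{21}/b_{1}b_{2}$ separates coexistence ($C<1$) from bistability ($C>1$). A minimal numerical sketch (dimensionless rates; the parameter values are purely illustrative):

```python
def evolve(I1, I2, a=(1.0, 1.0), b=(1.0, 1.0), theta=2.0, dt=1e-2, t_end=200.0):
    """Integrate the cubic two-mode competition model by forward Euler:
    dI_j/dt = I_j * (a_j - b_j*I_j - theta*I_k), symmetric cross-saturation."""
    for _ in range(int(t_end / dt)):
        d1 = I1 * (a[0] - b[0] * I1 - theta * I2)
        d2 = I2 * (a[1] - b[1] * I2 - theta * I1)
        I1, I2 = I1 + dt * d1, I2 + dt * d2
    return I1, I2

# Strong coupling, C = theta**2/(b1*b2) = 4 > 1: the initially stronger
# mode quenches the other one -- the outcome depends on the start.
print(evolve(0.10, 0.01, theta=2.0))   # mode 1 wins
print(evolve(0.01, 0.10, theta=2.0))   # mode 2 wins

# Weak coupling, C = 0.25 < 1: any start relaxes to the same
# simultaneous two-mode state (I1 = I2 = 2/3 for these rates).
print(evolve(0.10, 0.01, theta=0.5))
```

With $C>1$ the winner is decided solely by the initial condition, which is precisely the *bistable multimode lasing* regime; with $C<1$ both modes survive.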
Both principles can be adapted for use in microlasers and are embodied in the form of polarization-bistable and absorptive bistable laser diodes [@diPBLD]. It has also been shown that two coupled lasers can achieve bistability if the output from each laser is directed to the other one and the feedback is reduced to prevent formation of a compound cavity [@mcCoupledAgrawal; @mcCoupledOudar]. Later studies [@mcWieczorekOC; @mcWieczorekPRA] give a detailed account of the stability and mode-locking regimes of coupled bulk lasers based on nonlinear bifurcation analysis of the corresponding rate equations. It is fundamentally problematic to achieve similar behavior in microlasers where the modes share the same cavity. Recent achievements in the design of bistable multimode-interference laser diodes [@diMMI-BLD], though capable of bistable lasing within a cavity of sub-millimeter size, still require saturable absorbers for the device to function properly. In the meantime, recent results show that there are as yet unexplored possibilities for bistable operation of microlasers. We have shown [@zhukPRL] that a cavity based on coupled defects in a PhC exhibits bistability without the need for saturable absorption or similar additional mechanisms. The same idea was seen to work in lasers based on multimode nanopillar waveguides [@zhukPSS]. Similar results have been reported based on coupled microdisk [@BabaDisks] and coupled microring [@NatRings] resonators, the latter proposed for an ultrafast, ultralow-power optical memory cell design. Also, Ref. [@diRingFlipFlop] reports that coupled multiple-feedback ring lasers can be brought to bistability by carefully selecting the feedback times, which may be more feasible in microlasers than the conventional gain-quenching scheme as in [@mcCoupledOudar]. Finally, a time-independent multimode laser theory recently developed by H. 
Türeci and co-workers [@Tureci06; @Tureci07] reports that mode interaction can be very important in highly multimode nanostructure-based systems such as random lasers [@TureciScience]. In view of this, there is a pronounced need to address the question of bistability in microresonators with their specific features such as complex cavity shapes and mathematically complex cavity modes taken into account consistently. Spatial hole burning should also be accounted for rigorously without resorting to averaging approximations, which are usually applied to coupled or semiconductor lasers [@BabaDisks; @mcCoupledAgrawal; @mcWieczorekPRA]. In this paper, we consider the dynamics of two interacting modes in a microresonator-based laser. A semiclassical model based on the Maxwell-Bloch equations is used to model the laser-active medium. Coupled mode equations are derived and analyzed for different classes of laser dynamics. Compared to existing accounts of mode dynamics and coupled lasers [@mcHodges; @mcWieczorekOC; @mcWieczorekPRA; @mcJohnBusch], no specific form is assumed for either the cavity or the mode geometry. The spatial distribution of population inversion is taken into account fully in terms of projections onto the modes’ subspace (see [@zhukFDTD]) for all classes of laser dynamics. The theory developed here can be seen as complementary to the account in Refs. [@Tureci06; @Tureci07] by being able to provide a description of time-dependent laser dynamics. Though they are rather different, both these approaches go beyond the third-order nonlinearity in the description of light-matter interaction. In the simplest case of class-A laser dynamics, the equations suitable for analytical studies have been derived. As already shown earlier for some particular cases (see, e.g., [@mcWieczorekPRA]), we confirm that coherent mode interaction (population pulsations) can result in bistable laser operation. 
We show that bistable lasing becomes increasingly more difficult to achieve as the intermode frequency spacing $\Delta\omega$ increases from zero. However, for microcavity modes with well-matched intensity-gain overlap the bistability window has been shown to be much greater (by up to several orders of magnitude with respect to $\Delta\omega$) than for harmonic bulk-cavity modes. A non-symmetric system, where one of the modes is given an advantage through cavity design, is also investigated. We show that a parameter mismatch favoring one of the modes can be compensated for by an opposing mismatch in another parameter that would favor the other mode. In the more complicated class-B or class-C cases, numerical studies of the obtained coupled mode equations have been carried out. The effects of increasing the pumping rate and/or $\Delta\omega$ beyond the applicability limits of the class-A approximations are studied. Bistable lasing is seen to persist unless $\Delta\omega$ becomes comparable to the width of the gain line. Even then, bistability can be restored by increasing the pumping rate far above threshold. The results obtained for the class-B/C microlaser systems in the framework of the coupled mode theory have been compared with full numerical finite-difference time-domain (FDTD) calculations. At least for the system considered (coupled defects in a 2D photonic crystal as in Ref. [@zhukPRL]), we demonstrate that the predictions of the theory are in good agreement with the results of numerical simulations. The paper is structured as follows. In Section \[sec:EQUATIONS\], we derive the semiclassical coupled two-mode laser equations suitable for a wide range of microcavity modes. Only a few general assumptions about the cavity shape are made and no particular form for the mode geometry is specified. 
The derivation starts from the Maxwell-Bloch equations and is carried out from the more general (class-C) through the intermediate (class-B) to the most restrictive (class-A) laser dynamics. Specific issues pertaining to introducing the dynamics classes in multimode lasers are addressed along the way. The analysis of the equations obtained is then carried out in the reverse order. In Section \[sec:CLASS-A\], we analyze the class-A case, which, with some assumptions, turns out to be closely related to the standard two-mode competition model [@Siegman]. The parameter window of bistable operation is investigated in terms of the spatial and spectral mode properties. In Section \[sec:CLASS-BC\], class-B and class-C equations are numerically investigated, and the main differences with the class-A model as regards bistable lasing operation are discussed. Finally, Section \[sec:SUMMARY\] summarizes the paper. Coupled two-mode laser equations\[sec:EQUATIONS\] ================================================= Semiclassical laser equations and multimode expansion\[sub:EQS-GEN\] -------------------------------------------------------------------- ![(Color online) Schematic illustration of the mode frequencies ($\omega_{1,2}\equiv\omega_{0}\mp\Delta\omega$) with respect to gain ($\delta_{\omega}=\omega_{a}-\omega_{0}$), as used throughout the paper.\[fig:FREQUENCIES\]](fig/fig1){width="30.00000%"} The semiclassical laser equations used in the present paper as a starting point are composed of three parts: (i) the laser rate equations, reduced to the equation for population inversion $W$ of the laser transition; (ii) the equation of motion for the macroscopic polarization density $P$ of the laser medium, obtained in a modified electronic oscillator model, and (iii) the scalar wave equation derived from the Maxwell equations. 
We consider two-dimensional (2D) systems, translationally invariant in the $\hat{\mathbf{z}}$-direction, with TM light polarization, corresponding to a wide range of 2D photonic structures. In this case the electric field is $\mathbf{E}(\mathbf{r})=E_{z}(x,y)\hat{\mathbf{z}}$, allowing us to restrict ourselves to the $z$-component of the field $E(\mathbf{r},t)=E_{z}(x,y,t)$. Applying the slowly varying envelope (SVE) approximation [@Siegman], the Maxwell-Bloch system of equations takes the form [@mcHodges] $$\begin{aligned} \frac{\partial}{\partial t}W(\mathbf{r},t) & = & \gamma_{\parallel}\left[R-W(\mathbf{r},t)\right]+\frac{\mathrm{i}}{4\hbar}\left[E(\mathbf{r},t)P^{*}(\mathbf{r},t)-E^{*}(\mathbf{r},t)P(\mathbf{r},t)\right],\label{eq:SVEA_W}\end{aligned}$$ $$\begin{aligned} \frac{\partial}{\partial t}P(\mathbf{r},t) & = & -\left(\gamma_{\perp}+\mathrm{i}\delta\right)P(\mathbf{r},t)-\frac{\mathrm{i}\mu^{2}}{\hbar}W(\mathbf{r},t)E(\mathbf{r},t),\label{eq:SVEA_P}\end{aligned}$$ $$\begin{aligned} \frac{1}{\epsilon_{0}}\frac{\partial^{2}}{\partial t^{2}}\left(P(\mathbf{r},t)\mathrm{e}^{-\mathrm{i}\omega t}\right) & = & \left[c^{2}\nabla^{2}-\epsilon(\mathbf{r})\frac{\partial^{2}}{\partial t^{2}}-\kappa(\mathbf{r})\frac{\partial}{\partial t}\right]\left(E(\mathbf{r},t)\mathrm{e}^{-\mathrm{i}\omega t}\right).\label{eq:SVEA_E}\end{aligned}$$ Here $W(\mathbf{r},t)$ has the meaning of population inversion, which can vary spatially as opposed to Ref. [@mcWieczorekPRA] where it is assumed to be constant across the whole cavity. Further, $R$ is the external pumping rate, $\mu$ is the dipole matrix element of the atomic laser transition, and the polarization and population inversion decay rates are given by $\gamma_{\perp}$ and $\gamma_{\parallel}$, respectively. We consider a resonant system that features two eigenmodes with decay rates $\kappa_{1,2}$, phenomenologically accounted for by the presence of a loss term $\kappa(\mathbf{r})$ in Eq. . 
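Before carrying out the two-mode expansion, the structure of Eqs. – can be illustrated by a zero-dimensional sketch: a single uniform mode on resonance ($\delta=0$), in scaled units where $\hbar=\mu=\epsilon_{0}\epsilon=1$, the field taken real, and $P=\mathrm{i}p$ with real $p$; the coupling constant $g$ below lumps the prefactor $\omega/2\epsilon_{0}\epsilon$ of the field equation and is purely illustrative:

```python
def maxwell_bloch(R=10.0, kappa=1.0, g_perp=10.0, g_par=1.0, g=1.0,
                  dt=1e-3, t_end=100.0):
    """Forward-Euler integration of the spatially uniform, resonant
    Maxwell-Bloch equations (scaled units, E real, P = i*p)."""
    E, p, W = 1e-3, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        dE = -0.5 * kappa * E - g * p          # cavity loss + coupling to P
        dp = -g_perp * p - W * E               # polarization driven by W*E
        dW = g_par * (R - W) + 0.5 * E * p     # pumping minus stimulated emission
        E, p, W = E + dt * dE, p + dt * dp, W + dt * dW
    return E, W

E, W = maxwell_bloch()
# The inversion clamps at the threshold value W_th = kappa*g_perp/(2*g) = 5,
# and the intensity approaches E**2 = 2*g_perp*g_par*(R - W_th)/W_th = 20.
print(E**2, W)
```

The steady state reproduces the familiar single-mode results (gain clamping and a linear light-pump curve above threshold), which the two-mode theory below generalizes.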
The mode frequencies are $\omega_{1,2}\equiv\omega_{0}\mp\Delta\omega$, and the central frequency $\omega_{0}$ is shifted with respect to the lasing transition frequency $\omega_{a}$ by $\delta_{\omega}=\omega_{a}-\omega_{0}$ with $\Delta\omega,\delta_{\omega}\ll\omega_{0}$, as shown in Fig. \[fig:FREQUENCIES\]. We assume that the eigenmodes of the cold cavity have a spatial structure given by $u_{1,2}(\mathbf{r})$. The electric field $E(\mathbf{r},t)$ is then decomposed into the spatially dependent mode profiles $u_{1,2}(\mathbf{r})$ multiplied by time dependent SVE functions $E_{1,2}(t)$ as $$\begin{aligned}E(\mathbf{r},t)\mathrm{e}^{-\mathrm{i}\omega_{0}t}=u_{1}(\mathbf{r})E_{1}(t)\mathrm{e}^{-\mathrm{i}\omega_{1}t}+u_{2}(\mathbf{r})E_{2}(t)\mathrm{e}^{-\mathrm{i}\omega_{2}t}\\ \equiv\left[u_{1}(\mathbf{r})E_{1}(t)\mathrm{e}^{\phi_{+}}+u_{2}(\mathbf{r})E_{2}(t)\mathrm{e}^{\phi_{-}}\right]\mathrm{e}^{-\mathrm{i}\omega_{0}t}.\end{aligned} \label{eq:decomp_E}$$ Here and further, $\phi_{\pm}\equiv\pm\mathrm{i}\Delta\omega t$. Following the approach in [@mcHodges], we make a similar ansatz for the polarization, introducing the amplitudes $P_{1,2}(t)$ as $$P(\mathbf{r},t)\mathrm{e}^{-\mathrm{i}\omega_{0}t}=\left[u_{1}(\mathbf{r})P_{1}(t)\mathrm{e}^{\phi_{+}}+u_{2}(\mathbf{r})P_{2}(t)\mathrm{e}^{\phi_{-}}\right]\mathrm{e}^{-\mathrm{i}\omega_{0}t}.\label{eq:decomp_P}$$ The applicability of the expansion  needs further justification. Eq.  assumes that polarization $P(\mathbf{r},t)$ and the electric field $E(\mathbf{r},t)$ have similar spatial profiles. This is strictly true only if the field intensity is small enough, e.g., if the pumping rate $R$ is not very large. Otherwise, the polarization gets influenced by the saturation terms that involve the population inversion $W(\mathbf{r},t)$, which itself cannot be spatially decomposed. These saturation terms would modify the spatial profile of $P(\mathbf{r},t)$ outside the scope of Eq. . However, as Eq.  
does not contain any explicit expansion in a series of nonlinearity orders with subsequent series truncation, the constraint on the pumping rate $R$ appears to be much weaker than what is enforced by the usual near-threshold expansion [@mcZehnle; @Haken; @HakenFu], which explicitly retains only third-order nonlinearities in the hole burning interaction. In the extreme (single-mode) case, where Eq.  implies $P(\mathbf{r},t)\propto E(\mathbf{r},t)$ and thus carries the strongest approximation, it can be shown that the coupled mode theory based on Eq.  leads to underestimation of the steady-state laser field intensity $E(R)$. However, the character of the dependence $E(R)$ is preserved for the values of $R$ well outside the range of applicability of the near-threshold expansion (see [@Tureci06]). Moreover, the dynamical behavior of the laser is also correctly predicted by the coupled mode theory employing the expansion  both for one and for two modes (see our earlier work [@zhukFDTD] for a comparison with direct numerical simulations). With that taken into account, in what follows we will use the expansion , remembering that the results may deviate quantitatively and may be subject to further checking as the pumping rate goes far above threshold. Class-C lasers\[sub:EQS-C\] --------------------------- In order to derive the equations for $E_{i}(t)$ and $P_{i}(t)$, one has to eliminate all the spatial dependencies from Eqs. –. We begin by substituting Eqs.  and  into Eq. , assuming that the time dependence of the field envelopes is slow enough so that $\left|\mathrm{d}E_{j}/\mathrm{d}t\right|\ll\omega_{j}\left|E_{j}\right|$. 
The modes $u_{j}(\mathbf{r})$ are assumed to be orthonormal solutions of the homogeneous wave equation $(c^{2}\nabla^{2}-\epsilon(\mathbf{r})\omega_{j}^{2})u_{j}(\mathbf{r})=0$, which means that they obey the orthonormality condition over the cavity $$\int_{C}\mathrm{d}^{3}\mathbf{r}\,\epsilon(\mathbf{r})u_{i}^{*}(\mathbf{r})u_{j}(\mathbf{r})=\delta_{ij}.\label{eq:orthogonal_cavity}$$ As a result, the spatial derivatives in Eq.  can be eliminated. If $\epsilon(\mathbf{r})=\epsilon$ is constant throughout the cavity (the bulk-cavity case [@mcHodges]), the modes in Eq.  decouple rigorously, and one obtains $$\frac{\mathrm{d}}{\mathrm{d}t}E_{j}=-\frac{\kappa_{j}E_{j}}{2}+\mathrm{\frac{i}{2\epsilon_{0}\epsilon}}\omega_{j}P_{j}.\label{eq:Ec}$$ This decoupling remains approximately true if the major part of the modes’ energy is located in a material with the same dielectric constant, as is often the case in microcavities. For details we refer the reader to our earlier work [@zhukFDTD]. A more complicated case of distributed feedback structures would require additional spatial multiscale analysis, e.g., following the approach developed for photonic crystal lasers [@mcJohnBusch]. Eliminating spatial dependencies in Eq.  is simpler and requires substitution of Eqs. 
– with subsequent projection onto the eigenmodes, i.e., integration $\int u_{j}^{*}(\ldots)\mathrm{d}^{3}\mathbf{r}$ over the gain medium: $$\begin{aligned}\frac{\mathrm{d}}{\mathrm{d}t}P_{1}=\: & -\beta_{1}P_{1}-\mathrm{i}\frac{\mu^{2}}{\hbar}\left(E_{1}W_{11}+E_{2}W_{12}\mathrm{e}^{2\phi_{-}}\right),\\ \frac{\mathrm{d}}{\mathrm{d}t}P_{2}=\: & -\beta_{2}P_{2}-\mathrm{i}\frac{\mu^{2}}{\hbar}\left(E_{1}W_{21}\mathrm{e}^{2\phi_{+}}+E_{2}W_{22}\right),\end{aligned} \label{eq:Pc}$$ where $\beta_{1,2}=\left(\gamma_{\perp}+\mathrm{i}\delta_{\omega}\right)\pm\mathrm{i}\Delta\omega$ and $W_{ij}$ are the projections of the population inversion $W(\mathbf{r},t)$ onto the corresponding modes$$W_{ij}(t)\equiv\epsilon\int_{G}\mathrm{d}^{3}\mathbf{r}\, u_{i}^{*}(\mathbf{r})W(\mathbf{r},t)u_{j}(\mathbf{r})\label{eq:W_comps}$$ Analogously, by substituting Eqs. – into Eq.  and applying $\int u_{i}^{*}(\ldots)u_{j}\mathrm{d}^{3}\mathbf{r}$, one can obtain the equations for $W_{ij}$ in the following form: $$\begin{aligned}\frac{\mathrm{d}}{\mathrm{d}t}W_{ij}=\: & \gamma_{\parallel}\left(R_{ij}-W_{ij}\right)\\ -\, & \frac{\mathrm{i}}{4\hbar}\left[E_{1}^{*}\left(\alpha_{ij}^{11}P_{1}+\alpha_{ij}^{12}P_{2}\mathrm{e}^{2\phi_{-}}\right)+E_{2}^{*}\left(\alpha_{ij}^{21}P_{1}\mathrm{e}^{2\phi_{+}}+\alpha_{ij}^{22}P_{2}\right)\right]\\ +\, & \frac{\mathrm{i}}{4\hbar}\left[E_{1}\left(\alpha_{ij}^{11}P_{1}^{*}+\alpha_{ij}^{21}P_{2}^{*}\mathrm{e}^{2\phi_{+}}\right)+E_{2}\left(\alpha_{ij}^{12}P_{1}^{*}\mathrm{e}^{2\phi_{-}}+\alpha_{ij}^{22}P_{2}^{*}\right)\right].\end{aligned} \label{eq:Wc}$$ Here, $R_{ij}$ are related to $R$ in the same way as $W_{ij}$ to $W(\mathbf{r},t)$, via Eq. . The coefficients $\alpha_{ij}^{mn}$ are mode overlap integrals defined as:$$\alpha_{ij}^{mn}\equiv\epsilon\int_{G}\mathrm{d}^{3}\mathbf{r}\, u_{i}^{*}(\mathbf{r})u_{j}(\mathbf{r})u_{m}^{*}(\mathbf{r})u_{n}(\mathbf{r}).\label{eq:alpha_comps}$$ The integration in Eqs.  
and  is performed over the gain medium where $\epsilon(\mathbf{r})=\epsilon$ is assumed to be constant. Apart from that assumption, the shape of the gain region itself can be arbitrary and does not have to be contiguous. The mode geometry can also be arbitrary, unlike in previous reports [@mcHodges; @mcWieczorekOC; @mcWieczorekPRA], as the inter-mode and mode-gain overlaps are accounted for in terms of $\alpha_{ij}^{mn}$ and $W_{ij}$. Note that Eqs.  and  with the definition  do not involve any approximations on the field or pump intensity besides the one associated with the validity of Eq.  as described above. Because of this, the full population inversion $W(\mathbf{r},t)$ cannot be written explicitly in terms of $W_{ij}(t)$ and $u_{1,2}(\mathbf{r})$, in contrast to $E$ and $P$, as in Eqs. –. Also note that the rate equations  for the population inversion explicitly contain oscillatory terms, which originate from the beating in the superposition of the two modes with different frequencies $\omega_{1}$ and $\omega_{2}$. Class-B lasers\[sub:EQS-B\] --------------------------- Equations , , and  govern the dynamics of the two spectrally close, interacting modes without any assumptions on the laser dynamics besides those needed for the SVE approximation. All these equations include a decay term with a characteristic decay rate for all the variables involved. The mode amplitudes $E_{j}$ decay with the rate $\kappa_{j}$ associated with the $Q$-factors of the modes ($Q_{j}=\omega_{j}/\kappa_{j}$). The decay of all the population inversion projections $W_{ij}$ is governed by $\gamma_{\parallel}$. Finally, the decay rates of the polarization amplitudes $P_{j}$ are complex, $\beta_{1,2}=\left(\gamma_{\perp}+\mathrm{i}\delta_{\omega}\right)\pm\mathrm{i}\Delta\omega$. This complexity directly results from the multimode character of the laser under study; in the single-mode case $\beta_{j}=\gamma_{\perp}$. 
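Before exploiting the hierarchy of these decay rates, it is instructive to evaluate the overlap coefficients $\alpha_{ij}^{mn}$ defined above for two illustrative mode families. A minimal sketch (grid quadrature over a hypothetical one-dimensional gain region $[0,1]$ with $\epsilon=1$; the profiles are textbook examples, not those of any particular microcavity):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]

def integrate(f):
    # trapezoidal rule on the uniform grid
    return (f.sum() - 0.5 * (f[0] + f[-1])) * dx

def alpha(ui, uj, um, un):
    """alpha_{ij}^{mn} = eps * integral of u_i^* u_j u_m^* u_n  (eps = 1)."""
    return integrate(np.conj(ui) * uj * np.conj(um) * un)

# Harmonic "bulk-cavity" modes: antinodes spatially mismatched.
u1 = np.sqrt(2.0) * np.sin(np.pi * x)
u2 = np.sqrt(2.0) * np.sin(2.0 * np.pi * x)
a_self = alpha(u1, u1, u1, u1).real    # 3/2
a_cross = alpha(u1, u1, u2, u2).real   # 1    -> cross/self = 2/3

# Intensity-matched modes (e.g. counterpropagating waves): |u1|^2 = |u2|^2.
v1 = np.exp(2j * np.pi * x)
v2 = np.exp(-2j * np.pi * x)
b_self = alpha(v1, v1, v1, v1).real    # 1
b_cross = alpha(v1, v1, v2, v2).real   # 1    -> cross/self = 1

print(a_cross / a_self, b_cross / b_self)
```

The cross-to-self overlap ratio of $2/3$ for the harmonic modes is the classic reason why bulk longitudinal modes are weakly coupled through hole burning alone, whereas intensity-matched profiles give a ratio of $1$.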
In the most general case of laser dynamics there are no restrictions on the decay rates $\kappa_{j}$, $\gamma_{\parallel}$, $\gamma_{\perp}$ (so-called class-C lasers). In reality, however, the decay rates are governed by different physical processes and often belong to different time scales (class-B or class-A lasers, see [@mcWieczorekOC]), which can make the analysis of the laser equations considerably simpler. Class-B lasers are defined by $\gamma_{\perp}\gg\gamma_{\parallel},\kappa_{j}$. In the single-mode case, it would mean that the polarization relaxes and saturates so fast that it can be assumed to have no dynamics of its own, following $E$ and $W$ adiabatically. In the two-mode case, where the polarization dynamics is influenced by the intermode spacing $\Delta\omega$, the introduction of the class-B approximations needs to be approached with greater care. Since the right-hand side of Eqs.  includes oscillatory terms at the frequency $2\Delta\omega$, one can eliminate the polarization only if these oscillations are much slower than the exponential decay due to $\gamma_{\perp}$, i.e., $\gamma_{\perp}\gg\Delta\omega$. Note that this additional condition for class-B lasing, specific for multimode lasers, becomes especially important in microlasers where the small cavity size can place the modes much further apart from each other than in bulk cavities. Under these assumptions, we can now eliminate the polarization adiabatically by assuming $\mathrm{d}P_{j}/\mathrm{d}t\approx0$. Hence, Eqs.  assume the form$$\begin{aligned}P_{1}=\: & -\mathrm{i}\frac{\mu^{2}}{\hbar}\frac{1}{\beta_{1}}\left(E_{1}W_{11}+E_{2}W_{12}\mathrm{e}^{2\phi_{-}}\right),\\ P_{2}=\: & -\mathrm{i}\frac{\mu^{2}}{\hbar}\frac{1}{\beta_{2}}\left(E_{1}W_{21}\mathrm{e}^{2\phi_{+}}+E_{2}W_{22}\right),\end{aligned} \label{eq:Pb}$$ which causes Eqs. 
to be modified as$$\begin{aligned}\frac{\mathrm{d}}{\mathrm{d}t}E_{1}=\: & -\frac{\kappa_{1}}{2}E_{1}+\mathrm{\frac{\mu^{2}}{\hbar}\frac{\omega_{1}}{2\epsilon_{0}\epsilon}\frac{1}{\beta_{1}}}\left(E_{1}W_{11}+E_{2}W_{12}\mathrm{e}^{2\phi_{-}}\right),\\ \frac{\mathrm{d}}{\mathrm{d}t}E_{2}=\: & -\frac{\kappa_{2}}{2}E_{2}+\frac{\mu^{2}}{\hbar}\frac{\omega_{2}}{2\epsilon_{0}\epsilon}\frac{1}{\beta_{2}}\left(E_{1}W_{21}\mathrm{e}^{2\phi_{+}}+E_{2}W_{22}\right).\end{aligned} \label{eq:Eb}$$ Analogously, substituting Eq.  into Eq.  one may obtain the equations for $W_{ij}$. Since the population inversion $W$ is real [\[]{}see Eq. \], it follows from Eq.  that $W_{ji}^{*}=W_{ij}$, and in particular, $W_{jj}^{*}=W_{jj}$. Hence, $$\begin{aligned}\frac{\mathrm{d}}{\mathrm{d}t}W_{ij}=\: & \gamma_{\parallel}\left(R_{ij}-W_{ij}\right)\\ -\, & \frac{\mu^{2}}{4\hbar^{2}}\left|E_{1}\right|^{2}\left[\alpha_{ij}^{11}\left(\frac{1}{\beta_{1}}+\frac{1}{\beta_{1}^{*}}\right)W_{11}+\left(\frac{\alpha_{ij}^{12}}{\beta_{2}}W_{21}+\frac{\alpha_{ij}^{21}}{\beta_{2}^{*}}W_{12}\right)\right]\\ -\, & \frac{\mu^{2}}{4\hbar^{2}}\left|E_{2}\right|^{2}\left[\alpha_{ij}^{22}\left(\frac{1}{\beta_{2}}+\frac{1}{\beta_{2}^{*}}\right)W_{22}+\left(\frac{\alpha_{ij}^{21}}{\beta_{1}}W_{12}+\frac{\alpha_{ij}^{12}}{\beta_{1}^{*}}W_{21}\right)\right]\\ -\, & \frac{\mu^{2}}{4\hbar^{2}}E_{1}^{*}E_{2}\left[\left(\frac{\alpha_{ij}^{11}}{\beta_{1}}+\frac{\alpha_{ij}^{22}}{\beta_{2}^{*}}\right)W_{12}+\alpha_{ij}^{12}\left(\frac{1}{\beta_{2}}W_{22}+\frac{1}{\beta_{1}^{*}}W_{11}\right)\right]\mathrm{e}^{2\phi_{-}}\\ -\, & \frac{\mu^{2}}{4\hbar^{2}}E_{1}E_{2}^{*}\left[\left(\frac{\alpha_{ij}^{22}}{\beta_{2}}+\frac{\alpha_{ij}^{11}}{\beta_{1}^{*}}\right)W_{21}+\alpha_{ij}^{21}\left(\frac{1}{\beta_{1}}W_{11}+\frac{1}{\beta_{2}^{*}}W_{22}\right)\right]\mathrm{e}^{2\phi_{+}}.\end{aligned} \label{eq:Wb}$$ Eqs. – are the governing equations for two-mode class-B lasers. 
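As a first consistency check, these class-B equations can be integrated directly in their simplest special case: degenerate resonant modes ($\Delta\omega=\delta_{\omega}=0$, so that $\beta_{1,2}=\gamma_{\perp}$ and all oscillatory exponents drop out), identical mode intensity profiles (so that the relevant overlap integrals reduce to a single real constant), and real amplitudes. A minimal sketch, in which the constants $G$ and $s$ lump $\mu$, $\gamma_{\perp}$, and the field normalization and carry purely illustrative values:

```python
def class_b(E1, E2, R=1.0, kappa=1.0, g_par=1.0, G=1.0, s=1.0,
            dt=1e-3, t_end=200.0):
    """Class-B two-mode equations for degenerate, resonant, intensity-matched
    modes with real amplitudes. Ws is the symmetric inversion projection
    (W_11 = W_22), Wa the cross projection (W_12) driving population pulsations."""
    Ws, Wa = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        dE1 = -0.5 * kappa * E1 + G * (Ws * E1 + Wa * E2)
        dE2 = -0.5 * kappa * E2 + G * (Wa * E1 + Ws * E2)
        I = E1 * E1 + E2 * E2
        dWs = g_par * (R - Ws) - s * (I * Ws + 2.0 * E1 * E2 * Wa)
        dWa = -g_par * Wa - 2.0 * s * E1 * E2 * Ws - s * I * Wa
        E1, E2 = E1 + dt * dE1, E2 + dt * dE2
        Ws, Wa = Ws + dt * dWs, Wa + dt * dWa
    return E1, E2

print(class_b(0.10, 0.05))   # mode 1 wins
print(class_b(0.05, 0.10))   # mode 2 wins
```

The initially stronger mode suppresses the other one completely: for degenerate, intensity-matched modes the cross-projection channel $W_{a}$ effectively doubles the cross-saturation, making the coupling strong and the operation bistable.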
Further knowledge about the modes in question can allow further simplification. A good example is the case when the modes are orthogonal not only within the whole cavity [\[]{}Eq. \], but also in the gain region, e.g., if most of the cavity or at least the portion of the cavity with maximum mode energy is filled with the pumped gain medium: $$\begin{aligned} \int_{G}\mathrm{d}^{3}\mathbf{r}\, u_{i}^{*}(\mathbf{r})u_{j}(\mathbf{r}) & = & \delta_{ij}.\label{eq:orthogonal_gain}\end{aligned}$$ In this case the overlap integrals with one out-of-place index ($\alpha_{ij}^{ii}$, $\alpha_{ii}^{ji}$, etc.) will be negligible compared to the rest of the overlaps such as $\alpha_{jj}^{jj}$, $\alpha_{jj}^{ii}$, $\alpha_{ij}^{ij}$, or $\alpha_{ji}^{ij}$. This allows us to shorten Eq. , which then assumes different forms for the symmetric projections $W_{jj}$ vs. the anti-symmetric projections $W_{ij\neq i}$:$$\begin{aligned}\frac{\mathrm{d}}{\mathrm{d}t}W_{jj}=\: & \gamma_{\parallel}\left(R_{jj}-W_{jj}\right)-\frac{\mu^{2}}{4\hbar^{2}}\left[\left|E_{1}\right|^{2}\alpha_{jj}^{11}\left(\frac{1}{\beta_{1}}+\frac{1}{\beta_{1}^{*}}\right)W_{11}+\left|E_{2}\right|^{2}\alpha_{jj}^{22}\left(\frac{1}{\beta_{2}}+\frac{1}{\beta_{2}^{*}}\right)W_{22}\right]\\ -\, & \frac{\mu^{2}}{4\hbar^{2}}\left[E_{1}^{*}E_{2}\mathrm{e}^{2\phi_{-}}\left(\frac{\alpha_{jj}^{11}}{\beta_{1}}+\frac{\alpha_{jj}^{22}}{\beta_{2}^{*}}\right)W_{12}+E_{1}E_{2}^{*}\mathrm{e}^{2\phi_{+}}\left(\frac{\alpha_{jj}^{11}}{\beta_{1}^{*}}+\frac{\alpha_{jj}^{22}}{\beta_{2}}\right)W_{21}\right],\end{aligned} \label{eq:Wb_jj}$$ $$\begin{aligned}\frac{\mathrm{d}}{\mathrm{d}t}W_{12}=\: & \gamma_{\parallel}\left(R_{12}-W_{12}\right)-\frac{\mu^{2}}{4\hbar^{2}}\left[\left|E_{1}\right|^{2}\left(\frac{\alpha_{12}^{12}}{\beta_{2}}W_{21}+\frac{\alpha_{12}^{21}}{\beta_{2}^{*}}W_{12}\right)+\left|E_{2}\right|^{2}\left(\frac{\alpha_{12}^{21}}{\beta_{1}}W_{12}+\frac{\alpha_{12}^{12}}{\beta_{1}^{*}}W_{21}\right)\right]\\ -\, & 
\frac{\mu^{2}}{4\hbar^{2}}\left[E_{1}^{*}E_{2}\mathrm{e}^{2\phi_{-}}\alpha_{12}^{12}\left(\frac{1}{\beta_{2}}W_{22}+\frac{1}{\beta_{1}^{*}}W_{11}\right)+E_{1}E_{2}^{*}\mathrm{e}^{2\phi_{+}}\alpha_{12}^{21}\left(\frac{1}{\beta_{2}^{*}}W_{22}+\frac{1}{\beta_{1}}W_{11}\right)\right].\end{aligned} \label{eq:Wb_12}$$ where $R_{12}\ll R_{jj}$ due to the mode orthogonality and, as we remember, $W_{21}=W_{12}^{*}$. Furthermore, if the modes are *intensity-matched*, i.e., assumed to have nearly equal intensity distribution in the gain region so that$$\left|u_{1}(\mathbf{r})\right|^{2}\approx\left|u_{2}(\mathbf{r})\right|^{2},\quad\mathbf{r}\in G,\label{eq:intensities}$$ then it follows from Eq.  that $W_{11}=W_{22}\equiv W_{s}$ and $W_{12}=W_{21}^{*}\equiv W_{a}$, as well as from Eq.  that $\alpha_{jj}^{ii}=\alpha_{ji}^{ij}\equiv\alpha$ is real, while $\alpha_{ij}^{ij}\equiv\alpha'$ can be complex. Hence,$$\begin{aligned}\frac{\mathrm{d}}{\mathrm{d}t}W_{s}=\: & \gamma_{\parallel}\left(R_{s}-W_{s}\right)-\frac{\mu^{2}}{4\hbar^{2}}\alpha\left[\left|E_{1}\right|^{2}\left(\frac{1}{\beta_{1}}+\frac{1}{\beta_{1}^{*}}\right)+\left|E_{2}\right|^{2}\left(\frac{1}{\beta_{2}}+\frac{1}{\beta_{2}^{*}}\right)\right]W_{s}\\ -\, & \frac{\mu^{2}}{4\hbar^{2}}\alpha\left[E_{1}^{*}E_{2}\mathrm{e}^{2\phi_{-}}\left(\frac{1}{\beta_{1}}+\frac{1}{\beta_{2}^{*}}\right)W_{a}+E_{1}E_{2}^{*}\mathrm{e}^{2\phi_{+}}\left(\frac{1}{\beta_{1}^{*}}+\frac{1}{\beta_{2}}\right)W_{a}^{*}\right],\end{aligned} \label{eq:Wb_s}$$ $$\begin{aligned}\frac{\mathrm{d}}{\mathrm{d}t}W_{a}=\: & -\gamma_{\parallel}W_{a}-\frac{\mu^{2}}{4\hbar^{2}}\left[E_{1}^{*}E_{2}\mathrm{e}^{2\phi_{-}}\alpha'\left(\frac{1}{\beta_{2}}+\frac{1}{\beta_{1}^{*}}\right)+E_{1}E_{2}^{*}\mathrm{e}^{2\phi_{+}}\alpha\left(\frac{1}{\beta_{2}^{*}}+\frac{1}{\beta_{1}}\right)\right]W_{s}\\ -\, & 
\frac{\mu^{2}}{4\hbar^{2}}\left[\left|E_{1}\right|^{2}\left(\frac{\alpha'}{\beta_{2}}W_{a}^{*}+\frac{\alpha}{\beta_{2}^{*}}W_{a}\right)+\left|E_{2}\right|^{2}\left(\frac{\alpha}{\beta_{1}}W_{a}+\frac{\alpha'}{\beta_{1}^{*}}W_{a}^{*}\right)\right].\end{aligned} \label{eq:Wb_a}$$ Class-A lasers\[sub:EQS-A\] --------------------------- If one further assumes that $(\gamma_{\perp}\gg)\gamma_{\parallel}\gg\kappa_{j}$ (class-A lasers), the mode amplitudes, which decay at the slow rates $\kappa_{j}$, become the slowest-varying quantities. The population inversion follows the mode amplitudes $E_{j}(t)$ instantaneously and can be eliminated, leaving us with only two equations for the mode amplitudes. In analogy with the class-B approximation, we set the derivatives in Eqs.  to zero, $\mathrm{d}W_{ij}/\mathrm{d}t\approx0$. In this case, Eqs. – become$$\begin{aligned}W_{jj}=\: & R_{jj}-\frac{\mu^{2}}{4\hbar^{2}}\frac{1}{\gamma_{\parallel}}\left[\left|E_{1}\right|^{2}\alpha_{jj}^{11}\left(\frac{1}{\beta_{1}}+\frac{1}{\beta_{1}^{*}}\right)W_{11}+\left|E_{2}\right|^{2}\alpha_{jj}^{22}\left(\frac{1}{\beta_{2}}+\frac{1}{\beta_{2}^{*}}\right)W_{22}\right]\\ -\, & \frac{\mu^{2}}{4\hbar^{2}}\frac{1}{\gamma_{\parallel}}\left[E_{1}^{*}E_{2}\mathrm{e}^{2\phi_{-}}\left(\frac{\alpha_{jj}^{11}}{\beta_{1}}+\frac{\alpha_{jj}^{22}}{\beta_{2}^{*}}\right)W_{12}+E_{1}E_{2}^{*}\mathrm{e}^{2\phi_{+}}\left(\frac{\alpha_{jj}^{11}}{\beta_{1}^{*}}+\frac{\alpha_{jj}^{22}}{\beta_{2}}\right)W_{21}\right],\end{aligned} \label{eq:Wa_jj}$$ $$\begin{aligned}W_{12}=\: & -\frac{\mu^{2}}{4\hbar^{2}}\frac{1}{\gamma_{\parallel}}\left[\left|E_{1}\right|^{2}\left(\frac{\alpha_{12}^{12}}{\beta_{2}}W_{21}+\frac{\alpha_{12}^{21}}{\beta_{2}^{*}}W_{12}\right)+\left|E_{2}\right|^{2}\left(\frac{\alpha_{12}^{21}}{\beta_{1}}W_{12}+\frac{\alpha_{12}^{12}}{\beta_{1}^{*}}W_{21}\right)\right]\\ -\, & 
\frac{\mu^{2}}{4\hbar^{2}}\frac{1}{\gamma_{\parallel}}\left[E_{1}^{*}E_{2}\mathrm{e}^{2\phi_{-}}\alpha_{12}^{12}\left(\frac{1}{\beta_{2}}W_{22}+\frac{1}{\beta_{1}^{*}}W_{11}\right)+E_{1}E_{2}^{*}\mathrm{e}^{2\phi_{+}}\alpha_{12}^{21}\left(\frac{1}{\beta_{2}^{*}}W_{22}+\frac{1}{\beta_{1}}W_{11}\right)\right].\end{aligned} \label{eq:Wa_12}$$ This is a system of linear algebraic equations that can be solved for $W_{ij}$. We are aiming for equations with simple enough structure to be treated analytically, namely, equations for $E_{j}$ with up to cubic-order non-linearity as analyzed, e.g., in [@Siegman]. Hence, we are looking for the solutions in the form $$W_{ij}\equiv W_{ij}^{(0)}+\sum_{m,n}W_{ij}^{(m,n)}E_{m}^{*}E_{n},\label{eq:Wa_ansatz}$$ neglecting terms with higher powers of $E$. Truncating higher-order nonlinearity corresponds physically to the case with low field intensities, i.e., just above the lasing threshold. Hence, at this point the near-threshold expansion is introduced as understood in numerous works [@mcZehnle; @Haken; @HakenFu]. We remark that this expansion is by far a stronger approximation than the one used in assuming the form  for the polarization. Consequently, the class-B and class-C models described in the previous sections are valid for much stronger pumping, while the class-A description that follows is valid for pumping rates only slightly above threshold. Inserted into Eqs. –, Eq.  
yields$$\begin{aligned}W_{jj}\approx\: & R_{jj}-\frac{\mu^{2}}{4\hbar^{2}}\frac{1}{\gamma_{\parallel}}\left[\left|E_{1}\right|^{2}\alpha_{jj}^{11}\left(\frac{1}{\beta_{1}}+\frac{1}{\beta_{1}^{*}}\right)R_{11}+\left|E_{2}\right|^{2}\alpha_{jj}^{22}\left(\frac{1}{\beta_{2}}+\frac{1}{\beta_{2}^{*}}\right)R_{22}\right],\\ W_{12}\approx\: & -\frac{\mu^{2}}{4\hbar^{2}}\frac{1}{\gamma_{\parallel}}\left[E_{1}^{*}E_{2}\mathrm{e}^{2\phi_{-}}\alpha_{12}^{12}\left(\frac{1}{\beta_{2}}R_{22}+\frac{1}{\beta_{1}^{*}}R_{11}\right)+E_{1}E_{2}^{*}\mathrm{e}^{2\phi_{+}}\alpha_{12}^{21}\left(\frac{1}{\beta_{2}^{*}}R_{22}+\frac{1}{\beta_{1}}R_{11}\right)\right].\end{aligned} \label{eq:eqs_Wa_approx}$$ Note that the right-hand side of Eqs. – has terms of the form $E_{m}^{*}E_{n}W_{ij}$. Hence the same result could be obtained by solving the affine equation system $W_{ij}=R_{ij}+\mathbb{L}\cdot W_{ij}$ iteratively as $W_{ij}^{(k)}=R_{ij}+\mathbb{L}\cdot W_{ij}^{(k-1)}$ with $W_{ij}^{(0)}=0$ up to $W_{ij}^{(2)}$, as was done in [@mcHodges; @mcWieczorekPRA; @zhukPRL]. Note that the presence of oscillatory exponents $\mathrm{e}^{2\phi_{\pm}}$ on the right-hand side of Eqs. , induced by beating of the field intensities, dictates that an adiabatic elimination can only be performed safely if $\gamma_{\parallel}\gg\Delta\omega$. Unfortunately, this assumption is quite restrictive and makes the resulting class-A laser equations hardly applicable to any two-mode system beyond the case of spectrally overlapping modes unless the mode $Q$-factors become very high. However, Eq.  suggests that $W_{12}$ should be oscillatory with frequency $2\Delta\omega$. This is indeed the case, as confirmed by numerical solution of class-B or class-C equations. These oscillations (also called population pulsations) are the main reason why the condition $\mathrm{d}W_{12}/\mathrm{d}t\approx0$ is valid only for vanishingly small $\Delta\omega$. 
By accounting for these pulsations explicitly, one can build class-A laser equations applicable over a wider range of $\Delta\omega$. We introduce oscillatory terms $\mathrm{e}^{\pm2\mathrm{i}\Delta\omega t}$ into $W_{12}$:$$W_{12}(t)=W_{21}(t)=\tilde{W}_{a}(t)\mathrm{e}^{2\phi_{+}}+\tilde{W}_{a}^{*}(t)\mathrm{e}^{2\phi_{-}}\label{eq:Wa_12_to_Wa_a}$$ where the envelope function $\tilde{W}_{a}(t)$ is assumed to vary slowly compared to the oscillation at $2\Delta\omega$, i.e., on the same time scale as $W_{jj}(t)$. We can then reformulate the condition for adiabatic elimination of $W_{12}$ in the form $\mathrm{d}\tilde{W}_{a}/\mathrm{d}t\approx0$. The algebraic equation for $\tilde{W}_{a}$ analogous to Eq.  is then$$\begin{aligned}\tilde{W}_{a}=\: & -\frac{\mu^{2}}{4\hbar^{2}}\frac{1}{\gamma_{\parallel}+2\mathrm{i}\Delta\omega}\left[\left|E_{1}\right|^{2}\left(\frac{\alpha_{12}^{12}}{\beta_{2}}+\frac{\alpha_{12}^{21}}{\beta_{2}^{*}}\right)\tilde{W}_{a}+\left|E_{2}\right|^{2}\left(\frac{\alpha_{12}^{21}}{\beta_{1}}+\frac{\alpha_{12}^{12}}{\beta_{1}^{*}}\right)\tilde{W}_{a}+E_{1}E_{2}^{*}\alpha_{12}^{21}\left(\frac{1}{\beta_{2}^{*}}W_{22}+\frac{1}{\beta_{1}}W_{11}\right)\right].\end{aligned} \label{eq:Wa_a}$$ Note that unlike $W_{12}$, $\tilde{W}_{a}$ is explicitly complex due to the substitution $\gamma_{\parallel}\to\gamma_{\parallel}+2\mathrm{i}\Delta\omega$. Also note the disappearance of oscillatory exponents in Eq. , compared to Eq. . Inserting Eq. 
– into  and following the same near-threshold expansion as above, we obtain the final class-A equations $$\begin{aligned}\frac{\mathrm{d}}{\mathrm{d}t}E_{1}\approx\: & \left(\frac{g\omega_{1}}{\beta_{1}}R_{1}-\frac{\kappa_{1}}{2}\right)E_{1}-\frac{g\xi\omega_{1}}{\gamma_{\parallel}}\frac{1}{\beta_{1}}\left[\alpha_{11}R_{1}\mathcal{L}_{11}\left|E_{1}\right|^{2}+\alpha_{12}R_{2}\mathcal{L}_{22}\left|E_{2}\right|^{2}\right]E_{1}\\ & -\frac{g\xi\omega_{1}}{\gamma_{\parallel}+2\mathrm{i}\Delta\omega}\frac{\alpha_{12}}{\beta_{1}}\left(\frac{R_{1}}{\beta_{1}}+\frac{R_{2}}{\beta_{2}^{*}}\right)\left|E_{2}\right|^{2}E_{1}-\frac{g\xi\omega_{1}}{\gamma_{\parallel}-2\mathrm{i}\Delta\omega}\frac{\alpha_{12}}{\beta_{1}}\left(\frac{R_{1}}{\beta_{1}^{*}}+\frac{R_{2}}{\beta_{2}}\right)\left(E_{2}\right)^{2}E_{1}^{*}\mathrm{e}^{4\phi_{-}},\\ \frac{\mathrm{d}}{\mathrm{d}t}E_{2}\approx\: & \left(\frac{g\omega_{2}}{\beta_{2}}R_{2}-\frac{\kappa_{2}}{2}\right)E_{2}-\frac{g\xi\omega_{2}}{\gamma_{\parallel}}\frac{1}{\beta_{2}}\left[\alpha_{22}R_{2}\mathcal{L}_{22}\left|E_{2}\right|^{2}+\alpha_{12}R_{1}\mathcal{L}_{11}\left|E_{1}\right|^{2}\right]E_{2}\\ & -\frac{g\xi\omega_{2}}{\gamma_{\parallel}-2\mathrm{i}\Delta\omega}\frac{\alpha_{12}}{\beta_{2}}\left(\frac{R_{1}}{\beta_{1}^{*}}+\frac{R_{2}}{\beta_{2}}\right)\left|E_{1}\right|^{2}E_{2}-\frac{g\xi\omega_{2}}{\gamma_{\parallel}+2\mathrm{i}\Delta\omega}\frac{\alpha_{12}}{\beta_{2}}\left(\frac{R_{1}}{\beta_{1}}+\frac{R_{2}}{\beta_{2}^{*}}\right)\left(E_{1}\right)^{2}E_{2}^{*}\mathrm{e}^{4\phi_{+}}.\end{aligned} \label{eq:Ea_relaxed}$$ where $g\equiv\mu^{2}/2\epsilon_{0}\epsilon\hbar$, $\xi\equiv\mu^{2}/4\hbar^{2}$, $\alpha_{jj}\equiv\alpha_{jj}^{jj}$, $\alpha_{12}\equiv\alpha_{jj}^{ii}=\alpha_{ij}^{ji}\approx\alpha_{ij}^{ij}$, and $\mathcal{L}_{ij}\equiv\beta_{i}^{-1}+\left(\beta_{j}^{*}\right)^{-1}$. Eqs.  retain their applicability for a wide range of $\Delta\omega$ up to $\Delta\omega\simeq\gamma_{\parallel}$ and beyond. 
The only limitation is the requirement $\gamma_{\perp}\gg\Delta\omega$ needed to obtain the class-B equations. As was the case with the class-C to class-B transition, we see that the multimode case needs to be approached with care, since $\Delta\omega$ represents an additional dynamical parameter (mode beating). It can play a significant part in laser dynamics and render some approximations invalid despite their validity in the single-mode case for the same parameters. Bistability in class-A microlasers\[sec:CLASS-A\] ================================================= Mode competition equations\[sub:CLA-GEN\] ----------------------------------------- Now that the dynamics of a two-mode laser have been reduced to the relatively simple class-A equations , the modes can be analyzed for possible steady-state and stable solutions. Eqs.  resemble the standard two-mode competition equations (see [@Siegman]): $$\begin{aligned}\frac{\mathrm{d}}{\mathrm{d}t}E_{1}=\left(\rho_{1}-\theta_{11}\left|E_{1}\right|^{2}-\theta_{12}\left|E_{2}\right|^{2}\right)E_{1}-\theta'_{12}\left(E_{2}\right)^{2}E_{1}^{*}\mathrm{e}^{4\phi_{-}},\\ \frac{\mathrm{d}}{\mathrm{d}t}E_{2}=\left(\rho_{2}-\theta_{21}\left|E_{1}\right|^{2}-\theta_{22}\left|E_{2}\right|^{2}\right)E_{2}-\theta'_{21}\left(E_{1}\right)^{2}E_{2}^{*}\mathrm{e}^{4\phi_{+}}.\end{aligned} \label{eq:2mode_competition}$$ Here, the $\rho_{j}$ in the linear terms characterize the net unsaturated gain (minus cavity losses) for mode $j$. The coefficients $\theta_{jj}$ and $\theta_{ij\neq i}$ are self- and cross-saturation coefficients, respectively. These terms are identical in form and meaning to those in the widely studied case of [@Siegman]. The last terms, which are specific to Eqs. , also contribute to cross-saturation but contain the phases of the modes, as well as an explicit oscillatory time dependence with frequency $4\Delta\omega$. The expressions for all the coefficients can be obtained directly from Eqs. . Since Eqs. 
include the phase of the modes explicitly, they can be separated into amplitude and phase equations. Substituting $E_{j}(t)=\left|E_{j}(t)\right|\mathrm{e}^{\mathrm{i}\varphi_{j}(t)}$ , one obtains:$$\begin{aligned}\frac{\mathrm{d}}{\mathrm{d}t}\left|E_{1}\right|=\left(\textrm{Re\,}\rho_{1}-\textrm{Re\,}\theta_{11}\left|E_{1}\right|^{2}-\textrm{Re\,}\theta_{12}\left|E_{2}\right|^{2}\right)\left|E_{1}\right|-\textrm{Re}\left(\theta'_{12}\mathrm{e}^{2\mathrm{i}(\varphi_{2}-\varphi_{1})}\mathrm{e}^{4\phi_{-}}\right)\left|E_{2}\right|^{2}\left|E_{1}\right|,\\ \frac{\mathrm{d}}{\mathrm{d}t}\left|E_{2}\right|=\left(\textrm{Re\,}\rho_{2}-\textrm{Re\,}\theta_{21}\left|E_{1}\right|^{2}-\textrm{Re\,}\theta_{22}\left|E_{2}\right|^{2}\right)\left|E_{2}\right|-\textrm{Re}\left(\theta'_{21}\mathrm{e}^{-2\mathrm{i}(\varphi_{2}-\varphi_{1})}\mathrm{e}^{4\phi_{+}}\right)\left|E_{1}\right|^{2}\left|E_{2}\right|,\end{aligned} \label{eq:2mode_amplitudes}$$ $$\begin{aligned}\frac{\mathrm{d}}{\mathrm{d}t}\varphi_{1}=\left(\textrm{Im\,}\rho_{1}-\textrm{Im\,}\theta_{11}\left|E_{1}\right|^{2}-\textrm{Im\,}\theta_{12}\left|E_{2}\right|^{2}\right)-\textrm{Im}\left(\theta'_{12}\mathrm{e}^{2\mathrm{i}(\varphi_{2}-\varphi_{1})}\mathrm{e}^{4\phi_{-}}\right)\left|E_{2}\right|^{2},\\ \frac{\mathrm{d}}{\mathrm{d}t}\varphi_{2}=\left(\textrm{Im\,}\rho_{2}-\textrm{Im\,}\theta_{21}\left|E_{1}\right|^{2}-\textrm{Im\,}\theta_{22}\left|E_{2}\right|^{2}\right)-\textrm{Im}\left(\theta'_{21}\mathrm{e}^{-2\mathrm{i}(\varphi_{2}-\varphi_{1})}\mathrm{e}^{4\phi_{+}}\right)\left|E_{1}\right|^{2}.\end{aligned} \label{eq:2mode_phases}$$ The amplitude equations now completely coincide in form with the usual two-mode competition [@Siegman] but contain the intermode phase difference $\Delta\varphi=\varphi_{2}-\varphi_{1}$ as a parameter and have the cross-saturation coefficients explicitly time-dependent. We can see that the amplitudes always achieve saturation due to a cubic non-linearity. 
The phase difference, however, may either become stationary, corresponding to phase-locked solutions, or be allowed to vary, in which case the solutions are said to be unlocked. In the limiting case of $\Delta\omega=0$ one can show that there are two phase-locked solutions: one stable with $\Delta\varphi=\pi/2$ and one unstable with $\Delta\varphi=0$. Without further assumptions as to the nature of the modes (such as those in some earlier works [@mcZehnle; @mcWieczorekPRA]), the general case is difficult to analyze due to the explicit time dependence of the coefficients for non-zero $\Delta\omega$. In particular, $\Delta\omega>0$ causes $\Delta\varphi$ to undergo precession even in the locked regimes. As this precession becomes faster, one can no longer distinguish between locked and unlocked solutions. For sufficiently large $\Delta\omega$, the oscillations $\mathrm{e}^{\pm4\mathrm{i}\Delta\omega t}$ are fast compared to the onset time scale, which depends primarily on $\kappa$ rather than on $\Delta\omega$. In this case the modes always appear unlocked (mentioned in [@mcZehnle] as a “natural tendency” for different-frequency modes), and the effects of the phase terms can be averaged out. Our numerical estimates show that this is possible if $\Delta\omega>10^{-2}\kappa$. The case $\Delta\omega\ll\kappa$, corresponding to spectrally overlapping modes, is in any case outside the scope of the present paper, as there can be additional channels of mode coupling (e.g., the Petermann excess noise [@Peterman]). Thus, we will henceforth ignore the phase terms in Eqs. – and rewrite Eq. 
as $$\begin{aligned}\frac{\mathrm{d}}{\mathrm{d}t}\left|E_{1}\right|\approx\: & \textrm{Re}\left(\frac{g\omega_{1}}{\beta_{1}}R_{1}-\frac{\kappa_{1}}{2}\right)\left|E_{1}\right|-\frac{g\xi\omega_{1}}{\gamma_{\parallel}}\left[\textrm{Re}\left(\frac{\alpha_{11}}{\beta_{1}}R_{1}\mathcal{L}_{11}\right)\left|E_{1}\right|^{2}+\textrm{Re}\left(\frac{\alpha_{12}}{\beta_{1}}R_{2}\mathcal{L}_{22}\right)\left|E_{2}\right|^{2}\right]\left|E_{1}\right|\\ & -\textrm{Re}\left[\frac{g\xi\omega_{1}}{\gamma_{\parallel}+2\mathrm{i}\Delta\omega}\frac{\alpha_{12}}{\beta_{1}}\left(\frac{R_{1}}{\beta_{1}}+\frac{R_{2}}{\beta_{2}^{*}}\right)\right]\left|E_{2}\right|^{2}\left|E_{1}\right|,\\ \frac{\mathrm{d}}{\mathrm{d}t}\left|E_{2}\right|\approx\: & \textrm{Re}\left(\frac{g\omega_{2}}{\beta_{2}}R_{2}-\frac{\kappa_{2}}{2}\right)\left|E_{2}\right|-\frac{g\xi\omega_{2}}{\gamma_{\parallel}}\left[\textrm{Re}\left(\frac{\alpha_{22}}{\beta_{2}}R_{2}\mathcal{L}_{22}\right)\left|E_{2}\right|^{2}+\textrm{Re}\left(\frac{\alpha_{12}}{\beta_{2}}R_{1}\mathcal{L}_{11}\right)\left|E_{1}\right|^{2}\right]\left|E_{2}\right|\\ & -\textrm{Re}\left[\frac{g\xi\omega_{2}}{\gamma_{\parallel}-2\mathrm{i}\Delta\omega}\frac{\alpha_{12}}{\beta_{2}}\left(\frac{R_{1}}{\beta_{1}^{*}}+\frac{R_{2}}{\beta_{2}}\right)\right]\left|E_{1}\right|^{2}\left|E_{2}\right|.\end{aligned} \label{eq:Ea_final}$$ Conditions for bistable lasing: Mode coupling\[sub:CLA-C\] ---------------------------------------------------------- With the phase terms dropped, Eqs.  represented in the amplitude form analogous to Eqs.  can be analyzed following the standard procedure [@Siegman]. The primary parameter that determines the nature of mode competition is the mode coupling constant $$C=\textrm{Re\,}\theta_{12}\textrm{Re\,}\theta_{21}/\textrm{Re\,}\theta_{11}\textrm{Re\,}\theta_{22},\label{eq:C_definition}$$ which is the ratio of cross-saturation and self-saturation coefficients. 
It is commonly known that the cases of simultaneous two-mode lasing and bistable lasing are characterized by $C<1$ (weak mode coupling) and $C>1$ (strong mode coupling), respectively [@Siegman; @LambLaser]. Assuming that the pumping does not favor either of the modes so that $R_{1}=R_{2}\equiv R$, as well as $\omega_{1}\approx\omega_{2}\equiv\omega\gg\Delta\omega$, we can substitute the explicit form of the coefficients from Eq.  into Eq. . As a result, we have found that $C$ can be factored as $$C=C_{\alpha}C_{\omega}.\label{eq:C_2terms}$$ The first factor $C_{\alpha}$, which originates in spatial hole burning, has the form$$C_{\alpha}=\frac{\alpha_{12}^{2}}{\alpha_{11}\alpha_{22}}.\label{eq:C_alpha}$$ In the simplest case when the modes are intensity matched as in Eq.  so that all $\alpha_{ij}\equiv\alpha$, it follows that $C_{\alpha}=1$. Otherwise, it can be proven that $C_{\alpha}\leq1$. The second factor $C_{\omega}$, which results from population pulsations and becomes identically unity if those pulsations are neglected, has the form$$C_{\omega}\approx\left(\frac{4\Delta\omega^{2}\left(1-\frac{\gamma_{\parallel}}{\gamma_{\perp}}\right)+2\gamma_{\parallel}^{2}}{\left(4\Delta\omega^{2}+\gamma_{\parallel}^{2}\right)}\right)^{2}+O\left(\frac{\delta_{\omega}^{2}}{\gamma_{\perp}^{2}}\right).\label{eq:C_w}$$ ![(Color online) The dependence of $C_{\omega}$ in Eq.  on $\gamma_{\parallel}$ and $\Delta\omega$. The dashed lines are the isolines for $C_{\omega}=1$ and $C_{\omega}=9/4$. The dotted lines approximately mark the applicability limits of class-A equations. \[fig:GRAPH\_C\]](fig/fig2){width="50.00000%"} The dependence of $C_{\omega}$ on $\gamma_{\parallel}$ and $\Delta\omega$ is shown in Fig. \[fig:GRAPH\_C\]. We can see that $C_{\omega}\lesssim4$ for $\Delta\omega\ll\gamma_{\parallel}$ and $C_{\omega}\simeq1$ for $\gamma_{\parallel}<\Delta\omega\ll\gamma_{\perp}$.
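The limiting values of $C_{\omega}$ can be verified directly from the leading-order part of the expression above; a minimal numerical check (the function name and parameter values are ours, chosen only for illustration):

```python
def C_omega(d_omega, g_par, g_perp):
    """Leading-order C_omega from the text; the O(delta^2/gamma_perp^2) term is dropped."""
    num = 4.0 * d_omega**2 * (1.0 - g_par / g_perp) + 2.0 * g_par**2
    den = 4.0 * d_omega**2 + g_par**2
    return (num / den) ** 2

g_par, g_perp = 1.0, 1.0e3   # gamma_parallel << gamma_perp, illustrative units

print(C_omega(0.0, g_par, g_perp))     # spectrally degenerate modes: exactly 4
print(C_omega(50.0, g_par, g_perp))    # g_par << d_omega << g_perp: slightly below 1
print(C_omega((g_par * g_perp) ** 0.5 / 2, g_par, g_perp))  # crossing point: 1
```

Setting the leading-order expression equal to unity gives the crossing at $\Delta\omega=\sqrt{\gamma_{\perp}\gamma_{\parallel}}/2$, consistent with the critical value quoted below.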
The transition between the two limiting cases ($C_{\omega}=1$ and $C_{\omega}=4$) occurs rapidly around $\Delta\omega\simeq\gamma_{\parallel}$. Note that as $\Delta\omega$ increases, $C_{\omega}$ eventually drops below unity, so there is a critical value $\Delta\omega^{(1)}\approx\sqrt{\gamma_{\perp}\gamma_{\parallel}}/2$ at which $C_{\omega}=1$. Hence, in the ideal case of intensity-matched modes [Eq. ] bistability is possible for $\Delta\omega$ all the way up to $\Delta\omega^{(1)}$. The limiting case of $C=4$ is known to be realized for counterpropagating modes in ring lasers or for modes with orthogonal polarizations, which are fully intensity matched and have $\Delta\omega\approx0$ [@Siegman]. If, however, the modes are considerably mismatched, then $C_{\omega}$ must be significantly larger than one to compensate for a small $C_{\alpha}$ and thus keep the overall mode coupling constant above unity to achieve bistable lasing. For example, it can be shown that 1D harmonic (e.g., longitudinal) modes of different frequencies always have $C_{\alpha}=4/9$. This means that the line of critical values $\Delta\omega^{(9/4)}\approx\gamma_{\parallel}/2$, up to which bistability is possible, lies much deeper than the line for $\Delta\omega^{(1)}$ (see Fig. \[fig:GRAPH\_C\]). Taking into account that the frequency shift between longitudinal modes is related to the cavity length as $\Delta\omega_{\textrm{(bulk)}}=\pi c/L$, one easily obtains the “rule of thumb” for the minimum cavity length of a 1D bistable bulk laser: $L_{\textrm{min}}\simeq2\pi c/\gamma_{\parallel}$. For realistic laser media, $L_{\textrm{min}}$ is found to be prohibitively large, ranging from around 2 m for semiconductors up to 200-300 km for Nd:YAG [@Svelto].
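The quoted orders of magnitude for $L_{\textrm{min}}$ follow from the rule of thumb with $\gamma_{\parallel}\sim1/\tau$, where $\tau$ is the upper-level lifetime; the lifetimes used below are typical textbook values, assumed by us rather than taken from the text:

```python
import math

c = 3.0e8  # speed of light, m/s

def L_min(gamma_par):
    """Rule-of-thumb minimum cavity length for 1D bistable bulk lasing."""
    return 2.0 * math.pi * c / gamma_par

print(L_min(1.0 / 1.0e-9))    # semiconductor, tau ~ 1 ns: about 2 m
print(L_min(1.0 / 230.0e-6))  # Nd:YAG, tau ~ 230 us: hundreds of kilometres
```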
This explains why it is so difficult to achieve bistable lasing for different-frequency modes in a bulk cavity: unless the cavity is extraordinarily large, $\Delta\omega$ is large enough to bring $C_{\omega}$ so close to unity that any intensity mismatch causes $C_{\alpha}<1$ and brings the laser back into the weak-coupling (simultaneous lasing) regime. The only notable exception is the case when the modes are quasi-degenerate with $\Delta\omega\approx0$, such as counterpropagating modes in ring lasers or modes with orthogonal polarization, and it is in these special cases that bistability could indeed be observed. In a microcavity, however, the modes can be made very nearly intensity matched by a carefully chosen resonator design (e.g., based on coupled cavities, see [@zhukPRL]). In addition, many designs allow the frequency separation between the modes to be controlled more or less independently of the other model parameters. This opens up a whole new frequency range $\Delta\omega^{(9/4)}<\Delta\omega<\Delta\omega^{(1)}$ available for bistable laser design, which can span several orders of magnitude in $\Delta\omega$ (see Fig. \[fig:GRAPH\_C\]). This range becomes available in microlasers because the modes can be brought to intensity matching far more readily than in bulk cavities, owing to the greater variety of cavity shapes and the more complex nature of the modes involved. Finally, from Eqs.  one can also see the physical mechanism of bistable lasing in the class-A case. It is the (oscillatory) component $W_{12}$ that contributes the additional term in the cross-saturation coefficients $\theta_{ij\neq i}$. Without this addition, $C$ would simply coincide with $C_{\alpha}$, and bistable operation would be excluded altogether. Hence, it is the coherent mode interaction effects such as population pulsations or four-wave mixing [@LambLaser] that make bistability possible.
Incoherent effects (e.g., spatial hole burning, which is manifest only in $C_{\alpha}$) can either allow or suppress it. As a result, it is the interplay between coherent and incoherent mode interaction processes that is exploited to achieve bistable microlaser operation. ![image](fig/fig3a){width="80.00000%"} ![image](fig/fig3b){width="80.00000%"} As an example, we have plotted the dynamics of the mode amplitudes $E_{j}(t)$ as a numerical solution of Eqs.  for bulk-cavity ($C_{\alpha}=4/9$) vs. coupled-cavity ($C_{\alpha}=0.9$) modes (Fig. \[fig:DIAGRAMS\_SYM\]). Also shown are the temporal flow diagrams, i.e., projections of the solutions onto the $\left|E_{1}\right|^{2}\textrm{ vs. }\left|E_{2}\right|^{2}$ plane for different initial conditions of the cavity (the ratio $E_{1:2}^{(0)}\equiv\left|E_{1}(0)\right|:\left|E_{2}(0)\right|$). All other parameters are kept constant, as described in the caption. If the modes are mismatched (Fig. \[fig:DIAGRAMS\_SYM\]a) and $C<1$, the laser settles into two-mode simultaneous lasing ($\left|E_{1}\right|=\left|E_{2}\right|=\textrm{const}$) regardless of the initial conditions. Only this fixed point is stable. However, if the modes are well matched (Fig. \[fig:DIAGRAMS\_SYM\]b) so that $C>1$, the laser settles into single-mode lasing as the initially stronger mode quenches its weaker counterpart and becomes dominant. There are two stable fixed points on the diagram: $\left|E_{1}\right|=\textrm{const},\;\left|E_{2}\right|=0$ and $\left|E_{1}\right|=0,\;\left|E_{2}\right|=\textrm{const}$. The previously stable fixed point becomes unstable, and the line $\left|E_{1}\right|=\left|E_{2}\right|$ marks the separatrix between the stable points’ domains of attraction. The mode with the initial advantage determines which domain of attraction the system starts in, and hence which fixed point it converges to, since the separatrix cannot be crossed without an external influence.
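The two regimes of Fig. \[fig:DIAGRAMS\_SYM\] can be reproduced qualitatively with a generic intensity form of the competition equations, $\mathrm{d}I_{j}/\mathrm{d}t=2\left(\rho-\theta_{jj}I_{j}-\theta_{ij}I_{i}\right)I_{j}$, obtained from the amplitude equations with the phase terms dropped; the coefficient values below are illustrative and not fitted to the figures:

```python
def final_intensities(theta12, I0, rho=1.0, theta11=1.0, t_end=200.0, dt=0.005):
    """Integrate dI_j/dt = 2*(rho - theta_jj*I_j - theta_ij*I_i)*I_j (fixed-step RK4)."""
    def rhs(I):
        I1, I2 = I
        return (2.0 * (rho - theta11 * I1 - theta12 * I2) * I1,
                2.0 * (rho - theta12 * I1 - theta11 * I2) * I2)
    I = list(I0)
    for _ in range(int(t_end / dt)):
        k1 = rhs(I)
        k2 = rhs([I[m] + 0.5 * dt * k1[m] for m in range(2)])
        k3 = rhs([I[m] + 0.5 * dt * k2[m] for m in range(2)])
        k4 = rhs([I[m] + dt * k3[m] for m in range(2)])
        I = [I[m] + dt * (k1[m] + 2 * k2[m] + 2 * k3[m] + k4[m]) / 6 for m in range(2)]
    return I

# Weak coupling, C = theta12^2/theta11^2 = 0.25 < 1: any start ends in two-mode lasing.
print(final_intensities(0.5, [0.02, 0.01]))
print(final_intensities(0.5, [0.01, 0.02]))

# Strong coupling, C = 4 > 1: the initially stronger mode quenches the other.
print(final_intensities(2.0, [0.02, 0.01]))  # mode 1 survives
print(final_intensities(2.0, [0.01, 0.02]))  # mode 2 survives
```

With $C=\theta_{12}\theta_{21}/(\theta_{11}\theta_{22})<1$ both trajectories converge to the same two-mode fixed point, while for $C>1$ the final state depends on the initial ratio, i.e., the system is bistable.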
These examples show that bistable lasing is possible in microlasers in cases where bulk-cavity modes would permit only two-mode simultaneous lasing. Conditions for bistable lasing: Mode mismatch\[sub:CLA-I12\] ------------------------------------------------------------ Up to now, we have assumed that neither of the modes is favored either by the cavity or by the gain, i.e., $\kappa_{1}=\kappa_{2}$, $R_{1}=R_{2}$, $\alpha_{11}=\alpha_{22}$, and $\delta_{\omega}=0$. In this case, as seen in Fig. \[fig:DIAGRAMS\_SYM\], the two-mode lasing fixed point (labeled FP2), whether stable or unstable, is characterized by $\left|E_{1}\right|=\left|E_{2}\right|$. This means, on the one hand, that in the simultaneous-lasing case both modes lase with equal intensity (Fig. \[fig:DIAGRAMS\_SYM\]a), and on the other hand, that in the bistable regime even a slight edge given to either mode in terms of initial conditions will bring that mode to lasing. It is equally easy to “select” or “switch” either mode by locking into it [@zhukPRL; @zhukPSS]. This is illustrated in Fig. \[fig:DIAGRAMS\_SYM\]b by the fact that each stable fixed point has an equally large domain of attraction. ![image](fig/fig4a){width="80.00000%"} ![image](fig/fig4b){width="80.00000%"} In a more general case, the ratio between the mode intensities at FP2, $I_{1:2}\equiv\left|E_{1}\right|^{2}/\left|E_{2}\right|^{2}$, will change to reflect an advantage given to either of the modes. For example, even a slight mismatch in the mode $Q$-factors causes $I_{1:2}$ to deviate from unity (Fig. \[fig:DIAGRAMS\_ASYMM\]). In line with the explanation given above, this may mean two things. In the simultaneous-lasing case, it simply means that once the laser achieves saturation, one mode has a greater amplitude than the other, e.g., $\left|E_{1}\right|>\left|E_{2}\right|$ for $I_{1:2}>1$ (Fig. \[fig:DIAGRAMS\_ASYMM\]a).
In the bistable case, it means that the domains of attraction for the two modes change their size in phase space (Fig. \[fig:DIAGRAMS\_ASYMM\]b). If, for example, $I_{1:2}<1$, then the domain of attraction for Mode 1 becomes larger, so Mode 1 is “in favor” as a result. In the bistable regime, a shifted FP2 means that the mode with a smaller domain of attraction is out of favor and thus harder to bring to lasing. For example, if FP2 is placed symmetrically, initial mode amplitude ratios $E_{1:2}^{(0)}$ of 3:2 and 2:3 bring the first and the second mode to lasing, respectively (Fig. \[fig:DIAGRAMS\_SYM\]b). For an asymmetrically placed FP2, the same two initial conditions both result in lasing of the first mode (Fig. \[fig:DIAGRAMS\_ASYMM\]b). To be able to target the smaller domain, one has to excite the out-of-favor mode exclusively, which might be difficult experimentally. Hence, we will further aim at finding the manifold of system parameters for which $I_{1:2}=1$. Whenever $C\neq1$, the general expression for the mode intensity ratio $I_{1:2}$ at FP2 can be written as [@Siegman]$$I_{1:2}=\frac{\textrm{Re\,}\rho_{1}\textrm{Re\,}\theta_{22}-\textrm{Re\,}\rho_{2}\textrm{Re\,}\theta_{12}}{\textrm{Re\,}\rho_{2}\textrm{Re\,}\theta_{11}-\textrm{Re\,}\rho_{1}\textrm{Re\,}\theta_{21}}.\label{eq:I12_definition}$$ By substituting the coefficients from Eqs.  one can obtain an explicit analytic expression for $I_{1:2}$. Unfortunately, this general expression is very bulky, and we will first investigate its behavior in several simplified cases. Let us introduce the perturbations in the form$$\kappa_{1,2}\equiv\kappa(1\pm\delta_{\kappa}),\quad R_{1,2}\equiv R(1\pm\delta_{\alpha}),\label{eq:a12_k12_perturbation}$$ from which it follows [see Eqs.  and ] that $\alpha_{11,22}=\alpha(1\pm\delta_{\alpha})^{2}$.
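For real-valued coefficients, the expression for $I_{1:2}$ above can be probed directly; the numbers below represent an illustrative perturbation of the net gains $\rho_{j}$ only (our choice, not the paper's full coefficient expressions):

```python
def intensity_ratio(rho1, rho2, th11, th22, th12, th21):
    """Fixed-point intensity ratio I_{1:2} for real-valued coefficients."""
    return (rho1 * th22 - rho2 * th12) / (rho2 * th11 - rho1 * th21)

# Symmetric, unperturbed case: FP2 lies on the diagonal, I_{1:2} = 1.
print(intensity_ratio(1.0, 1.0, 1.0, 1.0, 2.0, 2.0))

# Favouring mode 1 and mode 2 by the same amount gives reciprocal ratios.
Ia = intensity_ratio(1.1, 0.9, 1.0, 1.0, 2.0, 2.0)
Ib = intensity_ratio(0.9, 1.1, 1.0, 1.0, 2.0, 2.0)
print(Ia, Ib, Ia * Ib)  # Ia*Ib = 1
```

The reciprocity under swapping the perturbed gains is the $\delta\to-\delta$ property invoked below.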
Now if $\delta_{\omega}=\delta_{\alpha}=0$, $\delta_{\kappa}\neq0$, $I_{1:2}$ is given by$$I_{1:2}^{(\kappa)}=\frac{a_{\kappa}+b_{\kappa}\delta_{\kappa}}{a_{\kappa}-b_{\kappa}\delta_{\kappa}}.\label{eq:I12_Dkappa}$$ Likewise, if $\delta_{\omega}=\delta_{\kappa}=0$, $\delta_{\alpha}\neq0$, then the expression is somewhat more complicated and reads$$I_{1:2}^{(\alpha)}=\frac{\left(a_{\alpha}+c_{\alpha}\delta_{\alpha}^{2}+e_{\alpha}\delta_{\alpha}^{4}\right)+\left(b_{\alpha}+d_{\alpha}\delta_{\alpha}^{2}\right)\delta_{\alpha}}{\left(a_{\alpha}+c_{\alpha}\delta_{\alpha}^{2}+e_{\alpha}\delta_{\alpha}^{4}\right)-\left(b_{\alpha}+d_{\alpha}\delta_{\alpha}^{2}\right)\delta_{\alpha}}.\label{eq:I12_Dalpha}$$ Finally, if $\delta_{\alpha}=\delta_{\kappa}=0$, $\delta_{\omega}\neq0$, then $$I_{1:2}^{(\omega)}=\frac{\left(a_{\omega}+c_{\omega}\delta_{\omega}^{2}+e_{\omega}\delta_{\omega}^{4}\right)+\left(b_{\omega}+d_{\omega}\delta_{\omega}^{2}+f_{\omega}\delta_{\omega}^{4}\right)\delta_{\omega}}{\left(a_{\omega}+c_{\omega}\delta_{\omega}^{2}+e_{\omega}\delta_{\omega}^{4}\right)-\left(b_{\omega}+d_{\omega}\delta_{\omega}^{2}+f_{\omega}\delta_{\omega}^{4}\right)\delta_{\omega}}.\label{eq:I12_delta}$$ The coefficients in Eqs. – are complicated polynomial functions of the dynamical parameters $\gamma_{\perp}$ and $\gamma_{\parallel}$, the intermode frequency separation $\Delta\omega$, the measure of mode intensity mismatch $\nu\equiv\alpha_{12}/\alpha$ (which ranges from 0 to a maximum value of $1-\delta_{\alpha}^{2}$ so that $C_{\alpha}\leq1$), and the pumping rate normalized to the threshold pumping, $R_{\textrm{thr}}\equiv2Rg\omega/(\gamma_{\perp}\kappa)$ [^1]. Note that $\kappa$ itself does not enter these equations explicitly. It does, however, impose the limitation $\Delta\omega>10^{-2}\kappa$ so that the phase terms in Eqs.  can be averaged out. From the structure of Eqs. 
– one can see that $I_{1:2}=1$ for $\delta_{\alpha}=\delta_{\kappa}=\delta_{\omega}=0$, as should be expected. If any one of the perturbation parameters ($\delta_{\omega,\kappa,\alpha}$, collectively referred to as $\delta$) is non-zero, $I_{1:2}$ deviates from unity. Obviously, changing the sign of all non-zero $\delta$ causes $I_{1:2}\to1/I_{1:2}$. If favoring one of the modes (by any means) results in a certain asymmetry in lasing, quantified through a non-unity $I_{1:2}$, then favoring the other mode in the same way and by the same amount naturally causes the same asymmetry with respect to the other mode [^2]. This suggests that one can choose *more than one* $\delta$ to be non-zero in such a way that the shifts of FP2 caused by the individual perturbations cancel each other out. As a result, $I_{1:2}$ could be made equal or close to unity, and the restrictions on the initial conditions would be lifted. ![(Color online) Manifolds of points $I_{1:2}=1$ in the 3D perturbation space $(\delta_{\omega};\delta_{\kappa};\delta_{\alpha})$ for **(a)** $C=1.10\gtrsim1$, **(b)** $C=2.06\simeq2$, and **(c)** $C=3.84\lesssim4$. The four surfaces correspond to four values of the pumping rate ($R/R_{\textrm{thr}}=1,1.25,1.5,1.75$), as indicated in panel (b). \[fig:I12\_ZERO\]](fig/fig5a "fig:"){width="30.00000%"}![](fig/fig5b "fig:"){width="30.00000%"}![](fig/fig5c "fig:"){width="30.00000%"} Fig. \[fig:I12\_ZERO\] shows the manifold of points $I_{1:2}=1$ in the 3D perturbation space $(\delta_{\omega};\delta_{\kappa};\delta_{\alpha})$ for different parameters as a numerical solution of Eq. . We can see that this manifold is an open surface. Hence, if a mismatch in one respect is unavoidable, it can be compensated for by engineering the other two perturbation parameters. Note that in the $(\delta_{\kappa};\delta_{\alpha})$ plane the mismatch compensation ($I_{1:2}=1$) is achieved when $\delta_{\kappa}\approx\delta_{\alpha}$. This is easily understood if one remembers that the linear terms in Eqs.  have the structure $\rho_{j}\sim\zeta R_{j}-\kappa_{j}=\zeta R(1\pm\delta_{\alpha})-\kappa(1\pm\delta_{\kappa})$. On the other hand, in the $(\delta_{\omega};\delta_{\kappa})$ plane, compensation is generally achieved for oppositely signed $\delta_{\omega}$ and $\delta_{\kappa}$. This agrees with the intuitive guess that, e.g., $\delta_{\kappa}<0$ ($\kappa_{1}<\kappa_{2}$) and $\delta_{\omega}<0$ (the gain frequency $\omega_{a}<\omega_{0}$ is closer to $\omega_{1}$ than to $\omega_{2}$, see Fig. \[fig:FREQUENCIES\]) both give an edge to the first mode, so oppositely signed $\delta$ are needed to maintain balance. However, in the vicinity of the origin the surface can be folded, so that it crosses the origin with the opposite slope and compensation is achieved when $\delta_{\omega}$ and $\delta_{\kappa}$ have the same sign.
Since the perturbations $\delta_{\omega}$, $\delta_{\alpha}$, and $\delta_{\kappa}$ can have different physical origins and can be varied more or less independently by a proper choice of gain medium and cavity configuration, one can deliberately engineer a microlaser to achieve bistable operation even if the idealized, unperturbed case is difficult to realize experimentally. An example of such compensation is changing the mode frequencies with respect to the gain (which can be done straightforwardly just by scaling the cavity) to help offset the difference in mode $Q$-factors, as shown numerically in our earlier work [@zhukPSS]. To achieve a fully symmetric placement of FP2, one needs to bring the three perturbation parameters into a definite relation. Because all these parameters depend only indirectly on the cavity design and/or the choice of gain medium, controlling them precisely may still be a challenging task. Hence, it is worthwhile to investigate to what extent the relations for ideal compensation can be violated while bistable operation remains possible (albeit, as shown above, at the cost of stricter requirements on the initial conditions). In terms of Fig. \[fig:I12\_ZERO\], the question is how far one can deviate from the $I_{1:2}=1$ surface and still lase into either of the modes on demand. From Eqs. – one sees that a sufficiently high value of any $\delta$ will cause either the numerator or the denominator in $I_{1:2}$ to approach zero. On the flow diagram, this corresponds to FP2 meeting one of the coordinate axes. Increasing $\delta$ further causes $I_{1:2}$ to become negative. FP2 vanishes, and the system finds itself in the single-mode lasing regime (see [@Siegman]). That sets an upper limit for any $\left|\delta\right|$ beyond which no bistable lasing is possible. ![(Color online) Boundaries of the FP2 existence domain $I_{1:2}>0$ (dark gray) and the $I_{1:2}=1$ surface lying inside that domain (light green) for $C\lesssim4$ (as in Fig. \[fig:I12\_ZERO\]c): (a) at threshold ($R/R_{\textrm{thr}}=1$), (b) 10% above threshold ($R/R_{\textrm{thr}}=1.1$), and (c) 20% above threshold ($R/R_{\textrm{thr}}=1.2$).\[fig:I12\_DOMAIN\_R\]](fig/fig6a "fig:"){width="30.00000%"}![](fig/fig6b "fig:"){width="30.00000%"}![](fig/fig6c "fig:"){width="30.00000%"} ![(Color online) Same as Fig. \[fig:I12\_DOMAIN\_R\] for constant $R$ at 10% above threshold and (a) $C\gtrsim1$, (b) $C\simeq2$, (c) $C\lesssim4$, corresponding to the cases in Fig. \[fig:I12\_ZERO\]a–c.\[fig:I12\_DOMAIN\_C\]](fig/fig7a "fig:"){width="30.00000%"}![](fig/fig7b "fig:"){width="30.00000%"}![](fig/fig7c "fig:"){width="30.00000%"} More generally, the domain in the space $(\delta_{\omega};\delta_{\kappa};\delta_{\alpha})$ where $I_{1:2}>0$ comprises the perturbation parameter window where both modes can lase (either simultaneously or subject to bistability-induced switching, depending on $C$). This domain, called the FP2 existence domain, is shown in Figs. \[fig:I12\_DOMAIN\_R\]–\[fig:I12\_DOMAIN\_C\]. The existence domain, bounded by the surfaces defined by $I_{1:2}=0$ and $I_{1:2}=\infty$, is seen to surround the “perfect matching” surface $I_{1:2}=1$. The domain boundaries shift as the pumping rate increases (Fig. \[fig:I12\_DOMAIN\_R\]), enlarging the FP2 existence domain around the point $\delta=0$. Also, the domain shrinks rapidly as the boundaries close around $I_{1:2}=1$ when $C$ approaches unity (Fig. \[fig:I12\_DOMAIN\_C\]). The latter can be intuitively understood because $C\approx1$ represents a delicately balanced system, so that even a slight mismatch is enough to throw the system heavily out of balance. Such a property is clearly unfortunate for the microcavity-specific bistability range reported above, which relies on $C_{\omega}$ exceeding unity only slightly. However, increasing the pumping appears to counteract this disadvantage, at least for smaller $\delta_{\omega}$ (see Fig. \[fig:I12\_DOMAIN\_R\]). We believe that it is this effect that enabled us to observe bistability in earlier numerical simulations [@zhukPRL; @zhukPSS] involving a laser operating far above threshold. The practical conclusion of this section is that two theoretical requirements must be met to achieve bistable lasing. In the first place, FP2 needs to exist on the flow diagram, as imposed by $I_{1:2}>0$. In the second place, once FP2 exists, the mode coupling constant must exceed unity ($C>1$), as discussed before. First (Sec. 
\[sub:CLA-C\]), we have shown that, in comparison to bulk-cavity lasers, microlasers exhibit a much wider parameter window characterized by $C>1$, because the microcavity modes can better fulfill the intensity matching condition . Secondly (Sec. \[sub:CLA-I12\]), we have shown that there is an extended domain in the 3D perturbation space $(\delta_{\omega};\delta_{\kappa};\delta_{\alpha})$ where $I_{1:2}>0$. Inside this domain, the closer $I_{1:2}$ is to unity, the easier it is to realize bistability-based laser mode switching experimentally. We have shown that $I_{1:2}$ can be brought close to 1 by choosing a combination of perturbation parameters that compensate the advantage each gives to either mode. Class-B/C microlasers \[sec:CLASS-BC\] ====================================== The elegance of the class-A case considered in the previous section is that Eqs.  can be subjected to analytical investigation based on a comparison with Eqs.  [@Siegman]. Once a laser with more complicated dynamics needs to be examined, larger systems of equations (six equations for class-B, or eight for class-C) must be dealt with. Although attempts at analytical investigation of the class-B equations are known (e.g., a near-threshold expansion of the population inversion as proposed in [@mcZehnle]), only numerical solution seems applicable in the general case, when no specific assumptions on the cavity or mode geometry are implied. Since all the equations are ordinary differential equations, such a numerical solution can be carried out with relative ease – the computational demands are far lower than for a direct numerical integration of the Maxwell-Bloch equations by means of an FDTD-like scheme [@zhukFDTD]. A systematic investigation of class-B/C microlasers would be too lengthy to include in the present paper and will be the subject of a forthcoming publication.
In this section we outline the main differences, as regards bistable lasing, between the behavior of such lasers and the previously studied class-A case. We begin with a comparison of the laser classes in the near-threshold regime. As should be expected, the solutions for all classes coincide fully if the class-A approximation $\gamma_{\perp}\gg\gamma_{\parallel}\gg\kappa$ holds (note that this condition is rather restrictive in microlasers, requiring a careful choice of the gain medium as well as of the cavity design). The mode dynamics $E_{j}(t)$ start to exhibit differences whenever $\gamma_{\parallel}$ or $\kappa$ is increased beyond the class-A approximation. The differences, however, are relatively minor, manifesting themselves mainly in the character of the transition process. In most cases, the mode coupling constant $C$ as defined for class-A in Eqs. – continues to predict the laser dynamics correctly ($C<1$: simultaneous lasing; $C>1$: bistability) even outside its strict range of applicability, although the behavior of $E_{j}(t)$ can be quite different during the transition period. Since, as discussed above, the class-B equations – do not involve a near-threshold approximation, it becomes possible to consider a greater range of pumping rates, including regimes far above threshold, which are often left out of the picture in the construction of a multimode laser model [@mcHodges]. Comparison of the numerical results for the class-B vs. class-C equations shows that as long as the class-B prerequisites $\gamma_{\perp}\gg\gamma_{\parallel},\kappa_{j}$ hold, the results are similar, unless the condition $\gamma_{\perp}\gg\Delta\omega$ is violated. This agrees well with the earlier discussion in Section \[sub:EQS-B\]. The differences are not qualitative but only quantitative, manifesting themselves in the exact shape of the $\left|E_{j}(t)\right|$ dependence; the overall outcome of the mode interaction largely remains the same.
To summarize, the main effect of the class-A to class-B transition in the context of studying bistable lasing is the inclusion of larger pumping rates $R$, while the main effect of the class-B to class-C transition is the inclusion of larger intermode frequency separations $\Delta\omega$. The increase of the pumping rate in a class-B laser is known to change the saturation character of the mode amplitudes. The non-instantaneous relaxation of the population inversion with respect to the cavity field gives rise to spiking (for smaller $R$) or relaxation oscillations (for greater $R$) in the dependence $E_{j}(t)$. A still stronger pumping (several orders of magnitude above threshold) causes the oscillations to vanish, as reported in an earlier work [@zhukFDTD]. More interestingly, an increase of $R$ can restore bistable lasing in cases where simultaneous lasing is observed just above threshold. Fig. \[fig:GRAPH\_C\] suggests that there should be no bistability in the region around $\Delta\omega\simeq\gamma_{\perp}$. The numerical solution of the class-C equations shows that this is indeed the case for smaller $R$. However, if the pumping is increased beyond a certain critical value $R_{c}$, a transition from simultaneous to bistable lasing occurs (Fig. \[fig:CLASS\_C\]). This effect was reported earlier [@zhukFDTD], with the observation that bistability ensues when the pumping becomes so large that the relaxation oscillations disappear. Our further investigations have revealed that this observation was rather a coincidence: $R_{c}$ scales with $\Delta\omega$ (Fig. \[fig:CLASS\_C\]), bifurcating from threshold at approximately the point where $C_{\omega}=1$ according to Eq. . This falls in line with the result of the previous section that stronger pumping is capable of restoring bistability where it has been destroyed by the adverse effects of insufficient mode matching.
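The spiking and relaxation-oscillation behaviour just described can be reproduced with a minimal, dimensionless single-mode class-B rate-equation model. The sketch below is purely illustrative: the normalization (cavity decay rate $\kappa=1$) and all parameter values are our own choice, not the two-mode system of this paper.

```python
# Illustrative dimensionless single-mode class-B laser rate equations,
# with time measured in cavity lifetimes (kappa = 1):
#   dI/dt = (n - 1) * I                  # intensity gain vs. cavity loss
#   dn/dt = gamma * (r - n) - n * I      # slow inversion (class B: gamma << 1)
gamma = 0.05   # inversion relaxation rate, << kappa
r = 2.0        # pumping in units of threshold, R/R_thr

def rhs(I, n):
    return (n - 1.0) * I, gamma * (r - n) - n * I

# Classical RK4 time stepping from a tiny seed intensity.
I, n, dt = 1e-8, 0.0, 0.01
history = []
for step in range(40000):                       # integrate to t = 400
    k1I, k1n = rhs(I, n)
    k2I, k2n = rhs(I + 0.5*dt*k1I, n + 0.5*dt*k1n)
    k3I, k3n = rhs(I + 0.5*dt*k2I, n + 0.5*dt*k2n)
    k4I, k4n = rhs(I + dt*k3I, n + dt*k3n)
    I += dt/6.0 * (k1I + 2*k2I + 2*k3I + k4I)
    n += dt/6.0 * (k1n + 2*k2n + 2*k3n + k4n)
    history.append(I)

I_steady = gamma * (r - 1.0)   # analytical steady state (n* = 1)
print(max(history), I)         # large initial spike, then settling to I_steady
```

Because the inversion relaxes slowly, the intensity first overshoots its steady state in a large spike and then rings down through damped relaxation oscillations, mirroring the behaviour described above.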
![(Color online) The dependence of the lasing regime on pumping rate $R$ and intermode frequency separation $\Delta\omega$ in a class-C laser model. The parameters are $\kappa_{j}\simeq0.1\gamma_{\perp}$ and $\gamma_{\parallel}\simeq10^{-4}\gamma_{\perp}$, as used for numerical simulation in [@zhukPRL]. The density plot shows the quantity $\left|\left|E_{1}(t)\right|-\left|E_{2}(t)\right|\right|/\max\left(\left|E_{1}(t)\right|,\left|E_{2}(t)\right|\right)$ for large $t\gg\gamma_{\perp}^{-1},\gamma_{\parallel}^{-1},\kappa_{j}^{-1}$. Near-zero (light) values indicate two-mode (simultaneous) lasing, while near-unity (dark) values indicate one-mode (bistable) lasing. The lasing threshold, which depends on $\Delta\omega$, is marked with the dotted line. Numerical results of the FDTD simulations for coupled-defect structures (Fig. \[fig:FDTD\_structure\]) are superimposed on the density plot. Circles (red) and squares (yellow) show the location of points where simultaneous and bistable lasing, respectively, were observed in the mode dynamics during simulations. \[fig:CLASS\_C\]](fig/fig8){width="1\columnwidth"} ![(Color online) The family of structures used in numerical FDTD simulations, based on two coupled defects in a 2D photonic crystal lattice [@zhukPRL; @zhukFDTD]. By placing a different number of lattice rows (1–5) between the defects, the intermode frequency separation $\Delta\omega$ can be changed. \[fig:FDTD\_structure\]](fig/fig9){width="0.75\columnwidth"} Because the applicability of the expansion , and sometimes even of the SVEA [@TureciOE], may become questionable far above threshold, we have carried out a comparison of Class-B/C results with direct numerical simulations. As previously described in Ref. [@zhukFDTD], a space-time FDTD solver was coupled to the four-level laser rate equations in order to model the response of a laser medium. A 2D photonic crystal lattice with two coupled defects [@zhukPRL] was used as a model system (Fig. \[fig:FDTD\_structure\]).
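The regime indicator used in the density plot of Fig. \[fig:CLASS\_C\] is simple to evaluate once the late-time mode amplitudes are known; a minimal sketch with made-up amplitude values:

```python
# Late-time regime indicator from the caption of Fig. [fig:CLASS_C]:
#   q = ||E1| - |E2|| / max(|E1|, |E2|)
# q near 0 -> two-mode (simultaneous) lasing; q near 1 -> one-mode (bistable).
def regime_indicator(E1_abs, E2_abs):
    """E1_abs, E2_abs: late-time values of |E_1(t)|, |E_2(t)|."""
    return abs(E1_abs - E2_abs) / max(E1_abs, E2_abs)

print(regime_indicator(1.0, 0.97))   # both modes survive: near 0
print(regime_indicator(1.0, 1e-6))   # weaker mode quenched: near 1
```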
Both defects are filled with the four-level gain medium and contain a dipole source in the centre. By exciting these sources with varying amplitude/phase relations, the two modes (symmetric and antisymmetric [@zhukFDTD]) can be excited in any proportion, and thus the initial state of the resonator can be controlled. By changing the number of lattice rows between the defects from 1 to 5, one can change $\Delta\omega$ from $\sim\gamma_{\perp}$ down to $\sim10^{-2}\gamma_{\perp}$. The waveguides coupled to the defects form the primary channel for the radiation to leak out of the resonator. Care was taken that the mode $Q$-factors remain approximately the same across the whole family of structures. The results of the FDTD simulation runs are superimposed on the phase diagram in Fig. \[fig:CLASS\_C\]. For all values of $\Delta\omega$, the transition between simultaneous and bistable lasing was found approximately at $R_{c}$ as predicted by the analytical theory. For larger $\Delta\omega$ the correspondence is better: smaller $\Delta\omega$ and $R$ require much longer times to reach the steady state, and there is an increased sensitivity to mode mismatch (see Fig. \[fig:I12\_DOMAIN\_R\]), so it becomes more difficult to establish the transition point between simultaneous and bistable lasing with good accuracy. ![image](fig/fig10){width="90.00000%"} ![image](fig/fig11){width="90.00000%"} In Figs. \[fig:FDTD\_2\] and \[fig:FDTD\_4\], the temporal laser dynamics in the numerical simulations and in the Class-C model are compared. We analyze the electric field at the centre of either defect, $\mathbf{r}_{c}$. For FDTD, it is monitored directly by recording the field at the corresponding point in space, $E(\mathbf{r}_{c},t)$. To reduce the excessive amount of data, we sample the field only at the local maxima, so that an envelope over the light oscillations is plotted.
In the case of the coupled mode theory, the same quantity is obtained from the mode amplitudes $E_{1,2}(t)$ using Eq.  as $E_{r}(t)=u_{\text{max}}\left(E_{1}(t)\mathrm{e}^{-\mathrm{i}\omega_{1}t}+E_{2}(t)\mathrm{e}^{-\mathrm{i}\omega_{2}t}\right)$, where $u_{\text{max}}=\max\left[u(\mathbf{r})\right]$, assuming the modes are normalized according to Eq. . Taking the absolute value likewise removes the fast light oscillations, so that the results can be compared with the simulations. In all examples of Figs. \[fig:FDTD\_2\], \[fig:FDTD\_4\] (which correspond to the laser operating far above threshold), the field dynamics shows a good qualitative and quantitative correspondence. Below $R_{c}$, where simultaneous two-mode lasing is expected, the in-cavity field envelope shows the characteristic $2\Delta\omega$ beat oscillations (see the insets in Figs. \[fig:FDTD\_2\]–\[fig:FDTD\_4\]), marking the presence of both modes in the laser radiation. Above $R_{c}$, the steady-state envelope is flat, indicative of single-mode lasing, and the beat oscillations are seen to vanish. This corresponds to quenching of the weaker mode, in agreement with theoretical expectations in the bistable regime. Some quantitative discrepancies between the model and simulation results can be noticed. Some of them (e.g., temporal shifts of the spikes in Fig. \[fig:FDTD\_4\]) result from minor deviations in parameters between the real simulated structure and the idealized two-mode system considered. These deviations can be compensated for by fine-tuning the model [@zhukFDTD]. Other discrepancies, such as the difference in the field amplitudes (both at spike maxima and in the steady state), can be attributed to gain saturation, which may introduce corrections to the form of the expansion  for the gain medium polarization. This is a limitation inherent in the present coupled-mode model. However, Eqs.
– and – are clearly seen to provide a valid description of the laser mode dynamics for relatively strong pumping, unlike the near-threshold (third-order nonlinearity) theories, which are reported to fail badly in this regime (see [@Tureci06]), just as the Class-A equations would. One can overcome this limitation, e.g., by following the approach of Refs. [@Tureci06; @Tureci07], where a generalization of Eqs. – is introduced; very good agreement with numerics was recently reported [@TureciOE]. However, only the time-independent (steady-state) theory has been formulated so far. The knowledge that stronger pumping can restore a laser to the bistable regime for higher $\Delta\omega$ is important in the design of a laser whose wavelength can be switched by a large value (such as several tens of nanometers in Refs. [@zhukPRL; @zhukPSS]). A rigorous explanation of this result is yet to be given. Intuitively, stronger pumping rates cause shorter lasing onset times compared to the cavity round-trip time, so the domination of the stronger mode can occur before the modes have a chance to balance themselves through the cavity. Indeed, it can be noticed that the transition from simultaneous to bistable lasing around $R_{c}$ is accompanied by the disappearance of $4\Delta\omega$-pulsations in the phase of some dynamical variables. This suggests that the shorter onset due to stronger pumping allows some of the variables to become phase-locked, which in turn influences the whole character of the mode interaction (as was seen when the transition from Eq.  to Eq.  was discussed). A detailed investigation of this effect is a subject for further studies.
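The beat-oscillation diagnostic used above can be mimicked directly from the envelope reconstruction $E_{r}(t)$: with two surviving modes the envelope $\left|E_{1}\mathrm{e}^{-\mathrm{i}\omega_{1}t}+E_{2}\mathrm{e}^{-\mathrm{i}\omega_{2}t}\right|$ beats with period $2\pi/\left|\omega_{1}-\omega_{2}\right|$, while a quenched mode leaves a flat envelope. A numpy sketch with illustrative frequencies and constant steady-state amplitudes of our own choosing:

```python
import numpy as np

# Envelope of the reconstructed in-cavity field for constant mode amplitudes,
#   E_r(t) = E1*exp(-i*w1*t) + E2*exp(-i*w2*t)   (u_max set to 1).
w1, w2 = 10.0, 9.0                       # illustrative mode frequencies
t = np.linspace(0.0, 50.0, 200001)

def envelope(E1, E2):
    return np.abs(E1*np.exp(-1j*w1*t) + E2*np.exp(-1j*w2*t))

# Two-mode lasing: the envelope beats with period 2*pi/|w1 - w2|.
env = envelope(1.0, 0.6)
peaks = np.where((env[1:-1] > env[:-2]) & (env[1:-1] > env[2:]))[0] + 1
beat_period = np.mean(np.diff(t[peaks]))
print(beat_period, 2*np.pi/abs(w1 - w2))   # the two values agree

# Bistable regime (weaker mode quenched): flat envelope, no beats.
flat = envelope(1.0, 0.0)
print(flat.max() - flat.min())             # essentially zero
```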
Conclusions and outlook\[sec:SUMMARY\] ====================================== In this work, we have addressed the problem of bistability in a microlaser by systematically formulating the coupled-mode model without prior assumptions on the mode or cavity geometry, other than the requirement of mode orthogonality in the cavity as well as in the gain region, as described by Eqs.  and . The governing equations have been derived for all laser classes, Eqs. –, –, and for class-C, B, and A, respectively. The issue of classifying the laser dynamics in the multimode case has been revisited, taking into account the intermode frequency separation $\Delta\omega$ as a new parameter influencing the laser dynamics. The model has been derived for the case of two modes; however, its extension to the case of several modes can be performed along the same lines. The simplest case of the class-A laser equations has been investigated analytically. It has been shown that coherent mode interaction processes (population pulsations) can provide an additional mode coupling channel besides the incoherent mode interaction (spatial hole burning). This additional coupling is what brings the laser into the bistable regime, allowing the lasing mode to be chosen on demand through the initial condition of the cavity. This result agrees with the early theoretical predictions [@Siegman; @LambLaser]. However, microcavity modes can have a far better matched intensity distribution inside the gain region, see Eq. , compared to bulk-cavity modes, which are usually heavily mismatched unless $\Delta\omega=0$. As such, only a moderate amount of pulsation-induced mode coupling is enough to enter the bistability regime in the case of a microlaser. This means that microlasers can be bistable in a far greater parameter range than bulk-cavity lasers, e.g., for much larger $\Delta\omega$ (Fig. \[fig:GRAPH\_C\]).
We have also shown that a sizable mismatch in the system parameters favoring one of the modes can destroy any chance of bistable operation. However, a mismatch with respect to one parameter can be compensated for by a mismatch with respect to another (Fig. \[fig:I12\_ZERO\]). Again, owing to the better matched intensity distributions of microcavity modes, the bistable regime is more tolerant to such perturbations (Fig. \[fig:I12\_DOMAIN\_C\]). In the more general class-B and class-C laser models, we have shown numerically that even when $\Delta\omega$ is too large to allow bistability in the near-threshold class-A case, this can be overcome by increasing the pumping rate (Fig. \[fig:CLASS\_C\]). The results of the theory are confirmed by direct numerical FDTD simulations and are shown to remain qualitatively valid for pumping rates several orders of magnitude above the lasing threshold. Further results on bistability in class-B/C microlasers will be presented in a forthcoming publication. Bistable operation of a multimode microlaser can be useful in many respects. Since there is no need for an external (and potentially slow) cavity-tuning process, ultrafast all-optical mode switching mechanisms can be envisaged. Such switching, occurring across $\simeq20$ nm on a picosecond time scale, has indeed been demonstrated numerically in our earlier work [@zhukPRL]. The fast switching between stable states and the relatively low power of microlasers can be exploited in the design of an optical memory (flip-flop) cell. We believe that a compact microlaser capable of multiple-wavelength operation over a wide wavelength range can find numerous applications in integrated optics and optical communication. The authors would like to thank C. Kremers for his assistance and advice on numerical simulation, as well as A. V. Lavrinenko for stimulating discussions. Financial support from the Deutsche Forschungsgemeinschaft (DFG FOR 557) is gratefully acknowledged. [20]{} K. J.
Vahala, Nature **424**, 839 (2003). O. Painter, R. K. Lee, A. Scherer, A. Yariv, D. O’Brien, P. D. Dapkus, and I. Kim, Science **284**, 1819 (1999). M. Imada, A. Chutinan, S. Noda, and M. Mochizuki, Phys. Rev. B **65**, 195306 (2002). S. Ishii and T. Baba, Appl. Phys. Lett. **87**, 181102 (2005). M. Hill, H. Dorren, T. de Vries, X. Leijtens, J. Hendrik den Besten, B. Smalbrugge, Y.-S. Oei, H. Binsma, G.-D. Khoe, and M. Smit, Nature **432**, 206 (2004). A. Siegman, *Lasers* (University Science Books, Mill Valley, CA, 1986), Ch. 25.4. M. Sargent III, M. O. Scully, and W. E. Lamb, Jr., *Laser Physics* (Addison-Wesley, Reading, MA, 1974), Ch. 9. M. Sorel, P. J. R. Laybourn, A. Scirè, S. Balle, G. Giulliani, R. Miglierina, and S. Donati, Opt. Lett. **27**, 1992 (2002). C. L. Tang, A. Schremer, and T. Fujita, Appl. Phys. Lett. **51**, 1392 (1987). C.-F. Lin and P.-C. Ku, IEEE J. Quant. Electron. **32**, 1377 (1996). H. Kawaguchi, IEEE J. Sel. Top. Quant. Electron. **3**, 1254 (1997). G. P. Agrawal and N. K. Dutta, J. Appl. Phys. **56**, 664 (1984). R. Kuszelewicz and J. L. Oudar, IEEE J. Quant. Electron. **QE-23**, 411 (1987). S. W. Wieczorek and W. W. Chow, Phys. Rev. A **69**, 033811 (2004). S. W. Wieczorek and W. W. Chow, Opt. Commun. **246**, 471 (2004). M. Takenaka, K. Takeda, Y. Kanema, Y. Nakano, M. Raburn, and T. Miyahara, Opt. Express **14**, 10785 (2006). S. V. Zhukovsky, D. N. Chigrin, A. V. Lavrinenko, and J. Kroha, Phys. Rev. Lett. **99**, 073902 (2007). S. V. Zhukovsky, D. N. Chigrin, A. V. Lavrinenko, and J. Kroha, Phys. Stat. Sol. (b) **244**, 1211 (2007). S. Zhang, D. Lenstra, Y. Liu, H. Ju, Z. Li, G. D. Khoe, and H. J. S. Dorren, Opt. Commun. **210**, 85 (2007). H. E. Türeci, A. Douglas Stone, and B. Collier, Phys. Rev. A **74**, 043822 (2006). H. E. Türeci, A. Douglas Stone, and Li Ge, Phys. Rev. A **76**, 013813 (2007). H. E. Türeci, Li Ge, S. Rotter, and A. Douglas Stone, Science **320**, 643 (2008). S. E. Hodges, M. Munroe, J.
Cooper, and M. G. Raymer, J. Opt. Soc. Am. B **14**, 191 (1997). L. Florescu, K. Busch, and S. John, J. Opt. Soc. Am. B **19**, 2215 (2002). S. V. Zhukovsky and D. N. Chigrin, Phys. Stat. Sol. (b) **244**, 3515 (2007). V. Zehnlé, Phys. Rev. A **57**, 629 (1998). H. Haken and H. Sauermann, Z. Phys. **173**, 261 (1963); H. Fu and H. Haken, Phys. Rev. A **43**, 2446 (1991); A. E. Siegman, Phys. Rev. A **39**, 1253 (1989). O. Svelto, *Principles of Lasers* (Plenum Press, New York, 1989), Ch. 6. Li Ge, R. J. Tandy, A. Douglas Stone, and H. E. Türeci, Opt. Express **16**, 16895 (2008). [^1]: Note that, from the way the class-A equations were constructed, $R/R_{\textrm{thr}}$ cannot exceed one significantly. Numerical analysis shows that the mode dynamics no longer change if $R/R_{\textrm{thr}}$ is increased beyond 10, which is roughly where the near-threshold iterative expansion that yields the solution in the form of Eq.  ceases to be applicable. [^2]: There is no simple way to tell whether the coefficients in Eqs. – are positive or negative for a given set of parameters. For instance, whenever the coefficients $a_{\kappa}$ or $b_{\kappa}$ change sign in Eq. , a similar $\delta_{\kappa}$ will cause an opposite shift in $I_{1:2}$.
--- abstract: 'Numerical simulations of nonhydrostatic atmospheric flow, based on linearly decoupled semi-implicit or fully-implicit techniques, usually solve linear systems by a preconditioned Krylov method without preserving the skew-symmetry of convective operators. We propose to perform atmospheric simulations in a fully-implicit manner such that the difference operators preserve both the skew-symmetry and the tight nonlinear coupling of the differential operators. We demonstrate that a symmetry-preserving Jacobian-free Newton-Krylov (JFNK) method mimics the balance between convective transport and turbulence dissipation. We present a wavelet method as an effective symmetry-preserving discretization technique. The symmetry-preserving JFNK method for solving the equations of nonhydrostatic atmospheric flows has been examined using two benchmark simulations of penetrative convection – a) dry thermals rising in neutrally stratified and stably stratified environments, and b) urban heat island circulations for surface heat fluxes $H_0$ varying in the range $25 \le H_0 \le 930$ W m$^{-2}$. The results show that an eddy viscosity model provides the necessary dissipation of the subgrid-scale modes, while the symmetry-preserving JFNK method conserves mass and energy at a satisfactory level. Comparisons with a laboratory experiment of heat island circulation and a field measurement of potential temperature also support the accuracy of the present symmetry-preserving JFNK framework.' address: - 'Department of Mathematics and Statistics, Memorial University, Elizabeth Ave, NL A1C 5S7, Canada' - 'Department of Mathematics and Statistics, Simon Fraser University, 8888 University Dr, Burnaby, BC V5A 1S6, Canada' author: - 'M.
Alamgir Hossain' - Jahrul M Alam bibliography: - 'alamj2020.bib' title: 'Assessment of a symmetry-preserving JFNK method for atmospheric convection' --- JFNK; skew-symmetry; atmospheric convection; physics-based preconditioning; Introduction ============ Even with the increasing power of computers and advances in numerical methods, it is a challenging endeavour to resolve the important physics of convective motion and the cascade of turbulence kinetic energy (TKE) in atmospheric simulations. The large-scale physics cannot reach a near equilibrium of the interplay between convective transport and diffusive dissipation, and we have to cope with the formidable problem of the subgrid-scale parameterization of convective processes [@Ferziger97; @Pielke2002; @Tannehill2013]. Mathematically, the convection ($\bm u\cdot\bm\nabla\bm u$) and the diffusion ($\bm\nabla\cdot\bm\tau$) are governed by skew-symmetric and symmetric positive-definite operators, respectively, properties that are not fully preserved in many operational atmospheric modelling codes [see @Wicker1998; @Skamarock97; @Pielke2002]. A parameterization of the subgrid-scale stress $\bm\tau$ would provide a subtle balance between the two operators, but this balance is broken by a non-symmetric discretization of the skew-symmetric convective operator. The most important reasons for preserving the symmetry of the convective operator are: i) improved forecasting skill for meso-scale phenomena; and ii) reduced cost for highly complex forecasting systems when unified for both boundary-layer and meso-scale phenomena. As the discretization breaks the skew-symmetry of the convective operator, the underlying conservation law is not globally satisfied at discrete times [@Verstappen2003]. The stability and conservative properties of the existing non-symmetric schemes are thoroughly reviewed by [@Steppeler2003] and [@Klemp2007].
Studies have observed that higher-order, linearly consistent schemes are sometimes unable to cope with the contamination by poorly resolved short-wavelength perturbations, usually triggered by atmospheric convection [@Skamarock97; @Bryan2002; @Pielke2002; @Steppeler2003]. In this article, we present the Jacobian-free Newton-Krylov (JFNK) method [@Knoll2004; @Zingg2009] for studying mesoscale penetrative convection and the thermal dynamics of atmospheric boundary layer flow [@Carpenter1990; @Skamarock97; @Bryan2002; @Lane2008]. The JFNK method is increasingly considered in many branches of computational fluid dynamics (CFD); however, it has not been a method of choice in major atmospheric flow solvers, except in a few academic studies [e.g. @Reisner2002; @Alam2015]. The lack of broad acceptance of the JFNK method by the atmospheric science community is somewhat related to the known challenge of constructing appropriate preconditioners. If the nonlinear convection and other physical effects, such as turbulence, radiation, or latent heat release, are included within the matrix to be inverted at each time step, the construction of a preconditioned JFNK method for atmospheric modelling is not fully clear from the existing literature. The present study fills this research gap in atmospheric modelling: we demonstrate that preserving the symmetry of the operators in their discretization can partially circumvent the preconditioning challenge through a physics-based nonlinear preconditioning approach. To develop a JFNK solver, we discretize the tightly coupled residuals of mass, momentum, and energy of the nonhydrostatic atmospheric model equations [see @Bryan2002] in a nonlinearly consistent manner. To account for the lack of implicit dissipation in the skew-symmetric discretization of the convective operator, we show that a subgrid model is capable of dissipating the short-wavelength perturbations triggered by atmospheric convection.
For the proposed JFNK method, we consider a wavelet-based approximation of differential operators, which filters the short-wavelength perturbations. We study penetrative convection and convective boundary layer (CBL) flow over a heterogeneously heated surface. Although convection may not illustrate all of the computational issues of atmospheric modelling, the study of thermal dynamics indicates that the JFNK method offers much insight into the more complicated dynamics of atmospheric convection. We demonstrate that a rising thermal penetrating into a stably stratified atmosphere will eventually overshoot its level of neutral buoyancy, a crucial component of which is the generation of internal gravity waves. This overshooting involves entrainment and detrainment, which play a key role in atmospheric mixing and the convective redistribution of heat and other scalars. Capturing such phenomena of atmospheric convection illustrates the capability of the JFNK methodology in dealing with the coupled nature of atmospheric multiphysics and the nonlinear cascade of scales of atmospheric dynamics. Section \[sec:wmdl\] presents the JFNK methodology for solving the governing equations of compressible nonhydrostatic atmospheric flows, where technical details of the wavelet-based discretization are outlined briefly. Section \[sec:nra\] summarizes the numerical results of penetrative turbulent convection in the atmospheric boundary layer. We discuss the results with respect to neutrally- and stably-stratified configurations. The test cases considered in this article are representative cases for the verification of atmospheric modelling. Results of other numerical models and field measurements have been utilized to validate the symmetry-preserving JFNK methodology. Finally, Section \[sec:sfd\] discusses the present findings and outlines how the presented methodology may be further extended to advance the field of atmospheric modelling.
Methodology {#sec:wmdl} =========== Governing equations ------------------- Let us consider the dynamics of idealized dry thermals without condensation, evaporation, or any background wind shear, where the continuity, momentum, and energy equations take the following form in Cartesian coordinates [see @Skamarock97; @Pielke2002; @Bryan2002], $$\frac{\partial p}{\partial t} + u_j \frac{\partial p}{\partial x_j} = - \frac{c_p}{c_v} \left(\frac{\partial u_i}{\partial x_i}\right)p, \label{eq:ch2_28}$$ $$\frac{\partial u_i}{\partial t} + u_j \frac{\partial u_i}{\partial x_j} = - \theta_0 \frac{\partial p}{\partial x_i} - \theta_0 \left(\frac{\partial p_0}{\partial x} \delta_{i1}+\frac{\partial p_0}{\partial y} \delta_{i2} \right) + \frac{\partial \tau_{ij}}{\partial x_j} + \frac{\theta}{\theta_0} g \delta_{i3}, \label{eq:ch2_29}$$ $$\frac{\partial \theta}{\partial t} + u_j \frac{\partial \theta}{\partial x_j}+ u_j\delta_{j3} \beta = \frac{\partial\tau_{\theta j}}{\partial x_j}. \label{eq:ch2_30}$$ The equations (\[eq:ch2\_28\]-\[eq:ch2\_30\]) are nonlinearly coupled by the convective operator. The conservative properties and stability of this model are directly related to the energy contribution and the propagation speed of atmospheric waves [@Steppeler2003]. The velocity $u_i$ is coupled with the non-dimensional pressure ($p$), which is referred to as the Exner function and is related to the dimensional pressure $P$ by $$p = c_p\left( \frac{P}{p_0}\right)^{R/c_p}.$$ In Eq (\[eq:ch2\_30\]), a splitting of the total potential temperature into $\theta_0 + \bar\theta + \theta$ is considered, where $\theta_0$ is a constant background temperature, $\beta = \partial \bar \theta/\partial z$, $\bar\theta(z)$ denotes a mean vertical distribution of the temperature, and $\theta$ is a temperature perturbation. This decomposition is often useful for implementing the heat-flux boundary condition at the ground [@Dubois2009].
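The Exner-function relation above is straightforward to transcribe; the sketch below (our own illustration, using the standard reference values of the constants) converts between the dimensional pressure $P$ and $p$, and inverts the relation:

```python
# Exner-type pressure variable, p = c_p * (P / p0)**(R / c_p), and its inverse.
# Constants are the standard dry-air values quoted in the text (SI units).
c_p = 1004.0   # J kg^-1 K^-1, specific heat at constant pressure
R   = 287.0    # J kg^-1 K^-1, gas constant for dry air
p0  = 1.0e5    # Pa, the 1000 hPa reference pressure

def exner(P):
    return c_p * (P / p0) ** (R / c_p)

def pressure(p):
    return p0 * (p / c_p) ** (c_p / R)

print(exner(p0))                         # equals c_p at the reference pressure
print(pressure(exner(85000.0)))          # round trip recovers P = 85000 Pa
```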
Note that $p_0 = 1000$ hPa is the reference pressure, $R = 287$ J kg$^{-1}$ K$^{-1}$ is the gas constant, $c_p = 1004$ J kg$^{-1}$ K$^{-1}$ is the specific heat at constant pressure, and $c_v = 717$ J kg$^{-1}$ K$^{-1}$ is the specific heat at constant volume. In Eqs (\[eq:ch2\_28\]-\[eq:ch2\_30\]), $x_i$ denotes the Cartesian coordinate $\bm x = (x,y,z)$, $\delta_{ij}$ is the Kronecker delta, $\tau_{ij}$ is the turbulent momentum flux, and $\tau_{\theta j}$ is the turbulent heat flux. Note that $u_i$ and $u_j$ denote the velocity components $u_1,\,u_2,\,u_3$; however, as mentioned below, the bold-face $\bm u = [u_i, p, \theta]$ represents the numerical solution vector of the system (\[eq:ch2\_28\]-\[eq:ch2\_30\]). Symmetry preserving discretization {#sec:trb} ---------------------------------- To illustrate how the symmetry of the underlying physics is preserved numerically by the JFNK method, let $\bm u = [u_k]$ be a column vector $[u_i,p,\theta]_k$, [*i.e.*]{} the numerical solution of Eqs (\[eq:ch2\_28\]-\[eq:ch2\_30\]) at each spatial grid point $\bm x_k,\,\hbox{for }\, k=0\ldots\mathcal N-1$, and let $\mathcal F$ represent the discretization of all spatial differential operators involved in the system (\[eq:ch2\_28\]-\[eq:ch2\_30\]). Then, the following dynamical system $$\label{eq:ds} \frac{\partial\bm u}{\partial t} = \mathcal F\bm u$$ represents the spatially discretized form of Eqs (\[eq:ch2\_28\]-\[eq:ch2\_30\]). Considering a time centred implicit (trapezoidal) scheme for the dynamical system (\[eq:ds\]), we get $$\frac{2\bm u^{n+1}}{\Delta t}-\mathcal F (\bm u^{n+1})=\frac{2\bm u^{n}}{\Delta t}+\mathcal F (\bm u^{n}),$$ which is a nonlinear system of algebraic equations of the following compact form $$\mathcal{L}(\bm u^{n+1})=\bm f(\bm u^n).
\label{eq:ns}$$ For a nonlinear problem, the time centred scheme (\[eq:ns\]) with a fixed positive step size $\Delta t$ leads to a bounded error as $n\rightarrow\infty$, which is equivalent to A-stability of the scheme when it is applied to a linear problem. Since the order of an A-stable linear multistep method cannot exceed $2$, the time centred method is the best choice to deal with waves not contributing to energy conservation in the solutions of the atmospheric model equations (\[eq:ch2\_28\]-\[eq:ch2\_30\]) [see, e.g. @Randall90]. The nonlinear convective operator $u_j\,\partial u_i/\partial x_j$ is skew-symmetric because of the property of the trilinear form that $\langle u_j\partial_j v_i,\,w_i\rangle + \langle v_i,\,u_j\partial_j w_i\rangle = 0$. In Eq (\[eq:ds\]), the operator $\mathcal F$ is said to be skew-symmetric with respect to the inner product $\langle\cdot\rangle$ if $\langle\mathcal F\bm u, \bm u\rangle + \langle\bm u, \mathcal F\bm u\rangle=0$ for all vectors $\bm u$. In other words, an anti self-adjoint operator is skew-symmetric. If the operator $\mathcal F$ is a matrix, skew-symmetry is equivalent to $\mathcal F=-\mathcal F^T$. Now, taking the inner product of $\bm u$ with both sides of Eq (\[eq:ds\]) and ignoring the effects of boundary conditions, we find that $$\frac{\partial}{\partial t}\langle\bm u,\bm u\rangle = \langle\mathcal F\bm u,\bm u\rangle + \langle\bm u,\mathcal F\bm u\rangle.$$ Clearly, the dynamical system (\[eq:ds\]) conserves the inner product $\langle\bm u,\bm u\rangle$ if the corresponding operator $\mathcal F$ satisfies the above skew-symmetry property. In order to satisfy the conservation of the inner product $\langle\bm u,\bm u\rangle$ at the discrete level in the context of the dynamical system (\[eq:ds\]), we must have $\langle\bm u^{n+1} ,\bm u^{n+1}\rangle = \langle\bm u^{n} ,\bm u^{n}\rangle$ for two consecutive time steps.
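This discrete conservation statement can be checked numerically. For a linear skew-symmetric operator $\mathcal F$, the trapezoidal update is a Cayley transform $(\mathrm I-\tfrac{\Delta t}{2}\mathcal F)^{-1}(\mathrm I+\tfrac{\Delta t}{2}\mathcal F)$, which is an orthogonal matrix, so $\langle\bm u,\bm u\rangle$ is preserved to round-off. The numpy sketch below uses a one-dimensional periodic centred-difference advection operator as an illustrative $\mathcal F$ (our own toy example, not the full model):

```python
import numpy as np

N, dx, dt, c = 64, 1.0/64, 0.01, 1.0

# Periodic centred-difference advection, (F u)_k = -c (u_{k+1} - u_{k-1})/(2 dx).
# On a periodic grid this matrix is skew-symmetric: F = -F^T.
F = np.zeros((N, N))
for k in range(N):
    F[k, (k + 1) % N] = -c / (2*dx)
    F[k, (k - 1) % N] = +c / (2*dx)
assert np.allclose(F, -F.T)

# Trapezoidal (time-centred) step written as a Cayley transform of F.
A = np.linalg.solve(np.eye(N) - 0.5*dt*F, np.eye(N) + 0.5*dt*F)

u = np.random.default_rng(0).standard_normal(N)
e0 = u @ u
for step in range(200):
    u = A @ u
print(abs(u @ u - e0))   # <u, u> is conserved to round-off after 200 steps
```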
It can be shown that such a requirement at each time step is satisfied, subject to a truncation error of $\mathcal O(\Delta t^2)$, by the trapezoidal time integration scheme (\[eq:ns\]) considered above if the operator $\mathcal F$ is skew-symmetric. Consider a higher-order upwind-biased discretization of convection in Eq (\[eq:ds\]), which minimizes the local truncation error. An upwind discretization does not retain the skew-symmetry of the convective operator. With classical upwind methods, the eigenvalues of the operator $\mathcal F$ will have negative real parts [see @Klemp2007]. The negative real parts of the eigenvalues of the operator $\mathcal F$ help ensure the stability of the system Eq (\[eq:ds\]). However, they artificially dampen the energy $\langle\bm u,\bm u\rangle$, and thus, the conservation of energy cannot be satisfied globally [see @Klemp2007]. To preserve the skew-symmetry of the convective differential operator $u_j\partial u_i/\partial x_j$ with a second-order finite difference method, the momentum transport equation can be discretized in the following skew-symmetric form $$\mathcal F\bm u = \frac12\frac{\partial u_i u_j}{\partial x_j} + \frac12 u_j\frac{\partial u_i}{\partial x_j},$$ where one considers the arithmetic mean of the flux- and convective-forms of the convective operator. Notice that the skew-symmetry of the convective operator is directly related to the conservation of the convected variable. It is worth mentioning that the nonlinear stability of numerical schemes is often easier to establish if the nonlinear convective term is expressed in the skew-symmetric form. First, preserving skew-symmetry results in reduced levels of artificial dissipation, which is desired in atmospheric simulations. Second, it eliminates the convective instability associated with the spurious transfer of kinetic energy on grids that are not fine enough to resolve short-wavelength perturbations caused by convective transport.
For example, previous studies reported that, due to artificial numerical damping of the shorter wavelengths, the Weather Research and Forecasting (WRF-ARW) model is unable to adequately resolve capping inversions. Third, it ensures that the numerical dissipation of resolved kinetic energy does not overwhelm the dissipation provided by a subgrid-scale model. Preserving the skew-symmetry of the convective transport by the wavelet method, in addition to the tight nonlinear coupling of the JFNK method, brings the multifold benefits discussed above.

Wavelet-based collocation method
--------------------------------

Deslauriers-Dubuc interpolating wavelets [see @Deslauriers1989; @Mallat2009] are defined on a sequence of nested grids $\mathcal G^j = \{\bm x_k \}$ which are embedded in nested approximation spaces $\mathcal V^j\subset\mathcal V^{j+1}$. An element of the basis $\{\psi_k(\bm x)\}$ of $\mathcal V^j$ is presented in Fig \[fig:psi\]$a,b$. The wavelet collocation method finds an approximation $u^{\mathcal N}(\bm x)\in\mathcal V^j$ of $u(\bm x)\in L^2(\Omega)$ such that $ \langle\mathcal L(u^{\mathcal N}) - f,\tilde\psi_k\rangle = 0, $ where $\mathcal N$ is the number of grid points, $\mathcal L$ is a differential operator including the boundary conditions, and $\{\tilde\psi_k\}$ is a dual basis corresponding to the approximation space $\mathcal V^j$. For the given basis $\{\psi_k(\bm x)\}$ of $\mathcal V^j$, there exists a dual approximation space equipped with a basis $\{\tilde\psi_k(\bm x)\}$ such that $\tilde\psi_j(\psi_i)=\delta_{ij}$. The wavelet-based approximation $$\label{eq:mwd} u^{\mathcal N}(\bm x) = \sum\limits_{k=0}^{\mathcal N-1}u(\bm x_k)\psi_k(\bm x)$$ projects the coefficients $\{u(\bm x_k)\}$ into $\mathcal V^j$, in which the projection $u^{\mathcal N}(\bm x)$ does not oscillate at wavelengths smaller than the grid-spacing.
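A minimal sketch of one Deslauriers-Dubuc refinement step (for $p=2$, i.e. the 4-point scheme; the full nested-grid machinery is omitted) shows the key interpolating property: midpoint values on $\mathcal G^{j+1}$ are obtained by local cubic interpolation, so polynomials of degree $2p-1=3$ are reproduced exactly:

```python
import numpy as np

def dd4_midpoints(f):
    """One refinement step of 4-point (p = 2) Deslauriers-Dubuc interpolation:
    the value at the midpoint of [x_i, x_{i+1}] is the cubic interpolant
    through f_{i-1}, f_i, f_{i+1}, f_{i+2} (interior midpoints only)."""
    f = np.asarray(f, dtype=float)
    return (-f[:-3] + 9.0*f[1:-2] + 9.0*f[2:-1] - f[3:]) / 16.0

x = np.arange(10.0)                   # grid G^j with unit spacing
p = lambda t: t**3 - 2.0*t + 1.0      # cubic test polynomial
mids = dd4_midpoints(p(x))            # values at x = 1.5, 2.5, ..., 7.5
# cubic polynomials are interpolated exactly (degree 2p - 1 = 3)
```

Existing grid values are simply copied to the finer grid, which is the interpolating property $\psi_k(\bm x_l)=\delta_{kl}$ illustrated in Fig \[fig:psi\].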
The discretization of differential operators is performed through the projection of derivatives into $\mathcal V^j$. Without going into the details of wavelet theory, the wavelet-based projection [see @Deslauriers1989 for technical details] of the first derivative with respect to $x$ is given by $$\frac{\partial}{\partial x} u^{\mathcal N}(\bm x) = \sum\limits_{k=0}^{\mathcal N-1} u'(\bm x_k)\psi_k(\bm x) = \sum\limits_{k=j-2p+1}^{j+2p-1}u(\bm x_k)\frac{\partial}{\partial x}\psi_k(\bm x).$$ The symmetry of differential operators is preserved due to the symmetry of $\psi_k(\bm x)$. On a uniformly refined grid having a grid-spacing of $\Delta x$ in all directions, the local truncation error is $\mathcal O(\Delta x^{2p})$ for the above approximation of derivatives [@Alam2014]. The derivative of $u(\bm x)$ is exactly represented by this wavelet method if $u(\bm x)$ is a polynomial of degree $2p-1$. The subgrid-scale modes at half the wavelength of the resolved-scale modes, which are generated by the convection term $u\partial u/\partial x$, are explicitly filtered and parameterized by the subgrid model.

------------------------------------- --------------------------------------
$(a)$                                 $(b)$
![(](phi4.jpg "fig:"){height="6cm"}   ![(](phi4x.jpg "fig:"){height="6cm"}
------------------------------------- --------------------------------------

[$a$) A wavelet function $\psi_k(\bm x)$ satisfying $\psi_k(\bm x_l)=\delta_{kl}$; it takes a value of $1$ on a given grid point $\bm x_k\in\mathcal G^j$ and $0$ on all other grid points $\bm x_l\in\mathcal G^j$. Then, $\psi_k(\bm x)$ is extended to all grid points $\bm x_k\in\mathcal G^{j+1}$ by [@Deslauriers1989] interpolation, and subsequent iterations form a continuous function in $\mathcal G^j$ as $j\rightarrow\infty$.
$(b)$ A restriction of $\psi_k(\bm x)$ on $y=0$ is displayed to indicate the symmetry and support of $\psi_k(\bm x)$.]{} \[fig:psi\]

The subgrid scale closure model {#sec:sgs}
-------------------------------

In atmospheric modelling [see @Pielke2002], the subgrid-scale schemes assume that turbulence produces vertical mixing in the real atmosphere, and that the role of the horizontal components of the subgrid scale stress is to control nonlinear aliasing errors [@Pielke2002]. Such schemes are based on the momentum exchange coefficient [e.g. @dear70; @Deardorff80], $$\nu_{\tau} = \left(\Delta_{\hbox{\tiny LES}}C_s\right)^2\sqrt{2\mathcal S_{ij}\mathcal S_{ij}},$$ where the stresses $$\tau_{ij} = 2\nu_{\tau}\mathcal S_{ij} - \frac{1}{3}\tau_{kk}\delta_{ij}$$ are related to the strain $\mathcal S_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)$ of the resolved flow. Limited computing resources restrict atmospheric simulations to coarse grids, where the subgrid model acts on turbulent motions that are anisotropic and intermittent [@Moeng2015]. Moreover, the vertical dissipation remains stronger than the horizontal dissipation, for example in penetrative convection [@Bartello2013]. To examine the symmetry-preserving JFNK solver with respect to a basic subgrid model, we follow the dimensional reasoning outlined by [@dear70] (see Eq 3.4 therein) to estimate the horizontal and the vertical exchange coefficients separately, $$\label{eq:KmM} K_m = C_s^{4/3}~\varepsilon^{1/3}~ (\Delta_x\Delta_y)^{2/3} \quad\hbox{and }\quad K_M = C_s^{4/3}~\varepsilon^{1/3}~ \Delta_z^{4/3}.$$ Here, $C_s$ is a dimensionless constant and the rate of dissipation of turbulent kinetic energy is $\varepsilon = \nu_{\tau}\mathcal S_{ij}\mathcal S_{ij}$.
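Eq (\[eq:KmM\]) can be sketched directly in code (a minimal version, with $C_s$ and $\varepsilon$ taken as given inputs); as a sanity check, on an isotropic grid $\Delta_x=\Delta_y=\Delta_z$ the horizontal and vertical coefficients coincide:

```python
def exchange_coefficients(Cs, eps, dx, dy, dz):
    """Horizontal (Km) and vertical (KM) momentum exchange coefficients,
    following the anisotropic dimensional estimate of Eq (KmM):
      Km = Cs^(4/3) eps^(1/3) (dx*dy)^(2/3),  KM = Cs^(4/3) eps^(1/3) dz^(4/3)."""
    scale = Cs**(4.0/3.0) * eps**(1.0/3.0)
    Km = scale * (dx*dy)**(2.0/3.0)   # horizontal: built on the area dx*dy
    KM = scale * dz**(4.0/3.0)        # vertical: built on the spacing dz
    return Km, KM
```

For a typical coarse grid with $\Delta_z \ll \Delta_x$, this gives $K_M < K_m$, i.e. the anisotropy of the grid is reflected in the exchange coefficients.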
According to [@Deardorff80], the horizontal eddy diffusivity is $K_h = (1+2l/\sqrt{\Delta_x\Delta_y})K_m$ and the vertical eddy diffusivity is $K_H = (1+2l/\Delta_z)K_M$, where $l=0.75\sqrt{\tau_{kk}}/N$ is a subgrid scale mixing length and $N$ is the Brunt-Väisälä frequency.

A brief outline of the JFNK method {#sec:jfnk}
----------------------------------

To keep the technical details of the present contribution to a minimum, we closely follow the preconditioned Krylov method considered by [@Skamarock97] in their semi-implicit scheme for solving the linearized nonhydrostatic atmospheric model Eqs (\[eq:ch2\_28\]-\[eq:ch2\_30\]). The JFNK method is a class of practical iterative methods for finding the solution $\bm u^*$ of the nonlinear system $\mathcal L(\bm u) = \bm f$, Eq (\[eq:ns\]), when an initial approximation, $\bm u^{0}$, is known. The nonlinear function $\mathcal L:\mathbb R^{\mathcal N}\rightarrow\mathbb R^{\mathcal N}$ is assumed differentiable, and $\mathcal J(\bm u)$ denotes the Jacobian of $\mathcal L$ at the point $\bm u$.

### Convergence rate of Newton-Krylov solvers

For the nonlinear system (\[eq:ns\]), let us find variations $\delta\bm u^k$ of the solution vector $\bm u^{n+1}$ iteratively such that $\bm u^{n+1,k+1} = \bm u^{n+1,k} + \delta\bm u^k$ for $k\ge 0$. This outer loop of iterations forms Newton’s method. The solution from the previous time step provides the first iterate $\bm u^{n+1,0}=\bm u^{n}$. At the $k$-th Newton iteration, we minimize the residual vector $\mathcal R(\bm u^{n+1,k})\, \dot{=}\, \mathcal L(\bm u^{n+1,k}) - f(\bm u^n)$ using the generalized minimal residual (GMRES) method of [@Saad1986]. Thus, we look for the variation $\delta\bm u^k$ satisfying the linear system $$\label{eq:jv} \mathcal J(\bm u)\delta\bm u^k = -\mathcal R(\bm u^{n+1,k}).$$ Only a few Krylov iterations of the inner loop are needed to solve Eq (\[eq:jv\]).
This ‘inexact’ Newton-Krylov method is equivalent to solving the ordinary differential equation $$\frac{d\bm u^{n+1,k}}{dk} = -\mathcal J^{-1}(\bm u)\mathcal R(\bm u^{n+1,k})$$ by the Euler-explicit method with a step size of one. Therefore, $\mathcal R(\bm u^{n+1,k}) = e^{-k}\mathcal R(\bm u^n)$ if Eq (\[eq:jv\]) is solved exactly. This shows the fast rate of convergence of the inexact Newton’s method.

### Physics-based nonlinear preconditioning

Two families of preconditioning methods are usually considered for the JFNK method. Linear preconditioning is quite similar to the Krylov method presented by [@Skamarock97]; for solving the linear system (\[eq:jv\]) by a preconditioned Krylov method, it is classified as right- and left-preconditioning. In contrast, physics-based nonlinear preconditioning is cost-effective thanks to the wavelet-based discretization. Each diagonal block of the operator $\mathcal L$ in Eq (\[eq:ns\]) represents the coupling of a physical field with itself, and each off-diagonal block represents the coupling between two different fields. Consider a physics-based preconditioner in which the weak coupling of off-diagonal terms in the operator $\mathcal L$ is ignored. A physics-based preconditioner clusters the eigenvalues of the preconditioned system in a small region, thereby increasing the convergence rate. The physics-based nonlinear preconditioning approach constructs an equivalent system of nonlinear equations which provides a faster rate of convergence with respect to the original system. It can be shown that if a fixed point iteration converges for the system (\[eq:ns\]), the eigenvalues of the Jacobian $\tilde{\mathcal J}(\bm u) = \partial\tilde{\mathcal R}(\bm u)/\partial\bm u$ for the preconditioned nonlinear system $\tilde{\mathcal R}(\bm u) = \bm u - \mathcal P^{-1}(\bm u) [\bm f(\bm u^n) - \mathcal L(\bm u) + \mathcal P(\bm u)]$ are clustered in a small region, where $\mathcal P(\bm u)$ is the nonlinear preconditioning matrix.
The most attractive feature of nonlinear preconditioning is that a faster convergence rate of the Krylov iteration is achieved with minimal mathematical and coding effort. For example, when implementing the JFNK method within an existing atmospheric modelling code, a fixed point iteration can be performed with only a few code modifications. For the matrix-vector product on the left side of Eq (\[eq:jv\]), the JFNK method needs to compute the action of the linear map $\mathcal J:\mathcal V^l\subset\mathbb R^{\mathcal N}\rightarrow\mathbb R^{\mathcal N}$ on the variation $\delta\bm u^k$ of the solution vector $\bm u^{n+1}$. To compute this action with a complexity of $\mathcal O(\mathcal N)$, consider the Fréchet derivative of the operator $\mathcal R(\bm u)$ defined by $$\label{eq:fd} \mathcal J(\bm u)\delta\bm u^k = \lim_{\eta\rightarrow 0}\frac{\partial}{\partial\eta}\mathcal R(\bm u + \eta\delta\bm u^k).$$ The limit in Eq (\[eq:fd\]) exists, and thus, $\mathcal R(\bm u)$ is Fréchet differentiable, where $$\lim_{\eta\rightarrow 0}\frac{||\mathcal L(\bm u^{n+1,k}+\eta\delta\bm u^k) - \mathcal L(\bm u^{n+1,k}) - \mathcal J\delta\bm u^k||}{||\eta\delta\bm u^k||} = 0.$$ In other words, the same algorithm that provides the differentiation matrix $\mathcal L$ is applied to calculate the action of $\mathcal J$ on $\delta\bm u^k$, without requiring the additional technical development of a preconditioner. This observation suggests that the implementation of the JFNK method within an existing atmospheric modelling code is straightforward. Moreover, the complexity of the JFNK method scales like the complexity of the algorithm used for the discretization of Eqs (\[eq:ch2\_28\]-\[eq:ch2\_30\]), which is $\mathcal O(\mathcal N)$ for the wavelet method.
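A minimal matrix-free Newton-Krylov sketch (using SciPy's GMRES and a hypothetical toy nonlinear system standing in for $\mathcal L(\bm u) = \bm f$, not the atmospheric model itself) shows the key point: the Jacobian-vector product is obtained from a finite-difference approximation of the Fréchet derivative, Eq (\[eq:fd\]), so no Jacobian matrix is ever formed or stored:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_solve(F, u0, max_newton=20, eta=1e-7, tol=1e-10):
    """Jacobian-free Newton-Krylov sketch: the action J(u) v is approximated
    by the forward difference (F(u + eta*v) - F(u)) / eta, and each Newton
    correction is obtained with a few inner GMRES (Krylov) iterations."""
    u = np.array(u0, dtype=float)
    for _ in range(max_newton):
        r = F(u)                            # residual R(u)
        if np.linalg.norm(r) < tol:
            break
        J = LinearOperator((u.size, u.size),
                           matvec=lambda v, u=u, r=r: (F(u + eta*v) - r) / eta)
        du, _ = gmres(J, -r)                # inner Krylov loop: J du = -R(u)
        u = u + du                          # Newton update
    return u

# hypothetical toy system: u + 0.1 u^3 = b
b = np.array([1.0, 2.0, 3.0])
F = lambda u: u + 0.1*u**3 - b
u_star = jfnk_solve(F, np.zeros_like(b))
```

The forward-difference step size `eta` is a tunable assumption here; in practice it is often scaled with the norms of `u` and `v`.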
Numerical results and discussion {#sec:nra}
================================

We report primary results on the accuracy, efficiency, and efficacy of the symmetry-preserving JFNK method as a potential candidate for problems of meteorological interest. We have studied two categories of convective phenomena to test the tightly nonlinear strategy of the JFNK method. The present results are compared with experimental and numerical data collected from the literature. These numerical exercises indicate that the tightly nonlinear physics-based coupling of all physical processes considered within the JFNK method has the potential to provide a scale-adaptive framework for modelling the transition from the near-surface small-scale 3D physics to the outer-layer meso-scale meteorology.

Penetrative convection and rising thermals: reference model \[refmodel\]
------------------------------------------------------------------------

We have compared the JFNK simulation of penetrative convection with the results provided by [@Bryan2002], [@Wicker1998], and [@Carpenter1990]. [@Bryan2002] examined a time-split segregated algorithm in which the convective operator was discretized in its flux-form without preserving its skew-symmetry. They adopted a divergence damping term to help maintain the quality of the scheme. [@Carpenter1990] noted that positive-definite upwind schemes sacrifice the skew-symmetry of convective operators in order to help control the Gibbs phenomenon. They also reported that upwind schemes tend to smear the sharp gradients of penetrative thermals. A warm perturbation of $\theta$ is prescribed at the horizontal midpoint and at a height of $2$ km in the domain of $[-10,10]\times[0,10]\hbox{ km}^2$, where the surrounding environment is neutrally stratified with a lapse rate of $10$ K km$^{-1}$ and $\theta_0=300$ K. The initial thermal has a radius of $2$ km [e.g. @Bryan2002].
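The initialization can be sketched as follows (a minimal version for illustration; the text fixes only the centre height of $2$ km and the radius of $2$ km, so the $2$ K amplitude and the cosine-squared profile used here are assumptions):

```python
import numpy as np

# hypothetical warm-bubble initialization: centre at (0 km, 2 km), radius
# 2 km, in the [-10,10] x [0,10] km^2 domain (amplitude/profile assumed)
x = np.linspace(-10e3, 10e3, 201)        # m
z = np.linspace(0.0, 10e3, 101)          # m
X, Z = np.meshgrid(x, z)
r = np.hypot(X - 0.0, Z - 2e3) / 2e3     # distance from centre, scaled by radius
theta_pert = np.where(r < 1.0, 2.0*np.cos(0.5*np.pi*r)**2, 0.0)   # K
```

The smooth profile avoids seeding the Gibbs oscillations that a top-hat perturbation would introduce at the bubble edge.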
As mentioned in Table \[tab:khrare\], the momentum- and heat-exchange coefficients are varied for Case A at a fixed Prandtl number of $\mathcal Pr=0.71$. In Case B, the Prandtl number is varied between $0.5$ and $2.0$. This test confirms the hypothesis that short-wavelength modes, triggered by the nonlinear convection process during the time evolution of the thermal, are accurately filtered by the subgrid model.

  $K_M$ (m$^2$ s$^{-1}$)   $K_H$ (m$^2$ s$^{-1}$)   $\mathrm{Re}$       $\mathrm{Ra}$        $\mathrm{Pr}$
  ------------------------ ------------------------ ------------------- -------------------- ---------------
  10                       14.1                     $2.5 \times 10^3$   $4.44 \times 10^6$   0.71
  5                        7.04                     $5.0 \times 10^3$   $1.78 \times 10^7$   0.71
  2.5                      3.52                     $1.0 \times 10^4$   $7.1 \times 10^7$    0.71
  1.0                      1.41                     $2.5 \times 10^4$   $4.44 \times 10^8$   0.71
  10                       5                        $2.5 \times 10^3$   $4.44 \times 10^6$   0.5
  5                        5                        $5.0 \times 10^3$   $1.78 \times 10^7$   1.0
  2.5                      5                        $1.0 \times 10^4$   $7.1 \times 10^7$    2.0

  : Momentum- and heat-exchange coefficients, with the corresponding Reynolds, Rayleigh, and Prandtl numbers, for Cases A and B.[]{data-label="tab:khrare"}

  $K_M$                        $10$ m$^2$ s$^{-1}$   $5$ m$^2$ s$^{-1}$   $2.5$ m$^2$ s$^{-1}$   $1.0$ m$^2$ s$^{-1}$   B & F
  ---------------------------- --------------------- -------------------- ---------------------- ---------------------- -----------
  $\theta_{\min}$ (K)          -0.000632             -0.003814            -0.009359              -0.133971              -0.144409
  $\theta_{\max}$ (K)          1.408749              1.629635             1.843659               2.138108               2.02178
  $u_{\min}$ (m s$^{-1}$)      -9.511412             -10.058257           -10.636190             -11.667357
  $u_{\max}$ (m s$^{-1}$)      9.512040              10.059020            10.637147              11.668235
  $w_{\min}$ (m s$^{-1}$)      -6.360285             -6.527770            -6.596165              -6.627753              -8.58069
  $w_{\max}$ (m s$^{-1}$)      15.352078             15.599483            15.833058              16.018170              14.5396
  $\omega_{\min}$ (s$^{-1}$)   -0.065906             -0.095523            -0.137061              -0.188188
  $\omega_{\max}$ (s$^{-1}$)   0.065906              0.095523             0.137049               0.188187

  : Max/min values of $\theta$, $u$, $w$, and $\omega$ for the dry thermal simulation (Case A, see Table \[tab:khrare\]), where $\theta_0 = 300$ K and $t = 1\,000$ s; the column B & F lists the corresponding values reported by [@Bryan2002].[]{data-label="tab:extreme_values_ch4_1"}

  $K_M$                        $10$ m$^2$ s$^{-1}$   $5$ m$^2$ s$^{-1}$   $2.5$ m$^2$ s$^{-1}$
  ---------------------------- --------------------- -------------------- ----------------------
  $\theta_{\min}$ (K)          -0.0061               -0.0062              -0.0063
  $\theta_{\max}$ (K)          1.7138                1.7268               1.7371
  $u_{\min}$ (m s$^{-1}$)      -9.7735               -10.1486             -10.4817
  $u_{\max}$ (m s$^{-1}$)      9.7744                10.1494              10.4824
  $w_{\min}$ (m s$^{-1}$)      -6.5080               -6.5204              -6.5481
  $w_{\max}$ (m s$^{-1}$)      15.4442               15.6334              15.8046
  $\omega_{\min}$ (s$^{-1}$)   -0.0849               -0.1053              -0.1271
  $\omega_{\max}$ (s$^{-1}$)   0.0849                0.1053               0.1271

  : Max/min values of $\theta$, $u$, $w$, and $\omega$ for the dry thermal simulation (Case B, see Table \[tab:khrare\]), where $\theta_0 = 300$ K and $t = 1\,000$ s.[]{data-label="tab:extreme_values_ch4_2"}

  $\mathcal Ri_b$   $\mathcal Fr$   $N$ (s$^{-1}$)          $\omega/N$   $\alpha$
  ----------------- --------------- ----------------------- ------------ -------------------------
  1.0               1.0             $2.5 \times 10^{-2}$    $0.999$      $2.65^{\hbox{\tiny o}}$
  0.25              2.0             $1.25 \times 10^{-2}$   $0.997$      $4.44^{\hbox{\tiny o}}$
  0.2               2.24            $1.12 \times 10^{-2}$   $0.985$      $9.94^{\hbox{\tiny o}}$
  0.16              2.5             $1.0 \times 10^{-2}$    $0.977$      $12.3^{\hbox{\tiny o}}$
  0.1               3.16            $7.9 \times 10^{-3}$    $0.962$      $15.8^{\hbox{\tiny o}}$
  0.05              4.47            $5.6 \times 10^{-3}$    $0.929$      $21.7^{\hbox{\tiny o}}$

  : Relation between the wave frequency and buoyancy frequency for penetrative convection in a stably stratified environment.[]{data-label="tab:khrare1"}

### Thermals in a neutral environment {#sec:ntrl}

In Cases A and B, the effect of the horizontal momentum exchange coefficient, [*e.g.*]{} $K_M = 10,~5,~2.5,$ and $1.0$ m$^2$ s$^{-1}$, with respect to the skew-symmetry of the convective operator is studied for atmospheric convection in a neutral environment ([*i.e.*]{} the buoyancy frequency $N = 0$). The contour plots of the potential temperature $\theta$ in Fig \[fig:thetacontour\_dry\] show the development of two ‘rotors’ around the rising thermal, which replicates the corresponding dynamics predicted by the mesoscale models of [@Wicker1998] and [@Bryan2002].
In comparison with Fig 1 of [@Bryan2002], one finds that the dynamics of penetrative thermals in a neutrally stratified dry atmosphere has been accurately simulated by the tightly nonlinear coupling strategy of the JFNK method. In particular, the sensitivity of the simulated dynamics to the values of $K_M$ (Table \[tab:khrare\]) is consistent with similar results reported in the literature [e.g. @Carpenter1990]. For a quantitative comparison, we note that the minimum and maximum potential temperatures reported by [@Bryan2002] are $\theta_{min} = -0.144409$ K and $\theta_{max} = 2.07178$ K, respectively. Table \[tab:extreme\_values\_ch4\_1\] indicates a good agreement of the present results with the corresponding values reported by [@Bryan2002], subject to the differences in the subgrid model and the truncation error of the numerical scheme. It is worth mentioning that the atmospheric modelling community adopts the upwind scheme for accurate numerical predictions of weather events. Clearly, the symmetry-preserving JFNK method provides numerical predictions of equivalent accuracy. Table \[tab:extreme\_values\_ch4\_1\] indicates that the potential temperature field is predicted relatively accurately by the JFNK method for the smallest of the considered values, $K_M=1.0$ m$^2$ s$^{-1}$. However, the vertical velocity is predicted more accurately with the higher value $K_M=10.0$ m$^2$ s$^{-1}$. It is also evident from Table \[tab:extreme\_values\_ch4\_1\] that the numerical predictions are not noticeably sensitive to changes in the momentum exchange coefficients. One observes that the upper surfaces of the thermals at $t=1\,000$ s (in Fig \[fig:thetacontour\_dry\]) are located at heights of $8.04,~8.08,~\hbox{and }8.15$ km for $K_M = 10, ~5,~\hbox{and }2.5$ m$^2$ s$^{-1}$, respectively.
From the predicted maximum vertical velocity in Table \[tab:extreme\_values\_ch4\_1\], we see that the rate of vertical momentum transfer in a turbulent penetration of dry thermals may be weakly sensitive to the subgrid-scale mixing length $l$ provided by the eddy diffusivity model of [@Deardorff80].

[cc]{} $(a)$ & $(b)$\
![Contour plots of the potential temperature perturbation $(\theta)$ for Case A with a neutral environment at $\mathcal Pr= 0.71$ and $t=1\,000$ s; (a) $K_M = 10$ m$^2$ s$^{-1}$; (b) $K_M = 5$ m$^2$ s$^{-1}$, and (c) $K_M = 2.5$ m$^2$ s$^{-1}$.\[fig:thetacontour\_dry\]](bubble_theta_nu10_trevb "fig:"){width="19pc"} & ![Contour plots of the potential temperature perturbation $(\theta)$ for Case A with a neutral environment at $\mathcal Pr= 0.71$ and $t=1\,000$ s; (a) $K_M = 10$ m$^2$ s$^{-1}$; (b) $K_M = 5$ m$^2$ s$^{-1}$, and (c) $K_M = 2.5$ m$^2$ s$^{-1}$.\[fig:thetacontour\_dry\]](bubble_theta_nu5_trevb "fig:"){width="19pc"}\

Color-filled contour plots of the horizontal and the vertical velocities are presented in Figure \[fig:velocitycontour\_neutral\]. Notice that the velocity field is symmetric about $x=0$ [see also @Lane2008].
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ $(a)$ $(b)$ ![Color filled contour plots of the horizontal velocity ($u$, left column) and the vertical velocity ($w$, right column) for Case A with a neutral environment at $t=1\,000$ s. (a,b) $K_M = 10$ m$^2$ s$^{-1}$, (c,d) $K_M = 5$ m$^2$ s$^{-1}$ and (e,f) $K_M = 2.5$ m$^2$ s$^{-1}$. [Red, blue and yellow colors represent positive, negative and zero values, respectively.]{} \[fig:velocitycontour\_neutral\]](contour_u_nu10_1_trev "fig:"){width="19pc"} ![Color filled contour plots of the horizontal velocity ($u$, left column) and the vertical velocity ($w$, right column) for Case A with a neutral environment at $t=1\,000$ s. (a,b) $K_M = 10$ m$^2$ s$^{-1}$, (c,d) $K_M = 5$ m$^2$ s$^{-1}$ and (e,f) $K_M = 2.5$ m$^2$ s$^{-1}$. [Red, blue and yellow colors represent positive, negative and zero values, respectively.]{} \[fig:velocitycontour\_neutral\]](contour_w_nu10_1_trev "fig:"){width="19pc"} $(c)$ $(d)$ ![Color filled contour plots of the horizontal velocity ($u$, left column) and the vertical velocity ($w$, right column) for Case A with a neutral environment at $t=1\,000$ s. 
(a,b) $K_M = 10$ m$^2$ s$^{-1}$, (c,d) $K_M = 5$ m$^2$ s$^{-1}$ and (e,f) $K_M = 2.5$ m$^2$ s$^{-1}$. [Red, blue and yellow colors represent positive, negative and zero values, respectively.]{} \[fig:velocitycontour\_neutral\]](contour_u_nu5_1_trev "fig:"){width="19pc"} ![Color filled contour plots of the horizontal velocity ($u$, left column) and the vertical velocity ($w$, right column) for Case A with a neutral environment at $t=1\,000$ s. (a,b) $K_M = 10$ m$^2$ s$^{-1}$, (c,d) $K_M = 5$ m$^2$ s$^{-1}$ and (e,f) $K_M = 2.5$ m$^2$ s$^{-1}$. [Red, blue and yellow colors represent positive, negative and zero values, respectively.]{} \[fig:velocitycontour\_neutral\]](contour_w_nu5_1_trev "fig:"){width="19pc"} $(e)$ $(f)$ ![Color filled contour plots of the horizontal velocity ($u$, left column) and the vertical velocity ($w$, right column) for Case A with a neutral environment at $t=1\,000$ s. (a,b) $K_M = 10$ m$^2$ s$^{-1}$, (c,d) $K_M = 5$ m$^2$ s$^{-1}$ and (e,f) $K_M = 2.5$ m$^2$ s$^{-1}$. [Red, blue and yellow colors represent positive, negative and zero values, respectively.]{} \[fig:velocitycontour\_neutral\]](contour_u_nu2_5_1_trev "fig:"){width="19pc"} ![Color filled contour plots of the horizontal velocity ($u$, left column) and the vertical velocity ($w$, right column) for Case A with a neutral environment at $t=1\,000$ s. (a,b) $K_M = 10$ m$^2$ s$^{-1}$, (c,d) $K_M = 5$ m$^2$ s$^{-1}$ and (e,f) $K_M = 2.5$ m$^2$ s$^{-1}$. 
[Red, blue and yellow colors represent positive, negative and zero values, respectively.]{} \[fig:velocitycontour\_neutral\]](contour_w_nu2_5_1_trev "fig:"){width="19pc"}

---------------------------------------------------------------- ----------------------------------------------------------------

These plots show that a reduction of the subgrid scale momentum-exchange coefficient by $50\%$ does not introduce a significant overall variation of the vertical velocity. The results for Case B (see Table \[tab:khrare\]), where the value of $\mathcal Pr$ was also varied, are similar to those for Case A, as reported in Table \[tab:extreme\_values\_ch4\_2\].

### Thermals in a stably stratified environment

Corresponding to the neutral simulations (Cases A and B), penetrative convection in a stably stratified environment is simulated, where the model was initialized with the potential temperature $\theta_0 + \beta z + \theta(x,z)$. In this situation, the internal wave frequency ($\omega$) is related to the buoyancy frequency ($N$) through the dispersion relation $\omega = N\cos\alpha$. Values of the buoyancy frequency $N$ and the angle $\alpha$ are listed in Table \[tab:khrare1\].
The linear theory suggests the existence of evanescent and vertically propagating waves [see @Lin2007 p. 187] if $$\frac{2\pi}{N} > \frac{L}{U}\quad \hbox{and}\quad \frac{2\pi}{N} < \frac{L}{U},$$ respectively. Moreover, if $\frac{2\pi}{N} \gg \frac{L}{U}$, the buoyancy force becomes extremely weak, so that the vertical velocity can be estimated by [see @Lin2007] $$w(x,z) = W(x) e^{-k |z|}.$$ Here, $U$ and $W$ are horizontal and vertical velocity scales, respectively, and $L$ denotes a horizontal length scale. The simulated vertical velocity $w(0,z,t)$ and potential temperature $\theta(0,z,t)$ at $t = 1\,000$ s are shown in Fig \[fig:yslice\_dristra\]. The absolute maxima of the vertical velocity and of the potential temperature appear at around $2$ km above the ground. It is also evident that the rising thermals in the stable environment feature overshooting that reaches up to a height of $5$ km. Considering a velocity scale of $U \sim 10$ m s$^{-1}$ (based on Table \[tab:extreme\_values\_ch4\_1\]) and the buoyancy frequency $N \sim 1.0 \times 10^{-2}$ s$^{-1}$, one finds a vertical length scale of $L_z = \frac{2 \pi U}{N} \sim 6.25$ km. Thus, we recorded the vertical velocity at the centre of the horizontal domain at $z = 6.25$ km for every time step with respect to four values of the buoyancy frequency, $N = 2.5 \times 10^{-2}, 1.25 \times 10^{-2}$, $1.0 \times 10^{-2}, 7.9 \times 10^{-3}$ s$^{-1}$, and the result is shown in Fig \[fig:time\_series\_vertical\_velocity\]. For each $N$, the corresponding bulk Richardson number $\mathcal Ri_b$, as well as the Froude number $\mathcal Fr$, are presented in Table \[tab:khrare1\]. For the result in Fig \[fig:time\_series\_vertical\_velocity\], the ratio of the wave frequency to the buoyancy frequency ($\omega/N$) at $z = 6.25$ km is reported in Table \[tab:khrare1\].
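As a quick consistency check (a sketch using only the tabulated values), the measured ratios $\omega/N$ agree with $\cos\alpha$ for the propagation angles listed in Table \[tab:khrare1\]:

```python
import numpy as np

# propagation angles alpha (degrees) and measured omega/N from Table khrare1
alpha_deg = np.array([2.65, 4.44, 9.94, 12.3, 15.8, 21.7])
ratio_tab = np.array([0.999, 0.997, 0.985, 0.977, 0.962, 0.929])

# dispersion relation for internal gravity waves: omega = N cos(alpha)
ratio_pred = np.cos(np.radians(alpha_deg))
# predicted and measured omega/N agree to about three decimal places
```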
The angle ($\alpha$) between the phase velocity vector and the horizontal direction is also presented in Table \[tab:khrare1\], consistent with the dispersion relation $$\omega = N \cos\alpha.$$ Table \[tab:khrare1\] also shows that the wave frequency satisfies $\omega < N$, indicating that the maximum possible frequency of internal waves in a stratified fluid is $N$. The angle ($\alpha$) increases as the buoyancy frequency decreases. To illustrate internal wave propagation, Fig \[fig:time\_series\_vertical\_velocity\_1\] presents the time series of the vertical velocity at the locations $(0,1.25)$, $(0,2.5)$, $(0,3.75)$, $(0,5.0)$, $(0,6.25)$, and $(0,8.75)$ km for three values of $N$. These results show a good agreement of the phenomena simulated by the JFNK method with the corresponding findings reported in the literature [e.g. see @Morton1956], indicating that the wavelet-based JFNK model accurately predicts the penetrative convection of thermals in a stably stratified environment.

---------------------------------------------------------------------------------------------------------------------------------

![Plots of (a) vertical velocity (b) potential temperature probed along the vertical line $x = 0$ at $t = 1\,000$ s, where the thermal penetrates into a stably stratified environment. Note that $K_M = 10$ m$^2$ s$^{-1}$ and $\mathcal Pr=0.71$.\[fig:yslice\_dristra\]](yslice_dristra_w_t50.jpg "fig:"){width="11cm"}

$(a)$ vertical velocity

![Plots of (a) vertical velocity (b) potential temperature probed along the vertical line $x = 0$ at $t = 1\,000$ s, where the thermal penetrates into a stably stratified environment.
Note that $K_M = 10$ m$^2$ s$^{-1}$ and $\mathcal Pr=0.71$.\[fig:yslice\_dristra\]](yslice_dristra_theta_t50.jpg "fig:"){width="11cm"} $(b)$ perturbation potential temperature -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [0.42]{} ![Evolution of vertical velocity at $x = 0$, $z = 6.25$ km for the stable case (a) $N = 2.5 \times 10^{-2}$ s$^{-1}$, (b) $N = 1.25 \times 10^{-2}$ s$^{-1}$, (c) $N = 1.0 \times 10^{-2}$ s$^{-1}$ and (d) $N = 7.9 \times 10^{-3}$ s$^{-1}$. \[fig:time\_series\_vertical\_velocity\]](time_series_w_G1_Ri1b.jpg "fig:"){width="17.5pc" height="17pc"}   [0.43]{} ![Evolution of vertical velocity at $x = 0$, $z = 6.25$ km for the stable case (a) $N = 2.5 \times 10^{-2}$ s$^{-1}$, (b) $N = 1.25 \times 10^{-2}$ s$^{-1}$, (c) $N = 1.0 \times 10^{-2}$ s$^{-1}$ and (d) $N = 7.9 \times 10^{-3}$ s$^{-1}$. \[fig:time\_series\_vertical\_velocity\]](time_series_w_G1_Ri_25b.jpg "fig:"){width="18pc"} [0.45]{} ![Evolution of vertical velocity at $x = 0$, $z = 6.25$ km for the stable case (a) $N = 2.5 \times 10^{-2}$ s$^{-1}$, (b) $N = 1.25 \times 10^{-2}$ s$^{-1}$, (c) $N = 1.0 \times 10^{-2}$ s$^{-1}$ and (d) $N = 7.9 \times 10^{-3}$ s$^{-1}$. \[fig:time\_series\_vertical\_velocity\]](time_series_w_G1_Ri_16b.jpg "fig:"){width="18pc"} [0.45]{} ![Evolution of vertical velocity at $x = 0$, $z = 6.25$ km for the stable case (a) $N = 2.5 \times 10^{-2}$ s$^{-1}$, (b) $N = 1.25 \times 10^{-2}$ s$^{-1}$, (c) $N = 1.0 \times 10^{-2}$ s$^{-1}$ and (d) $N = 7.9 \times 10^{-3}$ s$^{-1}$. 
\[fig:time\_series\_vertical\_velocity\]](time_series_w_G1_Ri_1b.jpg "fig:"){width="18pc"} [0.55]{} ![image](time_series_w_G1_Ri1_all1.jpg){width="19pc"} [0.45]{} ![image](time_series_w_G1_Ri_25_all1.jpg){width="19pc"}   [0.45]{} ![image](time_series_w_G1_Ri_16_all1.jpg){width="19pc" height="17.8pc"} ### Conservation of kinetic and potential energy The governing equations Eq. (\[eq:ch2\_28\]-\[eq:ch2\_30\]) lead to the following energy balance laws [e.g. @Winters2009], where the kinetic and potential energies $$E_k = \frac{1}{2} \int_\Omega (u^2+w^2) \mathrm{d}V, \qquad \mathrm{and} \qquad E_p = \int_\Omega (z_{max} -z) \theta \mathrm{d}V,$$ respectively, satisfy the following energy equations [see also @Carpenter1990 e.g. Fig 19 therein]: $$\frac{\mathrm{d}E_k}{\mathrm{d}t} = \int_\Omega w \theta \mathrm{d}V -K_M \epsilon ~,\qquad \epsilon = \int_\Omega |\nabla u|^2+|\nabla w|^2\mathrm{d}V$$ and $$\frac{\mathrm{d}E_p}{\mathrm{d}t} = -\int_\Omega w \theta \mathrm{d}V + K_H \frac{\theta_{\max} - \theta_{\min}}{z_{\max} - z_{\min}}.$$ These energy equations quantify the rate of production of $E_p$, the conversion from $E_p$ to $E_k$, and the rate of kinetic energy dissipation, $\epsilon$, thereby closing the energy balance for the isolated thermal in the neutral environment. The time evolution of $E_p$, $E_k$, and $E = E_p + E_k$ for a rising thermal in the neutral environment is shown in Fig \[fig:energy\_nu1\]. Clearly, the potential energy, $E_p$, decreases with time as a result of the conversion of potential energy into kinetic energy, $E_k$. Also, the total energy, $E$, remains approximately constant. In order to conserve energy, [@Carpenter1990] considered the piecewise-parabolic method for the discretization of convective operators. The present result for energy conservation, depicted in Figure \[fig:energy\_nu1\], is in excellent agreement with the corresponding result reported by [@Carpenter1990].
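The discrete energy budget above can be evaluated directly from gridded model output. The following Python sketch, assuming a uniform grid and hypothetical field arrays `u`, `w`, and `theta`, illustrates the three budget terms; it is not the paper's actual implementation.

```python
import numpy as np

def energy_budget(u, w, theta, z, dx, dz):
    """Kinetic/potential energy and buoyancy conversion on a uniform grid.

    u, w, theta are 2-D arrays of shape (nx, nz); z is the 1-D vertical
    coordinate. A sketch of the budget terms in the text, not the
    paper's actual code.
    """
    dV = dx * dz
    Ek = 0.5 * np.sum(u**2 + w**2) * dV                # E_k = 1/2 ∫ (u²+w²) dV
    Ep = np.sum((z.max() - z)[None, :] * theta) * dV   # E_p = ∫ (z_max − z) θ dV
    conv = np.sum(w * theta) * dV                      # buoyancy conversion ∫ wθ dV
    return Ek, Ep, conv
```

Plotting `Ek`, `Ep`, and `Ek + Ep` over time reproduces the kind of budget shown in Fig \[fig:energy\_nu1\].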
However, the time evolution of the potential and kinetic energies features more complex, oscillatory behaviour when the environment is stably stratified. This is because a rising thermal finds itself in an environment with a higher potential temperature, where the buoyancy force pushes it downwards. In Fig \[fig:energy\_stable\], the energy curves for $N = 1.25 \times 10^{-2}$ s$^{-1}$ are displayed. The results clearly indicate the overshooting of thermals beyond their level of buoyancy in the stably stratified environment [see also @Lane2008]. [0.5]{} ![(a) Time evolution of energy for Case A, showing that the total energy is approximately conserved, where potential energy is converted to kinetic energy. Note, $\theta_0 = 300$ K, $K_M = 1$ m$^2$ s$^{-1}$, $t\in[0,1200 s]$. (b) The effect of stratification on the time evolution of energy for Case A, but in the stable environment with $N = 1.25 \times 10^{-2}$ s$^{-1}$ for $t\in[0,4000 s]$. \[fig:energy\]](energy_nu1_ptur.jpg "fig:"){width="19pc"}   [0.5]{} ![(a) Time evolution of energy for Case A, showing that the total energy is approximately conserved, where potential energy is converted to kinetic energy. Note, $\theta_0 = 300$ K, $K_M = 1$ m$^2$ s$^{-1}$, $t\in[0,1200 s]$. (b) The effect of stratification on the time evolution of energy for Case A, but in the stable environment with $N = 1.25 \times 10^{-2}$ s$^{-1}$ for $t\in[0,4000 s]$. \[fig:energy\]](energy_G1_Ri_25b.jpg "fig:"){width="18.5pc"} Comparison with WRF-LES using urban heat island circulation --------------------------------------------------------------- There is a growing trend of land-surface modification through urbanization because over half the world’s population lives in urban areas.
[Urban Heat Island (UHI) is the source of the mesoscale response of the atmosphere to horizontal variations in temperature associated with dry convection [@Grimmond2002].]{} UHI modelling provides a means to investigate the influence of land-surface modification on the health and welfare of urban residents. UHI simulations help quantify the mesoscale circulation triggered by the surface heterogeneity of urbanization [see @Zhang2014]. ### Reference model Using the LES mode of the Weather Research and Forecasting (WRF-LES) model, [@Zhang2014] investigated the influence of the UHI circulation over an isolated urban area that is homogeneous in the $y$-direction, where the atmosphere is dry and the terrain is flat. The WRF-LES model of [@Zhang2014] is similar to the simulation of [@Dubois2009] (hereinafter D & T). Similarly, [@Kimura1975] provides a laboratory model of an equivalent UHI circulation. In this section, these three reference models are considered to assess the accuracy of the wavelet-based simulation, where the model domain extends $100$ km horizontally and $2$ km vertically. Following [@Zhang2014], a constant heat flux $(H_{\hbox{surface}}~[\hbox{W m}^{-2}])$ and $u = w = 0$ are imposed as boundary conditions at the surface, $z = 0$, such that $$H_{\hbox{surface}}-H_{\hbox{rural}} = \left\{ \begin{array}{ll} 0 & \hbox{ if } x < -l \hbox{ or } x > l\\ H_0\tanh\left(\frac{x+l}{\xi}\right)-H_0\tanh\left(\frac{x-l}{\xi}\right) & \hbox{ if } -l \le x \le l, \end{array} \right.$$ where $H_{\hbox{rural}}$ is the surface heat flux over the rural area and $H_0$ is the surface heat flux over the urban area (see Fig \[fig:comparison\_dt\]$a$). Values of $H_0 = 28.93$, $57.87,\,115.74,\,231.48,\,462.96$, and $925.92$ \[W m$^{-2}$\] were tested. These six values of $H_0$ correspond to six values of the Rayleigh number $\mathcal Ra = 10^3$, $10^4,\,10^5,\,10^6,\,10^7,\,\hbox{ and } 10^8$.
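The urban-rural heat flux forcing above is straightforward to evaluate. The Python sketch below uses illustrative values for the half-width $l$ and the transition scale $\xi$ (the text does not state them here); it is only meant to show the shape of the smooth tanh forcing at the urban edges.

```python
import numpy as np

def surface_heat_flux(x, H0, l=10.0e3, xi=1.0e3):
    """Urban-rural surface heat flux difference H_surface − H_rural [W m⁻²].

    Smooth tanh transitions of width ~xi at the urban edges x = ±l; zero
    outside the urban area. l and xi are illustrative values, not those
    of the reference simulation.
    """
    flux = H0 * np.tanh((x + l) / xi) - H0 * np.tanh((x - l) / xi)
    return np.where(np.abs(x) <= l, flux, 0.0)
```

For $l \gg \xi$ the forcing plateaus near the island centre and vanishes over the rural area, as sketched in Fig \[fig:comparison\_dt\]$a$.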
[cc]{}\ \ $(b)$ & $(c)$\ ![(a) A sketch of heat island circulation and the initial profile of the total potential temperature. Profiles of (b) temperature variation $\theta$ at $x = 0$ km, (c) vertical velocity at $x=0$ km and (d) horizontal velocity at $x = 2.5$ km for the stationary solution. Solid, dashed, and dash-dotted lines represent the profiles for $H_0 = 28.93$, $57.87$, and $115.74$ W m$^{-2}$, respectively. \[fig:comparison\_dt\]](AnJtheta "fig:"){width="6.75cm"} & ![(a) A sketch of heat island circulation and the initial profile of the total potential temperature. Profiles of (b) temperature variation $\theta$ at $x = 0$ km, (c) vertical velocity at $x=0$ km and (d) horizontal velocity at $x = 2.5$ km for the stationary solution. Solid, dashed, and dash-dotted lines represent the profiles for $H_0 = 28.93$, $57.87$, and $115.74$ W m$^{-2}$, respectively. \[fig:comparison\_dt\]](AnJv "fig:"){width="7cm"}\ \ \

                       $H_0 = 28.93$             $H_0 = 57.87$             $H_0 = 115.74$
                     Present     D & T         Present     D & T         Present     D & T
  ----------------- ----------- -----------   ----------- -----------   ----------- -----------
  $\theta_{\min}$    -0.023537   -0.024823     -0.064457   -0.071289     -0.167264   -0.166316
  $u_{\max}$          0.118872    0.118887      0.176103    0.174844      0.179622    0.179054
  $w_{\min}$         -0.030134   -0.030470     -0.037337   -0.039291     -0.085519   -0.079265
  $\omega_{\max}$     1.957423    2.06900       3.659917    3.951325      5.345340    5.921375
  ----------------- ----------- -----------   ----------- -----------   ----------- -----------

  : Comparison of the present model with D & T for $H_0 = 28.93$, $57.87$, and $115.74$ W m$^{-2}$. \[tab:comparison\_dubois\]

Flow evolution at relatively small values of the surface heat flux {#comparison_dubois} -------------------------------------------------------------------- Fig \[fig:comparison\_dt\] shows the vertical profiles of the vertical velocity and the potential temperature computed at the centre of the heat island for $H_0 = 28.93$, $57.87$, and $115.74$ W m$^{-2}$. In dimensionless variables, these plots are in very good agreement with the corresponding plots of [@Dubois2009]. The temperature, vertical velocity, and horizontal velocity decay rapidly with elevation $z$.
In Figure \[fig:comparison\_dt\]b and \[fig:comparison\_dt\]c, it is clear that the mixed layer height appears between $z = 1.2$ km and $1.5$ km, and that this height is reduced as $H_0$ increases. The values of the horizontal velocity $u$, the vertical velocity $w$, the potential temperature $\theta$, and the vorticity $\omega_y = \partial u/\partial z - \partial w/\partial x$ were compared between the results of the present model and those of [@Dubois2009]. The results presented in Table \[tab:comparison\_dubois\] indicate that the accuracy of the wavelet-based JFNK model is within $5\%$ to $10\%$ of the results of D & T for the UHI test case. Experimental investigation of penetrative convection ----------------------------------------------------- It was observed in the experimental study of [@Kimura1975] that the centre of the heat island circulation is located near the edges of the heat island when the surface heat flux is relatively weak, and the up-draft prevails over the entire urban area (e.g. Fig. \[fig:vorticitycontour\_uhi\_ex\]a). On the other hand, a strong narrow up-draft is concentrated above the centre of the island when the heat flux is relatively strong (e.g. Fig. \[fig:vorticitycontour\_uhi\_ex\]c). The simulation results in Fig \[fig:vorticitycontour\_uhi\_ex\]b and \[fig:vorticitycontour\_uhi\_ex\]d are in very good agreement with the corresponding experimental results. It was found that as $H_0$ increases, the center of the circulation moves toward the center of the urban region. In Fig. \[fig:vorticitycontour\_uhi\_ex\]b, the surface heat flux is $28.93$ W m$^{-2}$, and the center of the circulation is away from the center of the heat island; in Fig. \[fig:vorticitycontour\_uhi\_ex\]d, the surface heat flux is $115.74$ W m$^{-2}$, and the center of the circulation is at the center of the heat island. ![(Left) Two types of flow regimes observed in a laboratory experiment [@Kimura1975].
(a) The center of the circulation is located near the edge of the heating element; (c) the center of the circulation is near the center of the heating element. (Right) Numerical simulation results: (b) $H_0 = 28.93$ W m$^{-2}$, $\beta = 1$ K km$^{-1}$ (d) $H_0 = 115.74$ W m$^{-2}$, $\beta = 10$ K km$^{-1}$. The thick black horizontal line represents the heating element. Note that the centers of the circulation approximately correspond in $(a-b)$ and $(c-d)$. The thick arrow on the left panel corresponds approximately to $z=1$ on the right panel. The domain is made dimensionless by half the length of the heating element. \[fig:vorticitycontour\_uhi\_ex\]](experiment.pdf){width="\textwidth"} A comparison with field measurements ----------------------------------- To provide a primary assessment of the model for predicting the structure of a convective boundary layer (CBL), an idealized dry case is studied, which is driven solely by a surface heat flux. The result is analyzed with respect to the field measurements of the day-33 Wangara experiment [e.g. @Moeng84], where the temporal evolution of the mixed layer is studied. In a similar study, [@Moeng84] used LES to reproduce the Wangara data with a numerical model incorporating a parameterization of the moisture field, radiation effects, the Coriolis effect, and surface roughness, in addition to wind shear consistent with the field measurements. [@Mukherjee2016] provides an idealized numerical study of the CBL using a turbulence-resolving LES in the domain $3.6\times 3.6\times 1.9$ km$^3$. In comparison to the reference work mentioned above, the present study considers a simplified case in a two-dimensional domain. Here, the horizontal length of $100$ km is divided into $512$ segments ($\Delta x\approx 195$ m), and the vertical length of $2$ km is divided into $256$ segments ($\Delta z\approx 8$ m).
The horizontally $\langle\cdot\rangle$ and temporally $\overline{(\cdot)}$ averaged vertical profiles of the total potential temperature, [*i.e.*]{} $\langle\bar\theta\rangle = \langle\overline{\theta_0 + \beta z + \theta}\rangle$, for $H_0 = 462.96$ W m$^{-2}$ and $H_0 = 925.92$ W m$^{-2}$ are analyzed. We observed that the estimated mixed layer heights for $H_0 = 462.96$ W m$^{-2}$ and $H_0 = 925.92$ W m$^{-2}$ are about $0.9$ km and $0.8$ km, respectively, and that the inversion layer appears at about $0.9 - 1.0$ km and $0.8 - 0.9$ km from the ground, respectively. In Figure \[fig:ts\_av\_theta\_x\_0\], we compare the vertical profile of $\langle\bar\theta\rangle(z)$ of the present simulation with the similar profile observed in the Wangara day-33 experiment [[*e.g.*]{}, @Moeng84]. The displayed data corresponds to the surface heat flux of $H_0 = 925.92$ W m$^{-2}$. We can see the development of a well-mixed layer within $2$ h. The mixed layer in the turbulent region is capped by the inversion layer approximately at $z\sim 800$ m. At the bottom of the boundary layer, the unstable surface layer appears at $\sim 50$ m. The mixed layer height did not fully agree between the simulation and the measurement because the idealized simulation ignored realistic meteorological features, such as wind shear and the Coriolis effect. However, the vertical structure of the daytime boundary layer has been predicted with good accuracy.
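As a minimal illustration of the averaging operators $\langle\cdot\rangle$ and $\overline{(\cdot)}$, the Python sketch below forms $\langle\bar\theta\rangle(z)$ from a hypothetical array of perturbation temperatures; the background values $\theta_0$ and $\beta$ are illustrative, not taken from the simulation.

```python
import numpy as np

def mean_total_theta_profile(theta_pert, z, theta0=300.0, beta=1.0e-3):
    """Horizontally and temporally averaged total potential temperature.

    theta_pert: array of shape (nt, nx, nz) holding the perturbation θ.
    Returns ⟨θ̄⟩(z) = θ0 + βz + mean over t and x of θ, as in the text.
    beta is the background lapse rate [K m⁻¹]; values are illustrative.
    """
    return theta0 + beta * z + theta_pert.mean(axis=(0, 1))
```

The resulting profile is what is compared against the Wangara day-33 sounding in Figure \[fig:ts\_av\_theta\_x\_0\].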
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![ The vertical profile of the average potential temperature $\langle\bar\theta\rangle$. The temporal average is obtained in the time interval of $t\in[0$ h, $6$ h$]$. The prediction of the JFNK model at $H_0=925.92$ W m$^{-2}$ is compared with the field measurement (by digitally extracting the data from Fig 3b of [@Moeng84]), which represents the vertical profile of $\langle\bar\theta\rangle$ at the $12$-th hour of day $33$. The hourly evolution of the vertical temperature profile is also shown for clarity. Note the unit conversion to $^{\hbox{\tiny o}}$C, for consistency with [@Moeng84].
\[fig:ts\_av\_theta\_x\_0\]](potentialtem_moeng_4 "fig:"){width="8cm" height="10cm"} --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Summary and future developments {#sec:sfd} =============================== A symmetry-preserving JFNK method for accurate simulations of nonlinearly coupled atmospheric physics has been illustrated in this article. The method has been applied to simulate atmospheric convection using the nonhydrostatic atmospheric model equations. Unlike the linearized methods commonly adopted in atmospheric flow solvers, the JFNK method allows the nonlinear coupling of the physics to be modelled without any compromise. In atmospheric flow simulations, capturing this tight nonlinear coupling may lead to next-generation atmospheric models that correctly forecast weather events. In this work we have shown that preserving the skew-symmetry of the convective operator in the wavelet method brings two modelling benefits. First, short-wavelength modes are properly cascaded toward the subgrid-scale dissipation mechanism. This is done by ensuring that the discrete convective operator plays the same role as its continuous counterpart in the physics of the flow. Second, the subgrid-scale modes are properly transferred to the subgrid model, to be dissipated at the rate offered by the subgrid model.
In mathematical terms, an interplay between the skew-symmetric convection and the symmetric, negative-definite diffusion leads to small-scale motion in a turbulent flow. With this hypothesis of the energy cascade in mind, we have combined the JFNK method with a symmetry-preserving discretization that is based on the wavelet method. This article presents the efficiency and reliability of the JFNK methodology for the numerical simulation of nonhydrostatic atmospheric flows in the context of penetrative convection. We have chosen to simulate idealized dry convection for the presentation of the JFNK method because the evolution of plumes and thermals offers much insight into the more complicated dynamics of atmospheric motion. Because of the tight nonlinear coupling of atmospheric motions, the present authors envision performing simulations in such a manner that the discretized operators preserve the same symmetry properties and the same nonlinear coupling as the underlying differential operators. The main question becomes whether the symmetry-preserving, nonlinearly coupled discretization is appropriate for atmospheric simulations, since the atmospheric modelling community has accepted discretizations that replace the skew-symmetric convective operator with a positive-definite convective operator in their publicly available codes [see @Klemp2007; @Piotr2014; @Smolarkiewicz2017 and the refs therein]. This question has been addressed by simulations for which conservation of energy is highly desired to correctly forecast the intensification of particular weather events [see @Carpenter1990; @Klemp2007]. The tight nonlinear coupling of physics offered by the JFNK solver will bring full benefit to atmospheric simulations if a scale-adaptive subgrid model is considered [see @Alam2018]. The transition of scales, often labelled the ‘gray zone’, is currently one of the most challenging problems in the field of meteorology [@Wyngaard2004; @Kurowski2018].
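The energy argument for skew-symmetric convection can be verified in a few lines: if the discrete convective operator $D$ satisfies $D = -D^T$, then $u^T D u = 0$ for every grid function $u$, so convection neither creates nor destroys discrete energy. The check below uses periodic second-order central differences as a generic stand-in; it is not the wavelet discretization of this paper.

```python
import numpy as np

n, dx, c = 64, 1.0 / 64, 1.0

# Periodic, second-order central-difference convection: (c/2dx)(u[i+1] - u[i-1]).
D = np.zeros((n, n))
for i in range(n):
    D[i, (i + 1) % n] = c / (2 * dx)
    D[i, (i - 1) % n] = -c / (2 * dx)

# Skew-symmetry D = -D^T implies u^T D u = 0 for every u, so the
# convective term contributes nothing to the discrete energy budget.
u = np.sin(2 * np.pi * dx * np.arange(n)) + 0.3
print(np.abs(D + D.T).max(), abs(u @ (D @ u)))
```

The first printed value is exactly zero and the second is of the order of machine rounding, regardless of the grid function chosen.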
The findings of this article encourage further testing of the JFNK method with a more appropriate scale-adaptive subgrid model that adapts the cut-off scale dynamically as the characteristic scale exhibits transitions. For example, a balance between the local production and the dissipation of turbulence occurs in the surface layer at much smaller scales than the characteristic scale of eddies in the outer layer. In future research, we plan to develop wavelet-based preconditioners based on lifting schemes, which would discretize the differential operators onto a hierarchy of detail wavelet spaces $\mathcal V^{j+1}\backslash\mathcal V^j$ instead of the approximation space $\mathcal V^j$ considered in the present work. Acknowledgements {#acknowledgements .unnumbered} ================ JA acknowledges financial support from the Natural Sciences and Engineering Research Council (NSERC) in the form of a Discovery Grant. The article benefited from the comments of two anonymous reviewers. This research was enabled in part by support provided by SHARCNET (www.sharcnet.ca) and Compute Canada (www.computecanada.ca).
--- abstract: 'We show how the Gillespie algorithm, originally developed to describe coupled chemical reactions, can be used to perform numerical simulations of a granular intruder particle colliding with thermalized bath particles. The algorithm generates a sequence of collision “events” separated by variable time intervals. As input, it requires the position-dependent flux of bath particles at each point on the surface of the intruder particle. We validate the method by applying it to a one-dimensional system for which the exact solution of the homogeneous Boltzmann equation is known and investigate the case where the bath particle velocity distribution has algebraic tails. We also present an application to a granular needle in a bath of point particles where we demonstrate the presence of correlations between the translational and rotational degrees of freedom of the intruder particle. The relationship between the Gillespie algorithm and the commonly used Direct Simulation Monte Carlo (DSMC) method is also discussed.' address: - '$^1$Department of Chemistry and Biochemistry, Duquesne University, Pittsburgh, PA 15282-1530' - '$^2$Laboratoire de Physique Théorique des Liquides, Université Pierre et Marie Curie, 4, place Jussieu, 75252 Paris Cedex 05, France' author: - 'J. Talbot$^1$ and P. Viot$^2$' title: Application of the Gillespie algorithm to a granular intruder particle --- Introduction ============ Kinetic theories of granular systems are usually constructed starting from the Boltzmann equation or one of its variants[@GG01; @PB03]. Rarely is it possible to obtain an exact, analytic solution of the Boltzmann equation[@PTV06]. More typically, approximations are required. It is then highly desirable to assess the quality of the theoretical prediction by comparing it with accurate numerical solutions of the Boltzmann equation.
It is the purpose of this paper to show that, besides the celebrated Direct Simulation Monte Carlo (DSMC) method introduced by Bird[@B94], there exists an alternative method, originally proposed by Gillespie[@G76; @G77] to study coupled chemical reactions. One class of system that is amenable to a Boltzmann approach and that has received considerable attention in recent years consists of a single intruder (or tracer) particle in a bath of thermalized particles[@DB05; @BRM05; @MP99; @VT04; @GTV05; @PVTW06]; these studies show, in particular, the absence of equipartition. This phenomenon has been observed experimentally in two-dimensional[@FM02] and three-dimensional[@WP02] granular gases. The intruder-bath particle collisions are dissipative, while the bath particles have an ideal gas structure and a specified velocity distribution characterized by a fixed temperature. Since Gaussian velocity statistics are quite rare, it is important that numerical and theoretical approaches be able to treat a general distribution. We begin by outlining the Gillespie algorithm and how it can be used to obtain a numerical solution of the Boltzmann equation. We then illustrate the application of the algorithm with two examples. The first is a one-dimensional system consisting of an intruder particle in a bath of thermalized point particles. If the bath particles have a Gaussian velocity distribution, one recovers the exact Gaussian intruder velocity distribution function [@MP99] with a granular temperature that is smaller than the bath temperature. In addition, when we impose a power-law velocity distribution on the bath particles, we find the same form for the intruder particle distribution. The second system is two-dimensional and consists of a needle intruder in a bath of point particles. It is known that equipartition does not hold between different degrees of freedom of the needle[@VT04].
Here we use the Gillespie algorithm to obtain a new physical result, namely the presence of correlations between the translational and rotational degrees of freedom of the needle. Algorithm ========= The model consists of a single intruder particle that undergoes a series of collisions with the surrounding bath particles. Let $P(t)$ denote the probability that no event (collision) has occurred in the interval $(0,t)$. Then $$P(t+\Delta t)=P(t)(1-\phi(t)\Delta t+O(\Delta t^2))$$ where $\phi(t)$ is the event rate in general and the collision rate in this application. Expanding the lhs to first order in $\Delta t$ and taking the limit $\Delta t\to0$ leads to $$\frac{d\ln P(t)}{dt}=-\phi(t)$$ Integrating and using the boundary condition $P(0)=1$ gives $$\label{eq:wait} P(t)=\exp\bigl[-\int_0^{t} \phi(t')dt'\bigr]$$ If $\phi(t)$ is constant between collisions this expression takes the simple form: $$\label{eq:waits} P(t)=\exp(-\phi t)$$ At the end of the waiting time an event (collision) occurs that alters the value of $\phi$ or the way that $\phi$ evolves with time. We now consider two specific applications. In the first, the flux of colliding particles is constant between collisions and the simpler form, Eq \[eq:waits\], may be used. Our second example, an anisotropic object that rotates between collisions, requires the use of the more general form, Eq \[eq:wait\]. Applications ============ One-dimensional system ---------------------- Consider first a one-dimensional system consisting of an intruder particle of mass $M$ moving in a bath of thermalized point particles each of mass $m$. The dynamics of the intruder is described by the Boltzmann equation. The velocity distribution of the bath particles is denoted by $f(v,a)$, where $a$ is related to the bath temperature, $T_B$.
The flux of particles that collide with the right hand side of the intruder particle moving with a velocity $v_1$ is: $$\begin{aligned} \phi_+(v_1)&=&\rho\int_{-\infty}^{v_1}(v_1-v)f(v,a)dv,\end{aligned}$$ where $\rho$ is the number density of the bath particles. Similarly the flux on the left hand side is: $$\begin{aligned} \phi_-(v_1)&=&\rho\int_{v_1}^{\infty}(v-v_1)f(v,a)dv\end{aligned}$$ and the total collision rate is $$\begin{aligned} \phi(v_1)&=&\phi_+(v_1)+\phi_-(v_1)\end{aligned}$$ Let $F(v_1,t)$ denote the time-dependent distribution function of the tracer particle velocity. This evolves according to $$\label{eq:master} \frac{\partial F(v_1,t)}{\partial t}=-\phi(v_1)F(v_1,t)+\int_{-\infty}^{\infty}\psi(v\to v_1)F(v,t)dv$$ which is equivalent to the homogeneous Boltzmann equation[@PTV06]. The first term on the rhs is a loss term corresponding to the probability per unit time that a tracer particle with velocity $v_1$ undergoes a collision (necessarily to a different velocity). The second, or gain, term contains the function $\psi(v\to v_1)$, which is the rate at which tracer particles moving with velocity $v$ are transformed (by collisions with the bath particles) to those moving with velocity $v_1$. For collisions with the rhs of the intruder particle the explicit expression is $$\psi_+(v\to v_1)=\rho\left(\frac{1+M}{1+\alpha}\right)^2(v-v_1)f(v+\frac{1+M}{1+\alpha}(v_1-v),a),$$ with $v_1<v$. A similar expression applies for collisions on the left hand side of the intruder particle. We note that a representation of the Boltzmann equation similar to Eq \[eq:master\] was employed by Puglisi et al. [@PVTW06]. The Gillespie algorithm provides a numerical solution of Eq \[eq:master\], [*including*]{} the transient case when the derivative is not equal to zero. If the intruder particle moves with a constant velocity the flux is itself (on average) constant.
A waiting time consistent with Eq \[eq:wait\] is then generated: $$\label{eq:deltat} \Delta t=-\ln(\xi_1)/\phi(v_1),$$ where $0< \xi_1<1$ is a uniform random number. Given a collision at time $t$, the probability that the collision occurs on the right is given by $$\label{eq:side} p(+|t)=\phi_+(v_1)/ \phi(v_1).$$ This is sampled by generating a second uniform random number $0<\xi_2<1$. If $\xi_2<p(+|t)$ the collision is on the right hand side; otherwise it takes place on the left hand side. Having chosen the side, it is then necessary to sample the velocity of the bath particle that collides with this side. The probability distribution function of the colliding particle’s velocity depends on the collision side: $$\begin{aligned} \label{eq:vdist1} g_{+}(v,v_1)=\left\{ \begin{array}{ll} (v_1-v)f(v,a)/\phi_+(v_1) & \mbox{if $v\leq v_1$} \\ 0 & \mbox{otherwise} \end{array} \right. \end{aligned}$$ $$\begin{aligned} \label{eq:vdist2} g_{-}(v,v_1)=\left\{ \begin{array}{ll} (v-v_1)f(v,a)/\phi_{-}(v_1) & \mbox{if $v\geq v_1$} \\ 0 & \mbox{otherwise} \end{array} \right. \end{aligned}$$ The results presented so far apply to any bath velocity distribution, and the behavior depends strongly on the exact form [@PVTW06]. For a Gaussian distribution, $$\label{eq:maxwell} f(v,a)=\sqrt{\frac{a}{\pi}}\exp(-av^2),\;\;-\infty<v<\infty$$ where $a=m/(2k_BT_B)$. The collision fluxes on each side of the tracer particle are $$\phi_{\pm}(v_1)= \frac{\rho}{2}(\pm v_1+\frac{1}{\sqrt{\pi a}}\exp(-av_1^2)+v_1{\rm erf}(v_1\sqrt{a})),$$ and the total collision rate is $$\phi(v_1)=\rho(\frac{1}{\sqrt{\pi a}}\exp(-av_1^2)+v_1{\rm erf}(v_1\sqrt{a})).$$ This function is shown in Figure \[fig:flux\]. The velocity distribution of the colliding particles, Eq \[eq:vdist1\], in the case of a bath particle Gaussian velocity distribution is plotted in Fig \[fig:vdist\] for several values of the intruder particle velocity.
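The closed-form fluxes above can be checked against the defining integrals. A small Python sketch (with $\rho = 1$ by default; the quadrature cutoff and resolution are arbitrary choices, ample for $a \approx 1$):

```python
import math

def phi_plus(v1, a, rho=1.0):
    """φ₊(v1): closed-form collision flux on the right of the intruder."""
    return 0.5 * rho * (v1 + math.exp(-a * v1 * v1) / math.sqrt(math.pi * a)
                        + v1 * math.erf(v1 * math.sqrt(a)))

def phi_minus(v1, a, rho=1.0):
    """φ₋(v1): closed-form collision flux on the left of the intruder."""
    return 0.5 * rho * (-v1 + math.exp(-a * v1 * v1) / math.sqrt(math.pi * a)
                        + v1 * math.erf(v1 * math.sqrt(a)))

def phi_plus_quad(v1, a, rho=1.0, lo=-10.0, n=20000):
    """Trapezoid quadrature of φ₊ = ρ ∫_{-∞}^{v1} (v1 − v) f(v) dv."""
    h = (v1 - lo) / n
    gauss = lambda v: math.sqrt(a / math.pi) * math.exp(-a * v * v)
    s = 0.5 * (v1 - lo) * gauss(lo)   # half-weight endpoint; (v1−v)f vanishes at v = v1
    for k in range(1, n):
        v = lo + k * h
        s += (v1 - v) * gauss(v)
    return rho * s * h
```

Note that $\phi_+(v_1) + \phi_-(v_1)$ reproduces the total collision rate $\phi(v_1)$ above, with the $\pm v_1$ terms cancelling.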
Sampling of this distribution is accomplished with an acceptance-rejection method in which the sampling region is adapted to the velocity of the intruder. A rare, but possible, case is to select a collision on the right hand side when the surface is moving rapidly to the left. The distribution of colliding particles is sharply peaked at $v_1$ and it is necessary to take the range of $v$ from $v_1-2$ to $v_1$. If the surface is moving to the right, the distribution of colliding particle’s velocity is more symmetric and one can sample $v$ in the range $-3\leq v \leq 3$. Finally, when the intruder particle moving with a velocity $v_1$ collides with a bath particle of velocity $v$ the velocity of the former changes instantaneously to $$\label{collisionrule} v_1'=v_1+\frac{1+\alpha}{1+M}(v-v_1),$$ where $0<\alpha\leq 1$ is the coefficient of restitution and we have taken $m=1$ for convenience. The complete algorithm describing one iteration can now be summarized using pseudocode: - Generate a waiting time using Eq \[eq:deltat\]. - $t \to t + \Delta t$\ while $ ( t_{\rm out} < t)$\ $\{ t_{\rm out} \gets t_{\rm out}+\delta t$\ accumulate averages$\}$ - Choose the collision side using Eq \[eq:side\]. - The velocity of the colliding bath particle $v$ is sampled from the distribution given by Eq \[eq:vdist1\] or Eq \[eq:vdist2\]. - The post-collisional velocity of the intruder particle is determined from Eq \[collisionrule\]. Although the time increment between events is variable, averages (mean square velocities and velocity distributions) must be computed at equal time intervals. In step 2, $t$ denotes the total elapsed time and $\delta t$ is the constant time interval between the accumulation of quantities to be averaged. A convenient choice for $\delta t$ is the average collision time. 
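The five steps above can be assembled into a short, self-contained Python sketch for a Gaussian bath with $\rho = a = m = 1$. Two simplifications are assumptions made here for brevity: the rejection sampler uses a fixed 8-wide window (instead of the adaptive ranges described in the text), and the average is accumulated with the waiting times as weights rather than on a fixed-$\delta t$ grid, which gives the same stationary average.

```python
import math
import random

RHO, A = 1.0, 1.0                     # bath density and a = m/(2 k_B T_B)

def fluxes(v1):
    """Right/left collision fluxes φ± for the Gaussian bath (closed form)."""
    g = math.exp(-A * v1 * v1) / math.sqrt(math.pi * A)
    e = v1 * math.erf(v1 * math.sqrt(A))
    return 0.5 * RHO * (v1 + g + e), 0.5 * RHO * (-v1 + g + e)

def sample_bath_velocity(v1, side, rng):
    """Acceptance-rejection sampling of g_± ∝ |v1 − v| exp(−A v²).
    The fixed 8-wide window is an assumption (it truncates a negligible
    Gaussian tail); the envelope height is the analytic maximum of the target."""
    s = -1.0 if side == '+' else 1.0
    vstar = 0.5 * (v1 + s * math.sqrt(v1 * v1 + 2.0 / A))  # stationary point
    hmax = abs(v1 - vstar) * math.exp(-A * vstar * vstar)
    lo, hi = (v1 - 8.0, v1) if side == '+' else (v1, v1 + 8.0)
    while True:
        v = rng.uniform(lo, hi)
        if rng.random() * hmax <= abs(v1 - v) * math.exp(-A * v * v):
            return v

def simulate(n_coll, M=1.0, alpha=0.5, seed=1):
    """Run n_coll collisions; return the time-weighted <v1²> of the intruder."""
    rng = random.Random(seed)
    v1 = t_sum = v2_sum = 0.0
    for i in range(n_coll):
        fp, fm = fluxes(v1)
        dt = -math.log(1.0 - rng.random()) / (fp + fm)  # waiting time
        if i > n_coll // 10:                            # skip the transient
            t_sum += dt
            v2_sum += v1 * v1 * dt
        side = '+' if rng.random() < fp / (fp + fm) else '-'
        v = sample_bath_velocity(v1, side, rng)
        v1 += (1.0 + alpha) / (1.0 + M) * (v - v1)      # collision rule
    return v2_sum / t_sum
```

With $M = 1$, $\alpha = 0.5$, and $a = 1$ (so $T_B = 1/2$ with $k_B = 1$), the Martin-Piasecki result quoted below gives $T/T_B = 0.6$, i.e. $\langle v_1^2\rangle = 0.3$, which the sketch should reproduce to within statistical error.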
Martin and Piasecki [@MP99] obtained an analytic solution of the homogeneous stationary Boltzmann equation and showed that the velocity distribution of the intruder particle in the steady state is Gaussian and characterized by a temperature, $T$, that is different from the bath temperature $T_B$. Specifically, the two are related by: $$\label{eq:MP} \frac{T}{T_B}=\frac{1+\alpha}{2+(1-\alpha)/M},$$ so that for $\alpha<1$, $T<T_B$. Figure \[fig:1dresults\] shows excellent agreement between the simulation and this exact result. Since the existence of solutions of the Boltzmann equation with power-law tails has been shown recently [@BMM05; @BM05], it is interesting to investigate this phenomenon in the intruder particle system using the Gillespie algorithm. Therefore, we consider the case where the bath particle velocity distribution function takes the following power-law form: $$\label{eq:2} f(v,a)=\frac{\sqrt{2a}}{\pi}\frac{1}{1+a^2v^4}$$ The granular temperature of the bath is well defined since the average of the square velocity is finite, $\langle v^2\rangle=1/a$. Although the Boltzmann equation can no longer be solved in general for an arbitrary bath particle velocity distribution (unlike the just-discussed Gaussian case), an exact solution is possible when the mass ratio is equal to the coefficient of restitution, $M/m=\alpha$. For this specific case one can show that the stationary solution for the intruder particle is exactly given by Eq.(\[eq:2\])[@PTV06] with a granular temperature equal to the bath temperature multiplied by the coefficient of restitution. Figure \[fig:8\] shows the variation of the granular temperature of the intruder particle with $\alpha$ for $M/m=1$ and $M/m=0.5$. When the intruder is light, $M<m$, the granular temperature of the intruder particle is close to the result obtained with the Gaussian bath when $\alpha>M/m$ (see Fig. \[fig:8\]).
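The claim $\langle v^2\rangle = 1/a$ for the power-law bath can be verified numerically. A Python sketch (cutoff and resolution are arbitrary; note that the second-moment integrand decays only as $v^{-2}$, hence the large cutoff):

```python
import numpy as np

def power_law_moments(a, vmax=500.0, n=2_000_001):
    """Normalization and second moment of f(v,a) = √(2a)/π · 1/(1 + a²v⁴)
    by simple Riemann summation; the truncation error of the second moment
    is of order 2√2/(π a^{3/2} vmax)."""
    v = np.linspace(-vmax, vmax, n)
    h = v[1] - v[0]
    f = np.sqrt(2 * a) / np.pi / (1 + a**2 * v**4)
    return f.sum() * h, (v * v * f).sum() * h
```

For any $a > 0$ the normalization is 1 and the second moment is $1/a$, up to the stated truncation error.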
Figure \[fig:9\] displays the intruder particle velocity distribution function for different values of the coefficient of restitution $0.0\leq\alpha\leq1.0$ when $M=1/2$. The exact solution is known for $\alpha=0.5$, i.e. Eq. (\[eq:2\]) with a granular temperature equal to $\alpha$. The simulation results show that in all cases the velocity distribution functions exhibit a power-law tail (see inset of Fig. \[fig:9\]) with an exponent independent of $\alpha$ and equal to $-4$. Needle ------ In this application a needle intruder, confined to a two-dimensional plane, is immersed in a fluid of point particles, each of mass $m$, at a density $\rho$. The needle is characterized by its mass $M$, length $L$ and moment of inertia $I$ and its state is specified by its angular and center of mass velocities, $\omega$ and ${\bf v_1}$, respectively (see Fig. \[fig:needle\]). The velocity distribution of the point particles is again given by Eq \[eq:maxwell\]. Two main modifications of the Gillespie algorithm are required in order to simulate this system. First, if $v_1\neq0$ the flux is not (on average) constant between collisions. Second, it is necessary to select the point of impact on the needle. The collision flux on both sides of the needle at a point $-L/2\leq x\leq L/2$ is $\phi({\bf v_1}.{\bf n}+\omega x)$. Unlike the case considered above, this flux is time-dependent if $\omega\neq 0$ since the normal vector ${\bf n} $ rotates. Specifically $$\label{eq:vn} v_n={\bf v_1}.{\bf n}(t)=v_{1y}\cos(\omega t+\theta_0)-v_{1x}\sin(\omega t+\theta_0),$$ where $\theta_0$ is the orientation of the needle at $t=0$.
The total flux over the entire length of the needle is $$\label{eq:fluxn} \Phi(t) =\int_{-L/2}^{L/2}\phi({\bf v_1}.{\bf n}(t)+\omega x)dx.$$ For the sake of simplicity we take $a=1$ and $L=1$ and we obtain $$\begin{aligned} \Phi(t)&=&\rho \left(\frac{v_n^2}{2\omega}+\frac{\omega}{8}+\frac{v_n}{2} +\frac{1}{4\omega}\right)erf \left(v_n+\frac{\omega}{2}\right)\nonumber\\ &+&\rho \left(\frac{v_n^2}{2\omega}+\frac{\omega}{8}-\frac{v_n}{2} +\frac{1}{4\omega}\right)erf\left(-v_n+\frac{\omega}{2}\right)\nonumber\\ &+&\rho \frac{e^{-v_n^2-\frac{\omega^2}{4}}}{4\sqrt{\pi}}\left[e^{v_n\omega}\left(1-\frac{2v_n}{\omega}\right)+ e^{-v_n\omega}\left(1+\frac{2v_n}{\omega}\right)\right].\end{aligned}$$ A collision time is generated by solving the equation $$\int_0^tdt'\Phi(t')=-\ln(\xi_3).$$ Since the integral cannot be performed analytically, we use the Newton-Raphson method: $$\label{eq:1} t_1=t-\frac{\int_0^t\Phi(t')dt'+\ln(\xi_3)}{\Phi(t)}$$ with the initial guess $t=-\frac{\ln(\xi_3)}{\Phi(0)}$. The procedure is iterative, i.e. $t$ is replaced by $t_1$ in Eq.(\[eq:1\]), and so on, until $|\int_0^{t_1}\Phi(t')dt'+\ln(\xi_3)|$ is smaller than the required precision. In general the convergence is fast and only a few iterations are required. Occasionally, when $\Phi$ is small, the Newton-Raphson method oscillates between two “stable” positions and does not converge. When this situation arises, we switch to a bisection procedure. With this modification, the method seems to be robust. Once the time to collision has been selected, one updates the normal velocity of the center-of-mass of the needle using Eq \[eq:vn\].
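The root search just described can be sketched generically. Here `cum_flux` and `flux` are assumed callables evaluating $\int_0^t\Phi(t')dt'$ and $\Phi(t)$ (hypothetical stand-ins for the expressions above); the bracket maintained during the Newton iterations provides the bisection fallback:

```python
import math

def collision_time(cum_flux, flux, xi, tol=1e-12, max_iter=100):
    """Solve cum_flux(t) = -ln(xi) by Newton-Raphson with a bisection
    fallback.  cum_flux must be increasing (Phi is a positive flux)."""
    target = -math.log(xi)
    t = target / flux(0.0)      # initial guess assuming a constant flux
    lo, hi = 0.0, None          # bracket maintained for the fallback
    for _ in range(max_iter):
        f = cum_flux(t) - target
        if abs(f) < tol:
            return t
        if f > 0.0:
            hi = t
        else:
            lo = t
        t_next = t - f / flux(t)            # Newton-Raphson step, Eq [eq:1]
        if t_next <= lo or (hi is not None and t_next >= hi):
            # Newton oscillates or left the bracket: bisect instead
            t_next = 0.5 * (lo + hi) if hi is not None else 2.0 * t
        t = t_next
    return t
```

For instance, with the toy flux $\Phi(t)=1+t$ and $\xi_3=e^{-2}$ the equation $t+t^2/2=2$ has the root $\sqrt5-1$, which the iteration recovers in a handful of steps.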
In order to choose the position of the impact, one has to calculate the probability that the collision occurs at a distance $x$ from the center of mass of the needle, irrespective of the velocity of the bath particle, at a given time $t$: $$p(x|t)= \frac{\phi (v_n+x\omega)}{\Phi(t)},\;\;-1/2\leq x \leq 1/2.$$ This probability is clearly not uniform over the length of the needle, but since it is convex, the maximum is obtained for $x=1/2$ or $x=-1/2$ (if $v_n=0$ the probability is a maximum at both ends). We select $x$ using a standard acceptance-rejection method. Once the position of the impact has been decided, one has to choose whether the bath particle collides on the right- or left-hand side of the needle. The probability of the former event is $$p(+|x,t)=\frac{\phi_+(v_n+x\omega)}{\phi(v_n+x\omega)}.$$ One chooses a random number $\xi$ between $0$ and $1$. If $\xi<p(+|x,t)$, the particle collides with a particle from the right, otherwise the collision is on the left. Next the velocity of the colliding bath particle must be selected. The probability that the colliding particle has a velocity between $v$ and $v+dv$ is $g_{\pm}(v,v_n+x\omega)dv$. It is even more important to sample this distribution carefully than in the one-dimensional case, as more extreme velocities are encountered in this system. Finally, the needle velocity and angular velocity are updated using $$\begin{aligned} {\bf v_1}'&=&{\bf v_1}+{\bf n}\frac{\Delta p}{M}\\ I\omega'&=&I\omega+x\Delta p\end{aligned}$$ where $$\Delta p=-\frac{(1+\alpha){\bf g}.{\bf n}}{\frac{1}{m}+\frac{1}{M}+\frac{x^2}{I}},$$ ${\bf n}$ is a unit vector normal to the length of the needle and ${\bf g}={\bf v_1}-{\bf v}+\omega x{\bf n}$ is the relative velocity at the point of impact (the velocity of the colliding bath particle also changes, but we do not need to know the new value). It is necessary to simulate a few hundred thousand to several million collisions in order to obtain good estimates for the properties of interest.
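The impact-point selection can be sketched as a standard acceptance-rejection loop. Here `phi` is a hypothetical stand-in for the local collision flux at a point of the needle, and we use the fact, noted above, that the maximum of $p(x|t)$ is attained at one of the two ends:

```python
import random

def sample_impact_point(phi, v_n, omega, L=1.0):
    """Acceptance-rejection sampling of the impact point x on the needle.
    phi(c) is the collision flux at local normal velocity c; its maximum
    over -L/2 <= x <= L/2 is attained at an end of the needle."""
    p_max = max(phi(v_n - 0.5 * L * omega), phi(v_n + 0.5 * L * omega))
    while True:
        x = (random.random() - 0.5) * L      # uniform proposal along the needle
        if random.random() * p_max <= phi(v_n + x * omega):
            return x                         # accepted impact point
```

The envelope `p_max` is exact for a convex flux profile, so no acceptance-probability overflow can occur.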
The convergence of the simulation is slower when the mass of the needle is much bigger than the bath particle mass. We have previously used this algorithm to confirm a kinetic theory prediction that, when the coefficient of restitution is smaller than unity, the temperature of the bath is larger than the translational granular temperature, which is in turn larger than the rotational granular temperature [@VT04]. Here we present new results for the cross-correlation between the two degrees of freedom of the needle as a function of the coefficient of restitution, $\alpha$, and for different values of the mass ratio $M/m$. Specifically, we have calculated $$\label{eq:cor} R=\frac{<v^2\omega^2>}{<v^2><\omega^2>},$$ which is equal to one in equilibrium systems and obviously independent of the mass ratio. Conversely, one observes that for small mass ratios in an inelastic system, there is a positive correlation that increases with decreasing $\alpha$: see Figure \[fig:R\]. We expect this to be a generic feature of anisotropic granular particles in any dimension, regardless of the bath particle velocity distribution. Unlike the one-dimensional model, where the velocity distribution of the intruder particle is Gaussian for all values of the coefficient of restitution, the translational and angular velocity distributions of the granular needle are never strictly Gaussian (except for elastic collisions). For a large range of values of $M/m$ and $\alpha$, however, the Gaussian is a very accurate approximation. It is only for a light needle and highly inelastic collisions that deviations start to become apparent: see Figure \[fig:om\]. DSMC versus Gillespie algorithm =============================== The two simulation methods provide an exact numerical solution of a Boltzmann equation (they can also be used for inelastic Maxwell models [@BK03; @BM05; @BMM05]). The DSMC algorithm has been described in several articles [@B94; @MS00].
For convenience we describe here the version that would be applied to the one-dimensional system discussed in section 3.1. Velocities sampled from a Gaussian distribution, Eq \[eq:maxwell\], are assigned to $n_{\rm bath}$ bath particles. A side of the intruder particle is then selected at random: $\sigma_{i1}=+1$ for a collision on the left, $\sigma_{i1}=-1$ for a collision on the right. A bath particle, $i$, is then selected randomly. The collision is accepted with probability $\Theta(v_{i1}\sigma_{i1})\omega_{i1}/ \omega_{max}$, where $\Theta(x)$ is the Heaviside function, $\omega_{i1}=2\rho |v_{i1}|$ and $\omega_{max}$ is an upper-bound estimate of the probability per unit time that a particle collides. If the collision is accepted, a post-collisional velocity, computed from Eq \[collisionrule\], is assigned to the intruder particle. If $\omega_{i1}>\omega_{max}$ the estimate of the latter is updated: $\omega_{max}\gets\omega_{i1}$. We have implemented this algorithm for the one-dimensional intruder described in section 3.1. We obtain the same results with comparable computational effort. Conclusion ========== We have shown how the Gillespie algorithm can be used to obtain an exact numerical solution of the Boltzmann equation for an intruder particle in a bath of particles with an arbitrary velocity distribution. We used the method to obtain new results for a one-dimensional system consisting of an intruder particle in a bath with a power-law distribution. We also used it to demonstrate, for the first time, the presence of correlations between the translational and rotational momenta of an anisotropic particle. Although the results presented here apply to the steady state, the method is equally valid for the transient case. It is clear that the Gillespie algorithm offers no significant computational advantage over the DSMC method for these intruder particle systems.
It is of interest, however, that two apparently dissimilar methods can be applied to the same physical system. Finally, we note that, as with DSMC, the Gillespie method can be easily generalized to three-dimensional systems. We thank Alexis Burdeau, Jarosław Piasecki and Thorsten Pöschel for helpful discussions. REFERENCES {#references .unnumbered}
==========

Pöschel T and Luding S, eds 2001 [*Granular Gases*]{} (Berlin: Springer)

Pöschel T and Brilliantov N 2003 [*Granular Gas Dynamics*]{} (Berlin: Springer)

Piasecki J, Talbot J and Viot P 2006 [*Physica A*]{} in press

Bird G 1994 [*Molecular gas dynamics and the direct simulation of gas flows*]{} (Oxford: Clarendon Press)

Gillespie D T 1976 [*J. Comput. Phys.*]{} [**22**]{} 403–434

Gillespie D T 1977 [*J. Phys. Chem.*]{} [**81**]{} 2340–2361

Dufty J W and Brey J J 2005 [*New Journal of Physics*]{} [**7**]{} 20

Brey J J, Ruiz-Montero M J and Moreno F 2005 [*Phys. Rev. Lett.*]{} [**95**]{} 098001

Martin P A and Piasecki J 1999 [*Europhys. Lett.*]{} [**46**]{} 613

Viot P and Talbot J 2004 [*Phys. Rev. E*]{} [**69**]{} 051106

Gomart H, Talbot J and Viot P 2005 [*Phys. Rev. E*]{} [**71**]{} 051306

Puglisi A, Visco P, Trizac E and van Wijland F 2006 [*Phys. Rev. E*]{} [**73**]{} 021301

Feitosa K and Menon N 2002 [*Phys. Rev. Lett.*]{} [**88**]{} 198301

Wildman R D and Parker D J 2002 [*Phys. Rev. Lett.*]{} [**88**]{} 064301

Ben-Naim E, Machta B and Machta J 2005 [*Phys. Rev. E*]{} [**72**]{} 021302

Ben-Naim E and Machta J 2005 [*Phys. Rev. Lett.*]{} [**94**]{} 138001

Ben-Naim E and Krapivsky P 2003 [*Granular Gas Dynamics*]{} (Berlin: Springer) p 64

Montanero J M and Santos A 2000 [*Granular Matter*]{} [**2**]{} 53
--- abstract: 'We establish an exponential inequality for degenerated $U$-statistics of order $r$ of i.i.d. data. This inequality gives a control of the tail of the maximum of the absolute values of the $U$-statistic by the sum of two terms: an exponential term and one involving the tail of $h{\left(X_1,\dots,X_r\right)}$. We also give a version for not necessarily degenerated $U$-statistics having a symmetric kernel and furnish an application to the convergence rates in the Marcinkiewicz law of large numbers. An application to the invariance principle in Hölder spaces is also considered.' author: - Davide Giraudo title: 'An exponential inequality for $U$-statistics of i.i.d. data' --- Exponential inequalities for $U$-statistics =========================================== Goal of the paper ----------------- Let ${\left(\Omega,{\mathcal{F}},{\mathbb P}\right)}$ be a probability space, ${\left(S,{\mathcal{S}}\right)}$ be a measurable space and let $r{\geqslant}1$ be an integer. Let also $h\colon S^r\to {\mathbb R}$ be a measurable function and ${\left(X_i\right)}_{i{\geqslant}1}$ an i.i.d. sequence where $X_i\colon \Omega\to S$. The $U$-statistic (of order $r$) of kernel $h$ and data ${\left(X_i\right)}_{i{\geqslant}1}$ is defined as $$\label{eq:definition_U_stat} U_{r,n}{\left(h\right)}:=\sum_{i\in I_n^r}h{\left(X_{i_1},\dots, X_{i_r}\right)},$$ where $$\label{eq:definition_de_I_n_r} I_n^r:={\left\{ i\in{\mathbb Z}^r, 1{\leqslant}i_1<i_2<\dots<i_r{\leqslant}n \right\}}.$$ When there is no confusion about the kernel or the order, we will simply write $U_n$. Our goal is to control the tail of $U_n$. More precisely, we would like to give a control on the following quantity $${\mathbb P}{\left\{ \max_{r{\leqslant}n{\leqslant}N}{\left|U_n\right|}>N^{r/2}x\right\}}, \quad N{\geqslant}r, x>0,$$ independently of $N$, and with the help of a functional of the tail of $h{\left(X_1,\dots,X_r\right)}$.
This type of inequality has been studied in the case where the kernel $h$ is bounded in [@MR0144363; @MR1235426; @MR1323145; @MR1130366; @MR2791056; @MR2336595]. An extension to the case of $U$-statistics of the form $\sum_{i\in I_n^r}h_i{\left(X_{i_1}, \dots,X_{i_r}\right)}$ has been considered in [@MR1857312; @MR2073426]. We will then provide applications to the convergence rates in the law of large numbers. Moreover, the established inequality is also a good tool to verify tightness criteria for partial sum processes in Hölder spaces. Statement for degenerated $U$-statistics ---------------------------------------- We will first assume that this $U$-statistic is degenerated, in the sense that for all $s_1,\dots,s_{r-1}\in S$ and for all $q\in {\left\{ 1,\dots,r\right\}}$, $$\label{eq:condition_de_degeneration} {\mathbb E\left[h{\left(v_q{\left(s_1,\dots,s_{r-1},X_0\right)}\right)}\right]}=0,$$ where $v_q{\left(s_1,\dots,s_{r-1},X_0\right)}$ is a vector of elements of $S^r$ whose $q-1$ first components are respectively $s_1,\dots,s_{q-1}$, the component $q$ is $X_0$ and the remaining ones are $s_{q+1},\dots,s_{r-1}$. Notice that here, we are not making any symmetry assumption on the kernel $h$. \[thm:inegalite\_deviation\_U\_stats\] Let $r{\geqslant}1$ be an integer, ${\left(S,{\mathcal{S}}\right)}$ be a measurable space, $h\colon S^r\to {\mathbb R}$ be a measurable function (with $S^r$ endowed with the product $\sigma$-algebra) and let ${\left(X_i\right)}_{i{\geqslant}1}$ be an i.i.d. sequence of $S$-valued random variables. Suppose that \[eq:condition\_de\_degeneration\] holds for all $s_1,\dots,s_{r-1}\in S$ and for all $q\in {\left\{ 1,\dots,r\right\}}$.
Then the following inequality holds for all positive $x$ and all $y$ such that $x/y>3^r$: $$\begin{gathered} \label{eq:inegalite_exp_cas_degenere} {\mathbb P}{\left\{ \max_{r{\leqslant}n{\leqslant}N}{\left|U_n\right|}>N^{r/2}x\right\}} {\leqslant}A_{r}\exp{\left(- \frac 12{\left(\frac{x}y\right)}^{ \frac{2}{r }}\right)} \\+B_{r}\int_{1 }^{+\infty} {\mathbb P}{\left\{ {\left|h{\left(X_1,\dots,X_r\right)}\right|}> y u ^{1/2} C_{r} \right\}} {\left(1+\ln{\left( u\right)}\right)}^{q_{r}}\mathrm du,\end{gathered}$$ where $$A_{r}:=4{\left(1-2^{-r}\right)}$$ $$B_{r}:=2\prod_{i=1}^{r}\frac{2^{i-1 }}{\kappa_{i-1 }} {\left(1+\ln\frac{\kappa_{2 {\left(i-1\right)} }}{\kappa_{ i-1} }\right)}^{i-1}$$ where $\kappa_q$ is defined for a non-negative $q$ by $$\kappa_q=\begin{cases} 1&\mbox{ if }q{\leqslant}1\\ e^{q-1}/q^q&\mbox{ if }q> 1; \end{cases}$$ $$C_{r}:=4^{-r/2}\prod_{i=1}^r\kappa_{ i-1 }^{1/2}$$ and $$q_{r}:=\frac{{\left(r-1\right)}r}{2}.$$ Let us give some comments on the result. The second term of the right-hand side of \[eq:inegalite\_exp\_cas\_degenere\] is of order ${\mathbb E\left[Y^2 \log{\left(1+Y\right)}^{q_{r}}\mathbf 1{\left\{ Y>1\right\}}\right]}$, where $Y= {\left|h{\left(X_1,\dots,X_r\right)}\right|}/y$. This implies that Theorem \[thm:inegalite\_deviation\_U\_stats\] is useful only if ${\mathbb E\left[Y^2 \log{\left(1+Y\right)}^{q_{r}} \right]}$ is finite. Proposition 2.3 in [@MR1235426] also gives an exponential inequality when $h$ is bounded; Theorem \[thm:inegalite\_deviation\_U\_stats\] gives a similar result up to the involved constants. The bigger $y$ is, the smaller the second term of the right-hand side of \[eq:inegalite\_exp\_cas\_degenere\] is, but the bigger the first term is. Therefore, when applying this inequality, we try to use it with an appropriate value of $y$. The right-hand side of \[eq:inegalite\_exp\_cas\_degenere\] is independent of $N$. Therefore, when Theorem \[thm:inegalite\_deviation\_U\_stats\] is applied with a stronger normalization than $N^{r/2}$, we can get a decay in $N$.
However, this inequality does not seem to be sufficient to deduce directly a satisfactory result for the law of the iterated logarithm. Indeed, one would need to control $\sum_{N{\geqslant}r}{\mathbb P}{\left\{ \max_{r{\leqslant}n{\leqslant}2^N}{\left|U_n\right|}>2^{Nr/2} a \sqrt{\log N} \right\}} $, which forces the choice $x= a \sqrt{\log N} $ for each fixed $N$; one should then choose $y<3^{-r}x$, hence we would need exponential moments for ${\left|h{\left(X_1,\dots,X_r\right)}\right|}$, which is of course suboptimal. Statement for general $U$-statistics with symmetric kernel ---------------------------------------------------------- We now would like to give a result similar to Theorem \[thm:inegalite\_deviation\_U\_stats\] without assuming that the $U$-statistic is degenerated. To this aim, we use the Hoeffding decomposition. Here we assume that the kernel $h\colon S^r\to{\mathbb R}$ is symmetric in the sense that for all $x_1,\dots,x_r\in S$ and every permutation $\sigma\colon {\left\{ 1,\dots,r\right\}}\to {\left\{ 1,\dots,r\right\}}$, $$h{\left(x_{\sigma{\left(1\right)}},\dots,x_{\sigma{\left(r\right)}}\right)}= h{\left(x_1,\dots,x_r\right)}.$$ The $U$-statistics involved in Theorem \[thm:inegalite\_deviation\_U\_stats\] are such that $${\mathbb E\left[h{\left(X_1,\dots,X_r\right)}\mid \sigma{\left(X_i, 1{\leqslant}i{\leqslant}r,i\neq j \right)}\right]}=0$$ almost surely for all $j\in {\left\{ 1,\dots,r\right\}}$. In order to extend the result to a larger class of $U$-statistics, we need to define a more general notion of degeneracy. Let ${\left(X_k\right)}_{k{\geqslant}1}$ be an i.i.d. sequence with values in the measurable space ${\left(S,{\mathcal{S}}\right)}$. Let $h\colon S^r\to {\mathbb R}$ be a measurable function such that $Y:= h{\left(X_1,\dots,X_r\right)}$ is integrable. Denote by ${\mathcal{F}}_j$ the $\sigma$-algebra generated by the random variables $X_1,\dots, X_j$.
We say that the $U$-statistic $U_{r,n}{\left(h\right)}$ is degenerated of order $i-1$ for some $i\in{\left\{ 2,\dots,r\right\}}$ if ${\mathbb E\left[Y\mid {\mathcal{F}}_{i-1}\right]}=0$ almost surely but ${\mathbb E\left[Y\mid {\mathcal{F}}_{i}\right]}$ is not equal to zero almost surely. We express the $U$-statistic associated to this kernel $h$ as a sum of $U$-statistics of order $k$ with symmetric kernels. Define $$\pi_{k,r}h{\left(x_1,\dots,x_k\right)}:= {\left(\delta_{x_1}-{\mathbb P}_{X_1}\right)} \dots {\left(\delta_{x_k}-{\mathbb P}_{X_1}\right)}{\mathbb P}_{X_1}^{r-k}h,$$ where $Q_1\dots Q_r h$ is defined as $$Q_1\dots Q_r h= \int\dots\int h{\left(x_1,\dots,x_r\right)}dQ_1{\left(x_1\right)}\dots dQ_r{\left(x_r\right)}.$$ Then the following equality holds: $$\label{eq:decomposition_de_Hoeffding} \binom{n}{r}U_{r,n}{\left(h-\theta\right)}=\sum_{k=1}^r\binom{r}{k}\binom{n}{k}U_{k,n}{\left(h_k\right)},$$ where $\theta:={\mathbb E\left[h{\left(X_1,\dots,X_r\right)}\right]}$ and $U_{k,n}{\left(h_k\right)}$ is a degenerated $U$-statistic of order $k$. If $U_{r,n}{\left(h\right)}$ is degenerated of order $i-1$ for some $i\in{\left\{ 2,\dots,r\right\}}$, then the first $i-1$ terms in \[eq:decomposition\_de\_Hoeffding\] vanish (and $\theta=0$), hence $$\label{eq:decomposition_de_Hoeffding_deg_ordre_i_1} \binom{n}{r}U_{r,n}{\left(h \right)}=\sum_{k=i}^r\binom{r}{k}\binom{n}{k}U_{k,n}{\left(h_k\right)}.$$ Therefore, applying Theorem \[thm:inegalite\_deviation\_U\_stats\] to each degenerated $U$-statistic $U_{k,n}{\left(h_k\right)}$ gives the following result. \[cor:extension\_inegalite\_de\_deviation\] Let $r{\geqslant}1$ be an integer, ${\left(S,{\mathcal{S}}\right)}$ be a measurable space, $h\colon S^r\to {\mathbb R}$ be a measurable function (with $S^r$ endowed with the product $\sigma$-algebra) and let ${\left(X_i\right)}_{i{\geqslant}1}$ be an i.i.d. sequence of $S$-valued random variables. Assume that $h$ is degenerated of order $i-1$ for some $i\in{\left\{ 2,\dots,r\right\}}$.
Then for all positive $x$ and $y$ such that $x/y>3^r$, $$\begin{gathered} \label{eq:extension_inegalite_de_deviation_non_deg} {\mathbb P}{\left\{ \max_{r{\leqslant}n{\leqslant}N}{\left|U_n\right|}>N^{r-\frac{i}{2}}x\right\}} {\leqslant}A_{r}\exp{\left(- \frac 12{\left(\frac{x}y\right)}^{ \frac{2}{i }}\right)} \\+B_{r}\sum_{k=i}^r\int_{1 }^{+\infty} {\mathbb P}{\left\{ {\left|{\mathbb E\left[h{\left(X_1,\dots,X_r\right)}\mid X_1,\dots,X_k \right]}\right|}> y N^{ \frac{ k-i }{2}}u ^{1/2} C_{r} \right\}} {\left(1+\ln{\left( u\right)}\right)}^{q_{k}}\mathrm du, \end{gathered}$$ where $q_{k }=\frac{ {\left(k-1\right)}k}{2}$ and the constants $A_{r}$, $B_{r}$ and $C_{r}$ depend only on $r$. When $h{\left(X_1,\dots,X_r\right)}$ has finite exponential moments, it turns out that a simpler upper bound can be given. \[cor:inegalite\_moments\_exp\] Let $r{\geqslant}1$ be an integer and $\gamma>0$. There are constants $x_{r,i,\gamma}$, $A_{r,\gamma}$ and $B_{r,\gamma}$ such that if ${\left(S,{\mathcal{S}}\right)}$ is a measurable space, $h\colon S^r\to {\mathbb R}$ is a measurable function (with $S^r$ endowed with the product $\sigma$-algebra), ${\left(X_i\right)}_{i{\geqslant}1}$ an i.i.d. sequence of $S$-valued random variables and $h$ is degenerated of order $i-1$ for some $i\in{\left\{ 2,\dots,r\right\}}$, then for all $x{\geqslant}x_{r,i,\gamma} :=3^{r\frac{2+i\gamma}{i\gamma}}2^{-1/\gamma} C_{r }^{-1} $ (with $C_{r }$ like in Corollary \[cor:extension\_inegalite\_de\_deviation\]) and all $N{\geqslant}r$, $$\label{eq:inegalite_Ustats_moments_expo} {\mathbb P}{\left\{ \max_{r{\leqslant}n{\leqslant}N}{\left|U_n\right|}>N^{r -i/2}x\right\}} {\leqslant}A_{r,\gamma} \exp{\left(- B_{r,\gamma} x ^{\frac{2\gamma}{i\gamma+2} } \right)} {\mathbb E\left[\exp{\left( {\left|h{\left(X_1,\dots,X_r\right)} \right|}^\gamma \right)}\right]}.$$ Corollary 1.2 in [@MR1655931] gives a result in the same spirit, but with the following differences.
- Our inequality gives a bound on the tail probability of $\max_{r{\leqslant}n{\leqslant}N}{\left|U_n\right|}$, while [@MR1655931] gives a control of the tail of ${\left|U_N\right|}$.

- Our inequality shows explicitly how the right-hand side depends on ${\mathbb E\left[\exp{\left( {\left|h{\left(X_1,\dots,X_r\right)} \right|}^\gamma \right)}\right]}$. In particular, this allows applying the inequality to $R\cdot h$, $R>0$, instead of $h$ when ${\mathbb E\left[\exp{\left( R{\left|h{\left(X_1,\dots,X_r\right)} \right|}^\gamma \right)}\right]}$ is finite.

- The case $0<\gamma{\leqslant}2$ was addressed in [@MR1655931], whereas we cover the case $\gamma>0$.

Application to convergence rates in the strong law of large numbers ------------------------------------------------------------------- Let $U_n$ be the $U$-statistic defined by \[eq:definition\_U\_stat\]. Suppose that ${\mathbb E\left[{\left|h{\left(X_1,\dots,X_r\right)}\right|}\right]}$ is finite. Then $$\frac 1{n^r} \sum_{i\in I_n^r} {\left(h{\left(X_{i_1},\dots,X_{i_r}\right)}- {\mathbb E\left[h{\left(X_{i_1},\dots,X_{i_r}\right)}\right]}\right)}\to 0\mbox{ a.s.}$$ If we assume that the $U$-statistic is degenerated of order $i-1$ and impose more restrictive integrability conditions, a normalization other than $n^r$ can be chosen. Let $h\colon {\mathbb R}^r\to{\mathbb R}$ be a symmetric function, $r{\geqslant}2$, and let ${\left(X_i\right)}_{i{\geqslant}1}$ be an i.i.d. sequence of random variables. Suppose that $h$ is degenerated of order $i-1$ where $i\in{\left\{ 2,\dots,r\right\}}$. Let $q\in{\left(1,2r/{\left(2r-i\right)}\right)}$. Suppose that for all $j\in{\left\{ i,\dots,r\right\}}$, $${\mathbb E\left[h{\left(X_1,\dots,X_r\right)}\mid X_1,\dots ,X_j\right]}\in \mathbb L^{\frac{jq}{r-{\left(r-j\right)}q}}.$$ Then $n^{-r/q}U_n\to 0$ almost surely.
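As a concrete illustration of the above proposition, take $r=i=2$: then $q$ ranges over $(1,2r/(2r-i))=(1,2)$, the only integrability condition is the one for $j=2$, where the exponent $\frac{jq}{r-(r-j)q}$ reduces to $q$, and the statement reads: if the kernel is degenerated and $h{\left(X_1,X_2\right)}\in \mathbb L^q$ for some $q\in(1,2)$, then $n^{-2/q}U_n\to 0$ almost surely.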
Information on the convergence rates in the Marcinkiewicz law of large numbers has been obtained in [@MR1227625], and results in the spirit of Baum-Katz [@MR198524] for partial sums have been obtained in [@MR614652; @MR903815], in all cases under polynomial moment conditions, that is, finiteness of ${\mathbb E\left[{\left|h{\left(X_1,\dots,X_r\right)}\right|}^q\right]}$ for some $q$. Our setting would also allow us to derive results under similar assumptions, but it seems that the inequality we obtain is not the most suitable in this context. However, under finite exponential moments, one can use Corollary \[cor:inegalite\_moments\_exp\] in order to quantify the convergence. \[thm\_Baum\_Katz\_Ustats\] Let ${\left(S,{\mathcal{S}}\right)}$ be a measurable space, $h\colon S^r\to{\mathbb R}$ be a symmetric function, $r{\geqslant}2$, and let ${\left(X_i\right)}_{i{\geqslant}1}$ be an i.i.d. sequence of random variables with values in $S$. Suppose that $h$ is degenerated of order $i-1$ where $i\in{\left\{ 2,\dots,r\right\}}$. Let $\alpha\in {\left(r-i/2,r\right)}$. If there exists $\gamma>0$ such that ${\mathbb E\left[\exp{\left(R {\left|h{\left(X_1,\dots,X_r\right)}\right|}^\gamma \right)}\right]}$ is finite for all $R$, then $$\forall {\varepsilon}>0, \quad \sum_{N{\geqslant}1} \exp{\left( 2^{N{\left(\alpha-{\left(r-i/2\right)} \right)}\frac{2\gamma}{i\gamma+2} } \right)}{\mathbb P}{\left\{ \max_{r{\leqslant}n{\leqslant}2^N}{\left|U_n\right|}>{\varepsilon}2^{N\alpha}\right\}}<+\infty.$$ Another way to measure the speed of convergence in the law of large numbers is by bounding the probabilities of large deviations. In the case of partial sums of an i.i.d. centered sequence ${\left(X_i\right)}_{i{\geqslant}1}$, the involved probability is ${\mathbb P}{\left\{ \max_{1{\leqslant}n{\leqslant}N}{\left| \sum_{i=1}^nX_i\right|}>Nx \right\}}$. Extensions to martingales with bounded moments have been investigated in [@MR1856684] and [@MR3005732].
In the context of $U$-statistics, the normalization will depend on the degree of degeneracy. \[thm:large\_deviation\_Ustats\] Let ${\left(S,{\mathcal{S}}\right)}$ be a measurable space, $h\colon S^r\to{\mathbb R}$ be a symmetric function, $r{\geqslant}2$, and let ${\left(X_i\right)}_{i{\geqslant}1}$ be an i.i.d. sequence of random variables with values in $S$. Suppose that $h$ is degenerated of order $i-1$ where $i\in{\left\{ 2,\dots,r\right\}}$. If there exists $\gamma>0$ such that $M:=\sup_{t>0}\exp{\left(t^\gamma\right)} {\mathbb P}{\left\{ {\left|h{\left(X_1,\dots,X_r\right)}\right|}>t \right\}}$ is finite, then for all $x>0$, $${\mathbb P}{\left\{ \max_{r{\leqslant}n{\leqslant}N}{\left|U_n\right|}> N^{r}x\right\}}{\leqslant}K_1 \exp{\left( -K_2 N^{\frac{i\gamma}{2+i\gamma}}x^{\frac{2\gamma }{2+i\gamma} } \right)},$$ where $K_1$ and $K_2$ depend on $\gamma$ and $M$. Weak invariance principle in Hölder spaces {#subsec:WIP_Holder} ------------------------------------------ In this subsection, we study another limit theorem for $U$-statistics: the functional central limit theorem in Hölder spaces. Given a $U$-statistic $U_n$ of order $r$ and kernel $h\colon S^r\to {\mathbb R}$, we define a partial sum process by $$\label{eq:definition_processus_sommes_partielles} \sigma_n{\left(t\right)}:= \frac 1{n^{r/2}} {\left(U_{[nt]}+{\left(nt-[nt]\right)} {\left(U_{[nt]+1}- U_{[nt]} \right)}\right)}, t\in [0,1], n{\geqslant}r,$$ where for $x\in {\mathbb R}$, $[x]$ is the unique integer satisfying $[x]{\leqslant}x<[x]+1$. In other words, $\sigma_n{\left(k/n\right)}=U_k$ and the random function $t\mapsto \sigma_n{\left(t\right)}$ is affine on the intervals $\left[k/n,{\left(k+1\right)}/n\right]$. In [@MR740907], the convergence in distribution in the Skorohod space $D[0,1]$ of the process ${\left(n^{-r/2} U_{[n\cdot ]}\right)}_{n{\geqslant}r}$ is studied.
In Corollary 1, it is shown that if $U_n$ is degenerated of order $i-1$, $i\in{\left\{ 2,\dots,r\right\}}$, then ${\left(n^{-i/2} U_{[n\cdot ]}\right)}_{n{\geqslant}r}$ converges in distribution to a process $I_i{\left(h_i\right)}$ symbolically defined as $$\begin{gathered} I_i{\left(h_i\right)}{\left(t\right)}=\int\dots\int h_i{\left(x_1,\dots,x_i\right)}\mathbf 1_{[0,t]}{\left(u_1\right)} \dots \mathbf 1_{[0,t]}{\left(u_i\right)}W{\left(dx_1,du_1\right)}\dots W{\left(dx_i,du_i\right)}, \end{gathered}$$ where $W$ denotes the Gaussian measure (see Appendices A.1 and A.2 of [@MR740907]). For $i=2$, the limiting process admits the expression $\sum_{j=1}^{+\infty} \lambda_j{\left(B_j^2{\left(t\right)}-t\right)}$, where ${\left(B_j{\left(\cdot\right)}\right)}_{j{\geqslant}1}$ are independent standard Brownian motions and $\sum_{j=1}^{+\infty} \lambda_j^2$ is finite. In particular, such a process has paths in Hölder spaces and would also be the limiting process for ${\left(\sigma_n\right)}_{n{\geqslant}1}$ when $r=2$. Therefore, the study of the limiting behavior of ${\left(\sigma_n{\left(t\right)}\right)}_{n{\geqslant}r}$ in Hölder spaces can be considered. This question has been considered in the context of partial sum processes built on strictly stationary sequences of random variables, that is, of the form $$\label{eq:def_processus_sommes_partielles_stationnaire} W_n{\left(t\right)}:= \frac 1{a_n}{\left(\sum_{i=1}^{[nt]}X_i+{\left(nt-[nt]\right)}X_{[nt]+1} \right)},$$ where ${\left(X_j\right)}_{j{\geqslant}1}$ is a strictly stationary centered sequence and $a_n\to +\infty$. The asymptotic behaviour of such partial sum processes in $D[0,1]$ under dependence has attracted considerable effort; see for instance [@MR2206313] for a survey of the main results.
Define $\mathcal R_i$ as the class of real-valued functions $\rho$ defined on $[0,1]$ which can be expressed as $\rho{\left(t\right)}=t^\alpha L{\left(1/t\right)} $, where $L\colon [1,+\infty)\to {\mathbb R}$ is normalized slowly varying at infinity, positive and continuous, $\rho$ is increasing on $[0,1]$ and $$\lim_{\delta\to 0}\rho{\left(\delta\right)}\delta^{-1/2}{\left(\ln{\left(\delta^{-1}\right)} \right)}^{-i/2}=+\infty.$$ Notice that this implies that $0<\alpha{\leqslant}1/2$ and, for $\alpha=1/2$, the constraint reads $$\lim_{\delta\to 0}L{\left(1/\delta\right)} {\left(\ln{\left(\delta^{-1}\right)} \right)}^{-i/2}=+\infty.$$ For example, if $c$ is such that the function $t\mapsto t^{1/2}{\left(\ln{\left(c/t\right)}\right)}^{\beta}$ is increasing, then the latter constraint forces $\beta>i/2$. For $\rho\in\mathcal R_i$, we denote by ${\mathcal{H}}_\rho$ the Hölder space associated to the modulus of regularity $\rho$, that is, the set of functions $x\colon [0,1]\to{\mathbb R}$ such that ${\left\lVert x \right\rVert}_\rho:=\sup_{0{\leqslant}s<t{\leqslant}1}{\left|x{\left(t\right)}-x{\left(s\right)}\right|}/\rho{\left(t-s\right)}+{\left|x{\left(0\right)}\right|}$ is finite. Instead of dealing with the convergence in ${\mathcal{H}}_\rho$, we will work with a subspace which is better adapted to the study of convergence in distribution. Let $${\mathcal{H}}_\rho^o:={\left\{ x\colon [0,1]\to {\mathbb R}\mid \lim_{\delta\to 0} \sup_{\substack{s,t\in [0,1]\\ 0<t-s<\delta } } \frac{{\left|x{\left(t\right)}-x{\left(s\right)}\right|}}{\rho{\left(t-s\right)}}=0 \right\}}.$$ The convergence of partial sum processes of the form \[eq:def\_processus\_sommes\_partielles\_stationnaire\] when ${\left(X_i\right)}_{i{\geqslant}1}$ is i.i.d. has been studied in [@MR2000642; @MR2054586].
The convergence of ${\left(W_n{\left(\cdot\right)}\right)}_{n{\geqslant}1}$ in ${\mathcal{H}}_\rho^o$ for $\rho\in{\mathcal{R}}_1$ holds if and only if $$\forall A>0, \lim_{t\to +\infty}t{\mathbb P}{\left\{ {\left|X_1\right|} >At^{1/2}\rho{\left(1/t\right)} \right\}}=0.$$ Generally, a strategy to prove such results is to establish the convergence of the finite-dimensional distributions and prove tightness, which is usually the most difficult part. Equation (1.3) in [@MR3615086] gives a tightness criterion for partial sum processes of the form \[eq:def\_processus\_sommes\_partielles\_stationnaire\] built on ${\left(X_j\right)}_{j{\geqslant}1}$ and with $\rho$ of the form $t\mapsto t^\alpha$, $0<\alpha<1/2$. Its verification is done by using deviation inequalities; see for example [@MR3426520] or Section 3.3 in [@MR3583992]. For the purpose of the study of the convergence of ${\left(\sigma_n{\left(\cdot\right)}\right)}_{n{\geqslant}1}$ (defined by \[eq:definition\_processus\_sommes\_partielles\]), we need to extend this criterion in two directions: to partial sum processes like in \[eq:def\_processus\_sommes\_partielles\_stationnaire\] for which the sequence ${\left(X_j\right)}_{j{\geqslant}1}$ is not necessarily stationary, and to the class of moduli of regularity ${\mathcal{R}}_1$. \[prop:critere\_de\_tension\] Let ${\left(X_j\right)}_{j{\geqslant}1}$ be a sequence of random variables. Let $W_n$ be the partial sum process built on $ {\left(X_j\right)}_{j{\geqslant}1}$ defined by \[eq:def\_processus\_sommes\_partielles\_stationnaire\]. Let $\rho\in {\mathcal{R}}_i$ for some $i{\geqslant}1$. Suppose that for all positive ${\varepsilon}$, the following convergence holds: $$\label{eq:tightness_criterion} \lim_{J\to +\infty}\limsup_{n\to +\infty}\sum_{j=J}^{[\log_2n]}\sum_{k=0}^{2^j-1} {\mathbb P}{\left\{ {\left|S_{ [n{\left(k+1\right)}2^{-j} ] }-S_{ [nk2^{-j}] } \right|} >a_n {\varepsilon}\rho{\left(2^{-j}\right)} \right\}}=0 ;$$ where $S_N:=\sum_{i=1}^NX_i$ and ${\left( a_n\right)}_{n{\geqslant}1}$ is an increasing sequence diverging to infinity such that $\sup_{n{\geqslant}1}a_{2n}/a_n$ is finite. Then the partial sum process defined by \[eq:def\_processus\_sommes\_partielles\_stationnaire\] is tight in ${\mathcal{H}}_\rho^o$.
Since the previous tightness criterion involves the tails of differences of partial sums, we have to establish a corresponding deviation inequality. \[prop:deviation\_accroissements\] Let $r{\geqslant}1$ be an integer, ${\left(S,{\mathcal{S}}\right)}$ be a measurable space, $h\colon S^r\to {\mathbb R}$ be a measurable function (with $S^r$ endowed with the product $\sigma$-algebra) and let ${\left(X_i\right)}_{i{\geqslant}1}$ be an i.i.d. sequence of $S$-valued random variables. Suppose that holds for all $s_1,\dots,s_{r-1}\in S$ and for all $q\in {\left\{ 1,\dots,r\right\}}$. Then the following inequality holds for all positive $x$ and all $y$ such that $x/y>3^r$ and all $n_2>n_1{\geqslant}r$: $$\begin{gathered} \label{eq:inegalite_exp_cas_degenere_differences} {\mathbb P}{\left\{ \frac 1{\sqrt{n_2-n_1}n_2^{\frac{r-1}2}}{\left|U_{n_2}-U_{n_1}\right|}>x\right\}} {\leqslant}A_{r}\exp{\left(- \frac 12{\left(\frac{x}y\right)}^{ \frac{2}{r }}\right)} \\+B_{r}\int_{1 }^{+\infty} {\mathbb P}{\left\{ {\left|h{\left(X_1,\dots,X_r\right)}\right|}> y u ^{1/2} C_{r} \right\}} {\left(1+\ln{\left( u\right)}\right)}^{q_{r}}\mathrm du,\end{gathered}$$ where $A_r$, $B_r$ and $C_r$ depend only on $r$ and $q_r=r{\left(r-1\right)}/2$. By combining this tightness criterion with the obtained deviation inequalities, we get the following functional central limit theorem. \[thm:PI\_Holderien\] Let $r{\geqslant}1$ be an integer, ${\left(S,{\mathcal{S}}\right)}$ be a measurable space, $h\colon S^r\to {\mathbb R}$ be a symmetric measurable function (with $S^r$ endowed with the product $\sigma$-algebra) and let ${\left(X_i\right)}_{i{\geqslant}1}$ be an i.i.d. sequence of $S$-valued random variables. Assume that $h$ is degenerate of order $i-1$ for some $i\in{\left\{ 2,\dots,r\right\}}$. Let $\rho\in{\mathcal{R}}_r$.
Assume that $$\label{eq:condition_suffisante_WIP} \forall c>0, \sum_{j=1}^{+\infty} \int_1^{+\infty}{\mathbb P}{\left\{ {\left|h{\left(X_1,\dots,X_r\right)}\right|}> c2^{j/2}\rho{\left(2^{-j}\right)}j^{-r/2} u^{1/2}\right\}} {\left(1+\log u\right)}^{q_r}du<+\infty.$$ Then $$\label{eq:WIP_Ustats} \frac 1{n^{i/2}}{\left(U_{[nt]}+{\left(nt-[nt]\right)} {\left(U_{[nt]+1} -U_{[nt]} \right)} \right)}\to I_i{\left(h_i\right)}{\left(t\right)}$$ in distribution in ${\mathcal{H}}_\rho^o{\left([0,1]\right)}$. In particular, denoting $Y:= {\left|h{\left(X_1,\dots,X_r\right)}\right|}$, when $\rho{\left(t\right)}=t^\alpha$, $0<\alpha<1/2$, the condition $$\label{eq:cond_suff_WIP_talpha} {\mathbb E\left[ Y^{\frac{1}{1/2-\alpha}} {\left(\log Y\right)}^{r/2} \right]}<+\infty$$ guarantees . When $\rho{\left(t\right)}=t^{1/2}{\left(\log {\left(C/t\right)}\right)}^{\beta}$, $\beta>r/2$, the condition $$\label{eq:cond_suff_WIP_t12} \forall A>0, {\mathbb E\left[\exp{\left(A Y^{\frac 1{\beta-r/2}} \right)} \right]}<+\infty$$ guarantees . When $r=1$ and $\rho{\left(t\right)}=t^\alpha$, we do not recover exactly the necessary and sufficient condition established in [@MR2000642], which reads $ \lim_{t\to +\infty} t^{1/{\left(1/2-\alpha\right)}}{\mathbb P}{\left\{ Y>t\right\}}=0$. However, when $\rho{\left(t\right)}=t^{1/2}{\left(\log {\left(C/t\right)}\right)}^{\beta}$, we recover the same condition. Proofs ====== Proof of Theorem \[thm:inegalite\_deviation\_U\_stats\] ------------------------------------------------------- Let us first give the general idea of the proof of Theorem \[thm:inegalite\_deviation\_U\_stats\]. 1. We will proceed by induction on $r$. We will use a martingale inequality, which controls the tail of the maximum of partial sums of a martingale difference sequence in terms of the tails of the sum of squares and of the sum of conditional variances. 2.
Using the notion of convex ordering that will be made explicit later, the sum of squares and the sum of conditional variances satisfy a “convexity domination”; hence we will be reduced to tails of $U$-statistics of the previous order. 3. In order to perform the induction step, we will also need a rearrangement of integrals of the type $\int_1^{+\infty}\int_1^{+\infty} {\mathbb P}{\left\{ Y>u/{\left(1+\ln u\right)}^av\right\}}du{\left(1+\ln v\right)}^b dv$, whose treatment will be addressed later. ### Martingale inequality We will formulate the martingale inequality we need to handle the tail of $U$-statistics. First, the following inequality is established in [@MR3311214]. \[thm:Fan\_Grama\_Liu\] Let ${\left(D_j\right)}_{j{\geqslant}1}$ be a martingale difference sequence with respect to the filtration ${\left({\mathcal{F}}_i\right)}_{i{\geqslant}0}$. Suppose that there exist non-negative ${\mathcal{F}}_{i-1}$-measurable random variables $V_{i-1}$, $1{\leqslant}i{\leqslant}n$, and non-negative functions $f$ and $g$ such that for some positive $\lambda$ and for all $i\in{\left\{ 1,\dots,n\right\}}$, $$\label{eq:condition_sur_les_accroissements} {\mathbb E\left[\exp{\left(\lambda D_i-g{\left(\lambda\right)}D_i^2\right)}\mid {\mathcal{F}}_{i-1}\right]}{\leqslant}1+f{\left(\lambda\right)} V_{i-1}.$$ Then for all $x$, $v$, $w>0$, $$\begin{gathered} \label{eq:inegalite_Fan_grama_Liu} {\mathbb P}{\left(\bigcup_{k=1}^n {\left\{ \sum_{i=1}^kD_i{\geqslant}x\right\}}\cap {\left\{ \sum_{i=1}^kD_i^2{\leqslant}v^2 \right\}}\cap{\left\{ \sum_{i=1}^kV_{i-1}{\leqslant}w \right\}}\right)}\\ {\leqslant}\exp{\left(-\lambda x+g{\left(\lambda\right)}v^2+f{\left(\lambda\right)}w\right)}.\end{gathered}$$ Notice that the condition is satisfied for all $\lambda>0$ when $f{\left(\lambda\right)}=g{\left(\lambda\right)}=\lambda^2/2$ and $V_{i-1}={\mathbb E\left[D_i^2\mid{\mathcal{F}}_{i-1}\right]}$.
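The closing remark can be justified by the elementary pointwise bound $e^{u-u^2/2}\le 1+u+u^2/2$ (valid for every real $u$), applied with $u=\lambda D_i$ together with ${\mathbb E}[D_i\mid{\mathcal F}_{i-1}]=0$; this bound is my own addition and is not spelled out in the text. A grid-based numerical check:

```python
import numpy as np

# Pointwise bound exp(u - u^2/2) <= 1 + u + u^2/2, checked on a wide grid.
# With u = lambda * D_i and E[D_i | F_{i-1}] = 0, taking conditional
# expectations yields the condition with f = g = lambda^2 / 2 and
# V_{i-1} = E[D_i^2 | F_{i-1}].
u = np.linspace(-50.0, 50.0, 200001)
lhs = np.exp(u - 0.5 * u ** 2)
rhs = 1.0 + u + 0.5 * u ** 2
ok = bool(np.all(lhs <= rhs + 1e-12))
print(ok)
```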
After having optimized over $\lambda$ the right hand side of and applying Theorem \[thm:Fan\_Grama\_Liu\] to $D_j$ and then to $-D_j$, we get that $${\mathbb P}{\left\{ \max_{1{\leqslant}k{\leqslant}n}{\left|\sum_{i=1}^kD_i\right|}>x \right\}} {\leqslant}2\exp{\left(-\frac{x^2}{2v^2}\right)}+{\mathbb P}{\left\{ \sum_{i=1}^n {\left(D_i^2+{\mathbb E\left[D_i^2\mid {\mathcal{F}}_{i-1}\right]}\right)}>v^2\right\}}$$ for all $x$ and $v>0$. ### Convex ordering Let us compare the tails of $Y$ and $ {\mathbb E\left[Y\mid{\mathcal{G}}\right]}$. By Markov’s inequality, $${\mathbb P}{\left\{ {\mathbb E\left[Y\mid{\mathcal{G}}\right]}>x\right\}} {\leqslant}\frac 1x{\mathbb E\left[ {\mathbb E\left[Y\mid{\mathcal{G}}\right]}\mathbf{1}{\left\{ {\mathbb E\left[Y\mid{\mathcal{G}}\right]}>x\right\}} \right]} = \frac 1x{\mathbb E\left[ Y\mathbf{1}{\left\{ {\mathbb E\left[Y\mid{\mathcal{G}}\right]}>x\right\}} \right]}$$ and splitting the last expectation over the sets where $Y{\leqslant}x/2$ and $Y>x/2$ gives $${\mathbb P}{\left\{ {\mathbb E\left[Y\mid{\mathcal{G}}\right]}>x\right\}}{\leqslant}\frac 12 {\mathbb P}{\left\{ {\mathbb E\left[Y\mid{\mathcal{G}}\right]}>x\right\}}+ \frac 1x{\mathbb E\left[ Y\mathbf{1}{\left\{ Y>x/2\right\}} \right]}$$ hence $$x {\mathbb P}{\left\{ {\mathbb E\left[Y\mid{\mathcal{G}}\right]}>x\right\}}{\leqslant}\ {\mathbb E\left[ 2Y\mathbf{1}{\left\{ 2Y>x\right\}} \right]}$$ and writing the last expectation as an integral over ${\left(0,+\infty\right)}$ involving the tail of $Y$, we get that $$x {\mathbb P}{\left\{ {\mathbb E\left[Y\mid{\mathcal{G}}\right]}>x\right\}}{\leqslant}\ x{\mathbb P}{\left\{ 2Y>x\right\}}+\int_x^{+\infty}{\mathbb P}{\left\{ 2Y>u\right\}}\mathrm{d}u.$$ Bounding further $ x{\mathbb P}{\left\{ 2Y>x\right\}}$ by $2\int_{x/2}^x{\mathbb P}{\left\{ 2Y>u\right\}}\mathrm{d}u$ finally gives after the substitution $v=u/{\left(2x\right)}$ that $${\mathbb P}{\left\{ {\mathbb E\left[Y\mid{\mathcal{G}}\right]}>x\right\}}{\leqslant}\ 
4\int_{1/4}^{+\infty}{\mathbb P}{\left\{ Y>xv\right\}}\mathrm{d}v.$$ Hence one can control the tails of ${\mathbb E\left[Y\mid{\mathcal{G}}\right]}$ by a functional of those of $Y$. Therefore, if $Y$ and $Z$ are two non-negative random variables such that $$\label{eq:ordre_convexe_esperance} Z{\leqslant}{\mathbb E\left[Y\mid Z\right]}\mbox{ a.s.}$$ then for all $x>0$, $$\label{eq:inegalites_queues_generale} {\mathbb P}{\left\{ Z>x\right\}}{\leqslant}\int_{1}^{+\infty}{\mathbb P}{\left\{ Y>xv/4\right\}}\mathrm{d}v.$$ Observe also that if $Z$ and $Y$ are two non-negative random variables such that holds, then for each convex non-decreasing function $\varphi\colon {\mathbb R}_+\to{\mathbb R}_+$, $$\label{eq:ordre_convexe_definition} {\mathbb E\left[\varphi{\left(Z\right)}\right]}{\leqslant}{\mathbb E\left[\varphi{\left(Y\right)}\right]}.$$ When holds for each convex non-decreasing function, we write $Z{\leqslant}_{\mathrm{conv}} Y$. By Theorem 6 in [@MR606989], part (c), if two random variables $Y$ and $Z$ defined on a common probability space ${\left(\Omega,{\mathcal{F}},{\mathbb P}\right)}$ satisfy , then there exists a probability space ${\left(\Omega',{\mathcal{F}}',{\mathbb P}'\right)}$ and random variables $Y'$ and $Z'$ such that - for all real numbers $t$, ${\mathbb P}'{\left\{ Y'{\leqslant}t\right\}}={\mathbb P}{\left\{ Y{\leqslant}t\right\}}$ and ${\mathbb P}'{\left\{ Z'{\leqslant}t\right\}}={\mathbb P}{\left\{ Z{\leqslant}t\right\}}$; - the inequality $Z'{\leqslant}\mathbb{E}_{{\mathbb P}'}\left[Y'\mid Z'\right]$ holds almost surely. Combining this with gives the following lemma that will be used in the sequel. \[lem:ordre\_convexe\] Let $Y$ and $Z$ be two non-negative random variables such that for all convex non-decreasing functions $\varphi\colon {\mathbb R}_+\to{\mathbb R}_+$, ${\mathbb E\left[\varphi{\left(Z\right)}\right]}{\leqslant}{\mathbb E\left[\varphi{\left(Y\right)}\right]}$.
Then for all positive $x$, $$\label{eq:inegalites_queues_ordre_convexe} {\mathbb P}{\left\{ Z>x\right\}}{\leqslant}\int_{1}^{+\infty}{\mathbb P}{\left\{ Y>xv/4\right\}}\mathrm{d}v.$$ Using the previous lemma, one can control the tails of the maximum of a martingale whose increments have a common majorant for the order ${\leqslant}_{\mathrm{conv}}$. \[prop:inegalite\_deviation\_martingales\_dominee\] Let ${\left(D_j\right)}_{j{\geqslant}1}$ be a martingale difference sequence with respect to the filtration ${\left({\mathcal{F}}_i\right)}_{i{\geqslant}0}$. Suppose that ${\mathbb E\left[ D_i ^2\right]}$ is finite for all $i{\geqslant}1$. Suppose that there exists a random variable $Y$ such that for all $1{\leqslant}i{\leqslant}n$, $ D_i^2{\leqslant}_{\mathrm{conv}} Y^2$. Then for all $x,y>0$ and each $n{\geqslant}1$, the following inequality holds: $$\label{eq:inegalite_max_martingales_moments_ordre_p_domination_sto} {\mathbb P}{\left\{ \max_{1{\leqslant}k{\leqslant}n}{\left|\sum_{i=1}^kD_i\right|}>x n^{1/2} \right\}} {\leqslant}2\exp{\left(-\frac 12{\left(\frac{x}{y}\right)}^{2} \right)}\\ + 2\int_{1 }^{+\infty}{\mathbb P}{\left\{ Y^2>y^2u/4\right\}}\mathrm{d}u.$$ We apply Theorem \[thm:Fan\_Grama\_Liu\] in the following setting: $x$ is replaced by $xn^{1/2}$ and $v=n^{1/2}y$. We obtain that $$\begin{gathered} {\mathbb P}{\left\{ \max_{1{\leqslant}k{\leqslant}n}{\left|\sum_{i=1}^kD_i\right|}>xn^{1/2} \right\}} {\leqslant}2\exp{\left(-\frac 12{\left(\frac xy\right)}^{2 } \right)} \\ +{\mathbb P}{\left\{ \sum_{i=1}^n D_i^2>ny^2\right\}}+{\mathbb P}{\left\{ \sum_{i=1}^n{\mathbb E\left[ D_i ^2\mid {\mathcal{F}}_{i-1}\right]}>ny^2\right\}}.
\end{gathered}$$ In order to control the last two terms, we will show that $$\label{eq:ordre_convexe_pour_variances_cond} \frac 1n \sum_{i=1}^nD_i^2{{\leqslant}_{\mathrm{conv}}}Y^2\mbox{ and } \frac 1n \sum_{i=1}^n{\mathbb E\left[D_i^2\mid {\mathcal{F}}_{i-1}\right]}{{\leqslant}_{\mathrm{conv}}}Y^2.$$ Let $\varphi\colon{\mathbb R}_+\to {\mathbb R}_+$ be a convex non-decreasing function. Then by convexity, $${\mathbb E\left[\varphi{\left( \frac 1n \sum_{i=1}^nD_i^2\right)}\right]}{\leqslant}{\mathbb E\left[ \frac 1n \sum_{i=1}^n\varphi{\left(D_i^2\right)}\right]}$$ and from the fact that $D_i^2{{\leqslant}_{\mathrm{conv}}}Y^2$, it follows that ${\mathbb E\left[ \varphi{\left(D_i^2\right)}\right]}{\leqslant}{\mathbb E\left[\varphi{\left(Y^2\right)}\right]}$ hence $${\mathbb E\left[\varphi{\left( \frac 1n \sum_{i=1}^nD_i^2\right)}\right]}{\leqslant}{\mathbb E\left[\varphi{\left(Y^2\right)}\right]}.$$ Moreover, by convexity of $\varphi$ and Jensen’s inequality, $${\mathbb E\left[\varphi{\left( \frac 1n \sum_{i=1}^n{\mathbb E\left[D_i^2\mid{\mathcal{F}}_{i-1} \right]}\right)}\right]}{\leqslant}{\mathbb E\left[ \frac 1n \sum_{i=1}^n\varphi{\left({\mathbb E\left[D_i^2\mid{\mathcal{F}}_{i-1} \right]}\right)}\right]} {\leqslant}{\mathbb E\left[ \frac 1n \sum_{i=1}^n{\mathbb E\left[\varphi{\left(D_i^2\right)}\mid{\mathcal{F}}_{i-1} \right]}\right]}$$ hence $${\mathbb E\left[\varphi{\left( \frac 1n \sum_{i=1}^n{\mathbb E\left[D_i^2\mid{\mathcal{F}}_{i-1} \right]}\right)}\right]}{\leqslant}\frac 1n \sum_{i=1}^n{\mathbb E\left[\varphi{\left(D_i^2\right)} \right]}.$$ By the same argument as before, $${\mathbb E\left[\varphi{\left( \frac 1n \sum_{i=1}^n{\mathbb E\left[D_i^2\mid{\mathcal{F}}_{i-1} \right]}\right)}\right]}{\leqslant}{\mathbb E\left[\varphi{\left(Y^2\right)}\right]}$$ which proves .
We conclude by applying Lemma \[lem:ordre\_convexe\] twice, first to $Z= n^{-1}\sum_{i=1}^n D_i^2$ and then to $Z= n^{-1}\sum_{i=1}^n{\mathbb E\left[ D_i^2\mid{\mathcal{F}}_{i-1}\right]}$. ### Step $r=1$ As said before, the proof of Theorem \[thm:inegalite\_deviation\_U\_stats\] is done by induction on $r$. In this subsubsection, we treat the case $r=1$. In this case, $U_n=\sum_{i=1}^nh{\left(X_i\right)}$ and condition means that $h{\left(X_i\right)}$ is centered. Therefore, an application of Proposition \[prop:inegalite\_deviation\_martingales\_dominee\] to $D_j:= h{\left(X_j\right)}$ gives the case $r=1$ of Theorem \[thm:inegalite\_deviation\_U\_stats\]. ### The induction step Let $A{\left(r\right)}$ be the following assertion: "for each measurable space ${\left(S,{\mathcal{S}}\right)}$, each measurable function $h\colon S^r\to {\mathbb R}$ (with $S^r$ endowed with the product $\sigma$-algebra) and each i.i.d. sequence ${\left(X_i\right)}_{i{\geqslant}1}$ of $S$-valued random variables such that $${\mathbb E\left[h{\left(v_q{\left(s_1,\dots,s_{r-1},X_0\right)}\right)}\right]}=0$$ for all $s_1,\dots,s_{r-1}\in S$ and for all $q\in {\left\{ 1,\dots,r\right\}}$, the following inequality holds for all positive $x$ and all $y$ such that $x/y>3^r$: $$\begin{gathered} {\mathbb P}{\left\{ \max_{r{\leqslant}n{\leqslant}N}{\left|U_n\right|}>N^{r/2}x\right\}} {\leqslant}A_{r}\exp{\left(-\frac 12{\left(\frac{x}y\right)}^{ \frac{2}{r }}\right)} \\+B_{r}\int_{1}^{+\infty} {\mathbb P}{\left\{ {\left|h{\left(X_1,\dots,X_r\right)}\right|}> y v^{1/2}C_{r} \right\}} {\left(1+\ln{\left( v\right)}\right)}^{q_{r}}\mathrm dv".\end{gathered}$$ We have shown in the previous subsubsection that $A{\left(1\right)}$ is true. Now, we assume that $A{\left(r-1\right)}$ is true for some $r{\geqslant}2$ and we will prove $A{\left(r\right)}$.
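The induction hinges on rewriting the $U$-statistic as a weighted sum of martingale-type increments. This algebraic identity can be sanity-checked numerically for $r=2$; the sketch below is my own illustration, assuming that $I_j^{r-1}$ denotes the tuples of increasing indices below $j$ (its definition lies outside this excerpt) and taking the hypothetical kernel $h(x,y)=xy$.

```python
import numpy as np

# Decomposition of a U-statistic of order r = 2 into increments D_j.
# Assumption: I_j^{1} consists of the indices i with i < j, and
# h(x, y) = x * y. Indices are 0-based here.
X = np.array([0.3, -1.1, 2.0, 0.7, -0.4, 1.3])
N = len(X)

def U(n):
    """U_n = sum over pairs i < j < n of h(X_i, X_j)."""
    return sum(X[i] * X[j] for j in range(n) for i in range(j))

# D_j = N^{-(r-1)/2} * sum_{i < j} h(X_i, X_j) with r = 2.
D = np.array([X[:j].sum() * X[j] for j in range(N)]) / np.sqrt(N)

# For every n, N^{-r/2} U_n equals N^{-1/2} times the sum of the first n increments.
ok = all(np.isclose(U(n) / N, D[:n].sum() / np.sqrt(N)) for n in range(1, N + 1))
print(ok)
```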
Let ${\left(S,{\mathcal{S}}\right)}$ be a measurable space, $h\colon S^r\to {\mathbb R}$ (with $S^r$ endowed with the product $\sigma$-algebra) be a measurable function, ${\left(X_i\right)}_{i{\geqslant}1}$ be an i.i.d. sequence of $S$-valued random variables satisfying $${\mathbb E\left[h{\left(v_q{\left(s_1,\dots,s_{r-1},X_0\right)}\right)}\right]}=0,$$ for all $s_1,\dots,s_{r-1}\in S$ and for all $q\in {\left\{ 1,\dots,r\right\}}$, and $x$ and $y$ be positive numbers such that $x/y>3^r$. We will control ${\mathbb P}{\left\{ \max_{r{\leqslant}n{\leqslant}N}{\left|U_n\right|}>N^{r/2}x\right\}}$ by a quantity that can be treated by the case of $U$-statistics of order $r-1$. We define $$D_j:= \frac 1{N^{\frac{r-1}2}}{\sum_{{\left(i_1,\dots,i_{r-1}\right)}\in I_j^{r-1}}^{}} h{\left(X_{i_1},\dots,X_{i_{r-1}},X_j\right)}, \quad j{\geqslant}r,$$ where the notation $I_{j}^{r-1}$ refers to and $D_j=0$ for $1{\leqslant}j{\leqslant}r-1$. Then for all $n{\geqslant}r$, $\frac 1{N^{r/2}} U_n= \frac 1{N^{1/2}}\sum_{j=1}^nD_j$. Moreover, defining ${\mathcal{F}}_j$ as the $\sigma$-algebra generated by the random variables $X_i$, $1{\leqslant}i{\leqslant}j$ for $j{\geqslant}1$ and ${\mathcal{F}}_0:={\left\{ \emptyset, \Omega\right\}}$, the sequence ${\left(D_j\right)}_{j{\geqslant}1}$ is a martingale difference sequence with respect to the filtration ${\left({\mathcal{F}}_j\right)}_{j{\geqslant}0}$. Let $$Y:= \frac 1{N^{\frac{r-1}2}}\max_{r-1{\leqslant}k{\leqslant}N}{\left| U'_k\right|},$$ where $$U'_k:= \sum_{{\left(i_1,\dots,i_{r-1}\right)}\in I_k^{r-1}} h{\left(X_{i_1},\dots,X_{i_{r-1}},X\right)}$$ and $X$ is a random variable which is independent of ${\left(X_i\right)}_{i{\geqslant}1}$ and has the same distribution as $X_0$. We will show that for all $j{\geqslant}r$, $D_j^2{{\leqslant}_{\mathrm{conv}}}Y^2$.
First observe that since $X_j$ is independent of the vector ${\left(X_i\right)}_{i=1}^{j-1}$, it follows that $D_j$ has the same distribution as $$D'_j:= \frac 1{N^{\frac{r-1}2}}{\sum_{{\left(i_1,\dots,i_{r-1}\right)}\in I_j^{r-1}}^{}} h{\left(X_{i_1},\dots,X_{i_{r-1}},X\right)}, \quad j{\geqslant}r.$$ Let $\varphi\colon{\mathbb R}_+\to {\mathbb R}_+$ be a convex non-decreasing function. Then $$\label{eq:lien_Dj_D'j} {\mathbb E\left[\varphi{\left(D_j^2\right)}\right]}={\mathbb E\left[\varphi{\left( {D'_j}^2\right)}\right]}.$$ Since $${D'_j} ^2= \frac 1{N^{r-1}} {U'_j}^2{\leqslant}Y^2$$ and $\varphi$ is non-decreasing, we get that ${\mathbb E\left[\varphi{\left( {D'_j}^2\right)}\right]} {\leqslant}{\mathbb E\left[\varphi{\left(Y^2\right)}\right]}$ hence in view of , it follows that $D_j^2{{\leqslant}_{\mathrm{conv}}}Y^2$. Writing $${\mathbb P}{\left\{ \max_{r{\leqslant}n{\leqslant}N}{\left|U_n\right|}>N^{r/2}x\right\}} ={\mathbb P}{\left\{ \max_{1{\leqslant}n{\leqslant}N}{\left|\sum_{j=1}^n D_j\right|}>N^{1/2}x\right\}},$$ we are in position to apply Proposition \[prop:inegalite\_deviation\_martingales\_dominee\] with $\widetilde{x}:=x$ and $\widetilde{y}:= x^{1-1/r}y^{1/r}$.
We obtain $$\label{eq:se_ramener_au_cas_r-1} {\mathbb P}{\left\{ \max_{r{\leqslant}n{\leqslant}N}{\left|U_n\right|}>N^{r/2}x\right\}} {\leqslant}2\exp{\left(-\frac 12{\left(\frac{x}{y}\right)}^{2/r} \right)} + 2\int_{1 }^{+\infty}{\mathbb P}{\left\{ Y>x^{1-1/r}y^{1/r}{\left(u/4\right)}^{1/2}\right\}}\mathrm{d}u.$$ In order to control the last term, we use the fact that $X$ is independent of the sequence ${\left(X_i\right)}_{i{\geqslant}1}$ hence $$\begin{gathered} \label{eq:preparation_de_l_etape_de_recurrence} {\mathbb P}{\left\{ Y>x^{1-1/r}y^{1/r}{\left(u/4\right)}^{1/2}\right\}}\\ =\int_{S} {\mathbb P}{\left\{ \frac 1{N^{\frac{r-1}2} } \max_{r-1{\leqslant}k{\leqslant}n} {\left| \sum_{{\left(i_1,\dots,i_{r-1}\right)}\in I_k^{r-1}} h{\left(X_{i_1},\dots,X_{i_{r-1}},s\right)} \right|}>x^{1-1/r}y^{1/r}{\left(u/4\right)}^{1/2} \right\}}\mathrm{d}{\mathbb P}_X{\left(s\right)}.\end{gathered}$$ We bound the probability inside the integral for each fixed $s$. To this aim, we use the assumption that the assertion $A{\left(r-1\right)}$ is true. Define for a fixed $s\in S$ the function $\widetilde{h}\colon S^{r-1}\to {\mathbb R}$ by $$\widetilde{h}{\left(x_1,\dots,x_{r-1}\right)} =h{\left(x_1,\dots,x_{r-1},s\right)}.$$ Then $\widetilde{h}$ satisfies the condition with $r$ replaced by $r-1$.
Let $$\widetilde{x}:= x^{1-1/r}y^{1/r}{\left(u/4\right)}^{1/2}, \quad \widetilde{y}:= y{\left(u/4\right)}^{1/2} {\left(1+2\ln{\left( u\right)}\right)}^{-{\left(r-1\right)}\frac{1}2}.$$ Since $x/y{\geqslant}3^r$, it follows that $\widetilde{x}/\widetilde{y}={\left(x/y\right)}^{1-1/r }{\left(1+ 2\ln{\left( u\right)}\right)}^{{\left(r-1\right)}\frac{1}2} {\geqslant}{\left(x/y\right)}^{1-1/r }{\geqslant}3^{r-1}$ hence we are in position to apply $A{\left(r-1\right)}$, which gives $$\begin{gathered} {\mathbb P}{\left\{ \frac 1{N^{\frac{r-1}2} } \max_{r-1{\leqslant}k{\leqslant}n} {\left| \sum_{{\left(i_1,\dots,i_{r-1}\right)}\in I_k^{r-1}} h{\left(X_{i_1},\dots,X_{i_{r-1}},s\right)} \right|}>x^{1-1/r}y^{1/r}{\left(u/4\right)}^{1/2} \right\}} \\ {\leqslant}A_{r-1}\exp{\left(-\frac 12{\left(\frac{\widetilde{x}}{\widetilde{y}}\right)}^{ \frac{2}{ r-1 }}\right)} \\+B_{r-1}\int_{1}^{+\infty} {\mathbb P}{\left\{ {\left| \widetilde{h}{\left(X_1,\dots,X_{r-1}\right)}\right|}> \widetilde{y} v^{1/2}C_{r-1} \right\}} {\left(1+\ln{\left( v\right)}\right)}^{q_{r-1}}\mathrm dv.\end{gathered}$$ Replacing $\widetilde{x}$, $\widetilde{y}$ and $\widetilde{h}$ by their corresponding expressions and integrating over $S$ with respect to the law of $X$ gives in view of that $$\begin{gathered} \label{eq:control_queue_de_Y} {\mathbb P}{\left\{ Y>x^{1-1/r}y^{1/r}{\left(u/4\right)}^{1/2}\right\}} {\leqslant}A_{r-1}\exp{\left(-\frac 12{\left( {\left(\frac xy\right)}^{\frac{2}{r }} {\left(1+2\ln{\left( u\right)}\right)} \right)}\right)}\\ +B_{r-1}\int_{1 }^{+\infty} {\mathbb P}{\left\{ {\left| h{\left(X_1,\dots,X_{r-1},X\right)}\right|}> y{\left(u/4\right)}^{1/2} {\left(1+\ln{\left( u\right)}\right)}^{-{\left(r-1\right)}\frac{ 1}2} v^{1/2}C_{r-1} \right\}} {\left(1+\ln{\left( v\right)}\right)}^{q_{r-1}}\mathrm dv.\end{gathered}$$ We derive in view of and the fact that $x/y>3^r$ that $$\int_{1 }^{+\infty} \exp{\left(-\frac 12{\left( {\left(\frac xy\right)}^{\frac{2}{r }} {\left(1+2\ln{\left(
u\right)} \right)} \right)}\right)}\mathrm du {\leqslant}\exp{\left(-\frac 12 {\left(\frac xy\right)}^{\frac{2}{r }}\right)} \int_{1}^{+\infty} t^{-6}\mathrm dt$$ hence computing the integral, $$\int_{1 }^{+\infty} \exp{\left(-\frac 12{\left( {\left(\frac xy\right)}^{\frac{2}{r }} {\left(1+2\ln{\left( u\right)} \right)} \right)}\right)}\mathrm du {\leqslant}\frac{1}{2 } \exp{\left(-\frac 12 {\left(\frac xy\right)}^{\frac{2}{r }}\right)}.$$ Integrating with respect to $u$ on ${\left(1 ,+\infty\right)}$, we obtain $$\begin{gathered} \label{eq:apres_utilisation_recurrence} {\mathbb P}{\left\{ \max_{r{\leqslant}n{\leqslant}N}{\left|U_n\right|}>N^{r/2}x\right\}} {\leqslant}{\left(2+A_{r-1}/2\right)}\exp{\left(-\frac 12{\left(\frac{x}{y}\right)}^{2} \right)} \\ + B_{r-1}\int_{1 }^{+\infty}\int_{1 }^{+\infty} {\mathbb P}{\left\{ {\left| h{\left(X_1,\dots,X_{r-1},X_r\right)}\right|}> y{\left(u/4\right)}^{1/2} {\left(1+\ln{\left(u\right)}\right)}^{-{\left(r-1\right)}\frac{ 1}2} v^{1/2}C_{r-1} \right\}} {\left(1+\ln{\left( v\right)}\right)}^{q_{r-1}}\mathrm dv\mathrm{d}u.\end{gathered}$$ In order to control the last term, we will make a use of the following lemma. \[lem:integrale\_double\_avec\_log\] Let $X$ be a non-negative random variable and let $q,q'$ be non-negative numbers. Define $$\kappa_q=\begin{cases} 1&\mbox{ if }q{\leqslant}1\\ e^{q-1}/q^q&\mbox{ if }q> 1. \end{cases}$$ Then $$\begin{gathered} \label{eq:resultat_lemme_double_integrale} \int_{1 }^{+\infty}\int_{1 }^{+\infty} {\mathbb P}{\left\{ X> u {\left(1+\ln{\left( u\right)}\right)}^{- q_1 } v \right\}} {\left(1+\ln{\left( v\right)}\right)}^{q_2}\mathrm dv\mathrm{d}u\\ {\leqslant}\frac{2^{q_1}}{\kappa{q_1} }{\left(1+\ln\frac{\kappa_{2q_1}}{\kappa_{q_1}}\right)}^{q_1} \int_1^{+\infty} {\mathbb P}{\left\{ X>t/\kappa_{q_1} \right\}}{\left(1+\ln t\right)}^{q_1+q_2+1}dt.\end{gathered}$$ Define the function $g_q\colon u\mapsto u/{\left(1+\ln u\right)}^{-q}$ for $u{\geqslant}1$. 
Then $g'_q{\left(u\right)}= {\left(1+\ln u\right)}^{-q}-uq {\left(1+\ln u\right)}^{-q-1}\frac 1u = {\left(1+\ln u\right)}^{-q-1}{\left(1+\ln u-q\right)}$ hence, for $q>1$, the function $g_q$ reaches its minimum on $[1,+\infty)$ at $u=e^{q-1}$, while for $q{\leqslant}1$ it is non-decreasing and minimal at $u=1$. In particular, $$\label{eq:controle_fct_gq} {\left(1+\ln u\right)}^q{\leqslant}\kappa_q u, \quad u{\geqslant}1,$$ where $$\kappa_q=\begin{cases} 1&\mbox{ if }q{\leqslant}1\\ q^q/e^{q-1}&\mbox{ if }q> 1. \end{cases}$$ We do the substitution $w=u {\left(1+\ln{\left( u\right)}\right)}^{- q_1 } v $ for a fixed $u$. Then $$\begin{gathered} \label{eq:simplication_double_integral} \int_{1 }^{+\infty}\int_{1 }^{+\infty} {\mathbb P}{\left\{ X> u {\left(1+\ln{\left( u\right)}\right)}^{- q_1 } v \right\}} {\left(1+\ln{\left( v\right)}\right)}^{q_2}\mathrm dv\mathrm{d}u\\= \int_{1 }^{+\infty}\int_{g_{q_1}{\left(u\right)} }^{+\infty} {\mathbb P}{\left\{ X> w \right\}} {\left(1+\ln{\left(w /g_{q_1}{\left(u\right)} \right)}\right)}^{q_2} g_{q_1}{\left(u\right)}^{-1}\mathrm dw\mathrm{d}u.\end{gathered}$$ Since $g_{q_1}{\left(u\right)}^{-1}{\leqslant}\kappa_{q_1}$, it follows that $$\begin{gathered} \label{eq:simplication_double_integral_bis} \int_{1 }^{+\infty}\int_{1 }^{+\infty} {\mathbb P}{\left\{ X> u {\left(1+\ln{\left( u\right)}\right)}^{- q_1 } v \right\}} {\left(1+\ln{\left( v\right)}\right)}^{q_2}\mathrm dv\mathrm{d}u\\ {\leqslant}\int_{0 }^{+\infty}{\mathbb P}{\left\{ X>w\right\}} I{\left(w\right)}{\left(1+\ln{\left(w\kappa_{q_1} \right)}\right)}^{q_2} dw\end{gathered}$$ where $$I{\left(w\right)}=\int_1^{+\infty} \mathbf{1}_{{\left(0,w\right)}}{\left(g_{q_1}{\left(u\right)} \right)} g_{q_1}{\left(u\right)}^{-1} du.$$ Observe that if $w< 1/\kappa_{q_1}$, then $I{\left(w\right)}=0$. Assume now that $w {\geqslant}1/\kappa_{q_1}$.
Using with $q=2q_1 $, we get that $g_{2q_1 }{\left(u\right)}{\geqslant}\kappa_{2q_1}^{-1}$ hence $g_{q_1}{\left(u\right)}{\geqslant}\kappa_{2q_1}^{-1} \sqrt u$ and it follows that $$\mathbf{1}_{{\left(0,w\right)}}{\left(g_{q_1}{\left(u\right)} \right)} {\leqslant}\mathbf{1}_{{\left(0,\kappa_{2q_1}^2w^2\right)}}{\left(u\right)},$$ so that $$I{\left(w\right)}{\leqslant}\frac 1{q_1+1} {\left(1+2\ln{\left(w\kappa_{2q_1}\right)} \right)}^{q_1+1}$$ and therefore, by , $$\begin{gathered} \int_{1 }^{+\infty}\int_{1 }^{+\infty} {\mathbb P}{\left\{ X> u {\left(1+\ln{\left( u\right)}\right)}^{- q_1 } v \right\}} {\left(1+\ln{\left( v\right)}\right)}^{q_2}\mathrm dv\mathrm{d}u \\ {\leqslant}\frac 1{q_1+1} \int_{1/\kappa_{q_1}}^{+\infty}{\mathbb P}{\left\{ X>w\right\}} {\left(1+2\ln{\left(w\kappa_{2q_1}\right)} \right)}^{q_1+1}{\left(1+\ln{\left(w\kappa_{q_1} \right)}\right)}^{q_2} dw\\ {\leqslant}2^{q_1} \int_{1/\kappa_{q_1}}^{+\infty}{\mathbb P}{\left\{ X>w\right\}} {\left(1+ \ln{\left(w\kappa_{2q_1}\right)} \right)}^{q_1+q_2+1} \mathrm{d}w.\end{gathered}$$ The claimed bound then follows after the substitution $t=\kappa_{q_1}w$, using the elementary inequality $ {\left(1+\ln{\left(at\right)}\right)}^q{\leqslant}{\left(1+\ln a\right)}^q{\left(1+\ln t\right)}^q$, valid for $a,t{\geqslant}1$. This ends the proof of Lemma \[lem:integrale\_double\_avec\_log\]. To conclude the proof of Theorem \[thm:inegalite\_deviation\_U\_stats\], we apply in the following setting: - $X:=4{\left|h{\left(X_1,\dots,X_r\right)}\right|}^2 C_{r-1}^{-2}y^{-2}$; - $q_1= r-1 $; - $q_2=q_{r-1}$. ### Proof of Corollary \[cor:extension\_inegalite\_de\_deviation\] We start from .
We get that for each $r{\leqslant}n{\leqslant}N$, $$\frac{1}{N^{r-i/2 }}{\left|U_{r,n}{\left(h\right)}\right|} {\leqslant}\sum_{k=i}^r\frac 1{{\left(r-k\right)}!}N^{-k+i/2} {\left|U_{k,n}{\left(h_k\right)}\right|}$$ and taking the maximum over $n\in{\left\{ r,\dots,N\right\}}$ gives $$\frac{1}{N^{r-i/2}}\max_{r{\leqslant}n{\leqslant}N} {\left|U_{r,n}{\left(h\right)}\right|} {\leqslant}\sum_{k=i}^r\frac 1{{\left(r-k\right)}!}N^{-k+i/2} \max_{k{\leqslant}n{\leqslant}N} {\left|U_{k,n}{\left(h_k\right)}\right|}.$$ It follows that $${\mathbb P}{\left\{ \max_{r{\leqslant}n{\leqslant}N}{\left|U_n\right|}>N^{r-i/2}x\right\}} {\leqslant}\sum_{k=i}^r {\mathbb P}{\left\{ \frac 1{{\left(r-k\right)}!}N^{-k+i/2} \max_{k{\leqslant}n{\leqslant}N} {\left|U_{k,n}{\left(h_k\right)}\right|}>x/r \right\}}.$$ For all $k\in{\left\{ i,\dots,r\right\}}$, the equality $${\mathbb P}{\left\{ \frac 1{{\left(r-k\right)}!}N^{-k+i/2} \max_{k{\leqslant}n{\leqslant}N} {\left|U_{k,n}{\left(h_k\right)}\right|}>x/r \right\}}= {\mathbb P}{\left\{ \frac 1{N^{k/2}}\max_{k{\leqslant}n{\leqslant}N} {\left|U_{k,n}{\left(h_k\right)}\right|}>\widetilde{x} \right\}}$$ holds, where $\widetilde{x}=N^{ {\left(k-i\right)}/2}x{\left(r-k\right)}!/r$. We can therefore apply Theorem \[thm:inegalite\_deviation\_U\_stats\] in the setting - $\widetilde{x}=N^{ {\left(k-i\right)}/2}x{\left(r-k\right)}!/r$; - $\widetilde{y}=N^{ {\left(k-i\right)}/2}y{\left(r-k\right)}!/r$ and - $\widetilde{r}=k$ to get the desired result. Proof of Corollary \[cor:inegalite\_moments\_exp\] -------------------------------------------------- We first use Corollary \[cor:extension\_inegalite\_de\_deviation\] with a $y<3^{-r}x$ that will be specified later.
Observe that in view of Markov’s inequality, $$\int_{1 }^{+\infty} {\mathbb P}{\left\{ {\left|Y_k\right|}> y N^{{\left(k-i\right)}\frac{1}{2}}u ^{1/2} C_{r,2} \right\}} {\left(1+\ln{\left( u\right)}\right)}^{q_{k,2}}\mathrm du {\leqslant}I{\left(y\right)}{\mathbb E\left[\exp{\left( {\left|Y_k\right|}^\gamma \right)}\right]},$$ where $$\label{eq:definition_de_I_y} I{\left(y\right)}:= \int_{1 }^{+\infty} \exp{\left(- {\left(y N^{{\left(k-i\right)}\frac{ 1}{2}}u ^{1/2} C_{r,2}\right)}^\gamma \right)} {\left(1+\ln{\left( u\right)}\right)}^{q_{k,2}}\mathrm du.$$ Now, noticing that ${\mathbb E\left[\exp{\left( {\left|Y_k\right|}^\gamma \right)}\right]}{\leqslant}c_\gamma {\mathbb E\left[\exp{\left( {\left|h{\left(X_1,\dots,X_r\right)}\right|}^\gamma \right)}\right]}$, it is sufficient to treat the case $k=i$. Doing the substitution $v=u^{\gamma/2}-1$, we obtain $$I{\left(y\right)}= \frac 2{\gamma}\int_{0 }^{+\infty} \exp{\left(- C_{r,2}^\gamma y^\gamma {\left(v+1\right)} \right)} {\left(1+\ln{\left( {\left(v+1\right)}^{2/\gamma} \right)}\right)}^{q_{r,2}}{\left(1+v\right)}^{\frac{2}{\gamma}-1} \mathrm dv.$$ Assume that $y$ is such that $$\label{eq:condition_sur_y} y^\gamma C_{r,2}^\gamma {\geqslant}1.$$ Then $$I{\left(y\right)}{\leqslant}\exp{\left(- C_{r,2}^\gamma y^\gamma \right)} \cdot \frac 2{\gamma}\int_{0 }^{+\infty} \exp{\left(- v \right)} {\left(1+\ln{\left( {\left(v+1\right)}^{2/\gamma} \right)}\right)}^{q_{r,2}}{\left(1+v\right)}^{\frac{2}{\gamma}-1} \mathrm dv.$$ Now, we choose $y<x3^{-r}$ satisfying and such that $$\frac 12{\left(\frac xy\right)}^{ 2/i }= C_{r,2}^\gamma y^\gamma.$$ This leads to the choice $$y:=x^{\frac{2}{2+i\gamma}}2^{-\frac{1}{\gamma+2/i}}C_{r,2}^{-\frac{\gamma}{\gamma+2/i}}.$$ Due to the definition of $x_{r,i,\gamma}$, the inequalities $x/y{\geqslant}3^r$ and hold.
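As a sanity check of the algebra behind this choice (my own, with illustrative parameter values; $C$ stands for the constant $C_{r,2}$), one can verify numerically that the displayed $y$ indeed solves $\frac12(x/y)^{2/i}=C_{r,2}^\gamma y^\gamma$:

```python
# Illustrative (hypothetical) values for x, i, gamma and the constant C_{r,2}.
x, i, gamma, C = 7.5, 3, 1.25, 2.0
y = x ** (2 / (2 + i * gamma)) * 2 ** (-1 / (gamma + 2 / i)) * C ** (-gamma / (gamma + 2 / i))
lhs = 0.5 * (x / y) ** (2 / i)    # (1/2) (x/y)^{2/i}
rhs = C ** gamma * y ** gamma     # C^{gamma} y^{gamma}
err = abs(lhs - rhs)
print(err < 1e-9)
```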
This choice of $y$ combined with the observation that $1{\leqslant}{\mathbb E\left[\exp{\left( {\left|h{\left(X_1,\dots,X_r\right)} \right|}^\gamma \right)}\right]}$ gives , which ends the proof of Corollary \[cor:inegalite\_moments\_exp\]. Proof of Theorem \[thm\_Baum\_Katz\_Ustats\] -------------------------------------------- Let ${\varepsilon}>0$. Observe that for all positive $C$, $${\mathbb P}{\left\{ \max_{r{\leqslant}n{\leqslant}2^N}{\left|U_n\right|}>{\varepsilon}2^{N\alpha}\right\}}= {\mathbb P}{\left\{ \max_{r{\leqslant}n{\leqslant}2^N}{\left|\frac{CU_n}{{\varepsilon}}\right|}>2^{N{\left(r-i/2\right)}} C2^{N{\left(\alpha -{\left(r-i/2\right)} \right)} } \right\}}.$$ We now choose $C$ such that $C^{\frac{2\gamma}{i\gamma+2}}B_{r,\gamma}{\geqslant}1$, where $B_{r,\gamma}$ is like in Corollary \[cor:inegalite\_moments\_exp\]. We then apply Corollary \[cor:inegalite\_moments\_exp\] in the setting $\widetilde{h} := Ch/{\varepsilon}$, $x = C2^{N{\left(\alpha -{\left(r-i/2\right)} \right)}} $ and for $N$ such that $C2^{N{\left(\alpha -{\left(r-i/2\right)} \right)}}{\geqslant}x_{r,i,\gamma}$, with $\widetilde{N}=2^N$. Proof of Theorem \[thm:large\_deviation\_Ustats\] ------------------------------------------------- We apply Corollary \[cor:extension\_inegalite\_de\_deviation\] with $x$ replaced by $xN^{i/2}$.
This gives $$\begin{gathered} \label{eq:step_large_deviation} {\mathbb P}{\left\{ \max_{r{\leqslant}n{\leqslant}N}{\left|U_n\right|}>N^{r}x\right\}} {\leqslant}A_{r}\exp{\left(- \frac 12{\left(\frac{xN^{i/2}}y\right)}^{ \frac{2}{i }}\right)} \\+B_{r}\sum_{k=i}^r\int_{1 }^{+\infty} {\mathbb P}{\left\{ {\left|{\mathbb E\left[h{\left(X_1,\dots,X_r\right)}\mid X_1,\dots,X_k \right]}\right|}> y N^{{\left(k-i\right)}\frac{ 1}{2}}u ^{1/2} C_{r} \right\}} {\left(1+\ln{\left( u\right)}\right)}^{q_{k}}\mathrm du.\end{gathered}$$ Using , we notice that $\sup_{t>0}\exp{\left(t^\gamma\right)}{\mathbb P}{\left\{ {\left|{\mathbb E\left[h{\left(X_1,\dots,X_r\right)}\mid X_1,\dots,X_k \right]}\right|}>t \right\}} {\leqslant}\kappa \cdot M$, where $\kappa$ depends only on $\gamma$. Therefore, we will only focus on the term associated to $k=i$ in the right hand side of . After having used the condition on the tail, we obtain $$\begin{gathered} {\mathbb P}{\left\{ \max_{r{\leqslant}n{\leqslant}N}{\left|U_n\right|}>N^{r}x\right\}} {\leqslant}A_{r}\exp{\left(- \frac 12{\left(\frac{xN^{i/2}}y\right)}^{ \frac{2}{i }}\right)} \\+B_{r,\gamma}M \int_{1 }^{+\infty} \exp{\left(-{\left( y \frac{ 1}{2}u ^{1/2} C_{r} \right)}^{\gamma}\right)} {\left(1+\ln{\left( u\right)}\right)}^{q_{k}}\mathrm du.\end{gathered}$$ Then using similar arguments as in the control of $I{\left(y\right)}$ defined by , we get $${\mathbb P}{\left\{ \max_{r{\leqslant}n{\leqslant}N}{\left|U_n\right|}>N^{r}x\right\}} {\leqslant}A_{r}\exp{\left(- \frac 12{\left(\frac{xN^{i/2}}y\right)}^{ \frac{2}{i }}\right)} \\+B'_{r,\gamma}M \exp{\left(-C_{r,\gamma} y^{\gamma}\right)}.$$ We conclude the proof by choosing $y=N^{\frac{i}{2+i\gamma}}x^{\frac{2 }{2+i\gamma} }$. 
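The final choice of $y$ balances the two exponential terms exactly: with $y=N^{i/(2+i\gamma)}x^{2/(2+i\gamma)}$ one gets $(xN^{i/2}/y)^{2/i}=y^\gamma$. A quick numerical verification of this identity (my own addition, with arbitrary illustrative values):

```python
# With y = N^{i/(2+i*gamma)} x^{2/(2+i*gamma)}, the two competing exponents
# (x N^{i/2} / y)^{2/i} and y^gamma coincide (illustrative values).
N, x, i, gamma = 64, 3.0, 2, 1.5
y = N ** (i / (2 + i * gamma)) * x ** (2 / (2 + i * gamma))
e1 = (x * N ** (i / 2) / y) ** (2 / i)
e2 = y ** gamma
rel = abs(e1 - e2) / e2
print(rel < 1e-9)
```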
Proof of the results of Subsection \[subsec:WIP\_Holder\] --------------------------------------------------------- First, by using Theorem 3 in [@MR2054586] (which is a consequence of the Schauder decomposition of ${\mathcal{H}}_\rho^o$ and the tightness criterion given in [@MR1736910]), the following condition is sufficient for tightness of a sequence of processes ${\left(\xi_n\right)}_{n{\geqslant}1}$ in ${\mathcal{H}}_\rho^o$: $$\forall{\varepsilon}>0, \lim_{J\to +\infty}\limsup_{n\to +\infty} {\mathbb P}{\left\{ \sup_{j{\geqslant}J}\max_{t\in D_j} {\left|\lambda_{j,t }{\left(\xi_n\right)}\right|}/\rho{\left(2^{-j}\right)}>{\varepsilon}\right\}}=0,$$ where for $j{\geqslant}1$, $D_j={\left\{ {\left(2k+1\right)}2^{-j},0{\leqslant}k{\leqslant}2^{j-1}-1\right\}}$ and $$\lambda_{j,t}{\left(x\right)}=x{\left(t\right)}-\frac 12{\left(x{\left(t+2^{-j}\right)}+x{\left(t-2^{-j}\right)} \right)}, x\in {\mathcal{H}}_\rho^o, t\in D_j.$$ Since for every $x\in {\mathcal{H}}_\rho^o$, $${\left|\lambda_{j,t }{\left(x\right)}\right|}{\leqslant}\frac 12{\left|x{\left(t\right)} - x{\left(t+2^{-j}\right)}\right|} + \frac 12{\left|x{\left(t\right)} - x{\left(t-2^{-j}\right)}\right|},$$ we infer that $$\max_{t\in D_j} {\left|\lambda_{j,t }{\left(\xi_n\right)}\right|}{\leqslant}\max_{1{\leqslant}\ell {\leqslant}2^j}{\left|\xi_n{\left( \ell 2^{-j} \right)} - \xi_n{\left({\left(\ell-1\right)} 2^{-j}\right)}\right|}$$ and we are therefore reduced to proving that $$\forall{\varepsilon}>0, \lim_{J\to +\infty}\limsup_{n\to +\infty} {\mathbb P}{\left\{ \sup_{j{\geqslant}J} \max_{1{\leqslant}\ell {\leqslant}2^j} {\left|W_n{\left( \ell 2^{-j} \right)} - W_n{\left({\left(\ell-1\right)} 2^{-j}\right)}\right|}/\rho{\left(2^{-j}\right)}>{\varepsilon}\right\}}=0.$$ We will now control the differences ${\left|W_n{\left( \ell 2^{-j} \right)} - W_n{\left({\left(\ell-1\right)} 2^{-j}\right)}\right|}$ by exploiting the fact that the graph of $W_n$ is a polygonal line.
Fix an integer $n$ and define the interval $I_k:=\left[{\left(k-1\right)}/n,k/n\right]$, $1{\leqslant}k{\leqslant}n$. Define $$R_n:=\max_{1{\leqslant}k{\leqslant}n}{\left|W_n{\left(k/n\right)}-W_n{\left({\left(k-1\right)}/n\right)}\right|}.$$ Let $0{\leqslant}s<t{\leqslant}1$. 1. There exists a $k\in{\left\{ 1,\dots,n\right\}}$ such that $s,t\in I_k$. Since on $I_k$, $W_n$ is affine, we derive that $$\begin{aligned} {\left|W_n{\left(t\right)}-W_n{\left(s\right)}\right|}&{\leqslant}n{\left(t-s\right)}{\left|W_n{\left(k/n\right)}-W_n{\left({\left(k-1\right)}/n\right)}\right|}\\ &{\leqslant}n{\left(t-s\right)}R_n.\end{aligned}$$ 2. There exists a $k\in{\left\{ 1,\dots,n-1\right\}}$ such that $s\in I_k$ and $t\in I_{k+1}$. Starting from $${\left|W_n{\left(t\right)}-W_n{\left(s\right)}\right|}{\leqslant}{\left|W_n{\left(t\right)}-W_n{\left(k/n\right)}\right|}+{\left|W_n{\left(k/n\right)}-W_n{\left(s\right)}\right|}$$ and applying the reasoning of the first case to treat the two terms, we get $${\left|W_n{\left(t\right)}-W_n{\left(s\right)}\right|}{\leqslant}n{\left(t-s\right)}R_n.$$ 3. There exists a $k\in{\left\{ 1,\dots,n\right\}}$ such that $s\in I_k$ and $j\in{\left\{ k+2,\dots,n\right\}}$ such that $t\in I_j$. We start from $${\left|W_n{\left(t\right)}-W_n{\left(s\right)}\right|}{\leqslant}{\left|W_n{\left(t\right)}-W_n{\left(\frac{j-1}n\right)}\right|}+ {\left|W_n{\left(\frac{j-1}n\right)}-W_n{\left(\frac{k}{n}\right)}\right|}+{\left|W_n{\left(\frac{k}{n}\right)}-W_n{\left(s\right)}\right|}.$$ For the first and third terms of the right hand side, we use the reasoning of case 1 to get that their contribution does not exceed $2 R_n$. The second term is ${\left|S_{j-1}-S_k\right|}/a_n$ hence $${\left|W_n{\left(t\right)}-W_n{\left(s\right)}\right|}{\leqslant}3R_n+\frac{{\left|S_{\left[nt\right] }- S_{\left[ns\right] }\right|}}{a_n} .$$ Suppose that $j{\geqslant}[\log_2n]+1$ and let $t=\ell 2^{-j}$ and $s={\left(\ell-1\right)}2^{-j}$. 
Then $t-s=2^{-j}<1/n$ hence only the first two cases are possible. Consequently, $$\sup_{j{\geqslant}[\log_2n]+1}\max_{1{\leqslant}\ell{\leqslant}2^j}{\left|W_n{\left( \ell 2^{-j} \right)} - W_n{\left({\left(\ell-1\right)} 2^{-j}\right)}\right|}/\rho{\left(2^{-j}\right)}{\leqslant}\sup_{j{\geqslant}[\log_2n]+1}n 2^{-j}R_n/\rho{\left(2^{-j}\right)}.$$ Now, exploiting the fact that $\rho{\left(u\right)}=u^\alpha L{\left(1/u\right)}$, we infer that for some constant $C$ depending only on $\alpha$ and $L$, $$\sup_{j{\geqslant}[\log_2n]+1}\max_{1{\leqslant}\ell{\leqslant}2^j}{\left|W_n{\left( \ell 2^{-j} \right)} - W_n{\left({\left(\ell-1\right)} 2^{-j}\right)}\right|}/\rho{\left(2^{-j}\right)}{\leqslant}C R_n/\rho{\left(1/n\right)}.$$ Let now $j\in{\left\{ J,\dots,[\log_2n]\right\}}$. This time, with the choices $t=\ell 2^{-j}$ and $s={\left(\ell-1\right)}2^{-j}$, the third case may apply, hence $$\begin{gathered} \max_{J{\leqslant}j{\leqslant}[\log_2n]} \max_{1{\leqslant}\ell{\leqslant}2^j}{\left|W_n{\left( \ell 2^{-j} \right)} - W_n{\left({\left(\ell-1\right)} 2^{-j}\right)}\right|}/\rho{\left(2^{-j}\right)} \\ {\leqslant}3 \max_{J{\leqslant}j{\leqslant}[\log_2n]}\rho{\left(2^{-j}\right)}^{-1} R_n+\max_{J{\leqslant}j{\leqslant}[\log_2n]} \rho{\left(2^{-j}\right)}^{-1} \max_{1{\leqslant}\ell{\leqslant}2^j}\frac 1{a_n}{\left|S_{\left[n\ell 2^{-j}\right] } - S_{\left[n{\left(\ell-1\right)} 2^{-j}\right] } \right|}.\end{gathered}$$ In total, we obtain that for a constant $C$ depending only on $\rho$, $$\begin{gathered} \sup_{j{\geqslant}J} \max_{1{\leqslant}\ell{\leqslant}2^j}{\left|W_n{\left( \ell 2^{-j} \right)} - W_n{\left({\left(\ell-1\right)} 2^{-j}\right)}\right|}/\rho{\left(2^{-j}\right)} \\ {\leqslant}C\max_{J{\leqslant}j{\leqslant}[\log_2n]} \rho{\left(2^{-j}\right)}^{-1} R_n+\max_{J{\leqslant}j{\leqslant}[\log_2n]} \rho{\left(2^{-j}\right)}^{-1} \max_{1{\leqslant}\ell{\leqslant}2^j}\frac 1{a_n}{\left|S_{\left[n\ell 2^{-j}\right] } - S_{\left[n{\left(\ell-1\right)}
2^{-j}\right] } \right|}.\end{gathered}$$ Since guarantees that $$\lim_{J\to +\infty}\limsup_{n\to +\infty}{\mathbb P}{\left\{ \max_{J{\leqslant}j{\leqslant}[\log_2n]} \rho{\left(2^{-j}\right)}^{-1} \max_{1{\leqslant}\ell{\leqslant}2^j}\frac 1{a_n}{\left|S_{\left[n\ell 2^{-j}\right] } - S_{\left[n{\left(\ell-1\right)} 2^{-j}\right] } \right|}>{\varepsilon}\right\}}=0,$$ it remains to check that the sequence ${\left(\max_{1{\leqslant}j{\leqslant}[\log_2n]} \rho{\left(2^{-j}\right)}^{-1} R_n\right)}_{n{\geqslant}1}$ converges to $0$ in probability. Due to the construction of $W_n$ and the assumptions on the sequence ${\left(a_n\right)}_{n{\geqslant}1}$ and $\rho$, it suffices to check the convergence in probability to $0$ of ${\left(R_{2^N}/\rho{\left(2^{-N}\right)}\right)}_{N{\geqslant}1}$. To this aim, notice that implies (by considering $n=2^N$) that for each positive ${\varepsilon}$, $$\lim_{N\to +\infty} \sum_{k=0}^{2^N-1} {\mathbb P}{\left\{ {\left|S_{ [2^N{\left(k+1\right)}2^{-N} ] }-S_{ [2^Nk2^{-N}] } \right|} >a_{2^N} {\varepsilon}\rho{\left(2^{-N}\right)} \right\}}=0,$$ which implies the convergence in probability to $0$ of $R_{2^N}/\rho{\left(2^{-N}\right)}$. This ends the proof of Proposition \[prop:critere\_de\_tension\].
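The three-case bound on the increments of the polygonal line can be sanity-checked numerically. The sketch below is illustrative only (the function name, the Gaussian increments and the normalization $a_n=\sqrt n$ are our choices, not the paper's setting); it builds $W_n$ and verifies $|W_n(t)-W_n(s)|\leqslant 3R_n+|S_{[nt]}-S_{[ns]}|/a_n$ on random pairs $s<t$:

```python
import math
import random

def polygonal_checks(n=64, trials=500, seed=1):
    # Partial sums S_0, ..., S_n of i.i.d. standard Gaussian increments,
    # normalized by a_n = sqrt(n) (an arbitrary illustrative choice).
    rng = random.Random(seed)
    S = [0.0]
    for _ in range(n):
        S.append(S[-1] + rng.gauss(0.0, 1.0))
    a_n = math.sqrt(n)

    def W(t):
        # The polygonal line through the points (k/n, S_k / a_n).
        k = min(int(n * t), n - 1)
        return (S[k] + (n * t - k) * (S[k + 1] - S[k])) / a_n

    # R_n: largest normalized increment over the intervals I_k.
    R_n = max(abs(S[k] - S[k - 1]) for k in range(1, n + 1)) / a_n

    for _ in range(trials):
        s, t = sorted(rng.uniform(0.0, 1.0) for _ in range(2))
        bound = 3 * R_n + abs(S[int(n * t)] - S[int(n * s)]) / a_n
        if abs(W(t) - W(s)) > bound + 1e-12:
            return False
    return True
```

The bound holds deterministically for every sample path: cases 1 and 2 give at most $2R_n$, and case 3 costs one extra $R_n$ when replacing $S_{j-1}$ and $S_k$ by $S_{[nt]}$ and $S_{[ns]}$.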
We start from the equalities $$\begin{aligned} \frac 1{\sqrt{n_2-n_1}n_2^{\frac{r-1}2}}{\left(U_{n_2}-U_{n_1}\right)}&= \frac 1{\sqrt{n_2-n_1}n_2^{\frac{r-1}2}}\sum_{\substack{1{\leqslant}i_1<\dots<i_r{\leqslant}n_2 \\ n_1+1{\leqslant}i_r{\leqslant}n_2 }} h{\left(X_{i_1},\dots,X_{i_r}\right)}\\ &=\frac 1{\sqrt{n_2-n_1}}\sum_{j=n_1+1}^{n_2}D_j,\end{aligned}$$ where $$D_j=\frac{1}{n_2^{\frac{r-1}2}}\sum_{1{\leqslant}i_1<\dots<i_{r-1}<j}h{\left(X_{i_1},\dots,X_{i_{r-1}},X_j\right)}.$$ Define the random variable $Y$ as $$\frac 1{n_2^{{\left(r-1\right)}/2}} \max_{r-1{\leqslant}j{\leqslant}n_2} {\left|\sum_{1{\leqslant}i_1<\dots<i_{r-1}<j} h{\left(X_{i_1},\dots,X_{i_{r-1}},X\right)} \right|},$$ where $X$ is independent of the sequence ${\left(X_i\right)}_{i{\geqslant}1}$ and has the same distribution as $X_1$. Then in the same way as in the proof of Theorem \[thm:inegalite\_deviation\_U\_stats\], we can prove that ${\left(D_j\right)}_{j=n_1+1}^{n_2}$ is a martingale difference sequence and that $D_j^2{{\leqslant}_{\mathrm{conv}}}Y^2$ for all $j\in{\left\{ n_1+1, \dots,n_2\right\}}$. Applying Proposition \[prop:inegalite\_deviation\_martingales\_dominee\], we derive that $$\begin{gathered} {\mathbb P}{\left\{ \frac 1{\sqrt{n_2-n_1}n_2^{\frac{r-1}2}}{\left|U_{n_2}-U_{n_1}\right|}>x\right\}}\\ {\leqslant}2\exp{\left(-\frac 12 \frac{x^2}{y^2}\right)}+2\int_1^{+\infty} {\mathbb P}{\left\{ \frac 1{n_2^{{\left(r-1\right)}/2}} \max_{r-1{\leqslant}j{\leqslant}n_2} {\left|\sum_{1{\leqslant}i_1<\dots<i_{r-1}<j} h{\left(X_{i_1},\dots,X_{i_{r-1}},X\right)} \right|}>y\sqrt{u}/2 \right\}}\mathrm{d}u.\end{gathered}$$ Then we treat the last integral in the following way: we integrate with respect to the law of $X$, apply Theorem \[thm:inegalite\_deviation\_U\_stats\] to the $U$-statistics of order $r-1$ and rearrange the integrals.
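The first display rests on a combinatorial identity: the increment $U_{n_2}-U_{n_1}$ collects exactly the $r$-tuples whose largest index exceeds $n_1$, grouped by that largest index. This can be checked on simulated data; the sketch below (illustrative kernel and names, verifying the identity without the normalizing factors) is one way to do so:

```python
import itertools
import random

def u_stat(h, xs, r):
    # Non-normalized U-statistic: sum of h over increasing r-tuples of xs.
    return sum(h(*t) for t in itertools.combinations(xs, r))

def check_increment(n1=5, n2=9, seed=0):
    # Kernel and data are illustrative: r = 3 and h(x, y, z) = x*y*z (symmetric).
    r = 3
    h = lambda x, y, z: x * y * z
    rng = random.Random(seed)
    xs = [rng.uniform(-1.0, 1.0) for _ in range(n2)]
    lhs = u_stat(h, xs, r) - u_stat(h, xs[:n1], r)
    # Right-hand side: group the new terms by their largest index j.
    rhs = sum(h(*(t + (xs[j],)))
              for j in range(n1, n2)
              for t in itertools.combinations(xs[:j], r - 1))
    return abs(lhs - rhs) < 1e-12
```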
The convergence of the finite dimensional distributions follows from Corollary 1 in [@MR740907] and the fact that for a fixed $t$, the contribution of $n^{-i/2}{\left( nt-[nt]\right)}{\left(U_{[nt]+1}-U_{[nt]}\right)}$ is negligible. It remains to prove tightness in ${\mathcal{H}}_\rho^o$. To this aim, we apply Hoeffding’s decomposition and it suffices to show the following. \[lem:tension\_termes\_de\_la\_decomposition\_de\_Hoeffding\] Let $r{\geqslant}1$ be an integer, ${\left(S,{\mathcal{S}}\right)}$ be a measurable space, $h\colon S^r\to {\mathbb R}$ be a symmetric measurable function (with $S^r$ endowed with the product $\sigma$-algebra) and let ${\left(X_i\right)}_{i{\geqslant}1}$ be an i.i.d. sequence of $S$-valued random variables. Assume that $h$ is degenerate. Let $\rho\in{\mathcal{R}}_r$. Suppose that holds. Then $${\left(\frac 1{n^{r/2}}{\left(U_{[nt]}-{\left(nt-[nt]\right)} {\left(U_{[nt]+1} -U_{[nt]} \right)} \right)} \right)}_{n{\geqslant}1}$$ is tight in ${\mathcal{H}}_\rho^o$. Let us first show how Lemma \[lem:tension\_termes\_de\_la\_decomposition\_de\_Hoeffding\] helps to conclude. After applying the Hoeffding decomposition, the considered partial sum process is a sum of partial sum processes defined as in , but associated with degenerate $U$-statistics of lower order, and with a normalization stronger than $n^{k/2}$ for each term of the sum in . This shows tightness of the initially considered process and concludes the proof.
By an application of Proposition \[prop:critere\_de\_tension\], it suffices to show that for all positive $\varepsilon$, $$\lim_{J\to +\infty}\limsup_{n\to +\infty}\sum_{j=J}^{[\log_2n]}\sum_{k=0}^{2^j-1} {\mathbb P}{\left\{ {\left|U_{ [n{\left(k+1\right)}2^{-j} ] }-U_{ [nk2^{-j}] } \right|} >n^{r/2} {\varepsilon}\rho{\left(2^{-j}\right)} \right\}}=0.$$ In order to control the involved probabilities, we apply Proposition \[prop:deviation\_accroissements\] for fixed $n$, $J$, $j\in {\left\{ J,\dots,[\log_2 n]\right\}}$ and $k\in{\left\{ 0,\dots,2^j-1\right\}}$ in the following setting: $n_1=[nk2^{-j}] $, $n_2=[n{\left(k+1\right)}2^{-j} ] $ and $$x:= \frac{n^{r/2} {\varepsilon}\rho{\left(2^{-j}\right)} }{ \sqrt{[n{\left(k+1\right)}2^{-j} ] -[nk2^{-j}]} [n{\left(k+1\right)}2^{-j} ]^{\frac{r-1}{2} } }.$$ Then we choose $y$ such that $${\left(\frac xy\right)}^{2/r}=3 j \ln 2.$$ Observing that $x{\geqslant}c 2^{j/2}\rho{\left(2^{-j}\right)} $, we derive that $$\begin{gathered} {\mathbb P}{\left\{ {\left|U_{ [n{\left(k+1\right)}2^{-j} ] }-U_{ [nk2^{-j}] } \right|} >n^{r/2} {\varepsilon}\rho{\left(2^{-j}\right)} \right\}}\\ {\leqslant}A_r 2^{-3j/2}+ B_r\int_1^{+\infty}{\mathbb P}{\left\{ {\left|h{\left(X_1,\dots,X_r\right)}\right|}> c2^{j/2}\rho{\left(2^{-j}\right)}j^{-r/2} u^{1/2}\right\}} {\left(1+\log u\right)}^{q_r}du.\end{gathered}$$ We conclude by summing over $j{\geqslant}J$ and exploiting the convergence of the involved series. Now, it remains to show that conditions (respectively ) are sufficient in the case $\rho{\left(t\right)}=t^\alpha$ (respectively $\rho{\left(t\right)}=t^{1/2}{\left(\log{\left(c/t\right)}\right)}^\beta$). 
When $\rho{\left(t\right)}=t^\alpha$, we first use the fact that if $a$ and $b$ are two positive real numbers and $Z$ is a non-negative random variable, then $$\sum_{j{\geqslant}1}2^j{\mathbb P}{\left\{ Z >\frac{2^{ja}}{{\left(1+j\right)}^b} \right\}}{\leqslant}C_{a,b}{\mathbb E\left[Z^{1/a}{\left(1+Z\right)}^b\mathbf 1{\left\{ Z>1\right\}}\right]}.$$ This can be seen by decomposing the tail probability into a sum of probabilities that $Z$ lies in $\left(c_j,c_{j+1}\right]$, where $c_j=\frac{2^{ja}}{{\left(1+j\right)}^b} $, and then switching the sums. Applying this to $Z=Y/v^{1/2}$ for a fixed $v$, $a=1/2-\alpha$ and $b=r/2$, bounding the logarithms by ${\left(1+\log Y\right)}^{r/2 }$ and accounting for the factor $1/{\left(1/2-\alpha\right)}$ gives . When $\rho{\left(t\right)}=t^{1/2}{\left(\log{\left(c/t\right)}\right)}^\beta$, this follows from similar estimates as in the proof of Corollary \[cor:inegalite\_moments\_exp\]. This concludes the proof of Theorem \[thm:PI\_Holderien\]. Miguel A. Arcones, *A Bernstein-type inequality for $U$-statistics and $U$-processes*, Statist. Probab. Lett. **22** (1995), no. 3, 239–247. Miguel A. Arcones and Evarist Giné, *Limit theorems for $U$-processes*, Ann. Probab. **21** (1993), no. 3, 1494–1542. Leonard E. Baum and Melvin Katz, *Convergence rates in the law of large numbers*, Trans. Amer. Math. Soc. **120** (1965), 108–123. Tasos C. Christofides, *Probability inequalities with exponential bounds for $U$-statistics*, Statist. Probab. Lett. **12** (1991), no. 3, 257–261. Jérôme Dedecker and Florence Merlevède, *A deviation bound for $\alpha$-dependent sequences with applications to intermittent maps*, Stoch. Dyn. **17** (2017), no. 1, 1750005, 27. Xiequan Fan, Ion Grama, and Quansheng Liu, *Large deviation exponential inequalities for supermartingales*, Electron. Commun. Probab. **17** (2012), no. 59, 8.
Xiequan Fan, Ion Grama, and Quansheng Liu, *Exponential inequalities for martingales with applications*, Electron. J. Probab. **20** (2015), no. 1, 22. Evarist Giné, Rafał Latała, and Joel Zinn, *Exponential and moment inequalities for $U$-statistics*, High dimensional probability, II (Seattle, WA, 1999), Progr. Probab., vol. 47, Birkhäuser Boston, Boston, MA, 2000, pp. 13–38. Evarist Giné and Joel Zinn, *Marcinkiewicz type laws of large numbers and convergence of moments for $U$-statistics*, Probability in Banach spaces, 8 (Brunswick, ME, 1991), Progr. Probab., vol. 30, Birkhäuser Boston, Boston, MA, 1992, pp. 273–291. Davide Giraudo, *Holderian weak invariance principle under a Hannan type condition*, Stochastic Process. Appl. **126** (2016), no. 1, 290–311. Davide Giraudo, *Holderian weak invariance principle for stationary mixing sequences*, J. Theoret. Probab. **30** (2017), no. 1, 196–211. Wassily Hoeffding, *Probability inequalities for sums of bounded random variables*, J. Amer. Statist. Assoc. **58** (1963), 13–30. Christian Houdré and Patricia Reynaud-Bouret, *Exponential inequalities, with constants, for U-statistics of order two*, Stochastic inequalities and applications, Progr. Probab., vol. 56, Birkhäuser, Basel, 2003, pp. 55–69. P. N. Kokic, *Rates of convergence in the strong law of large numbers for degenerate $U$-statistics*, Statist. Probab. Lett. **5** (1987), no. 5, 371–374. Emmanuel Lesigne and Dalibor Volný, *Large deviations for martingales*, Stochastic Process. Appl. **96** (2001), no. 1, 143–159. Kuang Hsien Lin, *Convergence rate and the first exit time for $U$-statistics*, Bull. Inst. Math. Acad. Sinica **9** (1981), no. 1, 129–143. Péter Major, *On a multivariate version of Bernstein’s inequality*, Electron. J. Probab. **12** (2007), 966–988. Péter Major, *Estimation of multiple random integrals and $U$-statistics*, Mosc. Math. J. **10** (2010), no. 4, 747–763, 839. Avi Mandelbaum and Murad S.
Taqqu, *Invariance principle for symmetric statistics*, Ann. Statist. **12** (1984), no. 2, 483–496. Florence Merlevède, Magda Peligrad, and Sergey Utev, *Recent advances in invariance principles for stationary sequences*, Probab. Surv. **3** (2006), 1–36. Alfredas Račkauskas and Charles Suquet, *Necessary and sufficient condition for the Lamperti invariance principle*, Teor. Ĭmovīr. Mat. Stat. (2003), no. 68, 115–124. Alfredas Račkauskas and Charles Suquet, *Necessary and sufficient condition for the functional central limit theorem in Hölder spaces*, J. Theoret. Probab. **17** (2004), no. 1, 221–243. Ludger Rüschendorf, *Ordering of distributions and rearrangement of functions*, Ann. Probab. **9** (1981), no. 2, 276–283. Ch. Suquet, *Tightness in Schauder decomposable Banach spaces*, Proceedings of the St. Petersburg Mathematical Society, Vol. V, Amer. Math. Soc. Transl. Ser. 2, vol. 193, Amer. Math. Soc., Providence, RI, 1999, pp. 201–224. Henry Teicher, *On the Marcinkiewicz-Zygmund strong law for $U$-statistics*, J. Theoret. Probab. **11** (1998), no. 1, 279–288. Qiying Wang, *Bernstein type inequalities for degenerate $U$-statistics with applications*, Chinese Ann. Math. Ser. B **19** (1998), no. 2, 157–166, A Chinese summary appears in Chinese Ann. Math. Ser. A **19** (1998), no. 2, 283.
--- bibliography: - 'flip\_references.bib' --- [**The geometry of flip graphs and mapping class groups**]{} [**Valentina Disarlo, Hugo Parlier**]{} [*Abstract.*]{} The space of topological decompositions into triangulations of a surface has a natural graph structure where two triangulations share an edge if they are related by a so-called flip. This space is a sort of combinatorial Teichmüller space and is quasi-isometric to the underlying mapping class group. We study this space in two main directions. We first show that strata corresponding to triangulations containing a given multiarc are strongly convex within the whole space and use this result to deduce properties about the mapping class group. We then focus on the quotient of this space by the mapping class group to obtain a type of combinatorial moduli space. In particular, we are able to identify how the diameters of the resulting spaces grow in terms of the complexity of the underlying surfaces. Introduction {#sec:intro} ============ The many relationships between curves, arcs and homeomorphisms of surfaces have provided numerous, rich and fruitful insights into the study of Teichmüller spaces and mapping class groups. In particular, combinatorial structures such as curve, arc and pants complexes have been shown to be closely related to metric structures on Teichmüller spaces and in particular all share the mapping class group as an automorphism group. Flip graphs are an example of one of these natural combinatorial structures. For a given topological surface with a prescribed set of points, the vertex set of the associated flip graph is the set of maximal multiarcs (which begin and terminate at the prescribed points). Just like the other combinatorial objects, the multiarcs are considered up to isotopy (which preserves the prescribed set of points).
As they are maximal, they decompose the surface into triangles and thus we refer to them as triangulations (although they may not be triangulations in the usual sense). Two triangulations share an edge in the flip graph if they are related by a flip - that is, if they differ by exactly one arc. Provided the surface is complicated enough, flip graphs are infinite objects but are always locally finite connected graphs. Flip graphs can be thought of (and this is the point of view we take in this article) as metric objects by associating length 1 to every edge. As metric spaces they describe how different triangulations are from one another and are a sort of combinatorial analogue of Teichmüller space. With a few exceptions, the mapping class group is again the full automorphism group [@Kork-Pap] and as such the finite quotient, which we call a [*modular flip graph*]{}, becomes a combinatorial analogue of a moduli space. In contrast to some of the other spaces mentioned before, the action of the mapping class group is proper and, via the Švarc-Milnor lemma, the flip graph is a quasi-isometric model of the mapping class group which makes it an ideal tool for studying its geometry. Mosher [@Mosher2] implicitly uses the flip graph to study the mapping class group from the combinatorial point of view. This point of view has recently been exploited by Rafi and Tao [@RT2]. Flip graphs of surfaces also appear in a number of other contexts. As hinted at above, flip graphs naturally appear in Teichmüller theory. They appear for instance in Penner’s decorated Teichmüller space [@Pen3] and in the work of Fomin, Shapiro and D. Thurston ([@FST1] and [@FST2]) in their study of cluster algebras related to bordered surfaces. Flip graphs and some slight variations have been studied in combinatorics and computational geometry by a variety of authors, for instance Negami [@Negami1], Bose [@Bose1] and De Loera-Rambau-Santos [@triang-book].
One of the simplest and most studied flip graphs is the flip graph of a polygon, the so-called associahedron [@Stasheff; @Tamari]. It is a finite graph with a number of remarkable properties including being the graph of a polytope. The celebrated result of Sleator, Tarjan and W. Thurston [@STT2] about the diameter of the associahedron, proved using 3-dimensional hyperbolic polyhedra, was recently extended by Pournin [@Pournin], who also provided a purely combinatorial proof. The diameter of this graph is exactly $2n-10$ for all $n>12$. Sleator, Tarjan, and W. Thurston [@STT1] also studied triangulations of spheres up to homeomorphism, which essentially amounts to studying the diameter of a modular flip graph. In this case, they show that the diameter grows like $n \log n$ where $n$ is the number of labelled points on the sphere. In this article, we study both the geometry of flip graphs and of modular flip graphs. One of the main motivations we have in mind is the study of the mapping class group. We begin by studying the geometry of flip graphs. Our first main result comes from a very natural question about two triangulations that have an arc $a$ in common. Given any two such triangulations, there is at least one minimal path between them: do all the triangulations of any minimal path contain the arc $a$? The answer is yes. \[thm:convex\] For every multiarc $A$, the stratum ${{\mathcal F}}_A$ is strongly convex. In the above result, for any given flip graph, we’ve denoted by ${{\mathcal F}}_A$ the set of triangulations which contain a prescribed multiarc $A$. We note that this result for flip graphs of polygons was previously known and an essential tool in [@STT1] and in [@Pournin]. We observe that the same question can be asked for the pants graph (where multicurves play the part of multiarcs). For the pants graph, this is known to be true for certain types of multicurves but is in general completely open [@APS1; @APS2; @ALPS; @TaylorZupan].
We give two applications of this result. It is a recent result of the second author together with Aramayona and Koberda that, under certain conditions, simplicial embeddings between flip graphs only arise [*naturally*]{} [@AKP]. By naturally, we mean that the injective simplicial map comes from an embedding between the two surfaces. The condition is on the surface underlying the domain flip graph, which is required to be non-exceptional (or “sufficiently complicated", see Section \[ss:applications\] for a precise definition). Now together with the above theorem, this implies the following. Suppose $\Sigma$ is non-exceptional, and let ${{\mathcal F}}(\Sigma) \to {{\mathcal F}}(\Sigma')$ be an injective simplicial map. Then ${{\mathcal F}}(\Sigma)$ is strongly convex inside of $ {{\mathcal F}}(\Sigma')$. As geometric properties of flip graphs translate into large-scale properties of mapping class groups, we also obtain the following result for mapping class groups. This result also follows from results of Masur-Minsky [@MM2] and Hamenstädt [@Ham2]. \[cor:convex1\] For every vertex $T \in {{\mathcal F}}_A$, there is a commutative diagram: $$\xymatrix{ {{\mathcal F}}_A \ar@{^{(}->}[r] &{{\mathcal F}}(\Sigma) \\ \mathrm{Stab}(A) \ar[u]^{{\omega_T}_|} \ar@{^{(}->}[r] & {\ensuremath{\mathrm{Mod}}}(\Sigma) \ar[u]_{\omega_T} }$$ where the inclusion ${{\mathcal F}}_A \hookrightarrow {{\mathcal F}}(\Sigma)$ is an isometry and the orbit map $\omega_T: {\ensuremath{\mathrm{Mod}}}(\Sigma) \to {{\mathcal F}}(\Sigma)$ restricts to a quasi-isometry ${\omega_T}_|: {\ensuremath{\mathrm{Stab}}}(A) \to {{\mathcal F}}_A $. Moreover, the inclusion $\mathrm{Stab}(A) \hookrightarrow {\ensuremath{\mathrm{Mod}}}(\Sigma)$ is a quasi-isometric embedding. After these results about the geometry of flip graphs and mapping class groups, we shift our focus to the quotient of the former by the latter, namely the geometry of modular flip graphs ${\mathcal M \mathcal F}(\Sigma)$.
In particular, we study their diameter and how it grows as a function of the topology of the base surface. Our main results are upper and lower bounds that have the same growth rates in terms of the number of punctures and genus. We summarize them in the following theorem. \[thm:diameters\] There exist constants $L>0$ and $U>0$ such that if $\Sigma$ is a surface of genus $g$ with $n$ labelled punctures, then $$L \left( g \log(g+1) + n \log(n+1)\right)\leq {\ensuremath{\mathrm{diam}}}\left({\mathcal M \mathcal F}(\Sigma)\right) \leq U \left( g \log(g+1) + n \log(n+1)\right).$$ The above result is a combination of results (namely Theorems \[thm:uppergenus\], \[thm:upperpuncture\] and Corollary \[cor:card\]) from which the constants $L,U$ can be made explicit. When the punctures are not labelled, we obtain similar results and this time the growth rate is linear in $n$ (Theorem \[thm:unmarked\] and Corollary \[cor:count\]). We note that this result is a generalization of the result of Sleator, Tarjan and W. Thurston mentioned above about the diameters of modular flip graphs of punctured spheres and in fact our lower bounds are obtained using a counting argument and one of their results. Our result also provides a lower bound on the diameters of some variants of the flip graph used in computational geometry and combinatorics. Indeed, it follows that the maximal distance between two simple triangulations (i.e. not containing multiple edges or loops) of a surface with labelled punctures grows at least like $$n\log(n) + g\log(g).$$ This can be compared with results of Negami [@Negami2; @Negami1] and Cortes et al. [@Hurtado]. We also note that the growth rates are reminiscent of the growth rates of a type of combinatorial moduli space related to cubic graphs.
More precisely, one can endow the set of isomorphism types of cubic graphs with $m$ vertices with a metric where the distance counts the minimal number of [*Whitehead moves*]{} (or $\tilde{S}$-transformations in the language of [@tsukui]) needed to transform one graph into the other. We refer the reader to [@Cavendish] or [@RT] for the definitions of these terms. With this metric, the diameter of this space is also of rough growth $m \log m$ (Cavendish [@Cavendish], Cavendish-Parlier [@CP] and Rafi-Tao [@RT]). Dual to a triangulation is a cubic graph and flips correspond to specific types of Whitehead moves. One might think at first that the two results are in fact the same, but one does not seem to imply the other. On the one hand, flipping only allows for certain moves so the result on flip graphs certainly seems stronger. However, given two cubic graphs with the same number of vertices, there is no guarantee that they are both dual graphs of triangulations lying in the same flip graph. This article is organized as follows. In the preliminary section, we provide detailed descriptions of the objects we study and some known results. We also prove a number of preliminary results including for instance a new algorithm to reach a stratum with distance bounded by the intersection number. In particular this provides a new proof that intersection number bounds the flip distance between two triangulations. We also provide a lower bound on distance in terms of intersection number. We conclude the section with two results that are somewhat parallel to the rest of the paper about the mapping class group and flip graphs. Although both results are known, our proofs are, as far as we know, new. We provide these results to illustrate the point that flip graphs can be used to effectively study the mapping class group. In the third section, we prove Theorem \[thm:convex\] stated above. We then provide two applications of this result.
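The diameters discussed in this introduction can be computed exactly for small polygons by brute force. The sketch below is illustrative code (not taken from the paper): it encodes a triangulation of the convex $n$-gon as a frozenset of diagonals, builds the associahedron by breadth-first search from the fan triangulation, and returns the diameter.

```python
from collections import deque

def neighbours(tri, n):
    # Triangulations obtained from `tri` (a frozenset of diagonals, each a
    # frozenset of two vertices of the convex n-gon) by a single flip.
    edges = set(tri) | {frozenset((i, (i + 1) % n)) for i in range(n)}
    result = []
    for d in tri:
        a, b = sorted(d)
        # The two apices of the triangles on either side of the diagonal d.
        apices = [v for v in range(n) if v not in d
                  and frozenset((a, v)) in edges and frozenset((b, v)) in edges]
        result.append((tri - {d}) | {frozenset(apices)})
    return result

def associahedron_diameter(n):
    # BFS from the fan at vertex 0 reaches every triangulation (the flip
    # graph is connected); one BFS per vertex then yields the diameter.
    fan = frozenset(frozenset((0, j)) for j in range(2, n - 1))
    def bfs(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            t = queue.popleft()
            for s in neighbours(t, n):
                if s not in dist:
                    dist[s] = dist[t] + 1
                    queue.append(s)
        return dist
    vertices = bfs(fan)
    return max(max(bfs(v).values()) for v in vertices)
```

For the hexagon this visits the $14$ triangulations; the regime where the diameter equals $2n-10$ only sets in for $n>12$, beyond what this brute force comfortably reaches.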
The first is about projections to strata and the second about the large-scale geometry of the mapping class group as discussed above. The final section is about the diameters of modular flip graphs. We begin with upper bounds - first in terms of genus and then in terms of the number of punctures - and we end with the lower bounds. [**Acknowledgements.**]{} Part of this work was carried out while the first author was visiting the second author at the University of Fribourg. She is grateful to the department and the staff for the warm hospitality. She also acknowledges the support of Indiana University Provost’s Travel Award for Women in Science. The authors acknowledge support from U.S. National Science Foundation grants DMS 1107452, 1107263, 1107367 “RNMS: GEometric structures And Representation varieties” (the GEAR Network). We would like to thank Javier Aramayona, Chris Connell, Chris Judge, Chris Leininger, Athanase Papadopoulos, Bob Penner, Lionel Pournin and Dylan Thurston for their encouragement and enlightening conversations. Preliminaries ============= In this section we describe in some detail the objects we are interested in and introduce tools we use in the sequel. Most of the results we state are already known, although some of the proofs we provide are new (or at least we did not find them in the literature). In particular, at the end of this section we give two quick examples of results one can prove using flip graphs. Neither is essential in the sequel; they are provided purely for illustrative purposes. Definitions and setup --------------------- We begin with the basic setup which starts with a topological orientable connected surface $\Sigma$ and a finite set of marked points on it. Unless specifically stated, $\Sigma$ will be assumed to be triangulable. It is of finite type; its boundary can consist of marked points and boundary curves, and each boundary curve must have at least one marked point on it.
We make the distinction between [*labelled*]{} and [*unlabelled*]{} marked points when we look at how homeomorphisms are allowed to act on $\Sigma$ - this will be made explicit in what follows. Sometimes marked points that do not lie on a boundary curve will be referred to as [*punctures*]{}. To such a $\Sigma$ one can associate its [*arc complex*]{} ${\mathcal A}(\Sigma)$, a simplicial complex where vertices are isotopy classes of simple [*arcs*]{} based at the marked points of $\Sigma$. Simplices are spanned by [*multiarcs*]{} (unions of isotopy classes of arcs disjoint in their interior). We won’t explicitly use this complex so we won’t describe it in full detail, but we will be interested in the graph which is the $1$-skeleton of the cellular complex dual to ${\mathcal A}(\Sigma)$: the flip graph of $\Sigma$. The flip graph ${{\mathcal F}}(\Sigma)$ can be described differently as follows. Vertices of this graph are maximal multiarcs, so they decompose $\Sigma$ into triangles. We refer to these multiarcs as [*triangulations*]{} (although they are not always proper triangulations in the usual sense - we apologize for any confusion arising from this rather common terminology). Two vertices of ${{\mathcal F}}(\Sigma)$ share an edge if they differ by a so-called [*flip*]{}. If $a$ is an arc of a triangulation $T$ which belongs to two triangles which form a quadrilateral, a [*flip*]{} is the operation which consists of replacing $a$ by the other diagonal arc $a'$ of the quadrilateral. ![A local picture of a flip[]{data-label="fig:flip"}](Figures/flip.pdf){width="6cm"} Note that certain arcs are not [*flippable*]{} - this occurs exactly when an arc is contained in a punctured disc surrounded by another arc.
![The central arc is unflippable[]{data-label="fig:unflippable"}](Figures/unflippable.pdf){width="3.0cm"} We denote by $\kappa = \kappa(\Sigma)$ the number of arcs in (any) triangulation of $\Sigma$ and by $\tilde{\kappa}=\tilde{\kappa}(\Sigma)$ the number of triangles in the complement of any triangulation of $\Sigma$. Via an Euler characteristic argument, one obtains $\tilde{\kappa}(\Sigma) = 4g + 2b + 2s + p - 4$ and $\kappa(\Sigma)= 6g + 3b + 3s + p - 6$, where $g$ is the genus of $\Sigma$, $s$ is the number of punctures, $b$ is the number of boundary components and $p$ is the number of points on the boundary curves of $\Sigma$. Some flip graphs are finite - such as the flip graph of a polygon - but provided the underlying surface has enough topology, ${{\mathcal F}}(\Sigma)$ is infinite. A simple example of an infinite flip graph is given by the flip graph of a cylinder with a single marked point on each boundary curve. By the above formulas, a triangulation of this cylinder consists of exactly two arcs, both of which are always flippable, so vertices of ${{\mathcal F}}(\Sigma)$ are of degree $2$; as the graph is both infinite and connected, it is isomorphic to the infinite line graph. Associated to a multiarc $A$ is a [*stratum*]{} ${{\mathcal F}}_A(\Sigma)$ which is the flip graph of triangulations of $\Sigma$ which contain the multiarc $A$. We say that a stratum is [*strongly convex*]{} if any geodesic between two of its points is entirely contained in the stratum (sometimes this property is referred to as being [*totally geodesic*]{}). Naturally, if $A$ is not separating, ${{\mathcal F}}_A(\Sigma)$ is isomorphic to the flip graph of $\Sigma \setminus A$ (the surface [*cut along*]{} the multiarc $A$). A word about cutting: we think of the result of this operation not as the deletion of the arcs, but as the doubling of the arcs, which are then separated. For instance, if you cut a once punctured torus along an arc, the result is a cylinder with a marked point on each boundary curve.
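As a quick sanity check (our own illustrative sketch, not part of the argument), the counts $\kappa$ and $\tilde{\kappa}$ can be computed directly from the Euler characteristic data, recovering $\tilde{\kappa}$ from the side count $3\tilde{\kappa} = 2\kappa + p$: each triangle has three sides, interior arcs are shared by two triangles, and the $p$ boundary edges each belong to a single triangle.

```python
def kappa(g, b, s, p):
    """Number of arcs in any triangulation of a surface of genus g with
    b boundary curves, s punctures and p marked points on the boundary."""
    return 6 * g + 3 * b + 3 * s + p - 6

def kappa_tilde(g, b, s, p):
    """Number of triangles, via the side count 3*kappa_tilde = 2*kappa + p."""
    two_k_plus_p = 2 * kappa(g, b, s, p) + p
    assert two_k_plus_p % 3 == 0
    return two_k_plus_p // 3

# A disc with 7 marked points on the boundary is a heptagon:
# 4 diagonals and 5 triangles.
assert (kappa(0, 1, 0, 7), kappa_tilde(0, 1, 0, 7)) == (4, 5)
# The cylinder with one marked point on each boundary curve.
assert (kappa(0, 2, 0, 2), kappa_tilde(0, 2, 0, 2)) == (2, 2)
# The once-punctured torus: the classical ideal triangulation
# has 3 arcs and 2 triangles.
assert (kappa(1, 0, 1, 0), kappa_tilde(1, 0, 1, 0)) == (3, 2)
```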
If $A$ is separating, ${{\mathcal F}}_A(\Sigma)$ is isomorphic to the product of the flip graphs of the connected components of $\Sigma \setminus A$. In the rest of the paper we will denote by $|A|$ the number of arcs in $A$. As arcs are thought of as isotopy classes of arcs, the intersection $i(a,b)$ between arcs $a$ and $b$ is defined to be the minimum number of intersection points between their representatives. Generally we assume arcs and multiarcs to be realized in minimal position. If $A$ and $B$ are multiarcs, their *intersection number* is defined as $$i(A,B) = \sum_{b \in B} \sum_{a\in A} i(a,b).$$ In terms of intersection, two triangulations $T,T'$ are related by a flip if they satisfy $$i(T,T')=1.$$ The flip graph is known to enjoy a number of properties. First of all, for any topological type of $\Sigma$, ${{\mathcal F}}(\Sigma)$ is a connected graph. There are several different proofs of this fact (see for instance Hatcher [@Hat]). We consider the edges of the flip graph to be of length 1 and endow the flip graph with its shortest path distance. The distance between two triangulations is then equal to the minimum number of flips required to pass from one to the other. In particular there is the following quantitative version relating distance and intersection number which can be deduced from an algorithm described by Mosher in [@Mosher1] and Penner in [@Pen5]. \[lem:mosher\] For any triangulations $S,T \in {{\mathcal F}}(\Sigma)$ we have $d(S,T) \leq i(T,S)$. We will give an alternative proof of this lemma in Section \[ss:upper\]. The homeomorphisms of $\Sigma$ considered here always fix pointwise the labelled points of $\Sigma$ and setwise the unlabelled. Permutations of the unlabelled points are allowed. The *mapping class group* ${\ensuremath{\mathrm{Mod}}}(\Sigma)$ of $\Sigma$ is the group of orientation-preserving homeomorphisms of $\Sigma$ up to isotopy.
Isotopies here always fix pointwise the set of the marked points of $\Sigma$. The group ${\ensuremath{\mathrm{Mod}}}(\Sigma)$ acts simplicially by automorphisms on ${{\mathcal F}}(\Sigma)$. It is a result of Korkmaz and Papadopoulos [@Kork-Pap] that except for some low complexity cases, the automorphism group of ${{\mathcal F}}(\Sigma)$ is exactly the [*extended mapping class group*]{} of $\Sigma$ - i.e. the group of homeomorphisms up to isotopy (orientation reversing homeomorphisms are also allowed). Related to this result is a result about subgraphs of flip graphs that are graph isomorphic to other flip graphs. Again except for some low complexity cases, such subgraphs only arise in the natural way - as strata associated to a certain multiarc [@AKP]. We will be interested in the geometry of the flip graph as a metric space. We recall a few notions of metric geometry that we will use later in the paper. Let $(X, d_X)$ and $(Y, d_Y)$ be two metric spaces. A map $$f:(X,d_X) \to (Y,d_Y)$$ is a $(\lambda, \epsilon)$-*quasi-isometric embedding* if for all $x, x' \in X$ we have $$\frac{1}{\lambda} d_X(x, x') - \epsilon \leq d_Y(f(x), f(x')) \leq \lambda d_X(x, x') + \epsilon.$$ We say that a quasi-isometric embedding $f$ is a *quasi-isometry* if there exists $R \geq 0$ such that the image of $f$ is $R$-dense in $Y$. Equivalently, $f$ is a quasi-isometry if there exists a quasi-isometric embedding $g: Y \to X$ and a constant $K\geq 0$ such that for all $x \in X$ and for all $y \in Y$ we have $d_X( g\circ f(x), x) \leq K$ and $d_Y( f\circ g(y), y) \leq K$. We say that $g$ is a *quasi-inverse* of $f$. The following lemma, known as the Švarc-Milnor lemma, is a classic result in geometric group theory. Let $G$ be a group acting on a metric space $(X, d_X)$ properly and cocompactly by isometries. Then $G$ is finitely generated and for every $x\in X$ the orbit map $$\begin{aligned} \omega_X : G & \to X \\ g &\mapsto g(x)\end{aligned}$$ is a quasi-isometry. The mapping class group acts on the flip graph by isometries.
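For concreteness, a standard example of these notions (independent of flip graphs) is the inclusion of the integers into the real line:

```latex
% f : \mathbb{Z} \to \mathbb{R}, f(n) = n, is a (1,0)-quasi-isometric
% embedding whose image is 1-dense, hence a quasi-isometry.
% A quasi-inverse is g(y) = \lfloor y \rfloor, with K = 1:
\[
d_{\mathbb{Z}}\bigl(g \circ f(n),\, n\bigr) = 0
\qquad \text{and} \qquad
d_{\mathbb{R}}\bigl(f \circ g(y),\, y\bigr) = y - \lfloor y \rfloor < 1 .
\]
```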
The Švarc-Milnor lemma applies directly to the flip graph and the mapping class group, and of course is only interesting when ${{\mathcal F}}(\Sigma)$ is of infinite diameter. \[mcg-flipgraph\] For every triangulation $T \in {{\mathcal F}}(\Sigma)$ the orbit map $$\begin{aligned} \omega_T : {\ensuremath{\mathrm{Mod}}}(\Sigma) & \to {{\mathcal F}}(\Sigma) \\ g &\mapsto g(T)\end{aligned}$$ is a quasi-isometry. We will use the Švarc-Milnor Lemma. The action of ${\ensuremath{\mathrm{Mod}}}(\Sigma)$ on ${{\mathcal F}}(\Sigma)$ is cocompact since there is only a finite number of ways to glue $\tilde{\kappa}$ triangles to get a surface homeomorphic to $\Sigma$. For a triangulation $T \in {{\mathcal F}}(\Sigma)$, we denote by ${\ensuremath{\mathrm{Stab}}}(T)$ its stabilizer in ${\ensuremath{\mathrm{Mod}}}(\Sigma)$ (the group of mapping classes that fix $T$ setwise). We will prove that for every $T$ the stabilizer ${\ensuremath{\mathrm{Stab}}}(T)$ is finite and this suffices to prove that the action of ${\ensuremath{\mathrm{Mod}}}(\Sigma)$ on ${{\mathcal F}}(\Sigma)$ is proper. Indeed, every mapping class in ${\ensuremath{\mathrm{Stab}}}(T)$ induces a permutation of the arcs in $T$, and there is a sequence of groups $$1 \to {\ensuremath{\mathrm{Stab}}}(T) \to \mathcal S_\kappa$$ where $\mathcal S_\kappa$ is the symmetric group on $\kappa$ elements. The sequence is exact (that is, the map ${\ensuremath{\mathrm{Stab}}}(T) \to \mathcal S_\kappa$ is injective) since a mapping class that fixes every arc of a triangulation is the identity by the Alexander lemma (see for instance [@FM]). In particular ${\ensuremath{\mathrm{Stab}}}(T)$ is finite. The last assertion follows directly from the Švarc-Milnor lemma. We define the *modular flip graph* ${\mathcal M \mathcal F}(\Sigma)$ as the quotient of ${{\mathcal F}}(\Sigma)$ under the action of ${\ensuremath{\mathrm{Mod}}}(\Sigma)$. We remark that points in ${\mathcal M \mathcal F}(\Sigma)$ are triangulations of $\Sigma$ up to homeomorphisms.
By the above lemma ${\mathcal M \mathcal F}(\Sigma)$ is a connected finite graph that inherits a well-defined distance from ${{\mathcal F}}(\Sigma)$. We note that an orbit map in Lemma \[mcg-flipgraph\] is $ {\ensuremath{\mathrm{diam}}}\left({\mathcal M \mathcal F}(\Sigma)\right)$-dense in ${{\mathcal F}}(\Sigma)$ by the Švarc-Milnor Lemma. We will later investigate the diameter of ${\mathcal M \mathcal F}(\Sigma)$. The following result will be a helpful tool in our computation. This result was first proved by Sleator-Tarjan-Thurston [@STT2] provided that $n$ is large enough. Recently Pournin [@Pournin] provided a combinatorial proof and proved the lower bound for all $n>12$. \[th:STT\] If $\Sigma$ is a disk with $n > 12$ labelled points on the boundary then ${{\mathcal F}}(\Sigma)$ has diameter $2n - 10$. Intersection number and distances --------------------------------- ### An upper bound {#ss:upper} In this section we describe an algorithm to get from a triangulation $T$ to a stratum associated to a multiarc $A$. To do this, we prove that there exists an arc in $T$ that intersects $A$ maximally and such that its flip reduces the number of intersections with $A$. This provides an alternative proof of Lemma \[lem:mosher\] above. Let $\Delta$ be a triangle in $\Sigma \setminus T$ and $A$ a multiarc. We say that $\Delta$ is *terminal* for $A$ if there exists $a \in A$ such that $\Delta$ is the first or the last triangle crossed by $a$ (see Figure \[fig:terminal\] for an example). Let $T$ be a triangulation and $A$ a multiarc. Let $t \in T$ be a flippable edge and $T'$ the triangulation obtained by the flip of $t$. We say that flipping $t$ is *convenient* if $i(T', A) < i(T, A)$ and flipping $t$ is *neutral* if $i(T', A) = i(T, A)$.
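When $\Sigma$ is a disc with marked points on the boundary (a convex polygon), ${{\mathcal F}}(\Sigma)$ is finite and can be explored by brute force for small $n$. The sketch below is our own illustration (the encoding of diagonals as sorted vertex pairs and all function names are choices of ours, not from the paper); it checks the pentagon and hexagon cases and, incidentally, the bound $d(S,T) \leq i(T,S)$ of Lemma \[lem:mosher\]:

```python
from collections import deque

def polygon_flip_graph(n):
    """Flip graph of a convex n-gon; a triangulation is a frozenset
    of diagonals (i, j) with i < j."""
    sides = {tuple(sorted((i, (i + 1) % n))) for i in range(n)}

    def neighbors(tri):
        edges = sides | set(tri)
        out = []
        for (a, c) in tri:
            # The diagonal (a, c) is shared by two triangles; their apexes
            # are the unique vertices joined to both a and c on each side.
            apexes = [b for b in range(n) if b not in (a, c)
                      and tuple(sorted((a, b))) in edges
                      and tuple(sorted((b, c))) in edges]
            b = next(v for v in apexes if a < v < c)
            d = next(v for v in apexes if not a < v < c)
            # Flip: replace (a, c) by the other diagonal of the quadrilateral.
            out.append(tri - {(a, c)} | {tuple(sorted((b, d)))})
        return out

    fan = frozenset((0, i) for i in range(2, n - 1))  # fan from vertex 0
    verts, queue = {fan}, deque([fan])
    while queue:
        for u in neighbors(queue.popleft()):
            if u not in verts:
                verts.add(u)
                queue.append(u)
    return verts, neighbors

def flip_distances(src, neighbors):
    """Breadth-first search: flip distance from src to every triangulation."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        t = queue.popleft()
        for u in neighbors(t):
            if u not in dist:
                dist[u] = dist[t] + 1
                queue.append(u)
    return dist

def crossings(T, S):
    """i(T, S) in a convex polygon: chords cross iff endpoints interleave."""
    return sum(1 for (a, b) in T for (c, d) in S
               if a < c < b < d or c < a < d < b)

verts, nbrs = polygon_flip_graph(5)
assert len(verts) == 5   # Catalan(3); the pentagon flip graph is a 5-cycle
assert max(max(flip_distances(v, nbrs).values()) for v in verts) == 2

verts, nbrs = polygon_flip_graph(6)
assert len(verts) == 14  # Catalan(4)
# d(S, T) <= i(T, S) on every pair of hexagon triangulations.
assert all(flip_distances(T, nbrs)[S] <= crossings(T, S)
           for T in verts for S in verts if S != T)
```

This brute-force approach is of course limited to very small $n$, as the number of triangulations grows like the Catalan numbers.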
Denote by $t'$ the arc obtained by flipping $t$. Note that flipping $t$ is convenient if and only if $i(t',A) < i(t,A)$. Similarly, flipping $t$ is neutral if and only if $i(t',A) = i(t,A)$. Also note that a flip may be neither convenient nor neutral. \[flip-convenient-0\] Let $h\in T$ be an arc such that $i(h, A) = \max_{t \in T} i(t, A)$ and assume $i(h,A) >0$. Then $h$ is flippable in $T$. If $h$ is not flippable then there exists an arc $h^\star \in T$ that surrounds $h$ and bounds a once-punctured disk. Hence $i(h^\star, A) \geq 2 i(h, A) > i(h,A)$, in contradiction with our choice of $h$. \[flip-convenient-1\] Assume $i(T,A) >0$. Let $h \in T$ be an arc such that $i(h, A) = \max_{t \in T} i(t, A)$. Let $Q$ be the quadrilateral containing $h$ as a diagonal. If $Q$ contains a terminal triangle for $A$ then flipping $h$ is convenient. Let $a\in A$ be an arc that terminates on $Q$. We begin by observing that if $h$ is not the first arc of $T$ that $a$ crosses from its terminal point, then $h$ is not maximal. Indeed, if $h'\in T$ is the first arc crossed, then any arc that crosses $h$ is forced to cross $h'$, and so $$i(h',A) \geq i(h,A)+1,$$ in contradiction with the maximality of $h$ (see Figure \[fig:Hmax1\]). We now prove that if $h$ is the first (or the last) edge crossed by an arc in $A$ (as in Figure \[fig:figura1\]), then flipping $h$ is convenient. Let $h'$ be the arc obtained from flipping $h$. Let us now compute $i(h', A) - i(h, A) $. Up to possible symmetries, we may assume that $A$ contains at least one arc that crosses $z$ and $h$ and terminates in the top vertex of $Q$. Up to isotopy, only three configurations of $A$ and $Q$ are possible, and these are described in Figure \[fig:figura1\]. We will use the following notation: - $\epsilon$ is the number of arcs in $A\cap Q$ that terminate in the top vertex of $Q$, crossing $z$ and $h$. Under our assumption, $\epsilon \geq 1$.
- $\epsilon'$ is the number of those that terminate in the top vertex of $Q$, crossing $w$ and $h$; - $\epsilon''$ is the number of those that terminate in the bottom vertex of $Q$, crossing $h$ and $y$; - $\alpha $ is the number of arcs in $A\cap Q$ that wrap around the left endpoint of $h$ crossing $z,h$ and $x$; - $\beta$ is the number of arcs in $A \cap Q$ that wrap around the right endpoint of $h$ crossing $w,h$ and $y$; - $\gamma$ is the number of arcs in $A\cap Q$ that wrap around the bottom vertex of $Q$, crossing $z$, $h$ and $w$; - $\eta$ is the number of arcs that cross $z$, $h$ and $y$. We remark that in Figure \[fig:figura1\] - (1) we have $ \epsilon' = \epsilon'' = 0$, in Figure \[fig:figura1\] - (2) we have $ \gamma =0$, in Figure \[fig:figura1\] - (3) we have $\eta =0$. Moreover, in each configuration in Figure \[fig:figura1\] the following holds: $$\begin{aligned} i(h,A) &= \alpha + \beta + \epsilon + \eta + \epsilon' + \epsilon'' \\ i(h', A) &= \gamma + \eta \\ i(z,A) &= \alpha + \eta + \gamma + \epsilon = \alpha + \epsilon + i(h',A)\end{aligned}$$ By definition of $h$, we have $i(z,A) \leq i(h,A)$. It follows: $$\begin{aligned} \alpha + \epsilon + i(h',A) &\leq i(h,A) \\ i(h',A) -i (h,A) &\leq - \alpha - \epsilon \leq -1. \end{aligned}$$ We conclude that flipping $h$ is convenient. \[flip-convenient-2\] Assume $i(T,A) >0$.
Let $h\in T$ be an arc such that $i(h, A) = \max_{t \in T} i(t, A)$. Let $Q$ be the quadrilateral containing $h$ as a diagonal, and assume that $Q$ does not contain a terminal triangle for $A$. Then flipping $h$ is neutral if and only if, according to the notation of Figure \[fig:figura2\], $i(h, A) = i(y, A) = i(z,A)$. Moreover, if $i(h, A) \neq i(y, A)$ or $i(h, A) \neq i(z,A)$ then flipping $h$ is convenient. We will compute $i(h',A) - i(h,A)$. Since $Q$ is not terminal for $A$, $A$ and $Q$ look as in Figure \[fig:figura2\], up to isotopy. Denote by $\delta$ the number of arcs in $A\cap Q$ that wrap around the top corner of $Q$. In the notation of the proof of Lemma \[flip-convenient-1\] we have: $$\begin{aligned} i(h,A) &= \alpha + \eta + \beta \\ i(z,A) &= \alpha + \eta + \gamma \\ i(y,A) &= \eta + \beta + \delta \\ i(h',A) &= \gamma +\delta + \eta \\ i(h',A) - i(h,A) &= (\gamma - \beta) + (\delta - \alpha).\end{aligned}$$ By hypothesis on $h$, we have $i(z,A) \leq i(h,A)$, so $\gamma - \beta \leq 0$. Similarly, $i(y,A) \leq i(h,A)$, so $\delta - \alpha \leq 0$. It follows that $i(h', A) - i(h,A) = 0$ if and only if $\gamma = \beta$ and $\delta = \alpha$, that is, if and only if $i(z,A) = i(h,A)$ and $i(y,A) = i(h,A)$. We remark that in all the other cases $i(h',A) - i(h,A) <0$, and flipping $h$ is convenient. \[flip-convenient-lemma\] If $i(T,A) > 0$, then there exists an arc $h \in T$ such that $i(h,A) = \max_{t \in T} i(t,A)$ and flipping $h$ is convenient. We will describe a procedure to find $h$. First pick an arc $m$ such that $i(m,A) = \max_{t \in T} i(t,A)$. By Lemma \[flip-convenient-0\] $m$ is flippable. If flipping $m$ is convenient, we set $h= m$ and we are done.
Otherwise, the quadrilateral containing $m$ as a diagonal contains no terminal triangle for $A$ by Lemma \[flip-convenient-1\], and then by Lemma \[flip-convenient-2\] it contains two more edges, $y$ and $z$, such that $i(y, A) = i(z, A) = i(m, A) = \max_{t \in T} i(t,A)$. Now set $m=y$ or $m=z$, and repeat this procedure. Lemma \[flip-convenient-1\] ensures that the algorithm terminates. Indeed, if the quadrilateral containing $m$ as a diagonal also contains a terminal triangle for $A$ then flipping $m$ is convenient. This now allows us to describe a path from $T$ to ${{\mathcal F}}_A$. \[our-algo\] Let $T$ be a triangulation and $A$ a multiarc. There is a sequence of triangulations $T = T_0 \rightarrow \ldots \rightarrow T_{i+1} \rightarrow \ldots$ iteratively constructed as follows: 1. If $i(T_i, A) >0$, choose $h_i \in T_i$ as in Lemma \[flip-convenient-lemma\]. Denote by $T_{i+1}$ the triangulation obtained by flipping $h_i$. 2. If $i(T_i, A) =0$, terminate. Any sequence constructed this way consists of at most $i(T,A)$ flips, and if $T_n$ is the terminal triangulation then $T_n \in {{\mathcal F}}_A$. Moreover, the above sequence satisfies $\max_{t \in T_{i+1}} i(t, A) \leq \max_{t \in T_{i}} i(t, A)$ for every $i = 0, \ldots, n-1$. By Lemma \[flip-convenient-lemma\] flipping $h_i$ is convenient, so $$i(T_{i+1}, A) \leq i(T_i, A) - 1.$$ After at most $i(T,A)$ steps, we have $i(T_n, A) = 0$, that is, $T_n \in {{\mathcal F}}_A$. From this the following corollary is immediate. \[dist-up\] For every triangulation $T \in {{\mathcal F}}(\Sigma)$ and for every multiarc $A$, we have $$d(T, {{\mathcal F}}_A) \leq i(T,A).$$ We remark that if $S$ is a triangulation, ${{\mathcal F}}_S=S$ and the path described in Theorem \[our-algo\] is a path joining $T$ and $S$. As such we also have the following corollary. The flip graph ${{\mathcal F}}(\Sigma)$ is connected, and for any two triangulations $T, S \in {{\mathcal F}}(\Sigma)$ we have $d(T,S) \leq i(T,S).$ Our construction also enjoys the following properties.
\[corollary\] The path from $T$ to ${{\mathcal F}}_A$ described in Theorem \[our-algo\] has the following properties: 1. If there exists $a \in A$ such that $a \in T_i$ then $a \in T_j$ for all $j \geq i$; 2. If $t \in T_i$ is such that $i(t, A) = 0$ then $t \in T_j$ for all $j \geq i$. We will see later that all the geodesic paths between a triangulation and the stratum of a multiarc also have these properties. We use this corollary to deduce the following. \[strata-conn\] For every multiarc $A$, the stratum ${{\mathcal F}}_A$ is arcwise connected. Let $S,T \in {{\mathcal F}}_A$ be two triangulations, and consider the path from $T$ to ${{\mathcal F}}_S = S$ described in Theorem \[our-algo\]. We remark that for all $a \in A$ we have $a \in T$ and $i(a, S) = 0$. By Corollary \[corollary\], for all $a \in A$ we have $a \in T_i$, so $T_i \in {{\mathcal F}}_A$ for all $i = 0, \ldots, n$. We conclude that the path described is contained in ${{\mathcal F}}_A$. ### A lower bound on distances As a complement to our upper bound on distance in terms of intersection number, we now show how intersection also provides a lower bound on distance. We begin with the following observation. \[low:lemma-1\] Let $T$ and $T'$ be two triangulations of $\Sigma$ that differ by one flip, and let $A$ be a multiarc with $|A|$ components. Then we have: $$i(T', A) \geq 2 \cdot \max_{t \in T} i(t, A) - 2 \cdot |A| .$$ Assume that $T$ and $T'$ differ by a flip of $t$, and set $T' = T \setminus \{t\} \cup \{t'\}$. Let $h\in T$ be an arc such that $i(h, A) = \max_{t\in T} i(t,A)$. We have: $$\label{eq:2} \begin{split} i(T',A) &= i(T,A) - i(t,A) + i(t',A) \geq i(T,A) - i(t,A) \\ &\geq i(T,A) - i(h,A) \end{split}$$ By Lemma \[flip-convenient-0\] the arc $h$ is flippable.
In the notation of Lemmas \[flip-convenient-1\] and \[flip-convenient-2\], we have $$\begin{aligned} \label{eq:3} \begin{split} i(h, A) &= \alpha + \eta + \beta + \delta + \epsilon + \epsilon' + \epsilon'' \\ i(x, A) & = \alpha + \delta \\ i(y, A) &= \eta + \beta + \delta + \epsilon'' \\ i(w, A) & = \beta + \epsilon' \\ i(z, A) & = \alpha + \eta + \epsilon \end{split}\end{aligned}$$ Remark that $\epsilon + \epsilon' + \epsilon''$ is the total number of arcs of $A$ that terminate in the quadrilateral containing $h$; as each arc has two endpoints, $\epsilon + \epsilon' + \epsilon'' \leq 2|A|$. Combining Equations \[eq:3\], we have: $$\label{eq:4} \begin{split} i(T', A) & \geq i(T, A) - i(h,A) \\ &\geq (i(x,A) + i(y,A) + i(w,A) + i(z,A) + i(h,A) ) - i(h,A) \\ & = (\alpha + \delta) + (\eta + \beta + \delta + \epsilon'') + (\beta + \epsilon' ) + (\alpha + \eta + \epsilon) \\ & = 2 i(h, A) - (\epsilon + \epsilon' + \epsilon'') \\ & \geq 2 i(h,A) - 2 \cdot |A| \end{split}$$ If $T$ is a triangulation and $A$ is a multiarc such that $\max_{t \in T} i(t, A) \geq 2|A|$, then $$d(T, {{\mathcal F}}_A) \geq \left \lfloor \frac{\log(i(T,A)) - \log( 2|A| - 1 )}{\log(\kappa)} \right \rfloor - 2.$$ Let $h \in T$ be an arc such that $i(h,A) = \max_{t \in T} i(t,A)$. We have: $$\label{eq:1} i(h, A) \geq \frac{i(T,A)}{\kappa}.$$ Let $T'$ be a triangulation that differs from $T$ by one flip. By Lemma \[low:lemma-1\] the following holds when $i(h,A) \geq 2 |A|$: $$\label{eq:5} \begin{split} i(T',A) &\geq 2 i(h,A) - 2 \cdot |A| \\ &\geq i(h,A) \\ & \geq \frac{i(T,A)}{\kappa} . \end{split}$$ We note that the case $i(h,A) \leq 2 |A|$ is not very interesting because in this case $T$ is not too far from ${{\mathcal F}}_A$. Indeed, by Corollary \[dist-up\] we have: $d(T,{{\mathcal F}}_A) \leq i(T,A) \leq \kappa \cdot i(h,A) \leq 2 \kappa \cdot |A|$. Let $d = d(T, {{\mathcal F}}_A) $, and let $T=T_0 \rightarrow \ldots \rightarrow T_d \in {{\mathcal F}}_A$ be a geodesic path from $T$ to ${{\mathcal F}}_A$.
Let $m \leq d$ be the smallest integer such that $\max_{t \in T_{m+1}} i(t,A) \leq 2 |A| -1$ and for every $j \leq m$ we have $\max_{t \in T_{j}} i(t, A) \geq 2 |A|$. We have: $$\begin{split} i(T_{m+1}, A) \leq (2|A| -1) \cdot \kappa. \end{split}$$ By Equation \[eq:5\] and the above remark, we also have: $$\begin{split} i(T_{m+1}, A) \geq \frac{i(T_{m}, A)}{\kappa} \geq \ldots \geq \frac{i(T_0, A)}{\kappa^{m+1}} \end{split}$$ We have the following inequality that we solve for $m$: $$\begin{split} (2|A| -1) \cdot \kappa &\geq \frac{i(T_0, A)}{\kappa^{m+1}} \\ \kappa^{m+2} &\geq \frac{i(T,A)}{2|A| - 1} \\ m & \geq \frac{\log i(T,A) - \log(2|A| - 1)}{\log(\kappa)} - 2. \end{split}$$ We conclude: $$d(T, {{\mathcal F}}_A) = d \geq m \geq \frac{\log i(T,A) - \log(2|A| - 1)}{\log(\kappa)} -2.$$ We remark that if $\max_{t \in T} i(t, A) \leq 2|A| - 1$ then by Corollary \[dist-up\] $$d(T, {{\mathcal F}}_A) \leq i(T,A) \leq (2|A|-1)\cdot \kappa.$$ If $T$ and $S$ are two triangulations, then $$d(T,S) \geq \left \lfloor \frac{\log(i(T,S))}{\log(\kappa)} \right \rfloor - 4.$$ Examples of the relationship between flip graphs and the mapping class group ---------------------------------------------------------------------------- In this section, we provide two examples of how one can use the flip graph to study the mapping class group. They are completely independent of the rest of the paper but are provided to illustrate the variety of ways in which the quasi-isometry between the two objects can be used. ### Mapping tori and pseudo-Anosov homeomorphisms The following proposition follows from a standard construction in 3-dimensional topology known as the *layered triangulation* of the mapping torus of a pseudo-Anosov homeomorphism.
\[Agol\] For every pseudo-Anosov $\phi \in {\ensuremath{\mathrm{Mod}}}(\Sigma)$ and for every triangulation $T \in {{\mathcal F}}(\Sigma)$ we have: $$d(T, \phi(T)) \geq \frac{ \mathrm{vol}(M_\phi)}{\pi},$$ where $\mathrm{vol}(M_\phi)$ is the volume of the mapping torus $M_\phi$ of $\phi$. Consider a geodesic path of flips $T \rightarrow \ldots \rightarrow \phi(T)$. The number of hyperbolic tetrahedra in the layered triangulation of $M_\phi$ associated to this path is equal to $d(T, \phi(T))$. As each ideal hyperbolic tetrahedron has volume at most $v_3 < \pi$, where $v_3 \approx 1.0149$ is the volume of the regular ideal tetrahedron, the bound follows. For details on the layered triangulation of a mapping torus, we refer to [@Agol]. Our second application is the following. For every pseudo-Anosov $\phi \in {\ensuremath{\mathrm{Mod}}}(\Sigma)$ the cyclic subgroup $\langle \phi \rangle$ is undistorted in ${\ensuremath{\mathrm{Mod}}}(\Sigma)$. We first prove that for every triangulation $T$ and for every $\phi$ we have: $$\frac{n \cdot \mathrm{vol}(M_\phi)}{\pi} \leq d(T, \phi^n (T)) \leq n \cdot d(T, \phi(T)).$$ By Lemma \[mcg-flipgraph\] this suffices to prove the corollary. The upper bound follows immediately by the triangle inequality. For the lower bound we use Proposition \[Agol\]. We remark that since $M_{\phi^n}$ is a finite cover of degree $n$ of $M_\phi$, we have $$\mathrm{vol}(M_{\phi^n}) = n \cdot \mathrm{vol}(M_\phi).$$ It follows that $$d(T, \phi^n(T)) \geq \frac{ \mathrm{vol}(M_{\phi^n})}{\pi} =\frac{ n \cdot \mathrm{vol}(M_\phi)}{\pi} .$$ ### The cone construction Fix a complete finite-area hyperbolic metric $M$ on $\Sigma$ and a homeomorphism $\varphi$ between $M$ and $\Sigma$. Let $P = \{ p_1, \ldots, p_n \}$ be the set of punctures of $\Sigma$. It is a classical result of Birman and Series that the set of all simple geodesics on $M$ is nowhere dense in $M$. We choose a point in the complement of the closure of all the simple geodesics of $M$ and consider its image by $\varphi$ on $\Sigma$. We denote this point $p_{n+1}$ (on both $\Sigma$ and $M$).
We now set $P' = P \cup \{p_{n+1} \}$ and let $\Sigma'$ be the punctured surface $\Sigma$ with an extra marked point at $p_{n+1}$. This construction is known as *puncturing* (see [@RS]). Let $T$ be a triangulation of $\Sigma$, and denote by $\mathcal{G}_M(T)$ the unique $M$-geodesic representative in its isotopy class (it is an ideal triangulation as the marked points become punctures). Then $p_{n+1}$ is contained in a unique triangle of $\Sigma \setminus{\mathcal{G}_M(T)}$. We then [*cone*]{} the triangle at $p_{n+1}$: by this we mean we add arcs between $p_{n+1}$ and the three vertices of the triangle to obtain a triangulation of $\Sigma'$ that we denote by $\widehat{T}$. We will also refer to the arcs going to $p_{n+1}$ as the *cone* on $p_{n+1}$. In the following we will denote by $d$ the flip distance on ${{\mathcal F}}(\Sigma)$ and by $d'$ the flip distance on ${{\mathcal F}}(\Sigma')$. \[cone-lemma\] The cone map $$\begin{aligned} \mathrm{{\ensuremath{\mathrm{cone}}}}_{M}: {{\mathcal F}}(\Sigma) &\to {{\mathcal F}}(\Sigma') \\ T &\mapsto \widehat{T}\end{aligned}$$ is well-defined and 2-Lipschitz. If $T'$ is a triangulation of $\Sigma$ isotopic to $T$, then $\mathcal{G}_M(T') = \mathcal{G}_M(T)$, so $\widehat{T'} = \widehat{T}$. Figure \[flips-cone-1\] shows that if two triangulations differ by one flip, their images by the cone map differ by at most 2 flips. We will now prove that ${\ensuremath{\mathrm{cone}}}_M$ is a quasi-isometric embedding. Fix a triangulation $H \in {{\mathcal F}}(\Sigma)$. Denote by $\omega_{H}: {\ensuremath{\mathrm{Mod}}}(\Sigma) \to {{\mathcal F}}(\Sigma)$ the orbit map of $H$ under ${\ensuremath{\mathrm{Mod}}}(\Sigma)$ as in Lemma \[mcg-flipgraph\]. Similarly, denote by $\omega_{\widehat{H}}: {\ensuremath{\mathrm{Mod}}}(\Sigma') \to {{\mathcal F}}(\Sigma')$ the orbit map of $\widehat H$ under ${\ensuremath{\mathrm{Mod}}}(\Sigma')$.
Recall that ${\ensuremath{\mathrm{Mod}}}(\Sigma)$ and ${\ensuremath{\mathrm{Mod}}}(\Sigma')$ are related by the Birman exact sequence, where the map $f: {\ensuremath{\mathrm{Mod}}}(\Sigma') \to {\ensuremath{\mathrm{Mod}}}(\Sigma)$ is the *forgetful map*: $$1 \to \pi_1(\Sigma, p_{n+1}) \to {\ensuremath{\mathrm{Mod}}}(\Sigma') \overset{f} \to {\ensuremath{\mathrm{Mod}}}(\Sigma) \to 1$$ \[F-lipschitz\] Let $f: {\ensuremath{\mathrm{Mod}}}(\Sigma') \to {\ensuremath{\mathrm{Mod}}}(\Sigma)$ be the forgetful map. Let $\omega^{-1}_{\widehat H}: {{\mathcal F}}(\Sigma') \to {\ensuremath{\mathrm{Mod}}}(\Sigma')$ be a quasi-inverse of $\omega_{\widehat{H}}$. The following map is quasi-Lipschitz: $$\begin{aligned} F: {{\mathcal F}}(\Sigma') & \to {{\mathcal F}}(\Sigma) \\ T & \mapsto \omega_{H} \circ f \circ \omega^{-1}_{\widehat{H}}(T) \end{aligned}$$ For all $\psi' \in {\ensuremath{\mathrm{Mod}}}(\Sigma')$ we have $ F(\psi'( \widehat{H})) = f (\psi')(H)$. It is immediate to see that $f$ is 1-Lipschitz with respect to the Humphries generators of ${\ensuremath{\mathrm{Mod}}}(\Sigma)$. The assertion follows by composition with the two quasi-isometries. We remark that the quasi-Lipschitz constants of $F$ depend on the diameter of ${\mathcal M \mathcal F}(\Sigma)$ and a choice of generators for ${\ensuremath{\mathrm{Mod}}}(\Sigma)$ and ${\ensuremath{\mathrm{Mod}}}(\Sigma') $. \[2N\] For every $\psi \in {\ensuremath{\mathrm{Mod}}}(\Sigma)$ there exists $\phi' \in f^{-1}(\psi) \subset {\ensuremath{\mathrm{Mod}}}(\Sigma')$ such that $$d'( \widehat{\psi H}, \phi' (\widehat{H})) \leq 2 \tilde{\kappa}.$$ Fix $\psi \in {\ensuremath{\mathrm{Mod}}}(\Sigma)$ and choose $\psi' \in {\ensuremath{\mathrm{Mod}}}(\Sigma')$ such that $f (\psi')= \psi$, that is, $\psi$ and $\psi'$ are homeomorphisms of $\Sigma$ isotopic rel $P$. Let us first compare $\widehat{\psi (H)}$ and $\psi'(\widehat{H})$. To construct $\psi'(\widehat{H})$ we proceed as follows.
Set $\Sigma \setminus \mathcal{G}_M(H) = \bigcup \Delta_i$ where $\Delta_i$ is a triangle. We assume $p_{n+1} \in \Delta_1$, so that $\widehat{H}$ is obtained from $H$ by coning $\Delta_1$, and $\psi'(\widehat H)$ is obtained by coning $\psi'(\Delta_1)$. To construct $\widehat{\psi(H)}$ we proceed as follows. Set $\Sigma \setminus \mathcal{G}_M(\psi(H)) = \bigcup \Delta_i'$, where $\Delta'_i$ is a triangle. Since $\psi' \in f^{-1}(\psi)$, we can assume (up to reordering) that $\Delta_i' $ is isotopic to $\psi'(\Delta_i)$ relative to $P$. We have two cases: 1. $p_{n+1} \in \Delta_1'$; 2. $p_{n+1} \not \in \Delta_1'$. In case (1), we can glue the homeomorphisms $\Delta_i' \to \psi'( \Delta_i)$ in order to construct a homeomorphism $\theta: \Sigma \to \Sigma $ that also fixes $p_{n+1}$. By construction, $\theta$ is an element of ${\ensuremath{\mathrm{Mod}}}(\Sigma')$ isotopic to the identity rel $P$, that is, $\theta$ belongs to the kernel of the forgetful map $f$, and we obtain $\theta(\widehat{\psi H}) = \psi'( \widehat{H})$. Consider the mapping class $\phi' = \theta^{-1} \circ \psi' \in {\ensuremath{\mathrm{Mod}}}(\Sigma')$; by construction $$f(\theta^{-1} \circ \psi') = f(\psi') = \psi \mbox{ and } \widehat{\psi H} = \phi'( \widehat{H}),$$ and we are done. In case (2), assume $p_{n+1} \in \Delta_j'$ with $j \neq 1$. We will now see that using at most $2 \tilde{\kappa}$ flips we can move the cone on $p_{n+1}$ inside a triangle isotopic to $\Delta_1'$. More precisely, a sequence of two flips as in Figure \[flips-cone-2\] moves the cone into a triangle adjacent to $\Delta_j'$. Note that this sequence of flips does not change the isotopy class relative to $P$ of the arcs not connected to $p_{n+1}$.
The final triangulation $T_1$ we obtain has the following properties: - $T_1$ has a cone on $p_{n+1}$; - $T_1$ agrees with $\widehat{\psi H}$ outside the quadrilateral in Figure \[flips-cone-2\]; - the arcs of $\widehat{\psi H}$ and $T_1$ that are not connected to $p_{n+1}$ are pairwise isotopic relative to $P$. If the triangle of $T_1$ containing the cone on $p_{n+1}$ is isotopic to $\Delta_1'$ relative to $P$, then we can proceed as in case (1). Indeed, we construct a homeomorphism $\theta: \Sigma' \to \Sigma'$ such that $\theta(T_1) = \psi'(\widehat{H})$ and $\theta$ is isotopic to the identity relative to $P$. We then set $\phi'= \theta^{-1} \circ \psi'$, and we have $T_1 = \phi'(\widehat{H})$. Otherwise, if the triangle of $T_1$ containing $p_{n+1}$ is not isotopic to $\Delta_1'$, we keep on performing sequences of flips as in Figure \[flips-cone-2\] in order to move the cone on $p_{n+1}$. After at most $\tilde{\kappa}$ sequences of flips, we get to a triangulation $T_{\tilde{\kappa}}$ whose cone on $p_{n+1}$ lies inside a triangle isotopic to $\Delta_1'$. Arguing as above, we get a homeomorphism $\theta: \Sigma' \to \Sigma'$, isotopic relative to $P$ to the identity, such that $\theta(T_{\tilde{\kappa}}) = \psi'(\widehat{H})$. We then set $\phi'= \theta^{-1} \circ \psi'$, and we have $T_{\tilde{\kappa}} = \phi'(\widehat{H})$. We conclude as follows: $$\begin{aligned} d'( \widehat{\psi (H)}, \phi'( \widehat{H})) & \leq d'( \widehat{\psi (H)}, T_{\tilde{\kappa}}) + d'( T_{\tilde{\kappa}} , \phi'( \widehat{H})) \\ & \leq 2 \cdot \tilde{\kappa}. \end{aligned}$$ $\mathrm{cone}_M: {{\mathcal F}}(\Sigma) \to {{\mathcal F}}(\Sigma')$ is a quasi-isometric embedding. By Lemma \[cone-lemma\], ${\ensuremath{\mathrm{cone}}}_M$ is 2-Lipschitz. We will now prove that there exist universal constants $A', B'>0$ such that for any $S,T \in {{\mathcal F}}(\Sigma)$ we have $d(S, T) \leq A' \cdot d'(\widehat{S}, \widehat{T}) + B'$.
Set $R = {\ensuremath{\mathrm{diam}}}{\mathcal M \mathcal F}(\Sigma)$. Since the orbit of $H$ under ${\ensuremath{\mathrm{Mod}}}(\Sigma)$ is $R$-dense, we can find $\psi, \phi \in {\ensuremath{\mathrm{Mod}}}(\Sigma)$ such that $$\label{eq:R} d(S, \psi H) \leq R ~~ \mbox{ and } ~~ d(T, \phi H) \leq R.$$ By Lemma \[cone-lemma\] we have: $$\label{eq:2R} d'(\widehat{S}, \widehat{\psi H}) \leq 2R~~ \mbox{ and } ~~ d'(\widehat{T}, \widehat{\phi H}) \leq 2R.$$ Finally, fix $\psi', \phi' \in {\ensuremath{\mathrm{Mod}}}(\Sigma')$ as in Lemma \[2N\]. We use the notation $a \prec b$ to mean $a \leq k b + h$ for constants $k$ and $h$. With this notation: $$\begin{aligned} d(S, T) & \leq d(\psi H, \phi H) + 2R && \mbox{by \ref{eq:R}} \\ & = d( F(\psi'( \widehat H)), F(\phi' (\widehat H))) + 2R \\ & \prec d'(\psi' (\widehat H) , \phi'( \widehat H)) && \mbox{ by Remark \ref{F-lipschitz}} \\ & \prec d'(\psi'( \widehat H), \widehat{\psi H}) + d'(\widehat{\psi H}, \widehat{\phi H}) + d'(\phi'( \widehat H), \widehat{\phi H}) \\ & \prec d'(\widehat{\psi H}, \widehat{\phi H}) + 4\cdot \tilde{\kappa} && \mbox{by Lemma \ref{2N} } \\ & \prec d'(\widehat{\psi H} , \widehat{S} ) + d'(\widehat{S} , \widehat{T} ) + d'(\widehat{\phi H} , \widehat{T} ) \\ & \prec d'(\widehat{S} , \widehat{T} ) + 4R && \mbox{ by Equation \ref{eq:2R}} \end{aligned}$$ Since $F$ is $(A,B)$ quasi-Lipschitz, keeping track of the constants in the above inequalities we obtain: $$d(S, T) \leq A' \cdot d'(\widehat{S} , \widehat{T} ) + B',$$ where $A' = A$ and $B' = 4 A R + 4 A \tilde{\kappa} + B + 2R$. The following result was already proved by Mosher [@Mosher2] using a different method, and stated by Rafi-Schleimer [@RS] using the marking graph as a large scale model for ${\ensuremath{\mathrm{Mod}}}(\Sigma)$. There is a quasi-isometric embedding ${\ensuremath{\mathrm{Mod}}}(\Sigma) \hookrightarrow {\ensuremath{\mathrm{Mod}}}(\Sigma')$.
Consider the following commutative diagram: $$\xymatrix{ {{\mathcal F}}(\Sigma) \ar[r]^{\mathrm{cone}_M} &{{\mathcal F}}(\Sigma') \\ {\ensuremath{\mathrm{Mod}}}(\Sigma) \ar[u]^{{\omega_H}_|} \ar[r] & {\ensuremath{\mathrm{Mod}}}(\Sigma') \ar[u]_{\omega_{\widehat{H}}} }$$ By Lemma \[mcg-flipgraph\] both $\omega_H$ and $\omega_{\widehat{H}}$ are quasi-isometries. The assertion then follows from the above theorem. Convexity of strata and applications ==================================== As we saw previously, for any multiarc $A$ the stratum ${{\mathcal F}}_A$ is connected. We denote by $d_A$ the shortest path distance on ${{\mathcal F}}_A$. In this section we prove that the natural inclusion $({{\mathcal F}}_A, d_A) \hookrightarrow ({{\mathcal F}},d)$ is an isometric embedding. Furthermore, we prove that ${{\mathcal F}}_A$ is strongly convex in ${{\mathcal F}}(\Sigma)$. The main ingredient in our proof is a 1-Lipschitz retraction of ${{\mathcal F}}(\Sigma)$ onto ${{\mathcal F}}_A$. The projection theorem ---------------------- Let $a$ and $t$ be two arcs. Choose an orientation on $a$, denote by $a^+$ the oriented arc, and let $\mathrm{push}_{a^+}(t)$ be the multiarc defined as follows: - if $i(a, t) =0$ then $\mathrm{push}_{a^+}(t) = t$; - if $i(a, t) \neq 0$ then $\mathrm{push}_{a^+}(t)$ is the multiarc obtained by “combing” $t$ following the orientation of $a$ as in Figure \[fig:combing\]. Each arc in $\mathrm{push}_{a^+}(t)$ (provided $i(a, t) \neq 0$) has at least one endpoint that coincides with the final endpoint of $a$. ![Combing along an oriented arc[]{data-label="fig:combing"}](Figures/Combing.pdf){width="9cm"} The following lemma follows immediately from the above construction. \[geo-strata:lemma\] If $s$ and $t$ are arcs, then $ i(\mathrm{push}_{a^+}(s), \mathrm{push}_{a^+}(t)) \leq i(s,t)$.
If $T=(t_1, \ldots, t_\kappa)$ is a triangulation of $\Sigma$, we denote by $\mathrm{push}_{a^+}(T)$ the multiarc obtained by collecting the isotopy classes of all the arcs $\mathrm{push}_{a^+}(t_i)$: $\mathrm{push}_{a^+}(T) = [\mathrm{push}_{a^+}(t_1), \ldots , \mathrm{push}_{a^+}(t_\kappa)]$. We remark that the set $\{\mathrm{push}_{a^+}(t_i)\}$ may contain isotopic arcs. \[projection1\] The map $$\begin{aligned} \pi_{a^+} : {{\mathcal F}}(\Sigma) & \to {{\mathcal F}}_a \\ T & \mapsto (\mathrm{push}_{a^+}(T), a) \end{aligned}$$ is a 1-Lipschitz retraction onto $({{\mathcal F}}_a, d_a)$. We first prove that $\pi_{a^+}(T)$ is also a triangulation of $\Sigma$. If $a$ is one of the arcs in $T$, the assertion is trivial. Suppose $a\cap T \neq \emptyset$. We consider a parametrization $a:[0,1]\to \Sigma$ following the orientation of $a^{+}$. We suppose that $a$ intersects $T$ minimally and all intersections are transversal, and we denote by $$\tau_0=0<\tau_1<\hdots <\tau_{N+1}=1$$ the values of $\tau$ for which $a(\tau) \in T$. Note that $N = i(a,T)$. For each $\tau' \in [0,1]$, we consider a decomposition $D_{\tau'}$ of $\Sigma$ constructed as follows. To begin, $D_{\tau'}$ contains all arcs of $T$ that do not cross $a|_{\tau=0}^{\tau'}$, contains all vertices of $T$ and has one extra vertex $a(\tau')$. Furthermore, it also contains the arc $a|_{\tau=0}^{\tau'}$. We add arcs iteratively as follows. For each $\tau_i$, $i=1,\hdots, N$, the point $a(\tau_i)$ will cut a preexisting arc, say $b_i$, into two subarcs $b'_{i}$ and $b''_{i}$. We add these to the decomposition and they continue to belong to the decomposition for $\tau'>\tau_i$ by concatenating them with the arc $a|_{\tau=\tau_i}^{\tau'}$ in the obvious way. At parameter $\tau'$ we denote the resulting arcs $b'_{i}(\tau')$ and $b''_{i}(\tau')$. $D_{\tau'}$ is the union of all these arcs up to isotopy fixing the vertices (so any isotopy class is only counted once).
We want to show that $D_{\tau_{N+1}}$ is a triangulation of $\Sigma$ with the same vertex set as $T$. Before showing this we claim that for $0\leq i<N+1$, $D_{\tau_i}$ is a set of arcs decomposing $\Sigma$ into triangles and into one quadrilateral which is simply a triangle with an additional vertex $a(\tau_i)$. We prove our claim by analyzing the decomposition as $\tau$ varies. The key point is that the decomposition only changes at the values $\tau_i$. For $i=1$, all we have added is an arc and a point that split the first triangle traversed by $a$ into two triangles. As we have added a vertex in $a(\tau_1)$, the following triangle of $T$ traversed by $a$ is now a quadrilateral (see Figure \[fig:firststep\]). Now suppose by induction that at parameter $\tau_i$ with $i<N$ the decomposition $D_{\tau_i}$ is as claimed and we now analyze $D_{\tau_{i+1}}$. To obtain $D_{\tau_{i+1}}$ from $D_{\tau_i}$ we have a continuous family $D_\tau$ with $\tau \in [\tau_{i},\tau_{i+1}]$. Note that $a|_{\tau=\tau_i}^{\tau_{i+1}}$ is a simple path crossing the only quadrilateral of $\Sigma \setminus D_{\tau_i}$. For $\tau' \in [\tau_{i},\tau_{i+1}[$, $D_{\tau'}$ is just obtained by pushing $a(\tau')$ along the path $a|_{\tau=\tau_i}^{\tau'}$ and thus (up to homeomorphism) is a carbon copy of $D_{\tau_i}$. We need to analyse what happens at $\tau = \tau_{i+1}$. Two of the arcs of the quadrilateral become one and as such the quadrilateral collapses to a triangle. More precisely, the point $a(\tau_{i+1})$ lies on an arc of $D_{\tau_i}$ so divides this arc into two arcs in $D_{\tau_{i+1}}$; adding this vertex turns the “next” triangle into a quadrilateral. This proves the general step. The above process is illustrated in Figure \[fig:inductionstep\]. What remains to be seen is the final step, when $\tau\in [\tau_N,\tau_{N+1}]$.
This final step is very similar to what happens before, with the notable difference that the point $a(\tau_{N+1})$ was already a vertex of the decompositions $D_\tau$. So instead of splitting a previous arc into two parts, the quadrilateral containing $a(\tau)$ for $\tau\in [\tau_N,\tau_{N+1}[$ collapses completely, leaving only triangles in $D_{\tau_{N+1}}$ (see Figure \[fig:finalstep\]). This concludes the proof that $\pi_{a^+}(T)$ is a triangulation. It is straightforward to see that $\pi_{a^+}$ is a retraction. In fact, the restriction of $\pi_{a^+}$ to ${{\mathcal F}}_a$ is the identity and $\pi_{a^+}$ is onto by construction. Let us now prove that $\pi_{a^+}$ is 1-Lipschitz. Recall that $T_1, T_2 \in {{\mathcal F}}(\Sigma)$ differ by a flip if and only if $ i(T_1, T_2)=1$. By Lemma \[geo-strata:lemma\] we have $ i(\pi_{a^+}(T_1),\pi_{a^+}(T_2))\leq 1 $. We deduce that either $\pi_{a^+}(T_1)$ and $\pi_{a^+}(T_2)$ also differ by a flip or they coincide. Let $T$ and $T'$ be two vertices in ${{\mathcal F}}(\Sigma)$ and let $\gamma: T=T_0 \ldots T_m=T'$ be a geodesic path in ${{\mathcal F}}(\Sigma)$ joining them. By the above argument, $\pi_{a^+}(\gamma): \pi_{a^+}(T) \ldots \pi_{a^+}(T')$ is a path in ${{\mathcal F}}_a$ of length at most $m$, so $d(\pi_{a^+}(T), \pi_{a^+}(T')) \leq d(T,T')$ and $\pi_{a^+}$ is 1-Lipschitz. \[pre-convexity\] Let $A$ be a multiarc and $T$ be a triangulation. If there exists $t \in T$ such that $ i(t,A) = 0$, then every geodesic path from $T$ to ${{\mathcal F}}_A$ is contained in ${{\mathcal F}}_t$. Let $\gamma: T=T_0 \ldots T_n$ be a shortest path from $T_0$ to ${{\mathcal F}}_A$. We shall prove that for all $i$, $T_i \in {{\mathcal F}}_t$. We begin by choosing an orientation on $t$. Observe that $\pi_{t^+}(T_n) \in {{\mathcal F}}_A$ and $T_0 = \pi_{t^+}(T_0)$ by construction, so $\pi_{t^+}(\gamma)$ is also a path from $T_0$ to ${{\mathcal F}}_A$. We now argue by contradiction.
Let $i \geq 0$ be the smallest integer such that $t \in T_i$ and $t \not \in T_{i+1}$ (that is, the arc $t$ is flipped). Necessarily we have $i(t, T_{i+1}) = 1$ and by construction $$\pi_{t^+}(T_i) = \pi_{t^+}(T_{i+1}) = T_i$$ so the length of $\pi_{t^+}(\gamma)$ is at most $n-1$. This implies that $\pi_{t^+}(\gamma)$ is shorter than $\gamma$, in contradiction with the assumption that $\gamma$ is geodesic. \[convexity-1\] For every arc $a$, the stratum ${{\mathcal F}}_a$ is strongly convex. Let $T_0$ and $T_m$ be two vertices in ${{\mathcal F}}_a$ and let $\gamma: T_0 \ldots T_m$ be a geodesic path in ${{\mathcal F}}(\Sigma)$ joining them. By Lemma \[projection1\] $\pi_{a^+}(\gamma) $ is a path in ${{\mathcal F}}_a$ with endpoints $T_0$ and $T_m$ and we have $d_a(T_0, T_m) \leq m = d(T_0,T_m) $. It follows that the inclusion ${{\mathcal F}}_a \hookrightarrow {{\mathcal F}}(\Sigma)$ is an isometric embedding. The strong convexity of ${{\mathcal F}}_a$ follows from Lemma \[pre-convexity\] with $A = a$ and $t=a$: for all $i=0, \ldots, m$ we have $T_i \in {{\mathcal F}}_a$. \[projection2\] Let $A^\sigma= (a_1^+, \ldots, a_m^+)$ be a multiarc whose $m$ components are enumerated and oriented. The map $\pi_{A^\sigma} = \pi_{a_m^+} \circ \ldots \circ \pi_{a_1^+}: {{\mathcal F}}(\Sigma) \to {{\mathcal F}}_A$ is well-defined and a 1-Lipschitz retraction. Since the arcs in $A$ are all disjoint, by Lemma \[pre-convexity\] we have $$\pi_{A^\sigma}({{\mathcal F}}(\Sigma)) = {{\mathcal F}}_{a_1} \cap \ldots \cap {{\mathcal F}}_{a_m} = {{\mathcal F}}_A.$$ By Lemma \[projection1\] the map $\pi_{A^\sigma}$ is 1-Lipschitz and a retraction. We remark that the map $\pi_{A^\sigma}$ does depend on the choice of the orientation and enumeration of the arcs in $A$. We will study this dependence later. \[convexity-2\] For every multiarc $A$, the stratum ${{\mathcal F}}_A$ is strongly convex. This is a direct corollary of Theorem \[convexity-1\].
Note that ${{\mathcal F}}_A = \bigcap_{a \in A} {{\mathcal F}}_a$ and the intersection of strongly convex subspaces is strongly convex. Applications {#ss:applications} ------------ We now focus on some applications of the above results and in particular of Theorem \[convexity-2\]. ### Projections and distances We begin by looking at some immediate consequences on distances and projection distances to strata. The following proposition is essentially the definition of distance on ${{\mathcal F}}_A$ combined with Theorem \[convexity-2\]. \[sep-multiarc\] Assume that $A$ is a multiarc such that ${\Sigma \setminus A} = \bigcup_{i=1}^h \Sigma_i$ where each $\Sigma_i$ is a connected surface with boundary. Denote by $d_i$ the distance on ${{\mathcal F}}(\Sigma_i)$. For every $T \in {{\mathcal F}}_A$ denote by $T_i$ the triangulation of $\Sigma_i$ induced by $T$. Then the map $$\begin{aligned} {{\mathcal F}}_A &\longrightarrow ~{{\mathcal F}}(\Sigma_1) \times \ldots \times {{\mathcal F}}(\Sigma_h) \\ T & \mapsto (T_1 , \ldots, T_h) \end{aligned}$$ is an isometry between $({{\mathcal F}}_A, d)$ and $({{\mathcal F}}(\Sigma_1) \times \ldots \times {{\mathcal F}}(\Sigma_h)~,~ d_1 + \ldots + d_h )$. By definition of ${{\mathcal F}}_A$, the map is an isometry on $({{\mathcal F}}_A, d_A)$. By Theorem \[projection2\], $d = d_A$ on ${{\mathcal F}}_A$ and the assertion follows. Let $A$ be a multiarc. For every choice $\sigma$ of enumeration and orientation of the arcs in $A$, we have: $d(\pi_{A^\sigma}(T), \pi_{A^\sigma}(S)) \leq i(T,S)$. It is a straightforward application of Lemmas \[dist-up\] and \[geo-strata:lemma\]. \[sigma\] Let $A$ be a multiarc. For every choice $\sigma$ of enumeration and orientation of the arcs in $A$, we have $d(T, {{\mathcal F}}_A) \leq d(T, \pi_{A^\sigma}(T)) \leq 2\cdot d(T, {{\mathcal F}}_A) .$ Let $S$ be a triangulation in ${{\mathcal F}}_A$ at minimal distance from $T$, so that $d(T,S) = d(T,{{\mathcal F}}_A)$.
By Theorem \[projection2\], $\pi_{A^\sigma}(S)=S$; it follows that: $$\begin{aligned} d(T, \pi_{A^\sigma}(T)) &\leq d(T,S) + d(S, \pi_{A^\sigma}(T)) \\ & \leq d(T,S) + d(\pi_{A^\sigma}(S), \pi_{A^\sigma}(T)) \\ & \leq d(T,S) + d(S,T) \\ & = 2 d(T, {{\mathcal F}}_A).\end{aligned}$$ Let $A$ be a multiarc. For every choice $\sigma, \epsilon$ of enumerations and orientations of the arcs in $A$, we have $d(\pi_{A^\sigma}(T), \pi_{A^\epsilon}(T)) \leq d(T, {{\mathcal F}}_A)$. It follows immediately from Proposition \[sigma\]. The next consequence will use a result by Aramayona, Koberda and the second author about simplicial maps between flip graphs. To state the result we require the following notation: we say that a surface $\Sigma$ is exceptional if it is an essential subsurface of (and possibly equal to) a torus with at most two marked points, or a sphere with at most four marked points. In [@AKP], it is proved that, for surfaces $\Sigma, \Sigma'$ with $\Sigma$ non-exceptional, all injective simplicial maps $$\phi: {{\mathcal F}}(\Sigma) \to {{\mathcal F}}(\Sigma')$$ come from embeddings $\Sigma\to \Sigma'$ (that is, $\Sigma$ is homeomorphic to a subsurface of $\Sigma'$). Simplicial maps can obviously be constructed this way; what is more surprising is that, provided the base surface is complicated enough, this is the only way such maps appear. Together with Theorem \[convexity-2\], the following is then immediate. Suppose $\Sigma$ is non-exceptional, and let ${{\mathcal F}}(\Sigma) \to {{\mathcal F}}(\Sigma')$ be an injective simplicial map. Then ${{\mathcal F}}(\Sigma)$ is strongly convex inside $ {{\mathcal F}}(\Sigma')$. ### On the large scale geometry of the mapping class group We now turn our attention to the large scale geometry of the mapping class group. \[stabilizers\] Let $A$ be a multiarc and ${\ensuremath{\mathrm{Stab}}}(A)$ be the subgroup of ${\ensuremath{\mathrm{Mod}}}(\Sigma)$ that fixes the isotopy class of each arc in $A$.
Then ${\ensuremath{\mathrm{Stab}}}(A)$ has a finite index subgroup isomorphic to ${\ensuremath{\mathrm{Mod}}}(\Sigma \setminus A)$. Assume that $A$ has $m$ connected components. Fix an orientation on each arc of $A$. It is immediate that the subgroup ${\ensuremath{\mathrm{Stab}}}^+ (A) < {\ensuremath{\mathrm{Stab}}}(A)$ consisting of the mapping classes that also fix the orientation of every arc in $A$ is isomorphic to ${\ensuremath{\mathrm{Mod}}}( \Sigma \setminus A)$, that is, the mapping class group of the surface obtained by cutting $\Sigma$ along $A$. The assertion follows from the short exact sequence: $$1 \to {\ensuremath{\mathrm{Stab}}}^+(A) \to {\ensuremath{\mathrm{Stab}}}(A) \to \mathbb Z_2^m \to 1 .$$ We can now prove the following. \[proper\] For every vertex $T \in {{\mathcal F}}_A$, there is a commutative diagram: $$\xymatrix{ {{\mathcal F}}_A \ar@{^{(}->}[r] &{{\mathcal F}}(\Sigma) \\ \mathrm{Stab}(A) \ar[u]^{{\omega_T}_|} \ar@{^{(}->}[r] & {\ensuremath{\mathrm{Mod}}}(\Sigma) \ar[u]_{\omega_T} }$$ where the inclusion ${{\mathcal F}}_A \hookrightarrow {{\mathcal F}}(\Sigma)$ is an isometry and the orbit map $\omega_T: {\ensuremath{\mathrm{Mod}}}(\Sigma) \to {{\mathcal F}}(\Sigma)$ restricts to a quasi-isometry ${\omega_T}_|: {\ensuremath{\mathrm{Stab}}}(A) \to {{\mathcal F}}_A $. Moreover, the inclusion $\mathrm{Stab}(A) \hookrightarrow {\ensuremath{\mathrm{Mod}}}(\Sigma)$ is a quasi-isometric embedding. The inclusion ${{\mathcal F}}_A \hookrightarrow {{\mathcal F}}(\Sigma)$ is an isometry by Theorem \[convexity-2\]. By Proposition \[sep-multiarc\] ${{\mathcal F}}_A$ is isomorphic and isometric to ${{\mathcal F}}(\Sigma\setminus A)$. Since the action of ${\ensuremath{\mathrm{Mod}}}(\Sigma\setminus A)$ on ${{\mathcal F}}(\Sigma\setminus A)$ is cocompact, so is the action of ${\ensuremath{\mathrm{Stab}}}(A)$ on ${{\mathcal F}}_A$, by Lemma \[stabilizers\].
By the Švarc-Milnor lemma the orbit map $ {\ensuremath{\mathrm{Stab}}}(A) \ni \psi \mapsto \psi T \in {{\mathcal F}}_A$ is a quasi-isometry. By composition, the inclusion ${\ensuremath{\mathrm{Stab}}}(A) \hookrightarrow {\ensuremath{\mathrm{Mod}}}(\Sigma)$ is a quasi-isometric embedding. The diameters of the modular flip graphs ======================================== The goal of this section is to prove upper and lower bounds on the diameters of modular flip graphs in terms of the topology of the surface (namely Theorem \[thm:diameters\] from the introduction). Let $\Sigma$ be a surface of genus $g$ with $n$ labelled points. We assume $g\geq 1$ and $n \geq 2$ (for the case $n=1$ see Theorem \[thm:onepunctureupper\] and Remark \[rem:onepuncture\], for the case $g=0$ see Theorem \[thm:genuszeroupper\]). The case where the points are unlabelled is slightly easier and it will also be treated separately; see Remark \[rem:unmarked\]. We begin with a general observation which allows us to break bounds on ${\mathcal M \mathcal F}(\Sigma)$ into different parts. The idea is to work with the punctures on one side and the genus on the other. To do this we consider triangulations that contain an arc which separates the genus from the punctures: more precisely, an arc $a$ which forms a loop based in a puncture and such that $\Sigma \setminus a = \Omega \cup \Gamma$, where $\Omega$ is a disk with $n-1$ punctures and one labelled point on the boundary and $\Gamma$ is of genus $g$ with a boundary component with a single marked point. Such a loop we call [*puncture separating*]{}. For any choice of puncture on $\Sigma$, it is clear that (infinitely many) such loops based in this point exist, but up to homeomorphism there is only one such loop (see Figure \[fig:cutpunctures\]). From this we can make the following observation: any two triangulations which are distinct up to homeomorphism and both contain a puncture separating loop must be either distinct on $\Gamma$ or on $\Omega$.
As such: $${{\rm card}}\left( {\mathcal M \mathcal F}(\Sigma) \right) > {{\rm card}}({\mathcal M \mathcal F}(\Gamma))\,\, {{\rm card}}({\mathcal M \mathcal F}(\Omega)).$$\[eq:card\] We will use this for our lower bounds in Section \[ss:lower\]. For our upper bounds the following lemma will allow us to introduce a puncture separating arc in a minimal amount of flips. \[lem:separatearc\] For any $T\in {\mathcal M \mathcal F}(\Sigma)$ and any marked point $p$ of $\Sigma$, there exists a puncture separating loop $a$ based in $p$ with $$i(a, T) \leq 2 (\kappa - n + 1).$$ We think of $T$ as a graph embedded on $\Sigma$ and consider a spanning tree of this graph. The boundary of a regular neighborhood of this tree is a simple closed curve $\gamma$ which satisfies $$i(\gamma,T) \leq 2 ( \kappa - (n-1))$$ as it intersects each of the edges that do not belong to the tree exactly twice, and the tree has $n-1$ edges (see Figure \[fig:curve\]). (The above inequality is in fact an equality but it is the inequality that we need.) From $\gamma$ and given a marked point $p$, we shall construct an arc as follows: as $\gamma$ surrounds all punctures, it must pass through a triangle that has $p$ as a vertex. We consider a simple arc $c$ in the triangle between $\gamma$ and $p$. Choosing an orientation on $c$ and $\gamma$, the concatenation $c \gamma c^{-1}$ gives the isotopy class of an arc, which is the arc $a$ we are looking for. Notice that by construction it intersects $T$ in at most as many points as $\gamma$ and we have $$i(a, T) \leq 2 ( \kappa - n +1)$$ as desired. Using this lemma and the upper bound on flip distance in terms of intersection number, we can establish the following.
\[lem:globalupper\] For $\Sigma$, $\Omega$ and $\Gamma$ as above: $${\ensuremath{\mathrm{diam}}}\left( {\mathcal M \mathcal F}(\Sigma) \right) \leq {\ensuremath{\mathrm{diam}}}\left( {\mathcal M \mathcal F}(\Omega) \right)+ {\ensuremath{\mathrm{diam}}}\left( {\mathcal M \mathcal F}(\Gamma) \right)+ 2(\kappa-n +1).$$ The above inequality will allow us to treat the upper bounds by treating ${\ensuremath{\mathrm{diam}}}\left( {\mathcal M \mathcal F}(\Gamma)\right)$ and ${\ensuremath{\mathrm{diam}}}\left( {\mathcal M \mathcal F}(\Omega) \right)$ separately. We begin with the former. Upper bounds in terms of genus ------------------------------- As above, $\Gamma$ is a genus $g\geq 1$ surface with a single boundary curve and a single marked point on the boundary. Our goal here is to show the following result. \[thm:uppergenus\] The diameter of the modular flip graph of $\Gamma$ satisfies $${\ensuremath{\mathrm{diam}}}\left( {\mathcal M \mathcal F}(\Gamma) \right) < A g \log (g+1)$$ where $A$ can be taken to be $1000$. Before proving the theorem we will need two topological lemmas. \[lem:uppergenus1\] Let $T$ be a triangulation of $\Lambda$, a genus $g\geq 1$ surface with a single boundary curve and all $k$ marked points on the boundary. Then there exists $a \in T$ such that $\Lambda \setminus a$ is connected and of genus $g-1$. Observe that, for an arc $a\in T$, the two conditions that $\Lambda \setminus a$ is connected and that $\Lambda \setminus a$ has genus $g-1$ are equivalent (cutting along a non-separating arc reduces the genus by exactly one, while cutting along a separating arc does not reduce genus). We now claim that $T$ always contains a non-separating arc. As $\Lambda \setminus T$ is a collection of triangles, it is of genus $0$. Now as $g(\Lambda) \geq 1$, one of the arcs of $T$ must be non-separating, otherwise $\Lambda \setminus T$ would still have positive genus. \[lem:uppergenus2\] Let $T$ be a triangulation of $\Lambda$, a genus $g\geq 0$ surface with two boundary curves, both with marked points, and all marked points on the boundary.
Then there exists $a \in T$ such that $\Lambda \setminus a$ has only one boundary component. All marked points are on the boundary so it is impossible to triangulate $\Lambda$ without a triangle that has vertices on both boundary components. To see this we can argue by contradiction. If this is not the case, then we can split the triangles into two non-empty groups depending on whether they have all of their vertices on one or the other boundary curve. But as the surface is connected, there must be a triangle of the first group which shares an arc with a triangle of the second. Thus, they must also share vertices, a contradiction. We now proceed to the proof of Theorem \[thm:uppergenus\]. Let $T$ be any triangulation of $\Gamma$. Denote by $a_0$ the arc that forms the boundary of $\Gamma$. The first step will be to divide the surface along arcs so that the genus on the two sides is equal (or close to equal). By Lemma \[lem:uppergenus1\], there is an arc $a_1 \in T$ such that $\Gamma \setminus a_1$ is of genus $g-1$. The resulting surface $\Gamma^{1}:=\Gamma \setminus a_1$ now has two boundary components, one consisting of two arcs and the other of a single arc. Now by Lemma \[lem:uppergenus2\], there exists $a_2\in T$ such that $\Gamma^{2}:=\Gamma^{1} \setminus a_2$ has a single boundary curve consisting of $5$ arcs. In short, we have found two arcs of $T$ such that cutting along these arcs produces a surface of genus $g-1$ with a single boundary component with $4$ more arcs than the original surface $\Gamma$. We can iterate the above process a total of $\floor{ \frac{g}{2}}$ times to obtain a collection of $ 2\floor{ \frac{g}{2}}$ arcs such that cutting along these arcs results in a genus $g - \floor{ \frac{g}{2}}$ surface $\overline {\Gamma}$ with a single boundary curve formed by $1 + 4 \floor{ \frac{g}{2}} $ arcs. One of these is $a_0$. Denote by $p_0$ and $p_0'$ the two vertices of $a_0$ on $\overline {\Gamma}$.
We denote by $b$ the unique loop based in $p_0$ homotopic to the boundary of $\overline {\Gamma}$ and by $b'$ the arc from $p_0'$ to $p_0$ which forms a triangle with $a_0$ and $b$ (see Figure \[fig:thmgenus1\]). Both $b$ and $b'$ have a nice property: they do not intersect $T$ too much. More precisely, as they are parallel to the boundary of $\overline{\Gamma}$, which is formed by arcs of $T$, they intersect each of the remaining arcs at most twice. Thus $$i(x, T) \leq 2 (\kappa(\Gamma) - 2 \left \lfloor \frac{g}{2} \right \rfloor ), \,\,x=b,b'.$$ Now $\kappa(\Gamma) = 6g -2$ so we can deduce that $$i(b,T) + i(b',T) \leq 20 g - 4.$$ Now using the upper bound on the distance to a stratum in terms of the intersection number, we can introduce the arcs $b$ and $b'$ in at most $20 g - 4$ flips. The reason one might want to do this is that these arcs separate the surface into three canonical pieces: a triangle containing $a_0$ and two surfaces with a single boundary curve and of genus $\floor{\frac{g}{2}}$ and $g - \floor{\frac{g}{2}}$. As such, up to homeomorphism, the pair of arcs $b$ and $b'$ is unique (see Figure \[fig:thmgenus2\]). With this in hand, we will prove the bound by induction. We begin by checking the result for $g=1$. Here we need to check that the diameter is at most $1000 \log (2) > 693$; in fact it is at most $5$, as there are at most $5$ different possible triangulations. Indeed, such a triangulated surface is obtained by pasting four sides of a triangulated $5$-gon together. There are $C_3= 5$ different possible triangulations of the $5$-gon and only one way of pasting the $5$-gon together to get a one-holed torus. We now suppose $g(\Gamma)\geq 2$.
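As a sanity check of the base case, the count $C_3 = 5$ can be verified by brute force. The sketch below (Python, purely illustrative and not part of the argument) encodes triangulations of a convex $5$-gon, with vertices labelled $0,\dots,4$, as sets of pairwise non-crossing diagonals, and builds the corresponding flip graph: it finds exactly $5$ triangulations joined by $5$ flips, i.e. a $5$-cycle.

```python
from itertools import combinations

def crosses(d1, d2):
    # Two diagonals of a convex polygon cross iff their endpoints strictly interleave.
    (a, b), (c, d) = sorted(d1), sorted(d2)
    return (a < c < b < d) or (c < a < d < b)

def triangulations(m):
    # Triangulations of a convex m-gon = sets of m - 3 pairwise
    # non-crossing diagonals (vertices labelled 0, ..., m-1).
    diags = [(i, j) for i, j in combinations(range(m), 2)
             if (j - i) % m not in (1, m - 1)]  # exclude the polygon sides
    return [frozenset(t) for t in combinations(diags, m - 3)
            if all(not crosses(d1, d2) for d1, d2 in combinations(t, 2))]

tris = triangulations(5)
# A flip exchanges one diagonal for the other diagonal of its quadrilateral,
# so adjacent triangulations are exactly those differing in a single diagonal.
edges = [(s, t) for s, t in combinations(tris, 2) if len(s ^ t) == 2]
print(len(tris), len(edges))  # 5 triangulations, 5 flips: a 5-cycle
```

The same enumeration applied to larger values of $m$ recovers the Catalan numbers $C_{m-2}$.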
Given two different triangulations $S$ and $T$ in ${\mathcal M \mathcal F}(\Gamma)$, we flip both triangulations to obtain triangulations $S'$ and $T'$ with arcs as above. These triangulations now both belong to a stratum of ${\mathcal M \mathcal F}(\Gamma_{b,b'})$ where $b$ and $b'$ are as above. We denote by $\Gamma_1$ and $\Gamma_2$ the two non-triangular surfaces in $\Gamma \setminus \{b \cup b' \}$. Denote (for $k=1,2$) by $S'_k$, resp. $T'_k$, the restrictions of $S'$, resp. $T'$, to $\Gamma_k$. We shall now flip $S'_k$ and $T'_k$ inside ${\mathcal M \mathcal F}(\Gamma_k)$ for $k=1,2$. Once the triangulations coincide on both $\Gamma_1$ and $\Gamma_2$, they will coincide on $\Gamma$. By induction, for $k=1,2$ we have $d(S'_k,T'_k) \leq {\ensuremath{\mathrm{diam}}}({\mathcal M \mathcal F}(\Gamma_k))$. More precisely (here we take into account that $g$ can be odd in the bound for $g- \floor{\frac{g}{2}}$): $$d(S'_1,T'_1) \leq {\ensuremath{\mathrm{diam}}}({\mathcal M \mathcal F}(\Gamma_1)) \leq A\, \frac{g}{2} \log \frac{g+2}{2}$$ and $$d(S'_2,T'_2) \leq {\ensuremath{\mathrm{diam}}}({\mathcal M \mathcal F}(\Gamma_2)) \leq A\, \frac{g+1}{2} \log \frac{g+3}{2}.$$ Putting this all together: $$\begin{aligned} d(S,T) & \leq & d(S,S')+d(T,T') + d(S'_1,T'_1)+ d(S'_2,T'_2)\\ & \leq & 40 g - 8 + A\, \frac{g}{2} \log \frac{g+2}{2} + A\, \frac{g+1}{2} \log \frac{g+3}{2}\\ & \leq &A \, g \log (g+1).\\\end{aligned}$$ The last inequality can be checked via a computation using $A=1000$ and $g\geq 2$. \[rem:onepuncture\] In light of Lemma \[lem:globalupper\], in the above theorem we have treated the case where the boundary of $\Gamma$ is a loop. The above proof however applies verbatim to the case where $\Gamma$ has a single puncture and no other boundary. The resulting theorem is the following.
\[thm:onepunctureupper\] If $\Gamma$ is a surface with genus $g$ and one puncture, then the diameter of the modular flip graph of $\Gamma$ satisfies $${\ensuremath{\mathrm{diam}}}({\mathcal M \mathcal F}(\Gamma)) < A g \log(g+1)$$ where $A$ can be taken to be $1000$. Upper bounds in terms of number of punctures -------------------------------------------- We now focus our attention on the flip graph of $\Omega$, a disk with $n-1$ interior punctures and one marked point on the unique boundary curve of $\Omega$. Our goal is to prove the following upper bound, which is very similar to the upper bound for $\Gamma$. \[thm:upperpuncture\] If $\Omega$ has $n-1$ labelled punctures then the diameter of the modular flip graph of $\Omega$ satisfies $${\ensuremath{\mathrm{diam}}}\left( {\mathcal M \mathcal F}(\Omega) \right) < A n \log (n+1)$$ where $A$ can be taken equal to $400$. Before proceeding to the proof, we state a preliminary lemma. \[lem:upperpuncture\] Let $T$ be a triangulation of $\Lambda$, an $m$-punctured disk ($m\geq 1$) with $k\geq 1$ marked points on the boundary. Then $T$ contains an arc $a$ between an interior puncture and a marked point on the boundary. If not, then a simple closed curve parallel to the boundary does not intersect $T$ and hence $\Lambda \setminus T$ contains an embedded annulus. With that observation in hand, we now proceed to the proof of Theorem \[thm:upperpuncture\]. Let $T$ be a triangulation of $\Omega$ where we suppose that $n\geq 2$ (if $n=1$ the flip graph consists of a single triangulation). We denote by $a_0$ the boundary arc of $\Omega$, by $p_0$ the boundary marked point, and by $p_j$, $j=1,\hdots,n-1$, the interior punctures. Our goal will be to flip our triangulation to a canonical triangulation and argue by induction on the distance to this canonical triangulation. The upper bound on the distance between arbitrary triangulations is then at most twice this distance. Our canonical triangulation $S$ is the following.
The triangulated surface is formed of layers. Each layer except for the last one is a cylinder with two boundary arcs $a_{k-1}$ and $a_{k}$, with punctures $p_{k-1}\in a_{k-1}$ and $p_{k}\in a_{k}$, for $k=1,\hdots,n-2$. The cylinders all contain a single interior arc from the triangulation as in the figure. The last layer is a disk with boundary $a_{n-2}$, puncture $p_{n-2}\in a_{n-2}$ and interior puncture $p_{n-1}$. There is an arc in the triangulation between $p_{n-2}$ and $p_{n-1}$. To reach this triangulation from $T$ we proceed as follows. We begin by finding arcs that will divide the surface into punctured disks with the same (or close to the same) number of punctures in each disk. By Lemma \[lem:upperpuncture\], there is an arc $c\in T$ such that $\Omega\setminus c$ is a disk with $3$ boundary arcs: the arc $a_0$ and the two copies of $c$. We reiterate the above process $\floor{\frac{n}{2}}$ times, cutting along $\floor{\frac{n}{2}}$ arcs, to obtain a disk $\overline{\Omega}$ with $1 + 2 \floor{\frac{n}{2}}$ boundary arcs. On this boundary, $a_0$ joins two vertices: $p'_0$ and another, say $p_0''$, both copies of $p_0$. Consider the arc $b$ which forms a loop based in $p'_0$ parallel to the boundary of $\overline{\Omega}$. Similarly, consider $b'$ which forms a triangle with $a_0$ and $b$: $b'$ is an arc between $p_0'$ and $p_0''$ which runs parallel to the boundary of $\overline{\Omega}$. Both $b$ and $b'$ have a nice property: they do not intersect $T$ too much. More precisely, as they are parallel to the boundary of $\overline{\Omega}$, which is formed by arcs of $T$, they intersect each of the remaining arcs at most twice.
Thus $$i(x, T) \leq 2 \left(\kappa(\Omega) - \left \lfloor{ \frac{n}{2}} \right \rfloor\right), \,\,x=b,b'.$$ Now $\kappa(\Omega) = 3n-2$ and $- 2 \floor{ \frac{n}{2}} \leq -n+1$, so we can deduce that $$i(b,T) + i(b',T) \leq 10n -10.$$ Now, using the upper bound on the distance to a stratum as a function of the intersection number, we can introduce the arcs $b$ and $b'$ in at most $10n - 10$ flips. The resulting triangulation has an arc surrounding $\floor{ \frac{n}{2}}$ punctures and another surrounding $n - \floor{ \frac{n}{2}}$ punctures, and the two arcs form a triangle with $a_0$ (see Figure \[fig:dividepunctures\]). [Figure \[fig:dividepunctures\]: the arcs $b$ and $b'$ forming a triangle with $a_0$ at the marked point $p_0$.] We now argue by induction on the two subsurfaces $\Omega_b$ and $\Omega_{b'}$, surrounded by $b$ and $b'$ respectively, to flip them towards their canonical triangulations. We have no control over which punctures are found in $\Omega_b$ and $\Omega_{b'}$, but the punctures do inherit an order from $\Omega$ and their canonical triangulations are meant with respect to that order. The number of flips inside each of the two subsurfaces, by induction, is at most $$A (\left \lfloor{ \frac{n}{2}} \right \rfloor + 1) \log(\left \lfloor{ \frac{n}{2}} \right \rfloor +2).$$ Denote the resulting triangulation $T'$. We now need to merge the two subtriangulations of $T'$ to obtain the canonical one. To do this we proceed in steps, where each step in the process adds a cylinder bounded by arcs $a_{k-1}$ and $a_k$ with punctures $p_{k-1}$ and $p_k$. We begin with the first step. Puncture $p_1$ is found either in $\Omega_b$ (the left-hand subsurface) or in $\Omega_{b'}$ (the right-hand subsurface). In either event it shares an arc with $p_0$, as both subtriangulations are canonical (and thus ordered). If $p_1$ is on the left, we flip as in Figure \[FLIPLEFT\], and similarly if $p_1$ is on the right. As illustrated in the figures, the process takes 6 flips.
We have now constructed the first ring of the canonical triangulation. This ring surrounds a divided subsurface and we are in the same situation as above, where $p_1$ and $a_1$ play the part of $p_0$ and $a_0$ and with one less interior puncture. We can iterate the process a total of $n-1$ times (the last step is automatic) and, arguing by induction, we reach the canonical triangulation in $6(n-1)$ flips from $T'$. Putting this all together, we have that for any $T\in {\mathcal M \mathcal F}(\Omega)$ $$d(T,S) \leq 10n - 10 + A (\left \lfloor{ \frac{n}{2}} \right \rfloor + 1) \log(\left \lfloor{ \frac{n}{2}} \right \rfloor +2) + 6(n-1).$$ Arguing as in the genus case (see the proof of Theorem \[thm:uppergenus\]) we obtain that $$d(T,S)\leq A n \log(n+1).$$ This shows that any two triangulations are at distance at most $2 A n \log(n+1)$, where $A$ can be taken equal to $200$. \[rem:unmarked\] The upper bound on ${\mathcal M \mathcal F}(\Omega)$ is much easier to obtain if the punctures are unlabelled. Indeed, given a vertex $p$, if a triangulation contains arcs that are not incident to $p$, one can always find a flip that increases the number of arcs incident to $p$. Let $S$ and $T$ be two triangulations. After at most $4 \kappa - 2n$ valence-increasing flips, both $T$ and $S$ look like Figure \[fig:flower\_0\]; that is, up to homeomorphism they differ only in the shaded area. The shaded area can be thought of as a triangulated $n$-gon. By Theorem \[th:STT\], $T$ and $S$ differ by at most $4 \kappa - 2n + 2n = 4 \kappa$ flips. [Figure \[fig:flower\_0\]: all loops based at the marked point $p$, with the shaded triangulated $n$-gon in the center.] \[thm:unmarked\] If $\Omega$ has $n-1$ unlabelled punctures then the diameter of the modular flip graph of $\Omega$ satisfies $${\ensuremath{\mathrm{diam}}}({\mathcal M \mathcal F}(\Omega)) < A n$$ where $A$ can be taken equal to $12$.
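Remark \[rem:unmarked\] already pins down the constant in Theorem \[thm:unmarked\]; the arithmetic can be checked in a couple of lines (a small illustrative sketch, with function names ours, taking $\kappa(\Omega)=3n-2$ as computed in the proof above):

```python
def kappa_omega(n):
    # kappa(Omega) = 3n - 2: the arc count of a triangulation of Omega,
    # as stated in the proof of Theorem [thm:upperpuncture] above
    return 3 * n - 2

def unlabelled_diameter_bound(n):
    # Remark [rem:unmarked]: two triangulations of Omega with unlabelled
    # punctures differ by at most 4 * kappa flips
    return 4 * kappa_omega(n)
```

Since $4\kappa(\Omega) = 12n - 8 < 12n$, the bound of the remark gives the constant $A = 12$ of the theorem.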
\[rem:allpunctures\] The above proof applies verbatim to the case where $\Omega$ is a punctured sphere (in this case the arcs $b$ and $b'$ in Figure \[fig:dividepunctures\] coincide). We thus have the following. \[thm:genuszeroupper\] If $\Omega$ is a sphere with $n$ labelled punctures then $${\ensuremath{\mathrm{diam}}}({\mathcal M \mathcal F}(\Omega)) < A n \log(n)$$ where $A$ can be taken equal to $410$. \[thm:genuszeroupperunlabelled\] If $\Omega$ is a sphere with $n$ unlabelled punctures then $${\ensuremath{\mathrm{diam}}}({\mathcal M \mathcal F}(\Omega)) < A n$$ where $A$ can be taken equal to $22$. Lower bounds via counting arguments {#ss:lower} ----------------------------------- We now focus on lower bounds. They essentially follow from a theorem of Sleator, Tarjan and Thurston [@STT1] and a counting argument. We begin with the following general lemma, which follows from a theorem on graph grammars [@STT1]. \[lem:graphgrammar\] Let $\Lambda$ be a surface with $n$ punctures and ${\mathcal M \mathcal F}(\Lambda)$ its modular flip graph. Then for a fixed triangulation $T_\mu\in {\mathcal M \mathcal F}(\Lambda)$ we have: $${{\rm card}}\{ T \in {\mathcal M \mathcal F}(\Lambda) \,| \, d(T,T_\mu) \leq m\} \leq 4^{10 m} 4^{\tilde{\kappa}(\Lambda)}.$$ This is a direct consequence of Theorem 2.3 of [@STT1] and the discussion in Section 5 of [@STT1]. For any triangulation $T$ one can construct its dual graph $G(T)$ (see Figure \[DualFlip\]). The graph $G(T)$ is a trivalent graph that has exactly $\tilde{\kappa}(\Lambda)$ vertices. We label the three half-edges incident to a vertex by the integers $1,2,3$ in clockwise order. Changing $T$ by one flip is equivalent to evolving $G(T)$ according to the grammar in Figure \[flip\_grammar\].
[Figure \[flip\_grammar\]: the two productions of the graph grammar, with half-edge labels $1$, $2$, $3$ at each vertex.] This grammar has two productions: one for performing the flip and the other for preparing the half-edge labels to allow the flip. Indeed, one flip on $T$ corresponds to performing at most 5 productions on $G(T)$: two to prepare the half-edge labels on the first vertex, two to prepare the half-edge labels on the second vertex, and one for the flip. It follows that the number of triangulations that can be obtained from $T_\mu$ in at most $m$ flips is bounded above by the number of graphs that can be derived from $G(T_\mu)$ with at most $10m$ productions. The latter is bounded above by $4^{10m}4^{\tilde{\kappa}(\Lambda)}$ by a straightforward application of Theorem 2.3 of [@STT1] to the grammar we described. The same proof works verbatim for $T_\nu$. Setting $m = {\ensuremath{\mathrm{diam}}}\left({\mathcal M \mathcal F}(\Sigma)\right)$ in the lemma above, and then solving for $m$ using Inequality \[eq:card\], one obtains the following result: \[cor:card\] Let $\Sigma$ be a surface of genus $g$ with $n$ marked points, $\Gamma$ be a surface of genus $g$ with one boundary component and exactly one marked point on it, and $\Omega$ be a disk with $n-1$ interior punctures. We have: $${\ensuremath{\mathrm{diam}}}\left({\mathcal M \mathcal F}(\Sigma)\right) > \frac{\log ({{\rm card}}({\mathcal M \mathcal F}(\Gamma))) + \log ({{\rm card}}({\mathcal M \mathcal F}(\Omega))) - \tilde{\kappa}(\Sigma)\log(4)}{10 \log(4)}$$ We now count the vertices of our combinatorial moduli spaces. \[lem:counting\] Let $\Gamma$ be a surface of genus $g\geq 2$ with a single boundary loop and one marked point on the boundary.
Let $\Omega$ be a disk with a single boundary component, with a marked point on the boundary and with $n-1$ labelled interior points. Then $$\begin{aligned} \label{card1} {{\rm card}}\{ {\mathcal M \mathcal F}(\Gamma)\} &\geq& \frac{g-1}{2} \, (g-1)!\\ \label{card2} {{\rm card}}\{ {\mathcal M \mathcal F}(\Omega)\} &\geq& C_{n-2}\, (n-1)! \end{aligned}$$ where $C_{k}$ is the $k$-th Catalan number. We begin with Inequality \[card1\]. For a given triangulation $T \in {\mathcal M \mathcal F}(\Gamma)$, if we collapse the triangle which contains the boundary arc, by cutting out the triangle and pasting the two loose arcs together, we obtain a triangulated surface of genus $g$ with a single marked point. If one performs this on two triangulations $S,T \in {\mathcal M \mathcal F}(\Gamma)$ and obtains different triangulations up to homeomorphism, then the triangulations were necessarily different to begin with. As such, there are at least as many triangulations in ${\mathcal M \mathcal F}(\Gamma)$ as triangulations of a genus $g$ surface with a single marked point. It is a result of Penner [@PenWP] that there are at least $$\frac{g-1}{2} (g-1)!$$ such triangulations, and so the inequality is proved. For Inequality \[card2\] we argue as follows. Denote by $p_0$ the marked point on the boundary curve and by $a_0$ the boundary loop. We begin by considering only triangulations where each interior puncture is surrounded by a single loop based at $p_0$ (see Figure \[fig:flower\_0\]). For two such triangulations to be the same, they must coincide on the exterior of these loops. Cutting along the loops, one obtains an $n$-gon with one privileged side $a_0$. As such, we are in the classical case of counting triangulations of a polygon with an order on the sides, and there are $C_{n-2}$ such triangulations. Any permutation of the vertex labelling gives a different polygon, and thus we obtain the stronger lower bound $$(n-1)! \, C_{n-2}.$$ From this we obtain the following lower bound.
\[cor:count\] Let $\Sigma$ be a surface of genus $g$ with $n$ labelled punctures. We have $${\ensuremath{\mathrm{diam}}}\left({\mathcal M \mathcal F}(\Sigma)\right)> B \left (n \log(n+1) + g \log(g+1) \right),$$ where $B$ can be taken equal to $2 \cdot 10^{-5}$. We will use the following inequalities: 1. $\log( C_n) > n $; 2. $\log n! > n \log(n) - n $. Assume $n\geq 3$ and $g \geq 3$. From Lemma \[lem:counting\] we get: $$\begin{aligned} \log ({{\rm card}}({\mathcal M \mathcal F}(\Gamma))) &> \log (g-1)! > (g-1) \log(g-1) - g \\ \log({{\rm card}}({\mathcal M \mathcal F}(\Omega))) &> \log(n-1)! > (n-1) \log(n-1) -n \end{aligned}$$ Assume that the punctures of $\Sigma$ are labelled. Plugging these into the inequality in Corollary \[cor:card\] we have: $$\begin{aligned} {\ensuremath{\mathrm{diam}}}\left({\mathcal M \mathcal F}(\Sigma)\right) & > \frac{(g-1) \log(g-1) -g + (n-1) \log(n-1) - n - \tilde{\kappa}(\Sigma) \log(4)}{10 \log(4)} \\ & > \frac{(g-1) \log(g-1) - g + (n-1) \log(n-1) - n - (4g + 2n - 6) \log(4) }{10 \log(4)} \\ & > \frac{(g-1)\log(g-1) - (4 \log(4) +1)g}{10\log(4)} + \frac{(n-1) \log(n-1) - (2\log(4) +1)n}{10\log(4)} \\ & > B\, (g \log(g+1) + n\log(n+1)) \end{aligned}$$ where $B$ can be taken to be $2 \cdot 10^{-5}$, for $g \geq 705$ and $n \geq 50$. It is immediate to verify that $${\ensuremath{\mathrm{diam}}}\left({\mathcal M \mathcal F}(\Sigma)\right) > B\, (g \log(g+1) + n\log(n+1))$$ also holds in the remaining cases ($g\leq 704$ or $n\leq 49$). We note that we can improve the constant $B$ by conditioning on $g$ and $n$ (giving them both lower bounds), but our principal interest is in the order of growth. We obtain a similar result on lower bounds for unlabelled marked points. If $\Sigma$ has $n \geq 511$ unlabelled punctures and is of genus $g$ then $${\ensuremath{\mathrm{diam}}}\left({\mathcal M \mathcal F}(\Sigma)\right) > B (g \log(g+1) + n )$$ where $B$ can be taken to be $10^{-3}$.
The graph grammar described in Lemma \[lem:graphgrammar\] can be refined (see [@STT1] for details) so that $${{\rm card}}{{\mathcal M \mathcal F}(\Sigma)} \leq 3^{\tilde{\kappa}(\Sigma)} 8^m .$$ We have: $$m \geq \frac{\log( {{\rm card}}({\mathcal M \mathcal F}(\Sigma))) - \tilde{\kappa}(\Sigma) \log(3)}{\log(8)}.$$ Let $\tilde{\Omega}$ be a disk with a single boundary component, with a marked point on the boundary and with $n-1$ unlabelled interior points. As in Lemma \[lem:graphgrammar\] we have $${{\rm card}}{{\mathcal M \mathcal F}(\Sigma)} \geq {{\rm card}}{{\mathcal M \mathcal F}(\Gamma)} \, {{\rm card}}{{\mathcal M \mathcal F}(\tilde{\Omega})} .$$ We now use a result of Brown [@Brown] that provides a lower bound on the cardinality of ${\mathcal M \mathcal F}(\tilde{\Omega})$: $${{\rm card}}{{\mathcal M \mathcal F}(\tilde{\Omega})} > \frac{2(4n - 7)!}{(n-1)! (3n - 4)!} .$$ An explicit computation shows that, for $n \geq 511$, the following holds: $${{\rm card}}{{\mathcal M \mathcal F}(\tilde{\Omega})} > (9.1)^{n} > 3^{2n} .$$ From this we can conclude that $$\begin{aligned} {\ensuremath{\mathrm{diam}}}\left({\mathcal M \mathcal F}(\Sigma)\right) & > \frac{(g-1) \log(g-1) -g + \log(9.1)\, n - \tilde{\kappa}(\Sigma) \log(3)}{\log(8)} \\ & > \frac{(g-1) \log(g-1) - g + \log(9.1) n - (4g + 2n - 6) \log(3) }{\log(8)} \\ & > \frac{(g-1)\log(g-1) - (4 \log(3) +1)g}{\log(8)} + \frac{\log(9.1) n - \log(9)n }{\log(8)} \\ & > B (g \log(g+1) + n) \end{aligned}$$ where the latter inequality holds for $g \geq 705$ and $B$ can be taken to be equal to $10^{-3}$. The final assertion can be checked directly for the cases $g < 705$. As before, we note that by putting lower bounds on $g$ and $n$, the constant $B$ can be improved. [*Addresses:*]{}\ Department of Mathematics, University of Fribourg, Switzerland\ Indiana University, Bloomington IN, USA\ [*Emails:*]{} <hugo.parlier@unifr.ch>, <vdisarlo@indiana.edu>\
--- abstract: 'Heusler alloys are widely studied due to their interesting structural and magnetic properties, like magnetic shape memory ability, coupled magneto-structural phase transitions and half-metallicity; governed, in many cases, by the number of valence electrons ($N_v$). The present work focuses on the magnetocaloric potential of half-metals, exploring the effect of $N_v$ on the magnetic entropy change while preserving half-metallicity. The test bench is the Si-rich side of the half-metallic series Fe$_2$MnSi$_{1-x}$Ga$_x$. From the experimental results it was possible to establish $|\Delta S|_{max}=\Delta H^{0.8}(\alpha+\beta N_v)$, i.e., the maximum magnetic entropy change depends in a linear fashion on $N_v$, weighted by a power law on the magnetic field change $\Delta H$ ($\alpha$ and $\beta$ are experimentally determined constants). In addition, it was also possible to predict a new multifunctional Heusler alloy, with enhanced magnetocaloric effect, Curie temperature close to 300 K and half-metallicity.' author: - 'R. J. Caraballo Vivas' - 'S. S. Pedro' - 'C. Cruz' - 'J. C. G. Tedesco' - 'A. A. Coelho' - 'A. Magnus G. Carvalho' - 'D. L. Rocco' - 'M. S. Reis' title: 'Multifunctional Heusler alloy: experimental evidence of enhanced magnetocaloric properties at room temperature and half-metallicity' --- Introduction ============ Heusler alloys have attracted considerable attention due to their several possible applications, such as spintronics [@spin; @galanakis], magneto-optics [@Pons200857], magnetoelectronics [@Lielec], solar thermoelectrics and other technological devices [@Graf20111]. The physical properties relevant for these applications, such as the magnetization [@galanakis; @brown2000magnetization] and the Curie temperature [@varaprasad2009; @okubo2010magnetic], can be further optimized by managing parameters such as the lattice parameter and the number of valence electrons ($N_v$), which can be tuned by chemical substitution.
An example of the above are alloys optimized to exhibit the curious shape memory behavior, defined as the ability of the material to return to its original shape, after being deformed, via a change in temperature and magnetic field; found, for instance, in Ni$_{45}$Co$_5$Mn$_{36.6}$In$_{13.4}$[@kainuma2006]. Also remarkable is the magnetocaloric effect around the magnetic transition temperature due to the occurrence of coupled magneto-structural transitions; found, for instance, in non-stoichiometric Ni-Mn-Ga alloys [@planes2009; @PhysRevB.59.1113]. Another important example is half-metallicity, in which the alloy presents a gap in the minority spin band, working therefore as a perfect spin filter, since electrons at the Fermi level are fully polarized. This feature is useful for spintronic purposes and is found, for instance, in Fe$_2$MnSi[@Pedro], Fe$_2$MnP[@Kervan] and the high-temperature ferromagnets Co$_2$MnSi and Co$_2$MnGa[@galanakis]. The aim of this work is thus to predict a multifunctional Heusler alloy, with enhanced magnetocaloric effect at room temperature and half-metallicity. The magnetocaloric effect (MCE) has been studied by several researchers in order to develop magnetocaloric materials of low cost, good thermal conductivity, low electrical resistivity and, mainly, maximized magnetocaloric potential. The MCE can be seen from either an adiabatic or an isothermal process, both due to a change of the applied magnetic field. In an adiabatic process the magnetic material changes its temperature, while in an isothermal process it exchanges heat with a thermal reservoir. It is therefore possible to create a thermomagnetic cycle and a magnetic refrigerator based on these processes [@BookMario; @tishin].
Some compounds that exhibit remarkable MCE potentials are, for instance, manganites [@Manga1; @Manga2; @Andrade; @PhysRevB.71.144413]; MnAs-based compounds [@MnCuAs; @rocconat; @Leitao; @PhysRevB.77.104439]; Heusler alloys [@planes2009; @PhysRevB.59.1113]; La-Fe-Si alloys [@PhysRevB.67.104416; @PhysRevB.65.014410]; intermetallics like RNi$_2$ (R = Nd, Gd, Tb)[@plaza], RCo$_2$ (R = Er, Tb)[@Gomes2002870] and PrNi$_{1-x}$Co$_x$ [@PhysRevB.79.014428]; and even diamagnetic materials like graphene[@paixao2014oscillating; @reis2012oscillating2]. On the other hand, half-metallic materials are one of the keys to spintronics, since these materials are able to filter the majority spins of an incoming non-polarized current. They are therefore useful for tunnel junctions, spin-injection and giant magnetoresistance devices[@galanakis], especially those with high Curie temperature. More precisely, the tunnel magnetoresistance ratio (TMR) becomes theoretically infinite, based on Julliere’s model, for tunnel junctions using half-metals in both electrodes[@miyazaki2012physics]. The half-metallicity can be verified from either the theoretical density of states, obtained from first-principles methods, or the total magnetic moment of the compound, which must obey the generalized Slater-Pauling rule $M = (N_v - 24) \mu_B$[@galanakis], where the valence electrons number $N_v$ is written, for our case, as[@Pedro]: $$\label{valencias} N_v = (2 \times N_{\mbox{Fe}}) + N_{\mbox{Mn}} + (1-x) N_{\mbox{Si}}+ xN_{\mbox{Ga}}$$ Above, $N_{\mbox{Fe}}= 8$, $N_{\mbox{Mn}}= 7$, $N_{\mbox{Si}}= 4$ and $N_{\mbox{Ga}}= 3$ are the valence electron counts for each atom. Considering this scenario, our aim is to provide a multifunctional Heusler alloy, with half-metallicity and enhanced magnetocaloric effect, thus tuning this multifunctionality with the valence electrons number $N_v$. To this purpose, our test bench materials are the Si-rich side of the half-metallic series Fe$_2$MnSi$_{1-x}$Ga$_x$.
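For quick bookkeeping, the valence count of equation \[valencias\] and the Slater-Pauling moment can be combined in a few lines; a minimal sketch (function and dictionary names are ours), using the per-atom counts quoted above:

```python
# Valence electron counts per atom, as quoted below eq. (valencias)
N_VAL = {"Fe": 8, "Mn": 7, "Si": 4, "Ga": 3}

def n_v(x):
    # Eq. (valencias): N_v for Fe2MnSi_{1-x}Ga_x
    return 2 * N_VAL["Fe"] + N_VAL["Mn"] + (1 - x) * N_VAL["Si"] + x * N_VAL["Ga"]

def slater_pauling_moment(nv):
    # Generalized Slater-Pauling rule M = (N_v - 24) mu_B for half-metals
    return nv - 24
```

For the parent compounds this gives $N_v=27$ for Fe$_2$MnSi (a Slater-Pauling moment of $3\,\mu_B$) and $N_v=26$ for Fe$_2$MnGa.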
Further details on the test bench material ========================================== The magnetic and structural properties of the parent Fe$_2$MnSi and Fe$_2$MnGa Heusler alloys have been previously investigated [@hiroi2012substitution; @kawakami; @kudryavtsev]. The former is a well-known half-metallic ferromagnetic alloy with $T_c$ around 224 K[@Pedro] and $Fm\overline{3}m$ space group (Cu$_2$MnAl-type structure); Fe$_2$MnGa is also a half-metallic ferromagnet, with $T_c$ far above room temperature, around 800 K as previously reported [@kudryavtsev; @Gasi], and crystallizes in the $Pm\overline{3}m$ space group (Cu$_3$Au-type structure). Despite the different crystallographic structures of these parent compounds, it is possible to obtain single-phase samples of the series Fe$_2$MnSi$_{1-x}$Ga$_x$, however only on the Si-rich side, up to 50%; i.e., the crystal structure of the Fe$_2$MnSi parent compound supports Ga substitution up to $x=0.50$, with the lattice parameter $a$ increasing with increasing Ga content[@Pedro]. Despite these findings, the literature has no results on the magnetocaloric effect of these materials; the Curie temperature of this series, however, was explored in detail by this group and reported in reference [@Pedro]. The latter was presented as a function of the valence electrons number $N_v$, and an interesting linear behavior was found (see figure \[tc\]). Thus, since the aim of the present work is to provide a multifunctional Heusler alloy with enhanced magnetocaloric properties ruled by $N_v$ and half-metallicity, from figure \[tc\] it is straightforward to extrapolate the Curie temperature to 300 K and verify that $N_v = 27.44$ would bring the ferromagnetic transition up to room temperature (a desired feature, expected to optimize magnetocaloric materials). Other studies connecting $T_c$ and $N_v$ confirm the linear growth tendency of these quantities for half-metallic Heusler alloys [@Graf20111].
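The extrapolation just described can be sketched numerically; a minimal illustration (function names are ours) that assumes a strictly linear $T_c(N_v)$ through the two values quoted in the text, $T_c \approx 224$ K at $N_v = 27$ (Fe$_2$MnSi) and the 300 K target at $N_v = 27.44$, and that counts electrons for the Si-by-P substitution considered below (P contributes 5 valence electrons):

```python
def t_c_linear(nv, anchor1=(27.0, 224.0), anchor2=(27.44, 300.0)):
    # Straight line through the two (N_v, T_c) anchor values quoted in the text
    (x1, t1), (x2, t2) = anchor1, anchor2
    return t1 + (t2 - t1) * (nv - x1) / (x2 - x1)

def n_v_si_p(y):
    # Valence count for Fe2MnSi_{1-y}P_y: 2*8 + 7 + 4*(1-y) + 5*y = 27 + y
    return 2 * 8 + 7 + (1 - y) * 4 + y * 5
```

The two anchors imply a slope of roughly 173 K per valence electron, and $N_v = 27 + y$ reaches 27.44 at $y = 0.44$, i.e. the composition Fe$_2$MnSi$_{0.56}$P$_{0.44}$ discussed below.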
![Linear behavior of the Curie temperature ($T_c$) as a function of the valence electrons number $N_v$. The ‘+’ signal marks the $N_v$ value necessary to reach $T_c$ at room temperature, found to be $N_v = 27.44$.[]{data-label="tc"}](tc.eps){width="50.00000%"} Thus, as a consequence of the above, we must increase the valence electrons number $N_v$ to further optimize the magnetocaloric properties of half-metallic Heusler alloys. To achieve this goal, either Si or Ga must be replaced by another element (or elements) that can contribute more electrons; i.e., elements belonging, for instance, to group 15 of the periodic table, such as P or As. These elements contribute 5 electrons each and can indeed increase the overall valence electrons number of the system. On the other hand, substitution by a group 14 element, such as Ge or Sn, does not increase the overall valence electrons number for this series, since these have only 4 valence electrons (the same valence electron number as Si). In fact, Zhang [@Zhang2] found values of $T_c$ between only 243 and 260 K in the same structural phase, by replacing Si by Ge in the parent Fe$_2$MnSi compound. From the above, we therefore propose Fe$_2$MnSi$_{0.56}$P$_{0.44}$, since Kervan and Kervan [@Kervan] conducted *ab initio* calculations concerning Fe$_2$MnP and confirmed the half-metallic features of the system. Experimental details ==================== Polycrystalline ingots of Fe$_{2}$MnSi$_{1-x}$Ga$_{x}$ Heusler alloys were synthesized in an arc furnace under Ar atmosphere. The masses of the high-purity reactants were calculated in stoichiometric quantities, with the exception of Mn, which was added in a 3% excess over stoichiometry to compensate possible losses during the melting process. This additional amount was found using the preparation process described in [@Caraballo].
The ingots were wrapped in tantalum foils, sealed in a quartz tube filled with argon, annealed for 3 days at 1323 K and subsequently quenched in water to obtain single-phase samples. X-ray powder diffraction data were obtained at room temperature using a Bruker AXS D8 Advance diffractometer with Cu-K$\alpha$ radiation ($\lambda$ = 1.54056 Å) at [*Laboratório de Difração de Raios-X*]{} at UFF and confirmed the single-phase formation for all samples. Energy dispersive X-ray spectroscopy (EDS), performed at [*Laboratório de Caracterização de Materiais*]{} at IF-Sudeste MG, was used to obtain the sample compositions. The average values found are in very good agreement with the nominal compositions. Magnetization data were acquired as a function of temperature and magnetic field using a commercial Superconducting Quantum Interference Device (SQUID, from Quantum Design$^{\circledR}$) at [*Laboratório de Baixas Temperaturas*]{} at UNICAMP. Further details about sample preparation and structural characterization can be found in reference [@Pedro]. Magnetocaloric effect and the valence electrons number ====================================================== The physical quantities that measure the magnetocaloric potential are the magnetic entropy change $\Delta S(T,\Delta H)$ and the adiabatic temperature change $\Delta T(T,\Delta H)$. The magnetic entropy change is more commonly found in the literature, since it only needs the magnetization map, i.e., $M(T,H)$. Thus, we performed magnetization measurements as a function of magnetic field for several temperatures around the magnetic transition temperature $T_c$ (see figure \[mce\]-left), for some selected compositions ($x=0.50$, 0.12 and 0.02), chosen among those available on figure \[tc\]. ![image](mh_mce.eps){width="80.00000%"} From $M$ *vs.* $H$ isothermal curves it is possible to obtain $\Delta S(T,\Delta H)$ according to: $$\Delta S(T,\Delta H)=\int^{H}_{0}\left(\frac{\partial M(T,H)}{\partial T}\right)_{H}dH.
\label{equ1}$$ The calculated values of $\Delta S(T,\Delta H)$ are presented on figure \[mce\]-right for $\Delta H$ = 10, 20, 30, 40 and 50 kOe. An interesting result was then found: by increasing $N_v$ (replacing Ga by Si), the maximum magnetic entropy change increases. As mentioned above, the Curie temperature also increases by increasing $N_v$ (see figure \[tc\]), and therefore a shift of the magnetic entropy change peak towards higher temperatures is observed (and expected). These results are clearly seen on figure \[mce\]-right and summarized on figure \[ds\]. ![(left axis) Maximum magnetic entropy change $|\Delta S|_{max}$ for several values of $\Delta H$ and as a function of $N_v$ for the Fe$_2$MnSi$_{1-x}$Ga$_x$ series. The ‘+’ signals mark the possible $|\Delta S|_{max}$ values of the proposed compound with $T_c$ close to room temperature. (right axis-color online) Scaling law that collapses the experimental data, confirming therefore the validity of equation \[finalfwnfew\] and the parameters obtained from figure \[ab\].[]{data-label="ds"}](mcenv.eps){width="50.00000%"} From now on let us then focus on figure \[ds\]. Note that the maximum magnetic entropy change $|\Delta S|_{max}$ depends linearly on the Ga by Si substitution, i.e., on $N_v$; and, for $N_v=27.44$, it is expected to reach 1.2 J/kg.K@20 kOe, which is indeed comparable to standard metallic Gd (4 J/kg.K@20 kOe). Thus, the Heusler alloy with $N_v=27.44$ would optimize the Curie temperature, shifting $|\Delta S|_{max}$ towards room temperature, and, in addition, enhance the magnetocaloric properties (see figure \[ds\]). Let us look deeper into this result. Note that $|\Delta S|_{max}$ has a linear dependence on $N_v$ for any value of the applied field change $\Delta H$, but the slope and intercept parameters of these straight lines are $\Delta H$ dependent.
Thus, it is reasonable to propose: $$|\Delta S|_{max}(N_v,\Delta H)=a(\Delta H)+b(\Delta H) N_v$$ Note that the value of $N_v$ for which $|\Delta S|_{max}$ goes to zero does not depend on $\Delta H$, and therefore the $a(\Delta H)/b(\Delta H)$ ratio is a constant (corresponding to $N_v=25.7$), i.e., independent of $\Delta H$. In addition, there is also a boundary condition: $|\Delta S|_{max}(N_v,\Delta H=0)=0$, and thus we can write $a(\Delta H=0)=b(\Delta H=0)=0$. Taking advantage of these ideas, it is reasonable to propose: $$\label{abequations} a(\Delta H)=\alpha \Delta H^\gamma\;\;\;\text{and}\;\;\;b(\Delta H)=\beta \Delta H^\gamma$$ which leads to: $$\label{finalfwnfew} |\Delta S|_{max}(N_v,\Delta H)=\Delta H^\gamma(\alpha+\beta N_v)$$ The above equation is consistent with our observations. The point now is to obtain the parameters $\alpha$, $\beta$ and $\gamma$ from the experimental results. To go further, we need $a(\Delta H)$ and $b(\Delta H)$ as functions of $\Delta H$; these quantities can be easily obtained from the experimental data presented on figure \[ds\], and they are shown on figure \[ab\]. ![Intercept (a) and slope (b) of the linear dependence of $|\Delta S|_{max}$ as a function of $N_v$ (see figure \[ds\]). These parameters depend on the magnetic field change in a power-law fashion $\Delta H^\gamma$ (see equation \[abequations\]), experimentally found with $\gamma=0.80(1)$ for both parameters $a$ and $b$. From this dependence the empirical relationship in equation \[finalfwnfew\] could be obtained. The straight dotted line is a guide to the eyes, to show the lack of linearity between those parameters and $\Delta H$.[]{data-label="ab"}](ab.eps){width="50.00000%"} Equations \[abequations\] were then fitted to the data presented on figure \[ab\] and the needed parameters were obtained: $\alpha=-1.62(9)$ J/kg.K.kOe$^\gamma$, $\beta=0.063(3)$ J/kg.K.kOe$^\gamma$ and $\gamma=0.80(1)$.
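With the central fitted values (uncertainties ignored), equation \[finalfwnfew\] becomes a one-line predictor; a minimal sketch (function name is ours):

```python
ALPHA, BETA, GAMMA = -1.62, 0.063, 0.80  # central fitted values from the text

def ds_max(nv, dh_kOe):
    # Empirical law |DS|_max = DH^gamma * (alpha + beta * N_v), eq. [finalfwnfew]
    # nv: valence electron number; dh_kOe: field change in kOe
    return dh_kOe ** GAMMA * (ALPHA + BETA * nv)
```

For instance, `ds_max(27.44, 20)` evaluates to about 1.19 J/kg.K, consistent with the 1.2 J/kg.K@20 kOe estimate quoted above, and the zero crossing $-\alpha/\beta \approx 25.7$ reproduces the $N_v$ value at which the straight lines meet.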
An important point should be emphasized: each fitting had its $\gamma$ exponent free to change, and the experimental data led both (one from $a$ and the other from $b$) to the same value of 0.80(1). This is experimental evidence that those straight lines on figure \[ds\] indeed tend to the same value of $N_v$ for a vanishing $|\Delta S|_{max}$; otherwise we could not have factored out $\Delta H^{\gamma}$ as in equation \[finalfwnfew\] and, as a consequence, $a(\Delta H)/b(\Delta H)$ would be $\Delta H$ dependent (contrary to what was observed). To stress this idea, the right axis of figure \[ds\] presents $|\Delta S|_{max}/\Delta H^{0.8}$, and indeed this quantity collapses, for each composition, all of the magnetocaloric data into a single point. Concluding remarks ================== In the present paper we explored the Si-rich side of Fe$_2$MnSi$_{1-x}$Ga$_{x}$ Heusler alloys and concluded that the valence electron number $N_v$ plays an important role in their Curie temperature and magnetic entropy change. Increasing $N_v$ (equivalent to increasing the Si content from the Fe$_2$MnSi$_{0.5}$Ga$_{0.5}$ compound) leads to a linear increase of both quantities. Our conclusion is that $N_v=27.44$ would bring the Curie temperature of the compound to room temperature, as well as promote an increase of the maximum magnetic entropy change. From these results we could also propose (based on the experimental data) an empirical linear relationship of the maximum magnetic entropy change $|\Delta S|_{max}$ with $N_v$, weighted by a power law of $\Delta H$, i.e.: $|\Delta S|_{max}=\Delta H^{0.8}(\alpha+\beta N_v)$, where $\alpha$ and $\beta$ are experimentally determined constants. To achieve the goal of the present effort, we also propose to substitute Ga by a group 15 element, like P; and we expect that Fe$_2$MnSi$_{0.56}$P$_{0.44}$ would have its Curie temperature close to 300 K, with an enhanced magnetocaloric effect.
In addition, it is known from reference [@Kervan] that Fe$_2$MnP is a half-metallic Heusler alloy, and we therefore expect that the above proposal leads, in addition to the enhanced MCE properties, to a half-metallic system with a Curie temperature close to room temperature. Thus, these ideas can indeed lead to a new multifunctional material, opening doors for further research on this topic. Acknowledgments =============== Access to [*Laboratório de Caracterização de Materiais*]{} at IF-Sudeste MG (Juiz de Fora, Brazil), [*Laboratório de Difração de Raios-X*]{} at IF - UFF (Niterói, Brazil) and [*Laboratório de Baixas Temperaturas*]{} at UNICAMP (Campinas, Brazil) is gratefully acknowledged by all authors, who also acknowledge FAPERJ, FAPESP, CAPES, CNPq and PROPPI-UFF for financial support.
10.1103/PhysRevB.77.104439) [****,  ()](\doibase 10.1103/PhysRevB.67.104416) [****,  ()](\doibase 10.1103/PhysRevB.65.014410) [****,  ()](\doibase http://dx.doi.org/10.1063/1.3054178) [****,  ()](\doibase http://dx.doi.org/10.1016/S0304-8853(01)01327-0) [****,  ()](\doibase 10.1103/PhysRevB.79.014428) @noop [****, ()]{} @noop [****,  ()]{} @noop [**]{}, Vol.  (, ) @noop [****,  ()]{} in @noop [**]{}, Vol.  (, ) p. @noop [****, ()]{} @noop [****,  ()]{} @noop [****,  ()]{} [****, ()](\doibase http://dx.doi.org/10.1063/1.4892677) @noop [****, ()]{} [****,  ()](\doibase http://dx.doi.org/10.1063/1.4861906) [****,  ()](\doibase http://dx.doi.org/10.1016/S0921-4526(02)01853-7)
COSMIC BALDNESS W. Boucher G.W. Gibbons D.A.M.T.P., Cambridge University Some years ago (Gibbons & Hawking 1977) it was suggested that solutions of Einstein’s equations with a positive cosmological term should eventually settle down to a state which is stationary inside the cosmological event horizon of any future inextendible timelike curve. This stationary state, it was suggested, would, if no black holes were present, be de Sitter space. The recent work on the inflationary scenario in the early universe has reawakened interest in this topic, and the purpose of this article is to review the mechanism whereby a universe dominated by vacuum energy, so that it satisfies the equation $$R_{\alpha\beta}=\Lambda g_{\alpha\beta}\,,\qquad \Lambda>0,$$ relaxes to an asymptotically de Sitter state inside the event horizon of any observer. There is some overlap with the article by Barrow but the emphasis here is rather different. De Sitter space may be thought of as the hyperboloid in 5-dimensional Minkowski space given by $$-(X^0)^2+(X^1)^2+(X^2)^2+(X^3)^2+(X^5)^2=3/\Lambda,$$ where $\Lambda$ is the cosmological constant, which is related to Hubble’s constant, $H$, or the surface gravity of the horizon, $\kappa$, by $$H=\kappa=\Bigl(\frac{\Lambda}{3}\Bigr)^{1/2}.$$ It is useful to coordinatize the spacetime in two different ways, depending upon whether we wish to think of it as an expanding F.R.W. universe or a static universe with an event horizon. We use $(s,\chi,\theta,\phi)$ in the first case and $(t,r,\theta,\phi)$ in the second.
They are given by (Hawking & Ellis 1973; Gibbons & Hawking 1977): $$\begin{aligned} r\sin\theta\cos\phi&=& X^1 = H^{-1}\cosh Hs\,\sin\chi\sin\theta\cos\phi\\ r\sin\theta\sin\phi&=& X^2 = H^{-1}\cosh Hs\,\sin\chi\sin\theta\sin\phi\\ r\cos\theta &=&X^3 = H^{-1}\cosh Hs\,\sin\chi\cos\theta\\ ( H^{-2}-r^2 )^{1/2}\cosh Ht &=& X^5 = H^{-1}\cosh Hs\,\cos\chi\\ ( H^{-2}-r^2 )^{1/2}\sinh Ht &=& X^0 = H^{-1}\sinh Hs\end{aligned}$$ The metric thus becomes: $$d\sigma^2= -(1-H^2r^2)\,dt^2 + (1-H^2r^2)^{-1}\,dr^2 + r^2\,d\Omega^2$$ or $$d\sigma^2 = -ds^2 + H^{-2}\cosh^2 Hs\,(d\chi^2 + \sin^2\chi\,d\Omega^2).$$ The event horizon of an observer situated at $r=0$ is given by $r=H^{-1}$, and so, since (4)–(6) imply that $$\begin{aligned} \sin\chi &=& Hr\,(\cosh Hs)^{-1}\\ &=& Hr\,(1-H^2r^2\tanh^2 Ht)^{-1/2}(\cosh Ht)^{-1},\end{aligned}$$ we see that as $s,t \rightarrow\infty$ an exponentially smaller portion of the 3-spheres $s=$ const. is included within the event horizon of this observer. This is the key to understanding the decay of perturbations. From the point of view of the static frame we expect them to be radiated through the cosmological event horizon just as in the familiar black hole case (see Price 1972). From the point of view of the expanding universe frame we shall see – as pointed out by Starobinsky (1977) – that gravitational wave perturbations do not decay but rather are frozen in as $t\rightarrow \infty$. However this is not in contradiction with the cosmic No Hair Theorem since as time proceeds the observer sees these perturbations on an exponentially smaller scale and so the region inside his/her event horizon appears to become more and more accurately de Sitter. Following Lifshitz & Khalatnikov (1963) we consider perturbations of the metric form (10) given by $$d\sigma^2 =-ds^2 + H^{-2}\cosh^2 Hs\,\Bigl(d\chi^2 + \sin^2\chi\,d\Omega^2 + \sum_n \nu_n(s)\, G^{(n)}_{ij}(\chi,\theta,\phi)\, dx^i dx^j\Bigr),$$ where the $G^{(n)}_{ij}$ are tensor harmonics on $S^3$ and the $\nu_n(s)$ are the amplitudes of the gravitational wave perturbations.
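The two expressions for $\sin\chi$ can be cross-checked numerically; a minimal sketch (taking $H=1$; these are the standard de Sitter embedding identities, under which eliminating $s$ gives $\sinh Hs = (1-H^2r^2)^{1/2}\sinh Ht$, so the two forms must agree identically):

```python
import math

# Check that the two expressions for sin(chi) agree, using the relation
#   sinh(H s) = (1 - H^2 r^2)**0.5 * sinh(H t)
# implied by the embedding formulas on the static patch (H = 1 here):
#   sin(chi) = H r / cosh(H s)
#            = H r * (1 - H^2 r^2 tanh(H t)**2)**(-1/2) / cosh(H t)
H = 1.0
for r in (0.1, 0.5, 0.9):
    for t in (0.3, 1.0, 2.5):
        sinh_s = math.sqrt(1.0 - (H * r) ** 2) * math.sinh(H * t)
        cosh_s = math.sqrt(1.0 + sinh_s ** 2)
        first = H * r / cosh_s
        second = H * r / (math.sqrt(1.0 - (H * r * math.tanh(H * t)) ** 2)
                          * math.cosh(H * t))
        assert abs(first - second) < 1e-12
# The common value decays like e^{-Ht} at fixed r: the horizon covers an
# exponentially shrinking portion of the 3-spheres s = const.
```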
It turns out that if one considers solutions of the massless, minimally coupled scalar wave equation in the de Sitter background of the form $$\phi = \sum_n \nu_n(s)\, Q^{(n)}(\chi,\theta,\phi),$$ where the $Q^{(n)}$ are scalar harmonics on $S^3$, the coefficients $\nu_n(s)$ in (13) satisfy the same equation as those in (12) and are given, up to a constant factor, by $$\nu_n(s) = \bigl(in\,{\rm sech}\, Hs + \tanh Hs\bigr)\, e^{\,in\tan^{-1}(\sinh Hs)}.$$ In neither the scalar nor the gravitational case do the perturbations die away; rather they tend to constants at late times. Thus the scalar field $\phi$ can have any functional form, $\Phi(\chi,\theta,\phi)$, at late times. However, using the relation given by (11) between $\chi$ and the coordinates $r$ and $t$, we see that as $t \rightarrow \infty$, for all $r$ inside the event horizon, $$\phi(t,r,\theta,\phi) \rightarrow \Phi(0).$$ Thus $\phi$ tends to a constant inside the event horizon exponentially fast, which is in accordance with the fact that there are no static solutions of the wave equation which are regular inside and on the event horizon other than the constant one. In the gravitational case similar results hold: locally the constant gravitational wave modes are pure gauge, even though they are not pure gauge globally over the entire $S^3$. We shall see this in more detail when we consider the fully non-linear asymptotic form of the metric. Note that in contrast to the black hole case the perturbations die away exponentially fast; there is no power-law fall-off of the sort discussed by Price (1972). Starobinsky has pointed out to us that a general asymptotic solution of the equation $R_{\alpha \beta}= \Lambda g_{\alpha \beta}$ takes the form $$d\sigma^2 = - ds^2 + e^{2Hs}\, a_{ij}({x})\, dx^i dx^j + O(1),$$ where $a_{ij}$ is an $\underline{\rm arbitrary}$ 3-metric. This clearly indicates that the waves do not decay globally over the entire 3-surface $s=$ constant. The geometry of this surface never settles down to that of a smooth 3-sphere. The curve $x^i = 0$ is a geodesic.
By means of a linear coordinate transformation of the $x^i$’s we may, with no loss of generality, set $$a_{ij}(0)= \delta_{ij}.$$ Now introduce coordinates $y^i$ and $t$ by $$\begin{aligned} y^i&=& e^{Hs} x^i,\\ e^{Ht}&=& (1-H^2 y^2)^{-1/2}\, e^{Hs}.\end{aligned}$$ It is now an easy exercise to show that in the coordinates $(t, y^i)$, which are only valid within the event horizon of the timelike observer at $y^i =0$, the general metric (16) approaches the exact metric of de Sitter space (equation (9)) exponentially fast. Thus as far as every freely falling observer is concerned the observable universe becomes quite bald. If we exclude the possibility of a central black hole it would seem likely from the above analysis that de Sitter space is the only exactly static solution of the equations $R_{\alpha \beta}=\Lambda g_{\alpha \beta}\,, \quad \Lambda >0$, surrounded by a regular event horizon and with a regular centre – i.e. such that the interior of the event horizon is diffeomorphic to the product of an open ball in $R^3$ with the real line. This last proviso is necessary to exclude the Nariai (1951) metric. We have tried unsuccessfully to generalize the standard Israel theorems for black holes (Israel 1967; Muller zum Hagen, et al. 1973; Robinson 1977) to this case. If the metric is static the boundary conditions on the horizon are just those required to justify analytically continuing the metric to the Euclidean regime by putting $t = i\tau$. The resulting metric is positive definite, defined on a manifold diffeomorphic to $S^4$, and satisfies $R_{\alpha \beta}=\Lambda g_{\alpha \beta }$. The only known such metric is of course the standard Einstein metric on $S^4$, which is just the analytic continuation of de Sitter space obtained by setting $t=i\tau$ in (9) and identifying $\tau$ modulo $2\pi ( 3/\Lambda)^{1/2}$.
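The ‘easy exercise’ can also be verified numerically. A minimal sketch, assuming $H=1$, one spatial dimension, $a_{ij}=\delta_{ij}$, the $e^{2Hs}$ scale factor in (16), and dropping the $O(1)$ correction: pulling the metric $-ds^2+e^{2Hs}\,dx^2$ back through the coordinate change must reproduce the static form (9).

```python
import math

# Pull back the (one-dimensional, a_ij = delta_ij, O(1)-free) metric
#   -ds^2 + e^{2Hs} dx^2
# through the inverted coordinate change
#   e^{Hs} = (1 - H^2 y^2)^{1/2} e^{Ht},   x = y e^{-Hs},
# and compare with the static de Sitter components (H = 1).
H = 1.0

def s_of(t, y):
    return t + 0.5 * math.log(1.0 - (H * y) ** 2) / H

def x_of(t, y):
    return y * math.exp(-H * s_of(t, y))

def pulled_back_metric(t, y, h=1e-6):
    # finite-difference Jacobian of (s, x) with respect to (t, y)
    st = (s_of(t + h, y) - s_of(t - h, y)) / (2 * h)
    sy = (s_of(t, y + h) - s_of(t, y - h)) / (2 * h)
    xt = (x_of(t + h, y) - x_of(t - h, y)) / (2 * h)
    xy = (x_of(t, y + h) - x_of(t, y - h)) / (2 * h)
    e2s = math.exp(2 * H * s_of(t, y))
    g_tt = -st * st + e2s * xt * xt
    g_ty = -st * sy + e2s * xt * xy
    g_yy = -sy * sy + e2s * xy * xy
    return g_tt, g_ty, g_yy

t, y = 0.3, 0.4
g_tt, g_ty, g_yy = pulled_back_metric(t, y)
# expected static components: -(1 - H^2 y^2), 0, (1 - H^2 y^2)^{-1}
```

With the exact de Sitter input the agreement is exact; for a general $a_{ij}$ the same computation would exhibit the exponentially fast approach claimed in the text.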
The natural conjecture to make – which is stronger than the generalised Israel theorem suggested above – is that the only such metric is the standard one. Mathematicians we have asked do not know whether this is true but the corresponding statement is false for $S^{4n+3}\,,\quad n\ge 1$ (Jensen 1973). If there is another such metric it cannot be continuously deformed into the standard one, as one can readily check by perturbing the Einstein equations on $S^4$ as described in (Gibbons and Perry 1978). We would like to thank S.W. Hawking, A. Starobinsky, S. Siklos and J. Barrow for useful discussions and suggestions on the material presented above. Gibbons, G.W., and Hawking, S.W. (1977). Cosmological event horizons, thermodynamics, and particle creation. Phys. Rev. D [$\underline{15}$]{}, 2738. Gibbons, G.W., and Perry, M.J. (1978). Quantizing gravitational instantons. Nucl. Phys. B [$\underline{146}$]{}, 90. Hawking, S.W., and Ellis, G.F.R. (1973). The large scale structure of space-time. Cambridge: Cambridge University Press. Israel, W. (1967). Event horizons in static vacuum space-times. Phys. Rev. [$\underline{164}$]{}, 1776. Jensen, G.R. (1973). Einstein metrics on principal fibre bundles. J. Diff. Geom. [$\underline{8}$]{}, 599. Lifshitz, E.M., and Khalatnikov, I.M. (1963). Investigations in relativistic cosmology. Adv. Phys. [$\underline{12}$]{}, 185. Muller zum Hagen, H., Robinson, D.C., and Seifert, H.J. (1973). Black holes in static vacuum space-times. Gen. Rel. Grav. [$\underline{4}$]{}, 53. Nariai, H. (1951). On a new cosmological solution of Einstein’s field equations of gravitation. Sci. Rep. Tohoku Univ. [$\underline{35}$]{}, 62. Price, R.H. (1972). Nonspherical perturbations of relativistic gravitational collapse, I: Scalar and gravitational perturbations. Phys. Rev. D [$\underline{5}$]{}, 2419; Nonspherical perturbations of relativistic gravitational collapse, II: Integer-spin, zero-rest-mass fields. Phys. Rev. D [$\underline{5}$]{}, 2439.
Robinson, D.C. (1977). A simple proof of the generalization of Israel’s theorem. Gen. Rel. Grav. [$\underline{8}$]{}, 695. Starobinsky, A.A. Spectrum of relict gravitational radiation and the early state of the universe. JETP Lett. [$\underline{30}$]{}, 682.
--- abstract: | We use a variant of Salikhov’s ingenious proof that the irrationality measure of $\pi$ is at most $7.606308\dots$ to prove that, in fact, it is at most $7.103205334137\dots$.\ <span style="font-variant:small-caps;">Accompanying Maple package</span>. While this article has a fully rigorous human-made and human-readable proof of the claim in the title, it was *discovered* thanks to the Maple package `SALIKOHVpi.txt` available from\ <http://sites.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/pimeas.html>. address: - 'Department of Mathematics, Rutgers University (New Brunswick), Hill Center-Busch Campus, 110 Frelinghuysen Rd., Piscataway, NJ 08854-8019, USA' - 'Department of Mathematics, IMAPP, Radboud University, PO Box 9010, 6500 GL Nijmegen, Netherlands' author: - Doron Zeilberger - Wadim Zudilin date: '13 December 2019. *Revised*: 7 January 2020' title: | The irrationality measure of $\boldsymbol\pi$\ is at most $\mathbf{7.103205334137\dots}$ --- Every number that is not rational (a quotient of integers) is *irrational*, but not all irrational numbers are born equal. To measure ‘how irrational’ a given number $x$ is, we define (see [@Wei]) the *irrationality measure* $\mu$ (also called the *irrationality exponent*) as the smallest number $\mu$ such that $$\biggl| x- \frac{p}{q}\biggr| > \frac{1}{q^{\mu +\epsilon}}$$ holds for any $\epsilon>0$ and all integers $p$ and $q$ with sufficiently large $q$. It is not hard to see that the irrationality measure of $e$ is $2$, but the exact irrationality measure of $\pi$ is unknown. It became a **competitive sport** to find lower and lower upper bounds for the irrationality measure of $\pi$. The first upper bound, of $42$, was proved in 1953 by Kurt Mahler [@Mah53]. This record has been subsequently improved by Maurice Mignotte, Gregory Chudnovsky, and, in three better-and-better articles, by Masayoshi Hata (see the references in [@Hat93; @Sal10]).
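As a quick illustration of the definition (a numerical sketch, not part of the paper’s argument): the continued-fraction convergents $p/q$ of $\pi$ each satisfy $|\pi-p/q|<1/q^2$, which is the baseline that any upper bound on $\mu$ competes against.

```python
from fractions import Fraction
import math

# Continued-fraction convergents of pi (float precision is fine for a few terms).
# Each convergent p/q satisfies |pi - p/q| < 1/q**2.
def convergents(x, k):
    res = []
    p0, q0, p1, q1 = 0, 1, 1, 0   # seeds for the standard recurrence
    for _ in range(k):
        a = math.floor(x)
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        res.append(Fraction(p1, q1))
        x = 1.0 / (x - a)
    return res

cs = convergents(math.pi, 5)      # 3, 22/7, 333/106, 355/113, 103993/33102
for c in cs:
    assert abs(math.pi - c.numerator / c.denominator) < 1.0 / c.denominator ** 2
```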
The current “world record” is due to Vladislav Khasanovich Salikhov, who proved the upper bound of $7.606308$. This was announced [@Sal08] in 2008 and published [@Sal10] in 2010. In this article we tweak Salikhov’s method to beat his more-than-ten-year-old record and set a new world record of $7.103205334137\dots$. The aim of our paper is not *just* to state and prove yet another record that would most likely be broken again sooner or later (we hope not that soon, unless it is by ourselves…), but also to explain our “experimental mathematics” methodology that pointed the way to the ultimate human-generated formal proof, to be given in Part \[partII\]. We also describe a fully rigorous, and fully computer-generated, proof of a coarser upper bound that is much better than many of the previous world records. This will be done in Part \[partI\]. Readers who are not interested in the process of discovery, or computer proofs, can go straight to Part \[partII\], which is a self-contained human-generated and human-readable proof. We are grateful to Vladislav Salikhov for pointing out a mistake in the previous version of our Lemma \[lem2\] below. Fixing the gap required us to employ new techniques, so that in the end this manuscript is more than just a tweaking of Salikhov’s construction in [@Sal10]. \[partI\] General strategy {#general-strategy .unnumbered} ---------------- A good way to gain immortality, and become a *famous* [**person**]{}, is to be the first one to prove that a *famous* [**constant**]{}, let’s call it $x$, is irrational.
One way to achieve this is to come up with two sequences of positive [**integers**]{} $\{a_n\}$ and $\{b_n\}$, and a [**positive**]{}, explicit real number $\delta$ such that there is a constant $C$, independent of $n$, such that, for all $n>0$, $$\biggl|x- \frac{a_n}{b_n}\biggr| \le \frac{C}{b_n^{1+\delta}}.$$ This immediately implies the irrationality of $x$ and at the same time establishes an upper bound, namely $1+1/\delta$, for the irrationality measure of $x$. This is exactly how, in 1978, the $64$-year old Roger Apéry became immortal by doing the above with $x=\zeta(3)$ (i.e., $\sum_{n=1}^{\infty} n^{-3}$); see Alf van der Poorten’s classic exposition [@vdP79]. Shortly after, Frits Beukers [@Beu79] gave a much simpler rendition of Apéry’s construction by introducing a certain explicit triple integral $$I(n)= \int_0^1 \!\!\! \int_0^1 \!\!\! \int_0^1 \biggl( \frac{x(1-x)y(1-y)z(1-z)}{1-(1-xy)z} \biggr)^n \frac{{{\mathrm d}}x\,{{\mathrm d}}y\,{{\mathrm d}}z}{1-(1-xy)z},$$ and pointing out that - $I(n)$ is small and can be explicitly bounded, - $I(n)=A(n) +B(n) \zeta(3)$ for certain sequences of rational numbers $A(n),B(n)$ that can be explicitly bounded, and - $A(n){\operatorname{lcm}}(1,2, \dots, n)^3$ and $B(n)$ are integers. Since, thanks to the Prime Number Theorem, ${\operatorname{lcm}}(1,2,\dots,n)$ grows like $e^{n+o(n)}$ as $n\to\infty$, everything followed. Shortly after, Krishna Alladi and Michael Robinson [@AR79] used one-dimensional analogs to reprove the irrationality of $\log 2$, and established an upper bound of $4.63$ for its irrationality measure (subsequently improved, see [@Wei]) by considering the simple integral $$I(n)= \int_0^1 \biggl( \frac{x(1-x)}{1+x} \biggr)^n \,\frac{{{\mathrm d}}x}{1+x}.$$ Our forthcoming manuscript [@ZZ20] is dedicated to further exploration of this theme.
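The mechanism is easy to see numerically on the Alladi–Robinson integral (a sketch, not the actual derivation of the $4.63$ bound): the integrand is at most $\max_{[0,1]} x(1-x)/(1+x) = (\sqrt2-1)^2 = 3-2\sqrt2 \approx 0.1716$, so $I(n)$ decays geometrically at essentially that rate, and it is the interplay of this decay with the $e^{n+o(n)}$ growth of ${\operatorname{lcm}}(1,\dots,n)$ that produces the measure bound.

```python
# Observe the geometric decay of the Alladi-Robinson integral
#   I(n) = int_0^1 (x(1-x)/(1+x))**n dx/(1+x)
# by plain composite Simpson quadrature (the integrand is smooth on [0, 1]).
def I(n, steps=20000):
    h = 1.0 / steps
    f = lambda x: (x * (1.0 - x) / (1.0 + x)) ** n / (1.0 + x)
    s = f(0.0) + f(1.0)
    for k in range(1, steps):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3.0

rate = 3.0 - 2.0 * 2.0 ** 0.5     # = (sqrt(2)-1)**2, the integrand's maximum
ratios = [I(n + 1) / I(n) for n in (20, 40, 80)]
# successive ratios approach `rate` from below, confirming I(n) = rate**(n+o(n))
```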
An experimental mathematics redux of Salikhov’s approach {#an-experimental-mathematics-redux-of-salikhovs-approach .unnumbered} -------------------------------------------------------- Salikhov [@Sal10] essentially uses the same strategy, but with the far more complicated integral $$I(n)=-i\int_{4-2i}^{4+2i}\biggl(\frac{(x-4+2i)^6(x-4-2i)^6(x-5)^6(x-6+2i)^6(x-6-2i)^6} {x^{10} (x-10)^{10}}\biggr)^n\,\frac{{{\mathrm d}}x}{x(x-10)}.$$ He then used [**partial fractions**]{} to claim that $$I(n)=A(n) + B(n) \pi,$$ for some sequences $\{A(n)\}$, $\{B(n)\}$ of *rational numbers*. Using the **saddle-point method**, he bounded $I(n)$, $A(n)$, and $B(n)$. He then proved that if one sets $$A'(n) = {\operatorname{lcm}}(1,2,\dots, 10n) \biggl(\frac{25}{32}\biggr)^nA(n) \quad\text{and}\quad B'(n) = {\operatorname{lcm}}(1,2,\dots, 10n) \biggl(\frac{25}{32}\biggr)^nB(n),$$ then $A'(n)$ and $B'(n)$ are [**integer sequences**]{}, and defining $$I'(n) = {\operatorname{lcm}}(1,2,\dots,10n) \biggl(\frac{25}{32}\biggr)^n I(n),$$ using ${\operatorname{lcm}}(1,2,\dots,10n)=e^{10n+o(n)}$, one can explicitly bound $A'(n),B'(n),I'(n)$; with $I'(n)$ being small and $B'(n)$ being big, one gets a crude upper bound for the irrationality measure, using the fact that $-A'(n)/B'(n)$ approximates $\pi$. Finally, the hard part was to come up with ‘additional saving’, a sequence of integers $F(n)$, such that $A''(n)=A'(n)/F(n)$ and $B''(n)=B'(n)/F(n)$ are still integers. Setting $I''(n)=I'(n)/F(n)$ he squeezed more juice out of it, getting a larger $\delta$ and hence a smaller irrationality measure $1+1/\delta$, setting the current record of $7.606308\dots$. Our approach is different. We do not use partial fractions, but rather the fact that, thanks to the Almkvist–Zeilberger algorithm [@AZ90], there exists a third-order linear recurrence equation of the form $$p_0(n) I(n)+ p_1(n) I(n+1) + p_2(n) I(n+2) + p_3(n) I(n+3) = 0 ,$$ for some explicit polynomials $p_0(n), p_1(n), p_2(n), p_3(n)$.
To save space, we do not reproduce it here, but refer the reader to the following webpage:\ <http://sites.math.rutgers.edu/~zeilberg/tokhniot/oSALIKHOVpi2.txt>. That webpage gives a new, computer-generated proof of the crude upper bound, using only the recurrence and the so-called Poincaré lemma, which gives the asymptotics of $A(n)$, $B(n)$, and $I(n)$, from which it is immediate to bound $A'(n)$, $B'(n)$, and $I'(n)$. The only non-rigorous part in our approach is the study of the extra divisor $F(n)$, whose growth we estimate empirically. For details see the above-mentioned computer-generated article. Tweaking Salikhov’s integral. Looking where to dig {#tweaking-salikhovs-integral.-looking-where-to-dig .unnumbered} -------------------------------------------------- Looking at Salikhov’s integral, it is natural to consider the more general integral $$I_{A,B}(n) = -i\int_{4-2i}^{4+2i}\frac{(x-4+2i)^{2An}(x-4-2i)^{2An}(x-5)^{2An}(x-6+2i)^{2An}(x-6-2i)^{2An}} {x^{2Bn+1} (x-10)^{2Bn+1}}\,{{\mathrm d}}x,$$ where Salikhov’s integral is the special case $I_{3,5}(n)$. Perhaps we can do better? But before we invest time and energy trying out many choices of $A$ and $B$, it makes sense to do things *empirically*: crank out, say, 300 terms of the examined sequence and see whether they yield good ‘deltas’. Alas, even Maple and Mathematica will start to complain if we use the definition for, say, $n=300$. Luckily, for each specific $A$ and $B$, Shalosh B. Ekhad can quickly use the Almkvist–Zeilberger algorithm [@AZ90] to crank out many terms, and thereby get very good estimates for the ‘deltas’. This initial *reconnaissance* is very fast and gives you an indication of *where to dig*. This is implemented in procedure `BestAB` in the Maple package `SALIKOHVpi.txt` mentioned above. Typing `BestAB(10,300);` gives the following computer-generated article:\ <http://sites.math.rutgers.edu/~zeilberg/tokhniot/oSALIKHOVpi4.txt>.
Most of the choices of $(A,B)$ give negative, useless, deltas, but—[**surprise!**]{}—the choice of $A=2$, $B=3$ yielded that the smallest $\delta$ in the range $290 \leq n \leq 300$ was $0.16605428729395818514$. This beats the analogous value for the $A=3$, $B=5$ case, which equals $0.15727140930557009691$. The ‘bronze medal’ was won by $A=5$, $B=8$, which was almost as good: $0.15701995819256081077$; followed by $A=8$, $B=13$, which gave the respectable $0.15586354092162189848$. Next in line was a non-Fibonacci $A=7$, $B=10$, which placed fifth, with $0.12451550531454231901$. For all the other ‘empirical deltas’ see the above output file. Once we found out that $A=2$, $B=3$ was a good gamble, we had another pleasant surprise. We can replace $n$ by $n/2$ and still get combinations of $1$ and $\pi$ (in the original case $A=3$, $B=5$ of Salikhov, the odd indices $n$ give combinations of $1$ and $\arctan(1/7)$). This simplifies the recurrence, and a fully rigorous proof of the cruder upper bound of $10.747747465671804677\dots$ can be found here:\ <http://sites.math.rutgers.edu/~zeilberg/tokhniot/oSALIKHOVpi3.txt>. In order to get the more refined upper bound, we had to resort to non-rigorous estimates. Luckily it was possible to make everything fully rigorous, and this brings us to Part \[partII\]. \[partII\] Test bunny {#test-bunny .unnumbered} ---------- For $n=0,1,2,\dots$, our integrals in question are $$\begin{aligned} I_n &=5i\int_{4-2i}^{4+2i}\frac{(x-5)^{2n}(x-4+2i)^{2n}(x-4-2i)^{2n}(x-6+2i)^{2n}(x-6-2i)^{2n}} {x^{3n+1}(x-10)^{3n+1}}\,{{\mathrm d}}x \nonumber\\ &=i(-1)^{n+1}\int_{-1-2i}^{-1+2i}\frac{5x^{2n}(x+1+2i)^{2n}(x+1-2i)^{2n}(x-1+2i)^{2n}(x-1-2i)^{2n}} {(5+x)^{3n+1}(5-x)^{3n+1}}\,{{\mathrm d}}x. \label{eq:1}\end{aligned}$$ These are from the winning family in Part \[partI\].
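Before the arithmetic, a numerical sanity check of the second representation in \[eq:1\] is reassuring (a sketch, not part of the proofs): for $n=0$ the logarithms combine so that the integral evaluates in closed form to $\pi/4$, and the values shrink rapidly with $n$.

```python
import math

# Sanity check of the second integral representation of I_n: integrate along the
# straight segment x = -1 + 2it, t in [-1, 1], with Simpson's rule over C.
# For n = 0 the integrand is 5/((5+x)(5-x)) and the integral evaluates to pi/4.
def I(n, steps=4000):
    def f(t):
        x = complex(-1.0, 2.0 * t)
        num = 5.0 * x ** (2 * n) * (x ** 4 + 6 * x ** 2 + 25) ** (2 * n)
        den = (5 + x) ** (3 * n + 1) * (5 - x) ** (3 * n + 1)
        return num / den
    h = 2.0 / steps
    s = f(-1.0) + f(1.0)
    for k in range(1, steps):
        s += (4 if k % 2 else 2) * f(-1.0 + k * h)
    integral = (s * h / 3.0) * 2j          # dx = 2i dt along the segment
    return 1j * (-1) ** (n + 1) * integral

I0, I1 = I(0), I(1)
# I0 = pi/4; every I_n is real (being a_n + b_n*pi) and |I_n| shrinks with n
```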
Arithmetic {#arithmetic .unnumbered} ---------- The integrand $$\begin{aligned} R(x) &=\frac{5x^{2n}(x+1+2i)^{2n}(x+1-2i)^{2n}(x-1+2i)^{2n}(x-1-2i)^{2n}} {(5+x)^{3n+1}(5-x)^{3n+1}} \nonumber\\ &=\frac{5x^{2n}(x^4+6x^2+25)^{2n}}{(5+x)^{3n+1}(5-x)^{3n+1}} \label{eq:2}\end{aligned}$$ possesses the symmetry $R(-x)=R(x)$ and therefore can be written as $$R(x)=P(x)+\sum_{j=0}^{3n}\biggl(\frac{A_j}{(5+x)^{j+1}}+\frac{A_j}{(5-x)^{j+1}}\biggr) \label{eq:3}$$ for some rational $A_j$ and a polynomial $P(x)\in\mathbb Z[x^2]$ of degree $4n-2$. \[lem1\] The coefficients $A_j$ in the partial-fraction expansion satisfy $$2^{-\lfloor(5n+3j)/2\rfloor+1}5^{-j}A_j\in\mathbb Z \quad\text{for}\; j=0,1,\dots,3n. \label{eq:inc}$$ In particular, they are integers. To compute $A_j$, introduce linear operators $$D_m\colon f(x)\mapsto\frac1{m!}\,\frac{{{\mathrm d}}^mf(x)}{{{\mathrm d}}x^m}\bigg|_{x=-5}.$$ Then with the help of Leibniz’s formula we deduce $$\begin{aligned} A_j &=D_{3n-j}\bigl((x+5)^{3n+1}R(x)\bigr) \displaybreak[2]\nonumber\\ &=5\sum_{\substack{m_0,m_1,\dots,m_5\ge0 \\ m_1,\dots,m_5\le 2n \\ m_0+m_1+\dots+m_5=3n-j}} D_{m_0}(5-x)^{-3n-1} D_{m_1}x^{2n}D_{m_2}(x+1+2i)^{2n}D_{m_3}(x+1-2i)^{2n} \nonumber\\[-21pt] &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\times D_{m_4}(x-1+2i)^{2n}D_{m_5}(x-1-2i)^{2n} \displaybreak[2]\nonumber\\ &=5\sum_{{{\boldsymbol m}}\in{{\mathcal M}}_j}(-1)^{m_1+\dots+m_5}T({{\boldsymbol m}})10^{-3n-1-m_0}5^{2n-m_1} (4-2i)^{2n-m_2}(4+2i)^{2n-m_3} \nonumber\\[-6pt] &\qquad\qquad\qquad\qquad\qquad\times (6-2i)^{2n-m_4}(6+2i)^{2n-m_5} \displaybreak[2]\nonumber\\ &=\sum_{{{\boldsymbol m}}\in{{\mathcal M}}_j}(-1)^{m_1+\dots+m_5}T({{\boldsymbol m}}) 2^{4n-1+j+m_1}(1-i)^{-m_4}(1+i)^{-m_5} \nonumber\\[-6pt] &\qquad\qquad\qquad\qquad\times 5^j(2+i)^{m_2+m_5}(2-i)^{m_3+m_4} \label{eq:A_j}\end{aligned}$$ for $j=0,\dots,3n$, where the summation is over the multi-indices ${{\boldsymbol m}}=(m_0,\dots,m_5)$ from the set $$\begin{aligned} {{\mathcal M}}_j 
=\{(m_0,m_1,\dots,m_5):\, & m_0,m_1,\dots,m_5\ge0;\ m_1,\dots,m_5\le 2n;\ \nonumber\\ &\quad m_0+m_1+\dots+m_5=3n-j\} \subset\mathbb Z_{\ge0}^6 \nonumber\end{aligned}$$ and $$T({{\boldsymbol m}})=T(m_0,m_1,\dots,m_5) =\binom{3n+m_0}{m_0}\prod_{\ell=1}^5\binom{2n}{m_\ell}\in\mathbb Z. \nonumber$$ Now $m_4+m_5\le3n-j$ and $(1\pm i)^2=\pm2i$; hence $$2^{\lceil(3n-j)/2\rceil}\times(1-i)^{-m_4}(1+i)^{-m_5}\in\mathbb Z[i]$$ and $$2^{-\lfloor(5n+3j)/2\rfloor+1}\times2^{4n-1+j+m_1}(1-i)^{-m_4}(1+i)^{-m_5}\in\mathbb Z[i].$$ Therefore, $2^{-\lfloor(5n+3j)/2\rfloor+1}5^{-j}A_j\in\mathbb Z[i]$, and the result follows using the fact that $A_j\in\mathbb Q$. Formula \[eq:A\_j\] for the coefficients $A_j$ makes sense for *any* integer $j\le3n$; it generates the coefficients in the Laurent series expansion of $R(x)$ at $x=-5$. More precisely, $$R(x) =\sum_{k=-3n}^\infty A_{-k}(x+5)^{k-1} =\sum_{j=0}^{3n}\frac{A_j}{(x+5)^{j+1}} +\sum_{k=1}^\infty A_{-k}(x+5)^{k-1}. $$ Note that the $A_j$ produced by formula \[eq:A\_j\] are not necessarily integral for *negative* $j$, but at least they satisfy $10^{-j}A_j\in\mathbb Z$ for $j=-1,-2,\dots,-(4n-1)$ on the grounds of that formula; and we also have $10^{-j}A_j\in\mathbb Z$ for $j=0,1,2,\dots,3n$ in accordance with Lemma \[lem1\]. Furthermore, $$\sum_{j=0}^{3n}\frac{A_j}{(5-x)^{j+1}} =\sum_{j=0}^{3n}\frac{A_j}{(10-(x+5))^{j+1}} =\sum_{j=0}^{3n}A_j\sum_{k=1}^\infty\binom{j+k-1}j\frac{(x+5)^{k-1}}{10^{j+k}};$$ comparing the last two expansions with \[eq:3\] we find that $$\begin{aligned} P(x) &=R(x)-\sum_{j=0}^{3n}\biggl(\frac{A_j}{(5+x)^{j+1}}+\frac{A_j}{(5-x)^{j+1}}\biggr) \\ &=\sum_{k=1}^\infty\biggl(A_{-k} -\sum_{j=0}^{3n}\binom{j+k-1}j\frac{A_j}{10^{j+k}}\biggr)(x+5)^{k-1}.\end{aligned}$$ On the other hand, $P(x)$ is a *polynomial* of degree $4n-2$, hence $$P(x) =\sum_{k=1}^{4n-1}\biggl(A_{-k} -\sum_{j=0}^{3n}\binom{j+k-1}j\frac{A_j}{10^{j+k}}\biggr)(x+5)^{k-1}.
\label{eq:x+5}$$ \[lem2\] Any prime from the set $${{\mathcal P}}_n=\biggl\{p>\max\{5,\sqrt{3n}\}:\frac12\le\Bigl\{\frac np\Bigr\}<\frac23\biggr\} \subset\{p\; \text{prime}:5<p\le2n\}$$ satisfies the following property: if $p\mid j$ for $j\in\{-4n+1,-4n+2,\dots,3n\}$, then $A_j\equiv0\pmod p$ (in other words, $p\mid 10^{-j}A_j$). In order to establish the claim, we will cast the coefficients $A_j$ in \[eq:A\_j\] differently. Observe that $$\begin{aligned} R(x) &=\frac{5x^{2n}(x^2+(3-4i))^{2n}(x^2+(3+4i))^{2n}}{(25-x^2)^{3n+1}} \\ &=5\sum_{n_1,n_2\ge0}\binom{2n}{n_1}\binom{2n}{n_2}(3-4i)^{2n-n_1}(3+4i)^{2n-n_2} \frac{x^{2(n+n_1+n_2)}}{(25-x^2)^{3n+1}}\end{aligned}$$ and $$\begin{aligned} \frac{x^{2m}}{(25-x^2)^{3n+1}} &=\frac{(5-(x+5))^{2m}}{(x+5)^{3n+1}(10-(x+5))^{3n+1}} \\ &=\frac{5^{2m}10^{-3n-1}}{(x+5)^{3n+1}} \sum_{k=0}^\infty\frac{(x+5)^k}{10^k}\sum_{n_0\ge0}(-2)^{n_0}\binom{2m}{n_0}\binom{3n+k-n_0}{3n},\end{aligned}$$ hence $$\begin{aligned} A_j &=\sum_{n_1,n_2\ge0}(3-4i)^{2n-n_1}(3+4i)^{2n-n_2}5^{2n+2n_1+2n_2+1}10^{-(6n-j)-1} \\ &\qquad\times \binom{2n}{n_1}\binom{2n}{n_2}Z(n,n_1+n_2,j),\end{aligned}$$ where $$Z(n,m,j)=\sum_{n_0\ge0}(-2)^{n_0}\binom{2n+2m}{n_0}\binom{6n-j-n_0}{3n}.$$ This means that our lemma is a consequence of the following divisibility property: If a prime $p\in{{\mathcal P}}_n$ divides $j$, then it also divides $$\hat T(n,n_1,n_2)=\binom{2n}{n_1}\binom{2n}{n_2}Z(n,n_1+n_2,j)$$ for any $n_1,n_2\ge0$. From now on, we will repeatedly use the fact that the $p$-adic order of $N!$ satisfies ${\operatorname{ord}}_pN!=\lfloor N/p\rfloor=N/p-\{N/p\}$ when $p>\sqrt N$. In particular, $${\operatorname{ord}}_p\binom{2n}{n_\ell}=\lfloor2\omega\rfloor-\lfloor2\omega-\omega_\ell\rfloor-\lfloor\omega_\ell\rfloor =\lfloor2\omega\rfloor-\lfloor2\omega-\omega_\ell\rfloor \quad\text{for}\;\ell=1,2, \label{eq:pord}$$ where the fractional parts $\omega=\{n/p\}$, $\omega_1=\{n_1/p\}$ and $\omega_2=\{n_2/p\}$ all belong to the interval $[0,1)$.
Since $p\in{{\mathcal P}}_n$, we have $\omega\in[\frac12,\frac23)$, so that $\lfloor2\omega\rfloor=\lfloor3\omega\rfloor=1$. If at least one of the $p$-adic orders in \[eq:pord\] is positive then immediately ${\operatorname{ord}}_p\hat T(n,n_1,n_2)\ge1$, establishing the required divisibility; therefore, it remains to analyze the situation assuming $\lfloor2\omega-\omega_\ell\rfloor=\lfloor2\omega\rfloor=1$ for $\ell=1,2$; in other words, assuming $$2\omega-\omega_1\ge1 \quad\text{and}\quad 2\omega-\omega_2\ge1. $$ The binomial sums $Z(n,m,j)$ can be realized as a terminating $_2F_1$ hypergeometric function, to which several classical transformations can be applied. For example, it can be transformed into $$\begin{aligned} Z(n,m,j) &=\sum_{n_0\ge0}(-1)^{n_0}\binom{2n+2m}{n_0}\binom{6n-2(n+m)-j}{3n-j-n_0} \\ &=(-1)^{n+m}\sum_{k\in\mathbb Z}(-1)^k\binom{2n+2m}{n+m+k}\binom{4n-2m-j}{2n-m-k}.\end{aligned}$$ Though the expression does not possess a closed form in general, its particular instance $j=0$ reduces to the *super Catalan numbers* $$\frac{(2N)!\,(2M)!}{N!\,(N+M)!\,M!} =\sum_{k=-\infty}^{\infty}(-1)^k\binom{2N}{N+k}\binom{2M}{M+k};$$ see [@Ges92], also for the historical reference to this identity, due to K. von Szily (1894). The argument in [@Ges92 Sect.
6] shows that the more general sum $$\sum_{k=-\infty}^{\infty}(-1)^k\binom{2N}{N+k}\binom{2M-j}{M+k}$$ is the coefficient of $t^{2N}$ in the polynomial $$(-1)^N\frac{(2N)!\,(2M-j)!}{(N+M)!\,(N+M-j)!}\,(1+t)^{N+M}(1-t)^{N+M-j}.$$ In our situation $N=n+m$, $M=2n-m$ with $m=n_1+n_2$, the factorial-ratio factor $$\frac{(2N)!\,(2M-j)!}{(N+M)!\,(N+M-j)!} =\frac{(2n+2n_1+2n_2)!\,(4n-2n_1-2n_2-j)!}{(3n)!\,(3n-j)!}$$ has the nonnegative $p$-adic order $$\begin{aligned} & \lfloor2\omega+2\omega_1+2\omega_2\rfloor +\lfloor4\omega-2\omega_1-2\omega_2-j/p\rfloor -\lfloor3\omega\rfloor-\lfloor3\omega-j/p\rfloor \\ &\quad =\lfloor2\omega+2\omega_1+2\omega_2\rfloor +\lfloor4\omega-2\omega_1-2\omega_2\rfloor -2\lfloor3\omega\rfloor\end{aligned}$$ (we use $j/p\in\mathbb Z$), because $\lfloor3\omega\rfloor=1$, $$2\omega+2\omega_1+2\omega_2\ge2\omega\ge1 \quad\text{and}\quad 4\omega-2\omega_1-2\omega_2\ge4\omega-4(2\omega-1)=4(1-\omega)>\frac43.$$ Moreover, if this $p$-adic order is *positive* then $Z(n,n_1+n_2,j)$ is divisible by $p$, hence the divisibility of $\hat T(n,n_1,n_2)$ follows. Thus, we are left with the situation when this order is zero, $$\lfloor2\omega+2\omega_1+2\omega_2\rfloor=\lfloor4\omega-2\omega_1-2\omega_2\rfloor=1,$$ meaning that $$2\omega+2\omega_1+2\omega_2<2 \quad\text{and}\quad 4\omega-2\omega_1-2\omega_2<2. \label{eq:ineq2}$$ We have to show that the coefficient of $t^{2N}$ in $(1+t)^{N+M}(1-t)^{N+M-j}$ is divisible by $p$. Denoting $r=-j/p\in\mathbb Z$ and using the “Freshman’s Dream Identity” $(1-t)^p\equiv1-t^p\pmod p$ in the ring $\mathbb Z[[t]]$, we find out that $$\begin{aligned} & (1+t)^{N+M}(1-t)^{N+M-j} =(1-t^2)^{N+M}(1-t)^{-j} \\ &\quad \equiv(1-t^2)^{N+M}(1-t^p)^r =\sum_{k_1\ge0}(-1)^{k_1}\binom{N+M}{k_1}t^{2k_1} \sum_{k_2\ge0}(-1)^{k_2}\binom r{k_2}t^{pk_2},\end{aligned}$$ hence the coefficient of $t^{2N}$ is congruent to $$\sum_{k=0}^{\lfloor N/p\rfloor}(-1)^{k+N}\binom{N+M}{N-kp}\binom r{2k}$$ modulo $p$. 
The $p$-adic order of the *nonzero* binomial coefficients $\binom{N+M}{N-kp}$ does not depend on $k$: $$\begin{aligned} {\operatorname{ord}}_p\binom{N+M}{N-kp} &=-\biggl\{\frac Np+\frac Mp\biggr\}+\biggl\{\frac Np-k\biggr\}+\biggl\{\frac Mp+k\biggr\} \\ &=-\biggl\{\frac Np+\frac Mp\biggr\}+\biggl\{\frac Np\biggr\}+\biggl\{\frac Mp\biggr\}.\end{aligned}$$ Recalling that $N=n+m$, $M=2n-m$ with $m=n_1+n_2$ the latter quantity reads $${\operatorname{ord}}_p\binom{3n}{n+n_1+n_2} =\lfloor3\omega\rfloor-\lfloor\omega+\omega_1+\omega_2\rfloor-\lfloor2\omega-\omega_1-\omega_2\rfloor=1,$$ where we employed to get $$\lfloor\omega+\omega_1+\omega_2\rfloor =\lfloor2\omega-\omega_1-\omega_2\rfloor =0.$$ This means that all binomial coefficients $\binom{N+M}{N-kp}$ are divisible by $p$, thus completing our proof of the divisibility of $\hat T(n,n_1,n_2)$ by $p$, and of the lemma. An earlier version of the lemma claimed that any prime $p\in{{\mathcal P}}_n$, $p\mid j$ for $j\in\{-4n,-4n+1,\dots,3n\}$, divides $T({{\boldsymbol m}})$ for all ${{\boldsymbol m}}\in{{\mathcal M}}_j$; this would clearly imply the present statement in view of formula . However, the claim about the divisibility properties of $T({{\boldsymbol m}})$ was false. \[lem3\] Define $\Phi=\Phi_n=\prod_{p\in{{\mathcal P}}_n}p$ and $$L_n=\frac{{\operatorname{lcm}}(1,2,\dots,4n)}{\Phi_n}\in\mathbb Z. \nonumber$$ Then $$L_n\times\frac{10^{-j}A_j}j\in\mathbb Z \quad\text{for}\; j\in\{-4n,-4n+1,\dots,3n-1,3n\}, \; j\ne0, \label{eq:inc1}$$ and $\Phi_n^{-1}\times A_0\in\mathbb Z$. Asymptotically, $$\lim_{n\to\infty}\frac{\log\Phi_n}n =\frac{\Gamma'(2/3)}{\Gamma(2/3)}-\frac{\Gamma'(1/2)}{\Gamma(1/2)} =\frac\pi{2\sqrt3}-\log\frac{3\sqrt3}4 =0.64527561\dotsc \label{eq:Phi}$$ (see [@Hat93 Lemma 2.2]). 
Note that $${\operatorname{lcm}}(1,2,\dots,4n)\times\frac1j\in\mathbb Z \quad\text{for}\; j\in\{-4n,-4n+1,\dots,3n\}, \; j\ne0,$$ implying, for all such $j$, $$L_n\cdot\frac1{j/p}\in\mathbb Z \quad\text{if}\; p\mid j, \; p\in{{\mathcal P}}_n. \label{eq:5}$$ On the other hand, it follows from Lemma \[lem2\] that $$\frac{10^{-j}A_j}p\in\mathbb Z \quad\text{if}\; p\mid j, \; p\in{{\mathcal P}}_n. \label{eq:6}$$ Combining \[eq:5\] and \[eq:6\] results in claim \[eq:inc1\]. \[lem4\] Write the polynomial $P(x)\in\mathbb Z[x]$ in the decomposition \[eq:3\] as $$P(x)=\sum_{k=0}^{4n-2}B_k(x+1+2i)^k \qquad\text{with}\quad B_k\in\mathbb Z[i] \quad\text{for}\; k=0,1,\dots,4n-2. \label{eq:P}$$ Then $$2^{-\lfloor5n/2\rfloor+\lceil3k/2\rceil+2}\times B_k\in\mathbb Z[i] \quad\text{for}\; k=0,1,\dots,4n-2. \label{eq:inc2}$$ If $k\ge2n$ then $-\lfloor5n/2\rfloor+\lceil3k/2\rceil+2\ge0$ and the inclusion in \[eq:inc2\] follows from $B_k\in\mathbb Z[i]$. Therefore, we only need to verify \[eq:inc2\] for $k<2n$; since $R(x)$ from \[eq:2\] has a zero of order $2n$ at $x=-1-2i$, we deduce from \[eq:3\] that $$\begin{aligned} B_k &=-\frac1{k!}\,\frac{{{\mathrm d}}^k}{{{\mathrm d}}x^k}\sum_{j=0}^{3n}\biggl(\frac{A_j}{(5+x)^{j+1}}+\frac{A_j}{(5-x)^{j+1}}\biggr)\bigg|_{x=-1-2i} \\ &=-\sum_{j=0}^{3n}(-1)^k\binom{j+k}k\biggl(\frac{A_j}{(5+x)^{j+k+1}}+(-1)^{j+1}\frac{A_j}{(x-5)^{j+k+1}}\biggr)\bigg|_{x=-1-2i} \\ &=-\sum_{j=0}^{3n}\binom{j+k}k\biggl(\frac{(-1)^kA_j}{(2(2-i))^{j+k+1}}+\frac{A_j}{(2(1+i)(2-i))^{j+k+1}}\biggr)\end{aligned}$$ for $k=0,1,\dots,2n-1$. It then follows from \[eq:inc\] that $$2^{-\lfloor5n/2\rfloor+\lceil3k/2\rceil+2}(2-i)^{3n+k+1}\times B_k\in\mathbb Z[i]$$ and, again, we recall $B_k\in\mathbb Z[i]$ to conclude with \[eq:inc2\] for $k<2n$. \[lem5\] For the polynomial $P(x)$ in the decomposition \[eq:3\], we have $$2^{-\lfloor5n/2\rfloor}L_n\times i\int_{-1-2i}^{-1+2i}P(x)\,{{\mathrm d}}x\in\mathbb Z.
\label{eq:A}$$ We first compute the integral using representation : $$\begin{aligned} i\int_{-1-2i}^{-1+2i}P(x)\,{{\mathrm d}}x &=i\sum_{k=0}^{4n-2}B_k\int_{-1-2i}^{-1+2i}(x+1+2i)^k\,{{\mathrm d}}x \displaybreak[2]\\ &=i\sum_{k=0}^{4n-2}\frac{B_k}{k+1}(4i)^{k+1} =-\sum_{k=0}^{4n-2}\frac{2^{2k+2}B_k}{k+1}\,i^k\end{aligned}$$ implying $$2^{-\lfloor5n/2\rfloor}{\operatorname{lcm}}(1,2,\dots,4n)\times i\int_{-1-2i}^{-1+2i}P(x)\,{{\mathrm d}}x\in\mathbb Z[i] \label{eq:A1}$$ on the basis of Lemma \[lem4\]. On the other hand, if representation is applied then $$\begin{aligned} i\int_{-1-2i}^{-1+2i}P(x)\,{{\mathrm d}}x &=i\sum_{k=1}^{4n-1}\biggl(A_{-k}-\sum_{j=0}^{3n}\binom{j+k-1}j\frac{A_j}{10^{j+k}}\biggr) \int_{-1-2i}^{-1+2i}(x+5)^{k-1}\,{{\mathrm d}}x \\ &=i\sum_{k=1}^{4n-1}\biggl(\frac{A_{-k}}k -\sum_{j=0}^{3n}\binom{j+k-1}j\frac{A_j}{k\,10^{j+k}}\biggr) \bigl((4+2i)^k-(4-2i)^k\bigr) \displaybreak[2]\\ &=\sum_{k=1}^{4n-1}\biggl(\frac{A_{-k}}k-\frac{A_0}k\,\frac1{10^k} -\sum_{j=1}^{3n}\binom{j+k-1}{j-1}\frac{A_j}{j}\,\frac{1}{10^{j+k}}\biggr) \\ &\qquad\times 2^{k+1}\sum_{\substack{\ell=0\\\ell\;\text{odd}}}^k\binom k\ell(-1)^{(\ell+1)/2}2^{k-\ell}\end{aligned}$$ is a rational number satisfying $$\frac{{\operatorname{lcm}}(1,2,\dots,4n)}{\Phi_n}\times i\int_{-1-2i}^{-1+2i}P(x)\,{{\mathrm d}}x\in10^{-4n}\mathbb Z \label{eq:A2}$$ on the basis of Lemma \[lem3\]. Finally, the two inclusions and combine into . \[lem6\] For the partial-fraction part in (without the $j=0$ term), we have $$2^{-\lfloor5n/2\rfloor+1}L_n\times i\int_{-1-2i}^{-1+2i} \sum_{j=1}^{3n}A_j\biggl(\frac1{(5+x)^{j+1}}+\frac1{(5-x)^{j+1}}\biggr)\,{{\mathrm d}}x\in\mathbb Z. 
\nonumber$$ This follows from $$\begin{aligned} & i\sum_{j=1}^{3n}A_j\int_{-1-2i}^{-1+2i}\biggl(\frac1{(5+x)^{j+1}}+\frac1{(5-x)^{j+1}}\biggr)\,{{\mathrm d}}x \\ &\quad =i\sum_{j=1}^{3n}\frac{A_j}j \biggl(\frac1{(4-2i)^j}-\frac1{(4+2i)^j}-\frac1{(6+2i)^j}+\frac1{(6-2i)^j}\biggr) \\ &\quad =i\sum_{j=1}^{3n}\frac{A_j}j \biggl(\frac{(2+i)^j}{2^j5^j}-\frac{(2-i)^j}{2^j5^j}-\frac{(2+i)^j}{2^j(1+i)^j5^j}+\frac{(2-i)^j}{2^j(1-i)^j5^j}\biggr) \in\mathbb Q\end{aligned}$$ and the inclusions of Lemmas \[lem1\] and \[lem3\]. Lemma \[lem1\] and the integrality of $L_n$ imply that $2^{-\lfloor5n/2\rfloor+1}L_n\times A_0\in\mathbb Z$; together with the calculation $$\begin{aligned} \int_{-1-2i}^{-1+2i}\biggl(\frac1{5+x}+\frac1{5-x}\biggr){{\mathrm d}}x &=\log(4+2i)-\log(4-2i)-\log(6-2i)+\log(6+2i) \\ &=\frac{\pi i}2\end{aligned}$$ and Lemmas \[lem5\], \[lem6\] we are thus led to the following statement. \[prop1\] For the integrals $I_n$ in , we have $$2^{-\lfloor5n/2\rfloor+2}L_n\times I_n\in\mathbb Z+\mathbb Z\pi. \nonumber$$ Asymptotics {#asymptotics .unnumbered} ----------- By now we have legally settled that $I_n=a_n+b_n\pi$ for some *rational* $a_n$ and $b_n$. \[prop2\] The asymptotics of the integrals $I_n$ and the coefficients $b_n$ in the representation $I_n=a_n+b_n\pi$ is as follows: $$\limsup_{n\to\infty}|I_n|^{1/n}=|N_1|=0.029458495928\dots \quad\text{and}\quad \lim_{n\to\infty}b_n^{1/n}=N_3=21851.691396\dots,$$ where $$N_{1,2}=0.02930189\hdots \pm i\,0.00303351\dots, \quad N_3=21851.691396\dots$$ are the zeros of the polynomial $$108N^3-2359989N^2+138304N-2048. \label{eq:ind}$$ This rigorously follows from the Poincaré lemma supplied by the rigorously produced—thanks to the Almkvist–Zeilberger method [@AZ90]—difference equation for the integrals $I_n$ (hence also for $a_n,b_n$), whose indicial polynomial (more precisely: the indicial polynomial of its ‘constant-coefficients approximation’) is exactly .
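The quoted zeros, and the irrationality-measure bound assembled from them in the next subsection, are easy to reproduce numerically; a sketch (illustrative only, not part of the proof, using `numpy` for the root-finding):

```python
import numpy as np
from math import log, pi, sqrt

# Zeros of the indicial polynomial 108*N^3 - 2359989*N^2 + 138304*N - 2048.
roots = np.roots([108, -2359989, 138304, -2048])
N3 = max(r.real for r in roots if abs(r.imag) < 1e-9)   # the real zero
N1 = next(r for r in roots if abs(r.imag) > 1e-9)       # one of the complex pair

# Asymptotic constant of 2^(-5n/2) * L_n with L_n = lcm(1,...,4n)/Phi_n:
# log L_n / n -> 4 - (pi/(2*sqrt(3)) - log(3*sqrt(3)/4)) by the PNT.
c = 4 - (pi / (2 * sqrt(3)) - log(3 * sqrt(3) / 4)) - 2.5 * log(2)

decay = -(log(abs(N1)) + c)      # -limsup log|I_n'|/n,  approx  1.90291648...
growth = log(N3) + c             #  lim    log b_n'/n,   approx 11.61389004...
mu = 1 + growth / decay          # upper bound on the irrationality measure of pi
print(abs(N1), N3, mu)
```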
Observe that $|I_n|\le1$ follows from integrating over the *line* interval $[-1-2i,-1+2i]$ and trivially bounding the absolute value of the integrand on it. However, those who prefer traditional analytical methods can have fun going through the glorious details of the saddle-point method, at least after the change of variables $y=x^2$ is performed in . For that one deals with the function $$\tilde R(y)=\frac{5g(y)^n}{y-25}, \quad\text{where}\; g(y)=\frac{y(y^2+6y+25)^2}{(y-25)^3},$$ and with the zeros $$y_{1,2}=-1.91975076\hdots \mp i\,1.01250889\dots, \quad y_3=66.33950152\dots$$ of $$\frac{g'(y)}{g(y)}=\frac{2y^3-125y^2-500y-625}{y(y^2+6y+25)(y-25)}.$$ Then $N_j=g(y_j)$ for $j=1,2,3$. The remaining part is performing a suitable deformation of the path in  to pass through the saddle points $\sqrt{y_1}$ and $\sqrt{y_2}$ (with the choice of branch such that the real parts of the roots are negative) and writing a Cauchy integral for $b_n$ over a closed contour passing through the saddle points $\pm\sqrt{y_3}$. World record {#world-record .unnumbered} ------------ It follows from Propositions \[prop1\] and \[prop2\] that the forms $$I_n'=2^{-\lfloor5n/2\rfloor+2}L_nI_n=a_n'+b_n'\pi, \quad\text{where}\; n=0,1,2,\dots,$$ all have integral coefficients $a_n',b_n'$ and the asymptotics $$\begin{aligned} \limsup_{n\to\infty}\frac{\log|I_n'|}n &=\log|N_1|-\frac52\,\log2+4-\frac\pi{2\sqrt3}+\log\frac{3\sqrt3}4 =-1.90291648559998\dots \\ \intertext{and} \lim_{n\to\infty}\frac{\log b_n'}n &=\log N_3-\frac52\,\log2+4-\frac\pi{2\sqrt3}+\log\frac{3\sqrt3}4 =11.613890045331\dots\end{aligned}$$ (the asymptotics of $L_n$ follows from the Prime Number Theorem and ). This implies (see, e.g., [@Sal10 Lemma 1]) that the irrationality measure of $\pi$ is bounded above by $$1+\frac{11.613890045331\dots}{1.90291648559998\dots} =7.10320533413700172750577342281\dotsc.$$ [99]{} <span style="font-variant:small-caps;">K. Alladi</span> and <span style="font-variant:small-caps;">M.L.
Robinson</span>, Legendre polynomials and irrationality, *J. Reine Angew. Math.* **318** (1980), 137–155. <span style="font-variant:small-caps;">G. Almkvist</span> and <span style="font-variant:small-caps;">D. Zeilberger</span>, The method of differentiating under the integral sign, *J. Symbolic Computation* **10** (1990), 571–591;\ <http://www.math.rutgers.edu/~zeilberg/mamarimY/duis.pdf>. <span style="font-variant:small-caps;">F. Beukers</span>, A note on the irrationality of $\zeta(2)$ and $\zeta(3)$, *Bull. London Math. Soc.* **11** (1979), no. 3, 268–272. <span style="font-variant:small-caps;">I. Gessel</span>, Super ballot numbers, *J. Symbolic Computation* **14** (1992), 179–194;\ <http://people.brandeis.edu/~gessel/homepage/papers/superballot.pdf>. <span style="font-variant:small-caps;">M. Hata</span>, Rational approximations to $\pi$ and some other numbers, *Acta Arith.* **63** (1993), no. 4, 335–349. <span style="font-variant:small-caps;">K. Mahler</span>, On the approximation of $\pi$, *Nederl. Akad. Wetensch. Proc. Ser. A* **56** (1953), no. 1, 30–42. <span style="font-variant:small-caps;">V.Kh. Salikhov</span>, On the irrationality measure of the number $\pi$, *Russian Math. Surveys* **63** (2008), no. 3, 570–572. <span style="font-variant:small-caps;">V.Kh. Salikhov</span>, On the measure of irrationality of the number $\pi$, *Math. Notes* **88** (2010), no. 4, 563–573. <span style="font-variant:small-caps;">E.W. Weisstein</span>, *Irrationality Measure*, from MathWorld-A Wolfram Web Resource;\ <http://mathworld.wolfram.com/IrrationalityMeasure.html>. <span style="font-variant:small-caps;">A. van der Poorten</span>, A proof that Euler missed… Apéry’s proof of the irrationality of $\zeta(3)$, An informal report, *Math. Intelligencer* **1** (1979), 195–203;\ <http://www.ega-math.narod.ru/Apery1.htm>. <span style="font-variant:small-caps;">D. Zeilberger</span> and <span style="font-variant:small-caps;">W.
Zudilin</span>, Automatic Discovery of Irrationality Proofs and Irrationality Measures, *Preprint* [`arXiv:1912.10381 [math.NT]`](http://arxiv.org/abs/1912.10381), 10 pp.
--- abstract: | Given a graph $G$, we would like to find (if it exists) the largest induced subgraph $H$ in which there are at least $k$ vertices realizing the maximum degree of $H$. This problem was first posed by Caro and Yuster. They proved, for example, that for every graph $G$ on $n$ vertices we can guarantee, for $k = 2$, such an induced subgraph $H$ by deleting at most $2\sqrt{n}$ vertices, but the question whether $2\sqrt{n}$ is best possible remains open. Among the results obtained in this paper we prove that: 1. [For every graph $G$ on $n \geq 4$ vertices we can delete at most $\lceil \frac{- 3 + \sqrt{ 8n- 15}}{2 } \rceil$ vertices to get an induced subgraph $H$ with at least two vertices realizing $\Delta(H)$, and this bound is sharp, solving the problems left open by Caro and Yuster.]{} 2. [For every graph $G$ with maximum degree $\Delta \geq 1$ we can delete at most $\lceil \frac{ -3 + \sqrt{8\Delta +1}}{2 } \rceil$ vertices to get an induced subgraph $H$ with at least two vertices realizing $\Delta(H)$, and this bound is sharp.]{} 3. [Every graph $G$ with $\Delta(G) \leq 2$ and at least $2k - 1$ vertices (respectively $2k - 2$ vertices if $k$ is even) contains an induced subgraph $H$ in which at least $k$ vertices realise $\Delta(H)$, and these bounds are sharp.]{} author: - | Yair Caro\ Department of Mathematics\ University of Haifa-Oranim\ Israel - | Josef Lauri\ Department of Mathematics\ University of Malta\ Malta - | Christina Zarb\ Department of Mathematics\ University of Malta\ Malta bibliography: - '2maxbibnew5.bib' title: Equating two maximum degrees --- \[section\] \[section\] \[theorem\][Proposition]{} \[theorem\][Corollary]{} \[theorem\][Lemma]{} \[theorem\][Conjecture]{} Introduction ============ A well-known elementary exercise in graph theory states that every (simple) graph on at least two vertices has two vertices with the same degree.
Motivated by this fact, Caro and West [@caro2009repetition] formally defined the repetition number of a graph $G$, $rep(G)$, to be the maximum multiplicity in the list (degree sequence) of the vertex degrees. Various research was done concerning the repetition number or repetitions in the degree sequence. Here we mention some of these directions. 1. [The connection between the independence number and $K_r$-free graphs with given repetition number [@bollobas1996degree; @bollobas1997independent; @erdHos1995degree].]{} 2. [Hypergraph irregularity - the existence of $r$-uniform hypergraphs ($r \geq 3$) with no repeated degrees [@balister2016random; @gyarfas1992irregularity].]{} 3. [Ramsey type problems with repeated degrees [@albertson1991ramsey] and [@erdos1993ramsey]. ]{} 4. [Regular independent sets — vertices of the same degree forming an independent set [@albertson1994lower; @albertson79con; @caro2016regular].]{} 5. [ Forcing $k$-repetition anywhere in the degree sequence [@caro2014forcing]]{} 6. [ Forcing $k$-repetition of the maximum degree [@caro2010large].]{} In this paper we shall focus on the following problem first stated in [@caro2010large]. For a graph $G$ and an integer $k \geq 2$ let $f_k(G)$ denote the minimum number of vertices we have to delete from $G$ in order to get an induced subgraph $H$ in which there are at least $k$ vertices that attain the maximum degree $\Delta(H)$, of $H$, or otherwise $|H| < k$, where as usual, following the notation of [@westintro], $|G| = n$ is the number of vertices of $G$, $\Delta(G)$ is the maximum degree of $G$ and a vertex of degree $t$ is called a $t$-vertex. In the case $ k = 2$ we use the abbreviation $f(G)$ instead of $f_2(G)$. We define $f(n,k) = \max \{ f_k(G): |G| \leq n \}$ and $g(\Delta,k) = \max \{ f_k(G) : \Delta(G) \leq \Delta \}$. Clearly there are graphs in which we cannot equate $k$ degrees let alone $k$ maximum degrees. 
A simple example is the star $K_{1,k-1}$ for $k \geq 3$ having $k$ vertices and by definition $f_j(K_{1,k-1}) = 1$ for $j \geq 2$. However, it is trivial that in every graph $G$ on at least $R(k,k)$ vertices (where $R(k,k)$ is the diagonal Ramsey number), we can equate $k$ maximum degrees. We call a graph $G$ in which (by deleting vertices) we can equate $k$ maximum degrees a *k-feasible* graph. So of interest is the following function $$h(\Delta,k) = \max \{ |G| : \Delta(G) \leq \Delta \mbox{ and $G$ is not $k$-feasible}\}.$$ Caro and Yuster [@caro2010large] conjectured that for every $k \geq 2$ there exists a constant $c(k)$ such that $f(n,k) \leq c(k) \sqrt{n}$ and proved the conjecture for $k = 2$ with $c(2) = 2$ and $k = 3$ with $c(3) = 43$. For $k \geq 4$ the conjecture is still open. The question whether $c(2) = 2$ and $c(3)=43$ are best possible also remains open. Our main purpose in this paper is to show: 1. [$f(G)$ can be computed exactly in polynomial time $O(n^2)$.]{} 2. [for $\Delta \geq 1$, $g(\Delta,2) \leq \lceil \frac{- 3 + \sqrt{8 \Delta+1}}{2} \rceil$ and this bound is sharp.]{} 3. [for $n \geq 4$, $f(n,2) \leq \lceil \frac{- 3 + \sqrt{8n-15}}{2} \rceil$ and this bound is sharp. Hence in particular $c(2) = \sqrt{2}$, solving the problem left open in [@caro2010large].]{} 4. [for a forest $F$ on $n$ vertices, $f_k(F) \leq (2k-1)n^{\frac{1}{3}}$.]{} 5. [$g(1,k) = \lfloor \frac{ k-1}{2} \rfloor$, $g(2,k) = k-1$, thus determining exactly $g(\Delta,k)$ for $\Delta = 1,2$.]{} 6. [$h(0,k) = k-1$, $h(1,k) = \lfloor \frac{ k}{2} \rfloor + 2 \lfloor \frac{k-1}{2} \rfloor$, $h(2,k) = 2k-2$ for odd $ k \geq 3$, $h(2,k) = 2k-3$ for even $k \geq 2$.]{} The paper is organized as follows: In section 2 we cover the complexity issue of computing $f(G)$, as well as the sharp upper-bounds for $g(\Delta,2)$ and $f(n,2)$. In section 3 we consider upper-bounds for $f(F)$ and $f_k(F)$ where $F$ is a forest.
In section 4 we prove exact results about $g(\Delta,k)$ and $h(\Delta,k)$ for $\Delta= 0,1,2$. Finally, in section 5 we shall collect open problems and conjectures that deserve further exploration. Determination of exact upper bounds for $f(G)$ in terms of $\Delta(G)$ and $|G| = n$. ====================================================================================== We first need a definition and two lemmas: We call $B \subset V(G)$, a set of vertices in a graph $G$, a *2-equating set* if in the induced subgraph $H$ on $V(G) \backslash B$, there are at least two vertices that realise $\Delta(H)$. We say that $B$ is a 2-equating set which realises $f(G)$ if $B$ has the minimum cardinality among all 2-equating sets of $G$. Let the degree sequence of the graph $G$ on $n$ vertices be $\Delta= d_1 \geq d_2 \geq d_3 \geq \ldots \geq d_n= \delta$ so that $\Delta$ is the maximum degree and $\delta$ the minimum degree. We define ${\mbox{\rm diff}}(G)=d_1-d_2$. \[lemma0\] Let $G$ be a graph on $n \geq 2$ vertices with degree sequence $d_1 \geq d_2 \geq \ldots \geq d_n$, with $\deg(v) = d_1$ and $\deg(u) = d_2$. Then $f(G) \leq d_1 - d_2={\mbox{\rm diff}}(G)$. If $d_1 = d_2$ then clearly $f(G) = 0$. So let ${\mbox{\rm diff}}(G)=d_1 - d_2 \geq 1$. But then there is at least one set $B$ of neighbours of $v$, with $u \notin B$, of size ${\mbox{\rm diff}}(G)$, none of which are adjacent to $u$. Deleting $B$ leaves both $v$ and $u$ with maximum degree $d_2$, and clearly $f(G) \leq |B| = {\mbox{\rm diff}}(G)$. \[lemma\_1\] Let $G$ be a graph on $n \geq 2$ vertices, with degree sequence $d_1 \geq d_2 \geq \ldots \geq d_n$, with $\deg(v) = d_1$. Then either $f(G) = {\mbox{\rm diff}}(G)$, or $v$ must be in every minimal 2-equating set of $G$. Suppose $f(G)$ is not realised by ${\mbox{\rm diff}}(G)$ (this excludes the case $d_1 = d_2$, in which $f(G)=0={\mbox{\rm diff}}(G)$). Then $f(G) < {\mbox{\rm diff}}(G)$, so $v$ is the unique vertex in $G$ of degree $d_1$. Let $f(G)$ be realised by some induced subgraph $H$, so that $B = V(G) - V(H)$ is a minimum 2-equating set for $G$.
Assume to the contrary that $v \in H$ (then $v$ is not a member of $B$). Since $v \in H$ but $f(G)$ is not realised by ${\mbox{\rm diff}}(G)$, either $v$ is not of maximum degree in $H$, in which case at least ${\mbox{\rm diff}}(G) +1$ vertices among the neighbours of $v$ must be deleted, contradicting $|B| = f(G) < {\mbox{\rm diff}}(G)$, or $v$ is of maximum degree in $H$ and still at least ${\mbox{\rm diff}}(G)$ of its neighbours must be deleted, and again $f(G) = |B| \geq {\mbox{\rm diff}}(G)$ contradicting $f(G) < {\mbox{\rm diff}}(G)$. \[lemma\_2\] Let $G$ be a graph on $n \geq 2$ vertices. Suppose that $f(G) \not = {\mbox{\rm diff}}(G)$. Then $f(G) = 1 + f(G -v)$, where $v$ is the single vertex of maximum degree in $G$. Since $f(G) \not = {\mbox{\rm diff}}(G)$ it follows that there is a single vertex $v$ of maximum degree in $G$ and also from Lemma \[lemma\_1\] we infer that $v$ must be in any minimal 2-equating set of $G$. Let $G_1 = G \backslash \{ v\}$ and let $B$ be a minimal 2-equating set for $G_1$, namely $f(G_1) = |B|$. Then clearly $B \cup \{ v\}$ is a 2-equating set for $G$, hence $f(G) \leq 1+ f(G_1)$. On the other hand let $B$ be a minimum 2-equating set for $G$. Then by assumption and Lemma \[lemma\_1\], $v \in B$. Set $B_1 = B \backslash \{ v \}$. Clearly $B_1$ is a 2-equating set of $G_1$, hence $f(G_1) \leq |B_1| = |B| - 1 = f(G) - 1$ which gives $f(G_1) +1 \leq f(G)$. Hence combining both inequalities we get $f(G) = f(G \backslash \{ v\} ) +1$. \[theorem\_1\] Let $G$ be a graph on $n \geq 2$ vertices. Then $$f(G)= \min\{{\mbox{\rm diff}}(G_j)+j: j=0 \ldots n-2 \},$$where $G_{j+1}$ is obtained from $G_j$ by deleting the vertex $v_{1,j}$ of the maximum degree $d_{1,j}$ from $G_j$ (where $G_0$ is taken to be $G$), and $d_{2,j}$ is the second largest degree in $G_j$. Moreover $f(G)$ can be determined in time $O(n^2)$.
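In code, the minimization the theorem describes reads as follows (an illustrative Python sketch, not from the paper; graphs are given as dictionaries mapping each vertex to its set of neighbours):

```python
def f2(adj):
    """f(G) = min_j { diff(G_j) + j }: repeatedly delete a maximum-degree
    vertex and track the difference of the two largest degrees."""
    adj = {v: set(ns) for v, ns in adj.items()}     # work on a copy
    best = None
    for j in range(len(adj) - 1):                   # j = 0, 1, ..., n-2
        degs = sorted((len(ns) for ns in adj.values()), reverse=True)
        diff = degs[0] - degs[1]
        best = diff + j if best is None else min(best, diff + j)
        if diff == 0:              # later candidates are at least j + 1
            break
        v = max(adj, key=lambda u: len(adj[u]))     # unique maximum-degree vertex
        for u in adj.pop(v):
            adj[u].discard(v)
    return best

# Quick checks: the path P_4 already has two 2-vertices, while the star
# K_{1,3} needs one deletion (its centre).
p4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
k13 = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(f2(p4), f2(k13))  # 0 1
```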
By Lemma \[lemma\_1\], either $f(G) = {\mbox{\rm diff}}(G)$ or $v_{1,0}$ must be deleted to obtain $G_1$ and in this case by Lemma \[lemma\_2\], $f(G) = f(G_1) +1$. Hence $f(G) = \min\{ {\mbox{\rm diff}}(G), f(G_1) +1\}$. Now again either $f(G_1) = {\mbox{\rm diff}}(G_1)$, or by Lemma \[lemma\_1\] and Lemma \[lemma\_2\] the maximum-degree vertex in $G_1$ must be deleted to obtain $G_2$ and then $f(G_1) = f(G_2) +1$. Hence $f(G) = \min\{{\mbox{\rm diff}}(G), {\mbox{\rm diff}}(G_1) +1, f(G_2 )+2 \}$. We continue this process until for some first $j$, ${\mbox{\rm diff}}(G_j) = 0$, and there we stop, having two vertices realizing the maximum degree of $G_j$ (the later steps always give a value larger than ${\mbox{\rm diff}}(G_j) + j = j$). Each step is forced by Lemmas \[lemma\_1\] and \[lemma\_2\], hence $$f(G)= \min\{{\mbox{\rm diff}}(G_j)+j: j=0 \ldots n-2 \}.$$ Now in each iteration we have to construct $G_j$ from $G_{j-1}$ by deleting the maximum-degree vertex $v_{1,j-1}$ from $G_{j-1}$ and computing $d_{1,j}$ and $d_{2,j}$, which can be done in $O(n)$ time by running over the degree sequence of $G_j$, itself obtained from that of $G_{j-1}$ by updating the degrees of the deleted vertex's neighbours. So the total running time of the algorithm is $O(n^2 + e(G)) = O(n^2)$. \[theoremdelta\] Let $G$ be a graph on $n \geq 2$ vertices with maximum degree $\Delta$, and $t \geq 1$ be an integer. 1. [If $0 \leq \Delta \leq 1$, then $f(G)=0$.]{} 2. [If $\binom{t +1}{2} + 1 \leq \Delta \leq \binom{t +2}{2}$, then $f(G) \leq t$, and this bound is sharp for every $\Delta$ in the range.]{} 3. [For $\Delta \geq 1$, $f(G) \leq \left \lceil \frac{-3+\sqrt{8\Delta+1}}{2} \right \rceil$.]{} Clearly, if $0 \leq \Delta \leq 1$ and $n \geq 2$, $f(G)=0$. For (ii), we use induction on $t$. For $t = 1$, $2 \leq \Delta \leq 3$. If there is a single vertex $v$ of degree $\Delta=2$ (otherwise $f(G)=0$), removing $v$ clearly leaves at least two vertices of maximum degree equal to one or zero, and hence $f(G)=1$.
If there is a vertex $v$ of degree $\Delta=3$ and a vertex $u$ of degree 2, then by Lemma \[lemma0\], $f(G) \leq 3-2=1$. Otherwise, all other vertices have degree 0 or 1 and deleting $v$ leaves at least two vertices of maximum degree equal to one or zero, and $f(G)=1$. So assume the statement is true for $t -1$ and we shall prove it is true for $t$. By assumption, $\binom{t +1}{2} + 1 \leq \Delta \leq \binom{t +2}{2}$. Let $v$ be a vertex of maximum degree and $u$ a vertex with the second largest degree — clearly $$\deg(u) \leq \deg(v) \leq \binom{t +2}{2}.$$ Now consider $\deg(u)$. 1. [if $\deg(u) \leq \binom{t+1}{2}$ we drop $v$ to get $G \backslash \{ v \} = H$ where $\Delta(H)\leq \binom{t+1}{2}$, and by induction $f(G) \leq f(H) +1 \leq t - 1 +1 = t$ and we are done.]{} 2. [if $\deg(u) \geq \binom{t+1}{2} +1$ then clearly $${\mbox{\rm diff}}(G) = \deg(v) - \deg(u) \leq \binom{t+2}{2} - \binom{t+1}{2} - 1 = t ,$$ hence by Theorem \[theorem\_1\] $f(G) \leq {\mbox{\rm diff}}(G) \leq t$ and we are done.]{} Sharpness: consider the sequence $a_j = \binom{ j +1}{2} +1$, i.e. $a_1 = 2$, $a_2 = 4$, $a_3 = 7$ etc., and let $\Delta=\binom{ t +1}{2}+ j$, $j = 1, \ldots, t +1$. For example, if $t = 4$, $\Delta= 11,12,13,14,15$. Consider the graph $G_{\Delta}$ consisting of the stars $K_{1,a_j}$ for $j = 1, \ldots, t-1$ and a “big star” $K_{1,\Delta}$. Suppose for example $\Delta= 13$, which is the case $t = 4$, since $\binom{ t+1}{2}=\binom{5}{2} < 13 < \binom{6}{2}=\binom{t+2}{2}.$ The sequence of stars we choose involves $a_1 ,a_2 ,a_3$ and $\Delta$, that is $ K_{1,2} \cup K_{1,4} \cup K_{1,7} \cup K_{1,13}$, and this realises $f(G) = 4$ as required. The validity of this construction is a simple application of Theorem \[theorem\_1\]. So this construction shows the bound is sharp for every $\Delta \geq 1$. For (iii), part (ii) above ($t \geq 1$ and $\Delta \geq 2$) gives $f(G) \leq t$ for the least integer $t$ with $\Delta \leq \binom{t+2}{2}$, that is, with $t^2 + 3t + 2 - 2\Delta \geq 0$.
Solving the quadratic and rounding up, since $t$ must be an integer, we get $$f(G) \leq t = \left \lceil \frac{-3 + \sqrt{ 1 +8\Delta}}{2} \right \rceil,$$ which holds true also for the case $\Delta=1$. \[theorem\_n\] Let $G$ be a graph on $n \geq 4$ vertices, and $t \geq 1$ an integer such that $$\binom{t+1}{2}+3 \leq n \leq \binom{ t + 2}{2} +2.$$ Then $f(G) \leq t$, and this is sharp for all values of $n$ in the range. Also, for $n \geq 4$, $$f(G) \leq \left \lceil \frac{ - 3 +\sqrt{8n-15}}{2} \right \rceil.$$ Observe that for $\binom{t+1}{2}+3 \leq n \leq \binom{ t + 2}{2} +1$, if $|G|= n$ then $\Delta(G)\leq n-1 \leq \binom{t +2}{2}$, hence $f(G) \leq t$ by Theorem \[theoremdelta\]. We now construct for every $n$, such that $\binom{t +1}{2}+3 \leq n \leq \binom{t +2}{2} +1$, a graph $G_n=G$ with $f(G) = t$ proving sharpness. Let $n = \binom{t +1}{2} + j : j =3,\ldots, t +2$. Let $A = \{ v_1,v_2,\ldots, v_t\}$ and $B = \{ u_1,\ldots, u_{n-t}\}$. Vertex $v_t$ is adjacent to all other vertices so that $\deg(v_t) = n-1 = \binom{t +1}{2}+ j -1$. Vertex $v_q$, for $q = t-1,\ldots,1$, has degree $\deg(v_q) = \frac{q^2 +q + 2}{2} +1=a_q+1$, where $v_q$ is adjacent to $v_t$ and to $u_1,\ldots, u_{a_q}$. Figure \[graph1\] shows the case when $t=3$ and $j=3$, i.e. $n=\binom{3+1}{2} +3=9$. ![The graph $G_n$ for $t=3$ and $j=3$[]{data-label="graph1"}](diag2) We now apply Theorem \[theorem\_1\] to $G$. Then $${\mbox{\rm diff}}(G)= \binom{t +1}{2}+j-1- \left(\frac{(t-1)^2+(t-1)+2}{2}+1\right)=t+j-3 \geq t.$$ Hence, we apply Theorem \[theorem\_1\] by deleting $v_t$ to give a new graph $G_1$ on $n-1$ vertices in which $\deg(v_i)$, $i=1 \ldots t-1$, as well as all the degrees of vertices in $B$ adjacent to $v_t$ are reduced by 1, and hence ${\mbox{\rm diff}}(G_1)=\deg(v_{t-1})-\deg(v_{t-2})=t-1$, so that ${\mbox{\rm diff}}(G_1)+1=t$. We again apply Theorem \[theorem\_1\] to delete $v_{t-1}$.
The degrees of $v_{t-2} \ldots v_1$ now remain unchanged, and for $i=2 \ldots t-1$, $\deg(v_{t-i})-\deg(v_{t-i-1})=t-i$, and the vertices are not adjacent to each other. Hence it follows that, at each step, ${\mbox{\rm diff}}(G_i)=t-i$, which, by Theorem \[theorem\_1\], implies that $f(G)=\min\{{\mbox{\rm diff}}(G_j)+j: j=0,\ldots,n-2\}=t$. Let us now look at the case when $|G| = n = \binom{t +2}{ 2} +2$. 1. [If $\Delta(G) \leq \binom{t +2}{ 2}$, then by Theorem \[theoremdelta\], $f(G) \leq t$ and we are done.]{} 2. So $\Delta = \binom{t +2}{ 2} +1$. Let $v_1$ and $v_2$ be such that $\deg(v_1) = \Delta$ and $v_2$ has the second largest degree. Observe that $v_1$ is adjacent to all vertices of $G$. Now if $\deg(v_2) \geq \binom{t +1}{ 2} +2$, then ${\mbox{\rm diff}}(G) \leq t$ and again we are done by Theorem \[theorem\_1\]. So $\deg(v_2) \leq \binom{t +1}{ 2}+1 $. We delete $v_1$ to get the graph $G_1$. Clearly $\Delta(G_1) = \deg(v_2) - 1 \leq \binom{t +1}{ 2} $ and by Theorem \[theoremdelta\], $f(G_1)\leq t - 1$, hence $f(G) \leq t$. For sharpness, we can take the graph $G$ constructed above on $n$ vertices for $ n =\binom{t +2}{ 2} +1$ and add an isolated vertex. Now for a graph $G$ on $n$ vertices with $4 \leq n \leq \binom{t +2}{ 2} +2$, we know $f(G) \leq t$, hence we get $ t^2 +3t - 2n + 6 \geq 0$, and solving the quadratic gives $$f(G) \leq \left \lceil \frac{ -3 +\sqrt{ 8n -15 }}{2} \right \rceil.$$ Trees and Forests ================= We have determined the maximum possible value for $f(G)$ with respect to $\Delta(G)$ (Theorem \[theoremdelta\]) and with respect to $|G| = n$ (Theorem \[theorem\_n\]). We propose the problem of finding $\max \{ f(G) : G \mbox{ is a forest on $n$ vertices} \}$ and conjecture the following: \[con\_trees\] If $F$ is a forest on $n$ vertices, where $n \leq \frac{t^3 + 6t^2 +17t +12}{6}$, then $f(F) \leq t$ and this is sharp.
The following construction shows that if the conjecture is true then the upper bound is best possible. Consider the sequence $a_j = \binom{ j +1}{2} +1$. For $t \geq 0$ we define a tree $T_t$ on $b_t$ vertices as follows: Let $P_{2t +3}$ be a path on $2t +3$ vertices. Now to the vertex $v_{2j}$ for $j = 1,\ldots,t+1$ of the path we add exactly $x_j = a_j - 2 = \binom{ j +1}{2} - 1$ leaves, so that $x_1 = 0$, $x_2 = 2$ and so on. Clearly $\deg(v_1) = \deg(v_{2t+3}) = 1$, and for $t \geq 1$, $\deg(v_{2j+1}) = 2$ for $j =1,\ldots,t$ while $\deg(v_{2j}) = a_j$ for $j = 1,\ldots,t+1$. Now for $t = 0$ we get $T_0=K_{1,2}$ with $b_0=3$, for $t = 1$ we get $b_1=7$ and for $t=2$, $b_2=14$, as shown in Figure \[tree1\]. ![The tree $T_2$ on 14 vertices[]{data-label="tree1"}](tree1) The number of vertices in $T_t$ is $b_t=\frac{t^3+6t^2+17t+18}{6}$. We prove this by induction on $t$. For $t=0$, $b_0=3$ and for $t=1$, $b_1=7$ as required. So let us assume it is true for $b_{k-1}$. Then $$b_k=b_{k-1}+\frac{k^2+3k+4}{2}$$ $$=\frac{(k-1)^3+6(k-1)^2+17(k-1)+18}{6}+\frac{k^2+3k+4}{2}$$$$=\frac{k^3+6k^2+17k+18}{6}$$ as required. So ${\mbox{\rm diff}}(T_t)= \frac{t^2+3t+4}{2} - \frac{(t-1)^2+3(t-1)+4}{2}= t+1$. We can, again by induction on $t$, show that $f(T_t)=t+1$, for $t \geq 1$. Clearly $f(T_0)=1$. Suppose that $f(T_t) \leq t < {\mbox{\rm diff}}(T_t)$. Then we should remove the vertex of degree $\Delta$ in order to obtain a minimal 2-equating set. But this leaves isolated vertices and the tree $T_{t-1}$. But $f(T_{t-1})=t$, by induction, and hence $f(T_t)=t+1$, a contradiction. Let $F$ be a forest on 13 vertices. Then $f(F) \leq 2$. Let $F$ be a forest on 13 vertices with degree sequence $d_1 \geq d_2 \geq d_3 \geq \ldots \geq d_{13}$, and let $u$, $v$ and $w$ be vertices of degree $d_1$, $d_2$ and $d_3$ respectively. We observe the following facts: 1. [We may assume that $d_2 \geq 4$, for otherwise we delete $u$ and let $ F^* = F-u$.
Then $\Delta(F^*) \leq 3$ and by Theorem \[theoremdelta\], $f(F^*) \leq 1$ hence $f(F) \leq 2$.]{} 2. [Therefore $d_1 - d_2 \geq 3$, for otherwise $f(F) \leq {\mbox{\rm diff}}(F) \leq 2$.]{} 3. [Hence we may assume $d_2 \geq 4$ and $d_1 \geq 7$.]{} 4. [$d_3 \leq 3$ — otherwise since $d_1 \geq 7$, $d_2, d_3 \geq 4$, we get (even in the worst case where $u$, $v$, $w$ induce a path of three vertices in some order) $ |F| \geq 14$.]{} 5. [If $u$ and $v$ are not in the same component then from $|F| = 13$ we get $F = K_{1,4} \cup K_{1,7}$, and $f(F) = 2$, by deleting the centers of the stars. We now use the notation $S_{a,b}$ to denote the double star with adjacent centres of degrees $a$ and $b$. ]{} 6. [If $d_2 \geq 5$ and $d_1 \geq 8$ then $|F| = 13$ if and only if $F = S_{5,8}$ and $f(S_{5,8}) = 2$. So we assume $d_2 = 4$ and $u$ and $v$ are in the same component.]{} 7. [If $u$ and $v$ are adjacent, let $F^* = F-u$. Then $\Delta(F^*) \leq 3$ and by Theorem \[theoremdelta\] $f(F^*) \leq 1$ hence $f(F) \leq 2$.]{} Therefore, $F$ can be one of the following graphs: - [$S_{4,8}$ with the edge between the centres subdivided. Clearly $f(F) = 2$.]{} - [$S_{4,7}$ with the edge between the centres subdivided twice. Clearly $f(F) = 2$.]{} - [$S_{4,7}$ with the edge between the centres subdivided and another vertex added adjacent to a leaf of the vertex of degree 7 or of degree 4. In both cases $f(F) = 2$.]{} - [$S_{4,7}$ with the edge between the centres subdivided by a vertex $w$ and to the vertex $w$ we attach a leaf so that $\deg(w) = 3$. Clearly $f(F) = 2$.]{} In all these cases we only need to delete vertices $u$ and $v$ to get at least two vertices of maximum degree, and hence $f(F) \leq 2$. Observe that if $|F| < 13$ we may add $0$-vertices to get a forest on 13 vertices and the same argument applies, hence for $|F| \leq 13$, $f(F) \leq 2$.
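These exhaustive case analyses are small enough to confirm by machine; the brute-force sketch below (our own illustrative labelling of the tree $T_2$ of Figure \[tree1\]) recovers $f(T_2)=3$:

```python
from itertools import combinations

def f_bruteforce(adj):
    """Smallest |B| such that G - B has at least two vertices realising
    the maximum degree (exhaustive search; fine at this size)."""
    vertices = list(adj)
    for size in range(len(vertices) - 1):
        for deleted in combinations(vertices, size):
            keep = set(vertices) - set(deleted)
            degs = [len(adj[v] & keep) for v in keep]
            if len(degs) >= 2 and degs.count(max(degs)) >= 2:
                return size
    return len(vertices) - 1        # otherwise delete down to a single vertex

# T_2: a path v1..v7 plus x_j = C(j+1,2) - 1 extra leaves on v_{2j}.
T2 = {v: set() for v in range(1, 15)}
def add_edge(a, b):
    T2[a].add(b); T2[b].add(a)
for v in range(1, 7):               # the path on vertices 1..7
    add_edge(v, v + 1)
leaf = 8
for centre, count in {2: 0, 4: 2, 6: 5}.items():   # x_1=0, x_2=2, x_3=5
    for _ in range(count):
        add_edge(centre, leaf)
        leaf += 1

print(f_bruteforce(T2))  # 3
```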
For $n=14$ we have the graph $T_2$ (Figure \[tree1\]) which has exactly $\frac{2^3+6(2^2)+17(2)+18}{6}=14$ vertices and we know that $f(T_2)=3$. We now prove the following result: \[forests\] Let $F$ be a forest on $n$ vertices and $k \geq 2$ an integer. Suppose $n^{\frac{1}{3}} \geq 2k-1$. Then $$f_k(F) \leq (2k-1) \lfloor n^{\frac{1}{3}} \rfloor.$$ We first prove the following lemmas: \[lemma\_tree\_1\] For every $k\geq 2$ and every graph $G$ with maximum degree $\Delta$, $f_k(G) \leq (k-1)\Delta$. By induction on $\Delta$. If $\Delta=0$, then the result is trivial since either $|G| <k$ or there are $k$ vertices of degree 0. So suppose the result holds for $\Delta=r$ and let $G$ have $\Delta=r+1$. If there are $k$ vertices of maximum degree $r+1$ we are done. Otherwise, remove all the vertices of maximum degree $\Delta$ in $G$ —there are at most $k-1$ such vertices. The resulting graph $H$ has maximum degree $r$ and hence by the induction hypothesis, $f_k(H) \leq (k-1)r$. Hence $$f_k(G) \leq (k-1)r+k-1 = (k-1)(r+1)=(k-1)\Delta(G)$$ as required. \[lemma\_tree\_2\] Let $G$ be a forest and let $A$ be any subset of $k \geq 2$ vertices of $G$. Define $M(A)$ to be the set of vertices of $V(G)\backslash A$ each having at least two neighbours in $A$. Then $|M(A)|<|A|=k$. Suppose $|M(A)| \geq |A|=k$. Let $B$ be any subset of $M(A)$ of cardinality $k$ and let $H$ be the bipartite graph with vertices $A \cup B$ and only those edges connecting vertices in $A$ to vertices in $B$. Since each vertex of $B$ has degree at least 2 in $H$, $|E(H)| \geq 2k$. But $|V(H)|=2k$, therefore $H$ has a cycle contradicting the fact that $G$ is a forest. We now prove Theorem \[forests\]. The degrees of the vertices of $F$ can range from $0$ to $n-1$.
Let us divide this range into subintervals $$S_j=[jn^{\frac{1}{3}}, (j+1)n^{\frac{1}{3}} ) \mbox{ for } j=0,\ldots,\lfloor n^{\frac{1}{3}} \rfloor-1$$ with the last two intervals being $S_{\lfloor n^{\frac{1}{3}} \rfloor}=[ \lfloor n^{\frac{1}{3}} \rfloor n^{\frac{1}{3}}, n^{\frac{2}{3}})$ and $S_{L}=[\lceil n^{\frac{2}{3}} \rceil ,n)$. Let us denote by $A_j$ or $A_L$ the set of vertices of $F$ whose degrees fall in the intervals $S_j$ or $S_L$ respectively. We first claim that $A_L$ contains at most $\lfloor n^{\frac{1}{3}} \rfloor$ vertices. Suppose not and let $x_j$ be the number of vertices of $F$ having degree $j$. Consider the forest $F^*$ on $n^* \leq n$ vertices obtained by deleting all isolated vertices in $F$. Then clearly the number of vertices $x_j$ of degree $j \geq 1$ in $F^*$ is the same as in $F$, and we have $x_1+\ldots+x_{n-1}=n^*$ and $x_1+2x_2+\ldots+(n-1)x_{n-1} \leq 2n^*-2$. Multiplying the first equation by 2 and subtracting the second gives: $$x_1-\sum_{j=3}^{n-1}(j-2)x_j \geq 2.$$ Hence $$x_1 \geq 2 + \sum_{j \geq 3}(j-2)x_j.$$ In particular $$x_1 \geq 2+(n^{\frac{2}{3}}-2)|A_L| \geq 2+(n^{\frac{2}{3}}-2)(n^{\frac{1}{3}}+1)=n+n^{\frac{2}{3}}-2n^{\frac{1}{3}}.$$ But $$n^* \geq x_1+|A_L| \geq n+n^{\frac{2}{3}}-2n^{\frac{1}{3}}+n^{\frac{1}{3}}+1=n+n^{\frac{2}{3}}-n^{\frac{1}{3}}+1>n^*,$$ a contradiction. We now proceed as follows. We remove from $F$ the vertices in $A_L$ and redistribute the resulting degrees among the intervals $S_j$, $j=0,\ldots,\lfloor n^{ \frac{1}{3}} \rfloor$, recalculating $A_j$ for $j=0,\ldots,\lfloor n^{\frac{1}{3}} \rfloor$. If there are at least $k$ vertices with degrees in the last interval we stop. Otherwise we remove these vertices and again we redistribute the degrees among the intervals $S_j$, $j=0,\ldots, \lfloor n^{\frac{1}{3}} \rfloor-1$, recalculating $A_j$ for $j=0,\ldots,\lfloor n^{\frac{1}{3}} \rfloor-1$. This process continues until we reach one of the following possibilities: 1.
[We have deleted all vertices and we are left with only those vertices in $A_0$;]{} 2. [For some $j \geq 1$, $A_j$ contains at least $k$ vertices.]{} We consider these cases separately: 1. In this case we have deleted at most $\lfloor n^{\frac{1}{3}} \rfloor $ vertices from $A_L$ and at most $(k-1)\lfloor n^{\frac{1}{3}} \rfloor $ further vertices, by deleting at most $k-1$ vertices that were in the respective sets $A_1,\ldots,A_{\lfloor n^{\frac{1}{3}} \rfloor }$ at each stage. So altogether at most $k\lfloor n^{\frac{1}{3}} \rfloor $ vertices have been deleted. But now the resulting graph has maximum degree at most $\lfloor n^{\frac{1}{3}} \rfloor $ and therefore, by Lemma \[lemma\_tree\_1\], by deleting at most a further $(k-1)\lfloor n^{\frac{1}{3}} \rfloor $ vertices we arrive at a graph with $k$ vertices of maximum degree (or with at most $k-1$ vertices in all). To do this we have altogether deleted at most $(2k-1)\lfloor n^{\frac{1}{3}} \rfloor $ vertices, as required. 2. We have stopped the deletion process when $A_j$, $j \geq 1$, contains at least $k$ vertices, $A_j$ being the set of vertices of the reduced forest having degrees in $S_j=[jn^{\frac{1}{3}},(j+1)n^{\frac{1}{3}})$. Let $v_1,v_2,\ldots,v_k$ be the $k$ vertices in $A_j$ of largest degrees, say $d_1 \geq d_2\geq \ldots \geq d_k$. Let us call this set of vertices $A$. By Lemma \[lemma\_tree\_2\], $|M(A)|<k$, where we recall that $M(A)$ is the set of vertices adjacent to at least two vertices of $A$. Since a vertex $v \in A$ can be adjacent to at most $k-1$ other vertices in $A$ and at most $k$ vertices in $M(A)$, there are at least $\deg(v)-2k+1$ vertices that are neighbours of $v$ but which are not in $A \cup M(A)$. Since such vertices are adjacent to at most one vertex from $A$, these $\deg(v)-2k+1$ vertices are adjacent only to $v$ and to no other vertex in $A$. Let $B(v)$ be the set of these neighbours of $v$. Now consider any vertex $v_i \in A$, $i=1,\ldots,k$. Suppose $\deg(v_i)=\deg(v_k)+t_i$.
Then $$|B(v_i)| \geq \deg(v_i) - 2k+1 \geq \deg(v_k)+t_i-2k+1 \geq n^{\frac{1}{3}}+t_i-2k+1 \geq t_i$$ since $n^{\frac{1}{3}} \geq 2k-1$. We therefore need to remove $t_i$ vertices of $B(v_i)$ (and this will not change the degree of any other vertex in $A$) in order to equate $\deg(v_i)$ and $\deg(v_k)$; since $|B(v_i)| \geq t_i$, we can do this. Hence, equating all the degrees of the vertices $v_1,\ldots,v_{k-1}$ to $\deg(v_k)$ can be done at the cost of deleting at most a further $(k-1)\lfloor n^{\frac{1}{3}} \rfloor $ vertices. This means that we have deleted altogether at most $(2k-1)\lfloor n^{\frac{1}{3}} \rfloor $ vertices, so we are done. Remark: The above proof also works for the more general class of graphs without even cycles. Lemma \[lemma\_tree\_2\] remains unchanged since the graph $H$ used in the proof is bipartite by construction. A graph $G$ on $n$ vertices and without even cycles contains at most $\frac{3n}{2}$ edges [@bollobas2004extremal]. Therefore, in the proof of the Theorem, instead of the computation involving $x_1$ we compute an upper bound on the number of vertices in $A_L$, by noting that if this number is at least $3n^{\frac{1}{3}}+1$ then $$3n \geq 2|E(G)| \geq \sum_{v \in A_L} \deg(v) \geq (3n^{\frac{1}{3}}+1)(n^{\frac{2}{3}}) = 3n + n^{\frac{2}{3}}>3n,$$ a contradiction. We then remove the vertices of $A_L$ and redistribute the resulting degrees, sacrificing at most $3n^{\frac{1}{3}}$ vertices, and continue as in the proof. This gives $$f_k(G) \leq (2k+1)n^{\frac{1}{3}},$$ a weaker bound for a more general class of graphs. The functions $g(\Delta,k)$ and $h(\Delta,k)$ ============================================== Lemma \[lemma\_tree\_1\], which states that $g(\Delta,k) \leq ( k-1)\Delta$, plays a crucial role in the proof of Theorem \[forests\].
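The small cases underlying these bounds are easy to check mechanically. The following Python sketch (purely illustrative, not part of the original argument) brute-forces $f_k(G)$ on tiny graphs, assuming the reading of $f_k$ used in the proofs above: the least number of vertex deletions after which the remaining induced subgraph either has fewer than $k$ vertices or has at least $k$ vertices realising its maximum degree. The helper name `f_k` is hypothetical.

```python
# Brute-force f_k(G), assuming the reading used in the proofs above:
# the minimum number of deleted vertices so that the remaining induced
# subgraph has fewer than k vertices or >= k vertices of maximum degree.
from itertools import combinations

def f_k(edges, n, k):
    """Exhaustive search over deletion sets; feasible only for small n."""
    vertices = list(range(n))
    for size in range(n + 1):          # try the cheapest deletion sets first
        for deleted in combinations(vertices, size):
            kept = [v for v in vertices if v not in deleted]
            if len(kept) < k:
                return size
            deg = {v: 0 for v in kept}
            for u, w in edges:
                if u in deg and w in deg:
                    deg[u] += 1
                    deg[w] += 1
            max_deg = max(deg.values())
            if sum(1 for v in kept if deg[v] == max_deg) >= k:
                return size
    return n

# Two disjoint copies of K_{1,2}: centres 0 and 3, leaves {1,2} and {4,5}.
edges = [(0, 1), (0, 2), (3, 4), (3, 5)]
print(f_k(edges, 6, 3))   # (k-1)K_{1,2} with k = 3 gives f_3 = 2 = k-1
```

The answer $2$ respects the bound $f_k(G) \leq (k-1)\Delta$ of Lemma \[lemma\_tree\_1\] (here $(k-1)\Delta = 4$) and matches the value of $f_k((k-1)K_{1,2})$ used later in the determination of $g(2,k)$.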
Another motivation to study $g(\Delta,k) = \max\{ f_k(G): \Delta(G) \leq \Delta \}$ comes from Proposition \[prop1\] below, which gives weak support for the conjecture $f_k(G) \leq f(k) \sqrt{|G|}$ mentioned in the introduction, and also demonstrates that for graphs with $e(G) = o(n^2)$, $f_k(G) = o(n)$ (where $e(G)$ is the number of edges of $G$). Observe that it has not yet been proved in general that for fixed $k$ and $G$ a graph on $n$ vertices, $f_k(G) = o(n)$. \[prop1\] Suppose $G$ is a graph on $n$ vertices and $e(G) \leq cn^{1+\beta}$ where $0 \leq \beta < 1$, and let $\alpha= \frac{1+ \beta}{2}$. Then $f_k(G) \leq (k - 1 +2c)n^{\alpha}$, and in particular for $\beta= 0$, $f_k(G) \leq (k-1+2c) \sqrt{n}$. Define $V_{\alpha} = \{ v: \mbox{ }\deg(v) \geq n^{\alpha} \}$ and suppose $|V_{\alpha}| > 2cn^\alpha$. Then $$2e(G) = \sum \{\deg(v) : v \in V(G)\} \geq \sum \{ \deg(v): v \in V_{\alpha} \} > n^{\alpha}2cn^{\alpha} = 2cn^{2\alpha} = 2cn^{1+\beta} \geq 2e(G),$$ a contradiction. Hence $|V_{\alpha}| \leq 2cn^{\alpha}$. Deleting $V_{\alpha}$ we get a graph $H$ with $\Delta(H) \leq n^{\alpha}$. Hence applying Lemma \[lemma\_tree\_1\] we get $$f_k(G) \leq |V_{\alpha}| +(k-1)\Delta(H) \leq 2cn^{\alpha} + (k-1)n^{\alpha} = (k -1 +2c) n^{\alpha}.$$ So better knowledge of the behavior of $g(\Delta,k)$ will help to obtain better bounds on $f_k(G)$ as well as on $f(n,k) = \max \{ f_k(G) : |G|= n \}$. \[gDelta\] For every $\Delta \geq 0$ and $k \geq 2$, 1. [$g(0,k) = 0$.]{} 2. [$g(1,k) = \lfloor\frac{k-1}{2 } \rfloor$.]{} 3. [For $\Delta \geq 1$, $g(\Delta,2) = \left \lceil \frac{-3+\sqrt{8\Delta+1}}{2} \right \rceil$.]{} <!-- --> 1. [Clearly if $G$ is a graph with maximum degree $\Delta = 0$ then either $|G| \geq k$ and we are done, or else $|G| \leq k-1$ and we are done by the definition of $f_k(G)$; hence $g(0,k) = 0$.]{} 2. [Consider $G$ with maximum degree $\Delta(G) = 1$. If there are already $k$ vertices of degree 1 we are done.
So assume there are at most $k-1$ vertices of degree 1. By parity these vertices form at most $ \lfloor \frac{ k-1}{2 } \rfloor$ isolated edges, containing at most $2 \lfloor\frac{k-1}{2 } \rfloor$ vertices of degree 1. We delete from each isolated edge one vertex of degree 1 to get an induced subgraph $H$ with $\Delta(H) = 0$. It follows, since $g(0,k)=0$, that $f_k(G) \leq \lfloor \frac{ k-1}{2 } \rfloor$. This bound is sharp, as demonstrated by the graph $tK_2 \cup mK_1$ where $m \geq 0$ and $t =\lfloor \frac{ k-1}{2 } \rfloor$, $ k \geq 3$. ]{} 3. [This is a restatement of Theorem \[theoremdelta\].]{} Determining $g(2,k)$ requires more effort; in particular we will use Ore’s observation that if $G$ is a graph on $n$ vertices without isolated vertices, then the domination number of $G$, denoted $\gamma(G)$, satisfies $\gamma(G) \leq \lfloor \frac{n}{2 } \rfloor$ [@ore1962theory]. For $k \geq 2$, $g(2,k) = k-1$. The graph $G = (k-1) K_{1,2}$ ($ k-1$ vertex-disjoint copies of the star $K_{1,2}$) has $f_k(G) = k-1$, as is easily checked. So $g(2,k) \geq k-1$. Let us prove the converse. Consider a graph $G$ with $\Delta(G) = 2$ (otherwise by Proposition \[gDelta\] (part 2) we are done). Let $ n_2 = |\{ v : \deg(v) = 2\} |$. Clearly if $n_2 \geq k$ we are done, so we may assume $1 \leq n_2 \leq k-1$. We collect the (possible) components of $G$ into three subgraphs: $A = \{$ all isolated vertices and isolated edges$\}$, $B = \{$all copies of $K_{1,2 }\}$, $C = \{$all other components$\}$. We denote by $t$ the number of copies of $K_{1,2}$ in $B$, and we observe that $t \leq n_2$ and that in each component in $C$ the vertices of degree 2 induce either a path (including a single edge) or a cycle.
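Ore’s bound quoted above ($\gamma(G) \leq \lfloor n/2 \rfloor$ for graphs with no isolated vertex) is easy to sanity-check exhaustively on small graphs. The following Python sketch (an illustration only, not part of the argument) computes the domination number by brute force.

```python
# Sanity check of Ore's bound: for a graph on n vertices with no isolated
# vertex, the domination number satisfies gamma(G) <= floor(n/2).
from itertools import combinations

def domination_number(n, edges):
    """Smallest dominating set size, by exhaustive search (small n only)."""
    nbrs = {v: {v} for v in range(n)}   # closed neighbourhoods
    for u, w in edges:
        nbrs[u].add(w)
        nbrs[w].add(u)
    for size in range(1, n + 1):
        for cand in combinations(range(n), size):
            covered = set()
            for v in cand:
                covered |= nbrs[v]
            if len(covered) == n:       # every vertex dominated
                return size
    return n

path4 = [(0, 1), (1, 2), (2, 3)]                # P_4
cycle5 = [(i, (i + 1) % 5) for i in range(5)]   # C_5
print(domination_number(4, path4), domination_number(5, cycle5))  # 2 2
```

Both values meet Ore’s bound with equality: $\gamma(P_4) = 2 = \lfloor 4/2 \rfloor$ and $\gamma(C_5) = 2 = \lfloor 5/2 \rfloor$.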
We claim that if $t > \lfloor \frac{k-1}{2 } \rfloor$ we are done: delete all $n_2-t$ vertices of degree 2 in $C$ and, from each copy of $K_{1,2}$ in $B$, a leaf, to get from $G$ an induced subgraph $H$ with $\Delta(H) = 1$ and with at least $2 (\lfloor \frac{k-1}{2 } \rfloor + 1) \geq k$ vertices of degree 1; altogether we have deleted $n_2 \leq k-1$ vertices. So we shall assume $t \leq \lfloor \frac{k-1}{2 } \rfloor$. Consider the subgraph $F$ induced by the vertices of degree 2 in $C$. Case 1: $|F| = 0$. If $|F| = 0$ (namely $C$ is empty) then $n_2 = t \leq \lfloor\frac{k-1}{2 } \rfloor$. Delete a leaf from each copy of $K_{1,2}$ in $B$. We get from $G$ a graph $H$ with $\Delta(H) = 1$ (as in $A$ all components have maximum degree at most 1). If in $H$ there are already $k$ vertices of degree 1, we are done, as we have deleted $t \leq \lfloor \frac{k-1}{2 } \rfloor$ vertices. Otherwise by Proposition \[gDelta\] (part 2), $ f_k(H) \leq \lfloor \frac{k-1}{2 } \rfloor$ and hence $f_k(G) \leq 2 \lfloor \frac{k-1}{2 } \rfloor \leq k-1$. Case 2: $|F| > 0$. Then, as we have noted before, due to the components of $C$ there are no isolated vertices in $F$, and by Ore’s result $\gamma(F) \leq \lfloor\frac{n_2 - t }{2 } \rfloor \leq \lfloor \frac{k- 1 - t}{2 } \rfloor$. Let $D$ be a dominating set for $F$ that realises $\gamma(F)$, hence $|D| \leq \frac{n_2 - t}{2}$. Delete $D$ and consider the induced subgraph $H$ on $A \cup C$. Clearly $\Delta(H) \leq 1$; denote by $n_1$ the number of vertices of degree 1 in $H$. Now we look again at $B$. Case 1: $ t = 0$. Since $t = 0$, $B$ is empty, and either $n_1 \geq k$ and we are done, as we have deleted $|D| \leq \lfloor \frac{n_2}{2} \rfloor \leq \lfloor \frac{k-1}{2 } \rfloor$ vertices, or by Proposition \[gDelta\] (part 2), $f_k(G) \leq f_k( H)+ |D| \leq 2 \lfloor\frac{k-1}{2 } \rfloor \leq k-1$. Case 2: $1 \leq t \leq \lfloor \frac{k-1}{2 } \rfloor$. 1.
[If $n_1 \geq k - 2t$ then, deleting a leaf from every copy of $K_{1,2}$ in $B$, we get an induced graph $H^*$ on $A \cup B \cup C$ (extending $H$ to the leftover of $B$) with $\Delta(H^*) = 1$ and at least $k- 2t +2t = k$ vertices of degree 1, and we are done as we have deleted altogether $$|D| + t \leq \frac{n_2 - t }{2 }+t = \frac{n_2 +t}{2} \leq n_2 \leq k-1$$ vertices.]{} 2. If $n_1 \leq k - 1 -2t$ (recall $n_1$ is the number of vertices of degree 1 in $H$ formed from $A \cup \{C \backslash D\} $), then we delete $\frac{n_1}{2}$ independent vertices of degree 1 in $H$, and $t$ vertices of degree 2 in $B$, to get an induced subgraph $H^*$ with $\Delta(H^*) = 0$. But $g(0,k) = 0$, hence $f_k(H^*) = 0$ and $$f_k(G) \leq \frac{n_1}{2} + |D| + t \leq \frac{k-1-2t}{2} + \frac{k-1-t}{2} + t = \frac{2k - 2 - t}{2} \leq k-1,$$ and the proof is complete. The following construction supplies a lower bound for $g(\Delta,k)$ in terms of $g(\Delta,2)$. For even $k \geq 2$, $g(\Delta,k) \geq g(\Delta,2) \frac{k}{2} + \frac{k}{2}-1$. Recall the sequence $a_t = \binom{t + 1}{2} +1$, which for $t \geq 1$ gives the smallest maximum degree for which there is a graph $G$ with $f(G) = t$. Such a graph is $\bigcup_{j=1}^{t} K_{1,a_j}$, and in case $a_t \leq \Delta < a_{t+1}$, $G = K_{1, \Delta} \cup \bigcup_{j=1}^{t-1} K_{1,a_j}$. Now we take $k-1$ copies of $K_{1,a_t}$ and $\frac{k}{2}$ copies of $K_{1,a_j}$ for each $j = 1,\ldots,t-1$. In case $a_t \leq \Delta < a_{t+1}$ we take $k-1$ copies of $K_{1,\Delta}$ and $\frac{k}{2}$ copies of $K_{1,a_j}$ for each $j = 1,\ldots, t-1$. Note that for $k = 2$ this is exactly the sequence that realises Theorem \[theoremdelta\]. Observe now that we cannot equate to degree $\Delta$, as there are just $k-1$ such degrees. So we can equate to the second largest degree $a_{t-1}$ by deleting exactly $\Delta - a_{t-1}$ leaves from $\frac{k}{2}$ vertices of the maximum degree and deleting $k -1 - \frac{k}{2}$ other centres.
Altogether we deleted $$\frac{(\Delta - a_{t-1})k}{2} + \frac{k}{2}-1 \geq \frac{g(\Delta,2)k}{2} + \frac{k}{2} - 1$$ vertices. In case $\Delta = a_t$ we have deleted exactly $\frac{g(\Delta,2)k}{2} + \frac{k}{2}-1$. We could instead equate to some value $x$ with $a_{t-1}> x \geq a_{t-2}+j$, $ j \geq 1$; however, clearly this requires the deletion of more vertices than just equating to $a_{t-1}$, and in particular the deletion of at least $g(\Delta,2) \frac{k}{2} + \frac{k}{2}- 1$ vertices. Now we can try to equate to $a_{t-2}$. The cheapest way is to delete the $k-1$ vertices of degree $\Delta$ and $a_{t-1} -a_{t-2}$ leaves from each of the $\frac{k}{2}$ vertices of degree $a_{t-1}$. So altogether we deleted $$k - 1 + (a_{t-1}- a_{t-2})\frac{k}{2}= k-1 + (g(\Delta,2) - 1)\frac{k}{2} = g(\Delta,2)\frac{k}{2} + \frac{k}{2} - 1$$ vertices. Again we could instead equate to some value $x$ with $a_{t-2}> x \geq a_{t-3}+j$, $ j \geq 1$; however, clearly this requires the deletion of more vertices than just equating to $a_{t-2}$, and in particular the deletion of at least $g(\Delta,2) \frac{k}{2} + \frac{k}{2}- 1$ vertices. So this deletion process continues, and we are always forced to delete at least $g(\Delta,2) \frac{k}{2} + \frac{k}{2}- 1$ vertices, even if we delete all the centres of the stars to get an induced subgraph with all degrees equal to 0. Hence for even $k \geq 2$ we get $g(\Delta,k) \geq g(\Delta,2)\frac{k}{2} + \frac{k}{2} -1$ (which is sharp for $k=2$). While slight improvements on this lower bound are possible for odd $k \geq 3$, our goal in this construction is only to demonstrate a linear lower bound on $g(\Delta,k)$ in terms of $g(\Delta,2)$ and $k$, for which the construction suffices. We now turn our attention to $h(\Delta,k)$.
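Before moving on, the interplay between the sequence $a_t$ and the closed formula for $g(\Delta,2)$ from Proposition \[gDelta\] can be cross-checked numerically. The following Python sketch (an illustration, not part of any proof) verifies for small $t$ that $a_t = \binom{t+1}{2}+1$ is the smallest $\Delta$ with $\lceil (-3+\sqrt{8\Delta+1})/2 \rceil = t$.

```python
# Cross-check: a_t = C(t+1, 2) + 1 should be the smallest Delta for which
# the closed formula g(Delta, 2) = ceil((-3 + sqrt(8*Delta + 1)) / 2)
# from Proposition [gDelta] attains the value t.
from math import comb, ceil, sqrt

def a(t):
    return comb(t + 1, 2) + 1

def g2(delta):
    return ceil((-3 + sqrt(8 * delta + 1)) / 2)

for t in range(1, 8):
    assert g2(a(t)) == t                              # attained at a_t ...
    assert all(g2(d) < t for d in range(1, a(t)))     # ... and not before
print([a(t) for t in range(1, 6)])   # [2, 4, 7, 11, 16]
```

Note that at $\Delta = a_t - 1 = \binom{t+1}{2}$ we have $8\Delta+1 = (2t+1)^2$, so the formula evaluates exactly to $t-1$ there, confirming the threshold.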
Recall that for $ k \geq 2$, a graph $G$ is $k$-feasible if it contains an induced subgraph $H$ (possibly $H = G$) such that in $H$ there are at least $k$ vertices that realise $\Delta(H)$, and we define $$h(\Delta,k) = \max \{ |G| : \Delta(G) \leq \Delta \mbox{ and $G$ is not $k$-feasible}\}.$$ For every $\Delta \geq 0$ and $k \geq 2$, 1. [$h(\Delta,k) \leq R(k,k) - 1$.]{} 2. [$ h(0,k) = k-1$.]{} 3. [$h(1,k) = \lfloor \frac{k}{2} \rfloor + 2 \lfloor \frac{k-1}{2} \rfloor$.]{} 4. [For odd $k \geq 3$, $h(2,k) = 2k-2$, and for even $k \geq 2$, $h(2,k) = 2k-3$.]{} 5. [$h(\Delta,k) \leq g(\Delta,k) + k-1$.]{} <!-- --> 1. Clearly if $|G| \geq R(k,k)$ then $G$ has a vertex set $A$, $|A| \geq k$, such that the induced subgraph on $A$ is either a clique or an independent set. Hence deleting $V(G) \backslash A$ we are left with a regular graph on at least $k$ vertices, hence $G$ is $k$-feasible and $h(\Delta,k) \leq R(k,k)- 1$. 2. [$h(0,k) = k-1$ is trivially realised by $( k-1) K_1$, i.e. $k-1$ isolated vertices.]{} 3. A lower bound for $h(1,k)$ is $h(1,k) \geq \lfloor\frac{k}{2} \rfloor + 2 \lfloor \frac{k-1}{2} \rfloor$, realised by the graph $G = \lfloor \frac{k}{2} \rfloor K_1 \cup \lfloor \frac{k-1}{2} \rfloor K_2$, which is trivially seen to be non-$k$-feasible. Next suppose $G$ is a graph having $\lfloor\frac{k}{2} \rfloor + 2 \lfloor \frac{k-1}{2} \rfloor + j$ vertices, $j \geq 1$. Write $\lfloor\frac{k}{2} \rfloor + 2 \lfloor \frac{k-1}{2} \rfloor + j = x +2y$ where $x$ denotes the number of $0$-vertices and $2y$ the number of $1$-vertices in $G$. Now if $y > \lfloor \frac{k-1}{2} \rfloor$ we have at least $k$ vertices of degree 1 and we are done. If $0 \leq y \leq \lfloor \frac{k-1}{2} \rfloor$ then delete $y$ $1$-vertices, one from each copy of $K_2$, and we are left with at least $\lfloor\frac{k}{2} \rfloor + \lfloor \frac{k-1}{2} \rfloor + j \geq \lfloor\frac{k}{2} \rfloor + \lfloor \frac{k-1}{2} \rfloor + 1 = k$ vertices of degree 0.
Hence $G$ is $k$-feasible, and $h(1,k) = \lfloor\frac{k}{2} \rfloor + 2 \lfloor \frac{k-1}{2} \rfloor$. 4. Clearly $h(2,k) \leq 2k - 2$: if $G$ has $\Delta = 2$ and at least $2k- 1$ vertices, then by deleting at most $k- 1 = g(2,k)$ vertices we cannot get below $k$ vertices, so there must be an induced $H$ with at least $k$ vertices realising the maximum degree. Suppose $k$ **is odd** and $k \geq 3$. Consider the graph $G = \frac{k-1}{2}P_4$ ($\frac{k-1}{2}$ copies of the path on four vertices $P_4$). Clearly $|G| = 2k-2$, having exactly $k-1$ $2$-vertices and $k-1$ $1$-vertices. Observe that if $G$ is $k$-feasible then in at least one of the copies of $P_4$ we should be able to delete just one vertex so that the three remaining vertices have the same degree; otherwise, if in each copy of $P_4$ (or what remains of it after deleting some vertices) we have at most two vertices of the same degree, then over all of $G$ we have at most $k-1$ vertices of the same degree, meaning $G$ is not $k$-feasible. However, it is impossible to delete one vertex from $P_4$ so that all three remaining vertices have the same degree, hence $G$ is not $k$-feasible, proving $h(2,k) = 2k-2$ for odd $k \geq 3$. Suppose $k$ **is even**, $k \geq 2$. The case $k = 2$ is trivial, hence we assume $k \geq 4$. Consider the graph $G = \frac{k-2}{2}P_4 \cup K_1$. Clearly $|G| = 4\frac{k-2}{2} +1 = 2k-3$, having exactly $k-2$ $1$-vertices, $k-2$ $2$-vertices and one $0$-vertex. If $G$ were $k$-feasible then, by deleting the $0$-vertex $v$, $H = G-v$ would be $(k-1)$-feasible with $t = k -1 \geq 3$ odd. But $H$ is exactly the graph which was proved above to be non-$t$-feasible for odd $t \geq 3$, so $G$ is not $k$-feasible, proving $h(2,k) \geq 2k-3$ for even $k \geq 4$. It remains to show that if $|G| = 2k-2$ and $\Delta(G)=2$, then for even $k \geq 4$, $G$ is $k$-feasible; this will complete the proof that for even $k \geq 2$, $h(2,k) = 2k-3$.
Suppose on the contrary that $|G| = 2k-2$ and $\Delta(G) = 2$ but $G$ is not $k$-feasible. Let $n_j$, $j = 0,1,2$, be the number of vertices of degree $j$ in $G$. Since $G$ is non-$k$-feasible, by the value of $h(1,k)$ we may assume $1 \leq n_2 \leq k-1$. However $2k - 2 > h(2,k-1) = 2(k-1) -2 = 2k-4$. Hence $G$ is $(k-1)$-feasible. So either $n_2 = k-1$ or else, by removing at most $k -2$ vertices, we get an induced subgraph $H$, $|H| \geq k$, with at least $k-1$ vertices realising the maximum degree of $H$. If $\Delta( H) = 1$ then, since $k - 1$ is odd, parity forces at least $k$ $1$-vertices, but then $G$ is $k$-feasible. Otherwise $\Delta(H) = 0$, but $|H| \geq k$ and again $G$ is $k$-feasible. So only the case $n_2 = k-1$ is left. Since $n=2k-2$ and $n_2=k-1$, by parity $n_1 \leq k-2$ and $ n_0 \geq 1$. We collect the (possible) components of $G$ into three subgraphs: $A = \{$all isolated vertices and isolated edges$\}$, $B = \{$all copies of $K_{1,2}\}$, $C =\{$all other components$\}$. We denote by $t$ the number of copies of $K_{1,2}$ in $B$, and also observe that $t < n_2 = k-1$, since otherwise $|G| \geq 3k-3 > 2k-2=|G|$, a contradiction since $k \geq 2$. Also observe that in each component in $C$ the vertices of degree 2 induce either a path (including a single edge) or a cycle. Claim: If $t > \lfloor \frac{k-1}{2} \rfloor$ we are done. This is because $F$, the subgraph induced by the vertices of degree 2 in $C$, is not empty since $t < n_2$, and by the observation above $\delta(F) \geq 1$, hence by Ore’s result the domination number of $F$ satisfies $\gamma(F) \leq \lfloor \frac{n_2 - t }{2 } \rfloor \leq \lfloor \frac{k-1-t }{2 } \rfloor$. Let $D$ be a minimum dominating set for $F$. Delete $D$ from $C$ and a leaf from each copy of $K_{1,2}$ in $B$ to get an induced subgraph $H$ with $\Delta(H) = 1$ and with at least $2 (\lfloor \frac{k-1}{2 } \rfloor +1) \geq k$ vertices of degree 1, meaning $G$ is $k$-feasible.
Observe that we have deleted at most $$t + \lfloor \frac{n_2 - t }{2 } \rfloor \leq \frac{n_2+t }{2 } \leq \lfloor \frac{2n_2 -1 }{2 } \rfloor = \lfloor \frac{2k-3}{2} \rfloor = k-2$$ vertices, proving the claim. So assume $t \leq \lfloor \frac{k-1}{2} \rfloor$. Consider the subgraph $F$ induced by the vertices of degree 2 in $C$ and recall $ |F| > 0$, hence $|F| \geq 2$. Then, as we have noted before, due to the components of $C$ there are no isolated vertices in $F$, and by Ore’s result $\gamma(F) \leq \lfloor \frac{n_2 - t }{2 } \rfloor \leq \lfloor \frac{k -1 - t}{2 } \rfloor$. Let $D$ be a dominating set for $F$ that realises $\gamma(F)$, hence $|D|\leq \frac{ n_2 - t}{2}$. Delete $D$ and consider the induced subgraph $H$ on $A \cup C$. Clearly $\Delta(H) \leq 1$; denote by $x(1)$ the number of vertices of degree 1 in $H$. Now let us look again at $B$. Case 1: $t = 0$. Since $t = 0$, $B$ is empty, and we have deleted $|D| \leq \lfloor \frac{n_2}{2} \rfloor \leq \lfloor \frac{ k-1}{2 } \rfloor = \frac{k-2}{2}$ vertices, since $k$ is even. So the number of remaining vertices is at least $2k-2 - \frac{k-2}{2} = \frac{3k-2}{2}$. But as $k$ is even, $ h(1,k) = \lfloor \frac{k}{2 } \rfloor + 2\lfloor \frac{k-1}{2} \rfloor= \frac{ k}{2} + \frac{2(k-2)}{2} = \frac{3k - 4}{2} < \frac{3k-2}{2}$, hence $H$ is $k$-feasible and so $G$ is $k$-feasible. Case 2: $1 \leq t \leq \lfloor\frac{k-1}{2 } \rfloor$. We consider two cases: 1. [If $x(1) \geq k - 2t$ then, deleting a leaf from every copy of $K_{1,2}$ in $B$, we get an induced graph $H^*$ on $A \cup B \cup C$ (extending $H$ to the leftover of $B$) with $\Delta(H^*) = 1$ and at least $k- 2t +2t = k$ vertices of degree 1, and we are done as we have deleted altogether $|D| + t \leq \frac{n_2 - t }{2} +t = \frac{n_2 +t}{2} \leq k-2$ vertices (as before). Hence $G$ is $k$-feasible. ]{} 2.
If $x(1) \leq k - 1 -2t$ (recall $x(1)$ is the number of vertices of degree 1 in $H$ formed from $A \cup \{ C \backslash D \}$), then by the even parity of $x(1)$, and as $k$ is even, we must have $x(1) \leq k-2-2t$. Now delete $\frac{x(1)}{2}$ independent vertices of degree 1 in $H$, and $t$ vertices of degree 2 in $B$, to get an induced subgraph $H^*$ with $\Delta(H^*) = 0$. We have removed $$\frac{x(1)}{2} + |D| + t \leq \frac{k-2-2t}{2} + \frac{k-1-t}{2} + t = \frac{2k - 3 -t }{2} \leq \frac{2k-4}{2 }= k-2$$ vertices (since $t \geq 1$), hence $|H^*| \geq k$ and we have $k$ vertices of degree 0 realizing $\Delta(H^*)$. Hence $H^*$ is $k$-feasible and so is $G$, completing the proof. 5. Suppose $|G|= g(\Delta,k) +k$ and $\Delta(G) = \Delta$. Then by the definition of $g(\Delta,k)$, by deleting at most $g(\Delta,k)$ vertices we either get below $k$ vertices or obtain an induced subgraph $H$ with at least $k$ vertices realizing the maximum degree of $H$. But deleting $g(\Delta,k)$ vertices from $G$ will leave us with a graph on at least $k$ vertices, hence the second possibility above holds and $G$ is $k$-feasible, and we conclude that $h(\Delta,k) \leq g(\Delta,k) + k-1$. Open Problems ============= We conclude by proposing the following open problems: 1. [Certainly the most intriguing problem is to solve the Caro-Yuster conjecture that $f(n,k) \leq f(k) \sqrt{n}$. As mentioned, we proved that $ f(2) = \sqrt{2}$ is sharp and best possible, and it is known that $f(3) \leq 43$. For $k \geq 4$ the conjecture remains open. Even a proof that $f(n,k) =o(n)$ is of interest.]{} 2. [Theorem \[theorem\_1\] supplies an $O(n^2)$ algorithm to compute $f(G)$. Can $f_3(G) $ be computed in polynomial time?]{} 3. We have calculated, in Section 4, the exact values of $g(\Delta,k)$ for $\Delta= 0,1,2$, and we have given a general constructive lower bound for $g(\Delta,k)$.
Determining $g(3,k)$ seems a considerably more involved task, as does proving a conjecture inspired by the Caro-Yuster conjecture, namely: for $k \geq 2$ there is a constant $g(k)$ such that $g(\Delta,k) \leq g(k) \sqrt{\Delta}$. This conjecture, if true, implies the Caro-Yuster conjecture. 4. [We introduced the notion of a $k$-feasible graph and the corresponding function $h(\Delta,k)$ discussed in Section 4. We have determined the exact values of $h(\Delta,k)$ for $\Delta = 0,1,2$. We pose the problem of determining further exact values of $h(\Delta,k)$, in particular for $\Delta = 3$, as well as of determining $h(k) = \max\{ h(\Delta,k) : \Delta \geq 0\}$. Clearly, as already proved in Section 4, $h(k) \leq R(k,k) -1$.]{} 5. Lastly, we mention again the conjecture about forests: if $F$ is a forest on $n$ vertices, where $n \leq \frac{t^3 + 6t^2 +17t +12}{6}$, then $f(F) \leq t$, and this bound is sharp.
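For reference, the conjectured forest threshold $\frac{t^3+6t^2+17t+12}{6}$ can be tabulated for small $t$; the following Python sketch (illustrative only) does so, and also checks that the bound is always an integer. The graph $T_2$ mentioned earlier, with 14 vertices and $f(T_2)=3$, has exactly one vertex more than the $t=2$ value.

```python
# Tabulate the conjectured forest threshold n(t) = (t^3 + 6t^2 + 17t + 12)/6.
# The numerator is always divisible by 6, since t^3 - t is, and 18t + 12 is.
def n_max(t):
    num = t**3 + 6 * t**2 + 17 * t + 12
    assert num % 6 == 0          # the conjectured bound is always an integer
    return num // 6

print([n_max(t) for t in range(1, 6)])   # [6, 13, 24, 40, 62]
```

In particular $n_{\max}(2) = 13$, so the 14-vertex tree $T_2$ with $f(T_2) = 3$ lies just past the $t = 2$ bound, as sharpness requires.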
--- abstract: 'We prove extensions of classical concentration inequalities for random variables which have $\alpha$-subexponential tail decay for any $\alpha \in (0,2]$. This includes Hanson–Wright type and convex concentration inequalities. We also provide some applications of these results. This includes uniform Hanson–Wright inequalities and concentration results for simple random tensors in the spirit of [@KZ18] and [@Ver19].' address: 'Faculty of Mathematics, Bielefeld University, Bielefeld, Germany' author: - Holger Sambale bibliography: - 'literature.bib' title: 'Some notes on concentration for $\alpha$-subexponential random variables' --- Introduction ============ The concentration of measure phenomenon is by now a well-established part of probability theory, see e.g. the monographs [@Led01], [@BLM13] or [@Ver18]. A central feature is to study the tail behaviour of functions of random vectors $X = (X_1, \ldots, X_n)$, i.e. to find suitable estimates for $\mathbb{P} (|f(X) - \mathbb{E}f(X)| \ge t)$. Here, numerous situations have been considered, e.g. assuming independence of $X_1, \ldots, X_n$ or in presence of some classical functional inequalities for the distribution of $X$ like Poincaré or logarithmic Sobolev inequalities. Let us recall some fundamental results which will be of particular interest for the present paper. One of them is the classical convex concentration inequality as first established in [@Tal88], [@JS91] and later [@Led95/97]. Assuming $X_1, \ldots, X_n$ are independent random variables each taking values in some bounded interval $[a,b]$, we have that for every convex Lipschitz function $f \colon [a,b]^n \to \mathbb{R}$ with Lipschitz constant $1$, $$\label{convconc} \mathbb{P}(|f(X) - \mathbb{E}f(X)| > t) \le 2 \exp\Big(-\frac{t^2}{2(b-a)^2}\Big)$$ for any $t \ge 0$ (see e.g. [@Sa00], Corollary 3). 
One remarkable feature of \[convconc\] is that under the additional assumption of convexity, no further information about the distribution of $X_1, \ldots, X_n$ is needed to obtain subgaussian tails for Lipschitz functions. While the convex concentration inequality, as most basic concentration inequalities, addresses Lipschitz functions, there are many situations of interest in which the functionals under consideration are not Lipschitz, or have Lipschitz constants which grow as the dimension increases even after a proper renormalization. A typical example is given by quadratic forms. For quadratic forms, a famous concentration result is the Hanson–Wright inequality, which was first established in [@HW71]. We may state it as follows: assuming $X_1, \ldots, X_n$ are centered, independent random variables satisfying $\norm{X_i}_{\Psi_2} \le K$ for any $i$ (i.e. the $X_i$ are subgaussian), and $A = (a_{ij})$ is a symmetric matrix, we have for any $t \ge 0$ $$\operatorname{\mathbb{P}}\big(\abs{\sum_{i,j} a_{ij} X_i X_j - \sum_{i = 1}^n a_{ii} \operatorname{\mathbb{E}}X_i^2} \ge t\big) \le 2 \exp\Big( - \frac{1}{C} \min\Big( \frac{t^2}{K^4 \norm{A}_{\mathrm{HS}}^2}, \frac{t}{K^2 \norm{A}_{\mathrm{op}}} \Big)\Big).$$ Here, $\norm{\cdot}_{\Psi_2}$ denotes the Orlicz norm of order $2$ (for the definition, see below), and $C > 0$ is some absolute constant. For a modern proof, see [@RV13], and for various developments, cf. [@HKZ12; @VW15; @Ad15; @ALM18]. In the present note, we compile a number of smaller results which extend some of the inequalities above to more general situations. In particular, this means removing boundedness or subgaussianity to allow for slightly heavier (even if still exponentially decaying) tails. In detail, we shall consider random variables $X_i$ which satisfy $$\mathbb{P}(|X_i-\mathbb{E}X_i| \ge t) \le 2 \exp(-ct^\alpha)$$ for any $t \ge 0$, some $\alpha \in (0,2]$ and a suitable constant $c > 0$.
Such random variables are sometimes called $\alpha$-subexponential (for $\alpha = 2$, they are subgaussian), or sub-Weibull$(\alpha)$ (cf. [@KC18 Definition 2.2]). Equivalently, these random variables have finite Orlicz norms of order $\alpha$. Here we recall that for a probability space $(\Omega, \mathcal{A}, \mathbb{P})$ and any $\alpha \in (0,2]$, the Orlicz norm of a random variable $X$ is defined by $$\label{ON} \lVert X \rVert_{\Psi_\alpha} \coloneqq \inf \{t > 0 \colon \mathbb{E} \exp ((|X|/t)^\alpha) \le 2 \}.$$ If $\alpha < 1$, this is actually a quasi-norm; however, many norm-like properties can nevertheless be recovered up to $\alpha$-dependent constants (see e.g. [@GSS19], Appendix A). In particular, let us recall the triangle-type inequality $\lVert X+Y \rVert_{\Psi_\alpha} \le 2^{1/\alpha}(\lVert X \rVert_{\Psi_\alpha} + \lVert Y \rVert_{\Psi_\alpha})$ and the fact that for any $\alpha > 0$, $$d_\alpha \sup_{p \ge 1} \frac{\norm{X}_{L^p}}{p^{1/\alpha}} \le \norm{X}_{\Psi_\alpha} \le D_\alpha \sup_{p \ge 1} \frac{\norm{X}_{L^p}}{p^{1/\alpha}}$$ for suitable constants $0 < d_\alpha < D_\alpha < \infty$ (these are Lemmas A.2 and A.3 in [@GSS19], respectively). In the sequel, whenever some of these properties are needed, we will only refer to the case of $\alpha \in (0,1)$, since we assume the case of $\alpha \ge 1$ to be well-known anyway. Let us recall some classical concentration results for $\alpha$-subexponential random variables. In fact, such random variables have log-convex (if $\alpha \le 1$) or log-concave (if $\alpha \ge 1$) tails, i.e. $t \mapsto - \log \mathbb{P}(\abs{X} \ge t)$ is convex or concave, respectively. For log-convex or log-concave measures, two-sided $L^p$ norm estimates for polynomial chaos (from which concentration bounds can easily be derived) have been established over the last 25 years. In the log-convex case, results of this type have been derived for linear forms in [@HMO97] and for any order in [@KL15] and also [@GSS19].
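To make the Orlicz norm concrete: for a standard exponential random variable $X$ one has $\mathbb{E}\exp(|X|/t) = t/(t-1)$ for $t > 1$, so solving $t/(t-1) = 2$ gives $\lVert X \rVert_{\Psi_1} = 2$. The following Python sketch (purely illustrative, not part of the paper) recovers this value by bisection on the defining inequality.

```python
# Orlicz norm of X ~ Exp(1) for alpha = 1: find the smallest t with
# E exp(|X|/t) <= 2, using the closed form E exp(|X|/t) = t/(t-1), t > 1.
def mgf(t):
    return t / (t - 1.0)      # decreasing in t on (1, infinity)

lo, hi = 1.0 + 1e-9, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mgf(mid) > 2.0:        # still above 2: the norm must be larger
        lo = mid
    else:
        hi = mid
print(round(hi, 6))           # 2.0
```

This matches the defining infimum: $t/(t-1) \le 2$ exactly when $t \ge 2$.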
For log-concave measures, starting with linear forms again in [@GK95], important contributions have been achieved in [@La96], [@La99], [@LL03] and [@AL12]. Before we state our main results, let us introduce some notation which we will use in this paper. **Notations.** If $X_1, \ldots, X_n$ is a set of random variables, we denote by $X = (X_1, \ldots, X_n)$ the corresponding random vector. Moreover, we shall need certain types of norms throughout the paper. These are: - the norms $\lVert x \rVert_p \coloneqq (\sum_{i=1}^n\abs{x_i}^p)^{1/p}$ for $x \in \mathbb{R}^n$, - the $L^p$ norms $\lVert X \rVert_{L^p} := (\mathbb{E}|X|^p)^{1/p}$ for random variables $X$, - the Orlicz (quasi-) norms $\lVert X \rVert_{\Psi_\alpha}$ as introduced in \[ON\], - the Hilbert–Schmidt and operator norms $\lVert A \rVert_\mathrm{HS} \coloneqq (\sum_{i,j} a_{ij}^2)^{1/2}$, $\lVert A \rVert_\mathrm{op} \coloneqq \sup\{ \lVert Ax \rVert \colon \lVert x \rVert = 1\}$ for matrices $A = (a_{ij})$. All the constants appearing in this paper depend on $\alpha$ only. Main results ------------ Our first result is an extension of the Hanson–Wright inequality to random variables with bounded Orlicz norms of any order $\alpha \in (0,2]$. This complements the results in [@GSS19], where the case of $\alpha \in (0,1]$ was considered. Note that for $\alpha = 2$, we get back the actual Hanson–Wright inequality. \[HWalpha\] For any $\alpha \in (0,2]$ there exists a constant $C = C(\alpha)$ such that the following holds. Let $X_1, \ldots, X_n$ be centered, independent random variables satisfying $\norm{X_i}_{\Psi_\alpha} \le K$ for any $i$, and $A = (a_{ij})$ be a symmetric matrix.
For any $t \ge 0$ we have $$\operatorname{\mathbb{P}}\big(\abs{\sum_{i,j} a_{ij} X_i X_j - \sum_{i = 1}^n a_{ii} \operatorname{\mathbb{E}}X_i^2} \ge t\big) \le 2 \exp\Big( - \frac{1}{C} \min\Big( \frac{t^2}{K^4 \norm{A}_{\mathrm{HS}}^2}, \Big( \frac{t}{K^2 \norm{A}_{\mathrm{op}}}\Big)^{\frac \alpha 2} \Big)\Big).$$ To prove Theorem \[HWalpha\] for $\alpha \in (1,2]$, a central task is to evaluate the family of norms used in [@AL12]. In this sense, Theorem \[HWalpha\] (again, for $\alpha \in (1,2]$) is a simplification of the results from [@AL12]. The benefit of Theorem \[HWalpha\] is that it uses norms which are easily calculable and often already sufficient for applications. Theorem \[HWalpha\] generalizes and implies a number of inequalities for quadratic forms in $\alpha$-subexponential random variables (in particular for $\alpha = 1$) which are spread throughout the literature, cf. e.g. [@NSU18 Lemma A.6], [@YangEtAl17 Lemma C.4] or [@EYY12 Appendix B]. For a detailed discussion, see [@GSS19 Remark 1.6]. Moreover, we prove a version of the convex concentration inequality for random variables with bounded Orlicz norms. While convex concentration for bounded random variables is by now standard, there is less literature for unbounded random variables. In [@Mar18], a martingale-type approach is used, leading to a result for functionals with stochastically bounded increments. The special case of suprema of unbounded empirical processes was treated in [@Ad08], [@LV13] and [@LV14]. In [@KZ18 Lemma 1.4], a general convex concentration inequality for subgaussian random variables ($\alpha =2$) was proven. We may extend the latter to any order $\alpha \in (0,2]$: \[prop-alphafl\] Let $X_1, \ldots, X_n$ be independent random variables, $\alpha \in (0,2]$ and $f \colon \mathbb{R}^n \to \mathbb{R}$ convex and $1$-Lipschitz.
Then, for any $t \ge 0$ and some numerical constant $c > 0$, $$\mathbb{P}(|f(X) - \mathbb{E}f(X)| > t) \le 2 \exp\Big(-\frac{ct^\alpha}{\lVert \max_i |X_i| \rVert_{\Psi_\alpha}^\alpha}\Big).$$ In particular, $$\label{orliczformulation} \lVert f(X) - \mathbb{E}f(X) \rVert_{\Psi_\alpha} \le C \lVert \max_i |X_i| \rVert_{\Psi_\alpha}.$$ As in [@KZ18], we can actually prove a deviation inequality for separately convex functions. We defer the details to Section \[Section:convconc\]. The dependency on the Orlicz norm of $\max_i\abs{X_i}$ is optimal in the sense that it cannot be replaced by the maximum of the Orlicz norms of the random variables $X_i$. In general, the Orlicz norm of $\max_i\abs{X_i}$ will be of order $(\log n)^{1/\alpha}$ (cf. Lemma \[maxOrl\]). With the help of the results from above, we are able to prove concentration results in slightly more advanced situations. One example is given by uniform Hanson–Wright inequalities, i.e. we do not consider a single quadratic form but the supremum of a family of quadratic forms. A pioneering result (for Rademacher variables) can be found in [@Tal96a]. Later results include [@Ad15] (which requires the so-called concentration property), [@KMR14], [@DE17] and [@GSS18b] (certain classes of weakly dependent random variables). In [@KZ18], a uniform Hanson–Wright inequality for subgaussian random variables was proven. We may show a similar result for random variables with bounded Orlicz norms of any order $\alpha \in (0,2]$. \[unifHW\] Let $X = (X_1, \ldots, X_n)$ be a random vector with independent centered components and $K \coloneqq \norm{\max_i \abs{X_i}}_{\Psi_\alpha}$, where $\alpha \in (0,2]$. Let $\mathcal{A}$ be a compact set of real symmetric $n \times n$ matrices, and let $f(X) \coloneqq \sup_{A \in \mathcal{A}} (X^TAX - \mathbb{E}X^TAX)$.
Then, for any $t \ge 0$, $$\mathbb{P}(f(X) - \mathbb{E}f(X) \ge t) \le 2\exp\Big(-\frac{C}{K^\alpha}\min\Big(\frac{t^\alpha}{(\mathbb{E}\sup_{A\in\mathcal{A}}\norm{AX}_2)^\alpha}, \frac{t^{\alpha/2}}{\sup_{A\in\mathcal{A}}\norm{A}_{\mathrm{op}}^{\alpha/2}}\Big)\Big),$$ where $C > 0$ is some absolute constant. For $\alpha = 2$, this gives back [@KZ18 Theorem 1.1] (up to constants and a different range of $t$). Comparing Theorem \[unifHW\] to Theorem \[HWalpha\], we note that instead of a subgaussian term, we obtain an $\alpha$-subexponential term. Moreover, Theorem \[unifHW\] only gives a bound for the upper tails. Therefore, if $\mathcal{A}$ just consists of a single matrix, Theorem \[HWalpha\] is stronger. These differences are due to technical reasons. With the help of both Theorem \[HWalpha\] and Proposition \[prop-alphafl\], we may also extend a recent concentration result by [@Ver19] for simple random tensors. Here, a *simple random tensor* is a random tensor of the form $$\label{SRT} X \coloneqq X_1 \otimes \cdots \otimes X_d = (X_{1,i_1} \cdots X_{d,i_d})_{i_1, \ldots, i_d} \in \mathbb{R}^{n^d},$$ where all $X_k$ are independent random vectors in $\mathbb{R}^n$ whose coordinates are independent random variables with mean zero and variance one. Concentration results for random tensors (typically for polynomial-type functions) have been achieved in [@La06; @AW15; @GSS19], for instance. In comparison to these results, the inequalities proven in [@Ver19] focus on *small* values of $t$, e.g. a regime where subgaussian tail decay holds. Moreover, while in previous papers the constants appearing in the concentration bounds depend on $d$ in some manner which is presumably not optimal (and often not even explicitly stated), [@Ver19] provides constants with optimal dependence on $d$.
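As a quick illustration of the definition of a simple random tensor, such a tensor can be assembled by iterated outer products. The sketch below is a hedged numerical example: the Rademacher ($\pm 1$) coordinates are an arbitrary choice of a mean-zero, variance-one law, and with this choice $\lVert X \rVert_2 = n^{d/2}$ exactly, since the Euclidean norm factorizes over the tensor product.

```python
import numpy as np

# Minimal sketch of a simple random tensor X = X_1 (x) ... (x) X_d with
# independent mean-zero, variance-one coordinates.  Rademacher entries
# are an arbitrary illustrative choice.
rng = np.random.default_rng(1)
n, d = 5, 3
vectors = [rng.choice([-1.0, 1.0], size=n) for _ in range(d)]

X = vectors[0]
for v in vectors[1:]:
    # X[i1, ..., ik] = X_{1,i1} * ... * X_{k,ik}
    X = np.multiply.outer(X, v)

assert X.shape == (n,) * d                   # X lives in R^{n^d}
# the norm factorizes: ||X||_2 = prod_k ||X_k||_2 = n^{d/2} for +-1 entries
assert np.isclose(np.linalg.norm(X), n ** (d / 2))
```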
One of these results is the following convex concentration inequality: assuming that $n$ and $d$ are positive integers, $f \colon \mathbb{R}^{n^d} \to \mathbb{R}$ is convex and 1-Lipschitz and the $X_{ij}$ are bounded a.s., then for any $t \in [0, 2 n^{d/2}]$, $$\label{V1.3} \mathbb{P}(|f(X) - \mathbb{E}f(X)| > t) \le 2 \exp \Big(- \frac{ct^2}{dn^{d-1}}\Big),$$ where $c > 0$ only depends on the bound of the coordinates. We may extend this result to unbounded random variables as follows: \[V1.3subg\] Let $n, d \in \mathbb{N}$ and $f \colon \mathbb{R}^{n^d} \to \mathbb{R}$ be convex and 1-Lipschitz. Consider a simple random tensor $X \coloneqq X_1 \otimes \cdots \otimes X_d$ as in . Fix $\alpha \in [1,2]$, and assume that $\lVert X_{i,j} \rVert_{\Psi_\alpha} \le K$. Then, for any $t \in [0, C n^{d/2} (\log n)^{1/\alpha}/K]$, $$\mathbb{P}(|f(X) - \mathbb{E}f(X)| > t) \le 2 \exp\Big(-c\Big( \frac{t}{d^{1/2} n^{(d-1)/2} (\log n)^{1/\alpha} K}\Big)^\alpha\Big),$$ where $c > 0$ is some absolute constant. On the other hand, if $\alpha \in (0,1)$, then, for any $t \in [0, C n^{d/2} (\log n)^{1/\alpha}d^{1/\alpha-1/2}/K]$, $$\mathbb{P}(|f(X) - \mathbb{E}f(X)| > t) \le 2 \exp\Big(-c\Big( \frac{t}{d^{1/\alpha}n^{(d-1)/2} (\log n)^{1/\alpha} K}\Big)^\alpha\Big).$$ The logarithmic factor stems from the Orlicz norm of $\max_i |X_i|$ in Proposition \[prop-alphafl\]. In Section \[Section:Pfs\] we will show a slightly more general result from which we may get back . We believe that Theorem \[V1.3subg\] is non-optimal for $\alpha<1$ in the sense that we would expect a bound of the same type as for $\alpha \in [1,2]$. However, a key difference in the proofs is that in the case of $\alpha \ge 1$ we can make use of moment-generating functions. This is clearly not possible if $\alpha < 1$, so that less subtle estimates must be invoked instead. Note that Theorem \[V1.3subg\] in particular yields a result for so-called “Euclidean functions”, i.e. 
functions of the form $f(X) = \lVert AX \rVert_H$ for some linear operator $A \colon (\mathbb{R}^{n^d}, \lVert \cdot \rVert_2) \to H$, where $H$ is a Hilbert space. For $\alpha=2$, such functions have been considered in [@Ver19], proving concentration around their $L^2$ norm (which equals $\norm{A}_{\mathrm{HS}}$). Theorem \[V1.3subg\] complements these bounds by results for $\alpha < 2$, though centering around the expected value. Extending the methods used in [@Ver19] for proving bounds around the $L^2$ norm (which appears favorable in some applications) seems difficult, however, since a key step is working with the moment-generating function of $f^2$, which again will not exist in general if $\alpha < 2$. As suggested in [@Ver19], with the same methodology it is possible to extend well-known concentration inequalities for random tensors. For example, we may consider situations where the random vectors $X_1, \ldots, X_d$ satisfy some classical functional inequalities like a Poincaré or a logarithmic Sobolev inequality. Let us recall that a random vector $Y$ taking values in $\mathbb{R}^n$ satisfies a Poincaré inequality with constant $\sigma^2 > 0$ if for all sufficiently smooth functions $f$ we have $$\label{PI} \mathrm{Var}(f(Y)) \le \sigma^2\mathbb{E}|\nabla f(Y)|^2,$$ where $\mathrm{Var}(f(Y)) \coloneqq \mathbb{E}f^2(Y) - (\mathbb{E}f(Y))^2$ denotes the usual variance. Moreover, $Y$ satisfies a logarithmic Sobolev inequality with constant $\sigma^2 > 0$ if for all sufficiently smooth functions $f$ we have $$\label{LSI} \mathrm{Ent}(f^2(Y)) \le 2\sigma^2\mathbb{E}|\nabla f(Y)|^2,$$ where $\mathrm{Ent}(f(Y)) \coloneqq \mathbb{E}f(Y)\log(f(Y)) - \mathbb{E}f(Y)\log(\mathbb{E}f(Y))$ for every measurable $f \ge 0$. It is well-known that logarithmic Sobolev inequalities imply Poincaré inequalities with the same constant $\sigma^2$. We now have the following result: \[LSIten\] Let $n, d \in \mathbb{N}$ and $f \colon \mathbb{R}^{n^d} \to \mathbb{R}$ be 1-Lipschitz.
Consider a simple random tensor $X \coloneqq X_1 \otimes \cdots \otimes X_d$ as in . 1. Assuming that every $X_i$ satisfies a Poincaré inequality with constant $\sigma^2 > 0$, for any $t \in [0, C n^{d/2} \sigma]$, $$\mathbb{P}(|f(X) - \mathbb{E}f(X)| > t) \le 2 \exp\Big(- \frac{ct}{d^{1/2}n^{(d-1)/2} \sigma}\Big).$$ 2. Assuming that every $X_i$ satisfies a logarithmic Sobolev inequality with constant $\sigma^2 > 0$, for any $t \in [0, C n^{d/2} \sigma]$, $$\mathbb{P}(|f(X) - \mathbb{E}f(X)| > t) \le 2 \exp\Big(- \frac{ct^2}{dn^{d-1} \sigma^2}\Big).$$ Here, by $c > 0$ we denote some absolute constants. Note that due to , the $X_i$ must have independent coordinates $X_{i,j}$ in particular. By the tensorization property of Poincaré and logarithmic Sobolev inequalities, this means that we essentially require all the $X_{i,j}$ to satisfy or . For instance, if the $X_i$ have standard Gaussian distributions, all the conditions from Theorem \[LSIten\] (2) are satisfied. Overview -------- In Section \[Section:HWI\], we prove Theorem \[HWalpha\]. Moreover, we provide an application to the fluctuations of a linear transformation of a random vector with independent $\alpha$-subexponential entries around the Hilbert–Schmidt norm of the transformation matrix. In Section \[Section:convconc\], we give the proof of Proposition \[prop-alphafl\] and discuss some smaller applications, while in Section \[Section:unifHW\] we prove Theorem \[unifHW\]. Section \[Section:AuxL\] provides a number of auxiliary lemmas partly of smaller scope. They are needed for the proof of Theorems \[V1.3subg\] and \[LSIten\], which finally follows in Section \[Section:Pfs\]. Acknowledgements ---------------- This work was supported by the German Research Foundation (DFG) via CRC 1283 “*Taming uncertainty and profiting from randomness and low regularity in analysis, stochastics and their applications*”. 
The author would moreover like to thank Arthur Sinulis for carefully reading this paper and for many fruitful discussions and suggestions. A generalized Hanson–Wright inequality {#Section:HWI} ====================================== In this section, we prove Theorem \[HWalpha\], i.e. the generalized Hanson–Wright inequality. In what follows, we actually show that for any $p \ge 2$, $$\norm{\sum_{i,j} a_{ij} X_i X_j - \sum_{i = 1}^n a_{ii} \operatorname{\mathbb{E}}X_i^2}_{L^p} \le CK^2 \big(p^{1/2} \norm{A}_{\mathrm{HS}} + p^{2/\alpha} \norm{A}_{\mathrm{op}} \big).$$ From here, Theorem \[HWalpha\] follows by standard means (cf. [@SS18 Proof of Theorem 3.6]). Before we start, let us introduce some basic facts and conventions with respect to the constants appearing in the proofs which we will use throughout the paper. **Adjusting constants.** In all of the proofs, we will make use of certain constants $C$ or $c$ depending on $\alpha$ only. These constants may and will vary from line to line. To adjust constants in the tail bounds we derive, it is often convenient to make use of the following elementary inequality (cf. e.g. (3.1) in [@SS19]): for any two constants $c_1 > c_2 > 1$ we have for all $r \ge 0$ and $c > 0$ $$\label{eqn:constantAdjustment} c_1 \exp(-c r) \le c_2 \exp\Big(-\frac{\log(c_2)}{\log(c_1)} cr\Big)$$ whenever the left-hand side is smaller than or equal to $1$. Finally, note that for any $\alpha \in (0,2)$, any $\gamma > 0$ and all $t \ge 0$, we may always estimate $$\label{from2toalpha} \exp(-(t/\gamma)^2) \le 2 \exp(-c (t/\gamma)^\alpha),$$ using $\exp(-s^2) \le \exp(1-s^\alpha)$ for any $s > 0$ and . As the case $\alpha \in (0,1]$ has been proven in [@GSS19], we assume $\alpha \in (1,2]$. First we shall treat the off-diagonal part of the quadratic form. Let $w^{(1)}_i, w^{(2)}_i$ be independent (of each other as well as of the $X_i$) symmetrized Weibull random variables with scale $1$ and shape $\alpha$, i.e.
$w^{(j)}_i$ are symmetric random variables with $\mathbb{P}(\abs{w^{(j)}_i} \ge t) = \exp(-t^\alpha)$. In particular, the $w^{(j)}_i$ have logarithmically concave tails. Using standard decoupling arguments as well as [@AL12 Theorem 3.2] in the second inequality, there is a constant $C = C(\alpha)$ such that for any $p \ge 2$ it holds $$\label{eqn:LpInequality} \norm{\sum_{i \neq j} a_{ij} X_i X_j}_{L^p} \le C K^2 \norm{\sum_{i \neq j} a_{ij} w^{(1)}_i w^{(2)}_j}_{L^p} \le CK^2(\norm{A}_{\{1,2\},p}^{\mathcal{N}} + \norm{A}_{\{\{1\},\{2\}\}, p}^{\mathcal{N}}),$$ where the norms $\norm{A}_{\mathcal{J},p}^{\mathcal{N}}$ are defined as in [@AL12]. Instead of repeating the general definitions, we will only focus on the case we need in our situation. Indeed, for the symmetric Weibull distribution with parameter $\alpha$ we have (again, in the notation of [@AL12]) $N(t) = t^\alpha$, and so for $\alpha \in (1,2]$ $$\hat{N}(t) = \min(t^2, \abs{t}^\alpha).$$ So, the norms can be written as follows: $$\begin{aligned} \norm{A}_{\{1,2\},p}^{\mathcal{N}} &= 2 \sup\big\lbrace \sum_{i,j} a_{ij} x_{ij} : \sum_{i = 1}^n \min\big(\sum_{j} x_{ij}^2, \big(\sum_{j} x_{ij}^2\big)^{\alpha/2}\big) \le p \big\rbrace\\ \norm{A}_{\{\{1\},\{2\}\}, p}^{\mathcal{N}} &= \sup\big\lbrace \sum_{i,j} a_{ij} x_i y_j : \sum_{i = 1}^n \min(x_i^2, \abs{x_i}^{\alpha}) \le p, \sum_{j = 1}^n \min(y_j^2, \abs{y_j}^{\alpha}) \le p \big\rbrace. \end{aligned}$$ Before continuing with the proof, we next introduce a lemma which will help to rewrite the norms in a more tractable form. 
For any $p \ge 2$ define $$I_1(p) \coloneqq \big\lbrace x = (x_{ij}) \in \operatorname{\mathbb{R}}^{n \times n} : \sum_{i = 1}^n \min\big( \big( \sum_{j = 1}^n x_{ij}^2\big)^{\alpha/2}, \sum_{j = 1}^n x_{ij}^2 \big) \le p \big\rbrace$$ and $$I_2(p) = \big\lbrace x_{ij} = z_i y_{ij} \in \operatorname{\mathbb{R}}^{n \times n} : \sum_{i = 1}^n \min(\abs{z_i}^\alpha, z_i^2) \le p, \max_{i = 1,\ldots,n} \sum_{j = 1}^n y_{ij}^2 \le 1 \big\rbrace.$$ Then $I_1(p) = I_2(p)$. The inclusion $I_1(p) \supseteq I_2(p)$ is an easy calculation, and the inclusion $I_1(p) \subseteq I_2(p)$ follows by defining $z_i = \norm{(x_{ij})_j}$ and $y_{ij} = x_{ij} / \norm{(x_{ij})_j}$ (or $0$, if the norm is zero). For brevity, for any matrix $A = (a_{ij})$ let us write $\norm{A}_m \coloneqq \max_{i = 1,\ldots,n} ( \sum_{j = 1}^n a_{ij}^2 )^{1/2}$. Note that $\norm{A}_m \le \norm{A}_{\mathrm{op}}$, see for example [@CTP17]. Now, fix some vector $z \in \operatorname{\mathbb{R}}^n$ such that $\sum_{i = 1}^n \min(\abs{z_i}^\alpha, z_i^2) \le p$. The condition also implies $$p \ge \sum_{i = 1}^n \abs{z_i}^\alpha {\text{$\mathbbm{1}$}}_{\{\abs{z_i} > 1\}} + \sum_{i= 1}^n z_i^2 {\text{$\mathbbm{1}$}}_{\{\abs{z_i} \le 1\}} \ge \max\Big( \sum_{i=1}^n z_i^2 {\text{$\mathbbm{1}$}}_{\{\abs{z_i} \le 1\}}, \sum_{i = 1}^n \abs{z_i} {\text{$\mathbbm{1}$}}_{\{\abs{z_i} > 1\}} \Big),$$ where in the second step we used $\alpha \in [1,2]$ to estimate $\abs{z_i}^\alpha {\text{$\mathbbm{1}$}}_{\{\abs{z_i} > 1\}} \ge \abs{z_i} {\text{$\mathbbm{1}$}}_{\{\abs{z_i} > 1\}}$. 
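The last estimate can be spot-checked numerically; the sketch below is illustrative only and not part of the proof. It verifies that if $\sum_i \min(\abs{z_i}^\alpha, z_i^2) \le p$ with $\alpha \in [1,2]$, then both $\sum_{\abs{z_i} \le 1} z_i^2$ and $\sum_{\abs{z_i} > 1} \abs{z_i}$ are bounded by $p$ (the vectors $z$ are arbitrary random examples).

```python
import numpy as np

# For |z| <= 1 and alpha <= 2 the minimum is z^2; for |z| > 1 and alpha >= 1
# it is |z|^alpha >= |z|.  Hence both partial sums are dominated by p.
rng = np.random.default_rng(5)
for alpha in (1.0, 1.5, 2.0):
    z = 3 * rng.standard_normal(100)
    p = np.sum(np.minimum(np.abs(z) ** alpha, z ** 2))
    small, big = z[np.abs(z) <= 1], z[np.abs(z) > 1]
    assert np.sum(small ** 2) <= p + 1e-12
    assert np.sum(np.abs(big)) <= p + 1e-12
```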
So, given any $z$ and $y$ satisfying the conditions of $I_2(p)$, we can write $$\begin{aligned} \abs{\sum_{i,j} a_{ij} z_i y_{ij}} &\le \sum_{i = 1}^n \abs{z_i} \big( \sum_{j = 1}^n a_{ij}^2 \big)^{1/2} \big( \sum_{j = 1}^n y_{ij}^2 \big)^{1/2} \le \sum_{i = 1}^n \abs{z_i} \big( \sum_{j = 1}^n a_{ij}^2 \big)^{1/2} \\ &\le \sum_{i = 1}^n \abs{z_i} {\text{$\mathbbm{1}$}}_{\{\abs{z_i} \le 1\}} \big( \sum_{j = 1}^n a_{ij}^2 \big)^{1/2} + \sum_{i = 1}^n \abs{z_i} {\text{$\mathbbm{1}$}}_{\{\abs{z_i} > 1\}} \big( \sum_{j =1 }^n a_{ij}^2 \big)^{1/2} \\ &\le \norm{A}_{\mathrm{HS}} \big(\sum_{i = 1}^n z_i^2 {\text{$\mathbbm{1}$}}_{\{\abs{z_i} \le 1\}}\big)^{1/2} + \norm{A}_m \sum_{i = 1}^n \abs{z_i} {\text{$\mathbbm{1}$}}_{\{\abs{z_i} > 1\}}. \end{aligned}$$ So, this yields $$\label{eqn:norm1} \norm{A}_{\{1,2\},p}^{\mathcal{N}} \le 2p^{1/2} \norm{A}_{\mathrm{HS}} + 2p \norm{A}_m \le 2p^{1/2} \norm{A}_{\mathrm{HS}} + 2p \norm{A}_{\mathrm{op}}.$$ As for $\norm{A}_{\{\{1\},\{2\}\}, p}^{\mathcal{N}}$, we can use the decomposition $z = z_1 + z_2$, where $(z_1)_i = z_i {\text{$\mathbbm{1}$}}_{\{\abs{z_i} > 1\}}$ and $z_2 = z - z_1$, and obtain $$\begin{aligned} \norm{A}_{\{\{1\},\{2\}\}, p}^{\mathcal{N}} &\le \sup\big\lbrace \sum_{ij} a_{ij} (x_1)_i (y_1)_j : \norm{x_1}_\alpha \le p^{1/\alpha}, \norm{y_1}_\alpha \le p^{1/\alpha} \big\rbrace \\ &+ 2 \sup\big\lbrace \sum_{ij} a_{ij} (x_1)_i (y_2)_j : \norm{x_1}_\alpha \le p^{1/\alpha}, \norm{y_2}_2 \le p^{1/2} \big\rbrace \\ &+ \sup\big\lbrace\sum_{ij} a_{ij} (x_2)_i (y_2)_j : \norm{x_2}_2 \le p^{1/2}, \norm{y_2}_2 \le p^{1/2} \big\rbrace \\ &= p^{2/\alpha} \sup\lbrace \ldots \rbrace + 2p^{1/\alpha + 1/2} \sup\{\ldots\} + p \norm{A}_{\mathrm{op}}. \end{aligned}$$ (in the brackets, the conditions $\norm{\cdot}_\beta \le p^{1/\beta}$ have been replaced by $\norm{\cdot}_\beta \le 1$ for $\beta \in \{\alpha, 2\}$).
Clearly, since $\norm{x_1}_\alpha \le 1$ implies $\norm{x_1}_2 \le 1$ (and the same for $y_1$), all of the norms can be upper bounded by $\norm{A}_{\mathrm{op}}$, i.e. we have $$\label{eqn:norm2} \norm{A}_{\{\{1\},\{2\}\}, p}^{\mathcal{N}} \le (p^{2/\alpha} + 2p^{1/\alpha + 1/2} + p) \norm{A}_{\mathrm{op}} \le 4p^{2/\alpha} \norm{A}_{\mathrm{op}},$$ where the last inequality follows from $p \ge 2$ and $1/2 \le 1/\alpha \le 1 \le (\alpha+2)/(2\alpha) \le 2/\alpha$. Combining the estimates , and yields $$\begin{aligned} \norm{\sum_{i,j} a_{ij} X_i X_j}_{L^p} \le CK^2\big( 2p^{1/2} \norm{A}_{\mathrm{HS}} + 6p^{2/\alpha} \norm{A}_{\mathrm{op}} \big). \end{aligned}$$ To treat the diagonal terms, we use Corollary 6.1 in [@GSS19], as $X_i^2$ are independent and satisfy $\norm{X_i^2}_{\Psi_{\alpha/2}} \le K^2$, so that it yields $$\operatorname{\mathbb{P}}\big( \abs{\sum_{i = 1}^n a_{ii} (X_i^2 - \operatorname{\mathbb{E}}X_i^2)} \ge t\big) \le 2 \exp\Big( - \frac{1}{CK^2} \min\Big( \frac{t^2}{\sum_{i =1}^n a_{ii}^2}, \Big( \frac{t}{\max_{i = 1,\ldots,n} \abs{a_{ii}}}\Big)^{\alpha/2} \Big)\Big).$$ Now it is clear that $\max_{i = 1,\ldots, n} \abs{a_{ii}} \le \norm{A}_{\mathrm{op}}$ and $\sum_{i = 1}^n a_{ii}^2 \le \norm{A}_{\mathrm{HS}}^2$. In particular, for some constant $C > 0$ $$\norm{\sum_{i = 1}^n a_{ii} (X_i^2 - \operatorname{\mathbb{E}}X_i^2)}_{L^p} \le CK^2(p^{1/2} \norm{A}_{\mathrm{HS}} + p^{2/\alpha} \norm{A}_{\mathrm{op}}).$$ The claim now follows from Minkowski’s inequality. A number of selected applications of the Hanson–Wright inequality can be found in [@RV13]. Some of them were generalized to $\alpha$-subexponential random variables with $\alpha \le 1$ in [@GSS19]. In general, it is no problem to extend these proofs to any order $\alpha \in (0,2]$ using Theorem \[HWalpha\]. At this point, we just focus on a single example which we will need for the proof of Theorem \[V1.3subg\]. 
In detail, we show a concentration result for the Euclidean norm of a linear transformation of a vector $X$ having independent components with bounded Orlicz norms around the Hilbert–Schmidt norm of the transformation matrix. This extends and, by a slightly more elaborate approach, even sharpens [@GSS19 Proposition 2.1]. \[proposition:EuclideanNormVector\] Let $X_1, \ldots, X_n$ be independent random variables satisfying $\operatorname{\mathbb{E}}X_i = 0, \operatorname{\mathbb{E}}X_i^2 = 1, \norm{X_i}_{\Psi_\alpha} \le K$ for some $\alpha \in (0,2]$ and let $B \neq 0$ be an $m \times n$ matrix. For any $t \ge 0$ we have $$\label{ENVr} \operatorname{\mathbb{P}}(\abs{\norm{BX}_2 - \norm{B}_{\mathrm{HS}}} \ge t K^2 \norm{B}_{\mathrm{op}}) \le 2\exp(-c t^\alpha ).$$ In particular, for any $t \ge 0$ it holds $$\label{nov} \operatorname{\mathbb{P}}( \abs{\norm{X}_2 - \sqrt{n}} \ge tK^2) \le 2\exp(-c t^\alpha).$$ It suffices to prove this for matrices satisfying $\norm{B}_{\mathrm{HS}} = 1$, as otherwise we set $\widetilde{B} = B \norm{B}_{\mathrm{HS}}^{-1}$ and use the equality $$\{ \abs{\norm{BX}_2 - \norm{B}_{\mathrm{HS}}} \ge \norm{B}_{\mathrm{op}} t \} = \{ \abs{\norm{\widetilde{B}X}_2 - 1} \ge \norm{\widetilde{B}}_{\mathrm{op}} t \}.$$ Now let us apply Theorem \[HWalpha\] to the matrix $A \coloneqq B^T B$.
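The elementary facts about $A = B^T B$ that enter the following computation can be spot-checked numerically; this is a hedged sketch with an arbitrary random matrix, not part of the argument.

```python
import numpy as np

# For A = B^T B: trace(A) = ||B||_HS^2, ||A||_HS <= ||B||_op ||B||_HS,
# and ||A||_op = ||B||_op^2 (the largest singular value squared).
rng = np.random.default_rng(0)
B = rng.standard_normal((30, 20))
A = B.T @ B

hs_B = np.linalg.norm(B, 'fro')   # Hilbert-Schmidt norm of B
op_B = np.linalg.norm(B, 2)       # operator norm of B

assert np.isclose(np.trace(A), hs_B ** 2)
assert np.linalg.norm(A, 'fro') <= op_B * hs_B + 1e-9
assert np.isclose(np.linalg.norm(A, 2), op_B ** 2)
```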
An easy calculation shows that $\mathrm{trace}(A) = \mathrm{trace}(B^T B) = \norm{B}_{\mathrm{HS}}^2 = 1$, so that we have for any $t \ge 0$ $$\begin{aligned} \operatorname{\mathbb{P}}\big( \abs{\norm{BX}_2 - 1} \ge t \big) &\le \operatorname{\mathbb{P}}\big( \abs{\norm{BX}_2^2 - 1} \ge \max(t, t^2) \big) \\ &\le 2\exp\Big( - \frac{1}{C} \min\Big( \frac{\max(t,t^2)^2}{K^4\norm{B}_{\mathrm{op}}^2}, \Big( \frac{\max(t,t^2)}{K^4 \norm{B}_{\mathrm{op}}^2} \Big)^{\alpha/2} \Big) \Big) \\ &\le 2\exp\Big( - \frac{1}{C} \min\Big( \frac{t^2}{K^4 \norm{B}_{\mathrm{op}}^2}, \Big( \frac{t^2}{K^4 \norm{B}_{\mathrm{op}}^2} \Big)^{\alpha/2} \Big) \Big) \\ &\le 2\exp\Big( - \frac{1}{C} \Big( \frac{t}{K^2 \norm{B}_{\mathrm{op}}} \Big)^{\alpha} \Big).\end{aligned}$$ Here, in the first step we have used the estimates $\norm{A}_{\mathrm{HS}}^2 \le \norm{B}_{\mathrm{op}}^2 \norm{B}_{\mathrm{HS}}^2 = \norm{B}_{\mathrm{op}}^2$ and $\norm{A}_{\mathrm{op}} \le \norm{B}_{\mathrm{op}}^2$ and moreover the fact that since $\mathbb{E}X_i^2 = 1$, $K \ge C_\alpha > 0$ (cf. e.g. [@GSS19 Lemma A.2]), while the last step follows from . Setting $t = K^2s\norm{B}_{\mathrm{op}}$ for $s \ge 0$ finishes the proof of . Finally, follows by taking $m=n$ and $B=I$. Convex concentration for random variables with bounded Orlicz norms {#Section:convconc} =================================================================== In this chapter, we show Proposition \[prop-alphafl\]. As mentioned before, we are actually able to prove a slightly more general statement for functions which are assumed to be separately convex only, i.e. convex in every coordinate (with the other coordinates being fixed). In this situation, for bounded random variables $X_1, \ldots, X_n$ taking values in some interval $[a,b]$, we obtain inequalities for the upper tail of $f(X) - \mathbb{E}f(X)$ (sometimes also called deviation inequalities). 
That is, for any $t \ge 0$, $$\label{sepconvdev} \mathbb{P}(f(X) - \mathbb{E}f(X) > t) \le \exp\Big(-\frac{t^2}{2(b-a)^2}\Big),$$ see e.g. [@BLM13 Theorem 6.10]. \[prop-alpha\] Let $X_1, \ldots, X_n$ be independent random variables, $\alpha \in (0,2]$ and $f \colon \mathbb{R}^n \to \mathbb{R}$ separately convex and $1$-Lipschitz. Then, for any $t \ge 0$ and some numerical constant $c > 0$, $$\mathbb{P}(f(X) - \mathbb{E}f(X) > t) \le 2 \exp\Big(- \frac{ct^\alpha}{\lVert \max_i |X_i| \rVert_{\Psi_\alpha}^\alpha}\Big).$$ This generalizes Lemma 3.5 in [@KZ18], and the proof works much in the same way. The key step is a suitable truncation which goes back to [@Ad08]. Let us write $$\label{trunc} X_i = X_i1_{\{|X_i|\le M\}} + X_i1_{\{|X_i| > M\}} \eqqcolon Y_i + Z_i$$ with $M \coloneqq 8 \mathbb{E}\max_i |X_i|$ (in particular, $M \le C \lVert \max_i |X_i| \rVert_{\Psi_\alpha}$, cf. [@GSS19 Lemma A.2]), and define $Y = (Y_1, \ldots, Y_n)$, $Z = (Z_1, \ldots, Z_n)$. Noting that $$\begin{aligned} \label{dreieck} \begin{split} &\mathbb{P}(f(X) - \mathbb{E}f(X) > t)\\ \le \ &\mathbb{P}(f(Y) - \mathbb{E}f(Y) + |f(X) - f(Y)| + |\mathbb{E}f(Y) - \mathbb{E}f(X)| > t), \end{split} \end{aligned}$$ it suffices to find suitable tail estimates for the terms in . To start, we apply the deviation inequality to $Y$. Using , we obtain $$\begin{aligned} \label{tailsA} \begin{split} \mathbb{P}(f(Y) - \mathbb{E}f(Y) > t) &\le \exp\Big(- \frac{t^2}{C^2\lVert \max_i |X_i| \rVert_{\Psi_\alpha}^2}\Big)\\ &\le 2 \exp\Big(- \frac{t^\alpha}{C^\alpha\lVert \max_i |X_i| \rVert_{\Psi_\alpha}^\alpha}\Big). \end{split} \end{aligned}$$ To control the tails of $f(X)-f(Y)$, by the Lipschitz property we may study the tails of $\lVert Z \rVert_2$. To this end, we use the Hoffmann–Jørgensen inequality (cf.
[@LT91 Theorem 6.8]) in the following form: if $W_1, \ldots, W_n$ are independent random variables, $S_k := W_1 + \ldots + W_k$, and $t \ge 0$ is such that $\mathbb{P}(\max_k |S_k| > t) \le 1/8$, then $$\mathbb{E}\max_k|S_k| \le 3 \operatorname{\mathbb{E}}\max_i|W_i| + 8t.$$ In our case, we set $W_i \coloneqq Z_i^2$, $t=0$, and note that by Chebyshev’s inequality, $$\mathbb{P}(\max_iZ_i^2 > 0) = \mathbb{P}(\max_i|X_i| > M) \le \mathbb{E}\max_i|X_i|/M \le 1/8,$$ and consequently, recalling that $S_k = Z_1^2 + \ldots + Z_k^2$, $$\mathbb{P}(\max_k |S_k| > 0) \le \mathbb{P}(\max_i Z_i^2 > 0) \le 1/8.$$ Thus, by the Hoffmann–Jørgensen inequality together with [@GSS19 Lemma A.2], we have $$\mathbb{E}\lVert Z \rVert_2^2 \le 3 \mathbb{E} \max_i Z_i^2 \le C \lVert \max_i Z_i^2 \rVert_{\Psi_{\alpha/2}}.$$ Now it is easy to see that $\lVert \max_i Z_i^2 \rVert_{\Psi_{\alpha/2}} \le \lVert \max_i |X_i| \rVert_{\Psi_\alpha}^2$, so that altogether we obtain $$\label{HJe} \mathbb{E}\lVert Z \rVert_2^2 \le C \lVert \max_i |X_i| \rVert_{\Psi_\alpha}^2.$$ Furthermore, by [@LT91 Theorem 6.21], if $W_1, \ldots, W_n$ are independent random variables with zero mean and $\alpha \in (0,1]$, $$\lVert \sum_{i=1}^{n} W_i \rVert_{\Psi_\alpha} \le C (\lVert \sum_{i=1}^n W_i \rVert_{L^1} + \lVert \max_i |W_i| \rVert_{\Psi_\alpha}).$$ In our case, we consider $W_i = Z_i^2 - \mathbb{E} Z_i^2$ and $\alpha/2$ (instead of $\alpha$). Together with the previous arguments (in particular ) and [@GSS19 Lemma A.3], this yields $$\begin{aligned} \lVert \sum_{i=1}^n(Z_i^2 - \mathbb{E}Z_i^2) \rVert_{\Psi_{\alpha/2}} &\le C (\mathbb{E}|\lVert Z\rVert_2^2 - \mathbb{E} \lVert Z\rVert_2^2| + \lVert \max_i |Z_i^2 - \mathbb{E} Z_i^2| \rVert_{\Psi_{\alpha/2}})\\ &\le C (\mathbb{E}\lVert Z\rVert_2^2 + \lVert \max_i Z_i^2 \rVert_{\Psi_{\alpha/2}}) \le C \lVert \max_i |X_i| \rVert_{\Psi_\alpha}^2.
\end{aligned}$$ Therefore, together with [@GSS19 Lemma A.3] and , we obtain $$\lVert \lVert Z \rVert_2 \rVert_{\Psi_\alpha} \le C \lVert \max_i |X_i| \rVert_{\Psi_\alpha},$$ and hence, for any $t \ge 0$, $$\label{tails} \mathbb{P}(\lVert Z \rVert_2 \ge t) \le 2 \exp \Big(- \frac{t^\alpha}{C^\alpha\lVert \max_i |X_i| \rVert_{\Psi_\alpha}^\alpha}\Big).$$ Since $f$ is $1$-Lipschitz, we have $|f(X) - f(Y)| \le \lVert Z\rVert_2$. Consequently, by , $$\label{tailsB} \mathbb{P}(|f(X) - f(Y)| \ge t) \le \mathbb{P}(\lVert Z\rVert_2 \ge t) \le 2 \exp \Big(- \frac{t^\alpha}{C^\alpha\lVert \max_i |X_i| \rVert_{\Psi_\alpha}^\alpha}\Big).$$ Furthermore, we may conclude that $$\begin{aligned} \label{tailsC} \begin{split} \abs{\mathbb{E}f(Y) - \mathbb{E}f(X)} &\le \int_{0}^{\infty} \mathbb{P}(|f(X) - f(Y)| \ge t) \ dt\\ &\le \int_{0}^{\infty} 2 \exp \Big(- \frac{t^\alpha}{C^\alpha\lVert \max_i |X_i| \rVert_{\Psi_\alpha}^\alpha}\Big) \ dt\\ &\le C \lVert \max_i |X_i| \rVert_{\Psi_\alpha}. \end{split} \end{aligned}$$ For the rest of the proof, we use the temporary notation $K \coloneqq C \lVert \max_i |X_i| \rVert_{\Psi_\alpha}$, where $C$ has to be read off , and . Then, and yield $$\mathbb{P}(f(X) - \mathbb{E}f(X) > t) \le \mathbb{P}(f(Y) - \mathbb{E}f(Y) + |f(X) - f(Y)| > t - K) $$ if $t \ge K$. Using subadditivity and invoking and , we obtain $$\mathbb{P}(f(X) - \mathbb{E}f(X) > t) \le 4 \exp\Big(-\frac{(t-K)^\alpha}{(2K)^\alpha}\Big) \le 4 \exp\Big(-\frac{ct^\alpha}{(2K)^\alpha}\Big),$$ where the last step holds for $t \ge K+\delta$ for some $\delta > 0$. This bound extends trivially to any $t \ge 0$ (if necessary, by a suitable change of constants). Finally, the constant in front of the exponential may be adjusted to 2 by . If the function $f$ is convex, it is possible to prove concentration (as opposed to deviation) inequalities by a slight modification of the proof. Here we begin as in the proof of Proposition \[prop-alpha\] (with the obvious modification of ). 
We then only need to adapt the arguments leading to . Using instead of and proceeding as in , it follows that $$\begin{split} \mathbb{P}(|f(Y) - \mathbb{E}f(Y)| > t) \le 4 \exp\Big(- \frac{t^\alpha}{C^\alpha\lVert \max_i |X_i| \rVert_{\Psi_\alpha}^\alpha}\Big). \end{split}$$ The rest of the proof follows exactly as above. In the next sections, we will apply Proposition \[prop-alphafl\] in order to prove uniform Hanson–Wright inequalities in a similar way as in [@KZ18] for $\alpha = 2$. Moreover, we will make use of it to prove Theorem \[V1.3subg\]. For the moment, let us provide some other simple applications. A standard example (cf. [@BLM13 Examples 3.18 & 6.11]) of convex concentration for bounded random variables is the behavior of the largest singular value of a random matrix. With the help of Proposition \[prop-alphafl\], this can easily be extended to unbounded settings. Let $A$ be an $m \times n$ matrix with entries $X_{ij}$, $i = 1, \ldots, m$, $j = 1, \ldots, n$, such that the $X_{ij}$ are independent real-valued random variables, and let $\alpha \in (0,2]$. Consider the largest singular value $s_{\mathrm{max}}$ of $A$, i.e. the square root of the largest eigenvalue of $A^TA$. Writing $$s_{\mathrm{max}} = \sqrt{\lambda_{\mathrm{max}}(A^TA)} = \sup_{u \in \mathbb{R}^n \colon \norm{u}_2=1} \norm{Au}_2,$$ it is easy to see that $s_{\mathrm{max}}$ is a convex function of the $X_{ij}$ (as a supremum of convex functions). Moreover, by Lidskii’s inequality, $s_{\mathrm{max}}$ is $1$-Lipschitz. Therefore, $$\mathbb{P}(|s_{\mathrm{max}} - \mathbb{E}s_{\mathrm{max}}| > t) \le 2 \exp\Big(-\frac{ct^\alpha}{\lVert \max_{i,j} |X_{ij}| \rVert_{\Psi_\alpha}^\alpha}\Big).$$ As mentioned before, $\lVert \max_{i,j} |X_{ij}| \rVert_{\Psi_\alpha}$ will be of order $(\log \max(m,n))^{1/\alpha}$ in general (cf. Lemma \[maxOrl\]). For bounded random variables, we get back the results mentioned above. 
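The two properties of $s_{\mathrm{max}}$ used in this example can be spot-checked numerically. The sketch below is a hedged illustration with arbitrary random matrices: it computes the largest singular value via numpy and verifies the $1$-Lipschitz bound $\abs{s_{\mathrm{max}}(A+E) - s_{\mathrm{max}}(A)} \le \lVert E \rVert_{\mathrm{HS}}$ on a random perturbation.

```python
import numpy as np

# s_max is 1-Lipschitz with respect to the Hilbert-Schmidt norm of the
# entries: |s_max(A + E) - s_max(A)| <= ||E||_op <= ||E||_HS.
# The matrices below are arbitrary illustrative choices.
rng = np.random.default_rng(3)
A = rng.standard_normal((30, 20))
E = 0.01 * rng.standard_normal((30, 20))

s_max = np.linalg.norm(A, 2)            # largest singular value of A
s_max_pert = np.linalg.norm(A + E, 2)   # largest singular value of A + E

assert abs(s_max_pert - s_max) <= np.linalg.norm(E, 'fro')
```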
Another example deals with norms of random series, extending [@Led95/97 (1.9)] and [@Sa00 (2.23)] to unbounded independent random variables. Let $Z_1, \ldots, Z_n$ be independent random variables, let $b_1, \ldots, b_n$ be vectors in some real Banach space $(E, \norm{\cdot})$, and let $\alpha \in (0,2]$. Consider the function $$f(Z) \coloneqq \norm{\sum_{i=1}^n Z_ib_i}.$$ Clearly, $f$ is convex and has Lipschitz seminorm bounded by $$L \coloneqq \sup\big\{(\sum_{i=1}^n \xi(b_i)^2)^{1/2} \colon \xi \in E^\ast, \norm{\xi}^\ast \le 1 \big\},$$ where $(E^\ast, \norm{\cdot}^\ast)$ is the dual space. Hence, for any $t \ge 0$, $$\mathbb{P}(\abs{f(Z)-\mathbb{E}f(Z)} \ge t) \le 2 \exp\Big(-\frac{ct^\alpha}{L^\alpha\lVert \max_i |Z_i| \rVert_{\Psi_\alpha}^\alpha}\Big).$$ Again, for bounded random variables, choosing $\alpha = 2$, we get back [@Led95/97 (1.9)] and the independent case of [@Sa00 (2.23)] up to constants. Similarly (cf. [@Ma00b (14)]), we may consider the random variable $$g(Z) \coloneqq \sup_{t \in \mathcal{T}} \sum_{i=1}^n a_{i,t}Z_i$$ for a compact set of real numbers $a_{i,t}$, $i = 1, \ldots, n$, $t \in \mathcal{T}$, where $\mathcal{T}$ is some index set. As above, $g$ is convex and has Lipschitz constant $$D \coloneqq \sup_{t \in \mathcal{T}} (\sum_{i=1}^n a_{i,t}^2)^{1/2}.$$ Therefore, for any $t \ge 0$, $$\mathbb{P}(\abs{g(Z)-\mathbb{E}g(Z)} \ge t) \le 2 \exp\Big(-\frac{ct^\alpha}{D^\alpha\lVert \max_i |Z_i| \rVert_{\Psi_\alpha}^\alpha}\Big).$$ Uniform Hanson–Wright inequalities {#Section:unifHW} ================================== To prove Theorem \[unifHW\], the strategy is as follows: in [@KZ18], the corresponding result for subgaussian random variables has been proven by first treating bounded random variables and then extending these bounds by suitable truncation arguments. Therefore, to prove a result for general $\alpha \in (0,2]$, we have to modify the final step. 
Much of this is actually repetition of the respective arguments for $\alpha = 2$, but we will provide the details for the sake of completeness. Let us first repeat some tools and results. In the sequel, for a random vector $W = (W_1, \ldots, W_n)$, we shall denote $$f(W) \coloneqq \sup_{A \in \mathcal{A}} (W^TAW - g(A)),$$ where $g \colon \mathbb{R}^{n \times n} \to \mathbb{R}$ is some function. Moreover, if $A$ is any matrix, we denote by $\mathrm{Diag}(A)$ its diagonal part (regarded as a matrix which has zero entries on its off-diagonal). The following lemma combines [@KZ18 Lemmas 3.1 & 3.4]. \[KZ3.1\] 1. Assume the vector $W$ has independent components which satisfy $W_i \le K$ a.s. Then, for any $t \ge 1$, we have $$f(W) - \mathbb{E}f(W) \le C\big(K(\mathbb{E}\sup_{A \in \mathcal{A}}\norm{AW}_2 + \mathbb{E}\sup_{A \in \mathcal{A}} \norm{\mathrm{Diag}(A)W}_2)\sqrt{t} + K^2\sup_{A\in \mathcal{A}}\norm{A}_{\mathrm{op}}t\big)$$ with probability at least $1 - e^{-t}$, where $C> 0$ is some absolute constant. 2. Assuming the vector $W$ has independent (but not necessarily bounded) components with mean zero, we have $$\mathbb{E}\sup_{A\in\mathcal{A}}\norm{\mathrm{Diag}(A)W}_2 \le \mathbb{E}\sup_{A\in\mathcal{A}}\norm{AW}_2.$$ From now on, let $X$ be the random vector from Theorem \[unifHW\], and recall the truncated random vector $Y$ which we introduced in (and the corresponding “remainder” $Z$). Then, Lemma \[KZ3.1\] (1) for $f(Y)$ with $g(A) = \mathbb{E}X^TAX$ yields $$\label{3.11} f(Y) - \mathbb{E}f(Y) \le C\big(Mt^{1/\alpha}(\mathbb{E}\sup_{A \in \mathcal{A}}\norm{AY}_2 + \mathbb{E}\sup_{A\in \mathcal{A}}\norm{\mathrm{Diag}(A)Y}_2) + M^2t^{2/\alpha}\sup_{A\in\mathcal{A}}\norm{A}_{\mathrm{op}}\big)$$ with probability at least $1 - e^{-t}$ (actually, even holds with $\alpha=2$, but in the sequel we will have to use the weaker version given above anyway). Here we recall that $M \le C \lVert \max_i |X_i| \rVert_{\Psi_\alpha}$.
To prove Theorem \[unifHW\], it remains to replace the terms involving the truncated random vector $Y$ by the original vector $X$, which is what we will prepare now. First, by Proposition \[prop-alphafl\] and since $\sup_{A\in\mathcal{A}} \norm{AX}_2$ is $\sup_{A\in\mathcal{A}}\norm{A}_{\mathrm{op}}$-Lipschitz, we obtain $$\label{3.13} \mathbb{P}(\sup_{A\in\mathcal{A}} \norm{AX}_2 > \mathbb{E}\sup_{A\in\mathcal{A}}\norm{AX}_2 + C\norm{\max_i \abs{X_i}}_{\Psi_\alpha} \sup_{A\in\mathcal{A}} \norm{A}_{\mathrm{op}}t^{1/\alpha}) \le 2e^{-t}.$$ Moreover, by , $$\label{3.14} \abs{\mathbb{E}\sup_{A\in\mathcal{A}} \norm{AY}_2 - \mathbb{E}\sup_{A\in\mathcal{A}} \norm{AX}_2} \le C\norm{\max_i \abs{X_i}}_{\Psi_\alpha} \sup_{A\in\mathcal{A}} \norm{A}_{\mathrm{op}}.$$ Next we estimate the difference between the expectations of $f(X)$ and $f(Y)$. \[KZ3.6\] We have $$\abs{\mathbb{E}f(Y)-\mathbb{E}f(X)} \le C\big(\norm{\max_i \abs{X_i}}_{\Psi_\alpha} \mathbb{E}\sup_{A\in\mathcal{A}}\norm{AX}_2 + \norm{\max_i \abs{X_i}}_{\Psi_\alpha}^2 \sup_{A \in \mathcal{A}}\norm{A}_\mathrm{op}\big).$$ First note that $$\begin{aligned} f(X) &= \sup_{A\in\mathcal{A}} (Y^T AY - \mathbb{E}X^TAX + Z^TAX + Z^TAY)\\ &\le \sup_{A\in\mathcal{A}} (Y^T AY - \mathbb{E}X^TAX) + \sup_{A\in\mathcal{A}}\abs{Z^TAX} + \sup_{A\in\mathcal{A}}\abs{Z^TAY}\\ &\le f(Y) + \norm{Z}_2\sup_{A\in\mathcal{A}} \norm{AX}_2 + \norm{Z}_2\sup_{A\in\mathcal{A}}\norm{AY}_2.\end{aligned}$$ The same holds if we reverse the roles of $X$ and $Y$. As a consequence, $$\label{3.9} \abs{f(X)-f(Y)} \le \norm{Z}_2\sup_{A\in\mathcal{A}} \norm{AX}_2 + \norm{Z}_2\sup_{A\in\mathcal{A}}\norm{AY}_2$$ and thus, taking expectations and applying Hölder’s inequality, $$\label{3.15} \abs{\mathbb{E}f(X)-\mathbb{E}f(Y)} \le (\mathbb{E}\norm{Z}_2^2)^{1/2}((\mathbb{E}\sup_{A\in\mathcal{A}} \norm{AX}_2^2)^{1/2} + (\mathbb{E}\sup_{A\in\mathcal{A}} \norm{AY}_2^2)^{1/2}).$$ We may estimate $(\mathbb{E}\norm{Z}_2^2)^{1/2}$ using . 
Moreover, arguing similarly as in , from we get that $$\mathbb{E}\sup_{A\in\mathcal{A}} \norm{AX}_2^2 \le C((\mathbb{E}\sup_{A\in\mathcal{A}} \norm{AX}_2)^2 + \norm{\max_i \abs{X_i}}_{\Psi_\alpha}^2 \sup_{A \in \mathcal{A}}\norm{A}_{\mathrm{op}}^2),$$ or, after taking the square root, $$(\mathbb{E}\sup_{A\in\mathcal{A}} \norm{AX}_2^2)^{1/2} \le C(\mathbb{E}\sup_{A\in\mathcal{A}} \norm{AX}_2 + \norm{\max_i \abs{X_i}}_{\Psi_\alpha} \sup_{A \in \mathcal{A}}\norm{A}_{\mathrm{op}}).$$ Arguing similarly and using , the same bound also holds for $(\mathbb{E}\sup_{A\in\mathcal{A}} \norm{AY}_2^2)^{1/2}$. Plugging everything into completes the proof. Finally, we prove the central result of this section. The proof works by substituting all the terms involving $Y$ in . First, it immediately follows from Lemma \[KZ3.6\] that $$\label{3.16} \mathbb{E}f(Y) \le \mathbb{E}f(X) + C\big(\norm{\max_i \abs{X_i}}_{\Psi_\alpha} \mathbb{E}\sup_{A\in\mathcal{A}}\norm{AX}_2 + \norm{\max_i \abs{X_i}}_{\Psi_\alpha}^2 \sup_{A \in \mathcal{A}}\norm{A}_{\mathrm{op}}\big).$$ Moreover, by and Lemma \[KZ3.1\] (2), $$\label{3.17} \mathbb{E}\sup_{A \in \mathcal{A}}\norm{AY}_2 + \mathbb{E}\sup_{A\in \mathcal{A}}\norm{\mathrm{Diag}(A)Y}_2 \le C(\mathbb{E}\sup_{A \in \mathcal{A}}\norm{AX}_2 + \norm{\max_i \abs{X_i}}_{\Psi_\alpha} \sup_{A \in \mathcal{A}}\norm{A}_{\mathrm{op}}).$$ Finally, it follows from , and that $$\begin{aligned} \abs{f(X)-f(Y)} &\le \norm{Z}_2\sup_{A\in\mathcal{A}} \norm{AX}_2 + \norm{Z}_2\sup_{A\in\mathcal{A}}\norm{AY}_2\\ &\le C(\norm{Z}_2\mathbb{E}\sup_{A\in\mathcal{A}} \norm{AX}_2 + \norm{Z}_2\norm{\max_i \abs{X_i}}_{\Psi_\alpha}\sup_{A\in\mathcal{A}}\norm{A}_{\mathrm{op}}t^{1/\alpha})\end{aligned}$$ with probability at least $1 - e^{-t}$ for all $t \ge 1$. 
By , it follows that $$\label{3.17b} \abs{f(X)-f(Y)} \le C(\norm{\max_i \abs{X_i}}_{\Psi_\alpha}\mathbb{E}\sup_{A\in\mathcal{A}} \norm{AX}_2t^{1/\alpha} + \norm{\max_i \abs{X_i}}_{\Psi_\alpha}^2\sup_{A\in\mathcal{A}}\norm{A}_{\mathrm{op}}t^{2/\alpha})$$ again with probability at least $1 - e^{-t}$ for all $t \ge 1$. Combining , and and plugging into thus yields that with probability at least $1 - e^{-t}$ for all $t \ge 1$, $$\begin{aligned} f(X) - \mathbb{E}f(X) &\le C(\norm{\max_i \abs{X_i}}_{\Psi_\alpha}\mathbb{E}\sup_{A\in\mathcal{A}} \norm{AX}_2t^{1/\alpha} + \norm{\max_i \abs{X_i}}_{\Psi_\alpha}^2\sup_{A\in\mathcal{A}}\norm{A}_{\mathrm{op}}t^{2/\alpha})\\ &\eqqcolon C(at^{1/\alpha} + bt^{2/\alpha}).\end{aligned}$$ If $u \ge \max(a,b)$, it follows that $$\mathbb{P}(f(X) - \mathbb{E}f(X) \ge u) \le \exp\Big(-C\min\Big(\Big(\frac{u}{a}\Big)^\alpha, \Big(\frac{u}{b}\Big)^{\alpha/2}\Big)\Big).$$ By standard means (a suitable change of constants, using ), this bound may be extended to any $u \ge 0$. Random Tensors: Auxiliary Lemmas {#Section:AuxL} ================================ In this section, we show a number of auxiliary lemmas for the proof of Theorem \[V1.3subg\]. First, we present a result which characterizes random variables with finite Orlicz norms. This generalizes some well-known facts about the characterization of subgaussian and subexponential random variables as can be found in [@Ver18 Proposition 2.5.2 & 2.7.1], for instance: \[proptails\] Let $X$ be a random variable and $\alpha \in (0,2]$. Then, the following statements are equivalent, where the parameters $K_i$ differ from each other by at most a constant ($\alpha$-dependent) factor: 1. $\mathbb{P}(|X|\ge t) \le 2\exp(-t^\alpha/K_1^\alpha)$ for any $t \ge 0$ 2. $\lVert X \rVert_{L^p} \le K_2 p^{1/\alpha}$ for any $p \ge 1$ 3. $\mathbb{E}\exp(\lambda^\alpha |X|^\alpha) \le \exp(K_3^\alpha \lambda^\alpha)$ for all $0 \le \lambda \le 1/K_3$. 4. 
$\mathbb{E}\exp(|X|^\alpha/K_4^\alpha) \le 2$. If $\alpha \in [1,2]$ and $\mathbb{E}X = 0$, then the above properties are moreover equivalent to (5) $\mathbb{E}\exp(\lambda X) \le \begin{cases} \exp(K_5^2 \lambda^2) & \text{if} \ |\lambda| \le 1/K_5 \\ \exp(K_5^{\alpha/(\alpha-1)} |\lambda|^{\alpha/(\alpha-1)}) & \text{if} \ |\lambda| \ge 1/K_5 \ \text{and} \ \alpha > 1. \end{cases}$ It is not hard to verify that we may take $K_2 = 3\alpha^{-(\alpha+1)/\alpha} K_1$, $K_3 = (2\alpha e)^{1/\alpha} K_2$, $K_4 = K_3/(\log 2)^{1/\alpha}$ as well as $K_1 = K_4$. Since (4) just means $\lVert X \rVert_{\Psi_\alpha} < \infty$, we can in particular take $K_i = C_i\lVert X \rVert_{\Psi_\alpha}$ for some absolute constants $C_i$ depending on $\alpha$ only. The equivalence of (1)–(4) is easily seen by directly adapting the arguments from the proof of [@Ver18 Proposition 2.5.2]. For instance, assuming $K_1 = 1$, we arrive at $$\lVert X \rVert_{L^p} \le \Big(\frac{1}{\alpha}\Big)^{1/\alpha}\Big(\frac{2p}{\alpha}\Big)^{1/p}p^{1/\alpha} \le 3 \alpha^{-(\alpha+1)/\alpha} p^{1/\alpha},$$ while if $K_2 = 1$, we obtain $$\mathbb{E}\exp(\lambda^\alpha|X|^\alpha) \le \sum_{p=0}^\infty (\alpha e \lambda^\alpha)^p = \frac{1}{1-\alpha e \lambda^\alpha} \le \exp(2\alpha e \lambda^\alpha),$$ where the last step follows from $1/(1-x) \le e^{2x}$ for any $x \in [0,1/2]$. To see that (1)–(4) imply (5), first note that since in particular $\lVert X \rVert_{\Psi_1} < \infty$, the bound for $|\lambda| \le 1/K_5$ directly follows from [@Ver18], Proposition 2.7.1 (e). Here, we may take $K_5 = 2 e K_2$. To see the bound for large values of $|\lambda|$, we note that, by the weighted arithmetic–geometric mean inequality (with weights $\alpha-1$ and $1$), $$y^{(\alpha-1)/\alpha} z^{1/\alpha} \le \frac{\alpha-1}{\alpha} y + \frac{1}{\alpha} z$$ for any $y,z \ge 0$. 
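This weighted AM–GM inequality is elementary; a quick numerical spot check, not part of the argument (the grid and the values of $\alpha$ below are arbitrary):

```python
# Spot check (illustrative) of the weighted AM-GM inequality
# y^((a-1)/a) * z^(1/a) <= ((a-1)/a) * y + (1/a) * z over an arbitrary grid.
def amgm_gap(a, y, z):
    # right-hand side minus left-hand side; nonnegative for all y, z >= 0
    return (a - 1) / a * y + z / a - y ** ((a - 1) / a) * z ** (1 / a)

for a in (1.25, 1.5, 2.0):
    for y in (j / 10 for j in range(1, 51)):
        for z in (j / 10 for j in range(1, 51)):
            assert amgm_gap(a, y, z) >= -1e-12
```

As with any weighted AM–GM inequality, equality holds exactly when $y = z$.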
Setting $y \coloneqq |\lambda|^{\alpha/(\alpha-1)}$ and $z \coloneqq |x|^\alpha$, we may conclude that $$\lambda x \le \frac{\alpha-1}{\alpha} |\lambda|^{\alpha/(\alpha-1)} + \frac{1}{\alpha} |x|^\alpha$$ for any $\lambda, x \in \mathbb{R}$. Consequently, using (3) assuming $K_3 = 1$, for any $|\lambda| \ge 1$ $$\begin{aligned} \mathbb{E} \exp(\lambda X) &\le \exp\big(\frac{\alpha-1}{\alpha} |\lambda|^{\alpha/(\alpha-1)}\big)\mathbb{E}\exp(|X|^\alpha/\alpha)\\ &\le \exp\big(\frac{\alpha-1}{\alpha} |\lambda|^{\alpha/(\alpha-1)}\big) \exp(1/\alpha) \le \exp(|\lambda|^{\alpha/(\alpha-1)}).\end{aligned}$$ This yields (5) for $|\lambda| \ge 1/K_5'$, where $K_5' = K_3 = (2\alpha e)^{1/\alpha} K_2$. It remains to adjust $K_5'$, which can be done by replacing $K_2$ by a suitable $K_2'$, for instance. Finally, starting with (5) assuming $K_5=1$, let us check (1). To this end, note that for any $\lambda > 0$, $$\mathbb{P}(X \ge t) \le \exp(-\lambda t) \mathbb{E}\exp(\lambda X) \le \exp(-\lambda t + \lambda^2 {\text{$\mathbbm{1}$}}_{\{\lambda \le 1\}} + \lambda^{\alpha/(\alpha-1)}{\text{$\mathbbm{1}$}}_{\{\lambda > 1\}}).$$ Now choose $\lambda \coloneqq t/2$ if $t \le 2$, $\lambda \coloneqq ((\alpha-1)t/\alpha)^{\alpha-1}$ if $t \ge \alpha/(\alpha-1)$ and $\lambda \coloneqq 1$ if $t \in (2, \alpha/(\alpha-1))$. This yields $$\mathbb{P}(X \ge t) \le \begin{cases} \exp(-t^2/4) & \text{if} \ t \le 2,\\ \exp(-(t-1)) & \text{if} \ t \in (2, \alpha/(\alpha-1)),\\ \exp(-\frac{(\alpha-1)^{\alpha-1}}{\alpha^\alpha} t^\alpha) & \text{if} \ t \ge \alpha/(\alpha-1). \end{cases}$$ Now use , and the fact that $\exp(-(t-1)) \le \exp(-c t^\alpha)$ for any $t \in (2, \alpha/(\alpha-1))$, where $c$ is a suitable $\alpha$-dependent constant. It follows that $$\mathbb{P}(X \ge t) \le 2 \exp(- t^\alpha/\widetilde{K}_5^\alpha)$$ for any $t \ge 0$. The same argument for $-X$ completes the proof. Next we have to adapt some preliminary steps of the proofs from [@Ver19]. 
To this end, first note that $$\lVert X \rVert_2 = \prod_{i=1}^d \lVert X_i \rVert_2.$$ A key step in the proofs of [@Ver19] is a maximal inequality which simultaneously controls the tails of $\prod_{i=1}^k \lVert X_i \rVert_2$, $k = 1, \ldots, d$. In [@Ver19], these results are stated for subgaussian random variables, i.e. $\alpha=2$. Generalizing them to any order $\alpha \in (0,2]$ is not hard. The following preparatory lemma extends [@Ver19 Lemma 3.1]. \[V3.1\] Let $X_1, \ldots, X_d \in \mathbb{R}^n$ be independent random vectors with independent, mean zero and unit variance coordinates such that $\lVert X_{i,j} \rVert_{\Psi_\alpha} \le K$ for some $\alpha \in (0,2]$. Then, for any $t \in [0, 2n^{d/2}]$, $$\mathbb{P} \Big(\prod_{i=1}^d \lVert X_i \rVert_2 > n^{d/2} + t\Big) \le 2 \exp\Big(-c \Big(\frac{t}{K^2 d^{1/2}n^{(d-1)/2}}\Big)^\alpha\Big).$$ By the arithmetic and geometric means inequality and since $\mathbb{E}\lVert X_i \rVert_2 \le \sqrt{n}$, for any $s \ge 0$, $$\begin{aligned} \begin{split}\label{st} \mathbb{P}\Big(\prod_{i=1}^d \lVert X_i \rVert_2 > (\sqrt{n} + s)^d \Big) &\le \mathbb{P}\Big(\frac{1}{d}\sum_{i = 1}^d (\lVert X_i \rVert_2 - \sqrt{n}) > s\Big)\\ &\le \mathbb{P}\Big(\frac{1}{d}\sum_{i = 1}^d (\lVert X_i \rVert_2 - \mathbb{E}\lVert X_i \rVert_2) > s\Big). \end{split} \end{aligned}$$ Moreover, by and [@GSS19 Corollary A.5], $$\big\lVert \lVert X_i \rVert_2 - \mathbb{E}\lVert X_i \rVert_2 \big\rVert_{\Psi_\alpha} \le CK^2$$ for any $i = 1, \ldots, d$. On the other hand, if $Y_1, \ldots, Y_d$ are independent centered random variables with $\lVert Y_i \rVert_{\Psi_\alpha} \le M$, we have $$\begin{aligned} \mathbb{P}\Big(\frac{1}{d}\Big|\sum_{i=1}^d Y_i\Big| \ge t\Big) &\le 2 \exp\Big(-c\min\Big( \Big(\frac{t\sqrt{d}}{M}\Big)^2, \Big(\frac{t\sqrt{d}}{M}\Big)^\alpha \Big)\Big)\\ &\le 2 \exp\Big(-c \Big(\frac{t\sqrt{d}}{M}\Big)^\alpha \Big). 
\end{aligned}$$ Here, the first estimate follows from [@GK95] ($\alpha > 1$) and [@HMO97] ($\alpha \le 1$), while the last step follows by . As a consequence, can be bounded by $$2 \exp(-cs^\alpha d^{\alpha/2} / K^{2\alpha}).$$ For $u \in [0,2]$ and $s = u \sqrt{n}/2d$, we have $$(\sqrt{n} + s)^d \le n^{d/2}(1+u).$$ Plugging in, we arrive at $$\mathbb{P}\Big(\prod_{i=1}^d \lVert X_i \rVert_2 > n^{d/2}(1 + u) \Big) \le 2 \exp\Big(-c \Big(\frac{n^{1/2} u}{2 K^2 d^{1/2}}\Big)^\alpha\Big).$$ Now set $u \coloneqq t/n^{d/2}$. To control all $k=1, \ldots, d$ simultaneously, we need a generalized version of the maximal inequality [@Ver19 Lemma 3.2] which we prove next. \[V3.2g\] Let $X_1, \ldots, X_d \in \mathbb{R}^n$ be independent random vectors with independent, mean zero and unit variance coordinates such that $\lVert X_{i,j} \rVert_{\Psi_\alpha} \le K$ for some $\alpha \in (0,2]$. Then, for any $u \in [0,2]$, $$\mathbb{P}\Big(\max_{1 \le k \le d}n^{-k/2}\prod_{i=1}^k \lVert X_i \rVert_2 > 1 + u \Big) \le 2 \exp\Big(-c\Big(\frac{n^{1/2} u}{K^2d^{1/2}}\Big)^\alpha\Big).$$ Let us first recall the partition into “binary sets” which appears in the proof of [@Ver19 Lemma 3.2]. Here we assume that $d = 2^L$ for some $L \in \mathbb{N}$ (if not, increase $d$). Then, for any $l \in \{0,1, \ldots, L \}$, we consider the partition $\mathcal{I}_l$ of $\{1, \ldots, d\}$ into $2^l$ successive (integer) intervals of length $d_l \coloneqq d/2^l$ which we call “binary intervals”. It is not hard to see that for any $k = 1, \ldots, d$, we can partition $[1,k]$ into binary intervals of different lengths such that this partition contains at most one interval of each family $\mathcal{I}_l$. Now it suffices to prove that $$\mathbb{P} \Big(\exists l \in \{0, 1, \ldots, L\}, \exists I \in \mathcal{I}_l \colon \prod_{i\in I} \lVert X_i \rVert_2 > (1 + 2 ^{-l/4}u)n^{d_l/2}\Big) \le 2 \exp\Big(-c \Big(\frac{n^{1/2}u}{K^2d^{1/2}}\Big)^\alpha \Big)$$ (cf. 
Step 3 of the proof of [@Ver19 Lemma 3.2], where the reduction to this case is explained in detail). To this end, for any $l \in \{0, 1, \ldots, L\}$, any $I \in \mathcal{I}_l$ and $d_l \coloneqq |I| = d/2^l$, we apply Lemma \[V3.1\] for $d_l$ and $t \coloneqq 2^{-l/4} n^{d_l/2}u$. This yields $$\begin{aligned} \mathbb{P} \Big(\prod_{i\in I} \lVert X_i \rVert_2 > (1 + 2^{-l/4}u)n^{d_l/2}\Big) &\le 2 \exp\Big(-c \Big(\frac{n^{1/2}u}{2^{l/4}K^2d_l^{1/2}}\Big)^\alpha\Big)\\ &= 2 \exp\Big(-c \Big(2^{l/4}\frac{n^{1/2}u}{K^2d^{1/2}}\Big)^\alpha\Big). \end{aligned}$$ Altogether, we arrive at $$\begin{aligned} \label{ZwSchr} \begin{split} \mathbb{P} \Big(\exists l \in \{0, 1, \ldots, L\}, \exists I \in \mathcal{I}_l \colon &\prod_{i\in I} \lVert X_i \rVert_2 > (1 + 2^{-l/4}u)n^{d_l/2}\Big)\\ &\le \sum_{l=0}^L 2^l \cdot 2 \exp\Big(-c\Big(2^{l/4}\frac{n^{1/2}u}{K^2d^{1/2}}\Big)^\alpha\Big). \end{split} \end{aligned}$$ We may now assume that $c(n^{1/2} u/(K^2d^{1/2}))^\alpha \ge 1$ (otherwise the bound in Lemma \[V3.2g\] becomes trivial after adjusting $c$). Using the elementary inequality $ab \ge (a+b)/2$ for all $a,b \ge 1$, we arrive at $$2^{l\alpha/4} c\Big(\frac{n^{1/2}u}{K^2d^{1/2}}\Big)^\alpha \ge \frac{1}{2} \Big(2^{l\alpha/4} + c \Big(\frac{n^{1/2}u}{K^2 d^{1/2}}\Big)^\alpha\Big).$$ Using this in , we obtain the upper bound $$2 \exp\Big(- \frac{c}{2} \Big(\frac{n^{1/2}u}{K^2d^{1/2}}\Big)^\alpha\Big) \sum_{l=0}^L 2^l \exp(- 2^{l\alpha/4 - 1}) \le C \exp\Big(-\frac{c}{2} \Big(\frac{n^{1/2}u}{K^2d^{1/2}}\Big)^\alpha \Big).$$ By , we can assume $C=2$. The following martingale-type bound is directly taken from [@Ver19]: \[V4.1\] Let $X_1, \ldots, X_d$ be independent random vectors. For each $k = 1, \ldots, d$, let $f_k = f_k(X_k, \ldots, X_d)$ be an integrable real-valued function and $\mathcal{E}_k$ be an event that is uniquely determined by the vectors $X_{k+1}, \ldots, X_d$. Let $\mathcal{E}_{d+1}$ be the entire probability space. 
Suppose that for every $k = 1, \ldots, d$ we have $$\mathbb{E}_{X_k} \exp(f_k) \le \pi_k$$ for every realization of $X_{k+1}, \ldots, X_d$ in $\mathcal{E}_{k+1}$. Then, for $\mathcal{E} \coloneqq \mathcal{E}_2 \cap \cdots \cap \mathcal{E}_d$, we have $$\mathbb{E} \exp(f_1 + \ldots + f_d)1_\mathcal{E} \le \pi_1 \cdots \pi_d.$$ Finally, we need a bound for the Orlicz norm of $\max_i |X_i|$. \[maxOrl\] Let $X_1, \ldots, X_n$ be independent random variables with $\mathbb{E}X_i = 0$ and $\lVert X_i \rVert_{\Psi_\alpha} \le K$ for any $i$ and some $\alpha > 0$. Then, $$\lVert \max_i |X_i| \rVert_{\Psi_\alpha} \le C K \max\Big\{\Big(\frac{\sqrt{2}+1}{\sqrt{2}-1}\Big)^{1/\alpha},(\log n)^{1/\alpha}\Big(\frac{2}{\log 2}\Big)^{1/\alpha}\Big\}.$$ Here, we may choose $C= \max\{2^{1/\alpha-1}, 2^{1-1/\alpha}\}$. We defer the proof of Lemma \[maxOrl\] to the appendix. Note that for $\alpha \ge 1$, [@PG99 Proposition 4.3.1] provides a similar result. However, we are also interested in the case of $\alpha < 1$ in the present note. The condition $\mathbb{E}X_i = 0$ in Lemma \[maxOrl\] can easily be removed, at the expense of only a different absolute constant. Random Tensors: Proofs {#Section:Pfs} ====================== In this section, we prove Theorems \[V1.3subg\] and \[LSIten\]. In fact, we actually prove the following slightly sharper version of Theorem \[V1.3subg\]: \[V1.3subgsp\] Let $n, d \in \mathbb{N}$ and $f \colon \mathbb{R}^{n^d} \to \mathbb{R}$ be convex and 1-Lipschitz. Consider a simple random tensor $X \coloneqq X_1 \otimes \cdots \otimes X_d$ as in . Fix $\alpha \in [1,2]$, and assume that $\lVert X_{i,j} \rVert_{\Psi_\alpha} \le K$. Then, for any $t \in [0, C n^{d/2} (\sum_{i=1}^d\lVert \max_j |X_{i,j}| \rVert_{\Psi_\alpha}^2)^{1/2}/(K^2d^{1/2})]$, $$\mathbb{P}(|f(X) - \mathbb{E}f(X)| > t) \le 2 \exp\Big(-c\Big( \frac{t}{n^{(d-1)/2} (\sum_{i=1}^d\lVert \max_j |X_{i,j}| \rVert_{\Psi_\alpha}^2)^{1/2}}\Big)^\alpha\Big),$$ where $c > 0$ is some absolute constant. 
On the other hand, if $\alpha \in (0,1)$, then, for any $t \in [0, C n^{d/2} (\sum_{i=1}^d\lVert \max_j |X_{i,j}| \rVert_{\Psi_\alpha}^\alpha)^{1/\alpha}/(K^2d^{1/2})]$, $$\mathbb{P}(|f(X) - \mathbb{E}f(X)| > t) \le 2 \exp\Big(-c\Big( \frac{t}{n^{(d-1)/2} (\sum_{i=1}^d\lVert \max_j |X_{i,j}| \rVert_{\Psi_\alpha}^\alpha)^{1/\alpha}}\Big)^\alpha\Big).$$ From here, to arrive at Theorem \[V1.3subg\], it essentially remains to note that by Lemma \[maxOrl\], we have $$\lVert \max_j |X_{i,j}| \rVert_{\Psi_\alpha} \le C (\log n)^{1/\alpha} \max_j \lVert X_{i,j} \rVert_{\Psi_\alpha} \le C (\log n)^{1/\alpha} K.$$ In fact, Theorem \[V1.3subg\] also gives back [@Ver19 Theorem 1.3], i.e. the convex concentration inequality for a.s. bounded random variables (say, $|X_{i,j}| \le M$ a.s.). To see this, take $\alpha = 2$ and note that in this case, $\lVert \max_j |X_{i,j}| \rVert_{\Psi_2} \le C M$. We shall adapt the arguments from [@Ver19]. First let $$\mathcal{E}_k \coloneqq \Big\{ \prod_{i=k}^d \lVert X_i \rVert_2 \le 2 n^{(d-k+1)/2}\Big\},\qquad k = 1, \ldots, d,$$ and let $\mathcal{E}_{d+1}$ be the full space. It then follows from Lemma \[V3.2g\] for $u=1$ that $$\label{E^cg} \mathbb{P}(\mathcal{E}) \ge 1 - 2 \exp\Big(-c\Big(\frac{n^{1/2}}{K^2d^{1/2}}\Big)^\alpha\Big),$$ where $\mathcal{E} \coloneqq \mathcal{E}_2 \cap \cdots \cap \mathcal{E}_d$. Now fix any realization $x_2, \ldots, x_d$ of the random vectors $X_2, \ldots, X_d$ in $\mathcal{E}_2$ and apply Proposition \[prop-alphafl\] to the function $f_1(x_1)$ given by $x_1 \mapsto f(x_1 \otimes x_2 \otimes \cdots \otimes x_d)$. Clearly, $f_1$ is convex, and since $$|f(x \otimes x_2 \otimes \cdots \otimes x_d) - f(y \otimes x_2 \otimes \cdots \otimes x_d)| \le \lVert x - y \rVert_2 \prod_{i=2}^d \lVert x_i \rVert_2 \le \lVert x - y \rVert_2 2 n^{(d-1)/2},$$ we see that it is $2 n^{(d-1)/2}$-Lipschitz. 
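The displayed Lipschitz estimate rests on the factorization $\lVert x_1 \otimes \cdots \otimes x_d \rVert_2 = \prod_{i=1}^d \lVert x_i \rVert_2$ noted earlier; a small numerical sketch, purely for illustration (dimensions and entries are arbitrary):

```python
import math, random
from itertools import product

# Illustrative check that the Euclidean norm of an elementary tensor
# factorizes: ||x1 (x) ... (x) xd||_2 = prod_i ||xi||_2.
random.seed(1)

def tensor_norm(vectors):
    # norm of the full tensor, computed entry by entry
    sq = sum(math.prod(entries) ** 2 for entries in product(*vectors))
    return math.sqrt(sq)

vecs = [[random.gauss(0, 1) for _ in range(4)] for _ in range(3)]
prod_of_norms = math.prod(math.sqrt(sum(x * x for x in v)) for v in vecs)
assert abs(tensor_norm(vecs) - prod_of_norms) < 1e-9
```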
Hence, it follows from that $$\label{convexsteps1} \lVert f - \mathbb{E}_{X_1} f \rVert_{\Psi_\alpha(X_1)} \le C 2 n^{(d-1)/2} \lVert \max_j |X_{1,j}| \rVert_{\Psi_\alpha}$$ for any $x_2, \ldots, x_d$ in $\mathcal{E}_2$, where $\mathbb{E}_{X_1}$ denotes taking the expectation with respect to $X_1$ (which, by independence, is the same as conditionally on $X_2, \ldots, X_d$). To continue, fix any realization $x_3, \ldots, x_d$ of the random vectors $X_3, \ldots, X_d$ which satisfy $\mathcal{E}_3$ and apply Proposition \[prop-alphafl\] to the function $f_2(x_2)$ given by $x_2 \mapsto \mathbb{E}_{X_1}f(X_1, x_2, \ldots, x_d)$. Again, $f_2$ is a convex function, and since $$\begin{aligned} &|\mathbb{E}_{X_1}f(X_1 \otimes x \otimes x_3 \otimes \ldots \otimes x_d) - \mathbb{E}_{X_1}f(X_1 \otimes y \otimes x_3 \otimes \ldots \otimes x_d)|\\ &\le \mathbb{E}_{X_1} \lVert X_1 \otimes (x-y) \otimes x_3 \otimes \ldots \otimes x_d \rVert_2 \le (\mathbb{E}\lVert X_1 \rVert_2^2)^{1/2} \lVert x - y \rVert_2 \prod_{i=3}^d \lVert x_i \rVert_2\\ &\le \sqrt{n} \lVert x - y \rVert_2 2 n^{(d-2)/2} = \lVert x - y \rVert_2 2 n^{(d-1)/2},\end{aligned}$$ $f_2$ is $2 n^{(d-1)/2}$-Lipschitz. Applying , we thus obtain $$\label{convexsteps2} \lVert \mathbb{E}_{X_1} f - \mathbb{E}_{X_1,X_2} f \rVert_{\Psi_\alpha(X_2)} \le C 2 n^{(d-1)/2} \lVert \max_j |X_{2,j}| \rVert_{\Psi_\alpha}$$ for any $x_3, \ldots, x_d$ in $\mathcal{E}_3$. Iterating this procedure, we arrive at $$\label{convexstepsg} \lVert \mathbb{E}_{X_1, \ldots, X_{k-1}} f - \mathbb{E}_{X_1,\ldots, X_k} f \rVert_{\Psi_\alpha(X_k)} \le C 2 n^{(d-1)/2} \lVert \max_j |X_{k,j}| \rVert_{\Psi_\alpha}$$ for any realization $x_{k+1}, \ldots, x_d$ of $X_{k+1}, \ldots, X_d$ in $\mathcal{E}_{k+1}$. We now combine for $k = 1, \ldots, d$. To this end, we write $$\Delta_k \coloneqq \Delta_k(X_k, \ldots, X_d) \coloneqq \mathbb{E}_{X_1, \ldots, X_{k-1}}f - \mathbb{E}_{X_1, \ldots, X_k}f,$$ and apply Proposition \[proptails\]. 
Here we have to distinguish between the cases where $\alpha \in [1,2]$ and $\alpha \in (0,1)$. If $\alpha \ge 1$, we use Proposition \[proptails\] (5) to arrive at a bound for the moment-generating function. Writing $M_k \coloneqq \lVert \max_j |X_{k,j}| \rVert_{\Psi_\alpha}$, we obtain $$\mathbb{E}\exp(\lambda \Delta_k) \le \begin{cases} \exp((C 2 n^{(d-1)/2} M_k)^2 \lambda^2)\\ \exp((C 2 n^{(d-1)/2}M_k)^{\alpha/(\alpha-1)} \lambda^{\alpha/(\alpha-1)}) \end{cases}$$ for all $x_{k+1}, \ldots, x_d$ in $\mathcal{E}_{k+1}$, where the first line holds if $|\lambda| \le 1/(C 2 n^{(d-1)/2}M_k)$ and the second one if $|\lambda| \ge 1/(C 2 n^{(d-1)/2}M_k)$ and $\alpha > 1$. Using Lemma \[V4.1\], we therefore obtain $$\begin{aligned} \label{estimateMGF} &\mathbb{E} \exp(\lambda(f-\mathbb{E}f))1_\mathcal{E} = \mathbb{E}\exp(\lambda(\Delta_1 + \cdots + \Delta_d))1_\mathcal{E}\notag\\ &\le \begin{cases} \exp((C 2 n^{(d-1)/2})^2 (M_1^2 + \cdots + M_d^2) \lambda^2) \\ \exp((C 2 n^{(d-1)/2})^{\alpha/(\alpha-1)} (M_1^{\alpha/(\alpha-1)} + \cdots + M_d^{\alpha/(\alpha-1)}) \lambda^{\alpha/(\alpha-1)}) \end{cases}\notag\\ &\le \begin{cases} \exp((C 2 n^{(d-1)/2})^2 M^2 \lambda^2) \\ \exp((C 2 n^{(d-1)/2})^{\alpha/(\alpha-1)} M^{\alpha/(\alpha-1)} \lambda^{\alpha/(\alpha-1)}). \end{cases}\end{aligned}$$ Here, $M \coloneqq (M_1^2 + \ldots + M_d^2)^{1/2}$, and the first line holds if $|\lambda| \le 1/(C 2 n^{(d-1)/2}\max_k M_k)$ and the second if $|\lambda| \ge 1/(C 2 n^{(d-1)/2}\max_k M_k)$ and $\alpha > 1$. On the other hand, if $\alpha < 1$, we use Proposition \[proptails\] (4). 
Together with Lemma \[V4.1\], this yields $$\begin{aligned} \label{estimateMGFg} \begin{split} &\mathbb{E} \exp(\lambda^\alpha|f-\mathbb{E}f|^\alpha)1_\mathcal{E} \le \mathbb{E}\exp(\lambda^\alpha(|\Delta_1|^\alpha + \cdots + |\Delta_d|^\alpha))1_\mathcal{E}\\ &\le \exp((C 2 n^{(d-1)/2})^\alpha (M_1^\alpha + \cdots + M_d^\alpha) \lambda^\alpha) \end{split}\end{aligned}$$ for $\lambda \in [0, 1/(C 2 n^{(d-1)/2} \max_k M_k)]$, where we have used the subadditivity of $|\cdot|^\alpha$ for $\alpha \in (0,1)$. To finish the proof, first consider $\alpha \in [1,2]$. Then, for any $\lambda > 0$, we have $$\begin{aligned} \label{estimate1} \begin{split} \mathbb{P}(f - \mathbb{E}f > t) &\le \mathbb{P}(\{f - \mathbb{E}f > t \} \cap \mathcal{E}) + \mathbb{P}(\mathcal{E}^c)\\ &\le \mathbb{P}(\exp(\lambda(f-\mathbb{E}f))1_\mathcal{E} > \exp(\lambda t)) + \mathbb{P}(\mathcal{E}^c)\\ &\le \exp\Big(- \Big(\frac{t}{2 C n^{(d-1)/2} M}\Big)^\alpha\Big) + 2 \exp\Big(-c\Big(\frac{n^{1/2}}{K^2d^{1/2}}\Big)^\alpha\Big), \end{split}\end{aligned}$$ where the last step follows by standard arguments (similarly to Proposition \[proptails\]), using and . Now, assume that $t \le C n^{d/2} M/(K^2d^{1/2})$. Then, the right-hand side of is dominated by the first term (possibly after adjusting constants), so that we arrive at $$\mathbb{P}(f - \mathbb{E}f > t) \le 3 \exp\Big(- \Big(\frac{t}{C n^{(d-1)/2} M}\Big)^\alpha\Big).$$ The same arguments hold if $f$ is replaced by $-f$. Finally, we may adjust constants by . If $\alpha \in (0,1)$, similarly to , using , and Proposition \[proptails\], $$\begin{aligned} \mathbb{P}(|f - \mathbb{E}f| > t) &\le \mathbb{P}(\{|f - \mathbb{E}f| > t \} \cap \mathcal{E}) + \mathbb{P}(\mathcal{E}^c)\\ &\le 2\exp\Big(- \Big(\frac{t}{2 C n^{(d-1)/2} M_\alpha}\Big)^\alpha\Big) + 2 \exp\Big(-c\Big(\frac{n^{1/2}}{K^2d^{1/2}}\Big)^\alpha\Big),\end{aligned}$$ where $M_\alpha \coloneqq (M_1^\alpha + \ldots + M_d^\alpha)^{1/\alpha}$. The rest follows as above. 
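The subadditivity of $x \mapsto |x|^\alpha$ for $\alpha \in (0,1)$ invoked above can be spot-checked numerically; a minimal sketch, not part of the argument (the grid of reals is arbitrary):

```python
# Spot check (illustrative) that |a+b|^alpha <= |a|^alpha + |b|^alpha
# for alpha in (0,1), over an arbitrary grid of real numbers.
def subadd_ok(alpha, a, b, eps=1e-12):
    return abs(a + b) ** alpha <= abs(a) ** alpha + abs(b) ** alpha + eps

for alpha in (0.2, 0.5, 0.9):
    for a in (j / 3 - 4 for j in range(25)):
        for b in (j / 4 - 3 for j in range(25)):
            assert subadd_ok(alpha, a, b)
```

The inequality follows from $|a+b|^\alpha \le (|a|+|b|)^\alpha \le |a|^\alpha + |b|^\alpha$, the second step being the concavity case of the elementary bound recalled in the appendix.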
Let us now prove Theorem \[LSIten\]. Here we essentially follow the proof of Theorem \[V1.3subgsp\] with only a few changes. First, recall that it is well-known that in the presence of a Poincaré inequality , Lipschitz functions have subexponential tails, i.e. for any $f$ which is $1$-Lipschitz, $$\mathbb{P}(|f(X)-\mathbb{E}f(X)| \ge t) \le 2 \exp(-ct/\sigma)$$ for every $t \ge 0$, which can be rewritten as $$\label{PIOrl} \lVert f(X) - \mathbb{E}f(X) \rVert_{\Psi_1} \le C\sigma.$$ Likewise, by the famous Herbst argument, the LSI property yields subgaussian tails for Lipschitz functions, i.e. if $f$ is $1$-Lipschitz, then $$\mathbb{P}(|f(X)-\mathbb{E}f(X)| \ge t) \le 2 \exp(-ct^2/\sigma^2)$$ for every $t \ge 0$, which can be rewritten as $$\label{LSIOrl} \lVert f(X) - \mathbb{E}f(X) \rVert_{\Psi_2} \le C\sigma.$$ Therefore, we may follow the proof of Theorem \[V1.3subgsp\] in the case of $\alpha=1, 2$, respectively. The arguments based on Lemma \[V3.2g\] remain valid since and in particular imply that $\norm{X_{i,j}}_{\Psi_1} \le C\sigma$ or $\norm{X_{i,j}}_{\Psi_2} \le C\sigma$, respectively. Apart from that, the main difference is that we have to replace the arguments based on convex concentration, i.e. , and , by making use of or , respectively. The rest of the proof is easily adapted. To prove Lemma \[maxOrl\], we first present a number of lemmas and auxiliary statements. In particular, recall that if $\alpha \in (0, \infty)$, then for any $x,y \in (0,\infty)$, $$\label{auxlem1} c_\alpha(x^\alpha + y^\alpha) \le (x+y)^\alpha \le C_\alpha(x^\alpha+y^\alpha),$$ where $c_\alpha \coloneqq 2^{\alpha-1} \wedge 1$ and $C_\alpha \coloneqq 2^{\alpha-1} \vee 1$. Indeed, if $\alpha \le 1$, using the concavity of the function $x \mapsto x^\alpha$ it follows by standard arguments that $2^{\alpha-1}(x^\alpha + y^\alpha) \le (x+y)^\alpha \le x^\alpha + y^\alpha$. 
Likewise, for $\alpha \ge 1$, using the convexity of $x \mapsto x^\alpha$ we obtain $x^\alpha + y^\alpha \le (x+y)^\alpha \le 2^{\alpha-1}(x^\alpha + y^\alpha)$. \[auxlem2\] Let $X_1, \ldots, X_n$ be independent random variables such that $\mathbb{E} X_i = 0$ and $\lVert X_i \rVert_{\Psi_\alpha} \le 1$ for some $\alpha > 0$. Then, if $Y \coloneqq \max_i |X_i|$ and $c \coloneqq (c_\alpha^{-1}\log n)^{1/\alpha}$, we have $$\mathbb{P}(Y \ge c + t) \le 2 \exp(-c_\alpha t^\alpha)$$ with $c_\alpha$ as in . We have $$\begin{aligned} \mathbb{P}(Y \ge c + t) &\le n \mathbb{P}(|X_i| \ge c + t) \le 2n \exp(-(c+t)^\alpha)\\ &\le 2n \exp(-c_\alpha (t^\alpha + c^\alpha)) = 2 \exp(-c_\alpha t^\alpha),\end{aligned}$$ where we have used in the next-to-last step. \[auxlem3\] Let $Y \ge 0$ be a random variable which satisfies $$\mathbb{P}(Y \ge c + t) \le 2 \exp(-t^\alpha)$$ for some $c \ge 0$ and any $t \ge 0$. Then, $$\lVert Y \rVert_{\Psi_\alpha} \le C_\alpha^{1/\alpha} \max\Big\{\Big(\frac{\sqrt{2}+1}{\sqrt{2}-1}\Big)^{1/\alpha},c\Big(\frac{2}{\log 2}\Big)^{1/\alpha}\Big\}$$ with $C_\alpha$ as in . By , $C_\alpha \ge 1$ and monotonicity, we have $Y^\alpha \le C_\alpha((Y-c)_+^\alpha + c^\alpha)$, where $x_+ \coloneqq \max(x,0)$. Thus, $$\begin{aligned} \mathbb{E}\exp\Big(\frac{Y^\alpha}{s^\alpha}\Big) &\le \exp\Big(\frac{C_\alpha c^\alpha}{s^\alpha}\Big) \mathbb{E}\exp\Big(\frac{C_\alpha (Y-c)_+^\alpha}{s^\alpha}\Big)\\ &= \exp\Big(\frac{c^\alpha}{t^\alpha}\Big) \mathbb{E}\exp\Big(\frac{(Y-c)_+^\alpha}{t^\alpha}\Big) \eqqcolon I_1 \cdot I_2,\end{aligned}$$ where we have set $t \coloneqq sC_\alpha^{-1/\alpha}$. Obviously, $I_1 \le \sqrt{2}$ if $t \ge c(1/\log\sqrt{2})^{1/\alpha}$. As for $I_2$, we have $$\begin{aligned} I_2 &= 1 + \int_1^\infty \mathbb{P}((Y-c)_+\ge t(\log y)^{1/\alpha}) dy\\ &\le 1 + 2 \int_1^\infty \exp(-t^\alpha \log y) dy = 1 + 2\int_1^\infty \frac{1}{y^{t^\alpha}} dy \le \sqrt{2}\end{aligned}$$ if $t \ge ((\sqrt{2}+1)/(\sqrt{2}-1))^{1/\alpha}$. 
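The closed form behind the last estimate, $1 + 2\int_1^\infty y^{-T}\,dy = 1 + 2/(T-1)$ for $T = t^\alpha > 1$, and the stated threshold can be confirmed numerically; an illustrative sketch (the Riemann-sum step size and the test value of $T$ are arbitrary):

```python
import math

# closed form of the I_2 bound: 1 + 2 * int_1^inf y^(-T) dy for T > 1
def I2(T):
    return 1 + 2 / (T - 1)

# the threshold (sqrt(2)+1)/(sqrt(2)-1) makes I2 equal sqrt(2) exactly
threshold = (math.sqrt(2) + 1) / (math.sqrt(2) - 1)
assert abs(I2(threshold) - math.sqrt(2)) < 1e-12
assert I2(threshold + 0.5) < math.sqrt(2)   # I2 is decreasing in T

# crude left-endpoint Riemann sum over [1, 21] confirming the closed form
T = 8.0
approx = sum((1 + k * 1e-4) ** (-T) * 1e-4 for k in range(200000))
assert abs((1 + 2 * approx) - I2(T)) < 1e-3
```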
Therefore, $I_1I_2 \le 2$ if $t \ge \max\{((\sqrt{2}+1)/(\sqrt{2}-1))^{1/\alpha}, c(2/\log 2)^{1/\alpha}\}$, which finishes the proof. Having these lemmas at hand, the proof of Lemma \[maxOrl\] is easily completed. The random variables $\widetilde{X}_i \coloneqq X_i/K$ obviously satisfy the assumptions of Lemma \[auxlem2\]. Hence, setting $Y \coloneqq \max_i |\widetilde{X}_i| = K^{-1} \max_i |X_i|$, $$\mathbb{P}(c_\alpha^{1/\alpha}Y \ge (\log n)^{1/\alpha} + t) \le 2 \exp(-t^\alpha).$$ Therefore, we may apply Lemma \[auxlem3\] to $\widetilde{Y} \coloneqq c_\alpha^{1/\alpha} K^{-1} \max_i |X_i|$. This yields $$\lVert \widetilde{Y} \rVert_{\Psi_\alpha} \le C_\alpha^{1/\alpha} \max\Big\{\Big(\frac{\sqrt{2}+1}{\sqrt{2}-1}\Big)^{1/\alpha},(\log n)^{1/\alpha}\Big(\frac{2}{\log 2}\Big)^{1/\alpha}\Big\},$$ i.e. the claim of Lemma \[maxOrl\], where we have set $C \coloneqq (C_\alpha c_\alpha^{-1})^{1/\alpha}$.
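The elementary two-sided bound with $c_\alpha = 2^{\alpha-1} \wedge 1$ and $C_\alpha = 2^{\alpha-1} \vee 1$, on which the appendix lemmas rely, can also be spot-checked numerically; a minimal sketch (the grid of sample values is arbitrary):

```python
# Spot check (illustrative) of c_a (x^a + y^a) <= (x+y)^a <= C_a (x^a + y^a)
# with c_a = min(2^(a-1), 1) and C_a = max(2^(a-1), 1), for x, y > 0.
def two_sided_ok(a, x, y, eps=1e-12):
    c_a = min(2 ** (a - 1), 1.0)
    C_a = max(2 ** (a - 1), 1.0)
    s = (x + y) ** a
    return c_a * (x ** a + y ** a) <= s + eps and s <= C_a * (x ** a + y ** a) + eps

for a in (0.3, 0.5, 1.0, 1.7, 2.0):
    for x in (j / 7 for j in range(1, 30)):
        for y in (j / 5 for j in range(1, 20)):
            assert two_sided_ok(a, x, y)
```

Both constants are sharp: equality in the lower bound occurs at $x = y$ for $\alpha \le 1$, and in the upper bound at $x = y$ for $\alpha \ge 1$.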
--- abstract: 'We investigate simple extensions of the Mirror Twin Higgs model in which the twin color gauge symmetry and the discrete ${\ensuremath{\mathbb{Z}_2}}$ mirror symmetry are spontaneously broken. This is accomplished in a minimal way by introducing a single new colored triplet, sextet, or octet scalar field and its twin along with a suitable scalar potential. This spontaneous ${\ensuremath{\mathbb{Z}_2}}$ breaking allows for a phenomenologically viable alignment of the electroweak vacuum, and leads to dramatic differences between the visible and mirror sectors with regard to the residual gauge symmetries at low energies, color confinement scales, and particle spectra. In particular, several of our models feature a remnant $SU(2)$ or $SO(3)$ twin color gauge symmetry with a very low confinement scale in comparison to $\Lambda_{\rm QCD}$. Furthermore, couplings between the colored scalar and matter provide a new dynamical source of twin fermion masses, and due to the mirror symmetry, these lead to a variety of correlated visible sector effects that can be probed through precision measurements and collider searches.' author: - Brian Batell - Wei Hu - 'Christopher B. Verhaaren' bibliography: - 'TC.bib' title: Breaking Mirror Twin Color --- Introduction {#sec:intro} ============ The Twin Higgs [@Chacko:2005pe] and other ‘Neutral Naturalness’ scenarios [@Barbieri:2005ri; @Chacko:2005vw; @Burdman:2006tz; @Poland:2008ev; @Cai:2008au; @Craig:2014aea; @Batell:2015aha; @Csaki:2017jby; @Serra:2017poj; @Cohen:2018mgv; @Cheng:2018gvu; @Dillon:2018wye; @Xu:2018ofw; @Serra:2019omd; @Ahmed:2020hiw] feature color-neutral symmetry-partner states which stabilize the electroweak scale, thereby reconciling a natural Higgs with the stringent direct constraints on colored states from the Large Hadron Collider (LHC). 
The original Mirror Twin Higgs (MTH) [@Chacko:2005pe] provides the first and perhaps structurally simplest model of this kind, hypothesizing an exact copy of the Standard Model (SM) along with a discrete ${\ensuremath{\mathbb{Z}_2}}$ symmetry that exchanges each SM field with a corresponding partner in the mirror sector. Assuming the scalar sector respects an approximate $SU(4)$ symmetry that is spontaneously broken, the Higgs doublet arises as a pseudo-Nambu-Goldstone boson (pNGB) at low energies. The ${\ensuremath{\mathbb{Z}_2}}$ exchange symmetry and the presence of mirror top-partners and gauge-partners shield the Higgs from the most dangerous quadratically divergent contributions to its mass. The leading contribution to the Higgs potential is only logarithmically sensitive to the cutoff, which can naturally be of order 5 TeV. Thus, the MTH offers a solution to the little hierarchy problem, and, furthermore, a variety of ultraviolet (UV) completions exist [@Falkowski:2006qq; @Chang:2006ra; @Batra:2008jy; @Craig:2013fga; @Geller:2014kta; @Barbieri:2015lqa; @Low:2015nqa; @Katz:2016wtw; @Asadi:2018abu]. Several considerations motivate extensions of this basic framework. First, the ${\ensuremath{\mathbb{Z}_2}}$ symmetry must be broken to achieve a phenomenologically viable vacuum, featuring a hierarchy between the global $SU(4)$ breaking scale and the electroweak scale. From a bottom up perspective a suitable source of ${\ensuremath{\mathbb{Z}_2}}$ breaking can be implemented ‘by hand’ in a variety of ways, including a ‘soft’ breaking mass term in the scalar potential [@Chacko:2005pe] or a ‘hard’ breaking through the removal of a subset of states in the twin sector, as in the Fraternal Twin Higgs [@Craig:2015pha]. A second issue is that in the standard thermal cosmology the MTH predicts too many relativistic degrees of freedom at late times, clashing with observations of primordial element abundances and the microwave background radiation. 
The removal of the lightest first and second generation twin fermions, which are not strictly required by naturalness considerations, provides a simple way to evade this problem [@Craig:2015pha; @Craig:2015xla; @Craig:2016kue] though other methods have also been explored [@Farina:2015uea; @Craig:2016lyx; @Chacko:2016hvu; @Barbieri:2016zxn; @Csaki:2017spo; @Harigaya:2019shz]. Following these successes many other cosmological topics can be addressed, including the nature of dark matter [@Garcia:2015loa; @Craig:2015xla; @Garcia:2015toa; @Farina:2015uea; @Freytsis:2016dgf; @Farina:2016ndq; @Barbieri:2016zxn; @Barbieri:2017opf; @Hochberg:2018vdo; @Cheng:2018vaj; @Terning:2019hgj; @Koren:2019iuv; @Badziak:2019zys], the order of the electroweak phase transition [@Fujikura:2018duw], baryogenesis [@Farina:2016ndq; @Earl:2019wjw], and large and small scale structure [@Prilepina:2016rlq; @Chacko:2018vss]. It is appealing to have a dynamical origin for these soft and/or hard ${\ensuremath{\mathbb{Z}_2}}$ breaking mechanisms. One possibility is that the ${\ensuremath{\mathbb{Z}_2}}$ is an exact symmetry of the theory but is spontaneously broken [@Beauchesne:2015lva; @Harnik:2016koz; @Yu:2016bku; @Yu:2016swa; @Jung:2019fsp]. Such spontaneous ${\ensuremath{\mathbb{Z}_2}}$ breaking could result from a pattern of gauge symmetry breaking in the mirror sector that differs from the SM’s electroweak symmetry breaking pattern. Interestingly, such spontaneous mirror gauge symmetry breaking can dynamically generate effective soft ${\ensuremath{\mathbb{Z}_2}}$ breaking mass terms in the scalar potential required for vacuum alignment. They can also produce new twin fermion and gauge boson mass terms, which mimic the hard breaking of the Fraternal Twin Higgs scenario [@Craig:2015pha] by raising the light twin sector states. 
Due to the exact ${\ensuremath{\mathbb{Z}_2}}$ symmetry, this scenario generically leads to a variety of new phenomena in the visible sector that can be probed through precision tests of baryon and lepton number violation, quark and lepton flavor violation, CP violation, the electroweak and Higgs sectors, and directly at high energy colliders such as the LHC.[^1] This approach was advocated recently in Refs. [@Batell:2019ptb; @Liu:2019ixm], which explored the simultaneous spontaneous breakdown of mirror hypercharge gauge symmetry and ${\ensuremath{\mathbb{Z}_2}}$ symmetry. In this work we examine the spontaneous breakdown of the twin color symmetry. Beginning from an MTH model, with an exact ${\ensuremath{\mathbb{Z}_2}}$ symmetry, we add a new scalar field charged under $SU(3)_c$ and its twin counterpart. A suitable scalar potential causes the twin colored scalar to develop a vacuum expectation value (VEV), spontaneously breaking both twin color and ${\ensuremath{\mathbb{Z}_2}}$. Depending on the scalar representation and potential, a variety of symmetry breaking patterns can be realized with distinct consequences. There are several possible residual color gauge symmetries of the twin sector, which may or may not confine and, when they do, confine at vastly different scales. The possible couplings of the scalar to fermions may also produce new twin fermion mass terms. All of these possibilities lead to very different twin phenomenology, illustrating the rich variation that can spring from an initially mirror ${\ensuremath{\mathbb{Z}_2}}$ setup. While the complete breakdown of twin color was explored in Ref. [@Liu:2019ixm], the aim there was a particular cosmology, and the model employed two scalars that acquired VEVs. We focus on a different part of the vast span of possibilities, in some sense a minimal set of color breaking patterns. These follow from the introduction of a single new colored multiplet (in each sector) which may transform in the triplet, sextet, or octet representation.
This scalar field is assumed to be a singlet under the weak gauge group, though it may carry hypercharge. A detailed analysis of these possibilities is presented in Sec. \[sec:framework\]. In Sec. \[sec:scalar-matter-couplings\] the couplings of the colored scalars to fermions are investigated and shown to dynamically generate new twin fermion mass terms, providing a possible way to realize a fraternal-like twin fermion spectrum. The correlated effects of these couplings in the visible sector through a variety of precision tests are discussed in Sec. \[sec:indirect\]. The new colored scalars can also be directly probed at the LHC and future high energy colliders, and we detail the current limits and prospects for these searches in Sec. \[sec:collider\]. Finally, we conclude with some perspectives on future studies in Sec. \[sec:outlook\]. Spontaneous breakdown of twin color {#sec:framework} =================================== Our basic starting point is a MTH model, with its exact copy of the SM called the twin sector. In all that follows the label $A$ ($B$) denotes visible (twin) sector fields and the exact ${\ensuremath{\mathbb{Z}_2}}$ exchange symmetry interchanges $A$ and $B$ fields. To this base we add the scalar fields, $\Phi_A$ and $\Phi_B$, that are respectively charged under SM and twin $SU(3)_c$ gauge symmetries. We consider the following complex triplet, complex sextet, and real octet representations for the scalar fields: $$({\bf 3}, {\bf 1}, Y_\Phi),~~~ ({\bf 6}, {\bf 1}, Y_\Phi), ~~~ ({\bf 8}, {\bf 1}, 0), \label{eq:scalar-rep}$$ which are singlets under $SU(2)_L$ so that the weak symmetry breaking pattern is not modified. Several specific values of the scalar hypercharge $Y_\Phi$, which allow different couplings to SM and twin fermions, are explored in Sec. \[sec:scalar-matter-couplings\]. 
Given an appropriate scalar potential, $\Phi_B$ obtains a VEV, spontaneously breaking twin color and ${\ensuremath{\mathbb{Z}_2}}$ with sufficient freedom to align the vacuum in a phenomenologically viable direction. A few remarks apply to this general scenario. First, the phenomenologically desirable vacuum always gives $\Phi_B$ a nonzero VEV, while $\langle\Phi_A\rangle=0$. A consequence of the exact ${\ensuremath{\mathbb{Z}_2}}$ symmetry of the theory, however, is the existence of another vacuum of equal depth in which the VEV lies entirely in the $A$ sector, i.e., $\langle \Phi_A \rangle \neq 0$ and $\langle \Phi_B \rangle = 0$. This vacuum is phenomenologically unacceptable as it breaks $[SU(3)_c]_A$, and our universe must therefore correspond to the other vacuum, $\langle \Phi_A \rangle = 0$ and $\langle \Phi_B \rangle \neq 0$. Second, the spontaneous breaking of the discrete ${\ensuremath{\mathbb{Z}_2}}$ symmetry raises potential concerns of a domain wall problem. However, this problem can be circumvented if, for instance, there is a low Hubble scale during inflation, or if there are additional small explicit sources of ${\ensuremath{\mathbb{Z}_2}}$ breaking in the theory. See Ref. [@Batell:2019ptb] for further related discussion in scenarios where mirror hypercharge and ${\ensuremath{\mathbb{Z}_2}}$ are spontaneously broken. Warmup: colored scalar potential analysis ----------------------------------------- In this subsection we analyze the symmetry breaking dynamics of the colored scalar sector in isolation. This enables us to highlight some of the differences in the color symmetry breaking for the triplet, sextet and octet cases. The investigation of the entire scalar potential including the Higgs fields and the full electroweak and color gauge symmetry breaking is carried out in subsequent subsections. 
Throughout we use the standard definitions for the $SU(3)$ generators, $T^a = \tfrac{1}{2}\lambda^a$ with Gell-Mann matrices $\lambda^a$ and $a = 1,2,\dots 8$, and $SU(2)$ generators, $\tau^\alpha = \tfrac{1}{2}\sigma^\alpha$ with Pauli matrices $\sigma^\alpha$ and $\alpha = 1,2,3$. ### Color triplet scalar {#sec:triplets-isolated} First, consider triplet scalars $\Phi_{A,B} \sim ({\bf 3},{\bf 1},Y_\Phi)$, which can be represented as complex vectors, i.e., $(\Phi_A)_{i}$, with color index $i = 1,2,3$. The ${\ensuremath{\mathbb{Z}_2}}$ symmetric scalar potential for $\Phi_A$ and $\Phi_B$ is $$\begin{aligned} V_\Phi = -\mu^2 \, ( |\Phi_A|^2 +|\Phi_B|^2) + \lambda\, ( |\Phi_A|^2 +|\Phi_B|^2)^2 + \delta \, \left( |\Phi_A|^4 + |\Phi_B|^4 \right). \label{eq:Vtriplet}\end{aligned}$$ The $\mu^2$ and $\lambda$ terms respect a large $U(6)$ global symmetry while the $\delta$ term preserves only a smaller $U(3)_A \times U(3)_B \times {\ensuremath{\mathbb{Z}_2}}$ symmetry. We are often interested in the parameter regime $|\delta| \ll \lambda$. [^2] When $\delta < 0$, the vacuum spontaneously breaks ${\ensuremath{\mathbb{Z}_2}}$ [@Barbieri:2005ri]. The desired vacuum is $$\langle \Phi_{A \, i} \rangle = 0, ~~~~~~~~ \langle \Phi_B \rangle = \left( \begin{array}{c} 0 \\ 0 \\ f_\Phi \end{array} \right), ~~~~~~~~ f_\Phi = \sqrt{ \frac{\mu^2}{2(\lambda+\delta)} }~, \label{eq:VEV-triplet}$$ corresponding to the gauge symmetry breaking pattern $[SU(3)_c \rightarrow SU(2)_c]_B$. Fluctuations around the vacuum are parameterized as $$\Phi_A = \phi_A, ~~~~~~~~ \Phi_B = \left( \begin{array}{c} \eta_B^{(2)} \\ f_\Phi + \tfrac{1}{\sqrt{2}} (\varphi_B + i \eta_B) \end{array} \right),$$ with $\phi_A$ a triplet under $[SU(3)_c]_A$, $\eta_B^{(2)}$ a doublet under $[SU(2)_c]_B$, and $\varphi_B$ and $\eta_B$ being singlets. Expanding the potential in Eq.
(\[eq:Vtriplet\]) about the vacuum, the scalar masses are found to be $$m_{\phi_A}^2 = -2 \delta f_\Phi^2, ~~~~~~~ m_{\varphi_B}^2 = 4(\lambda+\delta)f_\Phi^2 , ~~~~~~~ m_{\eta_B^{(2)}}^2 = 0, ~~~~~~~ m_{\eta_B}^2 = 0.$$ In the limit $|\delta| \ll \lambda$ the global symmetry breaking pattern is $U(6) \rightarrow U(5)$, yielding 11 NGBs (complex $[SU(3)_c]_A$ triplet $\phi_A$, complex $[SU(2)_c]_B$ doublet $\eta^{(2)}_B$, and real singlet $\eta_B$). The field $\phi_A$ obtains a mass proportional to the $U(6)$ breaking coupling $\delta$ and can be considered to be a pNGB in this limit. The fields $\eta^{(2)}_B$, $\eta_B$ are exact NGBs and are eaten by the five massive twin gluons, which obtain masses of order $m_{G_B} \sim g_S f_\Phi$. Since the triplet scalar is also assumed to carry hypercharge $Y_\Phi$, it gives a mass to the twin hypercharge boson. We will examine these effects below when we include the Higgs fields in the scalar potential. Finally, there is the radial mode $\varphi_B$ with mass of order $\sqrt{\lambda}f_\Phi$. ### Color sextet scalar {#sec:sextets-isolated} We next take $\Phi_{A,B} \sim ({\bf 6},{\bf 1},Y_\Phi)$ as color sextets, which can be represented as complex symmetric tensor fields, i.e., $(\Phi_A)_{ij}$, with $i,j = 1,2,3$. The most general ${\ensuremath{\mathbb{Z}_2}}$ symmetric potential for $\Phi_A$ and $\Phi_B$ is $$\begin{aligned} V_\Phi & = -\mu^2 \left( {\mbox{Tr}}{ \, \Phi_A^\dag \Phi_A} + {\mbox{Tr}}{ \, \Phi_B^\dag \Phi_B} \right) + \lambda\, \left( {\mbox{Tr}}{ \, \Phi_A^\dag \Phi_A} + {\mbox{Tr}}{ \, \Phi_B^\dag \Phi_B} \right)^2 \nonumber \\ &~~~+ \delta_1 \, \left[ ({\mbox{Tr}}{ \, \Phi_A^\dag \Phi_A})^2 + ({\mbox{Tr}}{ \, \Phi_B^\dag \Phi_B})^2 \right] + \delta_2 \, \left[ ({\mbox{Tr}}{ \, \Phi_A^\dag \Phi_A \Phi_A^\dag \Phi_A}) + ({\mbox{Tr}}{ \, \Phi_B^\dag \Phi_B \Phi_B^\dag \Phi_B}) \right]. \label{eq:Vsextet}\end{aligned}$$ The first line of Eq. (\[eq:Vsextet\]) respects a $U(12)$ global symmetry.
The second line explicitly breaks $U(12)$, with $\delta_1$ preserving $U(6)_A \times U(6)_B \times {\ensuremath{\mathbb{Z}_2}}$ and $\delta_2$ preserving $U(3)_A \times U(3)_B \times {\ensuremath{\mathbb{Z}_2}}$. We focus on the regime $|\delta_{1,2}| \ll \lambda$. The vacuum structure is analyzed following the techniques of Ref. [@Li:1973mq] and is governed by the values $\delta_1$ and $\delta_2$. There are two spontaneous ${\ensuremath{\mathbb{Z}_2}}$ breaking vacua of interest, which we now discuss. #### $[SU(3)_c \rightarrow SU(2)_c]_B$ : The first relevant sextet vacuum leads to the gauge symmetry breaking pattern $[SU(3)_c \rightarrow SU(2)_c]_B$. The orientation of this vacuum is $$\langle \Phi_{A\, ij}\rangle = 0, ~~~~~~~ \langle \Phi_B \rangle = f_\Phi \, \left( \begin{array}{ccc} 0& 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right), ~~~~~~~~ f_\Phi = \sqrt{\frac{\mu^2}{2( \lambda + \, \delta_1 + \delta_2)}}~. \label{eq:VEV-sextet-1}$$ Assuming $|\delta_{1,2}|\ll \lambda$, this vacuum is a global minimum for the parameter regions $\delta_2 < 0 $ and $\delta_1 < - \delta_2 $. The fluctuations around the vacuum can be parameterized as $$\Phi_A = \phi_A, ~~~~~~ \Phi_B = \left( \begin{array}{c|c} -i \sigma^2 \phi_B & \tfrac{1}{\sqrt{2}}\eta_B^{(2)} \\ \hline \tfrac{1}{\sqrt{2}}\eta_B^{(2)\,T} & f_\Phi + \tfrac{1}{\sqrt{2}}(\varphi_B+i\eta_B) \\ \end{array} \right), \label{eq:sextet-fluctuation-1}$$ with $\phi_A$ a sextet under $[SU(3)_c]_A$, $\phi_B = \phi_B^{\alpha} \tau^{\alpha}$ a complex triplet under $[SU(2)_c]_B$, $\eta_B^{(2)}$ a doublet under $[SU(2)_c]_B$, and $\varphi_B$ and $\eta_B$ singlets. 
Inserting (\[eq:sextet-fluctuation-1\]) into the potential (\[eq:Vsextet\]), the masses of the scalar fluctuations are found to be $$\begin{aligned} & m_{\phi_A}^2 = -2 ( \delta_1 + \delta_2 ) f_\Phi^2, ~~~~~~~ m_{\varphi_B}^2 = 4( \lambda+ \delta_1 + \delta_2 )f_\Phi^2 , & \nonumber \\ & m_{\phi_B}^2 = -2 \, \delta_2 \, f_\Phi^2, ~~~~~~~ m_{\eta_B^{(2)}}^2 = 0, ~~~~~~~ m_{\eta_B}^2 = 0. &\end{aligned}$$ For small $\delta_{1,2}$ the symmetry breaking pattern is $U(12) \rightarrow U(11)$, producing 23 NGBs (complex $[SU(3)_c]_A$ sextet $\phi_A$, complex $[SU(2)_c]_B$ triplet $\phi_B$, $[SU(2)_c]_B$ doublet $\eta^{(2)}_B$, and real singlet $\eta_B$). The field $\phi_A$ is a pNGB and obtains a mass proportional to the $U(12)$ breaking couplings $\delta_1, \delta_2$. But, since $\delta_1$ respects a $U(6)_B$ symmetry, which is spontaneously broken to $U(5)_B$, it does not contribute to the $\phi_B$ mass. However, as $\delta_2$ explicitly breaks $U(6)_B$ to $U(3)_B$, $\phi_B$ is a pNGB with mass proportional to $\delta_2$. The fields $\eta_B^{(2)}$ and $\eta_B$ are exact NGBs, and are eaten by the heavy gluons. The radial mode $\varphi_B$ has a mass proportional to $\sqrt{\lambda} f_\Phi$. #### $[SU(3)_c \rightarrow SO(3)_c]_B$ The second viable sextet vacuum produces the gauge symmetry breaking pattern $[SU(3)_c \rightarrow SO(3)_c]_B$. The orientation of this vacuum is $$\langle \Phi_A\rangle = 0, ~~~~~~~~ \langle \Phi_B \rangle = \frac{f_\Phi}{\sqrt{3}} \, \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right), ~~~~~~~~ f_\Phi = \sqrt{\frac{\mu^2}{2( \lambda + \delta_1 + \delta_2/3)}}~. \label{eq:VEV-sextet-2}$$ Assuming $|\delta_{1,2}|\ll \lambda$, this vacuum is a global minimum for the parameter regions $\delta_2 > 0 $ and $\delta_1 < - \delta_2 /3$. 
The fluctuations around the vacuum can be parameterized as $$\Phi_A = \phi_A, ~~~~~~ \Phi_B =\frac{1}{\sqrt{3}} \left[ f_\Phi + \frac{1}{\sqrt{2}}(\varphi_B+i\eta_B) \right] \times \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right) + \phi_{B} + i \eta_B^{(5)}, \label{eq:sextet-fluctuation-2}$$ where we have defined the real $[SO(3)_c]_B$ quintuplets $\phi_B = \phi_B^{\bar a} T^{\bar a}$ and $\eta_B^{(5)} = \eta_B^{\bar a} T^{\bar a}$, with barred index referring to the broken $SU(3)$ generators, $\bar a = 1,3,4,6,8$. Inserting (\[eq:sextet-fluctuation-2\]) into the potential (\[eq:Vsextet\]), the masses of the scalar fluctuations are found to be $$\begin{aligned} & m_{\phi_A}^2 = -2 \left( \delta_1 + \displaystyle{\frac{\delta_2}{3}} \right) f_\Phi^2, ~~~~~~~ m_{\varphi_B}^2 = 4 \left( \lambda + \delta_1 + \displaystyle{\frac{\delta_2}{3} }\right)f_\Phi^2 , & \nonumber \\ & m_{\phi_B}^2 = \displaystyle{\frac{4}{3}} \delta_2 f_\Phi^2, ~~~~~~~ m_{\eta_B^{(5)}}^2 = 0, ~~~~~~~ m_{\eta_B}^2 = 0. &\end{aligned}$$ In the $|\delta_{1,2}| \ll \lambda$ limit the symmetry breaking pattern is again $U(12) \rightarrow U(11)$, yielding 23 NGBs (complex $[SU(3)_c]_A$ sextet $\phi_A$, two real $[SO(3)_c]_B$ quintuplets $\phi_B$ and $\eta_B^{(5)}$, and real singlet $\eta_B$). The field $\phi_A$ is a pNGB with mass proportional to the $U(12)$ breaking couplings $\delta_1, \delta_2$. But, since $\delta_1$ respects a $U(6)_B$ symmetry, which is spontaneously broken to $U(5)_B$, it does not contribute to the $\phi_B$ mass. The coupling $\delta_2$ explicitly breaks $U(6)_B$ to $U(3)_B$, however, so $\phi_B$ is a pNGB with mass proportional to $\delta_2$. The fields $\eta_B^{(5)}$ and $\eta_B$ are exact NGBs at this level and are eaten by the five heavy gluons and the hypercharge gauge boson. Finally, the radial mode $\varphi_B$ has a mass proportional to $\sqrt{\lambda} f_\Phi$. 
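As a numerical cross-check of the two sextet vacua in Eqs. (\[eq:VEV-sextet-1\]) and (\[eq:VEV-sextet-2\]), the $B$-sector potential can be minimized over a diagonal ansatz for $\langle \Phi_B \rangle$ (any vacuum can be rotated to this form). The following sketch, with illustrative parameter values of our own choosing, confirms that the sign of $\delta_2$ selects between the rank-one and identity-proportional vacua:

```python
import numpy as np
from scipy.optimize import minimize

np.random.seed(0)

# B-sector sextet potential restricted to a diagonal ansatz
# <Phi_B> = diag(a, b, c); any vacuum can be rotated to this form.
# (mu2, lam, d1, d2) stand for mu^2, lambda, delta_1, delta_2.
def V(x, mu2, lam, d1, d2):
    s = np.sum(x**2)        # Tr Phi^dag Phi
    q = np.sum(x**4)        # Tr (Phi^dag Phi Phi^dag Phi)
    return -mu2 * s + (lam + d1) * s**2 + d2 * q

def vacuum(mu2, lam, d1, d2, tries=20):
    best = None
    for _ in range(tries):  # multi-start to avoid local minima
        res = minimize(V, np.random.rand(3), args=(mu2, lam, d1, d2))
        if best is None or res.fun < best.fun:
            best = res
    return np.sort(np.abs(best.x))  # entries modulo gauge rotations

mu2, lam = 1.0, 0.5

# delta_2 < 0 (with delta_1 < -delta_2): rank-one VEV, [SU(3)_c -> SU(2)_c]_B
v1 = vacuum(mu2, lam, 0.02, -0.05)
f1 = np.sqrt(mu2 / (2 * (lam + 0.02 - 0.05)))
print(v1)  # approximately (0, 0, f1)

# delta_2 > 0 (with delta_1 < -delta_2/3): VEV prop. to identity,
# [SU(3)_c -> SO(3)_c]_B
v2 = vacuum(mu2, lam, -0.05, 0.05)
f2 = np.sqrt(mu2 / (2 * (lam - 0.05 + 0.05 / 3)))
print(v2)  # all three entries approximately f2 / sqrt(3)
```

The diagonal ansatz suffices here because a symmetric complex matrix can always be brought to nonnegative diagonal form by an $SU(3) \times U(1)$ rotation.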
### Color octet scalar {#sec:octets-isolated} Finally, consider real octet scalars, $\Phi_{A,B} \sim ({\bf 8},{\bf 1},0)$, which can be written in matrix notation as, e.g., $(\Phi_A)_i^j = \Phi_A^a (T^a)_i^j$. A ${\ensuremath{\mathbb{Z}_2}}$ symmetric potential involving the colored scalars is given by $$\begin{aligned} V_\Phi & = -\mu^2 \left( {\mbox{Tr}}{ \, \Phi_A^2 } + {\mbox{Tr}}{ \, \Phi_B^2} \right) + \lambda\, \left( {\mbox{Tr}}{ \, \Phi_A^2} + {\mbox{Tr}}{ \, \Phi_B^2} \right)^2 \nonumber \\ & + \delta \, \left[ ({\mbox{Tr}}{ \, \Phi_A^2})^2 + ({\mbox{Tr}}{ \, \Phi_B^2})^2 \right] + V_3 + V_6~. \label{eq:Voctet}\end{aligned}$$ The first line of Eq. (\[eq:Voctet\]) respects a $O(16)$ global symmetry. The second line explicitly breaks $O(16)$, with $\delta$ preserving $O(8)_A \times O(8)_B \times {\ensuremath{\mathbb{Z}_2}}$. The potential $V_3$ contains the cubic couplings, $ {\mbox{Tr}}\, \Phi_A^3 + {\mbox{Tr}}\, \Phi_B^3$, which respect $SU(3)_A \times SU(3)_B \times {\ensuremath{\mathbb{Z}_2}}$, while the $V_6$ term contains dimension six operators, which are discussed below. Again, the vacuum structure is obtained following the methods of Ref. [@Li:1973mq]. We first suppose $V_3$ and $V_6$ are set to zero. The cubic coupling in $V_3$ can be forbidden by a parity symmetry, $\Phi_{A,B} \rightarrow -\Phi_{A,B}$, while the higher dimension terms in $V_6$ are generally expected to be subleading. For $\delta < 0$ the vacuum spontaneously breaks the ${\ensuremath{\mathbb{Z}_2}}$ symmetry, and can be parameterized as $$\langle \Phi_A \rangle = 0, ~~~~~~~ \langle \Phi_B \rangle = \sqrt{2} \, f_\Phi \, (\sin \beta \, T^3 + \cos \beta \, T^8), ~~~~~~~ f_\Phi = \frac{\mu}{\sqrt{2\, (\lambda+\delta)}}~. \label{eq:octet-vacuum-angle}$$ The vacuum angle $\beta$ does not appear in the potential at this level, and thus corresponds to a flat direction.
Several possible dynamical effects can explicitly break the large $O(8)_A \times O(8)_B$ symmetry, lifting the flat direction and generating a unique ground state. These include tree level contributions to $V_3$ and $V_6$ as well as radiative contributions to the potential. #### Cubic term Let us first consider the cubic coupling, $$V_3 = A \,( {\mbox{Tr}}\, \Phi_A^3 + {\mbox{Tr}}\, \Phi_B^3),$$ where $A$ is taken real and positive without loss of generality, and we consider the $A/\mu \ll 1$ regime. For $\delta < 0$ the vacuum spontaneously breaks the ${\ensuremath{\mathbb{Z}_2}}$ symmetry and is described by the configuration $$\langle \Phi_A \rangle = 0, ~~~~~~~ \langle \Phi_B \rangle = \sqrt{2} \, f_\Phi \, \, T^8, ~~~~~~~ f_\Phi \simeq \frac{\mu}{\sqrt{2(\lambda+\delta)}} + \frac{\sqrt{3} A}{8\sqrt{2}( \lambda+\delta)}~. \label{eq:VEV-octet-1}$$ The twin color gauge symmetry is broken from $[SU(3)_c]_B$ down to $[SU(2)_c \times U(1)_c]_B$. The scalar fluctuations are parameterized as $$\Phi_A = \phi_A, ~~~~~~~~~~ \Phi_B = (\sqrt{2} \, f_\Phi + \varphi_B) \,T^8 + \left( \begin{array}{c|c} \phi_B & \tfrac{1}{\sqrt{2}}\eta_B^{(2)} \\ \hline \tfrac{1}{\sqrt{2}}\eta_B^{(2)\,\dag} & 0 \\ \end{array} \right), \label{eq:octet-fluctuation-1}$$ where $\phi_A$ is a real octet under $[SU(3)_c]_A$, $\phi_B = \phi_B^{\alpha} \tau^{\alpha}$ is a real $[SU(2)_c]_B$ triplet, $\eta_B^{(2)}$ is a $[SU(2)_c]_B$ doublet, and $\varphi_B$ is a singlet. Inserting (\[eq:octet-fluctuation-1\]) into the potential (\[eq:Voctet\]) and expanding about the vacuum, the scalar masses are found to be $$\begin{aligned} m_{\phi_A}^2 = \left( -2 \, \delta + \sqrt{\frac{3}{8}} \frac{A}{f_\Phi}\right) f_\Phi^2, ~~~~~~~ m_{\phi_B}^2 = \sqrt{\frac{27}{8}}\, A\, f_\Phi, \\ m_{\eta_B^{(2)}}^2 = 0, ~~~~~~~~~~~~ m_{\varphi_B}^2 = \left(4 \lambda + 4\delta -\sqrt{\frac{3}{8}}\frac{A}{f_\Phi} \right) f_\Phi^2. 
\nonumber\end{aligned}$$ In the small $\delta, A/\mu$ regime the symmetry breaking pattern is $O(16) \rightarrow O(15)$, generating 15 NGBs (a real $[SU(3)_c]_A$ octet $\phi_A$, a real $[SU(2)_c]_B$ triplet $\phi_B$, and a $[SU(2)_c]_B$ doublet $\eta_B^{(2)}$). The field $\phi_A$ is a pNGB, with mass proportional to the $O(16)$ breaking couplings $\delta$ and $A$. But, since $\delta$ respects a $O(8)_B$ symmetry, which is spontaneously broken to $O(7)_B$, it does not contribute to the $\phi_B$ mass. However, the coupling $A$ explicitly breaks $O(8)_B$ to $SU(3)_B$, so $\phi_B$ is a pNGB with mass proportional to $A$. The field $\eta_B^{(2)}$ is an exact NGB, and is eaten to generate mass terms for the heavy gluons. Finally, $\varphi_B$ is the radial mode with mass proportional to $\sqrt{\lambda}f_\Phi$. #### Higher dimension operators Since a cubic term in the potential aligns the vacuum in the $T^8$ direction, it is interesting, in light of Eq. (\[eq:octet-vacuum-angle\]), to ask if the vacuum can point entirely along $T^3$. To this end, we consider a dimension six operator, which, given that the MTH model should have a relatively low UV cutoff, is generally expected to appear. Imposing the parity symmetry $\Phi_{A,B} \rightarrow -\Phi_{A,B}$, which forbids the cubic term, we consider a simple representative dimension six operator $$V_6 = \frac{c}{\Lambda^2} \left( \,{\mbox{Tr}}{\,\Phi_A^6} + {\mbox{Tr}}{\, \Phi_B^6} \, \right), \label{eq-V6}$$ where $\Lambda$ is the UV cutoff and $c$ is the Wilson coefficient. We work in the regime $c\mu^2/\Lambda^2 \ll 1$. For $\delta < 0$ and $c>0$ we find the following ${\ensuremath{\mathbb{Z}_2}}$ breaking vacuum orientation: $$\langle \Phi_A \rangle = 0, ~~~~~~~ \langle \Phi_B \rangle =\sqrt{2} \, f_\Phi \, \, T^3, ~~~~~~~ f_\Phi^2 \simeq \frac{\mu^2}{2(\lambda+\delta)} - \frac{3 \,c \, \mu^4}{32 \, (\lambda+\delta)^3 \,\Lambda^2}. 
\label{eq:VEV-octet-2}$$ The twin color gauge symmetry is broken from $[SU(3)_c]_B$ down to $[U(1)_c\times U(1)'_c]_B$. The fluctuations around the vacuum are parameterized as $$\Phi_A = \phi_A, ~~~~~~~~~~ \Phi_B = (\sqrt{2} \, f_\Phi + \varphi_B) \, T^3 + \phi_B \, T^8 + \left( \begin{array}{ccc} 0 & \tfrac{1}{\sqrt{2}} \eta_B & \tfrac{1}{\sqrt{2}}\eta'_B \\ \tfrac{1}{\sqrt{2}} \eta^{*}_B & 0 & \tfrac{1}{\sqrt{2}}\eta^{''}_B \\ \tfrac{1}{\sqrt{2}}\eta^{'*}_B & \tfrac{1}{\sqrt{2}}\eta^{''*}_B & 0 \\ \end{array} \right). \label{eq:octet-fluctuation-2}$$ Inserting (\[eq:octet-fluctuation-2\]) into the potential given in Eqs. (\[eq:Voctet\]) and (\[eq-V6\]) and expanding about the vacuum, the scalar masses are found to be $$\begin{aligned} m_{\phi_A}^2 =- \left(2 \, \delta + \frac{3}{4}\frac{c f_\Phi^2}{\Lambda^2}\right) f_\Phi^2, ~~~~~~~ m_{\phi_B}^2 = \frac{c f_\Phi^4}{2 \Lambda^2}, ~~~~~~~~~~~~ \\ m_{\eta_B}^2 = m_{\eta^{'}_B}^2=m_{\eta^{''}_B}^2 = 0, ~~~~~~~~~~~~ m_{\varphi_B}^2 = \left(4 \, \lambda + 4 \, \delta + \frac{3 \, c \, f_\Phi^2}{\Lambda^2} \right) f_\Phi^2~. \nonumber\end{aligned}$$ In the small $\delta, c \mu^2/\Lambda^2$ limit the symmetry breaking pattern is $O(16) \rightarrow O(15)$, supplying 15 NGBs (a real $[SU(3)_c]_A$ octet, a real scalar $\phi_B$, three complex scalars $\eta_B, \eta^{'}_B,$ and $\eta^{''}_B$). The field $\phi_A$ is a pNGB with mass proportional to the $O(16)$ breaking couplings $\delta$ and $c$. But, since $\delta$ respects a $O(8)_B$ symmetry, which is spontaneously broken to $O(7)_B$, it does not contribute to the $\phi_B$ mass. However, the coupling $c$ explicitly breaks $O(8)_B$ to $SU(3)_B$, so $\phi_B$ is a pNGB with mass proportional to $c$. The three complex scalars $\eta_B, \eta^{'}_B,$ and $\eta^{''}_{B}$ are true NGBs, and are eaten by the massive gluons. Finally, the radial mode $\varphi_B$ has a mass proportional to $\sqrt{\lambda}f_\Phi$.
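The competing alignments induced by the cubic and sextic invariants can be checked directly by evaluating them along the flat direction of Eq. (\[eq:octet-vacuum-angle\]). A minimal numerical sketch (in units $f_\Phi = 1$):

```python
import numpy as np

# Flat direction of the octet vacuum, Eq. (eq:octet-vacuum-angle), f_Phi = 1:
# <Phi_B>(beta) = sqrt(2) (sin(beta) T3 + cos(beta) T8).
T3 = np.diag([0.5, -0.5, 0.0])
T8 = np.diag([1.0, 1.0, -2.0]) / (2.0 * np.sqrt(3.0))

def vev(beta):
    return np.sqrt(2.0) * (np.sin(beta) * T3 + np.cos(beta) * T8)

betas = np.linspace(0.0, 2.0 * np.pi, 2001)
tr3 = np.array([np.trace(np.linalg.matrix_power(vev(b), 3)) for b in betas])
tr6 = np.array([np.trace(np.linalg.matrix_power(vev(b), 6)) for b in betas])

# Cubic term with A > 0: the energy ~ +A Tr<Phi_B>^3 is lowest where Tr^3 is
# most negative, i.e. along T8 (beta = 0, modulo gauge-equivalent 2pi/3 shifts).
print(np.isclose(tr3[0], tr3.min()))       # True: beta = 0 sits at the minimum

# Sextic term with c > 0: the energy ~ +c Tr<Phi_B>^6 is lowest along T3
# (beta = pi/2, modulo pi/3 shifts), leaving [U(1) x U(1)']_B unbroken.
i_half = 500                               # betas[500] = pi/2
print(np.isclose(tr6[i_half], tr6.min()))  # True: beta = pi/2 sits at the minimum
```

This reproduces the alignments of Eqs. (\[eq:VEV-octet-1\]) and (\[eq:VEV-octet-2\]): the cubic invariant selects the $T^8$ direction, while the sextic invariant selects $T^3$.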
#### Radiative scalar potential Finally, we must consider radiative contributions to the scalar potential. Even if $V_3=0$ and $V_6$ is negligible, the $SU(3)_c$ gauge interactions explicitly break the large $O(8)_A \times O(8)_B$ symmetry present in the first line of the tree-level potential (\[eq:Voctet\]), leading to a radiatively generated potential for the vacuum angle $\beta$ in Eq. (\[eq:octet-vacuum-angle\]). This is conveniently studied by computing the one-loop effective potential in the $\overline{\rm MS}$ scheme: $$V_{\Phi,{\rm 1-loop}} = \frac{3 g_S^4 f_\Phi^4}{8 \pi^2} \sum_{n=0}^2 \left\{ \sin^4(\beta- n \pi/3) \log{\left[ \frac{2 g_S^2 f_\Phi^2 \sin^2(\beta- n \pi/3) }{\hat\mu^2} \right] } -\frac{5}{6} \right\}.$$ The potential has minima at $\beta = n \pi/3$, which, noting Eq. (\[eq:octet-vacuum-angle\]), each lead to the gauge symmetry breaking pattern $[SU(3)_c \rightarrow SU(2)_c\times U(1)_c]_B$. Each is simply an $SU(3)_c$ transformation from $T^8$, so without loss of generality we consider the vacuum orientation as given by Eq. (\[eq:octet-vacuum-angle\]) with $\beta = 0$, i.e., $$\langle \Phi_B \rangle =\sqrt{2} \, f_\Phi \, T^8.$$ So, the analysis mimics that of the cubic term, but with the pNGB mass of order $\alpha_s f_\Phi$. Full scalar potential and nonlinear realization {#sec:nonlinear} ----------------------------------------------- The previous analysis can be adapted to realistic potentials involving both the Higgs and the colored scalar fields. We use a nonlinear parameterization of the scalar fields, working in unitary gauge and including only the light pNGB degrees of freedom to provide a simple and clear description of the low energy dynamics. The technical details of each analysis are similar to each other and to the analysis of the hypercharge scalar in Ref. [@Batell:2019ptb]. Therefore, we present only the triplet scalar case in detail.
We do comment on how the sextet and octet models differ, but relegate most of the details to the Appendix. ### Color triplet scalar {#color-triplet-scalar} Taking the new scalars to be color triplets (see Sec. \[sec:triplets-isolated\] above), we now include the Higgs fields. The ${\ensuremath{\mathbb{Z}_2}}$ symmetric scalar potential is given by $$\begin{aligned} V & = - M_H^2 \, |H|^2 + \lambda_H \, |H|^4 - M_\Phi^2 \, |\Phi|^2 + \lambda_\Phi \, |\Phi|^4 + \lambda_{H\Phi} \, |H|^2 \, |\Phi|^2 \label{eq:H-triplet-potential} \\ & + \delta_H \left(|H_A|^4 + |H_B|^4\right) + \delta_\Phi \left(|\Phi_A|^4 + |\Phi_B|^4\right) + \delta_{H\Phi} \left(|H_A|^2 - |H_B|^2 \right) \left(|\Phi_A|^2 - |\Phi_B|^2\right), \nonumber\end{aligned}$$ where we have defined $|H|^2 = H_A^\dag H_A + H_B^\dag H_B$ and $|\Phi|^2 = \Phi_A^\dag \Phi_A + \Phi_B^\dag \Phi_B$. The terms in the first line of Eq. (\[eq:H-triplet-potential\]) respect a $U(4) \times U(6)$ global symmetry, while those in the second line explicitly break this symmetry. We demand that the symmetry breaking quartics $\delta_H$ and $\delta_{H\Phi}$ are small compared to $\lambda_H$ and $\lambda_{H\Phi}$, to ensure the twin protection mechanism for the light Higgs boson. Though not strictly required, if $\delta_\Phi$ is small compared to $\lambda_\Phi$ the color triplet scalar in the visible sector can naturally be lighter than $f_\Phi$. In the absence of the colored scalar fields, choosing $\delta_H >0$ leads to a vacuum with $\langle H_A\rangle=\langle H_B\rangle$. This implies order one modifications of the light Higgs boson’s couplings to SM fields, which is experimentally excluded. However, we saw in Sec. \[sec:triplets-isolated\] that taking $\delta_\Phi<0$ spontaneously breaks the ${\ensuremath{\mathbb{Z}_2}}$ symmetry, with $\Phi_B$ obtaining a VEV but $\langle\Phi_A\rangle=0$.
Crucially, this symmetry breaking makes the $\delta_{H\Phi}$ interaction into an effective ${\ensuremath{\mathbb{Z}_2}}$ breaking mass term for the Higgs scalars, allowing the desired vacuum alignment, with $\langle H_A \rangle \ll \langle H_B\rangle$. The nonlinear parameterization for the Higgs fields is given by (see also Ref. [@Batell:2019ptb]) $$H_A = \left( \begin{array}{c} 0 \\ f_H \sin{ \left(\displaystyle{ \frac{v_H + h }{\sqrt{2} f_H }} \right) } \end{array} \right), ~~~~~~ H_B = \left( \begin{array}{c} 0 \\ f_H \cos{ \left(\displaystyle{ \frac{v_H + h }{\sqrt{2} f_H }} \right) } \end{array} \right), ~~~ \label{eq:NLP-Higgs}$$ while for the colored scalars we have $$\Phi_A = \phi_A \frac{ \sin{(\sqrt{|\phi_A|^2}/f_\Phi)}}{ \sqrt{|\phi_A|^2}/f_\Phi }, ~~~~~~~~~~ \Phi_B = \left( \begin{array}{c} 0 \\ 0 \\ f_\Phi \cos{( \sqrt{|\phi_A|^2}/f_\Phi)} \\ \end{array} \right). ~~~ \label{eq:NLP-triplet-1}$$ Here $f_H$ is the global $U(4)$ breaking VEV, $v_H$ is related to the VEV of $H_A$, $h$ is the physical Higgs fluctuation, and $\phi_A$ is a triplet of $[SU(3)_c]_A$. Inserting the nonlinear fields in Eqs. (\[eq:NLP-Higgs\]) and  into the scalar potential, Eq. (\[eq:H-triplet-potential\]), and neglecting the constant terms, we find the scalar potential for the pNGB fields: $$\begin{aligned} V & = -\frac{\delta_H f_H^4}{2} \sin^2{\!\left[ \frac{\sqrt{2} (v_H \! + \! h)}{f_H} \right]} - \frac{\delta_\Phi f_\Phi^4}{2} \sin^2{\! \left[ \frac{ 2\sqrt{ |\phi_A|^2 } }{ f_\Phi } \right] } \nonumber \\ & ~~~ + \delta_{H\Phi} f_H^2 f_\Phi^2 \cos{\!\left[ \frac{\sqrt{2} (v_H \! + \! h)}{f_H} \right]} \cos{\!\left[ \frac{ 2\sqrt{ |\phi_A|^2 } }{ f_\Phi } \right]}. 
\label{eq:potential-triplet-1-pNGB}\end{aligned}$$ The potential (\[eq:potential-triplet-1-pNGB\]) has a minimum with $\langle \phi_A \rangle = 0$, $v_H \neq 0$ which obeys the relation $$f_\Phi^2 \, \delta_{H\Phi} + f_H^2 \, \delta_H \cos( 2 \vartheta) = 0, \label{eq:EWvaccum-triplet-1}$$ where we have introduced the vacuum angle $\vartheta = v_H/(\sqrt{2} f_H)$. Expanding the potential about the minimum and using Eq. (\[eq:EWvaccum-triplet-1\]), we obtain the masses of the physical scalar fields $h$ and $\phi_A$: $$\begin{aligned} m_h^2 & = 2 \, f_H^2 \, \delta_H \, \sin^2(2\vartheta), \label{eq:Higgsmass-triplet-1} \\ m_{\phi_A}^2 & = 2 \left( -\delta_{\Phi } + \frac{\delta_{H\Phi}^2}{\delta_H} \right) f_\Phi^2. \label{eq:phiAmass-triplet-1} \end{aligned}$$ To ensure the Higgs mass in Eq. (\[eq:Higgsmass-triplet-1\]) is positive we require $\delta_H > 0$, and combining this requirement with the vacuum relation (\[eq:EWvaccum-triplet-1\]) leads to the condition $\delta_{H\Phi} < 0$. We also demand that $m_{\phi_A}^2 >0$ in Eq. (\[eq:phiAmass-triplet-1\]), which restricts the value of $\delta_\Phi$ once $\delta_H$, $\delta_{H\Phi}$ are specified. To make contact with the standard definition of the weak gauge boson masses, we define the electroweak VEV and its twin counterpart as $$\label{eq:VEV-triplet-1} v_A \equiv f_H \sqrt{2} \sin \vartheta, ~~~~~~ v_{B} \equiv f_H \sqrt{2} \cos \vartheta ,$$ where $v_A = v_{\rm EW} = 246$ GeV. Using Eqs. (\[eq:EWvaccum-triplet-1\])–(\[eq:VEV-triplet-1\]) we can trade the parameters $f_H, \delta_H, \delta_{\Phi}, \delta_{H \Phi}$ for $v_A$, $\vartheta$, $m_h$, $m_{\phi_A}$.
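The vacuum relation (\[eq:EWvaccum-triplet-1\]) and the Higgs mass (\[eq:Higgsmass-triplet-1\]) can also be verified symbolically from the pNGB potential (\[eq:potential-triplet-1-pNGB\]) restricted to $\phi_A = 0$. A minimal sketch (variable names are ours):

```python
import sympy as sp

# Symbolic cross-check of the vacuum relation (eq:EWvaccum-triplet-1) and
# Higgs mass (eq:Higgsmass-triplet-1), from the pNGB potential
# (eq:potential-triplet-1-pNGB) restricted to phi_A = 0.
h, vH, fH, fP = sp.symbols('h v_H f_H f_Phi', positive=True)
dH, dHP = sp.symbols('delta_H delta_HPhi', real=True)

x = sp.sqrt(2) * (vH + h) / fH
V = -sp.Rational(1, 2) * dH * fH**4 * sp.sin(x)**2 + dHP * fH**2 * fP**2 * sp.cos(x)

theta = vH / (sp.sqrt(2) * fH)   # vacuum angle; x -> 2*theta at h = 0

# dV/dh|_{h=0} is proportional to the vacuum relation:
dV = sp.diff(V, h).subs(h, 0)
relation = fP**2 * dHP + fH**2 * dH * sp.cos(2 * theta)
print((dV + sp.sqrt(2) * fH * sp.sin(2 * theta) * relation).equals(0))   # True

# d^2V/dh^2|_{h=0}, with the vacuum relation imposed, reproduces m_h^2:
mh2 = sp.diff(V, h, 2).subs(h, 0)
mh2 = mh2.subs(dHP, -fH**2 * dH * sp.cos(2 * theta) / fP**2)
print((mh2 - 2 * fH**2 * dH * sp.sin(2 * theta)**2).equals(0))           # True
```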
In particular, the quartic couplings may be written as $$\begin{aligned} \delta_H & = & \frac{m_h^2}{4 \, v_{A}^2 \cos^2\vartheta}, \nonumber \\ \delta_{H\Phi} & = &- \frac{m_h^2}{f_\Phi^2} \,\frac{\cos{(2\vartheta)}}{2 \sin^2{(2\vartheta)}}, \nonumber \\ \delta_{\Phi} & = & - \frac{m_{\phi_A}^2}{2 f_\Phi^2} + \frac{v_{A}^2 \, m_h^2}{f_\Phi^4} \,\frac{ \cos^2{\vartheta} \cos^2{2\vartheta} }{ \sin^4{ 2 \vartheta} }. \label{eq:trade-par-triplet-1}\end{aligned}$$ Fixing the vacuum angle to be $\sin\vartheta \lesssim 1/3$, the free parameters of the model can then be chosen as $m_{\phi_A}$ and $f_\Phi$.[^3] We can also estimate the scale of these parameters. This follows from imposing certain restrictions on the symmetry breaking quartics, $\delta_\Phi$ and $\delta_{H\Phi}$, which are related to $m_{\phi_A}$ and $f_\Phi$ via Eq. (\[eq:trade-par-triplet-1\]). Since the gauge and Yukawa interactions break the $U(4) \times U(6)$ symmetry, the symmetry breaking quartics will be generated radiatively and cannot be taken too small without fine tuning. The quartic $\delta_\Phi$ is generated by strong interactions at one loop, implying its magnitude is larger than roughly $\alpha_s^2 \sim 10^{-2}$. On the other hand, $\delta_{H\Phi}$ is generated at one loop by hypercharge interactions, or at two loops due to top quark Yukawa and strong interactions, suggesting its magnitude should be larger than about $10^{-4}$. We also take these couplings to be smaller than the $U(4) \times U(6)$ preserving quartics and thus require $|\delta_{\Phi,H\Phi}| \lesssim 1$ for strongly coupled UV completions. Collectively, these conditions suggest $m_{\phi_A}$ and $f_\Phi$ fall within the 100 GeV–10 TeV range. Of course, direct constraints from the LHC typically require $m_{\phi_A}$ to be $\gtrsim 1$ TeV, as we discuss later. The potential (\[eq:potential-triplet-1-pNGB\]) contains cubic interactions involving the Higgs and colored scalar.
In particular, we find $V \supset A_{h \phi_A^\dag \phi_A } h \, |\phi_A|^2$, with $$A_{h \phi_A^\dag \phi_A } = - \frac{ m_h^2 \, v_A }{f_\Phi^2} \frac{ \cot(2\vartheta) }{\sin \vartheta}. \label{eq:triplet-cubic-scalar}$$ Such couplings can lead to modifications of the Higgs couplings to gluons and photons, and are discussed in Sec. \[sec:collider\].

### Color sextet and octet models

A similar analysis can be carried out for the color sextet and octet scalars, and we refer the reader to the Appendix for details of their nonlinear parameterizations. One important difference in those models is the presence of additional pNGB scalar degrees of freedom $\phi_B$ in the twin sector, as was already apparent in Secs. \[sec:sextets-isolated\] and \[sec:octets-isolated\]. Otherwise, the analyses of the sextet and octet are very similar to that of the triplet. In particular, the trilinear coupling involving the visible sector Higgs boson and colored scalar is always given by Eq. (\[eq:triplet-cubic-scalar\]).

Twin gauge dynamics and confinement
-----------------------------------

We now discuss the gauge interactions in the various models, including the nature of the unbroken non-Abelian and $U(1)$ gauge symmetries and confinement in the twin sector. As seen above, several twin color breaking patterns are possible depending on the representation of the colored scalar and the form of the scalar potential.
By accounting for both twin color and electroweak symmetry breaking, we found five distinct patterns of gauge symmetry breaking: $$\begin{aligned} {\bf I}:&~~~~~({\bf 3}, {\bf 1} ,Y_\Phi)~~~&~~~[SU(3)_c \times SU(2)_L\times U(1)_Y \rightarrow SU(2)_c \times U(1)'_{\rm EM}]_B \\ {\bf II}:&~~~~~({\bf 6}, {\bf 1} ,Y_\Phi) ~~~&~~~[SU(3)_c \times SU(2)_L\times U(1)_Y \rightarrow SU(2)_c \times U(1)'_{\rm EM}]_B \\ {\bf III}:&~~~~~({\bf 6}, {\bf 1} ,Y_\Phi) ~~~&~~~[SU(3)_c \times SU(2)_L\times U(1)_Y \rightarrow SO(3)_c]_B \\ {\bf IV}:&~~~~~({\bf 8}, {\bf 1} ,0) ~~~&~~~[SU(3)_c \times SU(2)_L\times U(1)_Y \rightarrow SU(2)_c \times U(1)_c \times U(1)_{\rm EM}]_B \\ {\bf V}:&~~~~~ ({\bf 8}, {\bf 1} ,0) ~~~&~~~[SU(3)_c \times SU(2)_L\times U(1)_Y \rightarrow U(1)_c \times U(1)'_c \times U(1)_{\rm EM}]_B \end{aligned}$$ Of these, cases [**I**]{}–[**IV**]{} feature a residual non-Abelian color gauge symmetry and confinement at a low scale. In cases ${\bf I}$, ${\bf II}$, and ${\bf IV}$, this non-Abelian group is $SU(2)_c$, while in case ${\bf III}$ it is $SO(3)_c$. All models except ${\bf III}$, where the twin photon picks up a mass from the color sextet VEV, have one or more unbroken Abelian gauge symmetries. At least one of these $U(1)$s is similar to the usual electromagnetic (EM) gauge symmetry, with the massless gauge boson an admixture of weak, hypercharge, and, in cases ${\bf I}$ and ${\bf II}$, color gauge bosons. In the color octet models there are also color $U(1)$ gauge symmetries which are remnants of $[SU(3)_c]_B$. In MTH models with unbroken twin color gauge symmetry the twin confinement scale is similar to the ordinary QCD confinement scale, $\Lambda_A \sim 1$ GeV. In models [**I**]{}–[**IV**]{} confinement naturally occurs at a much lower scale, because the number of massless gluonic degrees of freedom contributing to the running below the TeV scale is much smaller.
The one-loop beta function can be written as $ d\alpha_s^{-1}/d \ln Q = b/2\pi$, with $$b = \frac{11}{3} C_{\rm Ad} - \frac{2}{3} \sum_f c_f T_f - \frac{1}{6} \sum_s c_sT_s, \label{eq:1-loop-beta}$$ where $C_{\rm Ad}$ is the quadratic Casimir for the adjoint representation and $T_f$ ($T_s$) is the Dynkin index for fermions (scalars) charged under the strong gauge group. The factors $c_f = 1 (2)$ for Majorana (Dirac) fermions, and $c_s = 1 (2)$ for real (complex) scalars. The fermions in both the SM and twin sectors all have masses below the TeV scale and transform in the fundamental representation of the given gauge group, with index $T_f = \tfrac{1}{2}$. In estimating the evolution of the strong coupling constant we make the mild assumption that the twin fermions are married into Dirac states, similar to SM fermions. In the simplest case the twin fermion masses are given by $m_{f_B} = m_{f_A}\cot\vartheta \approx {\rm few} \times m_{f_A}$. In the visible sector, we have $C_{\rm Ad} = 3$ for $[SU(3)_c]_A$ at all energy scales, while for the twin sector below $f_\Phi$ we have $C_{\rm Ad} = 2$ for $[SU(2)_c]_B$ and $C_{\rm Ad}= \tfrac{1}{2}$ for $[SO(3)_c]_B$. There may be additional colored pNGBs in both sectors with TeV masses; the number and particular index $T_s$ are model dependent. Before estimating the confinement scale for these models, we note that additional dynamical ${\ensuremath{\mathbb{Z}_2}}$ breaking effects, such as new twin fermion mass terms or a shift in the strong gauge coupling at the UV scale, $\alpha_s^B(f_\Phi) =\alpha_s^A(f_\Phi) + \delta \alpha_s$, may raise or lower this scale by several orders of magnitude. Nevertheless, the general expectation is that the twin confinement scale is much lower than that in the visible sector, in contrast to MTH models with unbroken $[SU(3)_c]_B$. 
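This expectation can be made concrete by integrating the one-loop running piecewise across thresholds. The sketch below (Python) does so for the unbroken $[SU(2)_c]_B$ case, taking $\sin\vartheta = 1/3$ so that $m_{f_B} = \cot\vartheta \, m_{f_A}$; the SM quark masses and threshold treatment are rough, illustrative values, not a precision extraction:

```python
import math

def evolve(alpha_inv, b, q_from, q_to):
    """One-loop evolution of 1/alpha, using d(1/alpha)/dlnQ = b/(2 pi)."""
    return alpha_inv - b / (2 * math.pi) * math.log(q_from / q_to)

# Visible sector: run alpha_s(mZ) = 0.1179 up to f_Phi = 3 TeV
# (the log is negative for upward steps, so 1/alpha increases).
alpha_inv = 1 / 0.1179
alpha_inv = evolve(alpha_inv, 11 - 2/3 * 5, 91.2, 173.0)     # 5 flavors
alpha_inv = evolve(alpha_inv, 11 - 2/3 * 6, 173.0, 3000.0)   # 6 flavors

# Twin sector with unbroken [SU(2)_c]_B: b_B = 22/3 - 2/3 n_f, with
# thresholds at m_qB = cot(theta) m_qA (illustrative SM quark masses in GeV).
cot_theta = math.sqrt(8.0)  # sin(theta) = 1/3
thresholds = [cot_theta * m for m in (173.0, 4.18, 1.27, 0.095, 0.0047, 0.0022)]
q = 3000.0
for n_f, m_q in zip(range(6, 0, -1), thresholds):
    alpha_inv = evolve(alpha_inv, 22/3 - 2/3 * n_f, q, m_q)
    q = m_q

# Pure-gauge running below all twin quarks; the Landau pole estimates Lambda_B
Lambda_B = q * math.exp(-2 * math.pi * alpha_inv / (22/3))  # in GeV
```

With these inputs the estimate comes out at $\Lambda_B \sim 10^{-3}$ GeV, in line with the MeV-scale confinement quoted below; as noted above, ${\ensuremath{\mathbb{Z}_2}}$ breaking shifts $\delta \alpha_s$ or changes to the twin quark masses move this by orders of magnitude.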
### Cases ${\bf I}$, ${\bf II}$, ${\bf IV}$ : unbroken $[SU(2)_c \times U(1)^{(')}_{\rm EM}]_B$ symmetry Cases ${\bf I}$, ${\bf II}$, and ${\bf IV}$ have very similar gauge dynamics at low energy owing to the unbroken $[SU(2)_c \times U(1)^{(')}_{\rm EM}]_B$ color and electromagnetic gauge symmetries. Considering case [**I**]{} of the color triplet for concreteness, the beta function coefficients (\[eq:1-loop-beta\]) associated with the unbroken color symmetries in the $A$ and $B$ sectors are given by $$\begin{aligned} b_A & = 11 - \frac{2}{3} n_f^A -\frac{1}{6} n_s^A, \nonumber \\ b_B & = \frac{22}{3} - \frac{2}{3} n_f^B, \label{beta-triplet}\end{aligned}$$ where $n_f^A$ ($n_f^B$) denotes the number of active Dirac fermions in the $A$ ($B$) sector at a given energy scale. The visible sector potentially contains a color triplet scalar $\phi_A$ in the effective theory, with index $T_s = \tfrac{1}{2}$, and $n_s^A $ the number of light triplet scalars in the $A$ sector. In the left panel of Fig. \[fig:running\] we display the evolution of the strong coupling constants in the visible (red) and twin (blue) sectors. We see that the twin strong coupling becomes large near scales of order $\Lambda_B \sim$ MeV. As mentioned above, this is primarily a consequence of having fewer twin gluonic degrees of freedom and thus a smaller $b_B$ in Eq. (\[beta-triplet\]). While we have explicitly studied case [**I**]{} here, the running is essentially identical in the other cases with residual $[SU(2)_c]_B$, [**II**]{} and [**IV**]{}. The only difference is the contribution of TeV scale colored scalar degrees of freedom, which have essentially no quantitative impact on the results. ![*Left:* One-loop evolution of the strong fine structure constants in the visible (red) and twin (blue) sectors for a color triplet scalar with unbroken $[SU(2)_c]_B$ twin color symmetry, case [**I**]{} . The twin confinement scale is of order MeV. 
*Right:* Same plot for color sextet scalar with unbroken $[SO(3)_c]_B$ twin color symmetry, case [**III**]{}. The twin confinement scale is of order $10^{-23}$ GeV. In both plots we fix $\alpha^A_s(m_Z) = 0.1179$, $f_\Phi = 3$ TeV, and assume pNGB colored scalars have 1 TeV masses. Visible and twin sector gauge couplings are matched at $Q = f_\Phi$. []{data-label="fig:running"}](Figures/running-triplet.pdf "fig:"){width="43.80000%"}     ![](Figures/running-sextet-2.pdf "fig:"){width="45.00000%"}

The generators of the unbroken electromagnetic symmetry for each case are $$\begin{aligned} {\bf I}:&~~~~~({\bf 3}, {\bf 1} ,Y_\Phi)~~~& ~~~Q_B^{\rm EM} = \tau^3 + Y + \sqrt{3} \, Y_\Phi \, T^8~, \label{eq:EM-generator-1} \\ {\bf II}:&~~~~~({\bf 6}, {\bf 1} ,Y_\Phi) ~~~&~~~Q_B^{\rm EM} = \tau^3 + Y + \frac{\sqrt{3}}{2} \, Y_\Phi \, T^8~, \label{eq:EM-generator-2} \\ {\bf IV}:&~~~~~({\bf 8}, {\bf 1} ,0) ~~~&~~~ Q_B^{\rm EM} = \tau^3 + Y ~. \label{eq:EM-generator-4}\end{aligned}$$ In cases ${\bf I}$ and ${\bf II}$ the twin electric charges depend on a particle’s $T^8$ charge as well as the colored scalar’s hypercharge $Y_\Phi$. This occurs because the triplet and sextet can carry hypercharge, which leads to mass mixing between the neutral hypercharge and color gauge bosons.
On the other hand, the octet in case [**IV**]{} is real, so the EM generator is identical to that of the SM. According to Eqs. (\[eq:EM-generator-1\])–(\[eq:EM-generator-4\]), the electric charges of the twin leptons are equal to those of the visible leptons. Following symmetry breaking, the twin quark fields decompose into doublets and singlets under the unbroken $[SU(2)_c]_B$, which carry distinct electric charges. Before symmetry breaking, we denote the quark fields as $Q_{B} \sim ({\bf 3}, {\bf 2}, \tfrac{1}{6})$, $\bar u_B \sim ({\bf \bar 3}, {\bf 1}, -\tfrac{2}{3})$, $\bar d_B \sim ({\bf \bar 3}, {\bf 1}, \tfrac{1}{3})$ using two-component Weyl fermions. These fields decompose as $$\begin{aligned} Q_{B i} = \left( \begin{array}{c} \hat Q_{\! B \, \hat i} \\ \hat Q_{\! B 3} \end{array} \right) = \left( \begin{array}{cc} \hat u_{B \, \hat i} ~&~ \hat d_{B \, \hat i } \\ \hat u_{B 3} ~&~ \hat d_{B 3} \end{array} \right), ~~~~~ \bar u_{ B}^i = \left( \begin{array}{c} \epsilon^{\hat i \hat j } \, \hat {\bar u}_{ B \, \hat j} \\ \hat {\bar u}_{ B 3} \end{array} \right), ~~~~~ \bar d_{B}^i = \left( \begin{array}{c} \epsilon^{\hat i \hat j } \, \hat {\bar d}_{ B \, \hat j}\\ \hat {\bar d}_{B 3 } \end{array} \right),~~~~~ \label{eq:triplet-quarks-hatted}\end{aligned}$$ where hatted fields denote states of definite charge under $[SU(2)_c]_B$, and $\hat i = 1,2$. For example, $\hat {\bar d}_{ B \, \hat i}$ ($\hat {\bar d }_{B \, 3}$) is a doublet (singlet) under $[SU(2)_c]_B$. In Table \[tab:triplet-charges\] we indicate the electric charges of the twin quark fields for several choices of $Y_\Phi$ in these cases. These choices of $Y_\Phi$ allow Yukawa-type couplings of the colored scalar to pairs of fermions, and their implications are explored in Sec. \[sec:scalar-matter-couplings\]. We emphasize here the great difference in the twin particle spectrum compared to the basic MTH model.
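The tabulated charge assignments follow directly from the generators above: for the triplet, $T^8 = \mathrm{diag}(1,1,-2)/(2\sqrt{3})$, so the term $\sqrt{3}\, Y_\Phi\, T^8$ contributes $+Y_\Phi/2$ for the two doublet colors and $-Y_\Phi$ for the third color. A quick rational-arithmetic check in Python (the helper function is ours, purely illustrative):

```python
from fractions import Fraction as F

def twin_charge(tau3, Y, Y_Phi, color):
    """Q_B^EM = tau3 + Y + sqrt(3) * Y_Phi * T^8 for the color triplet
    (case I); sqrt(3) * T^8 = diag(1/2, 1/2, -1), so charges stay rational."""
    t8 = F(1, 2) if color in (1, 2) else F(-1)
    return tau3 + Y + Y_Phi * t8

Y_Phi = F(2, 3)
Y_Q = F(1, 6)  # twin quark doublet hypercharge
u_doublet = twin_charge(F(1, 2), Y_Q, Y_Phi, 1)   # SU(2)_c doublet up
d_doublet = twin_charge(F(-1, 2), Y_Q, Y_Phi, 1)  # SU(2)_c doublet down
u_singlet = twin_charge(F(1, 2), Y_Q, Y_Phi, 3)   # '3rd color' up
d_singlet = twin_charge(F(-1, 2), Y_Q, Y_Phi, 3)  # '3rd color' down
```

For $Y_\Phi = 2/3$ this returns charges $(1, 0, 0, -1)$ for $(\hat u_{B \hat i}, \hat d_{B \hat i}, \hat u_{B3}, \hat d_{B3})$, matching the corresponding column of Table \[tab:triplet-charges\]; the sextet case works the same way with the $\tfrac{\sqrt{3}}{2} Y_\Phi T^8$ term.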
Though much of the dynamics is determined by the ${\ensuremath{\mathbb{Z}_2}}$ twin symmetry shared with the SM fields, we end up with new unconfined quarks, from the part of the field along the VEV direction, as well as new $SU(2)_c$ bound states. Insights into this bound state spectrum and the dynamics of the phase transition can be found in, for example, [@Hands:1999md; @Kogut:1999iv; @Aloisio:2000if; @Kogut:2001na; @Kogut:2002cm; @Kogut:2003ju; @Nishida:2003uj; @Lombardo:2008vc; @Buckley:2012ky; @Detmold:2014qqa; @Forestell:2016qhc; @DeGrand:2019vbx], but a few qualitative items are worth mentioning. First, the lightest quark masses are a few MeV, just above the confinement scale, so mesons, composed of a quark and an antiquark, and baryons, composed of two quarks, can likely be treated as nonrelativistic bound states. In the absence of additional scalar couplings to matter there is a conserved baryon number that renders the lightest twin baryon stable, which may be interesting from a cosmological perspective. In addition, the mass of the lightest $SU(2)$ glueball is $m_0\sim 5\, \Lambda_B$ [@Teper:1998kw; @Lucini:2008vi], so it is likely that the glueball and meson/baryon spectra will overlap. However, as the lightest glueball is a $0^{++}$ state, it will decay rapidly to a pair of twin photons.

|  | $Y_\Phi = \tfrac{5}{3}$ | $Y_\Phi = \tfrac{2}{3}$ | $Y_\Phi = -\tfrac{1}{3}$ | $Y_\Phi = -\tfrac{4}{3}$ |
|---|---|---|---|---|
| $Q_{B}^{\rm EM} \big[ \hat u_{B \, \hat i} \big] = - Q_{B}^{\rm EM} \big[ \hat {\bar u}_{B}^{\, \hat i} \big]$ | $3/2$ | $1$ | $1/2$ | $0$ |
| $Q_{B}^{\rm EM} \big[ \hat d_{B \, \hat i} \big] = - Q_{B}^{\rm EM} \big[ \hat {\bar d}_{B}^{\, \hat i} \big]$ | $1/2$ | $0$ | $-1/2$ | $-1$ |
| $Q_{B}^{\rm EM} \big[ \hat u_{B 3} \big] = - Q_{B}^{\rm EM} \big[ \hat {\bar u}_{B}^{\, 3} \big]$ | $-1$ | $0$ | $1$ | $2$ |
| $Q_{B}^{\rm EM} \big[ \hat d_{B 3} \big] = - Q_{B}^{\rm EM} \big[ \hat {\bar d}_{B}^{\, 3} \big]$ | $-2$ | $-1$ | $0$ | $1$ |

|  | $Y_\Phi = \tfrac{4}{3}$ | $Y_\Phi = \tfrac{1}{3}$ | $Y_\Phi = -\tfrac{2}{3}$ |
|---|---|---|---|
| $Q_{B}^{\rm EM} \big[ \hat u_{B \, \hat i} \big] = - Q_{B}^{\rm EM} \big[ \hat {\bar u}_{B}^{\, \hat i} \big]$ | $1$ | $3/4$ | $1/2$ |
| $Q_{B}^{\rm EM} \big[ \hat d_{B \, \hat i} \big] = - Q_{B}^{\rm EM} \big[ \hat {\bar d}_{B}^{\, \hat i} \big]$ | $0$ | $-1/4$ | $-1/2$ |
| $Q_{B}^{\rm EM} \big[ \hat u_{B 3} \big] = - Q_{B}^{\rm EM} \big[ \hat {\bar u}_{B}^{\, 3} \big]$ | $0$ | $1/2$ | $1$ |
| $Q_{B}^{\rm EM} \big[ \hat d_{B 3} \big] = - Q_{B}^{\rm EM} \big[ \hat {\bar d}_{B}^{\, 3} \big]$ | $-1$ | $-1/2$ | $0$ |

### Case ${\bf III}$ : unbroken $[SO(3)_c]_B$ symmetry

In case ${\bf III}$, with the sextet scalar, the unbroken twin color symmetry is $[SO(3)_c]_B$. Within the effective theory, the visible sector contains a (complex) color sextet scalar $\phi_A$ with index $T_s = \tfrac{5}{2}$, while the twin sector contains a real quintuplet scalar $\phi_B$ with index $T_s = \tfrac{5}{2}$. The beta function coefficients (\[eq:1-loop-beta\]) in each sector are given by $$\begin{aligned} b_A & = 11 - \frac{2}{3} n_f^A -\frac{5}{6} n_s^A, \\ b_B & = \frac{11}{6} - \frac{2}{3} n_f^B -\frac{5}{12} n_s^B, \end{aligned}$$ where $n_f^A$ ($n_f^B$) denotes the number of active Dirac fermions in the $A$ ($B$) sector, and $n_s^A$ ($n_s^B$) is the number of active colored scalars in the $A$ ($B$) sector. In the right panel of Fig. \[fig:running\] we display the evolution of the strong coupling in the visible (red) and twin (blue) sectors. We observe that the twin strong coupling blows up near scales of order $\Lambda_B \sim 10^{-23}$ GeV, many orders of magnitude below the QCD confinement scale. This is due to the smaller color charge of the $[SO(3)_c]_B$ gluons in comparison to the $[SU(2)_c]_B$ case. One observes from the figure that the twin gauge coupling runs to smaller values for some range of scales below $f_\Phi$.
Thus, at energies below the twin quark masses, where the beta function becomes negative, the coupling is comparatively small in magnitude, leading it to run very slowly. Interestingly, there are no unbroken $U(1)$ gauge symmetries in this case, as the sextet VEV lifts the twin photon, which acquires a mass of order $g' Y_\Phi f_\Phi$. The heavy twin gluons pick up a mass of order $g_s f_\Phi$ and form a quintuplet under the unbroken $[SO(3)_c]_B$ gauge symmetry. The twin quarks, on the other hand, transform in the fundamental representation of $[SO(3)_c]_B$. This again shows how different the twin and visible sectors can be, even though they are fundamentally related by the ${\ensuremath{\mathbb{Z}_2}}$ symmetry. If the twin sector is much colder than the SM, as perhaps motivated by $N_\text{eff}$ bounds, the quarks would just barely act like quirks [@Kang:2008ea]; with the width of the color flux tubes connecting them set by $1/\Lambda_B$, the length scale of the confining force is about that of a planet. Similarly, the lightest bound states are glueballs with small masses, likely a few times $\Lambda_B$, and these objects are again roughly Earth-sized. However, we typically expect that the twin quarks and gluons were in equilibrium at some point in the early universe, and the cosmic evolution of this dark sector with such a low confinement scale brings with it many open questions. Such novel dynamics and their cosmological implications are clearly worth further exploration.

### Case ${\bf V}$ : unbroken $[U(1)_c \times U(1)'_c \times U(1)_{\rm EM}]_B$ symmetry

In the color octet model of case ${\bf V}$ there is no residual non-Abelian gauge symmetry. There are, however, three unbroken Abelian symmetries, $[U(1)_c \times U(1)'_c \times U(1)_{\rm EM}]_B$, with generators $T^3$, $T^8$, and $Q_{B}^{\rm EM} = \tau^3+Y$, respectively. The heavy gluons can be grouped into complex vectors which carry charges under the $U(1)^{(')}_c$ gauge symmetries.
In particular, $G_{B}^{1,2}$ couple to $G_{B}^3$ but not $G_{B}^8$, while $G_{B}^{4,5,6,7}$ couple to both $G_{B}^3$ and $G_{B}^8$. Similarly, the different colors of quarks couple with different strengths to the massless $U(1)$ color gluons according to the generators $T^3$, $T^8$, while their twin electric charges are the same as the electric charges of their ${\ensuremath{\mathbb{Z}_2}}$ partners in the visible sector. We expect in this model that there can be a rich variety of atomic states, some of which may have important cosmological applications.

Scalar couplings to matter {#sec:scalar-matter-couplings}
==========================

| $Y_\Phi$ for $({\bf 3}, {\bf 1}, Y_\Phi)$ | Coupling to fermion bilinear | $\phi_A$ decay |
|---|---|---|
| $-\tfrac{1}{3}$ | $\Phi \,(Q \,Q)$ | $\phi_A \rightarrow \bar u\, \bar d$ |
| $-\tfrac{1}{3}$ | $\Phi^\dag \, (Q\, L)$ | $\phi_A \rightarrow u \, e, d \, \nu$ |
| $-\tfrac{1}{3}$ | $\Phi^\dag \, \bar u \, \bar d$ | $\phi_A \rightarrow \bar u \,\bar d$ |
| $-\tfrac{1}{3}$ | $\Phi\, \bar u\, \bar e$ | $\phi_A \rightarrow u \,e$ |
| $-\tfrac{1}{3}$ | $\Phi\, \bar d\, (L\,H)$ | $\phi_A \rightarrow d \, \nu$ |
| $-\tfrac{1}{3}$ | $\Phi\, (H^\dag Q) (Q \, H)$ | $\phi_A \rightarrow \bar u \, \bar d$ |
| $-\tfrac{1}{3}$ | $\Phi^\dag\, (H^\dag Q) (L\, H)$ | $\phi_A \rightarrow d \, \nu$ |
| $-\tfrac{1}{3}$ | $\Phi^\dag \, (Q \, H) (H^\dag L)$ | $\phi_A \rightarrow u \, e$ |
| $\tfrac{2}{3}$ | $\Phi^\dag \,\bar d \, \bar d$ | $\phi_A \rightarrow \bar d\, \bar d$ |
| $\tfrac{2}{3}$ | $\Phi \, \bar u \, (L\, H)$ | $\phi_A \rightarrow u \, \nu$ |
| $\tfrac{2}{3}$ | $\Phi \, \bar d \, (H^\dag L)$ | $\phi_A \rightarrow d \,\bar e$ |
| $\tfrac{2}{3}$ | $\Phi^\dag \, (H^\dag Q) \, \bar e$ | $\phi_A \rightarrow d \,\bar e$ |
| $\tfrac{2}{3}$ | $\Phi \, (H^\dag Q) (H^\dag Q)$ | $\phi_A \rightarrow \bar d\, \bar d$ |
| $\tfrac{2}{3}$ | $\Phi^\dag \, (Q\,H) (L \, H)$ | $\phi_A \rightarrow u \, \nu$ |
| $-\tfrac{4}{3}$ | $\Phi^\dag \,\bar u \, \bar u$ | $\phi_A \rightarrow \bar u \, \bar u$ |
| $-\tfrac{4}{3}$ | $\Phi \, \bar d \, \bar e$ | $\phi_A \rightarrow d \, e$ |
| $-\tfrac{4}{3}$ | $\Phi \, (Q\, H) \, (Q\, H)$ | $\phi_A \rightarrow \bar u \, \bar u$ |
| $-\tfrac{4}{3}$ | $\Phi^\dag \, (H^\dag Q) (H^\dag L)$ | $\phi_A \rightarrow d \, e$ |
| $\tfrac{5}{3}$ | $\Phi^\dag \, (Q\, H) \, \bar e$ | $\phi_A \rightarrow u \, \bar e$ |
| $\tfrac{5}{3}$ | $\Phi \, \bar u\, (H^\dag L)$ | $\phi_A \rightarrow u \, \bar e$ |

| $Y_\Phi$ for $({\bf 6}, {\bf 1}, Y_\Phi)$ | Coupling to fermion bilinear | $\phi_A$ decay | mass term, $[SU(2)_c \times U(1)'_{\rm EM}]_B$ | mass term, $[SO(3)_c]_B$ |
|---|---|---|---|---|
| $\tfrac{1}{3}$ | $\Phi^\dag \,(Q \,Q)$ | $\phi_A \rightarrow u\, d$ | $\hat u_{B3}\, \hat d_{B3}$ | $u_{B}\, d_{B}$ |
| $\tfrac{1}{3}$ | $\Phi \, \bar u \, \bar d$ | $\phi_A \rightarrow u\, d$ | $\hat{\bar u}_{B3} \, \hat{\bar d}_{B3}$ | $\bar u_{B} \, \bar d_{B}$ |
| $\tfrac{1}{3}$ | $\Phi^\dag \, (Q \, H) (H^\dag Q)$ | $\phi_A \rightarrow u\, d$ | $\hat u_{B3}\, \hat d_{B3}$ | $u_{B}\, d_{B}$ |
| $-\tfrac{2}{3}$ | $\Phi \,\bar d \, \bar d$ | $\phi_A \rightarrow d\, d$ | $\hat{\bar d}_{B3}\, \hat{\bar d}_{B3}$ | $\bar d_{B}\, \bar d_{B}$ |
| $-\tfrac{2}{3}$ | $\Phi^\dag (H^\dag Q)(H^\dag Q)$ | $\phi_A \rightarrow d\, d$ | $\hat d_{B3}\, \hat d_{B3}$ | $d_{B}\, d_{B}$ |
| $\tfrac{4}{3}$ | $\Phi \,\bar u \, \bar u$ | $\phi_A \rightarrow u\, u$ | $\hat{\bar u}_{B3}\, \hat{\bar u}_{B3}$ | $\bar u_{B}\, \bar u_{B}$ |
| $\tfrac{4}{3}$ | $\Phi^\dag (Q\, H)(Q \, H)$ | $\phi_A \rightarrow u\, u$ | $\hat u_{B3}\, \hat u_{B3}$ | $u_{B}\, u_{B}$ |

| $Y_\Phi$ for $({\bf 8}, {\bf 1}, 0)$ | Coupling to fermion bilinear | $\phi_A$ decay | mass term, $[SU(2)_c \times U(1)_c \times U(1)_{\rm EM}]_B$ | mass term, $[U(1)_c \times U(1)'_c \times U(1)_{\rm EM}]_B$ |
|---|---|---|---|---|
| $0$ | $\Phi \,(Q \,H)\, \bar u$ | $\phi_A \rightarrow u\, \bar u$ | $\hat u_{B}\, \hat {\bar u}_B - 2\, \hat u_{B3}\, \hat {\bar u}_{B3}$ | $\hat u_{B1}\, \hat {\bar u}_{B1} - \hat u_{B2}\, \hat {\bar u}_{B2}$ |
| $0$ | $\Phi \,(H^\dag Q )\, \bar d$ | $\phi_A \rightarrow d\, \bar d$ | $\hat d_{B}\, \hat {\bar d}_B - 2\, \hat d_{B3}\, \hat {\bar d}_{B3}$ | $\hat d_{B1}\, \hat {\bar d}_{B1} - \hat d_{B2}\, \hat {\bar d}_{B2}$ |

Thus far we have only considered the dynamics of the gauge sector and scalar potential. We now investigate the consequences of new couplings of the colored scalars to fermions. These couplings have two primary motivations.
First, they cause the visible sector colored scalar $\phi_A$ to decay, explaining in a simple way the absence of stable colored relics. Second, following spontaneous color breaking in the mirror sector, such couplings produce new dynamical twin fermion mass terms. Consequently, the spectrum of twin fermions can be deformed with respect to the mirror symmetric model, which may have important consequences for cosmology and phenomenology. We emphasize, however, that the exact ${\ensuremath{\mathbb{Z}_2}}$ symmetry in our setup produces tight correlations between variations in the twin mass spectrum and visible sector phenomenology, including the collider signals of $\phi_A$ (Sec. \[sec:collider\]) and indirect precision tests (Sec. \[sec:indirect\]). Given these motivations, we focus mainly on couplings of a single colored scalar to a pair of fermions. For the $SU(2)_L$ singlet, color triplet $({\bf 3}, {\bf 1}, Y_\Phi)$, sextet $({\bf 6}, {\bf 1}, Y_\Phi)$, and real octet $({\bf 8}, {\bf 1}, 0)$ scalars considered in this work, we find eight distinct representations that allow such couplings. These representations are shown in Table \[tab:scalar-representations\], along with the complete set of couplings to fermion bilinears which respect the full SM gauge symmetry. Fermions are written using two-component, left-chirality Weyl spinors. The quantum numbers of the visible sector fields are $Q_A^T = (u_A, d_A)^T \sim ({\bf 3}, {\bf 2}, \tfrac{1}{6})$, $\bar u_A \sim ({\bf \bar 3}, {\bf 1}, -\tfrac{2}{3})$, $\bar d_A \sim ({\bf \bar 3}, {\bf 1}, \tfrac{1}{3})$, $L_A^T = (\nu_A, e_A)^T \sim ({\bf 1}, {\bf 2}, -\tfrac{1}{2})$, $\bar e_A \sim ({\bf 1}, {\bf 1}, 1)$, $H_A \sim ({\bf 1}, {\bf 2}, \tfrac{1}{2})$, and similarly for the mirror sector. The table also indicates the corresponding decays of $\phi_A$ and the twin fermion mass terms generated by each coupling, which will be discussed in more detail below.
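The eight representations can be recovered by listing the gauge-invariant fermion bilinears and solving for the scalar hypercharge $Y_\Phi$ in each case; a sketch in Python (the bookkeeping helper is ours, purely illustrative):

```python
from fractions import Fraction as F

# SM hypercharges of the two-component Weyl fields (and Higgs insertions)
Y = {'Q': F(1, 6), 'ubar': F(-2, 3), 'dbar': F(1, 3),
     'L': F(-1, 2), 'ebar': F(1), 'H': F(1, 2), 'Hdag': F(-1, 2)}

def y_phi(fields, dagger):
    """Hypercharge Y_Phi needed for Phi (or Phi^dag, if dagger) times the
    product of `fields` to be hypercharge invariant."""
    total = sum(Y[f] for f in fields)
    return total if dagger else -total

# Color-triplet bilinears from the table
triplet_ops = [
    (('Q', 'Q'), False), (('Q', 'L'), True), (('ubar', 'dbar'), True),
    (('ubar', 'ebar'), False), (('dbar', 'L', 'H'), False),
    (('dbar', 'dbar'), True), (('ubar', 'L', 'H'), False),
    (('ubar', 'ubar'), True), (('dbar', 'ebar'), False),
    (('Q', 'H', 'ebar'), True), (('ubar', 'Hdag', 'L'), False),
]
triplet_reps = sorted({y_phi(f, d) for f, d in triplet_ops})
sextet_reps = sorted({y_phi(f, d) for f, d in
                      [(('Q', 'Q'), True), (('dbar', 'dbar'), False),
                       (('ubar', 'ubar'), False)]})
```

This yields $Y_\Phi \in \{-4/3, -1/3, 2/3, 5/3\}$ for the triplet and $\{-2/3, 1/3, 4/3\}$ for the sextet, which together with the real octet at $Y_\Phi = 0$ account for the eight representations.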
We will also make a few brief remarks below regarding possible couplings beyond those in Table \[tab:scalar-representations\]. Decays of $\phi_A$ ------------------ From Table \[tab:scalar-representations\], the visible sector colored scalars $\phi_A$ can decay in a variety of ways, depending on their quantum numbers and the particular couplings allowed by gauge symmetry. Color triplets can decay to a pair of SM quarks, a quark and a neutrino, or a quark and a charged lepton. To illustrate, consider $\Phi_A \sim ({\bf 3}, {\bf 1}, \tfrac{2}{3} )$ with general Lagrangian containing the following interactions: $$\begin{aligned} -{\cal L} & \supset \frac{1}{2} \lambda_{\bar d \bar d} ~ \Phi^\dag_A \, \bar d_A \, \bar d_A + \frac{c_{\bar u L}}{ \Lambda }\, \Phi_A \, \bar u_A \, (L_A H_A) + \frac{c_{\bar d L}}{ \Lambda }\, \Phi_A \, \bar d_A \, (H^\dag_A L_A) + \frac{c_{Q \bar e}}{ \Lambda }\, \Phi^\dag_A (H^\dag_A Q_A) \, \bar e_A \nonumber \\ & + \frac{c_{Q Q}}{ 2 \Lambda^2 }\, \Phi_A \, (H^\dag_A Q_A) (H^\dag_A Q_A) + \frac{c_{Q L}}{ \Lambda^2 }\, \Phi^\dag_A \, (Q_A H_A) (L_A H_A) + {\rm H.c.} \nonumber \\ & \supset \frac{1}{2} \lambda_{\bar d \bar d} ~ \phi^\dag_A \, \bar d_A \, \bar d_A + \frac{c_{\bar u L} v_A}{\sqrt{2} \Lambda }\, \phi_A \, \bar u_A \, \nu_A \, + \frac{c_{\bar d L} v_A}{\sqrt{2} \Lambda }\, \phi_A \, \bar d_A \, e_A + \frac{c_{Q \bar e} v_A}{\sqrt{2} \Lambda }\, \phi^\dag_A \, d_A \, \bar e_A \nonumber \\ & + \frac{c_{Q Q} v_A^2}{4\Lambda^2 }\, \phi_A \, d_A \, d_A + \frac{c_{Q L}v_A^2}{2 \Lambda^2 }\, \phi^\dag_A \, u_A \, \nu_A + {\rm H.c.} ~, \label{eq:L-triplet-2/3-A} \end{aligned}$$ where in the second line we have used Eqs. (\[eq:NLP-Higgs\]) and (\[eq:NLP-triplet-1\]). The interactions in Eq. (\[eq:L-triplet-2/3-A\]) lead to the decays $\phi_A \rightarrow \bar d \, \bar d, u \,\nu, d \, \bar e$. [^4] On the other hand, color sextets (octets) decay strictly to pairs of quarks (quark-antiquark pairs). 
For instance, in the case of the sextet scalar $\Phi \sim ({\bf 6}, {\bf 1}, -\tfrac{2}{3} ) $, we can write $$\begin{aligned} -{\cal L} & \supset \frac{1}{2} \lambda_{\bar d \bar d} ~ \Phi_A \, \bar d_A \, \bar d_A + \frac{c_{Q Q}}{ 2\Lambda^2 }\, \Phi^\dag_A \, (H^\dag_A Q_A) (H^\dag_A Q_A) + {\rm H.c.} \nonumber \\ & \supset \frac{1}{2} \lambda_{\bar d \bar d} ~ \phi_A \, \bar d_A \, \bar d_A + \frac{c_{Q Q} v_A^2}{ 4\Lambda^2 }\, \phi^\dag_A \, d_A \, d_A + {\rm H.c.}~, \label{eq:L-sextet-2/3-A}\end{aligned}$$ which lead to the decay $\phi_A \rightarrow dd$. Taking into account the various flavors of quark and lepton, there are a variety of potential collider signatures of the colored scalars, which we explore in Sec. \[sec:collider\]. Of course, the colored scalar can decay in more channels than those listed in Table \[tab:scalar-representations\]. One possibility is that $\phi_A$ decays to a pair of SM bosons. For instance, the color octet may decay to a pair of gluons through the dimension five operator ${\rm Tr}\, \Phi_A G_A G_A$. Another interesting possibility emerges if operators that couple fields in the two sectors are present. These are typically higher dimension operators, and can naturally arise when ‘singlet’ fields [@Bishara:2018sgl], which transform by at most a sign under ${\ensuremath{\mathbb{Z}_2}}$, are integrated out. As an example, taking $\Phi_{A,B} \sim ({\bf 3}, {\bf 1}, \tfrac{2}{3} )$, we can write the operator $(\Phi_A \bar u_A) (\Phi_B \bar u_B) \supset f_\Phi \, \phi_A \bar u_A \hat {\bar u}_{B3}$, leading to the decay of $\phi_A$ to one SM quark and one twin quark. The same operator could allow the twin quark to decay back into the visible sector via an off-shell $\phi_A$. 
Dynamical twin fermion masses
-----------------------------

Before considering new twin fermion masses, we first recall the ordinary mass terms originating from twin electroweak symmetry breaking: $$\begin{aligned} \label{eq:Y} - {\cal L} & \supset & y_e (H_B^\dag L_B) \, \bar e_B + y_u (Q_B H_B) \bar u_B + y_d (H_B^\dag Q_B ) \bar d_B \, + \, \frac{c_\nu}{\Lambda_\nu} (L_B H_B)(L_B H_B) +{\rm H.c.} \nonumber \\ & \supset & \frac{y_e \,v_B}{\sqrt{2}} e_B \bar e_B + \frac{y_u \, v_B}{\sqrt{2}} u_B \bar u_B + \frac{y_d \, v_B}{\sqrt{2}} d_B \bar d_B + \frac{c_\nu \,v_{\! B}^2}{2 \Lambda_\nu} \nu_B\nu_B + {\rm H.c.} ~.\end{aligned}$$ These Higgs Yukawa interactions lead to the usual mass terms, which are larger than those in the SM by the factor $v_B/v_A = \cot \vartheta \approx$ few. The new twin fermion masses generated by spontaneous color symmetry breaking depend on the particular scalar representation and symmetry breaking pattern. The following discussion is intended to be illustrative, with examples presented for the triplet, sextet, and octet models. The full set of possible twin fermion mass terms for a given model is provided in Table \[tab:scalar-representations\]. While we restrict our analysis to the SM fermion field content, we note that additional interesting possibilities for twin fermion masses arise if new singlet fermions are present in the theory [@Liu:2019ixm].

### Color triplets

We first study a triplet example with quantum numbers $\Phi \sim ({\bf 3}, {\bf 1}, \tfrac{2}{3} ) $.
The Lagrangian contains the following interactions coupling the scalar to pairs of fermions: $$\begin{aligned} -{\cal L} & \supset \frac{1}{2} \lambda_{\bar d \bar d} ~ \Phi^\dag_B \, \bar d_B \, \bar d_B + \frac{c_{\bar u L}}{ \Lambda }\, \Phi_B \, \bar u_B \, (L_B H_B) + \frac{c_{\bar d L}}{ \Lambda }\, \Phi_B \, \bar d_B \, (H^\dag_B L_B) + \frac{c_{Q \bar e}}{ \Lambda }\, \Phi^\dag_B (H^\dag_B Q_B) \, \bar e_B \nonumber \\ & + \frac{c_{Q Q}}{ 2\Lambda^2 }\, \Phi_B \, (H^\dag_B Q_B) (H^\dag_B Q_B) + \frac{c_{Q L}}{ \Lambda^2 }\, \Phi^\dag_B \, (Q_B H_B) (L_B H_B) + {\rm H.c.} \nonumber \\ & \supset \frac{1}{2} \lambda_{\bar d \bar d} f_\Phi \, \hat {\bar d }_B \, \hat {\bar d}_B + \frac{c_{\bar u L} v_B f_\Phi}{\sqrt{2} \Lambda } \, \hat{\bar u}_{B 3} \, \nu_B + \frac{c_{\bar d L} v_B f_\Phi}{ \sqrt{2} \Lambda } \, \hat{ \bar d}_{B 3} \,e_B + \frac{c_{Q \bar e} v_B f_\Phi }{\sqrt{2} \Lambda }\, \hat{d}_{B 3} \, \bar e_B \nonumber \\ & + \frac{c_{Q Q} v_B^2 f_\Phi }{ 4 \Lambda^2 } \, \hat d_B \, \hat d_B + \frac{c_{Q L} v_B^2 f_\Phi }{ 2\Lambda^2 }\, \hat{u}_{B 3} \nu_B + {\rm H.c.}~, \label{eq:L-triplet-2/3-B}\end{aligned}$$ where in the second line we have set the scalar to its VEV, $\langle \Phi_i \rangle = f_\Phi \delta_{i3}$ (Eq. (\[eq:VEV-triplet\])), effecting the spontaneous symmetry breakdown of $[SU(3)_c \times SU(2)_L\times U(1)_Y \rightarrow SU(2)_c \times U(1)'_{\rm EM}]_B $. We have also used the quark decomposition in Eq. (\[eq:triplet-quarks-hatted\]). We note that the couplings $\lambda_{\bar d \bar d}$, $c_{Q Q}$ in Eq. (\[eq:L-triplet-2/3-B\]) are antisymmetric in generation space. We see that new twin fermion mass terms beyond those generated by the Higgs VEV arise from the interactions in Eq. (\[eq:L-triplet-2/3-B\]). 
In particular, there are ‘Majorana-like’ mass terms for the down-type quark fields, which are allowed since these fields are not charged under the unbroken twin electromagnetic gauge symmetry; see Table \[tab:triplet-charges\].[^5] There are also mass terms which marry ‘3rd color’ ($[SU(2)_c]_B$ singlet) quark fields with leptons. From the electric charges in Table \[tab:triplet-charges\] it is easy to verify that the operators in the second line of Eq. (\[eq:L-triplet-2/3-B\]) respect the unbroken twin electromagnetic gauge symmetry. Different physical mass hierarchies can arise depending on the size of the various couplings in Eq. (\[eq:L-triplet-2/3-B\]). For instance, consider a simple case in which only $\lambda_{\bar d \bar d}^{12} = - \lambda_{\bar d \bar d}^{21} \neq 0$. Accounting for the Higgs Yukawa interactions, we have the following mass terms in the down-strange $[SU(2)_c]_B$ doublet sector: $$\begin{aligned} -{\cal L}& \supset & \overline M_d \, \hat{\bar d}_B \, \hat{ \bar s}_B \, + m_{d_B} \, \hat{\bar d}_B \, \hat{ d}_B+ m_{s_B} \, \hat{\bar s}_B \, \hat{ s}_B +{\rm H.c.} ,\end{aligned}$$ where we have defined the mass parameters $m_{d_B} = y_d v_B/\sqrt{2}$, $m_{s_B} = y_s v_B/\sqrt{2}$, and $\overline M_d = \lambda_{\bar d\bar d}^{12} f_\Phi$. In the limit $\overline M_d \gg m_{s_B}, m_{d_B}$, a seesaw mechanism operates, with the two mass eigenstates having approximate masses $\overline M_d$ and $m_{s_B} m_{d_B}/ \overline M_d$. Taking $f_\Phi \sim \Lambda \sim 5$ TeV, $\sin \vartheta \simeq 1/3$, and $\lambda_{\bar d \bar d}^{12}$ of order one, the mass eigenvalues are of order $5$ TeV and $100$ eV.
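The seesaw estimate can be checked by diagonalizing the full Weyl mass matrix in the basis $(\hat d_B, \hat s_B, \hat{\bar d}_B, \hat{\bar s}_B)$ numerically; a short NumPy sketch, using rough input values of the size quoted above:

```python
import numpy as np

# Rough inputs in GeV: twin down/strange Dirac masses (~ cot(theta) times
# the SM values for sin(theta) = 1/3) and Mbar = lambda^{12} f_Phi ~ 5 TeV.
m_d, m_s, Mbar = 0.013, 0.27, 5000.0

# Symmetric Weyl mass matrix in the basis (d, s, dbar, sbar):
# m_d pairs d-dbar, m_s pairs s-sbar, Mbar pairs dbar-sbar.
M = np.array([[0.0, 0.0, m_d, 0.0],
              [0.0, 0.0, 0.0, m_s],
              [m_d, 0.0, 0.0, Mbar],
              [0.0, m_s, Mbar, 0.0]])

masses = np.linalg.svd(M, compute_uv=False)  # physical masses, descending

# Seesaw expectation: a heavy Dirac pair near Mbar and two light states
# near m_d * m_s / Mbar
heavy, light = masses[0], masses[-1]
```

For these inputs the light eigenvalue comes out at a few hundred eV, consistent with the order of magnitude quoted in the text.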
On the other hand, if both $\lambda_{\bar d \bar d}^{12} = - \lambda_{\bar d \bar d}^{21} \neq 0$ and $c_{Q Q}^{12} = - c_{QQ}^{21} \neq 0$ give large contributions to the quark masses relative to those from the Higgs Yukawa couplings, then the two masses are $\overline M_d = \lambda_{\bar d\bar d}^{12} f_\Phi$ and $M_d = - \frac{c_{QQ}^{12} v_B^2 f_\Phi}{2\Lambda^2}$. Taking $f_\Phi \sim \Lambda \sim 5$ TeV, $\sin \vartheta \simeq 1/3$, and order one values for $\lambda_{\bar d \bar d}^{12}$ and $c_{QQ}^{12}$, we find $\overline M_d \sim 5$ TeV and $M_d \sim 50$ GeV. Twin fermion masses can be distorted away from the MTH expectation in a variety of ways, but there are correlated effects in the visible sector due to the ${\ensuremath{\mathbb{Z}_2}}$ related interactions. For example, if both $\lambda_{\bar d \bar d}$ and $c_{\bar u L}$ in Eq. (\[eq:L-triplet-2/3-B\]) are nonzero, both baryon number and lepton number are violated by one unit, leading to nucleon decay in the visible sector. These and other indirect constraints on scalar-fermion couplings are outlined in Sec. \[sec:indirect\].

### Color sextet

For the color sextet scalar we focus, for concreteness, on the case $\Phi_B \sim ({\bf 6}, {\bf 1}, -\tfrac{2}{3} ) $. With these quantum numbers we can add the following interactions to the Lagrangian: $$\begin{aligned} -{\cal L} & \supset \frac{1}{2} \lambda_{\bar d \bar d} ~ \Phi_B \, \bar d_B \, \bar d_B + \frac{c_{Q Q}}{ 2\Lambda^2 }\, \Phi^\dag_B \, (H^\dag_B Q_B) (H^\dag_B Q_B) + {\rm H.c.} ~, \label{eq:L-sextet-2/3-B-0}\end{aligned}$$ where the couplings $\lambda_{\bar d \bar d}$, $c_{Q Q}$ in Eq. (\[eq:L-sextet-2/3-B-0\]) are symmetric in generation space. In contrast to the triplet case, no lepton mass terms are generated from Eq. (\[eq:L-sextet-2/3-B-0\]). There are, however, new mass terms generated for down type quarks. We examine each of the two possible gauge symmetry breaking patterns for the color sextet in turn.
For case [**II**]{}, the sextet scalar obtains a VEV, $\langle \Phi_{Bij}\rangle = f_\Phi \delta_{i3}\delta_{j3}$ (Eq. (\[eq:VEV-sextet-1\])), leading to the symmetry breaking pattern $[SU(3)_c \times SU(2)_L\times U(1)_Y \rightarrow SU(2)_c \times U(1)'_{\rm EM}]_B $. Using Eq. (\[eq:triplet-quarks-hatted\]), the twin quark masses that follow from Eq. (\[eq:L-sextet-2/3-B-0\]) are given by $$\begin{aligned} -{\cal L} & \supset \frac{1}{2} \lambda_{\bar d \bar d} ~ f_\Phi \, \hat{\bar d}_{B 3} \,\hat{ \bar d}_{B 3} + \frac{c_{Q Q} v_B^2 f_\Phi }{ 4\Lambda^2 }\, \hat d_{B3} \, \hat d_{B3} + {\rm H.c.}~. \label{eq:L-sextet-2/3-B-1}\end{aligned}$$ These are Majorana mass terms for the ‘3rd color’ ($[SU(2)_c]_B$ singlet) down quark fields, and are consistent with the fact that these quarks are not charged under the unbroken twin electromagnetic gauge symmetry; see Table \[tab:triplet-charges\]. Alternatively, if the symmetry breakdown proceeds via $[SU(3)_c \times SU(2)_L\times U(1)_Y \rightarrow SO(3)_c]_B$ due to the VEV $\langle \Phi_{B\, i j} \rangle = \tfrac{f_\Phi}{\sqrt{3}}\delta_{ij}$ (Eq. (\[eq:VEV-sextet-2\])), case [**III**]{}, the down type quarks obtain a mass $$\begin{aligned} -{\cal L} & \supset \frac{ \lambda_{\bar d \bar d} \, f_\Phi }{2\sqrt{3} }\, \bar d_B \, \bar d_B + \frac{c_{Q Q} \, v_B^2 f_\Phi }{ 4 \sqrt{3}\,\Lambda^2 } \,d_B \, d_B + {\rm H.c.} ~. \label{eq:L-sextet-2/3-B-2}\end{aligned}$$ We see that Majorana mass terms for the $[SO(3)_c]_B$ down quark fields are generated. The presence of such mass terms is consistent with the fact that there are no unbroken $U(1)$ gauge symmetries in the low energy theory. The new mass terms in Eqs. (\[eq:L-sextet-2/3-B-1\]) and (\[eq:L-sextet-2/3-B-2\]) can dominate over the usual EW ones for large enough couplings, and may or may not feature a seesaw behavior in analogy with the color triplet example discussed above. In case [**II**]{}, Eq. 
(\[eq:L-sextet-2/3-B-1\]), only the ‘3rd color’, $[SU(2)_c]_B $ singlet quark obtains a mass. Conversely, in case [**III**]{}, Eq. (\[eq:L-sextet-2/3-B-2\]), all quark colors can be lifted. ### Color octet In models with a real octet scalar, $\Phi_B \sim ({\bf 8}, {\bf 1}, 0 )$, there are two possible couplings to quark pairs that arise from dimension 5 operators, $$\begin{aligned} -{\cal L} & \supset \frac{c_{Q \bar u}}{ \Lambda }\, \Phi_B (Q_B H_B) \, \bar u_B + \frac{c_{Q \bar d}}{ \Lambda }\, \Phi_B (H^\dag_B Q_B) \, \bar d_B + {\rm H.c.}~. \label{eq:L-octet-0-B-1}\end{aligned}$$ As with the sextet, no lepton mass terms are generated from Eq. (\[eq:L-octet-0-B-1\]), while the resulting quark mass terms are similar to the standard ones arising from the Higgs Yukawa couplings (\[eq:Y\]) in that they marry $SU(2)_L$ singlet and doublet quarks. The precise form of the quark masses depends on the pattern of gauge symmetry breaking. For case [**IV**]{}, the octet scalar obtains a VEV, $\Phi_B = \sqrt{2} f_\Phi T^8$ (Eq. (\[eq:VEV-octet-1\])), leading to the symmetry breaking pattern $[SU(3)_c \times SU(2)_L\times U(1)_Y \rightarrow SU(2)_c \times U(1)_c \times U(1)_{\rm EM}]_B$. Using Eq. (\[eq:triplet-quarks-hatted\]), the twin fermion masses that follow from Eq. (\[eq:L-octet-0-B-1\]) are given by $$\begin{aligned} -{\cal L} & \supset \frac{c_{Q \bar u} \, v_B \, f_\Phi }{2 \sqrt{3}\, \Lambda }\, \left( \hat u_B \, \hat{ \bar u}_B - 2 \, \hat u_{B 3} \, \hat{ \bar u}_{B 3} \right) + \frac{c_{Q \bar d} \, v_B \, f_\Phi }{ 2 \sqrt{3}\, \Lambda }\, \left( \hat d_B \, \hat{ \bar d}_B - 2 \, \hat d_{B 3} \, \hat{ \bar d}_{B 3} \right) + {\rm H.c.}~. \label{eq:L-octet-0-B-2}\end{aligned}$$ Interestingly, in this case all quark colors obtain a mass from a single interaction. In case [**V**]{}, the octet scalar obtains a VEV, $\Phi_B = \sqrt{2} f_\Phi T^3$ (Eq.
(\[eq:VEV-octet-2\])), leading to the symmetry breaking pattern $[SU(3)_c \times SU(2)_L\times U(1)_Y \rightarrow U(1)_c \times U(1)'_c \times U(1)_{\rm EM}]_B$. The twin quark masses resulting from Eq. (\[eq:L-octet-0-B-1\]) are $$\begin{aligned} -{\cal L} & \supset \frac{c_{Q \bar u} v_B f_\Phi}{2 \Lambda }\, \left( u_{B 1}\, \bar u_{B1} -u_{B 2} \, \bar u_{B2} \right) + \frac{c_{Q \bar d} v_B f_\Phi }{2 \Lambda }\, \left( d_{B 1} \, \bar d_{B1} -d_{B 2} \, \bar d_{B2} \right) + {\rm H.c.} ~. \label{eq:L-octet-0-B-3}\end{aligned}$$ In this case, only the first and second quark colors are lifted, while the third color does not obtain a mass. This is consistent with the unbroken $[U(1)_c \times U(1)'_c \times U(1)_{\rm EM}]_B$ gauge symmetry. The mass terms in Eqs. (\[eq:L-octet-0-B-2\]) and (\[eq:L-octet-0-B-3\]) can be as large as ${\cal O}(100\, {\rm GeV})$ for order one Wilson coefficients and $f_\Phi \sim \Lambda$. ### Other sources of twin fermion masses Thus far we have considered twin fermion masses involving a single colored scalar field, and all such possibilities of this type are shown in Table \[tab:scalar-representations\]. Additional options arise from couplings involving two colored scalars. First, there is always the possibility of coupling the gauge singlet operator $|\Phi_B|^2$ to the usual Higgs Yukawa operators, e.g., $|\Phi_B|^2 (H_B^\dag L_B) \bar e_B$. After $\Phi_B$ obtains a VEV, effective Yukawa couplings are generated in the twin sector, which can exceed the SM ones by a factor of 10–100 for the light generations without spoiling naturalness; see the discussion in Ref. [@Batell:2019ptb] for further details. Furthermore, we can couple two color triplet scalars to pairs of quark fields in nontrivial ways to generate new twin quark masses.
As an illustration consider $\Phi \sim ({\bf 3}, {\bf 1}, \tfrac{2}{3} ) $, with operator $\Phi_{B\, i} \, \Phi_{B\, j} \, \bar u_B^i \, \bar u_B^j \supset f_\Phi^2 \, \hat{ \bar u}_{B 3} \, \hat{ \bar u}_{B 3}$, which provides an additional mass term beyond those presented in Eq. (\[eq:L-triplet-2/3-B\]). Indirect constraints {#sec:indirect} ==================== The previous section showed that the spontaneous breakdown of twin color and ${\ensuremath{\mathbb{Z}_2}}$ can also dynamically generate new twin fermion mass terms, when there are sizable couplings between the colored scalar fields and matter fields. The exact ${\ensuremath{\mathbb{Z}_2}}$ symmetry correlates these new masses to visible sector phenomena, including baryon and lepton number violation, quark and lepton flavor changing processes, deviations in electroweak probes, and CP-violation. Indirect tests in the visible sector can limit the size and structure of the new twin fermion mass terms. Given the range of models and possible new couplings (see Table \[tab:scalar-representations\]), a complete vetting of these constraints is beyond our scope. Instead, we provide illustrative examples of the characteristic phenomena that can occur. Many of the phenomena we consider here occur in the context of R-parity violating supersymmetry; for a review see Ref. [@Barbier:2004ez]. Baryon and lepton number violation ---------------------------------- In triplet models with hypercharge $Y_\Phi=\frac23,-\frac13,-\frac43$ the proton may decay, which leads to strong constraints on certain combinations of couplings. For a comprehensive review on proton decay see Ref. [@Nath:2006ut]. 
For example, consider $\Phi \sim ({\bf 3}, {\bf 1},-\tfrac{1}{3})$ with non-vanishing couplings to the first generation, $$\begin{aligned} {\cal L}& \supset & \lambda_{QL}^{11} \, \Phi_A^\dag \, (Q_A^1 L_A^1 ) + \lambda_{\bar u \bar d}^{11} \, \Phi_A^\dag \, \bar u_A^1 \, \bar d_A^1 +{\rm H.c.} \nonumber \\ & \supset & \lambda_{QL}^{11} \, \phi_A^\dag \, u_A \, e_A + \lambda_{\bar u \bar d}^{11} \, \phi_A^\dag \, \bar u_A \, \bar d_A + {\rm H.c.}~. \label{eq:L-BL1}\end{aligned}$$ In this case, tree level exchange of $\phi_A$ allows the proton to decay into a pion and positron, $p^+ \rightarrow e^+ \pi^0$, with decay width $$\begin{aligned} \Gamma(p^+ \rightarrow e^+ \pi^0) & = & \frac{ | \lambda_{QL}^{11} \, \lambda_{\bar u \bar d}^{11} \, |^2}{m_{\phi_A}^4} \frac{|\alpha|^2 (1+F+D)^2 \, m_p}{64 \pi f^2}\left(1-\frac{m_\pi^2}{m_p^2}\right)^2 \\ & \simeq & (10^{34}\, {\rm yr})^{-1} \left( \frac{\sqrt{| \lambda_{QL}^{11} \, \lambda_{\bar u \bar d}^{11} \, |}}{4 \times 10^{-13}} \right)^4\left( \frac{{\rm TeV}}{m_{\phi_A}} \right)^4 \nonumber\end{aligned}$$ where $|\alpha| = 0.0090\, {\rm GeV}^3$ [@Tsutsui:2004qc] is the nucleon decay hadronic matrix element, $F+D \simeq 1.267$ [@Cabibbo:2003cu] is a baryon chiral Lagrangian parameter, and $f = 131$ MeV. The current limits from Ref. [@Miura:2016krn] for this channel are $\tau_p/{ {\rm Br}(p^+ \rightarrow e^+ \pi^0)} > 1.6 \times 10^{34}$ yr at 90$\%$ C.L. The non-observation of proton decay generally places strong limits on pairs of couplings that violate $B$ in triplet scalar models. Depending on the flavor structure of the couplings, there may be other proton decay modes and other nucleon/baryon decays allowed. In scenarios with a single colored scalar in the visible sector, nucleon decays with $\Delta B = 1$ are usually the most sensitive probes of $B$ violating couplings. Processes like neutron-antineutron oscillations and dinucleon decays with $\Delta B = 2$ are expected to be less sensitive.
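The benchmark lifetime quoted above can be reproduced numerically; the sketch below simply evaluates the width formula with the stated hadronic inputs and the standard unit conversion $\hbar \simeq 6.58 \times 10^{-25}\,{\rm GeV\,s}$.

```python
import math

# Inputs as quoted in the text
alpha_had = 0.0090            # GeV^3, nucleon decay hadronic matrix element
F_plus_D = 1.267              # chiral Lagrangian parameter F + D
m_p, m_pi, f_pi = 0.938, 0.135, 0.131   # GeV
lam_prod = (4e-13) ** 2       # |lambda_QL^11 lambda_ubar-dbar^11| benchmark
m_phi = 1000.0                # GeV, triplet scalar mass

# Gamma(p -> e+ pi0) in GeV, term by term as in the displayed formula
width = (lam_prod ** 2 / m_phi ** 4
         * alpha_had ** 2 * (1 + F_plus_D) ** 2 * m_p / (64 * math.pi * f_pi ** 2)
         * (1 - m_pi ** 2 / m_p ** 2) ** 2)

hbar_GeV_s = 6.582e-25        # GeV * s
tau_yr = hbar_GeV_s / width / 3.156e7
print(f"tau_p ~ {tau_yr:.1e} yr")   # of order 1e34 yr, at the Super-K limit
```

Up to rounding, this returns a lifetime of order $10^{34}$ yr, matching the benchmark in the second line of the width formula.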
However, if there are additional colored scalar fields present then such $\Delta B = 2$ processes can be observable; see e.g., Ref. [@Arnold:2012sd] for a recent study. In triplet models with $Y_\Phi=\frac23,-\frac13$, certain combinations of scalar-fermion couplings can violate lepton number by two units while conserving baryon number. In such cases we generally expect that neutrino masses are generated radiatively. For instance, consider again $\Phi \sim ({\bf 3}, {\bf 1}, -\tfrac{1}{3})$, but with the following interactions: $$\begin{aligned} -{\cal L} & \supset \lambda_{Q L} \, \Phi^\dag_A ( Q_A L_A ) + \frac{c_{\bar d L}}{ \Lambda }\, \Phi_A \, \bar d_A \, (L_A H_A) + {\rm H.c.} \nonumber \\ & \supset - \lambda_{Q L} \, \phi^\dag_A \, d_A \, \nu_A + \frac{c_{\bar d L} v_A}{ \sqrt{2} \Lambda }\, \phi_A \, \bar d_A \, \nu_A + {\rm H.c.}~. \label{eq:L-triplet-1/3-A}\end{aligned}$$ These interactions break lepton number by two units. Neutrino masses will be generated at one loop, with characteristic size $$m_\nu \sim \frac{\lambda_{QL} \, c_{\bar d L} \, m_d \, v_A }{16\sqrt{2} \pi^2 \Lambda } \log\left( \frac{\Lambda}{m_{\phi_A}} \right) \approx 0.1 \, {\rm eV} \, \left( \frac{ \lambda_{QL} \, c_{\bar d L} }{ 10^{-7} } \right) \left( \frac{ 5 \, { \rm TeV} }{ \Lambda } \right).$$ Here we have fixed $m_{\phi_A} = 1 \, {\rm TeV}$ and used the bottom mass for $m_d$, which leads to the strongest constraint. Quark and lepton FCNC --------------------- The interactions of the colored scalars with matter in Table \[tab:scalar-representations\] can also lead to new tree level or radiative flavor changing neutral currents (FCNCs) in the quark and lepton sectors. A variety of rare FCNC processes are possible, many of which impose strong constraints on the new scalar-fermion couplings. For instance, sextet and octet models can mediate new tree level contributions to $\Delta F = 2$ transitions in the kaon system.
Taking $\Phi \sim ({\bf 6}, {\bf 1},-\tfrac{2}{3})$ as an example, we write the interaction $${\cal L} \supset \frac{1}{2} \, \lambda_{\bar d \bar d} \, \phi_A \, \bar d_A \, \bar d_A \, +{\rm H.c.}~.$$ If the diagonal couplings $\lambda^{11}_{\bar d \bar d}$ and $\lambda^{22}_{\bar d \bar d}$ are nonvanishing, then tree level sextet scalar exchange generates the effective interaction $${\cal L} \supset C_{V,RR}^{sd}\, (\bar s_A \gamma^\mu P_R d_A) (\bar s_A \gamma_\mu P_R d_A) +{\rm H.c.}~, \label{eq:Kaon-O-VRR}$$ with Wilson coefficient $$C_{V,RR}^{sd} = \frac{ \lambda^{11}_{\bar d \bar d} ~ \lambda^{22^*}_{\bar d \bar d} }{8 m_{\phi_A}^2} \approx \left(\frac{1}{10^4 \,{\rm TeV}}\right)^2 \left(\frac{\rm TeV}{m_{\phi_A}}\right)^2 \left( \frac{ \lambda^{11}_{\bar d \bar d} ~ \lambda^{22^*}_{\bar d \bar d} }{10^{-7}} \right). \label{eq:Kaon-O-VRR-2}$$ Current constraints on such operators probe new physics scales of order $10^4$ TeV [@Bona:2007vi], which, noting Eq. (\[eq:Kaon-O-VRR-2\]), limits the typical size of these couplings to be at the level of $10^{-3}$ or smaller. Octet scalars, $\Phi \sim ({\bf 8}, {\bf 1}, 0 )$, can also induce neutral meson mixing at tree level. After electroweak symmetry breaking, the scalar-quark coupling is $$\begin{aligned} - {\cal L} & \supset \frac{c_{Q \bar d}\, v_A }{\sqrt{2} \Lambda }\, \phi_A \, d_A \, \bar d_A + {\rm H.c.}~. \label{eq:L-octet-0-A}\end{aligned}$$ If, for instance, $c_{Q\bar d}^{12}$ is nonzero, exchange of $\phi_A$ generates the effective interaction $${\cal L} \supset C_{S,LL}^{sd}\, (\bar s_A^{\,i} P_L d_{A j}) (\bar s_A^{\,j} P_L d_{A i}) +{\rm H.c.}~,$$ where $i,j$ denote color indices.
The Wilson coefficient is given by $$C_{S,LL}^{sd} = \frac{ \, (c^{12}_{Q \bar d}\,)^2 \, v_A^2 }{8 m_{\phi_A}^2 \Lambda^2} \approx \left(\frac{1}{10^4 \,{\rm TeV}}\right)^2 \left(\frac{\rm TeV}{m_{\phi_A}}\right)^2 \left(\frac{5 \, \rm TeV}{\Lambda}\right)^2 \left( \frac{ c^{12}_{Q \bar d} }{6 \times 10^{-3}} \right)^2.$$ While color triplet scalars do not mediate tree level $\Delta F = 2$ transitions, sizable loop contributions to these operators can arise. As an example consider $\Phi \sim ({\bf 3}, {\bf 1}, -\tfrac{1}{3} )$ with interaction $$-{\cal L} = \lambda_{\bar u \bar d} \, \phi_A^\dag \, \bar u_A \, \bar d_A +{\rm H.c.}~.$$ There are two types of one-loop box diagrams that generate contributions to kaon mixing [@Barbieri:1985ty; @Slavich:2000xm]. The first involves the exchange of two colored scalars and leads to the effective Lagrangian (\[eq:Kaon-O-VRR\]). In the limit $m_{\phi_A} \gg m_t$, the Wilson coefficient is $$-C_{V,RR}^{sd} = \frac{ \left( \sum_{I} \lambda_{\bar u \bar d}^{I2} \lambda_{\bar u \bar d}^{I1*} \right)^2}{64 \pi^2 m_{\phi_A}^2} \approx \left(\frac{1}{10^4 \,{\rm TeV}}\right)^2 \left(\frac{\rm TeV}{m_{\phi_A}}\right)^2 \left( \frac{ \sum_{I} \lambda_{\bar u \bar d}^{I2} \lambda_{\bar u \bar d}^{I1*} }{3 \times 10^{-3}} \right)^2.$$ The second type of diagram involves the exchange of one $W$ boson and one colored scalar, leading to the effective Lagrangian $${\cal L} \supset C_{S,RL}^{sd}\, \left[ (\bar s_A^{\,i} \, P_R \, d_{A i}) (\bar s_A^{\,j} \, P_L \, d_{A j}) - (\bar s_A^{\,i} \, P_R \, d_{A j}) (\bar s_A^{\,j} \, P_L \, d_{A i})\right] +{\rm H.c.}~.$$ For anarchic couplings $\lambda_{\bar u \bar d}$ and heavy scalar mass $m_{\phi_A} \gg m_t$, the leading contribution is $$C_{S,RL}^{sd} = \frac{ G_F }{ 8 \sqrt{2} \pi^2 } V_{td} V_{ts}^* \, \lambda_{\bar u \bar d}^{32} \, \lambda_{\bar u \bar d}^{31*} \, \frac{ m_t^2 }{m_\phi^2} \, \log \left( \frac{ m_\phi^2 }{m_W^2}\right) \approx \left(\frac{1}{10^4 \,{\rm TeV}}\right)^2
\left(\frac{\rm TeV}{m_{\phi_A}}\right)^2 \left( \frac{ \lambda_{\bar u \bar d}^{32}\, \lambda_{\bar u \bar d}^{31*} }{2 \times 10^{-3}} \right).$$ Thus, the typical constraints on the couplings in this case are at the $10^{-2}$–$10^{-1}$ level. Color triplets can also facilitate lepton flavor violation, such as the decay $\mu \rightarrow e \gamma$. If $\Phi \sim ({\bf 3}, {\bf 1}, -\tfrac{1}{3})$, for example, the relevant interaction is the coupling $\lambda_{QL}$ of Eq. (\[eq:L-BL1\]), $$-{\cal L} \supset \lambda_{Q L} \, \Phi^\dag_A ( Q_A L_A ) +{\rm H.c.}~.$$ The $\mu \rightarrow e \gamma$ branching ratio is found to be $$\begin{aligned} {\rm Br}(\mu \rightarrow e \gamma) & = & \tau_\mu \frac{\alpha \, | \sum_I \lambda_{Q L}^{I1*} \lambda_{Q L}^{I2} |^2 \,m_\mu^5}{2^{14} \, \pi^4 \, m_{\phi}^4}\nonumber \\ & \simeq & 4 \times 10^{-13} \left(\frac{1 \, \rm TeV}{m_\phi} \right)^4 \left( \frac{ | \sum_I \lambda_{Q L}^{I1*} \lambda_{Q L}^{I2} |^2 }{2 \times 10^{-6}} \right), \end{aligned}$$ where $\tau_\mu \simeq 2.2 \times 10^{-6}$ s is the muon lifetime. The MEG experiment has placed a 90$\%$ CL upper bound on the branching ratio, ${\rm Br}(\mu \rightarrow e \gamma)_{\rm MEG} < 4.2 \times 10^{-13}$ [@TheMEG:2016wtm]. So, for a colored triplet with mass of order 1 TeV, the couplings are typically constrained to be smaller than about 0.03. Electric dipole moments ----------------------- When multiple scalar-fermion couplings are present in the theory, new physical complex phases can appear. These can source new flavor-diagonal CP violation in the form of fermion electric dipole moments (EDMs).
To illustrate, we investigate the contribution to the electron electric dipole moment from a triplet $\Phi \sim ({\bf 3}, {\bf 1}, -\tfrac{1}{3})$ with interactions $$-{\cal L} \supset \lambda_{QL} \Phi^\dag_A \, (Q_A L_A ) + \lambda_{\bar u \bar e} \, \Phi_A \, \bar u_A \, \bar e_A +{\rm H.c.}~.$$ Exchange of up-type quarks leads to an electron EDM at one loop, described by the effective Lagrangian $${\cal L} \supset - \frac{i}{2} \, d_e \, \bar e_A \, \sigma_{\mu\nu} \gamma^5 e_A \, F_A^{\mu\nu}.$$ In the case of flavor anarchic couplings, the top loop dominates and leads to the prediction $$d_e \simeq \frac{e \, m_t }{32 \pi^2 m_\phi^2} \left[ 7 + 4 \log \left(\frac{m_t^2}{m_\phi^2}\right)\right] {\rm Im}[ \lambda_{QL}^{31} \, \lambda_{\bar u \bar e}^{31}] \approx 10^{-29} \, e \, {\rm cm} \left( \frac{1 \, {\rm TeV}}{m_{\phi_A}} \right)^2 \left( \frac{ {\rm Im}[ \lambda_{QL}^{31} \, \lambda_{\bar u \bar e}^{31}] }{10^{-10}} \right).$$ The best constraint on the electron EDM comes from the ACME collaboration: $|d_e| < 1.1 \times 10^{-29} e$ cm [@Andreev:2018ayy]. We see that for generic complex phases the constraints on the couplings are quite severe for this scenario. We expect that the neutron EDM can also provide a promising probe of certain combinations of couplings. Charged current processes ------------------------- The new interactions of fermions with colored scalars can also lead to new charged current processes.
To illustrate, we consider here the decays of charged pions that occur for $\Phi \sim ({\bf 3}, {\bf 1}, -\tfrac{1}{3})$ with interaction $$\begin{aligned} {\cal L}& \supset & \lambda_{QL} \, \Phi_A^\dag \, (Q_A \, L_A) +{\rm H.c.} \end{aligned}$$ A nonvanishing $(\lambda_{QL})_{11}$ or $(\lambda_{QL})_{12}$ leads to a modification of the lepton universality ratio, $$R_\pi \equiv \frac{\Gamma(\pi^- \rightarrow e^- \bar \nu_e)}{\Gamma(\pi^- \rightarrow \mu^- \bar \nu_\mu)} \simeq R_\pi^{\rm SM}\left( 1+ \frac{ |\lambda_{QL}^{11}|^2- |\lambda_{QL}^{12}|^2}{2\, \sqrt{2} \, G_F \, |V_{ud}| \, m_{\phi_A}^2} \right). \label{eq:Rpi}$$ We have neglected the effects of decays such as $\pi^- \rightarrow e^- \bar \nu_\mu$, etc., which do not interfere with the SM weak contribution, retaining only the dominant coherent contributions. The SM prediction [@Bryman:2011zz] and measured value [@Aguilar-Arevalo:2015cdf] are $$R_\pi^{\rm SM} = 1.2352(2) \times 10^{-4}, ~~~~~ R_\pi^{\rm exp} = 1.2344(30) \times 10^{-4},$$ where the experimental uncertainty dominates the theoretical uncertainty. We apply a $2\sigma$ C.L. bound by demanding the new physics correction in Eq. (\[eq:Rpi\]) is less than twice the experimental uncertainty. This leads to the constraint $$\sqrt{|\lambda_{QL}^{11}|^2 - |\lambda_{QL}^{12}|^2} < 0.4 \, \left( \frac{m_{\phi_A}}{1\, \rm TeV}\right)~.$$ In addition to pion decays, such couplings may be probed in hadronic tau decays as well as in tests of charged current universality in the quark sector. Discussion ---------- Evidently, interactions between the colored scalar and matter can manifest in a host of precision tests. The exact ${\ensuremath{\mathbb{Z}_2}}$ symmetry in our scenario ties any constraints coming from these measurements to the possible form and maximum size of the new twin fermion mass terms generated by those couplings (see Sec. \[sec:scalar-matter-couplings\]).
We have seen that some of these constraints can be quite stringent (e.g., from baryon number violation or FCNCs), although it is clear that they hinge, in many cases, on a particular coupling combination or flavor structure. Though it is beyond our scope, it would be interesting to explore more broadly how the various patterns of new twin fermion mass terms arising from twin gauge symmetry breaking intersect with experimental constraints. Collider phenomenology {#sec:collider} ====================== Direct searches for colored scalars ----------------------------------- The colored scalar field $\phi_A$ in the visible sector can naturally have a mass near the TeV scale and could therefore be produced in large numbers at hadron colliders like the LHC. We concentrate on pair production, $p\, p \rightarrow \phi_A \, \phi_A^*$, since, as an inevitable consequence of the strong interaction, it provides the most robust probe of the colored scalars. There can also be single $\phi_A$ production channels provided the scalar-fermion couplings discussed in Sec. \[sec:scalar-matter-couplings\] are sizeable, e.g., $q q' \rightarrow \phi_A$, $q g \rightarrow \phi_A \ell$, etc., but we focus on the various signatures expected from colored scalar pair production. - [ *Squark Searches*]{}:    Color triplet scalars with quantum numbers $({\bf 3}, {\bf 1}, -\tfrac{1}{3})$, $({\bf 3}, {\bf 1}, \tfrac{2}{3})$ can decay to any quark flavor and a neutrino, $\phi_A \rightarrow q\, \nu$. The resulting collider signatures are identical to those of squark pair production in the Minimal Supersymmetric Standard Model, in which the squark decays to a quark and a massless stable neutralino. Therefore, searches for first and second generation squarks, sbottoms, and stops can be directly applied to these scenarios.
A CMS search based on 137 ${\rm fb}^{-1}$ at $\sqrt{s} = 13$ TeV rules out a single squark decaying to a light jet and massless neutralino for squark masses below about 1.2 TeV [@Sirunyan:2019ctn], while comparable limits have been obtained by ATLAS [@ATLAS-CONF-2019-040]. Final states containing a bottom or top quark along with a neutrino resemble sbottom or stop searches, which constrain the triplet scalars to be heavier than about 1.2 TeV [@Sirunyan:2019ctn; @ATLAS-CONF-2020-003]. The HL-LHC and, especially, a future 100 TeV hadron collider will be able to significantly extend the mass reach for such scalars. Taking stops as an example, the HL-LHC (3 ${\rm ab}^{-1}$, $\sqrt{s} = 14$ TeV) will be able to constrain scalar masses up to about 1.6 TeV [@CidVidal:2018eel], while a future 100 TeV collider can probe scalars as heavy as 10 TeV [@Benedikt:2018csr]. - [*Leptoquark Searches*]{}:    The color triplet models may also feature ‘leptoquark’ signals if the scalar decays to a quark and a charged lepton. A number of searches have targeted various leptoquark signals, depending on the flavor of the quark and charged lepton in the decay. Searches for first- and second-generation leptoquarks focus on the signature $\ell \ell j j $, with $\ell$ being an electron or muon. The best limits to date exclude scalar masses in the 1.4–1.6 TeV range and below [@Aaboud:2019jcc; @Sirunyan:2018btu; @Sirunyan:2018ryt]. The scalar may also have a significant branching ratio into a light jet and a neutrino. To cover these scenarios experiments have searched for the $\ell \nu j j $ final state, though these tend to give somewhat weaker constraints in comparison to the $\ell \ell j j $ channel. In the future, the HL-LHC will be able to probe first and second generation leptoquarks in the 2–3 TeV range, while a future 100 TeV hadron collider will be able to extend the reach to the 10 TeV range and beyond; see, e.g., Ref.
[@Allanach:2019zfr] for a phenomenological study of the prospects in the $\mu\mu jj$ channel. Various searches for third generation leptoquarks exist in which the scalar decays involve one or more of $\tau, b, t$. For example, scalars decaying to $t \tau$ ($b \tau$) are constrained to be heavier than about 900 GeV (1 TeV) by ATLAS and CMS searches [@Aaboud:2019bye; @Sirunyan:2018vhk; @Sirunyan:2018nkj]. There is also a CMS search in the $t \mu$ channel that constrains scalar masses below 1.4 TeV [@Sirunyan:2018ruf]. Bounds on scalar leptoquarks decaying to $t e$ have been obtained from a recast of a CMS SUSY multilepton analysis [@Diaz:2017lit; @CMS:2017iir] and probe scalar masses below about 900 GeV. Finally, ATLAS searches [@Aaboud:2017opj] for scalar leptoquarks decaying to $b e$ and $b \mu$ place mass limits in the 1.5 TeV range. See Refs. [@Diaz:2017lit; @Schmaltz:2018nls] for a comprehensive guide to leptoquark searches. - [ *Diquark searches*]{}:    Colored triplets, sextets, and octets may also decay to pairs of quarks or quark-antiquark pairs, $\phi_A \rightarrow qq$ or $\phi_A \rightarrow q \bar q$. Pair produced colored scalars then form four-quark final states. Both ATLAS [@Aaboud:2017nmi] and CMS [@Sirunyan:2018rlj] have searched for such paired dijet resonances using a portion of the Run 2 dataset, and constrain color triplet scalars below about 500 GeV (600 GeV) when the scalar decays to light jets (one bottom jet and one light jet). The ATLAS study also interprets their result in the context of color octet scalars decaying to a pair of jets, limiting octet scalars below about 800 GeV. Because the pair production cross section for sextet scalars is comparable to that of octets [@Chen:2008hh; @GoncalvesNetto:2012nt; @Degrande:2014sta], we expect similar limits for sextets decaying to pairs of light jets. In the long term, we expect the full HL-LHC dataset to improve the mass reach by a factor of two or more.
Decays to $t \bar t$ are another interesting channel, though the collaborations have not yet undertaken dedicated studies for pair produced scalars decaying to top-quarks. However, a recast of a CMS analysis of SM four top production has been performed [@Darme:2018dvz] and constrains color octets with masses below about 1 TeV. By scaling up to the full HL-LHC 3${\rm ab}^{-1}$ dataset at $\sqrt{s}=14$ TeV this limit can be extended to octet masses of about 1.3 TeV [@Azzi:2019yne]. - [*Long-lived particle signatures*]{}:   The signatures discussed above assume prompt scalar decays. However, if the couplings of the scalar to fermions discussed in Sec. \[sec:scalar-matter-couplings\] are suppressed, the scalar may be long-lived on collider scales. A variety of potential signatures exist in this case, many of which are quite striking and have small SM backgrounds. Examples include heavy stable R-hadrons, displaced vertices, and kinked tracks. There is an active program at the LHC to search for signatures of this kind, and we refer the reader to the recent review articles [@Lee:2018pag; @Alimena:2019zri] for an in-depth survey. Higgs coupling modifications ---------------------------- A coupling between the colored scalar and the Higgs fields is an essential ingredient in our scenario. This coupling allows for viable electroweak vacuum alignment, following spontaneous ${\ensuremath{\mathbb{Z}_2}}$ breaking by the $\Phi_B$ VEV. Consequently, the physical Higgs scalar and the colored scalars are coupled, $V \supset A_{h \phi_A^\dag \phi_A } h \, |\phi_A|^2$, where $A_{h \phi_A^\dag \phi_A }$ is given in Eq. (\[eq:triplet-cubic-scalar\]). Through this coupling the new colored, charged scalars generate one loop contributions to the $h\gamma \gamma$ and $h gg$ effective couplings, which can modify the decay of the Higgs to two photons or the production of the Higgs in gluon fusion. These modifications can be expressed in terms of modifications of the Higgs partial widths.
Assuming $2 m_\phi \gg m_h$, we find (see e.g., Ref. [@Batell:2011pz]): $$\begin{aligned} \frac{\Gamma(h\rightarrow \gamma \gamma)}{\Gamma(h\rightarrow \gamma \gamma)_{\rm SM}} & \simeq & \bigg\vert \cos\vartheta - c_\Phi \, d_\Phi \, Y_\Phi^2 \, \frac{A_{h \phi_A \phi_A^*} \, v_A }{6\, m_\phi^2\, A_{\gamma \gamma}^{\rm SM}} \bigg\vert^2, \\ \frac{\Gamma(h\rightarrow gg)}{\Gamma(h\rightarrow gg)_{\rm SM}} & \simeq & \bigg\vert \cos\vartheta + c_\Phi \,T_\Phi \, \frac{A_{h \phi_A \phi_A^*} \, v_A }{3 \, m_\phi^2 \, A_{gg}^{\rm SM}} \bigg\vert^2,\end{aligned}$$ where $A_{\gamma\gamma}^{\rm SM}\approx 6.5$, $A_{gg}^{\rm SM}\approx 1.4$, $d_\Phi$ is the dimension of the scalar representation, $T_\Phi$ is its Dynkin index, and $c_\Phi = 1$ $(\tfrac{1}{2})$ for complex (real) scalars. The LHC has measured the $h\gamma \gamma$ and $h gg$ couplings with 10% precision [@Aad:2019mbh; @Sirunyan:2018koj]. For $\sin \vartheta \lesssim 1/3$, we find that current measurements can only probe relatively light scalars and low symmetry breaking scales $f_\Phi$, typically below about 300 GeV (500 GeV) for color triplet (sextet and octet) scalars. In most cases direct searches for pair produced colored scalars yield stronger limits. However, as these searches depend on the assumed decay mode, Higgs coupling measurements still offer a complementary test of light colored and charged scalars. Looking forward, the Higgs coupling measurements at the HL-LHC and at future colliders may be able to achieve percent level precision, probing smaller values of $\sin\vartheta$ and/or heavy colored scalar masses. The radial modes of the color symmetry breaking will also have a small effect on the Higgs couplings, but as shown for the analogous hypercharge case the effect is typically negligible [@Batell:2019ptb].
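To get a feel for the size of these modifications, the sketch below evaluates the two width ratios for a complex color triplet ($c_\Phi = 1$, $d_\Phi = 3$, $T_\Phi = 1/2$, $Y_\Phi = -1/3$) with $\sin\vartheta = 1/3$. The numerical value assumed for the loop-strength combination $A_{h\phi_A\phi_A^*} v_A/m_\phi^2$ is ours, chosen purely for illustration, not a value fixed by the text.

```python
import math

# Representation data for a complex color triplet (illustrative example)
c_Phi, d_Phi, T_Phi, Y_Phi = 1.0, 3.0, 0.5, -1.0 / 3.0
A_aa_SM, A_gg_SM = 6.5, 1.4              # SM loop amplitudes quoted in the text
cos_theta = math.sqrt(1 - (1 / 3) ** 2)  # sin(theta) = 1/3

# Assumed strength of the scalar loop: x = A_{h phi phi*} v_A / m_phi^2
x = 0.1

ratio_aa = abs(cos_theta - c_Phi * d_Phi * Y_Phi ** 2 * x / (6 * A_aa_SM)) ** 2
ratio_gg = abs(cos_theta + c_Phi * T_Phi * x / (3 * A_gg_SM)) ** 2
print(f"Gamma(h->aa)/SM ~ {ratio_aa:.3f}, Gamma(h->gg)/SM ~ {ratio_gg:.3f}")
```

For this input both ratios are dominated by the universal $\cos^2\vartheta \approx 0.89$ suppression, with the scalar loop shifting them at the percent level or below, consistent with the statement that current $\sim 10\%$ measurements probe only relatively light scalars.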
Outlook {#sec:outlook} ======= The Mirror Twin Higgs provides an elegant symmetry-based understanding of the apparent little hierarchy between the EW scale and the dynamics at the 5–10 TeV scale posited to address the big hierarchy problem. Arguments related to vacuum alignment and cosmology suggest that the mirror symmetry protecting the light Higgs must be broken, and an attractive possibility is that this ${\ensuremath{\mathbb{Z}_2}}$ breaking is spontaneous in nature. In this work, we have investigated the simultaneous spontaneous breakdown of the twin color gauge symmetry and ${\ensuremath{\mathbb{Z}_2}}$. Remarkably, despite being related by an exact mirror symmetry in the UV, vast differences between the two sectors are exhibited in the low energy effective theory below the TeV scale as a result of spontaneous symmetry breaking. These differences manifest in the residual unbroken gauge symmetries, color confinement scale, and particle spectrum. The richness of these effects is tied to the variety of possible colored scalar representations and associated symmetry breaking patterns. We have outlined five minimal possibilities for models with a single color triplet, sextet, or octet, and explored how the twin sector departs from the mirror expectation. In particular, we have shown how new dynamical mass terms may be generated for the twin fermions. These effects are tied by the discrete ${\ensuremath{\mathbb{Z}_2}}$ symmetry to precision tests in the visible sector, allowing additional handles on uncovering the twin structure without direct access to many of the states. Furthermore, the new colored states may be probed at the LHC and at future high energy colliders. There are a number of open questions worthy of further consideration. Seeing as departures from MTH scenarios are often motivated by cosmology, it would be very interesting to examine the possible cosmological histories within our models.
For instance, the addition of a new colored field could play a role in baryogenesis. Moreover, the twin baryons and other bound states of the various residual color symmetries may provide interesting dark matter candidates or manifest as a new form of dark radiation. In many cases these dark sectors may exhibit novel gauge interactions, including new long range forces and/or very low confinement scales. Another direction concerns the possible UV completions of our models. In particular, we expect that the new colored scalars utilized in this work may find a natural home in supersymmetric completions as a superpartner of a quark, or in composite Higgs models as a colored pNGB. B.B. and W. H. are supported by the U.S. Department of Energy under grant No. DE-SC0007914. C.B.V. is supported in part by NSF Grant No. PHY-1915005 and in part by Simons Investigator Award \#376204. Nonlinear realizations\[a.RadPhi\] ================================== In this appendix we provide some details pertaining to the nonlinear parameterizations and scalar potential analyses for the sextet and octet models. The analysis closely follows that of the scalar triplet in Sec. \[sec:nonlinear\]. In each case we use Eq. (\[eq:NLP-Higgs\]) for the Higgs fields and provide the unitary gauge nonlinear parameterization of the colored scalar fields. 
Color Sextet ------------ Including the Higgs fields, the ${\ensuremath{\mathbb{Z}_2}}$ symmetric scalar potential is given by $$\begin{aligned} V & = - M_H^2 \, |H|^2 + \lambda_H \, |H|^4 - M_\Phi^2 \, |\Phi|^2 + \lambda_\Phi \, |\Phi|^4 + \lambda_{H\Phi} \, |H|^2 \, |\Phi|^2 \nonumber \\ & + \delta_H \, \left(\, |H_A|^4 + |H_B|^4 \,\right) + \delta_{\Phi 1} \left[ ( {\mbox{Tr}}\, \Phi_A^\dag \Phi_A)^2 + ( {\mbox{Tr}}\, \Phi_B^\dag \Phi_B)^2 \right] \label{eq:H-sextet-potential} \\ &+ \delta_{\Phi 2} \left( {\mbox{Tr}}\, \Phi_A^\dag \Phi_A \Phi_A^\dag \Phi_A + {\mbox{Tr}}\, \Phi_B^\dag \Phi_B \Phi_B^\dag \Phi_B \right) + \delta_{H\Phi} \,\left( |H_A|^2 - |H_B|^2 \right) \left( {\mbox{Tr}}\, \Phi_A^\dag \Phi_A - {\mbox{Tr}}\, \Phi_B^\dag \Phi_B \right), \nonumber\end{aligned}$$ where $|H|^2 = H_A^\dag H_A + H_B^\dag H_B$ and $|\Phi|^2 = {\mbox{Tr}}\, \Phi_A^\dag \Phi_A +{\mbox{Tr}}\, \Phi_B^\dag \Phi_B$. As shown in Sec. \[sec:sextets-isolated\] there are two symmetry breaking patterns to consider: ### $[SU(3)_c \rightarrow SU(2)_c]_B$ In this case, the colored scalar fields can be parameterized in unitary gauge as $$\Phi_A = \phi_A \frac{ \sin{(\hat \phi/f_\Phi)}}{ \hat \phi/f_\Phi }, ~~~~~~~~~~ \Phi_B = \left( \renewcommand*{{1.1}}{0.3} \begin{array}{c|c} -i \sigma^2 \phi_B \displaystyle{\frac{ \sin{(\hat \phi/f_\Phi)}}{ \hat \phi/f_\Phi }} & 0 \\ \\ \hline \\ 0& f_\Phi \cos{(\hat \phi/f_\Phi)} \\ \end{array} \right), ~~~ \label{eq:NLP-sextet-1}$$ where $\phi_A$ is a complex sextet of $[SU(3)_c]_A$, $\phi_B$ is a complex triplet under $[SU(2)_c]_B$, and $\hat \phi^2 \equiv {\mbox{Tr}}\, \phi_A^\dag \phi_A +{\mbox{Tr}}\, \phi_B^\dag \phi_B$. The sextet is represented as a symmetric tensor, $(\phi_A)_{ij}$ with $i, j = 1,2,3$, and the complex triplet can be represented as $\phi_B = \phi_B^\alpha \tau^\alpha$, with complex components $\phi_B^\alpha$, $\alpha = 1,2,3$. Inserting Eqs. (\[eq:NLP-Higgs\]) and (\[eq:NLP-sextet-1\]) into Eq. 
(\[eq:H-sextet-potential\]) yields the potential for the pNGB fields. Minimizing this potential leads to the same condition defining the vacuum angle as was found for the triplet scalar, Eq. (\[eq:EWvaccum-triplet-1\]), as well as the same expression for the physical Higgs boson mass, Eq. (\[eq:Higgsmass-triplet-1\]). Furthermore, we find the following expressions for the masses of the physical colored scalar fields: $$\begin{aligned} m_{\phi_A}^2 & = 2 \left( -\delta_{\Phi 1} -\delta_{\Phi 2} + \frac{\delta_{H\Phi}^2}{\delta_H} \right) f_\Phi^2, \label{eq:phiAmass-sextet-1} \\ m_{\phi_B}^2 & = - 2 \, \delta_{\Phi 2} \, f_\Phi^2. \label{eq:phiBmass-sextet-1}\end{aligned}$$ The same expression for the cubic scalar coupling $V \supset A_{h \phi_A^\dag \phi_A } h \, {\mbox{Tr}}\, \phi_A^\dag \phi_A$, as in Eq. (\[eq:triplet-cubic-scalar\]), is also obtained. ### $[SU(3)_c \rightarrow SO(3)_c]_B$ In this case, the colored scalar fields can be parameterized in unitary gauge as $$\Phi_A = \phi_A \frac{ \sin ( \hat \phi/ f_\Phi ) }{ \hat \phi/ f_\Phi }, ~~~~~~~~~~ \Phi_B = \frac{f_\Phi}{\sqrt{3}} \cos ( \hat \phi/ f_\Phi ) \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right) + \phi_B \frac{ \sin{ ( \hat \phi / f_\Phi )}}{ \hat \phi/f_\Phi}, ~~~~ \label{eq:NLP-sextet-2}$$ where $\phi_A$ is a complex sextet of $[SU(3)_c]_A$, $\phi_B$ is a real quintuplet under $[SO(3)_c]_B$, and $\hat \phi^2 \equiv {\mbox{Tr}}\, \phi_A^\dag \phi_A +{\mbox{Tr}}\, \phi_B^2$. In particular, we represent the sextet as a symmetric tensor, $(\phi_A)_{ij}$ with $i, j = 1,2,3$, and the real quintuplet as $\phi_B = \phi_B^{\bar a} T^{\bar a}$, with real components $\phi_B^{\bar a}$ and index $\bar a = 1,3,4,6,8$ running over the broken generators. By inserting Eqs. (\[eq:NLP-Higgs\]) and (\[eq:NLP-sextet-2\]) into Eq. (\[eq:H-sextet-potential\]) we can derive the potential for the pNGB scalars. 
Minimizing this potential leads to the same condition defining the vacuum angle as was found for the triplet scalar, Eq. (\[eq:EWvaccum-triplet-1\]), as well as the same expression for the physical Higgs boson mass, Eq. (\[eq:Higgsmass-triplet-1\]). Furthermore, we find the following expressions for the masses of the physical colored scalar fields: $$\begin{aligned} m_{\phi_A}^2 & = 2 \left( - \delta_{\Phi 1} -\frac{\delta_{\Phi 2}}{3} + \frac{\delta_{H\Phi}^2}{\delta_H} \right) f_\Phi^2, \label{eq:phiAmass-sextet-2} \\ m_{\phi_B}^2 & = \frac{4}{3} \, \delta_{\Phi 2} \, f_\Phi^2. \label{eq:phiBmass-sextet-2}\end{aligned}$$ We also obtain the same expressions for the cubic scalar coupling $V \supset A_{h \phi_A^\dag \phi_A } h \, {\mbox{Tr}}\, \phi_A^\dag \phi_A$, as in Eq. (\[eq:triplet-cubic-scalar\]). Color Octet ----------- Including the Higgs fields, we will consider the following ${\ensuremath{\mathbb{Z}_2}}$ symmetric scalar potential: $$\begin{aligned} V & = - M_H^2 \, |H|^2 + \lambda_H \, |H|^4 - M_\Phi^2 \, |\Phi|^2 + \lambda_\Phi \, |\Phi|^4 + \lambda_{H\Phi} \, |H|^2 \, |\Phi|^2 \nonumber \\ & + \delta_H \left(\, |H_A|^4 + |H_B|^4 \,\right) + \delta_{\Phi} \left[ ( {\mbox{Tr}}\, \Phi_A^2 )^2 + ( {\mbox{Tr}}\, \Phi_B^2)^2 \right] \nonumber \\ & + \delta_{H\Phi} \,\left( |H_A|^2 - |H_B|^2 \right) \left( {\mbox{Tr}}\, \Phi_A^2 - {\mbox{Tr}}\, \Phi_B^2 \right) + V_3 + V_6~, \label{eq:H-octet-potential} \end{aligned}$$ where $|H|^2 = H_A^\dag H_A + H_B^\dag H_B$ and $|\Phi|^2 = {\mbox{Tr}}\, \Phi_A^2 +{\mbox{Tr}}\, \Phi_B^2$. We have included the possibility of a cubic interaction and higher dimension operators, $$\begin{aligned} \label{eq:octet-cubic} V_3 & = A\, ( {\mbox{Tr}}\, \Phi_A^3 + {\mbox{Tr}}\, \Phi_B^3), \\ \label{eq:octet-dimsix} V_6 & = \frac{c}{\Lambda^2} \,( {\mbox{Tr}}\, \Phi_A^6 + {\mbox{Tr}}\, \Phi_B^6 ).\end{aligned}$$ As discussed in Sec. 
\[sec:octets-isolated\], the inclusion of such terms leads to a unique ground state in which the residual unbroken twin color gauge symmetry is either $[SU(2)_c\times U(1)_c]_B$ or $[U(1)_c\times U(1)'_c]_B$. We discuss each case in turn. ### $[SU(3)_c \rightarrow SU(2)_c\times U(1)_c]_B$ In this case, the color octet can be parameterized in unitary gauge as $$\Phi_A = \phi_A \frac{ \sin{(\hat \phi/f_\Phi)}}{ \hat \phi/f_\Phi }, ~~~~~~~~~~ \Phi_B = \sqrt{2}\, f_\Phi \cos{(\hat \phi/f_\Phi)} T^8 + \left( \renewcommand*{{1.1}}{0.3} \begin{array}{c|c} \phi_B \displaystyle{\frac{ \sin{(\hat \phi/f_\Phi)}}{ \hat \phi/f_\Phi }} & 0 \\ \\ \hline \\ 0& 0 \\ \end{array} \right), ~~~ \label{eq:NLP-octet-1}$$ where $\phi_A$ is a real octet of $[SU(3)_c]_A$, $\phi_B$ is a real triplet under $[SU(2)_c]_B$, and $\hat \phi^2 \equiv {\mbox{Tr}}\, \phi_A^2 +{\mbox{Tr}}\, \phi_B^2$. We represent the octet as $\phi_A = \phi_A^a T^a$ with $a = 1,2, \dots 8$ and the triplet as $\phi_B= \phi_B^\alpha \tau^\alpha$ with $\alpha = 1,2,3$. All components $\phi_A^a$, $\phi_B^\alpha$ are real scalars. Inserting Eqs. (\[eq:NLP-Higgs\]) and (\[eq:NLP-octet-1\]) into Eq. (\[eq:H-octet-potential\]) including the cubic term $V_3$ (\[eq:octet-cubic\]), we can derive the potential for the pNGB scalars. Minimizing this potential leads to the same condition defining the vacuum angle as was found for the triplet scalar, Eq. (\[eq:EWvaccum-triplet-1\]), as well as the same expression for the physical Higgs boson mass, Eq. (\[eq:Higgsmass-triplet-1\]). 
Furthermore, we find the following expressions for the masses of the physical colored scalar fields: $$\begin{aligned} m_{\phi_A}^2 & = \left( -2 \,\delta_{\Phi } + \sqrt{\frac{3}{8}} \frac{A}{f_\Phi} + \frac{2\,\delta_{H\Phi}^2}{\delta_H} \right) f_\Phi^2, \label{eq:phiAmass-octet-1} \\ m_{\phi_B}^2 & = \sqrt{\frac{27}{8}} A \, f_\Phi, \label{eq:phiBmass-octet-1}\end{aligned}$$ We also obtain the same expressions for the cubic scalar coupling $V \supset A_{h \phi_A^\dag \phi_A } h \, {\mbox{Tr}}\, \phi_A^\dag \phi_A$, as in Eq. (\[eq:triplet-cubic-scalar\]). For completeness we note that a cubic coupling ${\mbox{Tr}}\, \phi_A^3$ is present in this case, with coupling constant equal to $A$. ### $[SU(3)_c \rightarrow U(1)_c\times U(1)'_c]_B$ In this case we can parameterize the fields as $$\Phi_A = \phi_A \frac{ \sin{(\hat \phi/f_\Phi)}}{ \hat \phi/f_\Phi }, ~~~~~~~~~~ \Phi_B = \sqrt{2} \, f_\Phi \cos{(\hat \phi/f_\Phi)} \, T^3 + \phi_B \displaystyle{\frac{ \sin{(\hat \phi/f_\Phi)}}{ \hat \phi/f_\Phi }} \, T^8, \label{eq:NLP-octet-2}$$ Inserting Eqs. (\[eq:NLP-Higgs\]) and (\[eq:NLP-octet-2\]) into Eq. (\[eq:H-octet-potential\]) including the dimension-six operator $V_6$ (\[eq:octet-dimsix\]), we can derive the potential for the pNGB scalars. Minimizing this potential leads to the same condition defining the vacuum angle as was found for the triplet scalar, Eq. (\[eq:EWvaccum-triplet-1\]), as well as the same expression for the physical Higgs boson mass, Eq. (\[eq:Higgsmass-triplet-1\]). 
Furthermore, we find the following expressions for the masses of the physical colored scalar fields: $$\begin{aligned} m_{\phi_A}^2 & = \left( -2 \, \delta_{\Phi } - \frac{3 }{4}\frac{c\, f_\Phi^2}{\Lambda^2} + \frac{2\, \delta_{H\Phi}^2}{\delta_H} \right) f_\Phi^2, \label{eq:phiAmass-octet-2} \\ m_{\phi_B}^2 & = \frac{ c \, f_\Phi^4}{2 \,\Lambda^2}, \label{eq:phiBmass-octet-2}\end{aligned}$$ We also obtain the same expressions for the cubic scalar coupling $V \supset A_{h \phi_A^\dag \phi_A } h \, {\mbox{Tr}}\, \phi_A^\dag \phi_A$, as in Eq. (\[eq:triplet-cubic-scalar\]). [^1]: Other connections between Twin Higgs models and SM flavor structure have been explored in [@Csaki:2015gfd; @Barbieri:2017opf; @Altmannshofer:2020mfp]. [^2]: Note that $\delta$ is radiatively generated by the $SU(3)_c$ interactions with characteristic size $\delta \sim \alpha_s^2 \sim 10^{-2}$. [^3]: Higgs coupling measurements imply that $\vartheta$ cannot be too big, while naturalness suggests it not be too small [@Burdman:2014zta]. [^4]: We note that, e.g., $\bar d$ here (without the subscript $A$) denotes the outgoing particle state in the decay rather than the field variable in the Lagrangian, in this case anti-down quark. [^5]: Strictly speaking these are not Majorana mass terms, since they marry quarks of different flavor and color.
--- abstract: 'A rephasing invariant parametrization is introduced for three flavor neutrino mixing. For neutrino propagation in matter, these parameters are shown to obey evolution equations as functions of the induced neutrino mass. These equations are found to preserve (approximately) some characteristic features of the mixing matrix, resulting in solutions which exhibit striking patterns as the induced mass varies. The approximate solutions are compared to numerical integrations and found to be quite accurate.' author: - 'S. H. Chiu[^1]' - 'T. K. Kuo[^2]' title: Rephasing invariance and neutrino mixing --- Introduction ============ Flavor mixing plays a central role in the physics of flavors. For quarks, the CKM ($V_{CKM}$) matrix has stood the test of time and is found to be sufficient in describing all of the relevant physics. Similarly, the PMNS ($V_{\nu}$) matrix has been used to analyze neutrino oscillation with no known discrepancies. Mathematically, both matrices are elements of $U(3)$, the group of $3 \times 3$ unitary matrices. Physically, for quarks, since the phases of individual quark states are unobservable, the rephasing transformation, $V_{CKM} \rightarrow PV_{CKM}P'$, where $P$ and $P'$ are arbitrary phase matrices, leaves the physics unchanged. Thus, only the rephasing invariant part of $V_{CKM}$ is physical. For $V_{\nu}$, while the charged lepton phases are unobservable, there are actually two observable CP-violating phases for Majorana neutrinos [@ref1]. However, as long as one restricts oneself to lepton number conserving processes, such as in neutrino oscillations, these phases also become unphysical, so that the physical $V_{\nu}$ is again rephasing invariant. Related to the rephasing invariance of mixing matrices is their parametrization. It may appear that the choice of parametrization is not important, since, at the end of the day, the physical quantities must be grouped into rephasing invariant combinations. 
However, when one deals with a situation where relations between parameters are considered, a particular choice may be advantageous over others. For instance, when the mixing depends on the energy scale, as in the RGE for mass matrices, we have a set of evolution equations relating parameters at neighboring scales. Another example deals with neutrino mixing in matter. Here, the mixing depends on the density of matter and the neutrino energy, which contribute to an induced neutrino mass. One can establish the relation between parameters for neighboring densities, resulting in a set of evolution equations as a function of the neutrino effective mass. These equations are very similar to the RGE describing evolution with the energy scale. It turns out that, in both cases, the use of explicitly rephasing invariant parameters simplifies the evolution equations. In the following we will derive a set of evolution equations, as a function of the effective mass of neutrinos, for neutrino parameters in matter. These equations are based on the use of rephasing invariant parameters developed earlier. We find that they have simple, analytic, albeit approximate, solutions. It is interesting that the parameters in matter preserve a number of salient features of those in vacuum, resulting in a matter-dependent PMNS matrix that can be grasped at a glance. The paper is organized as follows. Section II is a brief summary of the rephasing invariant parametrization that is adopted in this work. Instead of directly solving the eigenvalue problem, we derive in Section III the evolution equations for the neutrino mixing parameters and masses from the effective Hamiltonian in matter. Certain well-known invariants are also derived using the symmetric properties of the equations. Section IV is devoted first to solving the two-flavor problem using this rephasing invariant formulation, and then to the three-flavor case. 
Making use of the known properties of measured neutrino parameters, analytic, approximate, solutions are obtained. In Section V, the accuracy of the solutions is confirmed by comparison with numerical integration of the equations. Section VI is the summary. In Appendix A, we also derive the neutrino transition probabilities in matter using the adopted rephasing invariant parametrization. Rephasing invariant parametrization =================================== In this section, we briefly summarize the rephasing invariant parametrization introduced earlier for quark mixing [@Kuo:05], which will now be adopted for neutrino mixing, valid for lepton number conserving processes. For the PMNS matrix ($V$), without loss of generality, we can impose the condition $\mbox{det}V=+1$. There is then a set of rephasing invariants $$\label{eq:g} \Gamma_{ijk}=V_{1i}V_{2j}V_{3k}=R_{ijk}-iJ,$$ where their common imaginary part can be identified with the Jarlskog invariant $J$ [@Jar:85]. Their real parts are defined as $$(R_{123},R_{231},R_{312};R_{132},R_{213},R_{321}) =(x_{1},x_{2},x_{3};y_{1},y_{2},y_{3}).$$ These variables are bounded by $\pm 1$: $-1 \leq (x_{i},y_{j}) \leq +1$, with $y_{j} \leq x_{i}$ for any ($i,j$). 
They satisfy two constraints $$\begin{aligned} \label{cons} \mbox{det}V=(x_{1}+x_{2}+x_{3})-(y_{1}+y_{2}+y_{3})=1, \\ (x_{1}x_{2}+x_{2}x_{3}+x_{3}x_{1})-(y_{1}y_{2}+y_{2}y_{3}+y_{3}y_{1})=0.\end{aligned}$$ In addition, it is found that $$\label{eq:J} J^{2}=x_{1}x_{2}x_{3}-y_{1}y_{2}y_{3}.$$ The $(x,y)$ parameters are related to $|V_{ij}|^{2}$ by $$\label{eq:w} W = \left[|V_{ij}|^{2}\right] = \left(\begin{array}{ccc} x_{1}-y_{1} & x_{2}-y_{2} & x_{3}-y_{3} \\ x_{3}-y_{2} & x_{1}-y_{3} & x_{2}-y_{1} \\ x_{2}-y_{3} & x_{3}-y_{1} & x_{1}-y_{2} \\ \end{array}\right).$$ One can readily obtain the parameters $(x,y)$ from $W$ by computing its cofactors, which form the matrix $w$ with $w^{T}W=(\mbox{det}W)I$, and is given by $$\label{eq:co} w = \left(\begin{array}{ccc} x_{1}+y_{1} & x_{2}+y_{2} & x_{3}+y_{3} \\ x_{3}+y_{2} & x_{1}+y_{3} & x_{2}+y_{1} \\ x_{2}+y_{3} & x_{3}+y_{1} & x_{1}+y_{2} \\ \end{array}\right).$$ Eqs. (\[eq:w\]) and  (\[eq:co\]) establish the close relationship between the two rephasing invariant parametrizations $(x,y)$ and $|V_{ij}|^{2}$. Besides the obvious difference in the number of constraints (two for $(x,y)$ and five for $|V_{ij}|^{2}$), the set $(x,y)$ has built-in symmetry amongst the three states considered, which, as we will see, helps to make the evolution equations simpler. For the PMNS matrix in vacuum, its elements squared are well-approximated by $$\label{w0} W_{0} = \left(\begin{array}{ccc} \frac{2(1-\epsilon^{2})}{3}-2\eta & \frac{1-\epsilon^{2}}{3}+2\eta & \epsilon^{2} \\ \frac{1+2\epsilon^{2}-\xi}{6}+\beta+\eta & \frac{2+\epsilon^{2}-2\xi}{6}-\beta-\eta & \frac{1-\epsilon^{2}+\xi}{2} \\ \frac{1+2\epsilon^{2}+\xi}{6}-\beta+\eta & \frac{2+\epsilon^{2}+2\xi}{6}+\beta-\eta & \frac{1-\epsilon^{2}-\xi}{2} \\ \end{array}\right),$$ with $(\epsilon, \eta, \beta, \xi) \ll 1$. $W_{0}$ reduces to the tri-bimaximal [@tribi] matrix when $\epsilon=\eta=\beta=\xi=0$. 
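As a numerical aside, the defining relations above are easy to verify for any sample mixing matrix. The Python sketch below builds a $\mbox{det}V=+1$ matrix in the standard parametrization (the angle and phase values are illustrative, not fits) and checks that all six $\Gamma_{ijk}$ share one imaginary part, that the constraints of Eq. (\[cons\]) and the relation of Eq. (\[eq:J\]) hold, and that the first row of $W$ equals $x_{i}-y_{i}$ as in Eq. (\[eq:w\]).

```python
import numpy as np
from itertools import permutations

def pmns(t12, t23, t13, d):
    # standard-parametrization mixing matrix; det V = +1 automatically
    s12, c12, s23, c23 = np.sin(t12), np.cos(t12), np.sin(t23), np.cos(t23)
    s13, c13, ep = np.sin(t13), np.cos(t13), np.exp(1j*d)
    return np.array([
        [c12*c13,                   s12*c13,                  s13/ep],
        [-s12*c23 - c12*s23*s13*ep, c12*c23 - s12*s23*s13*ep, s23*c13],
        [s12*s23 - c12*c23*s13*ep, -c12*s23 - s12*c23*s13*ep, c23*c13]])

V = pmns(0.59, 0.84, 0.15, 1.2)     # illustrative angles and phase, radians

# Gamma_ijk = V_1i V_2j V_3k = R_ijk - iJ: a common imaginary part -J
G = {p: V[0, p[0]] * V[1, p[1]] * V[2, p[2]] for p in permutations(range(3))}
J = -G[(0, 1, 2)].imag
x = [G[p].real for p in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]]  # even perms
y = [G[p].real for p in [(0, 2, 1), (1, 0, 2), (2, 1, 0)]]  # odd perms
```

The quantities `x`, `y`, and `J` obey $\sum x_{i}-\sum y_{i}=1$, the vanishing quadratic combination, and $J^{2}=x_{1}x_{2}x_{3}-y_{1}y_{2}y_{3}$ to machine precision.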
If we allow the parameters $(\epsilon, \eta, \beta, \xi)$ to take on arbitrary values, the matrix above can be used as a general parametrization of the mixing matrix. Also, it is related to the familiar “standard parametrization" [@data] by $S^{2}_{13}=\epsilon^{2}$, $S^{2}_{12}=\frac{1}{3}+\frac{2\eta}{1-\epsilon^{2}}$, $S^{2}_{23}=\frac{1}{2}+\frac{1}{2}\frac{\xi}{1-\epsilon^{2}}$, and $2\beta=(S^{2}_{23}-C^{2}_{23})[-\frac{2}{3}C^{2}_{13}+C^{2}_{12}-S^{2}_{12}S^{2}_{13}]+ 4S_{12}C_{12}S_{13}C_{23}S_{23}\cos\phi$, so that, if $(\epsilon, \eta, \xi) \ll1$, $\beta \simeq \frac{\sqrt{2}}{3}C_{\phi}S_{13}$. The matrix $W_{0}$ in Eq. (\[w0\]) exhibits several interesting features. When $\epsilon=\eta=\beta=\xi=0$, we find $x_{10}= 1/3$, $x_{20}= 1/6$, $x_{30}= 0$, and $x_{i0}+y_{i0}= 0$ $(i=1,2,3)$. The conditions $x_{30}=y_{30}=0$ come from $W_{13}=0$ (so also $V_{13}=0$). The conditions $x_{i0}+y_{i0}=0$ are equivalent to $W_{2i}=W_{3i}$ [@Wij]. From known experimental bounds, for non-vanishing $(\epsilon,\eta,\beta,\xi)$, these conditions are valid to $\mathcal{O}(10^{-2})$. Evolution of neutrino mixing parameters ======================================= It is well-established that neutrino mixing is modified by the presence of matter [@MSW]. This effect has been used in the analyses of solar neutrinos, and is expected to impact those of supernova neutrinos, when and if they become available. Closer to home, there is a plethora of long baseline experiments either in operation or in the planning stage. For these studies, it is essential to include the matter effects in order to understand neutrino mixing at the fundamental level. In the literature, effort has been devoted to solving problems along this line (see, $e.g.$, [@group]). However, the process involves the complication of a cubic eigenvalue problem, and the results are usually far from transparent for a clear extraction of the physical implications. 
In this work we study this problem from another angle. It is well-known that, when neutrinos propagate through matter, the latter contributes an induced mass to the neutrinos. Similar to the case of RGE, we may write down, as a function of the induced mass, a set of evolution equations for the neutrino parameters. It turns out that, with the initial conditions given by $W_{0}$ in Eq. (\[w0\]), we can find simple, approximate, solutions to these equations, as we will detail in Sec. IV. These results were summarized in a previous publication [@CKL]. To derive these equations, we start from the effective Hamiltonian for neutrino propagation in matter $$H_{eff}=H/2E,$$ where $H$ is given, in the flavor basis, by $$\label{H} H=\left[ V_{0} \left(\begin{array}{ccc} m_{1}^{2} & & \\ & m_{2}^{2} & \\ & & m_{3}^{2} \\ \end{array}\right) V_{0}^{\dag} + \left(\begin{array}{ccc} A & & \\ & 0 & \\ & & 0 \\ \end{array}\right)\right],$$ where $m_{1}$, $m_{2}$, and $m_{3}$ are the neutrino masses in vacuum, $V_{0}$ is the mixing matrix in vacuum, $E$ is the neutrino energy, and the induced mass $A=\sqrt{2}G_{F}n_{e}E$. The matrix $H$ can be diagonalized, $$H=VDV^{\dag}=V \left(\begin{array}{ccc} D_{1} & & \\ &D_{2} & \\ & & D_{3} \\ \end{array}\right) V^{\dag},$$ where $D_{i}=M_{i}^{2}$ is the squared mass in matter. To study how the elements of $V$ evolve in matter, one may start with $dH/dA$, which leads to $$\label{eq:matrix} V^{\dag}\frac{d}{dA}[VDV^{\dag}]V =\left(\begin{array}{ccc} |V_{11}|^{2} & V_{12}V_{11}^{*} & V_{13}V_{11}^{*} \\ V_{11}V_{12}^{*} & |V_{12}|^{2} & V_{13}V_{12}^{*} \\ V_{11}V_{13}^{*} & V_{12}V_{13}^{*} & |V_{13}|^{2} \\ \end{array}\right).$$ Taking the diagonal terms of Eq. 
(\[eq:matrix\]), we find $$\label{eq:di} \frac{dD_{i}}{dA}= |V_{1i}|^{2}=x_{i}-y_{i}, \hspace{.2in} (i=1,2,3).$$ The off-diagonal terms yield $$\label{eq:VV} [(\frac{dV^{\dag}}{dA})V]_{ik}=\frac{V^{*}_{1i}V_{1k}}{D_{i}-D_{k}}, \hspace{.2in} (i \neq k).$$ The diagonal elements $[(dV^{\dag}/dA)V]_{ii}$ are not constrained by Eq. (\[eq:matrix\]). Fortunately, they are rephasing dependent [@Chiu:09], and we can set them to vanish by a proper choice of the phases. This means that, when we multiply Eq. (\[eq:VV\]) by $(V^{\dag})_{kj}$, and sum over $k \neq i$ on the right hand side, we may sum over all $k$-values on the left. The result is $$\label{eq:dV} \frac{dV_{ij}}{dA} =\sum_{k\neq j} \frac{V_{ik}V_{1j}}{D_{j}-D_{k}} V_{1k}^{*}.$$ Note that the dependence is only on the mass differences, $(D_{j}-D_{k})$, in accordance with the invariance of $V_{ij}$ if $H \rightarrow H+\mbox{constant}$. While Eq. (\[eq:dV\]) is valid only with a particular choice of phase, this rephasing ambiguity is removed if one uses it to compute rephasing invariant quantities, $e.g.$, $$\frac{d\Gamma_{123}}{dA} =\frac{d}{dA}(V_{11}V_{22}V_{33})= \frac{dx_{1}}{dA}-i\frac{dJ}{dA}.$$ After some algebra, separating the real and imaginary parts, in addition to using different $\Gamma^{'}_{ijk}s$, we obtain the evolution equations for all $(x_{i},y_{i})$ and $d\ln J/dA$, which are collected in Table I. Note that, since Eq. (\[eq:dV\]) can be obtained from Eq.(3.6) (Ref. [@Chiu:09]) in appropriate limits, the entries in Table I can be identified with those in Table II of Ref. [@Chiu:09]. Indeed, it can be verified that $dx_{i}/dA=\sum[(\bar{B}_{i})_{2r}-(\bar{B}_{i})_{3r}]/(D_{s}-D_{t})$, $dy_{j}/dA=\sum[(\bar{B}'_{j})_{2r}-(\bar{B}'_{j})_{3r}]/(D_{s}-D_{t})$, where the sum is over cyclically permuted $(r,s,t)=(1,2,3)$, and $\bar{B}_{i} (\bar{B}'_{j})$ are obtained from $B_{i} (B'_{j})$ in Table II of Ref. 
[@Chiu:09] by exchanging $x_{2} \leftrightarrow x_{3}$, since $V \leftrightarrow V^{\dag}$ under the usual conventions in going from quarks to neutrinos.

                  $1/(D_{1}-D_{2})$                                $1/(D_{2}-D_{3})$                                $1/(D_{3}-D_{1})$
  --------------- ------------------------------------------------ ------------------------------------------------ -----------------------------------------------
  $dx_{1}/dA$     $x_{1}x_{2}-2x_{1}y_{2}+y_{1}y_{2}$              $-x_{1}x_{2}+x_{1}x_{3}+y_{1}y_{2}-y_{1}y_{3}$   $-x_{1}x_{3}+2x_{1}y_{3}-y_{1}y_{3}$
  $dx_{2}/dA$     $-x_{1}x_{2}+2x_{2}y_{1}-y_{1}y_{2}$             $x_{2}x_{3}-2x_{2}y_{3}+y_{2}y_{3}$              $x_{1}x_{2}-x_{2}x_{3}-y_{1}y_{2}+y_{2}y_{3}$
  $dx_{3}/dA$     $-x_{1}x_{3}+x_{2}x_{3}+y_{1}y_{3}-y_{2}y_{3}$   $-x_{2}x_{3}+2x_{3}y_{2}-y_{2}y_{3}$             $x_{1}x_{3}-2x_{3}y_{1}+y_{1}y_{3}$
  $dy_{1}/dA$     $-x_{1}x_{2}+2x_{2}y_{1}-y_{1}y_{2}$             $-x_{1}x_{2}+x_{1}x_{3}+y_{1}y_{2}-y_{1}y_{3}$   $x_{1}x_{3}-2x_{3}y_{1}+y_{1}y_{3}$
  $dy_{2}/dA$     $x_{1}x_{2}-2x_{1}y_{2}+y_{1}y_{2}$              $-x_{2}x_{3}+2x_{3}y_{2}-y_{2}y_{3}$             $x_{1}x_{2}-x_{2}x_{3}-y_{1}y_{2}+y_{2}y_{3}$
  $dy_{3}/dA$     $-x_{1}x_{3}+x_{2}x_{3}+y_{1}y_{3}-y_{2}y_{3}$   $x_{2}x_{3}-2x_{2}y_{3}+y_{2}y_{3}$              $-x_{1}x_{3}+2x_{1}y_{3}-y_{1}y_{3}$
  $d(\ln J)/dA$   $-x_{1}+x_{2}+y_{1}-y_{2}$                       $-x_{2}+x_{3}+y_{2}-y_{3}$                       $x_{1}-x_{3}-y_{1}+y_{3}$
  --------------- ------------------------------------------------ ------------------------------------------------ -----------------------------------------------

The symmetric form of these equations allows us to readily find the result: $$\label{JD} \frac{d}{dA}\ln[J(D_{1}-D_{2})(D_{2}-D_{3})(D_{3}-D_{1})]=0,$$ $i.e.$, the product $[J(D_{1}-D_{2})(D_{2}-D_{3})(D_{3}-D_{1})]$ is a constant as $A$ changes, a well-known result derived with different methods [@HNK]. 
From Table I, we find $$\label{x1y1} \frac{1}{2}\frac{d}{dA}\ln(x_{1}-y_{1})=\frac{x_{2}-y_{2}}{D_{1}-D_{2}}- \frac{x_{3}-y_{3}}{D_{3}-D_{1}},$$ $$\label{x2y2} \frac{1}{2} \frac{d}{dA}\ln(x_{2}-y_{2})= -\frac{x_{1}-y_{1}}{D_{1}-D_{2}}+ \frac{x_{3}-y_{3}}{D_{2}-D_{3}},$$ $$\label{x3y3} \frac{1}{2} \frac{d}{dA}\ln(x_{3}-y_{3})=-\frac{x_{2}-y_{2}}{D_{2}-D_{3}}+ \frac{x_{1}-y_{1}}{D_{3}-D_{1}}.$$ We see that there is another “matter invariant": $$\frac{d}{dA}[\frac{J^{2}}{(x_{1}-y_{1})(x_{2}-y_{2})(x_{3}-y_{3})}]=0.$$ Or, $$\label{eq:in2} J^{2}/(|V_{11}|^{2}|V_{12}|^{2}|V_{13}|^{2})=\mbox{constant}.$$ When we use the “standard parametrization", it is seen that $J^{2}/(|V_{11}|^{2}|V_{12}|^{2}|V_{13}|^{2})=S^{2}_{\phi}S^{2}_{23}C^{2}_{23}$, $i.e.$, $S_{\phi}\sin2\theta_{23}$ is independent of $A$, a result obtained earlier [@TP]. The evolution equations for $(x,y)$ also have a structure akin to that of the fixed point of single variable equations. It can be verified that, if $x_{i}+y_{i}=0$ $(i=1,2,3)$, then $$\label{sigma} \frac{d}{dA}(x_{j}+y_{j})=0, \hspace{.2in} j=(1,2,3).$$ This result is understandable since the conditions $x_{i}+y_{i}=0$ are equivalent to $W_{2i}=W_{3i}$, which, in turn, imply that the effective Hamiltonian $H$ has a $\mu-\tau$ exchange symmetry [@23-sym]. This symmetry is clearly independent of $A$ in Eq. (\[H\]), resulting in Eq. (\[sigma\]). Note also that there are actually only two independent constraints in $x_{i}+y_{i}=0$. Given any two of them, say for $i=1,2$, we can use Eq. (4) to derive $x_{3}+y_{3}=0$. Thus, the set of evolution equations has a “fixed surface" in the four-dimensional parameter space: points on the surface defined by $x_{i}+y_{i}=0$ stay on it as $A$ varies. 
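As a numerical aside, Eq. (\[eq:di\]) and both matter invariants can be checked against a direct diagonalization of Eq. (\[H\]). In the Python sketch below the vacuum angles, phase, and mass-squared values are illustrative inputs, not fits; the Jarlskog invariant is computed from the rephasing invariant combination $\mbox{Im}(V_{11}V_{22}V_{12}^{*}V_{21}^{*})$, which is insensitive to the arbitrary eigenvector phases returned by the solver.

```python
import numpy as np

def pmns(t12, t23, t13, d):
    # standard-parametrization mixing matrix (det V = +1)
    s12, c12, s23, c23 = np.sin(t12), np.cos(t12), np.sin(t23), np.cos(t23)
    s13, c13, ep = np.sin(t13), np.cos(t13), np.exp(1j*d)
    return np.array([
        [c12*c13,                   s12*c13,                  s13/ep],
        [-s12*c23 - c12*s23*s13*ep, c12*c23 - s12*s23*s13*ep, s23*c13],
        [s12*s23 - c12*c23*s13*ep, -c12*s23 - s12*c23*s13*ep, c23*c13]])

V0 = pmns(0.59, 0.84, 0.15, 1.2)                         # illustrative vacuum mixing
H0 = V0 @ np.diag([0.0, 7.6e-5, 2.5e-3]) @ V0.conj().T   # eV^2

def solve(A):
    D, V = np.linalg.eigh(H0 + np.diag([A, 0.0, 0.0]))   # D1 < D2 < D3
    J = np.imag(V[0, 0]*V[1, 1]*np.conj(V[0, 1])*np.conj(V[1, 0]))
    return D, V, J

# Eq. (eq:di): dD_i/dA = |V_1i|^2, checked by a central finite difference
A0, h = 3e-4, 1e-10
Dp = solve(A0 + h)[0]
Dm = solve(A0 - h)[0]
Vm = solve(A0)[1]
diff_check = np.max(np.abs((Dp - Dm)/(2*h) - np.abs(Vm[0])**2))

# both matter invariants are independent of A
inv1, inv2 = [], []
for A in [0.0, 1e-4, 1e-3, 1e-2]:
    D, V, J = solve(A)
    inv1.append(J*(D[0]-D[1])*(D[1]-D[2])*(D[2]-D[0]))   # Eq. (JD)
    inv2.append(J**2 / np.prod(np.abs(V[0])**2))          # Eq. (eq:in2)
```

Both lists come out constant to numerical precision, spanning values of $A$ below, between, and above the two resonances.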
Approximate Solutions ===================== While analytical solutions to the equations in Table I are not available, as we will see, given the known physical parameters, one can exploit certain characteristic properties thereof to arrive at simple, but fairly accurate, solutions to these equations. Before we do that, it is instructive to first study the two flavor problem, which can be compared to the traditional approach, since exact solutions can be obtained in both cases. Two-flavor problem ------------------ For two flavors, we have $$\frac{dH}{dA}=\left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \\ \end{array}\right),$$ with the familiar diagonalization matrix $$V=\left(\begin{array}{cc} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \\ \end{array}\right),$$ so that $x=V_{11}V_{22}=\cos^{2}\theta$, $y=V_{12}V_{21}=-\sin^{2}\theta$, $x-y=1$. The evolution equations are $$\begin{aligned} \label{evo} \frac{dD}{dA}&=&-(x+y), \nonumber \\ \frac{dx}{dA}&=& \frac{2xy}{D}=\frac{dy}{dA}, \end{aligned}$$ where $D \equiv D_{2}-D_{1}$. It follows that $$\label{alpha} \frac{d}{dA}(xyD^{2})=0,$$ $$\label{beta} \frac{d}{dA}[D (x+y)]=-(x+y)^{2}+4xy=-1.$$ Eq. (\[alpha\]) is the familiar result $$D^{2}\sin^{2}2\theta=D^{2}_{0} \sin^{2} 2\theta_{0}.$$ Eq. (\[beta\]) gives $$D (x+y)=-A+D_{0}(x+y)_{0},$$ and thus $$D^{2}=[(A-D_{0}\cos 2\theta_{0})^{2}+D^{2}_{0} \sin^{2} 2\theta_{0}],$$ which is the well-known resonance formula with $D=\mbox{min.}$ at $A=D_{0}\cos 2\theta_{0}$. These results show that the use of evolution equations is equivalent to the traditional method, that of finding directly the eigenvalues of the effective Hamiltonian. We now turn to the case of three flavors. Three-flavor problem -------------------- Experimentally, it is known that $\delta_{0}=m^{2}_{2}-m^{2}_{1} \cong 7.6 \times 10^{-5} eV^{2}$, $\Delta_{0}=m^{2}_{3}-m^{2}_{2} \cong 2.4 \times 10^{-3} eV^{2}$, so that $\delta_{0}/\Delta_{0} \ll 1$ (We assume the “normal" ordering of neutrino masses. 
The “inverted" case can be similarly treated). Note that these values are relevant to long baseline experiments since $A=\sqrt{2}G_{F}n_{e}E \sim (7.6 \times 10^{-5} eV^{2})(E/GeV)(\rho/g cm^{-3})$. Since $\delta_{0} \ll \Delta_{0}$, we expect that the three-flavor problem can be approximated by a pair of well separated two-flavor problems [@KP:89]. Indeed, the structure of the differential equations in Table I shows that the variables $(x_{i},y_{i})$ evolve slowly as a function of $A$ except for two regions, where $D_{1} \approx D_{2}$ and $D_{2} \approx D_{3}$, corresponding to the two resonance regions. More precisely, let us denote by $(A_{0},A_{l},A_{i},A_{h},A_{d})$ the values of $A$ in vacuum $(A_{0}=0)$, at the lower resonance $(A_{l}, [d(D_{1}-D_{2})/dA]_{A_{l}}=0)$, the intermediate range $(A_{i})$, the higher resonance $(A_{h}), [d(D_{2}-D_{3})/dA]_{A_{h}}=0)$, and for dense medium $(A_{d})$. Rapid evolution for $(x_{i},y_{i})$ only occurs for $A\approx A_{l}$ and $A\approx A_{h}$. Near the lower resonance region, $(D_{2}-D_{1}) \ll (D_{3}-D_{1})$ or $(D_{3}-D_{2})$. For the higher resonance region, $(D_{3}-D_{2}) \ll (D_{3}-D_{1})$ or $(D_{2}-D_{1})$. Thus, for these two regions, we need only to keep terms $\propto 1/(D_{2}-D_{1})$ and $1/(D_{3}-D_{2})$, respectively. This approximation is generally valid to $\mathcal{O}(10^{-2})$. We now turn to a detailed analysis. For $0<A<A_{i}$, in the neighborhood of $A_{l}$, we keep terms $\propto 1/(D_{1}-D_{2})$ in Table I. Let’s concentrate on the variables $x_{i}-y_{i}=|V_{1i}|^{2}$ and use Eq. (\[eq:di\]) and Eqs. (\[x1y1\])-(\[x3y3\]). 
We define $$\begin{aligned} X&=&x_{1}-y_{1}, \nonumber \\ Y&=&x_{2}-y_{2}, \nonumber \\ Z&=&x_{3}-y_{3}, \nonumber \\ \delta&=&D_{2}-D_{1}.\end{aligned}$$ Then, $$\begin{aligned} \label{eq:XZ} \frac{dX}{dA}&=&-\frac{2XY}{\delta}=-\frac{dY}{dA}, \nonumber \\ \frac{dZ}{dA}&=&0, \nonumber \\ \frac{d\delta}{dA}&=&-(X-Y).\end{aligned}$$ These equations are identical to those for the two flavor problem, Eq. (\[evo\]), with $X \rightarrow x$, $Y \rightarrow -y$, and $\delta \rightarrow D$. Also, in place of $x-y=1$, we have $$X+Y=p_{l},$$ where $p_{l}$ is a constant (since $d(X+Y)/dA=0$), and $p_{l}=1-Z$, $Z=|V_{13}|^{2}=\mbox{constant}$. The solutions are $$\begin{aligned} \label{sol-low} XY\delta^{2}&=&X_{0}Y_{0}\delta^{2}_{0}, \nonumber \\ (X-Y)\delta&=&-p^{2}_{l} A+q_{l} \delta_{0}, \nonumber \\ q_{l}&=&X_{0}-Y_{0}.\end{aligned}$$ Explicitly, we have $$\begin{aligned} \label{low} \delta^{2}&=&p^{2}_{l}A^{2}-2q_{l}\delta_{0}A+\delta_{0}^{2},\nonumber \\ X&=&\frac{1}{2}[p_{l}-(p^{2}_{l}A-q_{l}\delta_{0})/\delta], \nonumber \\ Y&=&\frac{1}{2}[p_{l}+(p^{2}_{l}A-q_{l}\delta_{0})/\delta].\end{aligned}$$ Thus, $(\delta,X,Y)$ exhibit the classic resonance behavior, with the resonance location at the minimum of $\delta$: $$A_{l}=(\frac{q_{l}}{p_{l}^{2}})\delta_{0}.$$ Substituting in the vacuum input values, $X_{0} \cong 2/3$, $Y_{0} \cong 1/3$, $A_{l} \cong \delta_{0}/3$. The width of the resonance is $$(\delta A)_{l}=(1-q^{2}_{l}/p^{2}_{l})^{1/2}(\delta_{0}/p_{l}).$$ For the physical PMNS matrix, $(\delta A)_{l} \simeq \delta_{0}$. This means that the intermediate $A$ value, $A_{i}$, already starts at $A \gtrsim (2-3)\delta_{0}$. For $A \sim A_{i}$, $\delta \rightarrow A$, $X \rightarrow 0$, $Y \rightarrow 1$, with $p_{l} \cong 1$. 
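The closed-form solution, Eq. (\[low\]), is simple enough to tabulate directly. The Python sketch below uses illustrative near-tri-bimaximal vacuum inputs with $Z$ frozen at $Z_{0}=|V_{13}|^{2}$, locates the resonance, and confirms that $X+Y=p_{l}$ is conserved along the evolution.

```python
import numpy as np

d0 = 7.6e-5                              # vacuum delta_0 = D2 - D1, in eV^2
Z0 = 0.022                               # |V_13|^2, frozen in the low-A region
X0, Y0 = 2*(1 - Z0)/3, (1 - Z0)/3        # near-tri-bimaximal |V_11|^2, |V_12|^2
pl, ql = X0 + Y0, X0 - Y0

A = np.linspace(0.0, 5*d0, 100001)
delta = np.sqrt(pl**2*A**2 - 2*ql*d0*A + d0**2)   # Eq. (low)
X = 0.5*(pl - (pl**2*A - ql*d0)/delta)
Y = 0.5*(pl + (pl**2*A - ql*d0)/delta)

Al = (ql/pl**2)*d0                       # resonance position, ~ delta_0/3
wl = np.sqrt(1 - ql**2/pl**2)*(d0/pl)    # resonance width, ~ delta_0

i = np.argmin(delta)                     # delta is smallest at the resonance,
                                         # where X = Y = pl/2 (maximal 1-2 mixing)
```

The tabulated minimum of $\delta$ falls at the predicted $A_{l}$, and the width comes out close to $\delta_{0}$, consistent with the estimates above.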
Turning to the higher resonance, we define $$\Delta=D_{3}-D_{2}.$$ The evolution equations are $$\begin{aligned} \label{eq:XY} \frac{dX}{dA}&=&0, \nonumber \\ \frac{dY}{dA}&=&-\frac{2YZ}{\Delta}=-\frac{dZ}{dA}, \nonumber \\ \frac{d \Delta}{dA}&=&-(Y-Z),\end{aligned}$$ with the solutions $$\begin{aligned} YZ\Delta^{2}&=& Y_{0}Z_{0}\Delta^{2}_{0}, \nonumber \\ (Y-Z)\Delta &=& -p^{2}_{h}A+q_{h}\Delta_{0}, \end{aligned}$$ or, $$\begin{aligned} \label{high} \Delta^{2}&=&p^{2}_{h} A^{2}-2q_{h}\Delta_{0} A+\Delta^{2}_{0}, \nonumber \\ Y&=&\frac{1}{2}[p_{h}-(p^{2}_{h} A-q_{h} \Delta_{0})/\Delta], \nonumber \\ Z&=&\frac{1}{2}[p_{h}+(p^{2}_{h} A-q_{h} \Delta_{0})/\Delta],\end{aligned}$$ where $p_{h}=Y_{0}+Z_{0}$, $q_{h}=Y_{0}-Z_{0}$. Here, the values $Y_{0}$ and $Z_{0}$ are taken at $A=A_{i} \gg \delta_{0}$, so that $Y_{0} \cong 1$, $Z_{0} \cong |V_{13}|^{2} \cong 0$, from the solutions for the lower resonance. The position of the higher resonance is at $$A_{h}=(\frac{q_{h}}{p_{h}^{2}})\Delta_{0} \cong \Delta_{0}.$$ Its width is $$(\delta A)_{h}=(1-q^{2}_{h}/p^{2}_{h})^{1/2}(\Delta_{0}/p_{h}) \cong 2 \sqrt{Z_{0}}\,\Delta_{0} \ll \Delta_{0}.$$ The above analyses show that the two-flavor approximation yields simple solutions to the mixing parameters $|V_{1i}|^{2}$, for all $A$ values. However, the vacuum mixing, given by $W_{0}$ in Eq. (\[w0\]), has another important feature, namely, $(W_{0})_{2i} \cong (W_{0})_{3i}$, or $x_{i0}+y_{i0} \cong 0$. This feature, according to Eq. (\[sigma\]), is preserved by the evolution equations and so $W_{2i} \cong W_{3i}$, or $x_{i}+y_{i} \cong 0$, for all $A$. Thus, with the known solutions for $W_{1i}$ from above, all elements of $W$ are determined by unitarity. Explicit solutions for $W$ or $(x_{i},y_{i})$ were presented in Ref. [@CKL], obtained by using both approximations. We may divide the full range of $A$ values into a low-$A$ and a high-$A$ region. 
The former covers the range from $A=0$ to a value below the higher resonance region, while the latter starts from beyond the lower resonance region and ends at $A=\infty$. In these regions, the evolution equations are dominated by contributions from pole terms, $1/(D_{1}-D_{2})$ for low-$A$ and $1/(D_{2}-D_{3})$ for high-$A$. The exact demarcation between low-$A$ and high-$A$ is not important, since in the intermediate region contributions from either pole are small, and there can be considerable overlap between low-$A$ and high-$A$, corresponding to the large range of $A_{i}$. It should be noted that “pole dominance", which was used to go from Eqs. (\[x1y1\]-\[x3y3\]) to Eqs. (\[eq:XZ\]) and  (\[eq:XY\]), is an excellent approximation in this situation. This is because the terms dropped are doubly suppressed, first by the large denominators, and then by the small numerators ($ (x_{3}-y_{3}) \ll 1$ throughout the low-$A$ region and $(x_{1}-y_{1}) \ll 1$ for high-$A$). We should also emphasize that Eqs. (\[eq:XZ\]) and  (\[eq:XY\]) are derived independently of the approximation $x_{i}+y_{i} \simeq 0$; in our earlier work [@CKL], by contrast, this approximation was used to arrive at similar evolution equations. The “pole dominance" approximation, less accurately, may also be used for other variables, such as $J^{2}$. This gives rise to relations, which we shall dub “partial matter invariants", valid only in either the low-$A$ or the high-$A$ region. Thus, from Eq.
(\[sol-low\]) and using Table I, $$\begin{aligned} \label{pmiL} (D_{1}-D_{2})^{2}|V_{11}|^{2}|V_{12}|^{2} &\cong& \mbox{constant}, \hspace{0.2in} (\mbox{low-A}) \nonumber \\ J^{2}(D_{1}-D_{2})^{2} &\cong& \mbox{constant}.\end{aligned}$$ Similarly, $$\begin{aligned} \label{pmiH} (D_{2}-D_{3})^{2}|V_{12}|^{2}|V_{13}|^{2} &\cong& \mbox{constant}, \hspace{0.2in} (\mbox{high-A}) \nonumber \\ J^{2}(D_{2}-D_{3})^{2} &\cong& \mbox{constant}.\end{aligned}$$ These “partial matter invariants" are useful in understanding some detailed properties of the parameters. $E.g.$, for $A \sim A_{i}$, $|V_{11}|^{2}\cong (2/9)(\delta_{0}/A)^{2}$. The behavior of $J^{2}$ is also clarified, as we will see in the discussion of Fig. 4. In summary, the solution to the three flavor problem can be made simple by dividing the full range of $A$ into low-$A$ and high-$A$ regions. In the low-$A$ region, the evolution equations are dominated by pole terms $\propto 1/(D_{1}-D_{2})$, and the solution centers around the lower resonance. For the high-$A$ region, correspondingly, pole terms $\propto 1/(D_{2}-D_{3})$ dominate, and the solution can be characterized by the higher resonance. The mixing parameters change appreciably only in two regions: 1) lower resonance, $[A_{l}-(\delta A)_{l}] \lesssim A \lesssim [A_{l}+(\delta A)_{l}]$, $A_{l} \cong \delta_{0}/3$, $(\delta A)_{l} \cong \delta_{0}$; 2) higher resonance, $[A_{h}-(\delta A)_{h}] \lesssim A \lesssim [A_{h}+(\delta A)_{h}]$, $A_{h} \cong \Delta_{0}$, $(\delta A)_{h} \cong 2|V_{13}|^{2}\Delta_{0} \ll \Delta_{0}$.
The solutions for $X (|V_{11}|^{2})$, $Y(|V_{12}|^{2})$, and $Z (|V_{13}|^{2})$ are: 1) $A=0$, $X_{0} \cong 2/3$, $Y_{0} \cong 1/3$, $Z_{0} \cong \epsilon^{2} \ll 1$, which are the given vacuum values; 2) $A=A_{l}$, $X \cong Y \cong 1/2$, $Z \cong \epsilon^{2} \cong 0$; 3) $A=A_{i}$, $A_{i}$ covers roughly the range, $2\delta_{0} \lesssim A_{i} \lesssim \Delta_{0}(1-2\epsilon^{2})$, $X \cong 0$, $Y \cong 1$, $Z \cong \epsilon^{2}$; 4) $A=A_{h}$, $X \cong 0$, $Y \cong Z \cong 1/2$; 5) $A=A_{d}$, with $A_{d} \gtrsim \Delta_{0} (1+2\epsilon^{2})$, $X\cong Y \cong 0$, $Z \cong 1$. When we incorporate the other feature of the vacuum PMNS matrix, that $(W_{0})_{2i} \cong (W_{0})_{3i}$, which is preserved by the evolution equations, the result is that $W_{2i} \cong W_{3i}$, for all $A$. Given $W_{1i}$ from above, the matrix $W$ is then completely determined by unitarity. Our results can be put together by giving the matrices $W$ at $A=(A_{0},A_{l},A_{i},A_{h},A_{d})$: $$\begin{aligned} \label{eq:sum} W_{0} & \cong & \left(\begin{array}{ccc} 2/3 & 1/3 & 0 \\ 1/6 & 1/3 & 1/2 \\ 1/6 & 1/3 & 1/2 \\ \end{array}\right), \hspace{.15in} W_{l} \cong \left(\begin{array}{ccc} 1/2 & 1/2 & 0 \\ 1/4 & 1/4 & 1/2 \\ 1/4 & 1/4 & 1/2 \\ \end{array}\right), \nonumber \\ W_{i} & \cong & \left(\begin{array}{ccc} 0 & 1 & 0 \\ 1/2 & 0 & 1/2 \\ 1/2 & 0 & 1/2 \\ \end{array}\right), \hspace{.15in} W_{h} \cong \left(\begin{array}{ccc} 0 & 1/2 & 1/2 \\ 1/2 & 1/4 & 1/4 \\ 1/2 & 1/4 & 1/4 \\ \end{array}\right), \nonumber \\ W_{d} & \cong & \left(\begin{array}{ccc} 0 & 0 & 1 \\ 1/2 & 1/2 & 0 \\ 1/2 & 1/2 & 0 \\ \end{array}\right).\end{aligned}$$ As a group, these matrices exhibit the remarkable simplicity of the PMNS matrix as $A$ varies from $0$ to $\infty$. Note that all of the matrices have at least one zero, $W_{1I}=0$, implying $x_{I}=y_{I}=0$. The other feature, as mentioned before, is that they have equal elements in their second and third rows, $W_{2i}=W_{3i}$. 
This means that the $W$ matrix is completely fixed by its first row, $W_{1i}$. These elements, in turn, control $dD_{i}/dA$, Eq. (\[eq:di\]). Thus, the progression of $W$ as a function of $A$ can be read off from the plot of $D_{i}(A)$, which is given in Fig. 1. Generally, the accuracy of the entries in Eq. (\[eq:sum\]) is of the order $10^{-2}$. Note also that $A_{i}$ covers a rather large range, $2\delta_{0} \lesssim A_{i} \lesssim \Delta_{0}(1-2\epsilon^{2})$. The validity of Eq. (\[eq:sum\]) will be confirmed by numerical integrations, to be given in the next section. Where applicable, they also agree with numerical results in the literature [@group], after the proper change of variables is carried out. Numerical solutions =================== It is straightforward to numerically integrate the evolution equations for $(x,y)$. To do this we first obtain the vacuum expressions for the $(x,y)$ parameters from Eqs. (6) and (8): $$\begin{aligned} \label{eq:initial} x_{10} & = & \frac{1}{6}(2-3\beta-2\epsilon^{2}), \hspace{.2in} y_{10} = \frac{1}{6}(-2-3\beta+2\epsilon^{2}), \nonumber \\ x_{20} & = & \frac{1}{6}(1-3\beta-\epsilon^{2}), \hspace{.2in} y_{20} = \frac{1}{6}(-1-3\beta+\epsilon^{2}), \nonumber \\ x_{30} & = & \frac{1}{2} (\beta+\epsilon^{2}), \hspace{.2in} y_{30} = \frac{1}{2}(\beta-\epsilon^{2}),\end{aligned}$$ where $\xi=\eta=0$ is chosen and terms of $\mathcal{O}(\beta \epsilon^{2})$ are ignored. In addition, we choose the initial values $\epsilon=0.17$, $\beta =0.02$, corresponding to the experimental bounds $|V_{e3}|^{2} \leq 0.03$ [@data] and an assumed CP violation phase $\cos \phi=1/4$. With the input hierarchy $\delta_{0}/\Delta_{0}=1/32$, the numerical results for $D_{i}$ and those for $(x_{i},y_{i})$ are then compared with the approximate solutions obtained earlier (Eqs. (\[low\]), (\[high\])) in Fig. 1 and Fig. 2, respectively. The agreement is quite good. Given the possible normal or inverted mass hierarchies, Fig.
3 summarizes the evolution of the mixing parameters for both the $\nu$ and the $\bar{\nu}$ sectors. Note that the parameters $(\bar{x}_{i},\bar{y}_{i})$ for the $\bar{\nu}$ sector can be obtained by replacing $A$ with $-A$ and $V$ with $V^{*}$ in those for the $\nu$ sector. With the establishment of these “basic solutions" for the mixing parameters, we may easily study other physical quantities that are relevant to practical calculations for neutrino propagation in matter. For illustration purposes, we plot some of the quantities numerically under the normal hierarchy in the following. The corresponding solutions under the inverted hierarchy can be obtained likewise. The evolution of $J^{2}=x_{1}x_{2}x_{3}-y_{1}y_{2}y_{3}$ ($\bar{J}^{2}=\bar{x}_{1}\bar{x}_{2}\bar{x}_{3}-\bar{y}_{1}\bar{y}_{2}\bar{y}_{3}$) in matter is shown in Fig. 4. Compared to its vacuum value, it is seen that, except for some enhancement of $J^{2}$ near $A=A_{l}$ and $A=A_{h}$, the general trend is for it to decrease with $A$. We note that near $A_{l}$, in the $1/(D_{1}-D_{2})$ dominance approximation, $J^{2}(D_{1}-D_{2})^{2} \approx \mbox{constant}$, so while $(D_{2}-D_{1})$ goes through a dip, $J^{2}$ has a bump near $A_{l}$, after which $J^{2}/J^{2}_{0} \approx (\delta_{0}/A)^{2}$, for $\delta_{0} \ll A \lesssim \Delta_{0}$. A similar behavior occurs near $A_{h}$, with $J^{2}(D_{2}-D_{3})^{2} \approx \mbox{constant}$, although the effects are hardly noticeable. Similar numerical results were also reached by solving directly the eigenvalue problem [@group]. In addition, Fig. 5 shows the evolution of the first two rows of $W_{ij}$ in matter. We do not present the plots of $W_{3i}$ since they are almost indistinguishable from those of $W_{2i}$. The patterns of $W_{ij}$ in Eq. (\[eq:sum\]) are clearly seen from the plots.
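The limiting forms of $W$ in Eq. (\[eq:sum\]) can also be checked for internal consistency: unitarity of $V$ forces the rows and columns of $W=(|V_{ij}|^{2})$ to sum to one, and the $\mu$-$\tau$ feature $W_{2i}=W_{3i}$ means the first row fixes the whole matrix. A minimal Python sketch (the dictionary keys are ours):

```python
import numpy as np

# Limiting forms of W = (|V_ij|^2) from Eq. (eq:sum), keyed by the A value.
W_limits = {
    "A=0": [[2/3, 1/3, 0], [1/6, 1/3, 1/2], [1/6, 1/3, 1/2]],
    "A_l": [[1/2, 1/2, 0], [1/4, 1/4, 1/2], [1/4, 1/4, 1/2]],
    "A_i": [[0, 1, 0], [1/2, 0, 1/2], [1/2, 0, 1/2]],
    "A_h": [[0, 1/2, 1/2], [1/2, 1/4, 1/4], [1/2, 1/4, 1/4]],
    "A_d": [[0, 0, 1], [1/2, 1/2, 0], [1/2, 1/2, 0]],
}
for name, W in W_limits.items():
    W = np.array(W)
    # unitarity of V: rows and columns of W sum to 1
    assert np.allclose(W.sum(axis=0), 1) and np.allclose(W.sum(axis=1), 1)
    # the mu-tau feature W_2i = W_3i, preserved for all A
    assert np.allclose(W[1], W[2])
    # hence the first row fixes the matrix: W_2i = W_3i = (1 - W_1i)/2
    assert np.allclose(W[1], (1 - W[0]) / 2)
```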
Furthermore, the mixing angles of the standard parametrization: $\sin^{2}\theta_{12}$, $\sin^{2}\theta_{23}$, and $\sin^{2}\theta_{13}$, are related to the $(x,y)$ parameters: $$\sin^{2}\theta_{12}=1/(1+\frac{x_{1}-y_{1}}{x_{2}-y_{2}}),$$ $$\sin^{2}\theta_{23}=1/(1+\frac{x_{1}-y_{2}}{x_{2}-y_{1}}),$$ $$\sin^{2}\theta_{13}=x_{3}-y_{3}.$$ The numerical results for both the $\nu$ and $\bar{\nu}$ sectors are shown in Fig. 6. These plots are in agreement with those in the literature [@group]. For $\theta_{12}$ and $\theta_{13}$, note the characteristic step-function resonance behaviors near $A_{l}$ and $A_{h}$. Also, $\theta_{23} \cong \pi/4$ is a reflection of $W_{23} \cong W_{33}$, for all $A$. The phase angle $\phi$ is not plotted since it also remains constant due to the invariance of $S_{\phi}\sin2\theta_{23}$. Conclusion ========== Understanding the propagation of neutrinos through matter is one of the core problems in neutrino physics. In a medium of constant density, it is well-known that the electron neutrino acquires an induced mass which alters both the eigenvalues and the mixing matrix of the neutrinos. Traditionally, one studies directly the eigenvalue problem of the effective Hamiltonian. The neutrino parameters are expressed as complicated formulas in terms of the induced mass and their values in vacuum. One then resorts to numerical plots by assuming specific values for these partially known parameters. The drawback of this method is the lack of insight into the nature of the solutions, and it is not easy to gain an overview of the mixing as a function of the induced mass. In this paper we take a different approach, finding the evolution equations of the neutrino parameters as a function of the induced mass. The resulting equations, when written in terms of a rephasing invariant parametrization, turn out to be manageable and we are able to find simple, approximate, solutions with the help of two important features of the vacuum neutrino parameters.
1) The two measured mass differences are widely separated so that the two-flavor resonance approximation becomes applicable. 2) The vacuum PMNS matrix has an approximate $\mu$-$\tau$ symmetry, which is preserved by the set of evolution equations. The result is summarized in Eq. (\[eq:sum\]), showing the striking simplicity of the neutrino mixing matrix as a function of $A$. Approximate solutions for the parameters are explicitly given in Eqs. (\[low\]) and  (\[high\]). The evolution equations also facilitate the derivation of “matter invariants", given in Eqs. (\[JD\]) and  (\[eq:in2\]). In addition, there are also “partial matter invariants", Eqs. (\[pmiL\]) and  (\[pmiH\]). These are useful in obtaining properties of the various parameters without performing detailed calculations. Based on the incomplete measurements that exist for the vacuum parameters, our analyses show that those in matter, to a good approximation, can already be determined. We hope that these results will be helpful in the exploration of the physics of neutrino propagation in matter. S.H.C. is supported by the National Science Council of Taiwan, grant No. NSC 98-2112-M-182-001-MY2. Neutrino transition probabilities in $(x,y)$ parameters ======================================================= As the neutrinos travel through a baseline $L$ in matter of constant density, the flavor transition probability is given by $$\begin{aligned} P(\nu_{\alpha} \rightarrow \nu_{\beta})=\delta_{\alpha \beta}&-& 4\sum_{j>i}Re(V_{\alpha i}V^{*}_{\beta i}V^{*}_{\alpha j}V_{\beta j}) \sin^{2}(D_{ij}) \nonumber \\ &+& 2\sum_{j>i}Im(V_{\alpha i}V^{*}_{\beta i}V^{*}_{\alpha j}V_{\beta j}) \sin(2D_{ij}),\end{aligned}$$ where $D_{ij} \equiv (D_{i}-D_{j})L/4E$.
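A direct numerical sketch of this formula confirms that unitarity of $V$ enforces probability conservation, $\sum_{\beta}P(\nu_{\alpha}\rightarrow\nu_{\beta})=1$. The Python code below is ours (the random unitary matrix and test phases are arbitrary illustrations, not physical inputs):

```python
import numpy as np

def osc_prob(V, D, alpha, beta):
    """Transition probability from the quartet formula above; D = (D_1, D_2, D_3)
    with the factor L/4E absorbed, so the oscillation phases are D_i - D_j."""
    P = 1.0 if alpha == beta else 0.0
    for j in range(3):
        for i in range(j):                         # pairs with j > i
            Q = V[alpha, i] * np.conj(V[beta, i]) * np.conj(V[alpha, j]) * V[beta, j]
            Dij = D[i] - D[j]
            P += -4 * Q.real * np.sin(Dij)**2 + 2 * Q.imag * np.sin(2 * Dij)
    return P

# any unitary V conserves probability: sum_beta P(alpha -> beta) = 1
rng = np.random.default_rng(0)
V, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
D = np.array([0.0, 0.4, 1.7])                      # arbitrary test phases
for alpha in range(3):
    total = sum(osc_prob(V, D, alpha, beta) for beta in range(3))
    assert abs(total - 1) < 1e-12
```

Conservation follows because $\sum_{\beta}\mbox{(quartet)}=V_{\alpha i}V^{*}_{\alpha j}\,\delta_{ij}=0$ for $i\neq j$, so every oscillating term drops out of the sum.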
For $\alpha \neq \beta$, we obtain the explicit expression, $$\begin{aligned} \label{eq:im} P(\nu_{\alpha} \rightarrow \nu_{\beta})=&-&4[Re(V_{\alpha 1}V^{*}_{\beta 1} V^{*}_{\alpha 2}V_{\beta 2}) \sin^{2}(D_{12})+Re(V_{\alpha 1}V^{*}_{\beta 1} V^{*}_{\alpha 3}V_{\beta 3}) \sin^{2}(D_{13}) \nonumber \\ &+&Re(V_{\alpha 2}V^{*}_{\beta 2} V^{*}_{\alpha 3}V_{\beta 3}) \sin^{2}(D_{23})] \nonumber \\ & + & 2[Im(V_{\alpha 1}V^{*}_{\beta 1} V^{*}_{\alpha 2}V_{\beta 2}) \sin(2D_{12})+Im(V_{\alpha 1}V^{*}_{\beta 1} V^{*}_{\alpha 3}V_{\beta 3}) \sin(2D_{13}) \nonumber \\ &+&Im(V_{\alpha 2}V^{*}_{\beta 2} V^{*}_{\alpha 3}V_{\beta 3}) \sin(2D_{23})].\end{aligned}$$ For a specific process, $e.g.$, $P(\nu_{\mu} \rightarrow \nu_{e})$, we have $$\begin{aligned} Re(V_{21}V_{12}V^{*}_{22}V^{*}_{11})&=&x_{2}x_{3}+x_{1}y_{2}-y_{1}y_{2}-y_{2}y_{3} \equiv F^{\mu e}_{21}, \nonumber \\ Re(V_{21}V_{13}V^{*}_{23}V^{*}_{11})&=&-x_{1}x_{3}-x_{2}x_{3}+x_{3}y_{1}+y_{2}y_{3} \equiv F^{\mu e}_{31}, \nonumber \\ Re(V_{22}V_{13}V^{*}_{23}V^{*}_{12})&=&x_{1}x_{3}+x_{2}y_{3}-y_{1}y_{3}-y_{2}y_{3} \equiv F^{\mu e}_{32},\end{aligned}$$ and the probability, $$\begin{aligned} P(\nu_{\mu} \rightarrow \nu_{e})=&-&4[F^{\mu e}_{21} \sin^{2}(D_{21})+ F^{\mu e}_{31} \sin^{2}(D_{31})+F^{\mu e}_{32} \sin^{2}(D_{32})] \nonumber \\ & + & 8J\sin(D_{21})\sin(D_{31})\sin(D_{32}), \end{aligned}$$ where $Im[V_{\alpha i}V_{\beta j}V^{*}_{\alpha j}V^{*}_{\beta i}] =J\sum_{\gamma,k}\epsilon_{\alpha\beta\gamma}\epsilon_{ijk}$ has been used in reducing the sum of the imaginary parts in Eq. (\[eq:im\]). 
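The reduction used here, $Im[V_{\alpha i}V_{\beta j}V^{*}_{\alpha j}V^{*}_{\beta i}]=J\sum_{\gamma,k}\epsilon_{\alpha\beta\gamma}\epsilon_{ijk}$, holds for any unitary $V$ and is easy to check numerically; all nine imaginary parts collapse to a single Jarlskog invariant $J$ up to the $\epsilon$ signs. A self-contained Python sketch (our sampling):

```python
import numpy as np

rng = np.random.default_rng(1)
# a generic 3x3 unitary matrix
V, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

def im_quartet(a, b, i, j):
    return (V[a, i] * V[b, j] * np.conj(V[a, j]) * np.conj(V[b, i])).imag

# sum over gamma (resp. k) of epsilon_{a b gamma}, for ordered index pairs
eps = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): -1.0}
vals = [eps[(a, b)] * eps[(i, j)] * im_quartet(a, b, i, j)
        for (a, b) in eps for (i, j) in eps]
# all nine imaginary parts reduce to the same Jarlskog invariant J
assert np.allclose(vals, vals[0])
J = vals[0]
```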
In addition, the probability for the T-conjugate process takes the form $$\begin{aligned} P(\nu_{e} \rightarrow \nu_{\mu})=&-&4[F^{e \mu}_{21} \sin^{2}(D_{21})+ F^{e \mu}_{31} \sin^{2}(D_{31})+F^{e \mu}_{32} \sin^{2}(D_{32})] \nonumber \\ & - & 8J\sin(D_{21})\sin(D_{31})\sin(D_{32}), \end{aligned}$$ where $$\begin{aligned} F^{e \mu}_{21} & = &-x_{1}x_{2}-x_{1}x_{3}+x_{1}y_{2}+y_{1}y_{3}, \nonumber \\ F^{e \mu}_{31} & = &x_{1}x_{2}+x_{3}y_{1}-y_{1}y_{2}-y_{1}y_{3}, \nonumber \\ F^{e \mu}_{32} & = &-x_{1}x_{2}-x_{2}x_{3}+x_{2}y_{3}+y_{1}y_{2}.\end{aligned}$$ We may verify the relation $F^{\mu e}_{ij}=F^{e \mu}_{ij}$ using Eq.(4). The explicit probabilities for other processes can be derived following the same procedure: $$\begin{aligned} P(\nu_{e} \rightarrow \nu_{\tau})=&-&4[(x_{1}x_{3}+x_{2}y_{1}-y_{1}y_{2}-y_{1}y_{3})\sin^{2}(D_{21}) \nonumber \\ & + & (-x_{1}x_{2}-x_{1}x_{3}+x_{1}y_{3}+y_{1}y_{2})\sin^{2}(D_{31}) \nonumber \\ & + & (x_{1}x_{2}+x_{3}y_{2}-y_{1}y_{2}-y_{2}y_{3})\sin^{2}(D_{32})] \nonumber \\ & + & 8J\sin(D_{21})\sin(D_{31})\sin(D_{32}) \end{aligned}$$ $$\begin{aligned} P(\nu_{\tau} \rightarrow \nu_{e})=&-&4[(-x_{1}x_{2}-x_{2}x_{3}+x_{2}y_{1}+y_{2}y_{3})\sin^{2}(D_{21}) \nonumber \\ & + & (x_{2}x_{3}+x_{1}y_{3}-y_{1}y_{3}-y_{2}y_{3})\sin^{2}(D_{31}) \nonumber \\ & + & (-x_{1}x_{3}-x_{2}x_{3}+x_{3}y_{2}+y_{1}y_{3})\sin^{2}(D_{32})] \nonumber \\ & - & 8J\sin(D_{21})\sin(D_{31})\sin(D_{32}) \end{aligned}$$ $$\begin{aligned} P(\nu_{\mu} \rightarrow \nu_{\tau})=&-&4[(-x_{1}x_{3}-x_{2}x_{3}+x_{3}y_{3}+y_{1}y_{2})\sin^{2}(D_{21}) \nonumber \\ & + & (x_{1}x_{3}+x_{2}y_{2}-y_{1}y_{2}-y_{2}y_{3})\sin^{2}(D_{31}) \nonumber \\ & + & (-x_{1}x_{2}-x_{1}x_{3}+x_{1}y_{1}+y_{2}y_{3})\sin^{2}(D_{32})] \nonumber \\ & - & 8J\sin(D_{21})\sin(D_{31})\sin(D_{32}) \end{aligned}$$ $$\begin{aligned} P(\nu_{\tau} \rightarrow \nu_{\mu})=&-&4[(x_{1}x_{2}+x_{3}y_{3}-y_{1}y_{3}-y_{2}y_{3})\sin^{2}(D_{21}) \nonumber \\ & + & 
(-x_{1}x_{2}-x_{2}x_{3}+x_{2}y_{2}+y_{1}y_{3})\sin^{2}(D_{31}) \nonumber \\ & + & (x_{2}x_{3}+x_{1}y_{1}-y_{1}y_{2}-y_{1}y_{3})\sin^{2}(D_{32})] \nonumber \\ & + & 8J\sin(D_{21})\sin(D_{31})\sin(D_{32}) \end{aligned}$$ We may also write down the expressions for the $\bar{\nu}$ sector, $P(\bar{\nu}_{\alpha} \rightarrow \bar{\nu}_{\beta})$, by replacing the parameters for the $\nu$ sector with those for the $\bar{\nu}$ sector: $x \rightarrow \bar{x}$, $y \rightarrow \bar{y}$, $D_{ij} \rightarrow \bar{D}_{ij}$, and thus $F^{\alpha \beta}_{ij} \rightarrow \bar{F}^{\alpha \beta}_{ij}$, $J \rightarrow \bar{J}$. As an example, the probability $P(\bar{\nu}_{\mu} \rightarrow \bar{\nu}_{e})$ is given by $$\begin{aligned} P(\bar{\nu}_{\mu} \rightarrow \bar{\nu}_{e})=&-&4[\bar{F}^{\mu e}_{21} \sin^{2}(\bar{D}_{21})+ \bar{F}^{\mu e}_{31} \sin^{2}(\bar{D}_{31})+\bar{F}^{\mu e}_{32} \sin^{2}(\bar{D}_{32})] \nonumber \\ & - & 8\bar{J}\sin(\bar{D}_{21})\sin(\bar{D}_{31})\sin(\bar{D}_{32}), \end{aligned}$$ where $$\begin{aligned} \bar{F}^{\mu e}_{21} & = &\bar{x}_{2}\bar{x}_{3}+\bar{x}_{1}\bar{y}_{2}-\bar{y}_{1}\bar{y}_{2}-\bar{y}_{2}\bar{y}_{3}, \nonumber \\ \bar{F}^{\mu e}_{31} & = &-\bar{x}_{1}\bar{x}_{3}-\bar{x}_{2}\bar{x}_{3}+\bar{x}_{3}\bar{y}_{1}+\bar{y}_{2}\bar{y}_{3}, \nonumber \\ \bar{F}^{\mu e}_{32} & = &\bar{x}_{1}\bar{x}_{3}+\bar{x}_{2}\bar{y}_{3}-\bar{y}_{1}\bar{y}_{3}-\bar{y}_{2}\bar{y}_{3}.\end{aligned}$$ The evolution equations and the analytic, approximate, solutions for $(\bar{x}_{i},\bar{y}_{i})$ can be obtained following the same method outlined in this work. The numerical solutions for $(\bar{x}_{i},\bar{y}_{i})$ are shown in Fig. 3.
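Subtracting the T-conjugate expressions above (and using $F^{\mu e}_{ij}=F^{e\mu}_{ij}$) gives the T-violating difference $P(\nu_{\mu}\rightarrow\nu_{e})-P(\nu_{e}\rightarrow\nu_{\mu})=16J\sin(D_{21})\sin(D_{31})\sin(D_{32})$. This can be verified directly from the quartet formula; a self-contained Python check (the random unitary and the phases are our arbitrary test inputs):

```python
import numpy as np

rng = np.random.default_rng(2)
V, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
D = np.array([0.0, 0.6, 1.9])          # arbitrary phases D_i (L/4E absorbed)

def osc_prob(alpha, beta):
    """Transition probability from the quartet formula of the appendix."""
    P = 1.0 if alpha == beta else 0.0
    for j in range(3):
        for i in range(j):
            Q = V[alpha, i] * np.conj(V[beta, i]) * np.conj(V[alpha, j]) * V[beta, j]
            Dij = D[i] - D[j]
            P += -4 * Q.real * np.sin(Dij)**2 + 2 * Q.imag * np.sin(2 * Dij)
    return P

# Jarlskog invariant from one quartet (epsilon signs are +1 for this choice)
J = (V[0, 0] * V[1, 1] * np.conj(V[0, 1]) * np.conj(V[1, 0])).imag
D21, D31, D32 = D[1] - D[0], D[2] - D[0], D[2] - D[1]
lhs = osc_prob(1, 0) - osc_prob(0, 1)  # P(nu_mu -> nu_e) - P(nu_e -> nu_mu)
rhs = 16 * J * np.sin(D21) * np.sin(D31) * np.sin(D32)
assert np.isclose(lhs, rhs)
```

The identity follows from $\sin 2u+\sin 2v-\sin 2(u+v)=4\sin u\sin v\sin(u+v)$ applied to $u=D_{21}$, $v=D_{32}$.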
Note that 1) $F^{\alpha \beta}_{ij}=\bar{F}^{\alpha \beta}_{ij}= F^{\beta \alpha}_{ij}=\bar{F}^{\beta \alpha}_{ij}$ in vacuum; and 2) the functions $F^{\alpha \beta}_{ij}$, $F^{\beta \alpha}_{ij}$, $\bar{F}^{\alpha \beta}_{ij}$, and $\bar{F}^{\beta \alpha}_{ij}$ can take varied forms in terms of $(x,y)$ and $(\bar{x},\bar{y})$ since there are different ways [@Kuo:05] of reducing $Re(V_{\alpha i}V^{*}_{\beta i}V^{*}_{\alpha j}V_{\beta j})$. See, $e.g.$, M. Doi, T. Kotani, H. Nishiura, K. Okuda, and E. Takasugi, Phys. Lett. B  [**103**]{}, 219 (1981), Erratum-$ibid.$  [**113**]{}, 513 (1982). T. K. Kuo and T.-H. Lee, Phys. Rev. D  [**71**]{}, 093001 (2005). C. Jarlskog, Phys. Rev. Lett.  [**55**]{}, 1039 (1985). P. F. Harrison, D. H. Perkins, and W. G. Scott, Phys. Lett. B  [**530**]{}, 167 (2002). Particle Data Group, Phys. Lett. B [**592**]{}, 130 (2004). Note that the phase angle “$\delta$" is changed to “$\phi$" in this paper. Clearly, if $W_{2i}=W_{3i}$, $w_{1i}=0$, and $x_{i}+y_{i}=0$. Conversely, given, $e.g.$, $W_{22}W_{33}-W_{23}W_{32}=0$ and $W_{21}W_{33}-W_{23}W_{31}=0$, using unitarity, one finds $W_{2i}=W_{3i}$. S. P. Mikheyev and A. Yu. Smirnov, Yad. Fiz.  [**42**]{}, 1441 (1985); Sov. J. Nucl. Phys.  [**42**]{}, 913 (1985); S. P. Mikheyev and A. Yu. Smirnov, Nuovo Cimento C  [**9**]{}, 17 (1986); L. Wolfenstein, Phys. Rev. D  [**17**]{}, 2369 (1978). As an incomplete list, see, $e.g.$, V. Barger, K. Whisnant, S. Pakvasa, and R. J. N. Phillips, Phys. Rev. D  [**22**]{}, 2718 (1980); H. W. Zaglauer and K. H. Schwarzer, Z. Phys. C  [**40**]{}, 273 (1988); T. Ohlsson and H. Snellman, J. Math. Phys.  [**41**]{}, 2768 (2000), Erratum-$ibid.$  [**42**]{}, 2345 (2001); M. Freund, Phys. Rev. D  [**64**]{}, 053003 (2001); Zhi-zhong Xing, Int. J. Mod. Phys. A  [**19**]{}, 1 (2004); M. Honda, Y. Kao, N. Okamura, and T. Takeuchi, arXiv:hep-ph/0602115 (2006). S. H. Chiu, T. K. Kuo, and Lu-Xin Liu, Phys. Lett. B [**687**]{}, 184 (2010); arXiv:hep-ph/1001.1469. S. H.
Chiu, T. K. Kuo, T.-H. Lee, and C. Xiong, Phys. Rev. D  [**79**]{}, 013012 (2009). P. F. Harrison and W. G. Scott, Phys. Lett. B [**476**]{}, 349 (2000); V. A. Naumov, Phys. Lett. B [**323**]{}, 351 (1994); K. Kimura, A. Takamura, and H. Yokomakura, Phys. Lett. B [**537**]{}, 86 (2002). S. Toshev, Mod. Phys. Lett. A  [**6**]{}, 455 (1991); P. I. Krastev and S. T. Petcov, Phys. Lett. B  [**205**]{}, 84 (1988). C. S. Lam, Phys. Lett. B  [**507**]{}, 214 (2001); P. F. Harrison and W. G. Scott, Phys. Lett. B  [**547**]{}, 219 (2002). T. K. Kuo and J. Pantaleone, Rev. Mod. Phys.  [**61**]{}, 937 (1989). [^1]: schiu@mail.cgu.edu.tw [^2]: tkkuo@purdue.edu
--- abstract: 'Let $G$ be a compact, simply connected Lie group. If ${\mathcal{C}}_1,{\mathcal{C}}_2$ are two $G$-conjugacy classes, then the set of elements in $G$ that can be written as products $g=g_1g_2$ of elements $g_i\in {\mathcal{C}}_i$ is invariant under conjugation, and its image under the quotient map $G\to G/{ {\operatorname}{Ad} }(G)={\mathfrak{A}}$ is a convex polytope. In this note, we will prove an analogous statement for *twisted conjugations* relative to group automorphisms. The result will be obtained as a special case of a convexity theorem for group-valued moment maps which are equivariant with respect to the twisted conjugation action.' address: 'University of Toronto, Department of Mathematics, 40 St George Street, Toronto, Ontario M4S2E4, Canada ' author: - 'E. Meinrenken' title: Convexity for twisted conjugation --- .2in Introduction ============ Let $G$ be a compact connected Lie group, with maximal torus $T$, and let ${{\mathfrak{g}}},{{\mathfrak{t}}}$ be their Lie algebras. Fix a positive Weyl chamber ${{\mathfrak{t}}}_+{\subseteq}{{\mathfrak{t}}}$, and denote by $p\colon {{\mathfrak{g}}}\to {{\mathfrak{t}}}_+$ the quotient map, with fiber $p^{-1}(\xi)={\mathcal{O}}_\xi$ the adjoint orbit of $\xi$. For any $r>1$, the set $$\label{eq:sum} \{(\xi_1,\ldots,\xi_r)\in {{\mathfrak{t}}}_+\times \cdots \times {{\mathfrak{t}}}_+\big|\ \exists \zeta_i\in {\mathcal{O}}_{\xi_i}\colon \ \zeta_1+\ldots +\zeta_r=0\}$$ is a convex polyhedral cone known as the *Horn cone*. Fixing $\xi_1,\ldots,\xi_{r-1}$, the Horn cone describes the set of adjoint orbits contained in the sum of adjoint orbits ${\mathcal{O}}_{\xi_1}+\ldots+{\mathcal{O}}_{\xi_{r-1}}$. For the case of $G={\operatorname}{U}(N)$, the projection $p(\zeta)$ signifies the set of eigenvalues of a Hermitian matrix $\zeta$, and the Horn cone thus describes the possible eigenvalues of sums of Hermitian matrices with prescribed eigenvalues.
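For $2\times 2$ Hermitian matrices the constraints are especially simple: the trace condition together with Weyl's eigenvalue inequalities already cuts out the Horn cone. A quick numerical sanity check in Python (the random sampling is ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)

def rand_hermitian(n=2):
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (X + X.conj().T) / 2

# Weyl's inequalities for eigenvalues of A + B, plus the trace constraint
for _ in range(200):
    A, B = rand_hermitian(), rand_hermitian()
    a = np.sort(np.linalg.eigvalsh(A))[::-1]   # eigenvalues, decreasing
    b = np.sort(np.linalg.eigvalsh(B))[::-1]
    c = np.sort(np.linalg.eigvalsh(A + B))[::-1]
    assert c[0] <= a[0] + b[0] + 1e-9
    assert c[0] >= max(a[0] + b[1], a[1] + b[0]) - 1e-9
    assert np.isclose(c.sum(), a.sum() + b.sum())
```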
The defining inequalities for the ${\mathfrak}{u}(n)$-Horn cone were obtained by Klyachko [@kl:st], who gave a description in terms of the Schubert calculus of the Grassmannian. This was extended to arbitrary compact groups by Berenstein-Sjamaar [@be:coa]. See Ressayre [@res:ge] and Vergne-Walter [@ver:in] for recent developments. Suppose in addition that $G$ is simply connected. Let ${\mathfrak{A}}{\subseteq}{{\mathfrak{t}}}_+$ be the Weyl alcove. Then ${\mathfrak{A}}$ labels the set of conjugacy classes in $G$, in the sense that there is a quotient map $q\colon G\to {\mathfrak{A}}$, with fiber $q^{-1}(\xi)={\mathcal{C}}_\xi$ the conjugacy class of $\exp(\xi)$. As observed in Meinrenken-Woodward [@me:lo Corollary 4.13], the set $$\label{eq:product1} \{(\xi_1,\ldots,\xi_r)\in {\mathfrak{A}}\times \cdots \times {\mathfrak{A}}\big|\ \exists g_i\in {\mathcal{C}}_{\xi_i}\colon\ g_1\ldots g_r=e\}$$ is a convex polytope. Put differently, this polytope describes the conjugacy classes arising in products of a collection of prescribed conjugacy classes. In the case of $G={ {\operatorname}{SU}}(n)$, it describes the possible eigenvalues of *products* of special unitary matrices with prescribed eigenvalues; these eigenvalue inequalities were determined, in terms of quantum Schubert calculus on flag manifolds, by Agnihotri-Woodward [@ag:ei] and Belkale [@bel:loc]. (See also Belkale-Kumar [@bel:mu].) This was extended to general $G$ by Teleman-Woodward [@te:pa]. In this note we will show that there are similar polytopes for *twisted* conjugations. Recall that the twisted conjugation action relative to a group automorphism $\kappa\in { {\operatorname}{Aut} }(G)$ is the action $$\label{eq:twistaction} { {\operatorname}{Ad} }_g^{(\kappa)}(a)=g\,a\,\kappa(g^{-1}).$$ As we will explain, it suffices to consider automorphisms $\kappa$ defined by Dynkin diagram automorphisms. 
These automorphisms preserve ${{\mathfrak{t}}}$, with fixed point set ${{\mathfrak{t}}}^\kappa$, and there is a convex polytope (*alcove*) ${\mathfrak{A}}^{(\kappa)}{\subseteq}{{\mathfrak{t}}}^\kappa$ with a quotient map $q^{(\kappa)}\colon G\to {\mathfrak{A}}^{(\kappa)}$ whose fiber $(q^{(\kappa)})^{-1}(\xi)={\mathcal{C}}_\xi^{(\kappa)}$ is the $\kappa$-twisted conjugacy class of $\exp(\xi)$. \[th:convexity\] Let $\kappa_1,\ldots, \kappa_r$ be diagram automorphisms with $\kappa_r\circ \cdots \circ \kappa_1=1$. Then the set $$\label{eq:product2} \{(\xi_1,\ldots,\xi_r)\in {\mathfrak{A}}^{(\kappa_1)}\times \cdots\times {\mathfrak{A}}^{(\kappa_r)}\big|\ \exists g_i\in {\mathcal{C}}_{\xi_i}^{(\kappa_i)}\colon \ \ g_1\cdot\ldots\cdot g_r=e\}$$ is a convex polytope. It would be interesting to obtain an explicit description of the defining inequalities of the polytopes . (In Section \[sec:example\], we will work out the case of $G={ {\operatorname}{SU}}(3)$ and $r=3$ by direct computation.) Note that these polytopes arise if one considers products of conjugacy classes of *disconnected* compact Lie groups $K$; indeed each conjugacy class of $K$ is a finite union of twisted conjugacy classes of the identity component $G=K_0$. We will obtain Theorem \[th:convexity\] as a special case of a convexity result for group-valued moment maps that are equivariant under *twisted conjugation*. Examples of such spaces are the twisted conjugacy classes, or components of moduli spaces of flat connections for disconnected groups on surfaces with boundary. We have (cf.  Theorem \[th:convex\]): Let $(M,{\omega},\Phi)$ be a compact, connected q-Hamiltonian $G$-space with a $\kappa$-twisted equivariant moment map $\Phi\colon M\to G$. Then the fibers of the moment map are connected, and the image $$\Delta(M):=q^{(\kappa)}(\Phi(M)){\subseteq}{\mathfrak{A}}^{(\kappa)}$$ is a convex polytope. 
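In the simplest untwisted case $G={ {\operatorname}{SU}}(2)$, conjugacy classes are labeled by an angle $\theta\in[0,\pi]$ (eigenvalues $e^{\pm i\theta}$), and for $r=3$ the polytope reduces to the well-known interval condition $|\theta_1-\theta_2|\le \theta_3\le \min(\theta_1+\theta_2,\,2\pi-\theta_1-\theta_2)$; this special case is not derived in the text, but it is easy to sample numerically. A Python sketch (sampling and names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)

def in_class(theta):
    """Random element of the SU(2) conjugacy class with eigenvalues e^{+-i theta}."""
    d = np.diag([np.exp(1j * theta), np.exp(-1j * theta)])
    g, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
    return g @ d @ g.conj().T          # conjugation preserves eigenvalues

def class_angle(u):
    """Class parameter theta in [0, pi]: tr(u) = 2 cos(theta)."""
    return np.arccos(np.clip(np.trace(u).real / 2, -1.0, 1.0))

t1, t2 = 1.0, 2.0
lo, hi = abs(t1 - t2), min(t1 + t2, 2 * np.pi - (t1 + t2))
samples = [class_angle(in_class(t1) @ in_class(t2)) for _ in range(500)]
# every product of the two classes lands in the predicted interval
assert all(lo - 1e-9 <= t <= hi + 1e-9 for t in samples)
```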
In a very recent paper, Boalch and Yamakawa [@boa:twi] independently considered twisted group-valued moment maps in the context of twisted wild character varieties, generalizing earlier results of Boalch [@boa:qu; @boa:geo]. In particular, their work has a discussion of twisted moduli spaces, similar to Section \[subsec:twistmod\]. I also learned about a forthcoming article by Alex Takeda, using twisted group-valued moment maps in the setting of shifted symplectic geometry. .3in [**Acknowledgments.**]{} I am grateful to Tom Baird for discussions on twisted conjugations and twisted moduli spaces, and to the referee for requesting an explicit example. Twisted conjugation =================== We begin by reviewing some background material on twisted conjugation actions. References include Baird [@bai:cl], Kac [@kac:inf], Mohrdieck [@moh:the], Mohrdieck-Wendt [@moh:int], and Springer [@spr:twi]. Twisted conjugation ------------------- Let ${\operatorname}{Aut}(G)$ be the group of automorphisms of a Lie group $G$, and ${\operatorname}{Inn}(G)\cong G/Z(G)$ the normal subgroup of inner automorphisms ${ {\operatorname}{Ad} }_a,\ a\in G$. The quotient group is denoted ${\operatorname}{Out}(G)={\operatorname}{Aut}(G)/{\operatorname}{Inn}(G)$. For $\kappa\in {\operatorname}{Aut}(G)$, define the $\kappa$-twisted conjugation action as $${ {\operatorname}{Ad} }_g^{(\kappa)}(h)=gh\kappa(g^{-1}).$$ Its orbits ${\mathcal{C}}{\subseteq}G$ are called the $\kappa$-*twisted conjugacy classes*. In terms of the semi-direct product $G\rtimes {\operatorname}{Aut}(G)$, the twisted conjugation action can be regarded as an ordinary conjugation, $$(g,1)(h,\kappa)(g^{-1},1)=(gh\kappa(g^{-1}),\kappa).$$ For this reason, we will sometimes use the notation $G\kappa$ for the space $G$, regarded as a $G$-space under $\kappa$-twisted conjugation. 
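Both the defining formula for twisted conjugation and its semidirect-product interpretation are easy to verify numerically. A minimal Python sketch with $G={ {\operatorname}{SU}}(2)$ and $\kappa$ the entrywise complex conjugation (for ${ {\operatorname}{SU}}(2)$ this automorphism happens to be inner, but the identities checked hold for any automorphism; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(5)
ident = lambda m: m
kappa = np.conj            # entrywise complex conjugation, an automorphism of SU(n)

def rand_su2():
    q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
    return q / np.sqrt(np.linalg.det(q))        # normalize to det = 1

def tw_ad(g, h):
    """kappa-twisted conjugation Ad_g^{(kappa)}(h) = g h kappa(g^{-1})."""
    return g @ h @ kappa(np.linalg.inv(g))

def mul(p1, p2):
    """Multiplication (g1,k1)(g2,k2) = (g1 k1(g2), k1 o k2) in G x| Aut(G)."""
    (g1, k1), (g2, k2) = p1, p2
    return (g1 @ k1(g2), lambda m: k1(k2(m)))

g, h = rand_su2(), rand_su2()
# (g,1)(h,kappa)(g^{-1},1) = (Ad_g^{(kappa)}(h), kappa)
pair = mul(mul((g, ident), (h, kappa)), (np.linalg.inv(g), ident))
assert np.allclose(pair[0], tw_ad(g, h))
# twisted conjugation is a left action: Ad_{g1} o Ad_{g2} = Ad_{g1 g2}
g1, g2 = rand_su2(), rand_su2()
assert np.allclose(tw_ad(g1, tw_ad(g2, h)), tw_ad(g1 @ g2, h))
```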
For later reference, we note that if $\kappa_1,\kappa_2$ are two automorphisms, then $$\label{eq:later} { {\operatorname}{Ad} }_g^{(\kappa_2\kappa_1)}(h_1h_2)={ {\operatorname}{Ad} }_g^{(\kappa_1)}(h_1) { {\operatorname}{Ad} }_{\kappa_1(g)}^{(\kappa_2)}(h_2)$$ for all $g,h_1,h_2\in G$. The differential of $\kappa\in {\operatorname}{Aut}(G)$ at the group unit is an automorphism of the Lie algebra ${{\mathfrak{g}}}$, still denoted by $\kappa$. The generating vector fields for the $\kappa$-twisted conjugation action are $\xi_G=\kappa(\xi)^L-\xi^R$ for $\xi\in{{\mathfrak{g}}}$. In terms of *right* trivialization of the tangent bundle, we have $\xi_G(h)=({ {\operatorname}{Ad} }_h\circ\,\kappa-I)\xi$. Hence, the Lie algebra of the stabilizer of $h\in G$ is $$\label{eq:stab} {{\mathfrak{g}}}_h={ {\operatorname}{ker}}({ {\operatorname}{Ad} }_h\circ\,\kappa-I),$$ while the tangent space to the twisted conjugacy class ${\mathcal{C}}={ {\operatorname}{Ad} }_G^{(\kappa)}(h)$ is $$\label{eq:tangent}T_h{\mathcal{C}}={\operatorname}{ran}({ {\operatorname}{Ad} }_h\circ\,\kappa-I),$$ in *right* trivialization $T_hG={{\mathfrak{g}}}$. Suppose $\kappa'={ {\operatorname}{Ad} }_a\circ\, \kappa$ for some $a\in G$. Then the corresponding twisted conjugations are related by right multiplication $r_a\colon G\to G$: $$r_a\circ { {\operatorname}{Ad} }_g^{(\kappa')}={ {\operatorname}{Ad} }_g^{(\kappa)}\circ\, r_a.$$ That is, $g\mapsto ga^{-1}$ defines a $G$-map $G\kappa\to G\kappa'$. In particular, if ${\mathcal{C}}$ is a $\kappa$-twisted conjugacy class then ${\mathcal{C}}'=r_{a^{-1}}({\mathcal{C}})$ is a $\kappa'$-twisted conjugacy class. \[ex:conjugacy\] Suppose $\kappa_1,\ldots,\kappa_r\in { {\operatorname}{Aut} }(G)$, and let ${\mathcal{C}}_i$ be $\kappa_i$-twisted conjugacy classes. Then the subset $${\mathcal{C}}_1\cdots {\mathcal{C}}_r:=\{h_1\cdots h_r|\ h_i\in {\mathcal{C}}_i\}{\subseteq}G$$ is invariant under $\kappa:=\kappa_r\cdots \kappa_1$-twisted conjugation. 
This follows by induction from (\[eq:later\]). Let $\kappa_i'={ {\operatorname}{Ad} }_{a_i}\circ\,\kappa_i$ for some $a_i\in G$, and put ${\mathcal{C}}'_i=r_{a_i^{-1}}({\mathcal{C}}_i)$ and $\kappa'=\kappa_r'\cdots\, \kappa_1'$. Then the problem of finding $h_i\in {\mathcal{C}}_i$ with product $h_1\cdots h_r$ in a prescribed $\kappa$-twisted conjugacy class ${\mathcal{C}}$ is equivalent to a similar problem for the ${\mathcal{C}}_i'$. To see this, let $u_1,\ldots,u_{r+1}$ be inductively defined as $ u_{i+1}=a_i \kappa_i(u_i) $ with $u_1=e$, and put $a=u_{r+1}$. Then $\kappa'={ {\operatorname}{Ad} }_a \circ\,\kappa$, hence ${\mathcal{C}}'=r_{a^{-1}}({\mathcal{C}})$ is a $\kappa'$-twisted conjugacy class. A straightforward calculation shows that if $h_i\in {\mathcal{C}}_i$ satisfy $h:=h_1\cdots h_r\in {\mathcal{C}}$, then the elements $$h_i'={ {\operatorname}{Ad} }_{u_i}^{(\kappa_i)}(h_i) \ a_i^{-1}\in {\mathcal{C}}_i'$$ have product $h'=h a^{-1}\in {\mathcal{C}}'$. Diagram automorphisms --------------------- Let $G$ be a compact and simply connected Lie group, with maximal torus $T$ and Weyl group $W=N_G(T)/T$. Fix a positive Weyl chamber ${{\mathfrak{t}}}_+{\subseteq}{{\mathfrak{t}}}$, with corresponding alcove ${\mathfrak{A}}{\subseteq}{{\mathfrak{t}}}_+$. The walls of the Weyl chamber are defined by the simple roots $\alpha_1,\ldots,\alpha_l\in {{\mathfrak{t}}}^*$. Let $\alpha_i^\vee\in{{\mathfrak{t}}}$ be the simple coroots, and let $e_i,\, f_i\in{{\mathfrak{g}}}^{\mathbb{C}}$ be the Chevalley generators, for $i=1,\ldots,\,l$. Consider an automorphism of the Dynkin diagram, given by a bijection $i\mapsto i'$ of its set of vertices preserving all Cartan integers: ${\langle}\alpha_i,\alpha_j^\vee{\rangle}={\langle}\alpha_{i'},\alpha_{j'}^\vee{\rangle}$. Any diagram automorphism defines a unique Lie algebra automorphism $\kappa\in{\operatorname}{Aut}({{\mathfrak{g}}}^{\mathbb{C}})$ such that $\kappa(e_i)= e_{i'},\ \kappa(f_i)=f_{i'}$.
This automorphism preserves the real Lie algebra ${{\mathfrak{g}}}{\subseteq}{{\mathfrak{g}}}^{\mathbb{C}}$, and exponentiates to the Lie group $G$. We will refer to the resulting $\kappa\in { {\operatorname}{Aut} }(G)$ itself as a diagram automorphism. Every element of ${\operatorname}{Out}(G)={\operatorname}{Aut}(G)/{\operatorname}{Inn}(G)$ is represented by a unique diagram automorphism, and the resulting splitting ${\operatorname}{Out}(G){\hookrightarrow}{\operatorname}{Aut}(G)$ identifies $${\operatorname}{Aut}(G)={\operatorname}{Inn}(G)\rtimes {\operatorname}{Out}(G).$$ That is, any automorphism of $G$ can be written as $\kappa'={ {\operatorname}{Ad} }_a\circ\,\kappa$ with $a\in G$ and $\kappa\in{\operatorname}{Out}(G)$. To understand the orbit structure of $\kappa$-twisted conjugation actions, it therefore suffices to consider the case that $\kappa\in {\operatorname}{Aut}(G)$ is a diagram automorphism. In particular, $\kappa$ preserves $T$, with fixed point set $T^\kappa{\subseteq}G^\kappa$. Let ${{\mathfrak{t}}}^\kappa,\ {{\mathfrak{t}}}_\kappa$ be the kernel and range of $\kappa|_{{\mathfrak{t}}}-I\colon {{\mathfrak{t}}}\to {{\mathfrak{t}}}$. Then ${{\mathfrak{t}}}^\kappa$ is the Lie algebra of $T^\kappa$, and ${{\mathfrak{t}}}_\kappa=({{\mathfrak{t}}}^\kappa)^\perp$ is the orthogonal space in ${{\mathfrak{t}}}$ (relative to a $W$-invariant metric). Put $T_\kappa=\exp({{\mathfrak{t}}}_\kappa)$. Then $T=T^\kappa\ T_\kappa$, with finite intersection $$T^\kappa\cap T_\kappa.$$ Let $W^\kappa{\subseteq}W$ be the subgroup of elements $w$ whose action on ${{\mathfrak{t}}}$ commutes with $\kappa$. For $a\in G$, denote by $G_a$ the stabilizer under the $\kappa$-twisted adjoint action. For Propositions \[prop:p1\] and \[prop:p2\] below, see [@moh:int], [@spr:twi], and references therein. \[prop:p1\] Let $\kappa\in { {\operatorname}{Aut} }(G)$ be a diagram automorphism. Then: 1. The group $G^\kappa$ contains $T^\kappa$ as a maximal torus, with Weyl group $W^\kappa$.
The intersection ${{\mathfrak{t}}}^\kappa_+={{\mathfrak{t}}}^\kappa\cap {{\mathfrak{t}}}_+$ is a positive Weyl chamber for $G^\kappa$. 2. Every $\kappa$-twisted conjugacy class ${\mathcal{C}}{\subseteq}G$ intersects the torus $T^\kappa$ in an orbit of the finite group $(T^\kappa\cap T_\kappa)\rtimes W^\kappa$. Here $T^\kappa\cap T_\kappa$ acts by multiplication on $T^\kappa$. 3. For all $a\in T^\kappa$, the stabilizer group $G_a$ under the twisted conjugation action contains $T^\kappa$ as a maximal torus. Let $\Lambda=\exp_T^{-1}(e){\subseteq}{{\mathfrak{t}}}$ be the integral lattice of $T$. Since $G$ is simply connected, it coincides with the coroot lattice of $(G,T)$. The fixed point set $\Lambda^\kappa{\subseteq}{{\mathfrak{t}}}^\kappa$ is the integral lattice of $T^\kappa$. It is contained in the lattice $$\Lambda^{(\kappa)}=\exp_{T^\kappa}^{-1}(T^\kappa\cap T_\kappa).$$ \[prop:p2\] There is a unique closed convex polytope ${\mathfrak{A}}^{(\kappa)}\subseteq {{\mathfrak{t}}}^\kappa_+$, containing the origin, such that $G_{\exp\xi}=T^\kappa$ for elements $\xi\in {\operatorname}{int}({\mathfrak{A}}^{(\kappa)})$, and such that the map $${\mathfrak{A}}^{(\kappa)}\xrightarrow{\exp} G{\longrightarrow}G/{ {\operatorname}{Ad} }_G^{(\kappa)}$$ is a bijection. Furthermore, 1. The cone over ${\mathfrak{A}}^{(\kappa)}$ is ${{\mathfrak{t}}}_+^\kappa$. 2. For each open face $\sigma\subseteq {\mathfrak{A}}^{(\kappa)}$, the stabilizer group $G_\sigma:=G_{\exp\xi}$ of elements $\xi\in \sigma$ does not depend on $\xi$. The stabilizer groups satisfy $G_\sigma\supseteq G_\tau$ for $\sigma\subseteq{\overline}{\tau}$. 3. The group ${W_{{\operatorname}{aff}}}^{(\kappa)}=\Lambda^{(\kappa)}\rtimes W^\kappa$ is an affine reflection group, generated by reflections across the facets of ${\mathfrak{A}}^{(\kappa)}$, and having ${\mathfrak{A}}^{(\kappa)}$ as a fundamental domain. 
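For $G={ {\operatorname}{SU}}(3)$ with its nontrivial diagram automorphism, worked out in detail in Section \[sec:example\], the objects above can be made concrete: in one standard realization, $\kappa$ acts on the diagonal Cartan subalgebra by $(t_1,t_2,t_3)\mapsto(-t_3,-t_2,-t_1)$. The following numerical sketch (an illustration with ad hoc names, not part of the text) checks that ${{\mathfrak{t}}}^\kappa$ is the line through $\gamma^\vee=\alpha^\vee+\beta^\vee$ and that ${{\mathfrak{t}}}_\kappa={\operatorname}{ran}(\kappa|_{{\mathfrak{t}}}-I)$ is its orthogonal complement.

```python
import numpy as np

# Cartan subalgebra of su(3): real traceless diagonals (t1, t2, t3),
# with the trace form as the invariant inner product.
# In one standard realization, the diagram automorphism acts here by
# (t1, t2, t3) -> (-t3, -t2, -t1); it swaps the two simple coroots.
kappa = lambda t: -t[::-1]

alpha_v = np.array([1.0, -1.0, 0.0])   # simple coroot alpha^vee
beta_v = np.array([0.0, 1.0, -1.0])    # simple coroot beta^vee
assert np.allclose(kappa(alpha_v), beta_v)

# t^kappa = ker(kappa|_t - I) is the line through gamma^vee = alpha^vee + beta^vee
gamma_v = alpha_v + beta_v
assert np.allclose(kappa(gamma_v), gamma_v)

# t_kappa = ran(kappa|_t - I) is spanned by the images of the coroots ...
r1 = kappa(alpha_v) - alpha_v
r2 = kappa(beta_v) - beta_v
assert np.linalg.matrix_rank(np.column_stack([r1, r2])) == 1   # one-dimensional here

# ... and is the orthogonal complement of t^kappa inside t
err = abs(np.dot(gamma_v, r1))
assert err < 1e-12
```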
Clearly, if $\kappa=1$ then ${\mathfrak{A}}^{(\kappa)}$ is just the usual Weyl alcove, parametrizing the set of (untwisted) conjugacy classes in $G$. Note that in general, $\Lambda^{(\kappa)},\ {\mathfrak{A}}^{(\kappa)}$ are different from the coroot lattice and alcove of $({{\mathfrak{g}}}^\kappa,{{\mathfrak{t}}}^\kappa)$. The group ${W_{{\operatorname}{aff}}}^{(\kappa)}$ is the Weyl group of the twisted affine Lie algebra defined by $\kappa$, see Kac [@kac:inf]. Slices {#subsec:slices} ------ The conjugation action of $G$ on itself has distinguished slices, labeled by the faces of the alcove. We will generalize this fact to twisted conjugation actions. \[lem:lemma\] Let $\kappa\in {\operatorname}{Aut}(G)$, where $G$ is compact. Let ${\mathcal{C}}{\subseteq}G$ be the $\kappa$-twisted conjugacy class of an element $a\in G^\kappa$. Then $$\label{eq:tanspace} T_aG=T_a(G_a)\oplus T_a{\mathcal{C}}.$$ Pick an ${\operatorname}{Aut}(G)$-invariant inner product on ${{\mathfrak{g}}}$, defining a bi-invariant Riemannian metric on the Lie group $G$ which is also invariant under $\kappa$. Since $\kappa(a)=a$, we have $a\in G_a$, and we obtain $T_aG_a={{\mathfrak{g}}}_a$ in right trivialization. On the other hand, by and we have $T_a{\mathcal{C}}={\operatorname}{ran}({ {\operatorname}{Ad} }_a\circ\, \kappa-I)={{\mathfrak{g}}}_a^\perp$ in right trivialization. Since the two spaces are orthogonal, the Lemma follows. Using again that $\kappa(a)=a$, the twisted conjugation action of $a$ on $G$ restricts to the usual conjugation action on $G_a$. In particular, $G_a$ is a ${ {\operatorname}{Ad} }^{(\kappa)}_a$-invariant submanifold of $G$. The Lemma shows that any sufficiently small invariant open neighborhood of $a$ in $G_a$ is a slice for the twisted conjugation action. If $G$ is also simply connected, and $\kappa$ is a diagram automorphism, there is a specific ‘largest’ slice, as follows. 
For any face $\sigma{\subseteq}{\mathfrak{A}}^{(\kappa)}$, let ${\mathfrak{A}}^{(\kappa)}_\sigma$ be the relatively open subset of ${\mathfrak{A}}^{(\kappa)}$ given as the union of faces $\tau{\subseteq}{\mathfrak{A}}^{(\kappa)}$ such that $\sigma\subseteq{\overline}{\tau}$. Put $$\label{eq:usigma} U_\sigma={ {\operatorname}{Ad} }_{G_\sigma}^{(\kappa)}\exp({\mathfrak{A}}_\sigma^{(\kappa)}),$$ a subset of $G_\sigma{\subseteq}G$. \[prop:slice\] The subset $U_\sigma{\subseteq}G_\sigma$ is open, and invariant under the twisted conjugation action of $G_\sigma$. The map $$\label{eq:actionmap} G\times_{G_\sigma} U_\sigma\to G,\ [(g,a)]\mapsto { {\operatorname}{Ad} }_g^{(\kappa)}a$$ is an embedding as an open subset of $G$. That is, $U_\sigma$ is a slice for the twisted conjugation action. Pick $\zeta\in\sigma$, and put $c=\exp\zeta$ so that $G_c=G_\sigma$. For all $\xi\in {{\mathfrak{t}}}^\kappa$ and $g\in G_\sigma$, $$\label{eq:c} { {\operatorname}{Ad} }_g^{(\kappa)}\exp(\xi)= { {\operatorname}{Ad} }_g^{(\kappa)}(\exp(\xi-\zeta)c)= { {\operatorname}{Ad} }_g(\exp(\xi-\zeta))\ c.$$ It follows that $U_\sigma=U_\sigma'\,c$ where $$U_\sigma'={ {\operatorname}{Ad} }_{G_\sigma}\exp\big( {\mathfrak{A}}_\sigma^{(\kappa)}-\zeta\big).$$ Equation also shows that for $\xi\in{\mathfrak{A}}_\sigma^{(\kappa)}{\subseteq}{{\mathfrak{t}}}^\kappa$, the stabilizer of $\exp(\xi)$ under the twisted conjugation action of $G$ (which lies in $G_\sigma$, by definition of ${\mathfrak{A}}_\sigma^{(\kappa)}$) equals the stabilizer of $\exp(\xi-\zeta)$ under the usual conjugation action of $G_\sigma$. Consequently, ${\mathfrak{A}}_\sigma^{(\kappa)}-\zeta$ is a relatively open subset of an alcove of $(G_\sigma,T^\kappa)$. It follows that $U_\sigma'$ is open in $G_\sigma$, and hence $U_\sigma$ is open in $G_\sigma$. We next show that the map is injective. Thus suppose ${ {\operatorname}{Ad} }_g^{(\kappa)}a={ {\operatorname}{Ad} }_{g'}^{(\kappa)}a'$, where $a,a'\in U_\sigma$ and $g,g'\in G$. 
Since $a,a'$ are in the same twisted conjugacy class, there is a unique $\xi\in {\mathfrak{A}}_\sigma^{(\kappa)}$ and elements $h,h'\in G_\sigma$ such that $$a={ {\operatorname}{Ad} }_h^{(\kappa)} \exp(\xi),\ \ \ \ a'={ {\operatorname}{Ad} }_{h'}^{(\kappa)} \exp(\xi).$$ We thus obtain $${ {\operatorname}{Ad} }^{(\kappa)}_{gh} \exp(\xi)={ {\operatorname}{Ad} }^{(\kappa)}_{g'h'} \exp(\xi),$$ which implies $ghk=g'h'$ for some $k\in G_{\exp\xi}{\subseteq}G_\sigma$. Setting $u=h' k^{-1} h^{-1}\in G_\sigma$, we obtain $g'=g u^{-1}$, while $a'={ {\operatorname}{Ad} }_u^{(\kappa)} a$. That is, $[(g,a)]=[(g',a')]$. To complete the proof, it suffices to show that has surjective differential. By equivariance, it is enough to verify this at elements $[(e,a)]$ with $a\in \exp({\mathfrak{A}}_{\sigma}^{(\kappa)}){\subseteq}T^\kappa$. The range of the differential of at such a point contains $T_aG_\sigma+T_a{\mathcal{C}}$. Since $G_a{\subseteq}G_\sigma$, and hence $T_a G_a{\subseteq}T_aG_\sigma$, Lemma \[lem:lemma\] shows that this is all of $T_aG$. q-Hamiltonian spaces ==================== Let $G$ be a Lie group, with an invariant inner product $\cdot $ on its Lie algebra ${{\mathfrak{g}}}$, and let $\eta\in {\Omega}^3(G)$ be the bi-invariant closed 3-form $$\eta={\frac}{1}{12}\theta^L\cdot [\theta^L,\theta^L] ={\frac}{1}{12}\theta^R\cdot [\theta^R,\theta^R]$$ where $\theta^L,\theta^R\in {\Omega}^1(G,{{\mathfrak{g}}})$ are the left, right invariant Maurer-Cartan forms. Suppose $\kappa\in { {\operatorname}{Aut} }(G)$ is an automorphism. It will be convenient to denote the group $G$, viewed as a $G$-manifold under $\kappa$-twisted conjugation, by $G\kappa$. $G\kappa$-valued moment maps ---------------------------- A *q-Hamiltonian $G$-space with $G\kappa$-valued moment map* is a $G$-manifold $M$, together with a $G$-invariant 2-form ${\omega}$ and a $G$-equivariant smooth map $\Phi\colon M\to G\kappa$. These are required to satisfy the following axioms: 1. ${{\mbox{d}}}{\omega}=-\Phi^*\eta$, 2. 
$\iota(\xi_M){\omega}=-{{\textstyle {\frac}{1}{2}}}\Phi^*(\kappa(\xi)\cdot \theta^L+\xi\cdot \theta^R)$, 3. ${ {\operatorname}{ker}}({\omega})\cap { {\operatorname}{ker}}(T\Phi)=0$. These axioms generalize the $G$-valued moment maps from [@al:mom]. In terms of equivariant de Rham forms, the first two properties may be combined into a single condition ${{\mbox{d}}}_G\omega=-\Phi^*\eta_G^{(\kappa)}$, where $$\eta_G^{(\kappa)}(\xi)=\eta-{{\textstyle {\frac}{1}{2}}}(\kappa(\xi)\cdot \theta^L+\xi\cdot \theta^R)$$ is a closed equivariant 3-form on $G\kappa$. $\kappa$-twisted conjugacy classes ${\mathcal{C}}{\subseteq}G$ are q-Hamiltonian $G$-spaces, with the $G\kappa$-valued moment map given as the inclusion. The 2-form is uniquely determined by the moment map condition (b), and is given by $${\omega}(\xi_{\mathcal{C}},\tau_{\mathcal{C}})={{\textstyle {\frac}{1}{2}}}(({ {\operatorname}{Ad} }_\phi\circ\,\kappa)-({ {\operatorname}{Ad} }_\phi\circ\,\kappa)^{-1})\xi\cdot\tau.$$ Note that the twisted conjugacy classes can be odd-dimensional. For example, in the case of $G={ {\operatorname}{SU}}(3)$ with $\kappa$ given by complex conjugation, the generic stabilizer under twisted conjugation is a circle, and hence the generic twisted conjugacy classes are 7-dimensional. A further class of examples, the twisted moduli spaces, is associated to any compact oriented surface with boundary, with marked points on the boundary components and a prescribed homomorphism from the fundamental groupoid into ${\operatorname}{Aut}(G)$. This will be discussed in Section \[subsec:twistmod\] below. \[ex:fusion\] Further examples are created by *fusion*: Suppose $M_i$ for $i=1,2$ are two q-Hamiltonian $G$-spaces with $G\kappa_i$-valued moment map. Then $M_1\times M_2$ with the new $G$-action $g.(m_1,m_2)=(g.m_1,\kappa_1(g).m_2)$ and the 2-form $${\omega}={\omega}_1+{\omega}_2+{{\textstyle {\frac}{1}{2}}}\Phi_1^*\theta^L\cdot \Phi_2^*\theta^R$$ becomes a q-Hamiltonian $G$-space with $G\,\kappa_2\kappa_1$-valued moment map $\Phi_1\Phi_2$. 
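As a consistency check on the fusion example, the equivariance of the product moment map, $\Phi(g.(m_1,m_2))={ {\operatorname}{Ad} }^{(\kappa_2\kappa_1)}_g\,\Phi(m_1,m_2)$, can be verified numerically. The sketch below (an illustration with ad hoc names, not part of the text) takes both factors to be twisted conjugacy classes in $U(3)$, so that the $\Phi_i$ are inclusions, with $\kappa_1=\kappa_2$ given by entrywise complex conjugation, so that $\kappa_2\kappa_1=1$.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_unitary(n=3):
    # QR factorization of a random complex matrix yields a unitary Q
    q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return q

conj = np.conj   # kappa_1 = kappa_2: entrywise complex conjugation on U(3)

def tw_ad(u, kappa, h):
    # kappa-twisted conjugation Ad^(kappa)_u(h) = u h kappa(u)^{-1}
    return u @ h @ np.linalg.inv(kappa(u))

g, h1, h2 = rand_unitary(), rand_unitary(), rand_unitary()

# fused action: g.(h1, h2) = (Ad^(kappa1)_g h1, Ad^(kappa2)_{kappa1(g)} h2)
f1 = tw_ad(g, conj, h1)
f2 = tw_ad(conj(g), conj, h2)

# the product moment map h1 h2 is equivariant for kappa2 kappa1 = identity,
# i.e. it intertwines the fused action with ordinary conjugation
err = np.linalg.norm(f1 @ f2 - g @ (h1 @ h2) @ np.linalg.inv(g))
assert err < 1e-10
```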
Properties (a) and (b) may be verified directly; for property (c) it is best to use the Dirac-geometric approach as in Remark \[rem:dirac\]. For example, if ${\mathcal{C}}$ is a $\kappa$-twisted conjugacy class in $G$, and $M$ is a q-Hamiltonian $G$-space with (non-twisted) $G$-valued moment map, then the fusion product $M\times {\mathcal{C}}$ is a q-Hamiltonian $G$-space with $G\kappa$-valued moment map. Also, if ${\mathcal{C}}_i{\subseteq}G$ are $\kappa_i$-twisted conjugacy classes, for $i=1,\ldots,r$, then their fusion product ${\mathcal{C}}_1\times\cdots \times{\mathcal{C}}_r$ is a q-Hamiltonian space with $G\kappa$-valued moment map, where $\kappa=\kappa_r\cdots \kappa_1$. See example \[ex:conjugacy\]. Let $L^{(\kappa)}G$ be the twisted loop group, consisting of paths $g\colon {\mathbb{R}}\to G$ with the property that $g(t+1)=\kappa(g(t))$ for all $t$. There is a notion of Hamiltonian $L^{(\kappa)}G$-space generalizing that of a Hamiltonian $LG$-space [@me:lo], and by the same proof as for $\kappa=1$ [@al:mom] one sees that there is a 1-1 correspondence between Hamiltonian $L^{(\kappa)}G$-spaces with proper moment maps and $\kappa$-twisted q-Hamiltonian $G$-spaces. \[rem:dirac\] The definition of $G\kappa$-valued moment maps has a Dirac-geometric interpretation, similar to [@bur:di] and [@al:pur]. Using the notation from [@al:pur], let $\mathbb{A}=\mathbb{T} G_\eta$ be the standard Courant algebroid over $G$, with the Courant bracket twisted by the closed 3-form $\eta$. It has a canonical trivialization $\mathbb{A}=G\times ({\overline}{{{\mathfrak{g}}}}\oplus {{\mathfrak{g}}})$, where ${\overline}{{{\mathfrak{g}}}}$ stands for ${{\mathfrak{g}}}$ with the opposite metric. Any Lagrangian Lie subalgebra ${\mathfrak}{s}{\subseteq}{\overline}{{{\mathfrak{g}}}}\oplus {{\mathfrak{g}}}$ defines a Dirac structure $E_{{\mathfrak}{s}}=G\times {\mathfrak}{s}{\subseteq}\mathbb{A}$. 
Taking ${\mathfrak}{s}$ to be the diagonal, one obtains the Cartan-Dirac structure $E_\Delta=E$. Taking ${\mathfrak}{s}=\{(\xi,\kappa(\xi))|\ \xi\in{{\mathfrak{g}}}\}$ for $\kappa\in{ {\operatorname}{Aut} }(G)$, one obtains a Dirac structure $E_{{\mathfrak}{s}}=E^{(\kappa)}$ generalizing the Cartan-Dirac structure. As a Lie algebroid, it is the action Lie algebroid for the $\kappa$-twisted conjugation action. For a q-Hamiltonian space $(M,{\omega},\Phi)$ with $G\kappa$-valued moment map, the pair $(\Phi,\omega)$ defines a full morphism of Manin pairs, $(\mathbb{T} M,TM){\dasharrow}(\mathbb{A},E^{(\kappa)})$. Conversely, such a morphism defines a ${{\mathfrak{g}}}$-action on $M$ for which the underlying map $\Phi\colon M\to G$ is equivariant, and it also defines an invariant 2-form on $M$ satisfying the axioms above. The Dirac-geometric approach explains many of the properties of $G\kappa$-valued moment maps; for example the fusion construction finds a conceptual explanation in terms of a Dirac morphism $$({ {\operatorname}{Mult}}_G,\ \varsigma)\colon (\mathbb{A},E^{(\kappa_1)})\times (\mathbb{A},E^{(\kappa_2)}) \to (\mathbb{A},E^{(\kappa_2\kappa_1)})$$ with the 2-form $\varsigma={{\textstyle {\frac}{1}{2}}}{{\operatorname}{pr}}_1^*\theta^L\cdot {{\operatorname}{pr}}_2^*\theta^R$. See [@al:pur Section 4.4]. Twisted moduli spaces {#subsec:twistmod} --------------------- Let $\Sigma=\Sigma_h^r$ be a compact, connected, oriented surface of genus $h$ with $r>0$ boundary components, and let ${\mathcal{V}}=\{x_1,\ldots,x_r\}$ be a collection of base points on the boundary components, $x_i\in (\partial\Sigma)_i\cong S^1$. Let $$\pi_1(\Sigma,{\mathcal{V}})\rightrightarrows{\mathcal{V}}$$ denote the fundamental groupoid, consisting of homotopy classes $\lambda$ of paths for which both the initial point ${\mathsf{s}}(\lambda)$ and the end point ${\mathsf{t}}(\lambda)$ are in ${\mathcal}{V}$. 
Suppose we are given a groupoid homomorphism (‘twist’) $$\kappa\in { {\operatorname}{Hom}}\big(\pi_1(\Sigma,{\mathcal{V}}),{\operatorname}{Aut}(G)\big).$$ Such a $\kappa$ may be obtained by assigning elements of ${\operatorname}{Aut}(G)$ to a system of free generators of the fundamental groupoid, and extending by the homomorphism property. Let $$\label{eq:homg}M={\operatorname}{Hom}_\kappa\big(\pi_1(\Sigma,{\mathcal{V}}),G\big)$$ be the space of $\kappa$-twisted homomorphisms, consisting of maps $\lambda\mapsto \phi_\lambda$ such that $$\phi_{\lambda_1\circ \lambda_2}=\phi_{\lambda_1}\ \kappa_{\lambda_1}(\phi_{\lambda_2})$$ whenever ${\mathsf{s}}(\lambda_1)={\mathsf{t}}(\lambda_2)$. (The space $M$ may be regarded as a certain moduli space of flat connections.) [^1] The group ${\operatorname}{Map}({\mathcal{V}},G)=G\times \cdots \times G$ acts on the space as $$(g.\phi)_\lambda=g_{{\mathsf{t}}(\lambda)}\ \phi_\lambda\ \kappa_\lambda(g_{{\mathsf{s}}(\lambda)}^{-1}).$$ Let $\kappa_1,\ldots,\kappa_r\in {\operatorname}{Aut}(G)$ be the values of $\kappa$ on the oriented boundary loops $\lambda_1,\ldots,\lambda_r$. Then $M$ is a q-Hamiltonian $G^r$-space, with a $G\kappa_1\times \cdots \times G\kappa_r$-valued moment map $\Phi$ given by evaluation on boundary loops. We won’t describe the 2-form here; in the case that $\kappa$ takes values in diagram automorphisms, $M$ may be regarded as a component of the moduli space of flat $G\rtimes {\operatorname}{Out}(G)$-bundles – see Section \[sec:changing\] below. This construction also gives new examples of *non-twisted* q-Hamiltonian spaces. For example, take $\Sigma=\Sigma_1^1$ to be the surface of genus $1$ with one boundary component. Its fundamental group(oid) has free generators $\alpha,\beta$, with the boundary loop given as $\alpha\beta\alpha^{-1}\beta^{-1}$. Attach an automorphism $\sigma\in { {\operatorname}{Aut} }(G)$ to $\beta$, and $1$ to $\alpha$, and extend to a homomorphism $\kappa$ as above. 
Then the corresponding $M$ is $G\times G$, with elements $(a,b)$ corresponding to holonomies along $\alpha,\beta$. The group $G$ acts on $a$ by conjugation and on $b$ by $\sigma$-twisted conjugation. The boundary holonomy is a $G$-valued moment map $$(a,b)\mapsto a\, b\, \sigma(a^{-1})\, b^{-1},$$ a twisted group commutator. Let $K$ be a disconnected group with identity component $G=K_0$. The space ${ {\operatorname}{Hom}}(\pi_1(\Sigma,{\mathcal{V}}),K)$ is a moduli space of flat $K$-bundles over $\Sigma$, with framings at the base points. This space is disconnected, in general. The conjugation action of $K$ on its identity component defines a group homomorphism $K\to {\operatorname}{Aut}(G)$. Hence, any element $x\in { {\operatorname}{Hom}}(\pi_1(\Sigma,{\mathcal{V}}),K)$ determines an element $\kappa\in { {\operatorname}{Hom}}(\pi_1(\Sigma,{\mathcal{V}}),{\operatorname}{Aut}(G))$, and the connected component containing $x$ is identified with a component of ${ {\operatorname}{Hom}}_\kappa(\pi_1(\Sigma,{\mathcal{V}}),G)$. The example giving rise to the convex polytope in Theorem \[th:convexity\] is obtained from the $r$-holed sphere ${\Sigma}={\Sigma}_0^r$. Here $\pi_1(\Sigma,{\mathcal{V}})$ is freely generated by $\lambda_1,\ldots,\lambda_{r-1}$, represented by oriented boundary loops based at $x_1,\ldots,x_{r-1}$, together with $\mu_1,\ldots,\mu_{r-1}$ represented by non-intersecting paths connecting these points to $x_r$. The element $\lambda_r$ represented by the remaining boundary loop satisfies $$\label{eq:lambdas} \prod_{i=1}^{r-1}\mu_i\lambda_i\mu_i^{-1}=\lambda_r^{-1}.$$ Given $\kappa\in { {\operatorname}{Hom}}(\pi_1(\Sigma,{\mathcal{V}}),{\operatorname}{Aut}(G))$, we denote by $\kappa_i$ the images of the $\lambda_i$’s, and by $\sigma_i$ the images of the $\mu_i$’s. 
Then $$\label{eq:kappas} \prod_{i=1}^{r-1}\sigma_i\kappa_i\sigma_i^{-1}=\kappa_r^{-1}.$$ We find ${ {\operatorname}{Hom}}_\kappa(\pi_1(\Sigma,{\mathcal{V}}),G)=G^{2r-2}$, consisting of tuples $(d_1,\ldots,d_{r-1},a_1,\ldots,a_{r-1})$, where $d_i$ are holonomies attached to the $\lambda_i$, and $a_i$ are attached to the $\mu_i$. The holonomy $d_r$ around the $r$-th boundary loop is determined from $$\label{eq:holes} \prod_{i=1}^{r-1} (a_i,\sigma_i)(d_i,\kappa_i)(a_i,\sigma_i)^{-1}=(d_r,\kappa_r)^{-1}.$$ \[lem:modspace0r\] Let $\kappa_1,\ldots,\kappa_r$ be holonomies attached to the boundaries of $\Sigma_0^r$, with $\kappa_r\kappa_{r-1}\cdots \kappa_1=1$. Then there is an extension to a homomorphism $\kappa\in{ {\operatorname}{Hom}}(\pi_1(\Sigma,{\mathcal}{V}),{\operatorname}{Aut}(G))$, in such a way that the moment map image of $M={\operatorname}{Hom}_\kappa(\pi_1(\Sigma,{\mathcal}{V}),\,G)$ consists of all $(d_1,\ldots,d_r)\in G^r$ for which there exists $(g_1,\ldots,g_r)$ with $g_i\in { {\operatorname}{Ad} }_G^{(\kappa_i)}(d_i)$ and $\prod_{i=1}^r g_i=e$. Using the notation above, put $\sigma_1=1,\ \sigma_2=\kappa_1^{-1}, \ldots,\ \sigma_{r-1}=\kappa_1^{-1}\cdots \kappa_{r-2}^{-1}$. Equation becomes the condition $\kappa_r\kappa_{r-1}\cdots \kappa_1=1$. Introducing $$a_1'=a_1,\ a_2'=\kappa_1(a_2),\ a_3'=\kappa_2(\kappa_1(a_3)),\ \ldots$$ the equation for the holonomies becomes $$\prod_{i=1}^{r} a_i' d_i \kappa_i((a_i')^{-1}) =e$$ where we put $a_r'=e$. That is $\prod g_i=e$ where $$g_i= a_i' d_i \kappa_i((a_i')^{-1})\in { {\operatorname}{Ad} }_G^{(\kappa_i)}(d_i).$$ The moment map for $M$ is the map taking $(d_1,\ldots,d_{r-1},a_1,\ldots,a_{r-1})$ to $(d_1,\ldots,d_r)$, with $d_r$ determined from the condition $\prod_i g_i=e$. Basic properties of $G\kappa$-valued moment maps ------------------------------------------------ The following statement extends a well-known property of moment maps in symplectic geometry. 
Let $(M,{\omega},\Phi)$ be a q-Hamiltonian $G$-space with $G\kappa$-valued moment map. For all $m\in M$ we have $${ {\operatorname}{ker}}(T_m\Phi)^{\omega}=T_m(G\cdot m),\ \ \ {\operatorname}{ran}(\Phi^*\theta^R)_m={{\mathfrak{g}}}_m^\perp.$$ (For any subspace $V{\subseteq}T_mM$, the notation $V^{\omega}$ means the set of all $v\in T_mM$ such that $\omega(v,w)=0$ for all $w\in V$.) In terms of $A={ {\operatorname}{Ad} }_{\Phi(m)}\circ\,\kappa$, the moment map condition gives $$\label{eq:rewrite} \iota(\xi_M){\omega}_m =-{{\textstyle {\frac}{1}{2}}}((A+I)\xi)\cdot (\Phi^*\theta^R)_m.$$ In particular, for $\xi\in{{\mathfrak{g}}}_m$, we get that $${{\textstyle {\frac}{1}{2}}}((A+I)\xi )\cdot (\Phi^*\theta^R)_m=0.$$ But ${{\mathfrak{g}}}_m{\subseteq}{{\mathfrak{g}}}_{\Phi(m)}={ {\operatorname}{ker}}(A-I)$, so $A$ acts as the identity on ${{\mathfrak{g}}}_m$. Hence we obtain $\xi\cdot (\Phi^*\theta^R)_m=0$, proving ${\operatorname}{ran}(\Phi^*\theta^R)_m{\subseteq}{{\mathfrak{g}}}_m^\perp$. On the other hand, it is immediate from the moment map condition that ${ {\operatorname}{ker}}(T_m\Phi)^{\omega}\supseteq T_m(G\cdot m)$. Equality of both inclusions follows by a dimension count: $$\begin{split} \dim(G\cdot m) &\le \dim({ {\operatorname}{ker}}(T_m\Phi)^{\omega}) \\ & =\dim T_mM-\dim ({ {\operatorname}{ker}}(T_m\Phi))\\ &=\dim({\operatorname}{ran}(\Phi^*\theta^R)_m)\\ & \le \dim {{\mathfrak{g}}}_m^\perp=\dim(G\cdot m). \end{split}$$ Here we used ${ {\operatorname}{ker}}({\omega})\cap { {\operatorname}{ker}}(T\Phi)=0$ for the first equality sign. The map ${{\mathfrak{g}}}\to T_mM$ given by the infinitesimal action restricts to an isomorphism, $${ {\operatorname}{ker}}({ {\operatorname}{Ad} }_{\Phi(m)}\circ\,\kappa+I)\xrightarrow{\cong} { {\operatorname}{ker}}({\omega}_m).$$ Here, the Dirac-geometric viewpoint from Remark \[rem:dirac\] is convenient. Let $\mathbb{T} G_\eta$ be as in that remark. 
The subspace $$E_1=\{T\Phi(v)+\alpha\in \mathbb{T} G_\eta|\ v\in T_mM,\ \alpha\in T^*_{\Phi(m)}G,\ \Phi^*\alpha=\iota(v)\omega_m\}$$ is the ‘forward image’ of $T_mM{\subseteq}\mathbb{T} M=TM\oplus T^*M$ under the linear Dirac morphism $(T_m\Phi,\omega_m)$; in particular it satisfies $E_1=E_1^\perp$. The axioms show that $E_1$ contains the space $$E=\big\{\xi_G +{\frac}{1}{2}\theta^R\cdot(A+I)\xi\ \big|\ \ \xi\in{{\mathfrak{g}}}\big\}$$ (everything evaluated at $\Phi(m)$). Here $\xi_G$ are the generating vector fields for the $\kappa$-twisted conjugation, $$\xi_G=\kappa(\xi)^L-\xi^R=((A-I)\xi)^R.$$ But it is easily checked that $E=E^\perp$, which together with $E{\subseteq}E_1$ implies $E_1=E$. In particular, taking $\alpha=0$ in the definition of $E_1$ we see that $$(T_m\Phi)({ {\operatorname}{ker}}{\omega}_m)=\big\{\xi_G(\Phi(m))\big|\ (A+I)\xi=0\big\}.$$ Since ${ {\operatorname}{ker}}({\omega}_m)\cap { {\operatorname}{ker}}(T_m\Phi)=0$, the map $T_m\Phi$ is injective on ${ {\operatorname}{ker}}({\omega}_m)$. Consequently, ${ {\operatorname}{ker}}({\omega}_m)=\{\xi_M(m)|\ (A+I)\xi=0\}$. Changing $\kappa$ by inner automorphisms {#sec:changing} ---------------------------------------- Suppose $\kappa'={ {\operatorname}{Ad} }_a \circ\,\kappa$, and let $(M,{\omega},\Phi)$ be a q-Hamiltonian $G$-space with $G\kappa$-valued moment map. Then the manifold $M$ with the same $G$-action and 2-form, but with a shifted moment map $\Phi'=r_{a^{-1}}\circ \Phi$, is a q-Hamiltonian $G$-space with $G \kappa'$-valued moment map. For this reason, if $G$ is compact and simply connected, it usually suffices to consider the case of a diagram automorphism $\kappa\in{\operatorname}{Out}(G)$. But for $\kappa\in{\operatorname}{Out}(G)$, the q-Hamiltonian $G$-spaces with $G\kappa$-valued moment map are simply q-Hamiltonian spaces with moment maps valued in the disconnected group $G\rtimes {\operatorname}{Out}(G)$, whose image is contained in the component $G\times \{\kappa\}$. 
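The equivariance claim behind this shift is a one-line computation: $\Phi'(g.m)=g\,\Phi(m)\,\kappa(g)^{-1}a^{-1}=g\,\Phi'(m)\,\kappa'(g)^{-1}$, using $\kappa'(g)^{-1}=a\,\kappa(g)^{-1}a^{-1}$. A numerical spot-check (an illustration with ad hoc names; $\kappa$ is taken to be entrywise complex conjugation on $U(3)$ purely as a stand-in):

```python
import numpy as np

rng = np.random.default_rng(4)

def rand_unitary(n=3):
    # QR factorization of a random complex matrix yields a unitary Q
    q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return q

kappa = np.conj   # stand-in automorphism of U(3) (illustration only)
g, phi, a = rand_unitary(), rand_unitary(), rand_unitary()

def kappa_p(h):
    # kappa' = Ad_a o kappa
    return a @ kappa(h) @ np.linalg.inv(a)

# kappa-equivariance of Phi says Phi(g.m) = g Phi(m) kappa(g)^{-1};
# here phi plays the role of Phi(m)
phi_gm = g @ phi @ np.linalg.inv(kappa(g))

# the shifted map Phi' = r_{a^{-1}} o Phi is then kappa'-equivariant
lhs = phi_gm @ np.linalg.inv(a)                                   # Phi'(g.m)
rhs = g @ (phi @ np.linalg.inv(a)) @ np.linalg.inv(kappa_p(g))    # Ad^(kappa')_g Phi'(m)
err = np.linalg.norm(lhs - rhs)
assert err < 1e-10
```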
The only wrinkle is that we consider only the action of the identity component $G$ of this group; this doesn’t affect the theory from [@al:mom]. In this sense, the examples considered above are not new, at least for $G$ compact and simply connected. For instance, in the fusion procedure of Example \[ex:fusion\], first apply the automorphism $\kappa_1$ to the second space, thus obtaining $(M_2,{\omega}_2,\Phi_2')$ with the new $G$-action $m\mapsto \kappa_1(g).m$, and a $G\kappa'$-valued moment map $\Phi_2'=\kappa_1^{-1}\circ \Phi_2$, where $\kappa'=\kappa_1^{-1}\kappa_2\kappa_1$. Since $$(\Phi_1,\kappa_1)(\kappa_1^{-1}\circ\Phi_2,\kappa_2)=(\Phi_1\Phi_2,\kappa_2\circ\,\kappa_1),$$ we recognize the fusion product of Example \[ex:fusion\] as a standard fusion product [@al:mom] for q-Hamiltonian $G$-spaces with $G\rtimes {\operatorname}{Out}(G)$-valued moment maps. Convexity properties ==================== We now turn to the convexity properties of $G\kappa$-valued moment maps. The arguments are mostly straightforward adaptations of those in [@me:lo] and [@le:co]. Throughout, we will assume that $G$ is compact and simply connected, and that $\kappa\in {\operatorname}{Aut}(G)$ is a diagram automorphism. We denote by ${\mathfrak{A}}^{(\kappa)}{\subseteq}{{\mathfrak{t}}}^\kappa$ the alcove, and by $$q^{(\kappa)}\colon G\to {\mathfrak{A}}^{(\kappa)}$$ the quotient map, with fibers $(q^{(\kappa)})^{-1}(\xi)$ the $\kappa$-twisted conjugacy classes of $\exp(\xi)$. Recall from Section \[subsec:slices\] the definition of the slices $U_\sigma$. Let $\kappa_\sigma$ denote the restriction of $\kappa$ to $G_\sigma$. Let $(M,{\omega},\Phi)$ be a connected q-Hamiltonian $G$-space with $G\kappa$-valued moment map. For any face $\sigma{\subseteq}{\mathfrak{A}}^{(\kappa)}$, the pre-image $Y_\sigma={\Phi^{-1}}(U_\sigma)$ is a q-Hamiltonian $G_\sigma\kappa_\sigma$-space, with the pullback of $\omega$ as the 2-form and the restriction of $\Phi$ as the moment map. 
The proof is parallel to the result for non-twisted q-Hamiltonian spaces, see [@al:mom], which in turn is a version of the cross-section theorem for Hamiltonian spaces, due to Guillemin-Sternberg [@gu:no] and Marle [@ma:mo]. Recall that for any connected $G$-manifold $M$, the principal stratum $M_{{\operatorname}{prin}}$ is the set of all points whose stabilizer is subconjugate to any other stabilizer. It is connected, and open and dense in $M$. Let $(M,{\omega},\Phi)$ be a connected q-Hamiltonian $G$-space with $G\kappa$-valued moment map. Then: 1. The stabilizer $G_m$ of any point $m\in M_{{\operatorname}{prin}}$ is an ideal in $G_{\Phi(m)}$. 2. All points in $M_{{\operatorname}{prin}}\cap {\Phi^{-1}}(\exp({\mathfrak{A}}^{(\kappa)}))$ have the same stabilizer $H$. 3. The image $q^{(\kappa)}(\Phi(M_{{\operatorname}{prin}}))$ is a connected, relatively open subset of $$(x+{{\mathfrak{h}}}^\perp)\cap {\mathfrak{A}}^{(\kappa)},$$ where ${{\mathfrak{h}}}$ is the Lie algebra of $H$, and $x$ is any point of $q^{(\kappa)}(\Phi(M_{{\operatorname}{prin}}))$. The parallel statements for ordinary Hamiltonian $G$-spaces are proved in [@le:co Section 3.3]. In particular, if $N$ is a connected Hamiltonian $G$-space, with moment map $\Psi\colon N\to {{\mathfrak{g}}}^*$, then for each $n\in N_{{\operatorname}{prin}}$, the stabilizer $G_n$ is an ideal in $G_{\Psi(n)}$, and the stabilizer $H=G_n$ of points in $N_{{\operatorname}{prin}}\cap \Psi^{-1}({{\mathfrak{t}}}^*_+)$ is independent of $n$. We will use cross-sections $Y_\sigma$ to reduce to the Hamiltonian case. As noted in the proof of Proposition \[prop:slice\], the automorphism $\kappa_\sigma=\kappa|_{G_\sigma}$ is inner, and is given by ${ {\operatorname}{Ad} }_{a^{-1}}$ for any choice of $a\in \exp(\sigma)$. Hence, $Y_\sigma$ becomes a q-Hamiltonian $G_\sigma$-space with (untwisted) $G_\sigma$-valued moment map $r_{a^{-1}}\circ \Phi_\sigma$. 
Furthermore, this then becomes an ordinary Hamiltonian $G_\sigma$-space with a moment map $$\Phi_{0,\sigma}\colon Y_\sigma\to {{\mathfrak{g}}}_\sigma\cong {{\mathfrak{g}}}_\sigma^*,\ \ m\mapsto \log(\Phi_\sigma(m) a^{-1}).$$ We conclude that for all $m\in (Y_\sigma)_{{\operatorname}{prin}}=Y_\sigma\cap M_{{\operatorname}{prin}}$, the stabilizer $G_m$ is an ideal in the stabilizer of $\Phi_{0,\sigma}(m)$ under the adjoint action. The latter coincides with the stabilizer of $\Phi(m)=\exp(\Phi_{0,\sigma}(m))a$ under twisted conjugation. Hence $G_m$ is an ideal in $G_{\Phi(m)}$. Since the flow-outs of all the $Y_\sigma$’s under twisted conjugation cover $M$, this proves (a). The map $M_{{\operatorname}{prin}}\cap {\Phi^{-1}}(\exp({\mathfrak{A}}^{(\kappa)}))\to M_{{\operatorname}{prin}}/G$ is surjective, and has connected fibers $G_{\Phi(m)}.m=G_{\Phi(m)}/G_m$. Since the target of this map is connected, it follows that $M_{{\operatorname}{prin}}\cap {\Phi^{-1}}(\exp({\mathfrak{A}}^{(\kappa)}))$ is connected. Consider the decomposition of each $Y_\sigma$ into its connected components $Y_\sigma^i$. Passing to the corresponding Hamiltonian $G_\sigma$-space as above, and using the general results for connected Hamiltonian spaces, we see that all points of $Y_\sigma^i\cap M_{{\operatorname}{prin}}\cap {\Phi^{-1}}(\exp({\mathfrak{A}}^{(\kappa)}))$ have the same stabilizer. Since the union of these sets, over all $\sigma,i$, covers $M_{{\operatorname}{prin}}\cap {\Phi^{-1}}(\exp({\mathfrak{A}}^{(\kappa)}))$, it follows that all points of this intersection have the same stabilizer, proving (b). Each $q^{(\kappa)}(\Phi(M_{{\operatorname}{prin}}\cap Y_\sigma^i))$ is a connected, relatively open subset of $(x+{{\mathfrak{h}}}^\perp)\cap {\mathfrak{A}}^{(\kappa)}_\sigma$, for any choice of $x\in q^{(\kappa)}(\Phi(M_{{\operatorname}{prin}}\cap Y_\sigma^i))$. (Once again, this follows from the corresponding statement for Hamiltonian spaces, see [@le:co Section 3.3].) This implies (c). 
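A remark on the untwisting step used in this proof: the passage $m\mapsto\log(\Phi_\sigma(m)a^{-1})$ preserves equivariance because the matrix logarithm intertwines conjugation, $\log(uxu^{-1})=u(\log x)u^{-1}$. A quick numerical check for unitary matrices (the helper `logu` is ad hoc, computing the principal logarithm spectrally; an illustration, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(5)

def rand_unitary(n=3):
    # QR factorization of a random complex matrix yields a unitary Q
    q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return q

def logu(x):
    # principal logarithm of a unitary matrix, computed spectrally
    w, v = np.linalg.eig(x)
    return v @ np.diag(np.log(w)) @ np.linalg.inv(v)

u, x = rand_unitary(), rand_unitary()

# log intertwines conjugation, so m -> log(Phi_sigma(m) a^{-1}) inherits
# the equivariance of the group-valued moment map
err = np.linalg.norm(logu(u @ x @ np.linalg.inv(u)) - u @ logu(x) @ np.linalg.inv(u))
assert err < 1e-8
```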
Let $(M,\omega,\Phi)$ be a connected q-Hamiltonian $G$-space with $G\kappa$-valued moment map. Then there exists a unique open face $\sigma$ of ${\mathfrak{A}}^{(\kappa)}$ such that $$q^{(\kappa)}(\Phi(M))\subseteq {\overline}{q^{(\kappa)}(\Phi(M))\cap \sigma}.$$ (Equality holds if $M$ is compact.) Alternatively, $\sigma$ is characterized as the smallest face such that the corresponding cross-section $Y_\sigma$ satisfies $\Phi(Y_\sigma){\subseteq}\exp(\sigma)$. This *principal cross-section* $Y_\sigma$ is a connected q-Hamiltonian $T^\kappa$-space, with the restriction of $\Phi$ as the moment map, and $$M={\overline}{G\cdot Y_\sigma}.$$ Using the notation from the previous proposition, let $\sigma$ be the lowest dimensional face of ${\mathfrak{A}}^{(\kappa)}$ whose closure contains $(x+{{\mathfrak{h}}}^\perp)\cap {\mathfrak{A}}^{(\kappa)}$. Since $q^{(\kappa)}(\Phi(M_{{\operatorname}{prin}}))$ is a relatively open subset of $(x+{{\mathfrak{h}}}^\perp)\cap {\mathfrak{A}}^{(\kappa)}$, its intersection with $\sigma$ is non-empty. It follows that $q^{(\kappa)}(\Phi(M))\cap\sigma =q^{(\kappa)}(\Phi(Y_\sigma))$. That is, $\Phi(Y_\sigma){\subseteq}\exp(\sigma){\subseteq}T^\kappa$, so that $Y_\sigma$ may be regarded as a q-Hamiltonian $T^\kappa$-space, for the restriction of the moment map. By construction, $G\cdot Y_\sigma={\Phi^{-1}}((q^{(\kappa)})^{-1}(\sigma))$. The difference $$\label{eq:difference} M_{{\operatorname}{prin}}-((G\cdot Y_\sigma)\cap M_{{\operatorname}{prin}})= M_{{\operatorname}{prin}}-\big({\Phi^{-1}}((q^{(\kappa)})^{-1}(\sigma))\cap M_{{\operatorname}{prin}}\big)$$ is the union over all ${\Phi^{-1}}((q^{(\kappa)})^{-1}(\tau))\cap M_{{\operatorname}{prin}}$ where $\tau$ ranges over proper faces of ${\overline}{\sigma}$. But those are submanifolds of codimension at least $3$, hence removing them will not disconnect $M_{{\operatorname}{prin}}$. 
Thus $(G\cdot Y_\sigma)\cap M_{{\operatorname}{prin}}$ is connected, which implies that $G\cdot Y_\sigma=G\times_{G_\sigma}Y_\sigma$ is connected, and therefore $Y_\sigma$ is connected. Note that since the principal cross-section $Y_\sigma$ is a q-Hamiltonian $T^\kappa$-space, it is in particular symplectic. \[th:convex\] Let $(M,{\omega},\Phi)$ be a compact, connected q-Hamiltonian $G$-space with $G\kappa$-valued moment map. Then the fibers of the moment map $\Phi$ are connected, and the image $$\Delta(M):=q^{(\kappa)}(\Phi(M)){\subseteq}{\mathfrak{A}}^{(\kappa)}$$ is a convex polytope. The principal cross-section $Y=Y_\sigma$ is a connected q-Hamiltonian $T^\kappa$-space, with the restriction $\Phi_Y=\Phi|_Y$ as its moment map. We can regard $Y$ as an ordinary Hamiltonian $T^\kappa$-space, with a moment map $\Phi_{Y,0}=q^{(\kappa)}\circ \Phi_Y$ that is proper as a map to ${\sigma}{\subseteq}{{\mathfrak{t}}}^\kappa$. Since ${\sigma}$ is convex, [@le:co Theorem 4.3] shows that $\Phi_{Y,0}$ has connected fibers, and its image is a convex set of the form $$q^{(\kappa)}(\Phi(Y))=\Phi_{Y,0}(Y)=P\cap \sigma,$$ where $P$ is some convex polytope in ${\overline}{\sigma}$. But then $q^{(\kappa)}(\Phi(M))={\overline}{q^{(\kappa)}(\Phi(Y))}=P$. Finally, if $x\in q^{(\kappa)}(\Phi(M))$, then the same argument as in [@le:co] shows that for any open ball $B$ around $x$, the pre-image ${\Phi^{-1}}((q^{(\kappa)})^{-1}(B))$ is connected. By a continuity argument [@le:co Lemma 5.1] this implies that ${\Phi^{-1}}(x)$ is connected. We obtain Theorem \[th:convexity\] as a special case: Consider again the twisted moduli space for the $r$-holed sphere ${\Sigma}_0^r$, corresponding to $\kappa_i\in {\operatorname}{Out}(G)$ with $\kappa_r\kappa_{r-1}\cdots \kappa_1=1$, as in Lemma \[lem:modspace0r\]. 
We had found that the moment map image consists of all $(d_1,\ldots,d_r)$ for which there exist elements $g_i\in G$ in the $\kappa_i$-twisted conjugacy class of $d_i$, such that $g_1\cdots g_r=e$. Hence, by Theorem \[th:convex\] the set is a convex polytope. An example {#sec:example} ========== We will illustrate Theorem \[th:convexity\] in a simple setting, where the resulting polytope can be computed by hand. Let $G=A_2\cong { {\operatorname}{SU}}(3)$, with its standard maximal torus $T$ consisting of diagonal matrices, and its usual choice of positive roots. We denote by $\alpha,\beta$ the simple roots, and let $\gamma=\alpha+\beta$ be their sum. The fundamental alcove ${\mathfrak{A}}{\subseteq}{{\mathfrak{t}}}$ is defined by the inequalities ${\langle}\alpha,\xi{\rangle}\ge 0,\ {\langle}\beta,\xi{\rangle}\ge 0,\ {\langle}\gamma,\xi{\rangle}\le 1$. Let $\kappa\in{\operatorname}{Aut}(G)$ be the nontrivial diagram automorphism of $G$ given by $\kappa(\alpha)=\beta$ and $\kappa(\beta)=\alpha$. The Lie algebra ${{\mathfrak{t}}}^\kappa$ consists of all $\xi$ such that ${\langle}\alpha,\xi{\rangle}={\langle}\beta,\xi{\rangle}$; it is thus the line spanned by the coroot $\gamma^\vee$. The alcove ${\mathfrak{A}}^{(\kappa)}$ is ‘half’ of the intersection ${\mathfrak{A}}\cap {{\mathfrak{t}}}^\kappa$, i.e. it consists of elements of ${{\mathfrak{t}}}^\kappa$ with ${\langle}\gamma,\xi{\rangle}\in [0,{\frac}{1}{2}]$. We thus label the $\kappa$-twisted conjugacy classes by a parameter $s\in [0,{\frac}{1}{2}]$, where ${\mathcal{C}}^{(\kappa)}_s$ contains $\exp(\xi_s)$ for a unique $\xi_s\in {\mathfrak{A}}^{(\kappa)}$ with ${\langle}\gamma,\xi_s{\rangle}=s$. Consider the setting of Theorem \[th:convexity\], with $r=3$. Unless all $\kappa_i=1$, two of the automorphisms $\kappa_1,\kappa_2,\kappa_3$ have to be $\kappa$, and the third is the identity. We may assume $\kappa_1=\kappa_2=\kappa$ and $\kappa_3=1$.
Hence, $${\mathfrak{A}}^{(\kappa_1)}\times {\mathfrak{A}}^{(\kappa_2)}\times {\mathfrak{A}}^{(\kappa_3)} =\big[0,{\frac}{1}{2}\big]\times \big[0,{\frac}{1}{2}\big]\times {\mathfrak{A}}.$$ For $G=A_2\cong { {\operatorname}{SU}}(3)$ with its non-trivial diagram automorphism $\kappa$, the polytope of all $(s_1,s_2,\xi)\in \big[0,{\frac}{1}{2}\big]\times \big[0,{\frac}{1}{2}\big]\times {\mathfrak{A}}$ such that there exists $(g_1,g_2,g_3)\in {\mathcal{C}}^{(\kappa)}_{s_1}\times {\mathcal{C}}^{(\kappa)}_{s_2}\times {\mathcal{C}}_\xi$ with $g_1g_2g_3=e$, is given by the inequalities $0\le s_i\le {\frac}{1}{2}$ together with $$|s_1-s_2| \le {\langle}\alpha+\beta,\xi{\rangle}\le 1\,\ \ \ \ |s_1-s_2| \le 1-{\langle}\alpha,\xi{\rangle}\le 1,\ \ \ \ |s_1-s_2| \le 1-{\langle}\beta,\xi{\rangle}\le 1.$$ The problem of computing this polytope is equivalent to computing the moment polytope of the fusion product ${\mathcal{C}}^{(\kappa)}_{s_1}\times {\mathcal{C}}^{(\kappa)}_{s_2}$ for any $s_1,s_2$. This fusion product is an untwisted q-Hamiltonian $G$-space, with action $$h\cdot(g_1,g_2)=\big(h\,g_1\,\kappa(h)^{-1},\ \kappa(h)g_2\,h^{-1}\big)$$ and moment map $(g_1,g_2)\mapsto g_1g_2$; its moment polytope is a 2-dimensional convex polytope inside ${\mathfrak{A}}$. Observe that the set of $g_1g_2$ with $g_i\in {\mathcal{C}}^{(\kappa)}_{s_i}$ is invariant under left-translation by central elements $c\in Z(G)\cong {\mathbb{Z}}_3$. This follows from $${ {\operatorname}{Ad} }^\kappa_{c^{-1}}(g)=c^{-1}g\kappa(c)=c^{-1} g c^2=cg.$$ Left multiplication of the center on $G$ induces an action on the set of conjugacy classes, and the resulting action of ${\mathbb{Z}}_3$ on the alcove ${\mathfrak{A}}$ is by ‘rotation’. Hence, the moment polytope is invariant under ‘rotations’ of the alcove. If $s_1=s_2=0$, this implies that the moment polytope must be all of ${\mathfrak{A}}$, since it contains the origin. 
If at least one of $s_1,s_2$ is non-zero, the moment polytope does *not* contain the origin. Using standard results from symplectic geometry, applied to the symplectic cross-section, it is cut out from the alcove by affine half-spaces orthogonal to 1-dimensional stabilizer groups. But the generic stabilizer for the twisted conjugation action of $G$ on itself is $T^\kappa$, and all other 1-dimensional stabilizers are $W$-conjugate to $T^\kappa$. (The fixed point set of $T$ is trivial.) Together with the rotational symmetry, it follows that the moment polytope is cut out from the alcove by inequalities of the form $r\le {\langle}\gamma,\xi{\rangle},\ \ r\le 1-{\langle}\alpha,\xi{\rangle},\ \ r\le 1-{\langle}\beta,\xi{\rangle}$, for some $0<r<{\frac}{1}{2}$. To find $r$, it suffices to determine the fixed point set of $T^\kappa$ on the product of twisted conjugacy classes, and take its image under the multiplication map. Since the action of $T^\kappa$ is just ordinary conjugation, and since $T^\kappa$ contains regular elements, the fixed point set for each factor is $${\mathcal{C}}^{(\kappa)}_{s_i}\cap T=\exp(\xi_{s_i}+{{\mathfrak{t}}}^\kappa)\cup \exp(-\xi_{s_i}+{{\mathfrak{t}}}^\kappa),$$ and the image under multiplication is $\exp(\xi_{\pm s_1\pm s_2}+{{\mathfrak{t}}}^\kappa){\subseteq}T$. We conclude $r=|s_1-s_2|$. S. Agnihotri and C. Woodward, *Eigenvalues of products of unitary matrices and quantum [S]{}chubert calculus*, Math. Res. Lett. **5** (1998), no. 6, 817–836. A. Alekseev, H. Bursztyn, and E. Meinrenken, *Pure spinors on [L]{}ie groups*, Astérisque **327** (2009), 131–199. A. Alekseev, A. Malkin, and E. Meinrenken, *[L]{}ie group valued moment maps*, J. Differential Geom. **48** (1998), no. 3, 445–495. T. Baird, *[Classifying spaces of twisted loop groups]{}*, Alg. Geom. Topology **16** (2016), no. 1, 211–229. P.
Belkale, *Local systems on [$\Bbb P^1-S$]{} for [$S$]{} a finite set*, Compositio Math. **129** (2001), no. 1, 67–86. P. Belkale and S. Kumar, *The multiplicative eigenvalue problem and deformed quantum cohomology*, Advances in Math. **288** (2016), 1309–1359. A. Berenstein and R. Sjamaar, *[Coadjoint orbits, moment polytopes, and the [H]{}ilbert-[M]{}umford criterion]{}*, J. Amer. Math. Soc. **13** (2000), 433–466. P. Boalch, *[Quasi-Hamiltonian Geometry of Meromorphic Connections]{}*, Duke Math. J. **139** (2007), 369–405. , *[Geometry and braiding of Stokes data; Fission and wild character varieties]{}*, Annals of Math. **179** (2014), 301–365. P. Boalch and D. Yamakawa, *Twisted wild character varieties*, Preprint, arXiv:1512.08091. H. Bursztyn and M. Crainic, *Dirac structures, momentum maps, and quasi-[P]{}oisson manifolds*, The breadth of symplectic and Poisson geometry, Progr. Math., vol. 232, Birkhäuser Boston, Boston, MA, 2005, pp. 1–40. V. Guillemin and S. Sternberg, *A normal form for the moment map*, Differential Geometric Methods in Mathematical Physics (Jerusalem, 1982) (S. Sternberg, ed.), Mathematical Physics Studies, vol. 6, D. Reidel Publishing Company, Dordrecht, 1984, pp. 161–175. V. Kac, *Infinite-dimensional [L]{}ie algebras*, second ed., Cambridge University Press, Cambridge, 1985. A. Klyachko, *Stable bundles, representation theory and [H]{}ermitian operators*, Selecta Math. (N.S.) **4** (1998), no. 3, 419–445. E. Lerman, E. Meinrenken, S. Tolman, and C. Woodward, *Non-abelian convexity by symplectic cuts*, Topology **37** (1998), 245–259. C.-M. Marle, *Modèle d’action [H]{}amiltonienne d’un groupe de [L]{}ie sur une variété symplectique*, Rend. Sem. Mat. Univ. Politec. Torino (1985), 227–251. E. Meinrenken and C. Woodward, *[H]{}amiltonian loop group actions and [V]{}erlinde factorization*, J. Differential Geom. **50** (1999), 417–470. S.
Mohrdieck, *Conjugacy classes of non-connected semisimple algebraic groups*, Universität Hamburg, 2000. S. Mohrdieck and R. Wendt, *Integral conjugacy classes of compact [L]{}ie groups*, Manuscripta Math. **113** (2004), no. 4, 531–547. N. Ressayre, *Geometric invariant theory and the generalized eigenvalue problem*, Invent. Math. **180** (2010), no. 2, 389–441. T. A. Springer, *Twisted conjugacy in simply connected groups*, Transform. Groups **11** (2006), no. 3, 539–545. C. Teleman and C. Woodward, *Parabolic bundles, products of conjugacy classes, and quantum cohomology*, Ann. Inst. Fourier (Grenoble) (2003), 713–748. M. Vergne and M. Walter, *Inequalities for moment cones of finite-dimensional representations*, Preprint (2014), arXiv:1410.8144. [^1]: Alternatively, ${\operatorname}{Hom}_\kappa$ is the lift of $\kappa$ to the space ${\widetilde}{M}={ {\operatorname}{Hom}}\big(\pi_1(\Sigma,{\mathcal{V}}),G\rtimes {\operatorname}{Aut}(G)\big)$.
--- abstract: 'Strangeness flavor yield $s$ and the entropy yield $S$ are the observables of the deconfined quark-gluon state of matter which can be studied in the entire available experimental energy range at AGS, SPS, RHIC, and, in the near future, at the LHC. We present here a comprehensive analysis of strange, soft hadron production as a function of energy and reaction volume. We discuss the physical properties of the final state and argue how evidence about the primordial QGP emerges.' address: - '$^1$Department of Physics, University of Arizona, TUCSON, AZ 85718, USA' - | $^2$Laboratoire de Physique Théorique et Hautes Energies\ Université Paris 7, 2 place Jussieu, 75251 Cedex 05, France author: - 'Johann Rafelski$^1$, Jean Letessier$^2$' title: | Strangeness and the Discovery of\ Quark-Gluon Plasma --- [*5th International Conference on Physics and Astrophysics of Quark Gluon Plasma,*]{}\ February 8 - 12, 2005, Salt Lake City, Kolkata, India,\ [*Journal of Physics: Conference Series*]{} Introduction {#intro} ============ The deconfined interacting quark–gluon plasma phase (QGP) is the [*equilibrium*]{} state of matter at high temperature and/or density. It is believed that this state was present in the early Universe, 10-20$\mu$s into its evolution. The question is whether, in the short time, $10^{-22}$–$10^{-23}$ s, available in a laboratory heavy ion collision experiment, the color-frozen nuclear phase can melt and turn into the QGP state of matter. There is no valid first principles answer to this question available today, nor, as it seems, will a first principles simulation of the dynamic heavy ion environment become available in the foreseeable future. To address this issue we study QGP experimentally, which requires development of laboratory experiments and suitable observables.
To form QGP in the laboratory we perform relativistic heavy ion collisions in which a domain of (space, time) much larger than normal hadron size is formed, in which color-charged quarks and gluons are propagating, constrained by the external ‘frozen vacuum’, which abhors color [@RBRC]. We expect a pronounced boundary in temperature and baryon density between confined and deconfined phases of matter, irrespective of whether or not there is a true phase transition. We search for a boundary between phases by considering the size of the interacting region and the magnitude of the reaction energy. Detailed study of the properties of the deconfined state shows that QGP is rich in entropy and strangeness. The enhancement of entropy $S$ arises because the color bonds are broken and gluons can be created. Enhancement of strangeness $s$ arises because the mass threshold for strangeness excitation is considerably lower in QGP than in hadron matter. Moreover, there are new mechanisms of strangeness formation in QGP involving reactions between (thermal) gluons. Thus $S$ and $s$ are the two elementary observables which are explored with soft hadronic probes; for further theoretical details and historical developments see our book [@CUP]. The numerical work presented here was carried out with the public package of programs SHARE [@share]. This report is a self contained summary of our recent results, see [@AGS; @bdepend; @edepend; @acta03]. Entropy enhancement, observed in terms of enhanced hadron multiplicity per net charge, has been among the first indications of the new physics reach of the CERN-SPS experimental heavy ion program [@entro]. The enhancement of strange hadron production, both as a function of the number of participating baryons and of reaction energy, has been explored in several experiments at BNL-AGS, CERN-SPS, and BNL-RHIC. We refrain from an extensive historical survey of these results and present perhaps the latest STAR-RHIC result [@Caines:2004ej].
In this presentation one sees the yield per participant $N_{\rm part}$ divided by a reference yield obtained in $pp$ reactions. We observe that the enhancement rises both with the strangeness content in the hadron and with the size of the reaction region, indicating that the cause of this enhancement is an increased yield of strange quarks in the source, a qualitative expectation we will address in our quantitative analysis below. The gradual increase of the enhancement over the range of $N_{\rm part}$ is an important indicator of the physics mechanisms at work. This behavior agrees with our studies of kinetic strangeness production, with the strangeness yield increasing with the size of the reaction region. This enhancement of strange antibaryons, which demonstrates that a novel strangeness production mechanism is present, has been extensively studied in the SPS energy range, where it was originally discovered [@Bruno:2004pv; @Gaz]. Further evidence for parton dynamics prior to final state hadronization is obtained from the study of strange hadron transverse energy spectra. The identity of hyperon and antihyperon spectra, in particular of $\Lambda, \overline\Lambda$ and $\Xi, \overline\Xi$, implies that both strange matter and antimatter must have been produced from a common source by the same fundamental mechanism. Furthermore, they were not subject to interactions in their passage through the baryon-rich hadron gas present in the SPS energy range. The $\overline\Lambda, \overline\Xi$ annihilation in baryon-rich hadron gas is strongly momentum dependent. This should deform the shape of the antihyperon spectra as compared to the spectra of hyperons. Thus the symmetry of the hyperon-antihyperon spectra also implies that there was no appreciable annihilation of the $\overline\Lambda,\overline\Xi$ after their formation.
The working hypothesis is therefore that hadronization of the QGP deconfined phase formed in high energy nuclear collisions is direct, fast (sudden) and occurs without significant subsequent interactions. This can be further tested in a study of yields and spectra of unstable resonances [@Rafelski:2001hp]. At RHIC the parton level dynamics is convincingly demonstrated by the quark content scaling of the azimuthal asymmetry of the collective flow, $v_2$. Further evidence is derived from the consideration of quark recombination formation of hadrons from QGP. We refer to the recent comprehensive survey of the RHIC results [@RBRC], and to presentations at this meeting addressing these very interesting and recent developments. We will assume in this report that the case for quark-parton dynamics prior to hadronization is convincing, and will use our analysis of soft hadron production to find the thresholds of the onset of deconfinement and to determine the properties of the deconfined fireball at the time of its breakup into hadrons. In section \[StatHad\] we introduce the statistical hadronization method of analysis of hadron production. We discuss the data analysis as a function of impact parameter and the energy dependence of soft hadron production in section \[depen\]. We will present both the systematics of statistical model parameters, and the associated physical properties. In section \[compare\], we address the physical QGP signatures indicating the presence of a phase boundary, giving particular attention to the explanation of the ‘horn’ in the ${\rm K}^+/\pi^+$ ratio, and the strangeness to entropy relative yield. We discuss, in section \[final\], the role these results play in the understanding of the phase boundary separating QGP from normal confined matter. We also discuss briefly possible new soft hadron physics at the LHC.
Statistical hadronization model {#StatHad} ================================ To describe the yields of particles produced we employ the statistical hadronization model (SHM). SHM is by definition a model of particle production in which the birth process of each particle fully saturates (maximizes) the quantum mechanical probability amplitude, and thus, the relative yields are determined solely by the appropriate integrals of the accessible phase space. For a system subject to global dynamical evolution such as collective flow, this is understood to apply within each local co-moving frame. When particles are produced in hadronization, we speak of chemical freeze-out. Hadron formation from the QGP phase has to absorb the high entropy content of the QGP, which originates in the broken color bonds. The lightest hadron is the pion, and most entropy per unit of energy is consumed in hadronization by producing these particles abundantly. It is thus important to free the yield of these particles from the chemical equilibrium constraint. The normalization of the particle yields is, aside from the freeze-out temperature $T$, directly controlled by the particle fugacity $\Upsilon_i\equiv e^{ \sigma_i /T}$, where $\sigma_i$ is the chemical potential of particle ‘$i$’. Since for each related particle and antiparticle pair we need two chemical potentials, it has become convenient to choose parameters such that we can control the difference and the sum of these separately. For example, for nucleons $N$, and, respectively, antinucleons $\overline{N}$ the two chemical factors are: $$\sigma_{N}\equiv \mu_b +T\ln\gamma_N ,\qquad \sigma_{\overline{N}}\equiv -\mu_b +T\ln\gamma_N,$$ $$\Upsilon_N=\gamma_N e^{ \mu_b /T}, \qquad\qquad \Upsilon_{\overline{N}}=\gamma_N e^{- \mu_b /T}.$$ The (baryo)chemical potential $\mu_b$ controls the baryon number, arising from the particle–antiparticle difference. $\gamma_N$, the phase space occupancy, regulates the number of nucleon–antinucleon pairs present.
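As a minimal numerical illustration of how the two chemical factors separate (the parameter values below are assumed for illustration only, roughly of the magnitude discussed later in the text), note that the product $\Upsilon_N\Upsilon_{\overline{N}}=\gamma_N^2$ depends only on the occupancy, while the ratio $\Upsilon_N/\Upsilon_{\overline{N}}=e^{2\mu_b/T}$ depends only on the chemical potential:

```python
# Illustration with assumed values (T, mu_b, gamma_N are not fit results):
# the product of the two fugacities depends only on gamma_N (pair abundance),
# their ratio only on mu_b (net baryon number).
import math

T, mu_b, gamma_N = 140.0, 25.0, 1.6   # MeV, MeV, dimensionless

ups_N    = gamma_N * math.exp(+mu_b / T)   # Upsilon_N
ups_Nbar = gamma_N * math.exp(-mu_b / T)   # Upsilon_Nbar

pair_factor = ups_N * ups_Nbar   # = gamma_N**2, controls pair yield
net_factor  = ups_N / ups_Nbar   # = exp(2*mu_b/T), controls net baryon number
```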
There are many different hadrons, and in principle, we could assign to each a chemical potential and then look for chemical reactions which relate these chemical potentials. However, a more direct way to accomplish the same objective consists in characterizing each particle by its valence quark content. The relation between the quark based fugacities and chemical potentials ($\lambda_{q,s}=e^{\mu_{q,s}/T}$) and the two principal hadron based chemical potentials of baryon number and hadron strangeness, $\mu_i,\ i=b,{\rm S}$, is: $$\mu_b=3\mu_q,\qquad \mu_s={\frac}{1}{3}\mu_b-\mu_{{\rm S}},\qquad \lambda_s={\frac}{\lambda_q}{\lambda_{{\rm S}}}.$$ An important (historical) anomaly is the negative strangeness ${\rm S}$ assigned to $s$-quark-carrying baryons. We will in general follow quark flavor and use quark chemical factors to minimize the confusion arising. In the local rest frame, the particle yields are proportional to the momentum integrals of the quantum distribution. As an example, for the yield of pions $\pi$, nucleons $N$ and antinucleons $\overline N$ we have: $$\begin{aligned} \label{Npi} \pi &=& {V} g_\pi\!\!\int\!\!\frac{d^3p}{(2\pi)^3} \frac{1}{\gamma_q^{ -2}e^{E_\pi/T}-1}\,, \qquad E_i=\sqrt{m_i^2+p^2},\quad \gamma_q^2<e^{m_\pi/T}\\[0.3cm] N&=&{V}g_N\!\!\int\!\!\frac{d^3p}{(2\pi)^3} \frac{1}{\gamma_q^{ -3}\lambda_q^{ -3 }e^{E_N/T}+1},\quad \overline {N}= {V}g_N\!\!\int\!\!\frac{d^3p}{(2\pi)^3} \frac{1}{\gamma_q^{ -3 }\lambda_q^{ +3 }e^{E_{\bar N}/T}+1}.\end{aligned}$$ There are two types of chemical factors, $\gamma_i$ and $\lambda_i$, and thus two types of chemical equilibria. These are shown in table \[parameters\]. The absolute equilibrium is reached when the phase space occupancy approaches unity, $\gamma_i\to 1$. The distribution of flavor (strangeness) among many hadrons is governed by the relative chemical equilibrium.
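The Bose integral for the pion yield above is straightforward to evaluate numerically. The following sketch (illustrative parameter values, density per charge state, not a fit result) shows how an over-saturated occupancy $\gamma_q>1$ enhances the pion density while respecting the condensation bound $\gamma_q<e^{m_\pi/2T}$:

```python
# Sketch (illustrative values, not a fit result): numerical evaluation of the
# Bose-Einstein pion density from the momentum integral above, for a single
# charge state (g = 1).  The occupancy gamma_q must stay below the pion
# condensation bound gamma_q < exp(m_pi/2T).
import math
from scipy.integrate import quad

HBARC = 197.327  # MeV fm, converts MeV^3 to 1/fm^3

def pion_density(T, gamma_q, m=139.6, g=1):
    """Pions per fm^3: g/(2 pi^2) * int p^2 dp / (gamma_q^-2 exp(E/T) - 1)."""
    assert gamma_q < math.exp(m / (2 * T)), "violates pion condensation bound"
    def integrand(p):
        E = math.sqrt(m * m + p * p)
        return p * p / (math.exp(E / T) / gamma_q**2 - 1.0)
    val, _ = quad(integrand, 0.0, 30 * T)
    return g * val / (2 * math.pi**2 * HBARC**3)

n_eq = pion_density(T=140.0, gamma_q=1.0)     # chemical equilibrium
n_osat = pion_density(T=140.0, gamma_q=1.55)  # over-saturated occupancy
```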
  --------------- -------------------------------------------------------------- -------------------------------
  $\gamma_{i}$    controls overall abundance of quark ($i=q,s$) pairs            Absolute chemical equilibrium
  $\lambda_{i}$   controls difference between quarks and antiquarks ($i=q,s$)    Relative chemical equilibrium
  --------------- -------------------------------------------------------------- -------------------------------

  : \[parameters\]Four quarks $s,\ \overline{s},\ q,$ and $\overline{q}$ require four chemical parameters; the right column names the associated chemical equilibrium.

In order to arrive at the full particle yield, one has to be sure to include all the hadronic resonances which decay feeding into the yield considered, [*e.g.*]{}, the decay $K^*\to K+\pi$ feeds into the $K$ and $\pi$ yields. The contribution is sensitive to the temperature at which these particles are formed. Inclusion of the numerous resonances constitutes a bookkeeping challenge in the study of particle multiplicities, since decays contribute at the 50% level to practically all particle yields. A public statistical hadronization program, SHARE (Statistical HAdronization with REsonances), has simplified this task considerably [@share]. The resonance decay contribution is dominant in the case of the pion yield. This happens even though each resonance contributes relatively little to the final count. However, the large number of resonances which contribute compensates, and the sum of small contributions competes with the direct pion yield. For the heavier hadrons, generally there is a dominant contribution from just a few, or even from a single resonance. The exceptions are the $\Omega,\overline\Omega$, which have no known low mass resonances, and also the $\phi$ – the known resonances are very heavy and very few. A straightforward test of sudden hadronization and the SHM is that within a particle ‘family’, particle yields with the same valence quark content are in relation to each other well described by integrals of relativistic phase space.
The relative yield of, [*e.g.*]{}, $K^*(\bar s q)$ and $K(\bar s q)$ or $\Delta$ and $N$ is controlled by the particle masses $m_i$, statistical weights (degeneracies) $g_i$ and the hadronization temperature $T$. In the Boltzmann limit one has (the star denotes the resonance): $$\label{RRes} {N^*\over N}= {g^*m^{*\,2}K_2(m^*/T)\over g\,m^{2}K_2(m/T)}.$$ Validity of this relation implies insensitivity of the quantum matrix element governing the coalescence-fragmentation production of particles to intrinsic structure (parity, spin, isospin), and particle mass. The measurement of the relative yield of hadron resonances is a sensitive test of the statistical hadronization hypothesis and lays the foundation for the application of the method in data analysis. The accuracy of the method available to measure resonance yields depends significantly on the precise nature of the hadronization process: the resonance yield is derived by reconstruction of the invariant mass of the resonance from the decay products’ energies $E_i$ and momenta $p_i$. Should the decay products of resonances rescatter on other particles after formation, then often their energies and momenta will change enough for the invariant mass to fail the acceptance conditions set in the experimental analysis. Generally, the rescattering effect depletes more strongly the yields of shorter-lived resonances, as a greater fraction of these will decay shortly after formation, when elastic scattering of decay products on other produced particles is possible. We further often hear the argument that the general scattering process of hadrons in matter can form additional resonance states. In our opinion, the loss of observability (caused by [*any*]{} scattering of [*any*]{} of the decay products) is considerably greater than a possible production gain. The loss of resonance yield provides additional valuable information about the freeze-out conditions (temperature and time duration) [@Rafelski:2001hp].
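The Boltzmann-limit ratio above is easily evaluated with a standard Bessel-function routine; the sketch below (our own illustration, with assumed spin degeneracies $g=3$ for $K^*$ and $g=1$ for $K$, masses rounded to MeV) estimates the thermal $K^*(892)/K$ ratio at a hadronization temperature of 140 MeV:

```python
# Sketch: Boltzmann-limit resonance-to-ground-state ratio,
#   N*/N = g* m*^2 K2(m*/T) / (g m^2 K2(m/T)),
# evaluated with scipy's integer-order modified Bessel function of the
# second kind.  Masses and temperature in MeV.
from scipy.special import kn

def yield_ratio(m_star, g_star, m, g, T):
    """Relative thermal yield of a resonance (m*, g*) to its ground state (m, g)."""
    return (g_star * m_star**2 * kn(2, m_star / T)) / (g * m**2 * kn(2, m / T))

# K*(892) relative to K(494) at T = 140 MeV, with assumed g = 3 and g = 1
r = yield_ratio(892.0, 3, 494.0, 1, 140.0)
```

By construction the ratio is 1 when resonance and ground state coincide, and it falls with increasing mass gap or decreasing temperature.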
Phase thresholds: volume and energy dependence {#depen} ============================================== Statistical model parameters {#paramet} ---------------------------- In order to explore the properties of the fireball at hadronization as a function of the volume at the top RHIC energy $\sqrt{s_{NN}}=200$ GeV Au–Au, we study the 11 centrality bins in which the $\pi^\pm, {\rm K}^\pm, p$ and $\bar p$ rapidity yields $dN/dy$ at $y_{\rm CM}=0$ have recently been presented, see table I and table VIII in Ref. [@phenixyield]. These 6 particle yields and their ratios change rapidly with centrality. On the other hand, two additional experimental results, the STAR ${\rm K}^*(892)/{\rm K}^-$ [@haibin2200] and $\phi/{\rm K}^-$ [@phiyld] ratios, show little centrality dependence. In addition, three supplemental constraints help to determine the best fit:\ A) strangeness conservation, [*i.e.*]{}, the (grand canonical) count of $s$ quarks in all hadrons equals the corresponding $\bar s$ count for each rapidity unit;\ B) the electrical charge to net baryon ratio in the final state is the same as in the initial state;\ C) the ratio $\pi^+/\pi^-=1.\pm0.02$, which helps constrain the isospin asymmetry.\ This last ratio appears redundant, as we already independently use the yields of $\pi^+$ and $\pi^-$. However, these yields have a large systematic error and do not constrain their ratio well, and thus the supplemental constraint is introduced, since SHARE allows for the isospin asymmetry effect. The 7 SHM parameters (volume per unit of rapidity $dV/dy$, temperature $T$, four chemical parameters $\lambda_q, \lambda_s, \gamma_q, \gamma_s$, and the isospin factor $\lambda_{I3}$) are in this case studied in a systematic fashion as a function of impact parameter using 11 yields and/or ratios and/or constraints, containing one (pion ratio) redundancy.
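The structure of such a fit can be sketched in a few lines. The following is a toy version only (hypothetical yields and errors, NOT the PHENIX data), reduced to two free parameters ($dV/dy$ and $T$) with fixed, assumed fugacity factors; the actual SHARE analysis uses all 7 parameters plus the constraints A)–C):

```python
# Sketch only (hypothetical yields, NOT the PHENIX data): a chi^2 fit of
# Boltzmann-limit midrapidity yields with two free parameters, dV/dy and T.
import numpy as np
from scipy.special import kn
from scipy.optimize import minimize

HBARC = 197.327  # MeV fm

def dndy(dVdy, T, m, g, ups):
    """Boltzmann-limit yield per unit rapidity; dV/dy in fm^3, m and T in MeV."""
    return dVdy * g * ups * m**2 * T * kn(2, m / T) / (2 * np.pi**2 * HBARC**3)

# (mass MeV, degeneracy, assumed fugacity), hypothetical dN/dy data, 10% errors
species = [(139.6, 1, 2.5), (493.7, 1, 3.0), (938.3, 2, 4.0)]
data = np.array([250.0, 45.0, 18.0])
err = 0.1 * data

def chi2(x):
    dVdy, T = x
    if dVdy <= 0 or T <= 0:  # keep the minimizer in the physical region
        return 1e12
    model = np.array([dndy(dVdy, T, m, g, u) for m, g, u in species])
    return float(np.sum(((model - data) / err) ** 2))

fit = minimize(chi2, x0=[1500.0, 150.0], method="Nelder-Mead")
dVdy_fit, T_fit = fit.x
```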
Although the number of degrees of freedom in such an analysis is small, the $\chi^2$ minimization yielding good significance is easily accomplished, showing the consistency of the data sample. The resulting statistical parameters are shown in Fig.\[gammu\], as a function of participant number. We show on the left the results for the full non-equilibrium model allowing $\gamma_q\ne 1, \gamma_s\ne 1$ (full circles, blue) and the semi-equilibrium setting $\gamma_s= 1$ (open circles, red). From top to bottom: the (chemical) freeze-out temperature $T$, the occupancy factors $\gamma_q$ and $\gamma_s/\gamma_q$, and together in the bottom panel the baryon $\mu_B$ and strangeness $\mu_S$ chemical potentials. On the left in Fig.\[gammu\] we present the results for the centrality dependence, as a function of participant number; on the right, as a function of energy. To study the energy dependence, we must assemble several different experimental results from different facilities and experiments [@edepend]. The results of this extensive analysis are shown on the right hand side in Fig.\[gammu\]. We note that when we are able to consider the total particle abundances, the number of participating nucleons is a tacit fit parameter. The lowest energy result is from our AGS study [@AGS]; the SPS data we used are from the NA49 energy dependence exploration at the CERN-SPS [@Gaz; @GazPriv]. These results are for the total particle yields. The two highest energy results are based on studies of RHIC data at 130 and 200 GeV at central rapidity and address the $dN/dy$ particle yields, with the highest point corresponding to the results presented at greatest centrality on the left hand side in Fig.\[gammu\]. There are several relevant features in Fig.\[gammu\]. We see on the left that, for $A>20$, there is no centrality dependence in the freeze-out temperature $T$ and chemical potentials $\mu_{B,S}$ (up to the variation which can be associated with fluctuations in the data sample). However, there is a change in the values of the chemical potentials with reaction energy.
This is the result of the rapidly decreasing baryon density which, due to reduced stopping, is distributed over a wider range of rapidity as the reaction energy increases. The middle sections in Fig.\[gammu\] address the phase space occupancies, which were obtained in terms of hadron particle yields. The quark-side occupancy parameters could be considerably different, indeed, as model studies show, a factor 2 lower at the discussed conditions [@CUP]: the hadron-side phase space size is in general different from the quark-side phase space, since the particle degeneracies and particle spectra are quite different. We see similar behavior of $\gamma_q$ as of $T$, both for the volume (that is, ‘wounded’ participant number $A$) and the energy dependence seen in Fig.\[gammu\]: the two lowest energy bins (top AGS and lowest SPS energy) deviate from the behavior seen at all other energies, as do the bins with $A<20$. $\gamma_s/\gamma_q$ as a function of centrality rises steadily, indicating a longer lifespan of the fireball with increasing size. As a function of energy, $\gamma_s/\gamma_q$ reaches a plateau at 30 $A$GeV, with a further rise only seen in the central rapidity results at RHIC. We note that, in our analysis, there is no saturation of $\gamma_s$ as we approach the most central reactions. This is inherent in the data we consider, which include the yields of $\phi$ and $K^*$. This result is consistent with the implication that strangeness is not fully saturated in the QGP source, though it appears over-saturated when measured in the hadron phase space. The deviation at the most peripheral centrality bins and at the lowest reaction energy from the trends set by the other results could be an indication of a change in the reaction mechanism. As the thresholds of centrality and/or energy are crossed, a relatively small value $\gamma_q\simeq 0.5$ grows rapidly to the maximum allowed by the pion condensation condition, $\gamma_q\simeq e^{m_\pi/2T}\simeq 1.6$.
This behavior signals a transformation of a chemically under-saturated phase of matter into something novel, where chemical equilibration is easy, and results in hadronization in an over-saturated phase of matter. Independent of the chemical (non-)equilibrium assumption, the baryochemical potential $\mu_B=25\pm1$ MeV is seen across the 10 centrality bins. Similarly, we find the strangeness chemical potential $\mu_S=5.5\pm0.5$ MeV (related to the strange quark chemical potential $\mu_s=\mu_B/3-\mu_S$). The most notable variation, in Fig.\[gammu\], is the gradual increase in the strangeness phase space occupancy $\gamma_s/\gamma_q$, and thus the strangeness yield, with collision centrality. This effect was predicted and originates in an increasing lifespan of the fireball [@impact]. The over-saturation of the phase space has also been expected due to both the dynamics of expansion [@RHICPred], and/or the reduction in phase space size as parton based matter turns into HG [@JRBielefeld]. This latter effect is also held responsible for the saturation of the light quark phase space, $\gamma_q\to e^{m_\pi/2T}$. A systematic increase of $\gamma_s$ with collision centrality has been reported for several reaction energies [@Kampfer:2003pf]. Hadronization Temperature {#Tphase} ------------------------- Let us now discuss in more depth the magnitude of the hadronization temperature, which at high energy/central collisions we find at $T=140$ MeV. Some prefer the statistical hadronization to occur at a higher temperature, perhaps as high as $T=175$ MeV, a point argued at this meeting in great detail by Dr. Peter Braun-Münzinger. In his presentation we heard that he believes that the lattice results will reach such a high temperature near $\mu_{\rm B}=0$, and that this is where one should expect to see hadronization. We disagree with both claims. For one, we note that [*chemical equilibrium QCD lattice*]{} [@Fodor:2004nz] results are mature and yield $T =163\pm2$ MeV when extrapolated to the physical quark mass scale.
This result is in very good agreement with the prior work on 2 and 3 flavor QCD, which brackets this result by $T_{n_f=2,3}=T\pm 10$ MeV [@Karsch:2001vs]. Moreover, heavy ion collisions present a highly dynamical environment, and one has to pay tribute to this especially regarding the value of the hadronization temperature. For this reason we do not expect to find that the observed hadronization condition $T(\mu_{\rm B})$ will line up with the phase boundary curve obtained in the study of a statistical system in the thermodynamic limit on the lattice: a)\ A widely discussed effect which displaces the hadronization condition from the phase boundary is the expansion dynamics of the fireball. When the collective flow occurs at the parton level, the color charge flow, like a wind, pushes out the vacuum [@Csorgo:2002kt], adding to the thermal pressure a dynamical component. This can in general lead to supercooling and a sudden breakup of the fireball. We find that this can reduce the effective hadronization temperature by up to 20 MeV [@suddenPRL]. b)\ Lattice results have been discussed for 2-flavor lattice QCD, corresponding to $\gamma_q=1,\gamma_s=0$, and for 2+1 flavors, corresponding to $\gamma_q=\gamma_s=1$. While the precise nature of the phase limit is still under study, it appears that in the 2-flavor case the phase boundary temperature rises by about 7–10 MeV compared to the 2+1 case. We refer to the recent review of lattice QCD for further details [@Karsch:2003jg]. Similarly, the phase limit in the pure gauge case, corresponding in a loose sense to $\gamma_q=\gamma_s=0$, was seen near or even above $T=200$ MeV. These results do suggest that the presence and the number of quarks matter regarding the precise location of the phase boundary and its nature. Its importance could be greatly enhanced, should the over-saturation of the quark phase space have the same effect as would additional quark degrees of freedom.
These are known, even at $\mu_{\rm B}=0$, to convert the phase crossover into a 1st-order phase transition, which, with these additional degrees of freedom, would be expected at just the temperature we find in the SHM analysis. We can be nearly sure that the chemical conditions matter and can displace the transition temperature. Because the degree of equilibration in the QGP depends on the collision energy, as does the collective expansion velocity, we cannot at all expect a simple hadronization scheme appropriate for the hadronization of the nearly adiabatically expanding Universe. Leaving this issue, we note that, in the data analysis assuming chemical equilibrium, we find $T=155\pm8$ MeV for the chemical equilibrium and strangeness non-equilibrium freeze-out. The error is our estimate of the propagation of the systematic data error, combined with the fit uncertainty; the reader should note that the error comparing centrality to centrality is negligible. For the semi-equilibrium and equilibrium models, the freeze-out temperature is about 10% greater than for the full chemical non-equilibrium freeze-out. This result for $T$, in the equilibrium case, is in mild disagreement (1.5 s.d.) with earlier equilibrium fits [@BDMRHIC; @Broniowski:2003ax]. This, we believe, is due to differences in the data sample used (specifically, the hadron resonance production results provide a very strong constraint on the fitted temperature) and to the more complete treatment of the hadron mass spectrum by SHARE. Interestingly, it seems that the general consensus about the best chemical equilibrium analysis result is in gross disagreement with the results advanced at this meeting by Dr. Peter Braun-Münzinger.

Physical Properties of the Fireball {#physres}
-----------------------------------

We now turn our attention to the physical properties of the hadronizing fireball, obtained by summing the individual contributions of each of the hadronic particles produced.
Often the particles observed experimentally dominate a given quantity ([*e.g.*]{}, pions dominate the pressure, kaons the strangeness yield, etc.). However, it is important to include in these sums the yields of particles predicted by the SHM fit to the available results. Again, on the left in Fig.\[Phys\] we show the behavior as a function of impact parameter, and on the right as a function of energy. From top to bottom we show the pressure $P$, energy density $\epsilon$, entropy density $\sigma$, and the dimensionless ratio $\epsilon/T\sigma=E/TS$. All contributions are evaluated using relativistic expressions, see [@CUP]. When we fit particle rapidity yields, the global fitted yield normalization factor is $dV(A)/dy $; for total particle yields it is $V$, which is also a function of the centrality trigger. When we consider ratios of two bulk properties, [*e.g.*]{} $E/TS$, the results are in general smoother, indicating cancellation of the error propagating from the fit. The overall error is of the magnitude of the particle yield error; for example, the pressure is dominated by pions and hence its precision is limited by the pion yield error. However, on the left, the point-to-point error is minimal, as the systematic error is common. On the right, the absolute error matters, as the fluctuations in the results presented show. We note that, as the reaction volume passes the threshold $A=20$ and, even more so, as the energy passes the threshold $6.26\,{\rm GeV}<\sqrt{s_{\rm NN}}<7.61\,{\rm GeV}$, the hadronizing fireball becomes much denser. The entropy density jumps by a factor of 3–4, and the energy and baryon number density by a factor of 2–3. The hadron pressure jumps up from $P=25\, {\rm MeV}/{\rm fm}^3$, initially by a factor of 2 and ultimately by more than a factor of 3. There is a gradual increase of $P/\epsilon$ from 0.115 at low reaction energy to 0.165 at the top available energy. It is important to note that exactly the same behavior of the fireball physical properties arises both as a function of reaction volume and of reaction energy.
This is the case for both the physical properties and the statistical parameters. We believe that this shows a common change in the physical state created as a function of the energy available and the reaction volume.

Search for an Energy Threshold {#compare}
==============================

Kaon to pion ratio {#Kpisec}
------------------

One of the most interesting questions is whether there is an energy threshold for the formation of a new state of matter. Aside from strangeness, an important observable of the deconfined phase of matter is the high entropy content, which arises from broken color bonds. The observable for both is the K$^+/\pi^+ \propto \bar s/\bar d$ yield ratio [@Glendenning:1984ta]. This ratio has been studied experimentally [@Gaz], and a pronounced horn structure arises. We can describe this structure in our study of the particle yields only within the chemical non-equilibrium model. Although this change is associated with a rather sudden modification of the chemical conditions in the dense matter fireball, the effect is caused by two distinct phenomena: the rapid rise in strangeness $\bar s$ production below, and a rise in the antiquark $\bar d$ yield above, a reaction energy threshold. The measured ${\rm K}^+/{\pi}^+$ ratio by NA49 is shown at the top left of the figure, where for comparison we also show the $pp$ results. At the top right, we present our results reduced to the correct relative scale, both for the total yield ratio in the AGS–RHIC energy range and for the central rapidity results from RHIC. The solid line guides the eye to the fit results we obtained. To show that the $K^+/\pi^+$ ratio drop is due to a decrease in baryon density, which leads to a rise in the $\bar d$ yield, we show in the bottom section of the figure the nearly baryon-density-independent ${K}/{\pi}$ double ratio, on the left as a function of $\sqrt{s_{\rm NN}}$, and on the right as a function of the centrality of the reaction for $\sqrt{s_{\rm NN}}=200$ GeV.
Both the upper and lower portions of the figure are drawn on the same relative scale. Seeing how the horn arises specifically in the $K^+/\pi^+$ ratio alone, one can wonder whether this is really a physical effect, and how, in qualitative terms, a parameter $\gamma_q$, which controls the light quark yield, can help explain the horn structure seen at the top of the figure. We observe that this horn structure in the ${\rm K}^+/\pi^+$ ratio traces out the final state valence quark ratio $\bar s/\bar d$; in the language of quark phase space occupancies $\gamma_i$ and fugacities $\lambda_i$, we have: $$\label{KPi} {{\rm K}^+\over \pi^+}\to {\bar s\over \bar d} \propto F(T) \left({\lambda_s\over \lambda_d}\right)^{\!-1} {\gamma_s\over \gamma_d} = F(T)\left(\sqrt{\lambda_{I3}}{\lambda_s\over \lambda_q}\right)^{\!-1} {\gamma_s\over \gamma_q}.$$ In chemical equilibrium models $\gamma_s/\gamma_q=1$, and the ${\rm K}^+/\pi^+$ ratio and its horn must arise solely from the variation in the ratio $\lambda_s/ \lambda_q$ and the change in temperature $T$, both of which are usually smooth functions of the reaction energy. The isospin factor $\lambda_{I3}$ is insignificant in this consideration. As the collision energy is increased, the increased hadron yield leads to a decreasing $\lambda_q=e^{\mu_{\rm B}/3T}$. We recall the smooth decrease of $\mu_{\rm B}$ with reaction energy seen in the bottom panel of the corresponding figure. The two chemical fugacities $\lambda_s$ and $\lambda_q$ are coupled by the condition that strangeness is conserved. This leads to a smooth $\lambda_s/ \lambda_q$. The chemical potential effect thus suggests a smooth increase in the K$^+/\pi^+$ ratio. For the interesting range of freeze-out temperatures, $F(T)$ is a smooth function of $T$. Normally, one expects that $T$ increases with collision energy, hence on this ground as well we expect a monotonic increase in the ${\rm K}^+/\pi^+$ ratio as a function of reaction energy.
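This qualitative argument can be made concrete with a toy parametrization of Eq. (\[KPi\]). All profiles below are invented for illustration only (they are not the SHARE fit): with smooth fugacities and occupancies fixed to one, the ratio rises monotonically, while a rapid jump of $\gamma_q$ across a threshold energy produces a horn.

```python
# Toy illustration of Eq. (KPi): K+/pi+ ~ F(T) (lambda_s/lambda_q)^{-1} gamma_s/gamma_q.
# All parameter profiles are invented for illustration; F(T) is taken constant
# since T is a slowly varying function of energy.
def kpi_ratio(sqrt_s_nn, nonequilibrium):
    lam_ratio = 1.0 + 2.0 / sqrt_s_nn                 # smooth stand-in for lambda_s/lambda_q
    if nonequilibrium:
        gamma_q = 1.0 if sqrt_s_nn < 7.0 else 1.6     # light-quark occupancy jumps at threshold
        gamma_s = min(1.6, 0.3 + 0.2 * sqrt_s_nn)     # slower rise of strangeness occupancy
    else:
        gamma_q = gamma_s = 1.0                       # chemical equilibrium assumption
    return (gamma_s / gamma_q) / lam_ratio

energies = [4, 5, 6, 7, 8, 12, 17, 130, 200]          # schematic sqrt(s_NN) values in GeV
eq  = [kpi_ratio(e, False) for e in energies]         # rises monotonically
neq = [kpi_ratio(e, True)  for e in energies]         # peaks near the threshold: a 'horn'
```

The essential point is that the equilibrium curve cannot turn over, whereas the non-equilibrium curve develops an interior maximum purely from the step in $\gamma_q$.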
With considerable effort, one can arrange the chemical equilibrium fits to bend over to a flat behavior at $\sqrt{s_{\rm NN}^{\rm cr}}$. It is virtually impossible, however, to generate the horn within the chemical equilibrium model. To accomplish this, an additional parameter appears necessary, capable of changing rapidly when the hadronization conditions change. This is $\gamma_q$. Its presence also allows $T$ to vary in a non-monotonic fashion, as is seen in the fit results.

QGP degrees of freedom and $s/S$ ratio {#sSrat}
--------------------------------------

The full SHM is capable of describing the data, and we now show that it can pinpoint the properties of the phase of matter that was created early on in the reaction. To see this, we consider what we learn from the final state data about strangeness and entropy production. For this purpose we consider the specific strangeness yield both per baryon and per unit of entropy. In addition, we look at the cost in thermal energy of making strangeness. All these quantities are nearly independent of the dynamics of hadronization, since they are related to processes occurring early on, ‘deep’ inside the collision region, and long before hadronization. In the QGP, the dominant entropy production occurs during the initial glue thermalization, and the thermal strangeness production occurs in parallel and/or just a short time later [@Alam:1994sc]. The entropy production thus occurs predominantly early on in the collision, during the thermalization phase. Strangeness production by gluon fusion is most effective in the early, high temperature environment; however, it continues to act during the evolution of the hot deconfined phase until hadronization [@RM82]. Both strangeness and entropy are nearly conserved in the evolution towards hadronization, and thus the final state hadronic yield analysis value for $s/S$ is closely related to the thermal processes in the fireball at $\tau\simeq 1$–2 fm/c.
We believe that for reactions in which the system approaches strangeness equilibrium in the QGP phase, one can expect a prescribed ratio of strangeness per entropy, whose value is basically the ratio of the QGP degrees of freedom. We estimate the magnitude of $s/S$ deep in the QGP phase, considering the hot stage of the reaction. For an equilibrated non-interacting QGP phase with perturbative properties: $$\frac{s}{S}= \frac{\frac{3}{\pi^2}\,W(m_{s}/T)} {\frac{32\pi^2}{45}+n_{\rm f}\left[\frac{7\pi^2}{15} +\left(\frac{\mu_q}{T}\right)^{\!2}\right]+\ldots} \simeq {0.027 }.$$ Here, we used for the number of flavors $n_{\rm f}=2.5$ and $m_{ s}/T=1$. We see that the result is a slowly changing function of $\lambda_q$; for large $\lambda_q\simeq 4$, as found at modest SPS energies, the value of $s/S$ is reduced by 10%. Considering the slow dependence of $W(x)=x^2 K_2(x)$ on $x=m_{ s}/T\simeq 1$, there is only a minor dependence on the much less variable temperature $T$. The dependence on the degree of chemical equilibration, which dominates, is easily obtained by separating the different degrees of freedom: $$\frac{s}{S}=0.027\, \frac{\gamma_s^{\rm QGP}} {0.38\,\gamma_{\rm G}^{\rm QGP}+0.5\,\gamma_q^{\rm QGP}+0.12\,\gamma_s^{\rm QGP}}\,.$$ We assume that, at this level of the discussion, the interaction effects cancel. Hence we expect to see a gradual increase in $s/S$ as the QGP source of particles approaches chemical equilibrium with increasing collision energy and/or increasing volume. We repeat that it is important to keep in mind that this ratio $s/S$ is established early on in the reaction; the above relations, and the associated chemical conditions we considered, apply to the early hot phase of the fireball. At hadron freeze-out the QGP picture used above does not apply. Gluons are likely to freeze faster than quarks, and both are subject to much more complex non-perturbative behavior. However, the value of $s/S$ is nearly preserved from the hot QGP to the final state of the reaction. How does this simple prediction compare to experiment?
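The quoted equilibrium value $s/S\simeq 0.027$ follows from standard ideal-gas counting. The following pure-stdlib sketch (my own check, assuming Boltzmann strange quarks with degeneracy $g_s=6$, massless gluons and $n_{\rm f}=2.5$ effective light flavors at $\mu=0$) reproduces it numerically:

```python
# Numerical check of s/S ~ 0.027 for an ideal QGP (sketch, not the authors' code).
import math

def K2(x, tmax=25.0, n=20000):
    # Modified Bessel function via K_2(x) = int_0^inf cosh(2t) exp(-x cosh t) dt,
    # evaluated with a simple trapezoidal rule (crude but sufficient here).
    h = tmax / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.cosh(2 * t) * math.exp(-x * math.cosh(t))
    return total * h

def s_over_S(m_s_over_T=1.0, n_f=2.5):
    W = m_s_over_T**2 * K2(m_s_over_T)                  # W(x) = x^2 K_2(x)
    n_s = 3.0 / math.pi**2 * W                          # strange quark density / T^3 (g_s = 6)
    S = 32 * math.pi**2 / 45 + n_f * 7 * math.pi**2 / 15  # entropy density / T^3 at mu = 0
    return n_s / S

ratio = s_over_S()   # close to 0.027
```

With `m_s_over_T=1` and `n_f=2.5` this evaluates to about 0.0267; `scipy.special.kn(2, x)` could replace the hand-rolled quadrature.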
Given the statistical parameters, we can evaluate the yields of particles not yet measured and obtain the rapidity yields of entropy, net baryon number, net strangeness, and thermal energy, both for the total reaction system and for the central rapidity region, also as a function of centrality. In passing, we note that, in the most central reaction bin at RHIC-200, $dB/dy\simeq 15$ baryons per unit rapidity interval, implying a rather large baryon stopping in the central rapidity domain. The rise of the strangeness yield with centrality is faster than the rise of the baryon number yield: $(ds/dy)/ (dB/dy)\equiv s/B$ is seen in the top left panel in Fig.\[PEST\]. The solid (blue) lines are for the chemical non-equilibrium central rapidity yields of particles at RHIC-200. The solid (green) lines, on the right, are for total hadron yields and thus total yields of the considered quantities, [*e.g.*]{} strangeness and entropy. For the most central head-on reactions, we reach at RHIC-200 $ s/B =9.6\pm 1$. In the middle panel of the figure we compare strangeness with entropy production, $s/S$, which we just evaluated theoretically. Again, on the left, as a function of participant number $A$ at RHIC-200 and, on the right, for the total reaction system for the most central 5–7% of reactions. On the left, we see a smooth transition from a flat peripheral behavior, where $s/S\lesssim 0.02 $, to a smoothly increasing $s/S$ reaching $s/S\simeq 0.028$ in the most central reactions. This indicates that even at RHIC-200, for the more central reactions, some new mechanism of strangeness production becomes activated. On the right, we see that the change in $s/S$ is much more drastic as a function of reaction energy. After an initial rapid rise, the further increase occurs beyond the threshold energy at a slower pace.
In the bottom panel, on the left, we see the thermal energy cost $E_{\rm th}/s$ of producing a pair of strange quarks as a function of the size of the participating volume ([*i.e.*]{} $A$). This quantity shows a smooth drop, which can be associated with the transfer of thermal energy into collective transverse expansion after strangeness is produced. Thus, it seems that the cost of strangeness production is independent of reaction centrality. The result is different when we consider the $\sqrt{s_{\rm NN}}$ dependence of this quantity; see the bottom panel on the right. There is a very clear change in the energy efficiency of making strangeness at the threshold energy. We will return to discuss possible reaction mechanisms below.

Final remarks {#final}
=============

Phase boundary and hadronization conditions {#PhaseB}
-------------------------------------------

The chemical freeze-out conditions we have determined present, in the $T$–$\mu_{\rm B}$ plane, a more complex picture than naively expected. Considering the results shown there, we are able to assign to each point in the $T$–$\mu_{\rm B}$ plane the associated value of $\sqrt{s_{\rm NN}}$. The RHIC $dN/dy$ results are at the outer left. They are followed by the RHIC and SPS $N_{4\pi}$ results. The dip corresponds to the 30 and 40 $A$GeV SPS results. At the top right are the lowest SPS energy, 20 $A$GeV, and the top AGS energy, 11.6 $A$GeV. To guide the eye, we have added two lines connecting the fit results. We see that the chemical freeze-out temperature $T$ rises for the two lowest reaction energies, 11.6 and 20 $A$GeV, to near the Hagedorn temperature, $T=160$ MeV, of boiling hadron matter. Once chemical non-equilibrium is allowed for, the data fit turns out to be much more precise, and the picture of the phase boundary, with smaller ‘measurement’ error, reveals a much more complex structure and contains physics details that prior analyses based on rudimentary models could not uncover.
The shape of the hadronization boundary, shown in the $T$–$\mu_{\rm B}$ plane, is the result of a complex interplay between the dynamics of the heavy ion reaction and the properties of both phases of matter: the inside of the fireball, and the hadron phase we observe. The dynamical effect capable of shifting the temperature location of the expected phase boundary is the expansion dynamics of the fireball, which occurs at the parton level, together with the effects of chemical non-equilibrium; see subsection \[Tphase\] for a full discussion. Possibly, not only the location but also the [*nature*]{} of the phase boundary can be modified by the variation of $\gamma_i$. We recall that for the 2+1-flavor case, there is a critical point at finite baryochemical potential, $\mu_{\rm B}\simeq 350$ MeV [@Fodor:2004nz]. For the case of 3 massless flavors, there can be a 1st-order transition at all $\mu_{\rm B}$ [@Karsch:2001vs; @Peikert:1998jz; @Bernard:2004je]. Considering a classical particle system, one easily sees that an over-saturated phase space, [*e.g.*]{} with $\gamma_q=1.6,\ \gamma_s\ge \gamma_q $, acts, for the purpose of the study of the phase transition, as being equivalent to a system with 3.2 light and 1.6 massive (strange) quark degrees of freedom present in the confined hadron phase. Considering the sudden nature of the fireball breakup seen in several observables [@RBRC], we conjecture that the hadronizing fireball leading to $\gamma_s>\gamma_q=1.6$ super-cools and experiences a true 1st-order phase transition even at small $\mu_{\rm B}$. The system we observe in the final state just prior to hadronization is mainly a quark–antiquark system, with gluons frozen out during the prior expansion cooling of the QCD deconfined parton fluid. The quark dominance is necessary to understand, [*e.g.*]{}, how the azimuthal asymmetry $v_2$ varies for different particles [@Huang:2005nd]. These quarks and antiquarks have, in principle, a significant thermal mass at that stage.
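The scale of this thermal mass can be read off by inverting the classical-gas relation $E/TS\simeq(m/T+3/2)/(m/T+5/2)$ quoted in the following paragraph; a minimal numerical aside (my own, purely algebraic):

```python
# Invert E/TS = (m/T + 3/2)/(m/T + 5/2) to estimate the quasiparticle
# mass scale m/T from a fitted value of E/TS (illustrative helper).
def mass_over_T(e_over_ts):
    # solve (x + 1.5)/(x + 2.5) = r for x = m/T
    return (2.5 * e_over_ts - 1.5) / (1.0 - e_over_ts)

x = mass_over_T(0.78)   # close to m/T = 2, i.e. m ~ 2 T
```

For the fitted $E/TS\to 0.78$ this gives $m/T\simeq 2$, consistent with the value quoted in the text.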
The evidence for this is seen in the two bottom panels of the figure, showing the dimensionless variable $E/TS$. The energy and entropy per particle of a non-relativistic and semi-relativistic classical particle gas comprising both quarks and antiquarks is (see section 10 of [@CUP]): $${ E\over N}\simeq m+3/2\, T+\ldots,\quad { S\over N} \simeq 5/2+m/T+\ldots,\qquad {E\over TS}\simeq {m/T+3/2\over m/T+5/2}\ .$$ It is thus possible to interpret the fitted value $E/TS\to 0.78$ in terms of a quark matter made of particles with $m\simeq aT$, $a=2$, which is close to what is expected based on thermal QCD [@Petreczky:2001yp].

Looking forward to LHC {#LHC}
----------------------

We expect a considerably more violent transverse expansion of the fireball of matter created at LHC. The kinetic energy of this transverse motion must be taken from the thermal energy of the expanding matter, and ultimately this leads to local cooling and thus a reduction in the number of quarks and gluons. The local entropy density decreases, but the expansion assures that the total entropy still increases. Primarily, gluons feed the expansion dynamics, while the strange quark pair yield, being more weakly coupled, remains least influenced by this dynamics. Model calculations show that this expansion yields an increase in the final QGP phase strangeness occupancy $\gamma_s$ [@Raf99R]. This mechanism, along with the required depletion of the non-strange degrees of freedom in the feeding of the expansion, assures an increase in the $K/\pi$ ratio, given the nearly 30-fold increase of the collision energy. Depending on what we believe to be a valid hadronization temperature for a fast transversely expanding fireball, the possible maximal enhancement in the $K/\pi$ ratio may be in the range of a factor of 2–3. Perhaps even more interesting than the $K/\pi$ ratio enhancement would be the enhancement anomaly in strange (anti)baryon yields.
With $\gamma_s\gg 1$, we find that the more strange baryons and antibaryons become more abundant than the more ‘normal’ species. Of specific interest would be $(\Omega^-+\overline\Omega^+)/(h^++h^-)$, $(\Xi^-+\overline\Xi^+)/(h^++h^-)$, and $2\phi/(h^++h^-)$, which should show a major, up to an order of magnitude, shift in relative production strength. Detailed predictions for the yields of these particles require considerable extrapolation of the physics conditions from the RHIC to the LHC domain, and this work is in progress [@LHCPred]. Ultimately, strange $s$ and $\bar s$ quarks can exceed the light quark component in abundance, in which case we would need to rethink the distribution of the global particle yield in much more detail. The ratios of neutral and charged hadrons would undergo serious change. Regarding charm, we note that the situation will not become as severe. Given the large charm quark mass, we expect that most of the charm quark yield is due to the first hard interactions of the primary partons. For this reason, the yield of strange and light quarks at the time of hadronization exceeds that of charm at central rapidity by a factor of about 100 or more. Thus, even though the charm phase space occupancy at hadronization may largely exceed the chemical equilibrium value, given the low hadronization temperature, [*e.g.*]{} $m_c/T\simeq 10$, it takes a factor $\gamma_c\simeq e^{10}/ 10^{1.5}=700$ to compensate the hadron yield suppression due to the high charm mass. Said differently, while strange quarks can compete in abundance with light quarks for $m_s/T\simeq 1$, charm (and heavier) flavor(s) will remain suppressed, in absolute yield, at the temperatures we can presently reach in laboratory experiments.

Highlights
----------

We have shown that strangeness and entropy are, at SPS and RHIC, well developed tools allowing the detailed study of the hot QGP phase.
Our detailed discussion of the hadronization analysis results has further shown that a systematic study of strange hadrons fingerprints the properties of a new state of matter at the point of its breakup into final state hadrons. We have shown that it is possible to describe the ‘horn’ in the $K^+/\pi^+$ hadron ratio within the chemical non-equilibrium statistical hadronization model. We have shown that the appearance of this structure is related to a rapid change in the properties of the hadronizing matter. Of most theoretical relevance and interest are the implications of non-equilibrium hadronization for the possible change in the location and [*nature*]{} of the phase boundary. In summary, we have presented an interpretation of the experimental soft hadron production data and discussed the production of strangeness and entropy that this analysis infers. We have seen, in a quantitative way, how the relative strangeness and entropy production in the most central high energy heavy ion collisions agrees with quark–gluon degree of freedom counting in the hot primordial matter where the values of these quantities have been established.

Acknowledgments {#acknowledgments .unnumbered}
---------------

Work supported in part by a grant from the U.S. Department of Energy, DE-FG02-04ER4131. LPTHE, Univ. Paris 6 et 7, is: Unité mixte de Recherche du CNRS, UMR7589. JR thanks Bikash Sinha and Jan-e Alam and the organizers of the 5th International Conference on Physics and Astrophysics of Quark Gluon Plasma, February 8–12, 2005, Salt Lake City, Kolkata, India, for their kind hospitality. Dedicated to Professor Bikash Sinha on the occasion of his 60th anniversary.

References {#references .unnumbered}
==========

[19]{} Assessments by the RHIC experimental collaborations: [*Hunting the Quark Gluon Plasma: results from the first 3 years at RHIC*]{}, BNL-73847-2005 Formal Report, April 18, 2005, to appear in Nucl. Phys. A (2005). J. Letessier and J.
Rafelski, [*Hadrons and quark - gluon plasma*]{} Cambridge Monogr. Part. Phys. Nucl. Phys. Cosmol. [**18**]{} 1-397 (2002) (Cambridge, UK, 2002). Available on line to read freely with WIN and MAC platforms at http://site.ebrary.com/pub/cambridgepress/Doc?isbn=0521385369 G. Torrieri, W. Broniowski, W. Florkowski, J. Letessier and J. Rafelski, \[arXiv:nucl-th/0404083\], Computer Physics Communications in press, see: www.physics.arizona.edu/$\tilde{\phantom{.}}$torrieri/SHARE/share.html J. Letessier, J. Rafelski and G. Torrieri, “Deconfinement energy threshold: Analysis of hadron yields at 11.6-A-GeV,” arXiv:nucl-th/0411047. J. Rafelski, J. Letessier and G. Torrieri, “Centrality dependence of bulk fireball properties at RHIC,” arXiv:nucl-th/0412072. J. Letessier and J. Rafelski, “Hadron production and phase changes in relativistic heavy ion collisions,” arXiv:nucl-th/0504028. J. Rafelski and J. Letessier, Acta Phys. Polon. B [**34**]{}, 5791 (2003) \[arXiv:hep-ph/0309030\]; and J. Phys. G [**30**]{} (2004) S1 \[arXiv:hep-ph/0305284\]. J. Letessier, A. Tounsi, U. W. Heinz, J. Sollfrank and J. Rafelski, Phys. Rev. Lett.  [**70**]{}, 3530 (1993). H. Caines \[STAR Collaboration\], J. Phys. G [**31**]{}, S1057 (2005). G. E. Bruno \[NA57 Collaboration\], J. Phys. G [**30**]{}, S717 (2004) \[arXiv:nucl-ex/0403036\]. M. Gazdzicki [*et al.*]{} \[NA49 Collaboration\], J. Phys. G [**30**]{}, S701 (2004) \[arXiv:nucl-ex/0403023\]. J. Rafelski, J. Letessier and G. Torrieri, Phys. Rev. C [**64**]{}, 054907 (2001) \[Erratum-ibid. C [**65**]{}, 069902 (2002)\]. S. S. Adler [*et al.*]{} \[PHENIX Collaboration\], Phys. Rev. C [**69**]{}, 034909 (2004) \[arXiv:nucl-ex/0307022\]. H. B. Zhang \[STAR Collaboration\], “Delta, K\* and rho resonance production and their probing of freeze-out dynamics at RHIC,”, poster presentation at QM2004, Oakland, January 2004 \[arXiv:nucl-ex/0403010\];\ J. Adams \[STAR Collaboration\], \[arXiv:nucl-ex/0412019\], Phys. Rev. C. (2005) in press. J. 
Adams [*et al.*]{} \[STAR Collaboration\], Phys. Lett. B [**612**]{}, 181 (2005) \[arXiv:nucl-ex/0406003\]. M. Gazdzicki, Commented compilation of NA49 results, private communication. J. Letessier, A. Tounsi and J. Rafelski, Phys. Lett. B [**389**]{} (1996) 586. J. Rafelski and J. Letessier, Phys. Lett. B [**469**]{}, 12 (1999) \[arXiv:nucl-th/9908024\]. J. Rafelski and J. Letessier, Nucl. Phys. A [**702**]{}, 304 (2002) \[arXiv:hep-ph/0112027\]. B. Kampfer, J. Cleymans, P. Steinberg and S. Wheaton, Heavy Ion Phys.  [**21**]{}, 207 (2004). Z. Fodor and S. D. Katz, JHEP [**0404**]{}, 050 (2004) \[arXiv:hep-lat/0402006\]. F. Karsch, Nucl. Phys. A [**698**]{}, 199 (2002) \[arXiv:hep-ph/0103314\]. T. Csorgo and J. Zimanyi, Heavy Ion Phys.  [**17**]{}, 281 (2003) \[arXiv:nucl-th/0206051\]. J. Rafelski and J. Letessier, Phys. Rev. Lett.  [**85**]{}, 4695 (2000) \[arXiv:hep-ph/0006200\]. F. Karsch and E. Laermann, arXiv:hep-lat/0305025, in R.C. Hwa [*et al.*]{}: Quark Gluon Plasma III (2004) pp. 1-59 (World Scientific, Singapore). P. Braun-Munzinger, K. Redlich and J. Stachel, \[arXiv:nucl-th/0304013\], and references therein. W. Broniowski, W. Florkowski and B. Hiller, Phys. Rev. C [**68**]{}, 034911 (2003) \[arXiv:nucl-th/0306034\]. N. K. Glendenning and J. Rafelski, Phys. Rev. C [**31**]{}, 823 (1985). M. van Leeuwen, Compilation of [NA49]{} results as function of collision energy, private communication (2003). J. Alam, B. Sinha and S. Raha, Phys. Rev. Lett.  [**73**]{}, 1895 (1994). J. Rafelski and B. Müller, Phys. Rev. Lett. [**48**]{}, 1066 (1982); [**56**]{}, 2334E (1986). J. Rafelski and J. Letessier, Phys. Lett. B [**469**]{}, 12 (1999). A. Peikert, F. Karsch, E. Laermann and B. Sturm, Nucl. Phys. Proc. Suppl.  [**73**]{}, 468 (1999). C. Bernard [*et al.*]{} \[MILC Collaboration\], Phys. Rev. D [**71**]{}, 034504 (2005). H. Z. Huang and J. Rafelski, AIP Conf. Proc.  [**756**]{}, 210 (2005) \[arXiv:hep-ph/0501187\]. P. Petreczky, F. Karsch, E. Laermann, S. Stickan and I. Wetzorke, Nucl. Phys. Proc. Suppl.  [**106**]{}, 513 (2002).
J. Rafelski and J. Letessier, “Soft hadron relative multiplicities at LHC” (in preparation).
--- abstract: 'We analyze the effect of weak localization (WL) and weak antilocalization (WAL) in the electronic transport through HgTe/CdTe quantum wells. We show that for increasing Fermi energy the magnetoconductance of a diffusive system with inverted band ordering features a transition from WL to WAL and back, if spin-orbit interactions from bulk and structure inversion asymmetry can be neglected. This, and an additional splitting in the magnetoconductance profile, is a signature of the Berry phase arising for inverted band ordering and not present in heterostructures with conventional ordering. In presence of spin-orbit interaction both band topologies exhibit WAL, which is distinctly energy dependent solely for quantum wells with inverted band ordering. This can be explained by an energy-dependent decomposition of the Hamiltonian into two blocks.' address: - 'Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg, Germany' - 'Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg, Germany' author: - Viktor Krueckl - Klaus Richter title: | Probing the Band Topology of Mercury Telluride\ through Weak Localization and Antilocalization\ --- Introduction ============ The first theoretical proposal for a two-dimensional topological insulator was based on graphene with intrinsic spin-orbit interaction (SOI) [@Kane2005b; @Kane2005]. Although the spin-orbit coupling of graphene is too small to render an experimental evidence [@Huertas-Hernando2006; @Gmitra2009], this initiated several other suggestions for two-dimensional materials and heterostructures showing topological insulator features [@Bernevig2006b; @Murakami2006; @Liu2008]. Subsequently, the criteria for topological insulators were extended to three dimensions [@Fu2007b] and were experimentally verified in other suitable materials like BiSe [@Fu2007] or Bi$_2$Te$_3$ [@Qu2010; @Xiu2011]. 
In the meantime, the quantum spin Hall effect, a prominent transport feature of two-dimensional topological insulators, has been observed in HgTe/CdTe quantum wells [@Koenig2007; @Roth2009] as well as for InAs/GaSb heterostructures [@Knez2011]. In both experiments the transmission through a mesoscopic Hall bar is quantized since the bulk of the system is insulating and the current is only carried by edge states, which are protected from backscattering due to time-reversal symmetry. Whilst many theoretical investigations are focused on these edge states of two-dimensional topological insulators [@Zhou2008; @Zhang2011; @Krueckl2011b; @Dolcini2011; @Budich2012], signatures of the special band topology are also traceable in other observables even away from the bulk band gap. To this end we consider a well studied phenomenon in phase coherent transport through disordered quantum systems, namely weak localization (WL) [@Altshuler1980] for systems without SOI and weak antilocalization (WAL) [@Hikami1980] in presence of SOI. The effect stems from the self interference of the charge carriers leading to a quantum correction to the classical transmission for time reversal symmetric systems. Breaking of this symmetry can be easily achieved by applying a perpendicular magnetic field. In a semiclassical picture, the effect is understood in terms of interference between waves traveling in opposite directions along backscattered paths and averaging over all such trajectory pairs. Besides the relative phase shift arising from the enclosed flux of an external perpendicular magnetic field, intrinsic Berry phases affect the interference and thereby the WL behavior. As a consequence, the signatures of WL in transport through systems with strong Berry phases, as for example HgTe heterostructures, can differ significantly from those of conventional electron gases. 
To our knowledge, there are only a few theoretical studies of WL in systems with inverted band ordering [@Tkachov2011; @Lu2011PRL]. Diagrammatic studies for the two-dimensional case show a transition between WL and WAL upon varying the chemical potential [@Tkachov2011], similar to the WL physics in topological insulator thin films [@Lu2011PRB; @Garate2012], which is supported by experimental investigations [@He2011; @Liu2012b]. However, major SOI effects from bulk and inversion asymmetry are not included, which alter the WL signal, as we will show in this work. A recent experiment with HgTe heterostructures revealed WAL in diffusive transport [@Olshanetsky2010] and detailed investigations attested an energy dependence of the WAL peak [@Minkov2012]. Since no theories for WL in heterostructures with inverted band ordering including SOI are at hand, only conventional theories for A$_3$B$_5$ semiconductors [@Iordanskii1994; @Knap1996] have yet been applied to analyze these results. In order to explore how WL effects are altered by the inverted band ordering of topological insulators, we perform numerical transport calculations. We confirm the transition between WL and WAL reported in Ref. [@Tkachov2011], if SOI can be neglected. We explain the effect in terms of the Berry phase of the bands involved. Moreover, we additionally find a splitting of the WL magnetoconductance profiles due to the two spin species that can also be traced back to the Berry phase and is not accounted for in previous diagrammatic studies. Additionally, we show how the WL phenomenon is altered by SOI, and how bulk and structure inversion asymmetry lead to significantly different WAL features that can strongly depend on the band ordering. This paper is structured as follows: In Section \[sec:model\] we introduce the model used to describe multi-band quantum transport in diffusive HgTe heterostructures. 
In Section \[sec:noso\] we focus on effects of the Berry phase and the energy dependence of WL and WAL without SOI. In Section \[sec:withso\] we include SOI and show why a variation of WAL upon a change in energy serves as an indicator for inverted band ordering. Finally, in Section \[sec:conclude\] we conclude with a brief summary.

Model {#sec:model}
=====

We describe the electronic properties of the underlying HgTe heterostructure by the Hamiltonian [@Bernevig2006b; @Rothe2010] $$H = \left ( \begin{array}{c c c c} C_k + M_k & A k_+ & - {\mathrm{i}}R k_- & -\Delta \\ A k_- & C_k - M_k & \Delta & 0 \\ {\mathrm{i}}R k_+ & \Delta& C_k + M_k & -A k_-\\ -\Delta& 0 & -A k_+& C_k - M_k\\ \end{array} \right ) \label{Hhgte}$$ where $k_\pm = k_x \pm {\mathrm{i}}k_y$, ${\mathvecfont{k}}^2 = k_x^2 + k_y^2$, $C_k = D {\mathvecfont{k}}^2$ and $M_k = M - B {\mathvecfont{k}}^2$. The material parameters are chosen as $A=354.5\,\mathrm{meV\,nm}$, $B=-686\,\mathrm{meV\,nm^2}$, $D=-512\,\mathrm{meV\,nm^2}$ and $M=\pm10\,\mathrm{meV}$. Without SOI ($R = \Delta = 0$) this Hamiltonian breaks up into two independent ${2}{\times}{2}$ blocks, each consisting of an $s$-like electron and a $p$-like hole band. The topology of the band structure depends on the ordering of the electron and hole states, given by the gap $M$, which is positive for conventional ordering ($M>0$) and negative for inverted ordering ($M<0$). Additionally, in Section \[sec:withso\] we take into account the SOI of strength $\Delta$ and $R$ due to bulk inversion asymmetry (BIA) as well as structure inversion asymmetry (SIA) [@Rothe2010]. While $\Delta$ is fixed (we use $\Delta = 1.6\,\mathrm{meV}$ [@Koenig2008]), the strength of the SOI due to SIA depends on the quantum well structure and can be tuned to very small values by growing symmetric wells. We study the signatures of WL in magnetotransport through diffusive conductors in the presence of a perpendicular magnetic field $B$.
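To make the model concrete, the bulk Hamiltonian [(\[Hhgte\])]{} can be written down numerically. The following minimal NumPy sketch (illustrative only, not the transport code behind the figures; the test momentum is an arbitrary choice) checks that the matrix is Hermitian and that it decouples into two ${2}{\times}{2}$ blocks for $\Delta = R = 0$, as stated above.

```python
import numpy as np

# Material parameters from the text (A in meV nm; B, D in meV nm^2; M in meV)
A, B, D = 354.5, -686.0, -512.0
M, DELTA, R = -10.0, 1.6, 0.0              # inverted ordering; BIA and SIA strengths

def bhz_hamiltonian(kx, ky, Delta=DELTA, R=R):
    """Bulk Hamiltonian of Eq. (Hhgte) at momentum (kx, ky) in 1/nm."""
    kp, km = kx + 1j * ky, kx - 1j * ky
    k2 = kx**2 + ky**2
    Ck, Mk = D * k2, M - B * k2
    return np.array([
        [Ck + Mk,      A * kp,   -1j * R * km, -Delta ],
        [A * km,       Ck - Mk,   Delta,        0.0   ],
        [1j * R * kp,  Delta,     Ck + Mk,     -A * km],
        [-Delta,       0.0,      -A * kp,       Ck - Mk],
    ])

H = bhz_hamiltonian(0.05, 0.02)            # arbitrary test momentum
assert np.allclose(H, H.conj().T)          # Hermitian for real parameters

# for Delta = R = 0 the matrix splits into two time-reversed 2x2 blocks
H0 = bhz_hamiltonian(0.05, 0.02, Delta=0.0, R=0.0)
assert np.allclose(H0[:2, 2:], 0.0) and np.allclose(H0[2:, :2], 0.0)
```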
We consider coherent two-terminal transport through disordered strip geometries with a Gaussian-correlated disorder, $$U({\mathvecfont{r}}) = \sum_i U_i \exp\left({- \frac{({\mathvecfont{r}} - {\mathvecfont{R}}_i)^2}{2 \sigma^2} }\right) \, , \label{impuritypotential}$$ with a correlation length $\sigma$. Here, a box distribution, $ -U_0 \leq U_i \leq U_0$, is chosen for the strength $U_i$ of the impurity $i$ located at ${\mathvecfont{R}}_i$. In order to eliminate the influence of the edge states we employ periodic boundary conditions in the scattering region, linking the upper and lower edges along the transport direction, as sketched in Fig. [\[figsketch\][[(a)]{}]{}]{}. ![ [[a]{}]{}) Sketch of the scattering region with periodic boundary conditions in the vertical direction between two non-periodic leads. A typical backscattered path and its time-reversed counter path are shown, contributing to WL and WAL. [[b]{}]{}) Corresponding momentum-space trajectory for the two paths of (a). []{data-label="figsketch"}](fig0){width="\figwidth"} We discretize the Hamiltonian (\[Hhgte\]) on a square grid with a lattice spacing of $5\,\mathrm{nm}$. The conductance $$G=\frac{e^2}{h} T =\frac{e^2}{h}\sum_{n,m}\sum_{\sigma, \sigma'} \vert t_{m, \sigma'; n, \sigma} \vert^2$$ is calculated in linear response within the Landauer formalism [@Landauer1970], whereby the transmission amplitudes $t_{m, \sigma'; n, \sigma}$ are given by the Fisher-Lee relations [@Fisher1981]. The indices $m$ and $n$ stand for the different modes in the leads, which are additionally classified by the index $\sigma \in \{ \mathrm{U}, \mathrm{L} \}$ denoting the upper left $(\mathrm{U})$ and the lower right $(\mathrm{L})$ block of the Hamiltonian [(\[Hhgte\])]{} if no SOI is present ($\Delta=R=0$).
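The disorder model of Eq. [(\[impuritypotential\])]{} can be sketched in a few lines. In the sketch below the impurity number, evaluation grid, seed, and strength $U_0$ are illustrative choices (the actual calculation uses 20000 impurities and tunes $U_0$), and the periodic boundary conditions are omitted.

```python
import numpy as np

rng = np.random.default_rng(seed=1)        # arbitrary seed
U0, sigma = 30.0, 15.0                     # U0 illustrative (meV); sigma from the text (nm)
L, W, a = 5000.0, 1000.0, 5.0              # strip size and grid spacing (nm)
n_imp = 200                                # scaled down; the text uses 20000

# impurity positions R_i and box-distributed strengths U_i in [-U0, U0]
Ri = rng.uniform([0.0, 0.0], [L, W], size=(n_imp, 2))
Ui = rng.uniform(-U0, U0, size=n_imp)

def disorder(r):
    """Gaussian-correlated potential U(r) of Eq. (impuritypotential).

    The periodic boundary conditions of the actual calculation are omitted here.
    """
    d2 = np.sum((Ri - r) ** 2, axis=1)
    return np.sum(Ui * np.exp(-d2 / (2.0 * sigma**2)))

# sample the potential along the center line of the strip
vals = np.array([disorder(np.array([x, W / 2.0])) for x in np.arange(0.0, L, a)])
assert np.all(np.abs(Ui) <= U0) and np.all(np.isfinite(vals))
```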
Berry phase effects in quantum transport without spin-orbit interaction {#sec:noso}
=======================================================================

In the following, we assume negligibly small BIA and SIA spin-orbit interactions, leading to a Hamiltonian [(\[Hhgte\])]{} with two uncoupled blocks. We will show that the Berry phase of each of these blocks leads to an energy dependence of the WL signal that differs for the two band orderings. Without loss of generality we focus on the upper subblock $$H_U = \left ( \begin{array}{c c} M-(B+D) {\mathvecfont{k}}^2 & A (k_x + {\mathrm{i}}k_y)\\ A (k_x - {\mathrm{i}}k_y) & -M+(B-D) {\mathvecfont{k}}^2\\ \end{array} \right ), \label{Hup}$$ since the results for the lower subblock can be obtained by applying the time-reversal operator. This Hamiltonian is easily diagonalized, leading to the energy dispersion of the electron and hole branches, $$E_{e/h}({\mathvecfont{k}}) = - D {\mathvecfont{k}}^2 \pm F({\mathvecfont{k}}),$$ with $$F({\mathvecfont{k}}) = \sqrt{A^2 {\mathvecfont{k}}^2 + ( B {\mathvecfont{k}}^2 - M )^2} .$$ The corresponding eigenstates are given by $$\psi_{e/h}({\mathvecfont{k}}) \propto \left ( \begin{array}{c} M - B {\mathvecfont{k}}^2 \pm F({\mathvecfont{k}})\\ A (k_x - {\mathrm{i}}k_y) \end{array} \right).$$ For vanishing SOI, the WL properties are governed by the phases accumulated by one of these spinors. In a semiclassical description, quantum corrections to the conductance stem from the interference of waves traveling along different impurity-scattered paths. Upon disorder averaging, the contributions from pairs of uncorrelated paths vanish. The remaining contributions leading to WL mainly originate from pairs of a path $\gamma$ with its time-inverted path $\gamma^\dagger$, where the dynamical phases cancel, as depicted in Fig. [\[figsketch\][[(a)]{}]{}]{}. As a result, the WL signal is governed by additional phases, like the phase due to the flux of an external magnetic field or a Berry phase.
The latter is associated with the Berry connection, a momentum-space vector potential given by [@Berry1984; @Chang1996] $$\mathcal{A}_\sigma({\mathvecfont{k}}) = -{\mathrm{i}}\langle \psi_\sigma({\mathvecfont{k}}) \vert \nabla_k \psi_\sigma({\mathvecfont{k}}) \rangle \, ,$$ in terms of the bulk eigenstates $\psi_\sigma({\mathvecfont{k}})$. The corresponding phase is obtained by integrating the vector potential $\mathcal{A}_\sigma$ along a backscattered path, corresponding to a closed loop in momentum space with fixed momentum $k = \vert {\mathvecfont{k}} \vert $, as sketched in Fig. [\[figsketch\][[(b)]{}]{}]{}: $$\Gamma_\sigma = \oint_{k=\mathrm{const}} \hspace{-0.7cm} \mathcal{A}_\sigma({\mathvecfont{k}}) \cdot \mathrm{d}{\mathvecfont{k}} = 2 \pi \mathcal{A}_{\sigma}({\mathvecfont{k}}) \cdot (-k_y, k_x) \, . \label{berryphaseangle}$$ Because of the circular symmetry of $\mathcal{A}_\sigma({\mathvecfont{k}})$, the phase $\Gamma_\sigma$ can be evaluated from the scalar product $\mathcal{A}_{\sigma}({\mathvecfont{k}}) \cdot (-k_y, k_x)$ at a single point in momentum space. This geometric phase $\Gamma_\sigma$ enters the semiclassical Green's function. ![ Bulk band structure ([[a]{}]{}),([[c]{}]{}) and corresponding Berry phase, Eq. [(\[berryphaseangle\])]{} ([[b]{}]{}),([[d]{}]{}), of the Hamiltonian [(\[Hhgte\])]{}. Top panels show the result for conventional band ordering ($M = 10\,\mathrm{meV}$), bottom panels the result for inverted band ordering ($M =-10\,\mathrm{meV}$). []{data-label="figberry"}](fig1){width="\figwidth"} As depicted in Fig. [\[figsketch\][[(b)]{}]{}]{}, a backscattered path and its time-inverted partner accumulate opposite reflection angles in momentum space. In view of Eq. [(\[berryphaseangle\])]{}, this opposite angle leads to opposite Berry phases and thereby to a dephasing in the two-path interference. This results in a reduction of WL [@krueckl2011a], right up to a complete reversal of the WL correction into full WAL [@Suzuura2002; @McCann2006a].
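As a consistency check, the closed-form dispersion $E_{e/h}$ quoted above can be compared with a direct numerical diagonalization of $H_U$; a minimal sketch (test momentum arbitrary, rounded parameters from Section \[sec:model\]):

```python
import numpy as np

A, B, D, M = 354.5, -686.0, -512.0, -10.0    # rounded parameters, inverted ordering

def H_upper(kx, ky):
    """Upper 2x2 subblock H_U of Eq. (Hup)."""
    k2 = kx**2 + ky**2
    return np.array([
        [M - (B + D) * k2,        A * (kx + 1j * ky)],
        [A * (kx - 1j * ky), -M + (B - D) * k2      ],
    ])

def E_eh(k2):
    """Closed-form electron/hole dispersion E_{e/h} = -D k^2 +/- F(k)."""
    F = np.sqrt(A**2 * k2 + (B * k2 - M)**2)
    return -D * k2 + F, -D * k2 - F

kx, ky = 0.11, -0.04                          # arbitrary test momentum (1/nm)
Ee, Eh = E_eh(kx**2 + ky**2)
num = np.linalg.eigvalsh(H_upper(kx, ky))     # ascending eigenvalues
assert np.allclose(num, [Eh, Ee])
```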
For the Hamiltonian [(\[Hhgte\])]{} the geometric phase $\Gamma_\sigma$ has remarkable properties that depend on the band topology. In the case of HgTe the geometric phases of the electron and hole branches are given by $$\Gamma_{e/h} = \pi \left ( 1 \pm \frac{M-B {\mathvecfont{k}}^2}{F({\mathvecfont{k}})} \right ).$$ Although the band structures of the conventional ($M>0$) and inverted ($M<0$) ordering are very similar \[compare Fig. [\[figberry\][[(a),(c)]{}]{}]{}\], the Berry phases of the two systems are not. For inverted band ordering, the Berry phase spans the whole range of possible phases from $0$ to $2\pi$, as shown in Fig. [\[figberry\][[(d)]{}]{}]{}. As a consequence, there exists a particular energy where the accumulated phase is $\Gamma_\sigma=\pi$, as in a “neutrino billiard" [@Berry1987]. However, if the bands are ordered conventionally, this is not the case: although the phase differs significantly from $0$ or $2\pi$, the region around $\pi$ is excluded, as shown in Fig. [\[figberry\][[(b)]{}]{}]{}. As a consequence, we expect distinctly different WL behavior for the two systems if the whole energy range is considered. In the following, we study the WL correction in transport through a disordered HgTe heterostructure numerically by calculating the average change of the quantum transmission $$\delta T(B) = \big\langle T(B) - T(0) \big\rangle \label{deltaT}$$ in the presence of a perpendicular magnetic field $B$. We tune the Berry phase by changing the Fermi energy of the system. The averages are taken over a set of $1000$ different impurity potentials [(\[impuritypotential\])]{}, distributing $20000$ impurities (corresponding to a coverage of $10\%$ of the grid points) within a scattering region of $1000\,\mathrm{nm} \times 5000\,\mathrm{nm}$ with a correlation length $\sigma = 15\,\mathrm{nm}$.
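The closed-form phases $\Gamma_{e/h}$ can be cross-checked against a discrete Berry-phase evaluation of the electron spinor along a loop $\vert {\mathvecfont{k}} \vert = \mathrm{const}$ (phases compared modulo $2\pi$). The sketch below, using the rounded parameters of Section \[sec:model\], also confirms that $\Gamma_e$ reaches $\pi$ only for inverted ordering; the test momenta and grid sizes are arbitrary choices.

```python
import numpy as np

A, B = 354.5, -686.0                        # meV nm, meV nm^2 (rounded)

def berry_phase_closed(k2, M):
    """Electron-branch phase: Gamma_e = pi * (1 + (M - B k^2) / F)."""
    F = np.sqrt(A**2 * k2 + (B * k2 - M)**2)
    return np.pi * (1.0 + (M - B * k2) / F)

def berry_phase_numeric(k, M, n=2000):
    """Discrete Berry phase of the electron spinor on the loop |k| = const
    (gauge invariant, defined modulo 2*pi)."""
    th = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    F = np.sqrt(A**2 * k**2 + (B * k**2 - M)**2)
    psi = np.stack([np.full(n, M - B * k**2 + F, dtype=complex),
                    A * k * np.exp(-1j * th)])
    psi /= np.linalg.norm(psi, axis=0)
    ov = np.sum(psi.conj() * np.roll(psi, -1, axis=1), axis=0)
    return np.angle(np.prod(ov))

M_inv, M_conv = -10.0, 10.0
k_star = np.sqrt(M_inv / B)                 # B k^2 = M, real only for M/B > 0
assert np.isclose(berry_phase_closed(k_star**2, M_inv), np.pi)

# numeric and closed form agree modulo 2*pi at a generic momentum
k = 0.08
assert np.isclose(np.exp(1j * berry_phase_numeric(k, M_inv)),
                  np.exp(1j * berry_phase_closed(k**2, M_inv)), atol=1e-2)

# conventional ordering: Gamma_e stays bounded away from pi
ks = np.linspace(1e-3, 0.5, 400)
assert np.all(berry_phase_closed(ks**2, M_conv) > np.pi + 0.5)
```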
The disorder strength $U_0$ is tuned to obtain a constant mean free path of $1200\,\mathrm{nm}$ for all energies, leading to comparable shapes of the localization correction over a large range of Fermi energies. ![ Weak localization correction in a disordered HgTe nanoribbon ($L=5000\,\mathrm{nm}$, $W=1000\,\mathrm{nm}$). Upper panels: Magnetic field dependence for (a) inverted band ordering and (b) conventional ordering. Different Fermi energies ($E_F\,{=}\,\{11.1\,\mathrm{meV}, 12.5\,\mathrm{meV}, 18.5\,\mathrm{meV}, 52\,\mathrm{meV}\}$ from top to bottom) lead to a Berry phase as given in panels (a) and (b). The impurity potential strength $U_0$ is varied to fix a mean free path of $1200\,\mathrm{nm}$. [[c]{}]{}) Energy dependence of the WL correction $\delta T'$, Eq. [(\[deltaTp\])]{}, for inverted and conventional ordering, extracted for a magnetic field $B=0.1\,\mathrm{mT}$. Dashed curves are guides to the eye. []{data-label="figlocnoso"}](fig2){width="\figwidth"} The results are summarized in Fig. \[figlocnoso\]. For energies close to the band gap, the Berry phase is very small in both cases. As a result, the average transmission is similar to that of an electron gas, leading to WL, visible as a negative correction to the magnetotransmission and shown as the black line in Fig. [\[figlocnoso\][[(a)]{}]{}]{} for the case of inverted band ordering. As the Fermi energy increases, the Berry phase grows as well, leading to a reduced WL correction. For $\Gamma_\sigma = \pi/2$, the minimum in the average transmission at $B=0$ is expected to vanish, which is also reflected in the numerical data, presented as the green line in Fig. [\[figlocnoso\][[(a)]{}]{}]{}. If the energy is tuned to $$E^{(\pi)}_e = -\frac{D M}{B} + \sqrt{\frac{A^2 M}{B}} , \label{topoEpi}$$ such that the momentum ${\mathvecfont{k}}$ fulfills $B {\mathvecfont{k}}^2 = M$, the regime with a Berry phase close to $\pi$ is entered.
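Eq. [(\[topoEpi\])]{} follows from inserting $B {\mathvecfont{k}}^2 = M$ into the electron dispersion $E_e = -D{\mathvecfont{k}}^2 + F({\mathvecfont{k}})$. A short numerical sanity check of this consistency, using the rounded parameters of Section \[sec:model\] (which give a value near $50\,\mathrm{meV}$, slightly below the $52\,\mathrm{meV}$ quoted later, presumably because the actual calculation uses less rounded material parameters):

```python
import numpy as np

A, B, D, M = 354.5, -686.0, -512.0, -10.0    # rounded parameters, inverted ordering

k2_star = M / B                               # momentum with B k^2 = M (M/B > 0 here)
E_pi = -D * M / B + np.sqrt(A**2 * M / B)     # Eq. (topoEpi)

# must equal the electron branch -D k^2 + F(k) evaluated at that momentum
F = np.sqrt(A**2 * k2_star + (B * k2_star - M)**2)
assert np.isclose(-D * k2_star + F, E_pi)
assert 48.0 < E_pi < 54.0                     # about 50 meV with these rounded values
```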
In this configuration, the system is expected to feature WAL, since a path and its time-inverted counter path accumulate a phase difference of $\pi$ and therefore interfere destructively, leading to an enhanced transmission at $B=0$. This is indeed visible in the numerical results (blue line in Fig. [\[figlocnoso\][[(a)]{}]{}]{}), showing a pronounced WAL peak. The physics changes if a heterostructure with conventional ordering of the quantum well states is considered. In Fig. [\[figlocnoso\][[(b)]{}]{}]{} we show the average magnetoconductance for the same configuration as in Fig. [\[figlocnoso\][[(a)]{}]{}]{}, however, assuming a positive band gap of $M=10\,\mathrm{meV}$. For Fermi energies close to the band gap, the Berry phase is small, as in the case with inverted band ordering, leading to a conventional WL feature. Unexpectedly, the WL correction in the conventional regime is almost twice as strong as in the inverted regime \[compare the black lines in Figs. [\[figlocnoso\][[(a,b)]{}]{}]{}\]. With increasing Fermi energy, the Berry phase increases, but does not reach $\pi$. Instead, the maximal phase, reached at $\vert B \vert {\mathvecfont{k}}^2 = M$, is rather close to $\pi/2$, leading to a strong reduction of any localization correction \[see the blue line in Fig. [\[figlocnoso\][[(b)]{}]{}]{}\]. For a closer analysis of the energy dependence we extract the strength of the WL correction, $$\delta T' = \big\langle T(B=0) - T(B=0.1\,\mathrm{mT}) \big\rangle , \label{deltaTp}$$ for various Fermi energies. The results are summarized in Fig. [\[figlocnoso\][[(c)]{}]{}]{}. For conventional ordering, we find a transition from WL close to the band gap to almost no localization at higher energies. For inverted band ordering one finds a clear-cut transition from WL to WAL and back to WL. Note that for very low energies only few channels contribute to transport.
As a consequence, the strength of the WL correction is reduced due to the finite number of open channels [@Beenakker1997], and non-universal features may appear. These apparently erratic values vanish when the width of the scattering region is increased. In addition to the expected crossover from WL to WAL, the Berry phase leads to opposite shifts in $B$ of the magnetotransmission profiles associated with the upper and lower blocks of the Hamiltonian [(\[Hhgte\])]{}. A pair of backscattered paths, contributing to WL and WAL, can be characterized by the enclosed area $A$ and the accumulated angle $\alpha$ acquired during the series of random scattering processes at impurities along the diffusive path. Usually, only the enclosed areas $A$ are relevant, and their typical value $A_0$ sets the magnetic field scale of the magnetoconductance profile; [*i.e.*]{}, its width is of order $B A_0 \propto \Phi_0$, where $\Phi_0$ is the magnetic flux quantum. However, as has recently been shown for ballistic and diffusive two-dimensional hole gases (based on the ${4}{\times}{4}$ Luttinger Hamiltonian) [@krueckl2011a], an underlying Berry phase gives rise to a characteristic shift of the WL peak. This shift depends on the associated Berry phase $\Gamma$ and the typical accumulated angle $\alpha_0$. Moreover, for diffusive and chaotic conductors there is a finite classical correlation $\rho$ between the random variables $A$ and $\alpha$. Together, these quantities determine an effective magnetic “Berry field" $\tilde B$ by which the WL magnetoconductance profile is shifted. For a chaotic quantum dot, this shift corresponds to an associated flux [@krueckl2011a] $$\tilde B\,A_0 \propto \left ( \Gamma \, \rho \, \frac{\alpha_0 }{2 \pi } \right ) \Phi_0 , \label{effectiveBerryfield}$$ which depends linearly on the Berry phase $\Gamma$, the typical enclosed angle $\alpha_0$ and the classical correlation $\rho$.
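The classical correlation $\rho$ between enclosed area $A$ and accumulated angle $\alpha$ entering Eq. [(\[effectiveBerryfield\])]{} can be illustrated with a toy random-walk model. This is not the trajectory ensemble of the actual transport simulation; it is merely a sketch (with arbitrary step count, turning-angle spread, and seed) showing that the chirality of a diffusive path correlates the two quantities.

```python
import numpy as np

rng = np.random.default_rng(seed=2)        # arbitrary seed
n_paths, n_steps = 4000, 200               # toy ensemble, not the transport simulation

areas, angles = [], []
for _ in range(n_paths):
    # smooth random walk: unit steps with small i.i.d. turning angles
    turns = rng.uniform(-0.3, 0.3, size=n_steps)
    heading = np.cumsum(turns)
    steps = np.stack([np.cos(heading), np.sin(heading)], axis=1)
    r = np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])
    x, y = r[:, 0], r[:, 1]
    # signed area of the loop closed back to the starting point (shoelace;
    # the wrap-around term vanishes because the path starts at the origin)
    areas.append(0.5 * np.sum(x[:-1] * y[1:] - x[1:] * y[:-1]))
    # accumulated turning angle alpha of the path
    angles.append(np.sum(turns))

rho = np.corrcoef(areas, angles)[0, 1]     # finite classical correlation
assert 0.05 < rho <= 1.0
```

Paths that happen to turn predominantly in one sense curl into loops of the corresponding chirality, so the signed area and the turning angle share a common sign; this is the mechanism behind the finite $\rho$ invoked above.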
This behavior has also been found for ballistic cavities based on HgTe [^1]. ![ $B$-field shift of the WAL maximum due to correlations between enclosed area and angle of the contributing trajectories. [[a]{}]{}) Magnetic field dependence of the transmission quantum correction of a diffusive periodic nanoribbon ($W=1000\,\mathrm{nm}$, $L=5000\,\mathrm{nm}$) for different numbers of open channels ($30$ to $45$) close to the Fermi energy $E_F = 52\,\mathrm{meV}$ (displayed with vertical offset). Symbols with error bars: results $\delta T_U$ for the upper block, see Eq. [(\[deltaT\])]{}; solid lines: fit to numerical data; dashed lines: corresponding curve for $\delta T_L$ of the lower block. [[b]{}]{}) Energy dependence of the Berry phase around $52\,\mathrm{meV}$ (see Fig. [\[figberry\][[(a)]{}]{}]{}). [[c]{}]{}) Energy dependence of the shift $\tilde B$ of the magnetotransmission maximum extracted from ([[a]{}]{}). []{data-label="figshift"}](fig3){width="\figwidth"} For the disordered HgTe quantum well, we expect a corresponding behavior, not only for the WL but also for the WAL peaks. Fig. [\[figshift\][[(a)]{}]{}]{} shows the numerically obtained quantum correction to the magnetoconductance $\delta T_\mathrm{U}$ (bullets) based on the upper block $\mathrm{U}$ of the Hamiltonian [(\[Hhgte\])]{}. The five different curves correspond to different Fermi energies close to $E_F=52\,\mathrm{meV}$, labeled by the number of open transverse modes (without spin) varying from 30 to 45. Fits to the numerical data are shown as solid lines. Correspondingly, the dashed lines show the contribution from the lower block $\mathrm{L}$. The curves exhibit a small but distinct energy-dependent shift in $B$, i.e., a relative splitting of the magnetoconductance curves of the two blocks. This feature can be explained in terms of the Berry field introduced above. To this end, the Berry phase $\Gamma$ corresponding to the Fermi energy $E$, or equivalently to the number of open modes, is shown in Fig.
[\[figshift\][[(b)]{}]{}]{}. Since $\Gamma$ is close to $\pi$ in the energy range shown, all magnetoconductance curves show WAL. Most notably, the sign change in $\Gamma\,{-}\,\pi$ between the energies corresponding to 36 and 37 channels in Fig. [\[figshift\][[(b)]{}]{}]{} is reflected in the direction of the energy-dependent shift of the WAL curves in Fig. [\[figshift\][[(a)]{}]{}]{}, with a nearly vanishing shift close to the trace with $n=37$. Hence, Fig. [\[figshift\][[(b)]{}]{}]{} illustrates the transition from a negative to a positive shift between 36 and 37 open channels. The same transition is also visible in the effective Berry field $\tilde B$, which we extracted for various magnetoconductance curves by the same fits as shown in Fig. [\[figshift\][[(a)]{}]{}]{}. The effective Berry field $\tilde B$ is depicted in Fig. [\[figshift\][[(c)]{}]{}]{}. In view of Fig. [\[figshift\][[(b)]{}]{}]{}, its energy dependence indicates a linear dependence on the Berry phase, as is the case in chaotic quantum dots, see Eq. [(\[effectiveBerryfield\])]{} [@krueckl2011a]. Due to the relatively low correlation $\rho$ between $\alpha$ and $A$ for a diffusive process, we expect the strength of the shift to be only a few $\mu\mathrm{T}$ in the present case. However, such a shift leads to a significant change of the magnetoresistance line shape. To the best of our knowledge, it is not captured by any existing diagrammatic approach or description by random matrix theory.

Role of spin-orbit interaction {#sec:withso}
==============================

In addition to the spin-orbit coupling between the $s$- and $p$-bands within the ${2}{\times}{2}$ blocks, there are further spin-orbit interactions present in HgTe heterostructures. These can be divided, according to their physical origin, into terms arising from bulk inversion asymmetry (BIA) and structure inversion asymmetry (SIA). BIA is given by the crystal structure itself and thus can only be modified by changing the material.
SIA depends on internal and external electric fields, and consequently changes its size depending on the symmetries of the grown HgCdTe layers or on external gating. For symmetric HgTe quantum wells the strength of SIA is negligibly small. ![ Strength of the WL correction $\delta T$ in the presence of spin-orbit interaction due to bulk inversion asymmetry ($\Delta=1.6\,\mathrm{meV}$) for ([[a]{}]{}) inverted band order ($M=-10\,\mathrm{meV}$) and ([[b]{}]{}) conventional band order ($M=+10\,\mathrm{meV}$). []{data-label="figSIAonly"}](fig4){width="\figwidth"} In the following, we first focus on the WAL profile in a symmetric heterostructure with BIA of its natural strength and without SIA. Our results for different Fermi energies are summarized in Fig. [\[figSIAonly\][[(a)]{}]{}]{} for inverted band ordering and in Fig. [\[figSIAonly\][[(b)]{}]{}]{} for conventional band ordering. The energies are chosen to cover the full range of Berry phases, as in Fig. \[figlocnoso\]. In comparison to systems without additional SOI, the average magnetoconductance always features WAL. This is in line with the common explanation that strong SOI leads to spin relaxation and thereby WAL. However, there is a significant difference between the energy dependence of the WAL strength for the two band orderings. For conventional ordering, the WAL correction is almost constant, and also the shape of the correction does not change significantly with Fermi energy, as shown in Fig. [\[figSIAonly\][[(b)]{}]{}]{}. This is not the case for inverted ordering. Here, the correction is almost twice as strong if the Fermi energy is chosen to be $E^{(\pi)}_e=52\,\mathrm{meV}$, Eq. [(\[topoEpi\])]{}, the point with a Berry phase of $\pi$, as depicted in Fig. [\[figSIAonly\][[(a)]{}]{}]{}. In the following, we give an explanation for this difference.
To this end, we apply the unitary transformation $$\mathcal{T} =\frac{1}{\sqrt{2}} \left ( \begin{array}{c c c c} 1 & 0 & 0 & 1 \\ 0 & -1 & 1 & 0 \\ -1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \end{array} \right )$$ to the Hamiltonian (\[Hhgte\]), leading to the transformed Hamiltonian $$H = \left ( \begin{array}{c c c c} C_k - \Delta & -A k_+ & -M_k & -{\mathrm{i}}\frac{1}{2} R k_- \\ -A k_- & C_k - \Delta & -\frac{1}{2}{\mathrm{i}}R k_+ & M_k \\ -M_k & \frac{1}{2} {\mathrm{i}}R k_- & C_k + \Delta & -A k_+ \\ \frac{1}{2} {\mathrm{i}}R k_+ & M_k & -A k_- & C_k + \Delta \end{array} \right ) .$$ ![ Strength of the WL correction $\delta T'$ for different spin-orbit interactions (BIA $\Delta=1.6\,\mathrm{meV}$, SIA $R=35\,\mathrm{eV\AA}$). Results are extracted from the transmission at $0.1\,\mathrm{mT}$. [[a]{}]{}) The localization strength for conventional band ordering ($M=10\,\mathrm{meV}$) is the same for all combinations of structure and bulk inversion asymmetry. [[b]{}]{}) For inverted band ordering ($M=-10\,\mathrm{meV}$) and pure structure inversion asymmetry, the strength of the WAL correction is doubled at $E^{(\pi)}_e = 52\,\mathrm{meV}$. []{data-label="figfourtypes"}](fig5){width="\figwidth"} If no SOI due to SIA is present ($R=0$), this Hamiltonian consists of two blocks which are coupled only by the matrix element $M_k = M - B {\mathvecfont{k}}^2$. For inverted band ordering there exists a momentum ${\mathvecfont{k}}$ where $M_k \approx 0$, since $M$ and $B$ are both negative. In HgTe heterostructures with $M=-10\,\mathrm{meV}$ this happens at the Fermi energy $E_e^{(\pi)} = 52\,\mathrm{meV}$, Eq. [(\[topoEpi\])]{}. At this energy the Hamiltonian decouples into two independent ${2}{\times}{2}$ blocks that both show WAL. Thus the entire WAL strength is twice as high compared to other energies, as shown in Fig. [\[figSIAonly\][[(a)]{}]{}]{}.
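That the transformed Hamiltonian decouples where $M_k$ vanishes can be verified directly. The sketch below applies $\mathcal{T}$ numerically for the BIA-only case ($R=0$, rounded parameters of Section \[sec:model\]) and checks that the inter-block coupling is exactly $\pm M_k$, vanishing at $B{\mathvecfont{k}}^2 = M$; the generic test momentum is arbitrary.

```python
import numpy as np

A, B, D = 354.5, -686.0, -512.0
M, DELTA = -10.0, 1.6                        # inverted ordering, BIA only (R = 0)

T = np.array([[1, 0, 0, 1],
              [0, -1, 1, 0],
              [-1, 0, 0, 1],
              [0, 1, 1, 0]]) / np.sqrt(2.0)

def H_bia(kx, ky):
    """Hamiltonian of Eq. (Hhgte) with R = 0 (no SIA)."""
    kp, km = kx + 1j * ky, kx - 1j * ky
    k2 = kx**2 + ky**2
    Ck, Mk = D * k2, M - B * k2
    return np.array([[Ck + Mk, A * kp, 0.0, -DELTA],
                     [A * km, Ck - Mk, DELTA, 0.0],
                     [0.0, DELTA, Ck + Mk, -A * km],
                     [-DELTA, 0.0, -A * kp, Ck - Mk]])

assert np.allclose(T @ T.T, np.eye(4))       # T is unitary (real orthogonal)

# generic momentum: the transformed blocks couple only through +/- M_k
k = 0.06
Ht = T @ H_bia(k, 0.0) @ T.conj().T
Mk = M - B * k**2
assert np.allclose(Ht[:2, 2:], [[-Mk, 0.0], [0.0, Mk]])

# at B k^2 = M the coupling M_k vanishes and the blocks decouple
k_star = np.sqrt(M / B)
Ht = T @ H_bia(k_star, 0.0) @ T.conj().T
assert np.allclose(Ht[:2, 2:], 0.0) and np.allclose(Ht[2:, :2], 0.0)
```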
If additional spin-orbit terms from SIA are present, this unitary transformation into two uncoupled blocks is not possible. As a consequence, the WAL correction stays roughly constant throughout the whole range of Fermi energies. In Fig. \[figfourtypes\] we have summarized the behavior of the WAL correction $\delta T'$, Eq. [(\[deltaTp\])]{}, for different combinations of BIA and SIA. As expected, the WAL for conventional band ordering is independent of the type of SOI, as shown in Fig. [\[figfourtypes\][[(a)]{}]{}]{}. However, this is not the case for a heterostructure with inverted band ordering. If only BIA is present, the WAL is approximately doubled at $52\,\mathrm{meV}$, with a smooth transition in between; see the black symbols in Fig. [\[figfourtypes\][[(b)]{}]{}]{}. For finite SIA a block diagonalization is not possible, and hence the WAL correction is constant, independent of whether additional BIA SOI is present. As in the calculations without SOI, the erratic fluctuations of the WAL strength at low energies can be attributed to the correspondingly limited number of open channels in the numerical calculations.

Conclusion {#sec:conclude}
==========

In this manuscript, we have analyzed the weak localization properties of HgTe heterostructures with different band topologies. We revealed a transition between weak localization and weak antilocalization for systems without spin-orbit interaction, which is complete only for systems with inverted band ordering and can be related to the effect of the Berry phase. This Berry phase moreover affects the magnetoconductance line shape: owing to correlations in the statistics of backscattered paths that depend on the type of classical dynamics (diffusive, chaotic, or regular in the ballistic case), the Berry phase implies a splitting of the magnetoconductance profile.
Furthermore, we showed that the band ordering can be deduced from the energy dependence of the weak antilocalization correction in the presence of spin-orbit interaction due to bulk inversion asymmetry: if the heterostructure features an inverted band ordering, the correction strength is energy dependent, in contrast to the constant weak antilocalization strength for conventional band ordering. This is explained by an energy-dependent separation into two uncoupled blocks. Additional Rashba-type spin-orbit interaction from structure inversion asymmetry again diminishes the energy dependence.

Acknowledgments {#acknowledgments .unnumbered}
===============

This work is supported by Deutsche Forschungsgemeinschaft (GRK 1570 and the joint DFG-JST Forschergruppe Topological Electronics). We thank I. Adagideli, E. Hankiewicz, G. Tkachov and M. Wimmer for useful conversations.

[10]{}

C. L. Kane and E. J. Mele, *Quantum Spin Hall Effect in Graphene*, [Phys. Rev. Lett. **95**, 226801 (2005)](http://dx.doi.org/10.1103/PhysRevLett.95.226801).

C. L. Kane and E. J. Mele, *Z₂ Topological Order and the Quantum Spin Hall Effect*, [Phys. Rev. Lett. **95**, 146802 (2005)](http://dx.doi.org/10.1103/PhysRevLett.95.146802).

D. Huertas-Hernando, F. Guinea and A. Brataas, *Spin-orbit coupling in curved graphene, fullerenes, nanotubes, and nanotube caps*, [Phys. Rev. B **74**, 155426 (2006)](http://dx.doi.org/10.1103/PhysRevB.74.155426).

M. Gmitra, S. Konschuh, C. Ertler, C. Ambrosch-Draxl and J. Fabian, *Band-structure topologies of graphene: Spin-orbit coupling effects from first principles*, [Phys. Rev. B **80**, 235431 (2009)](http://dx.doi.org/10.1103/PhysRevB.80.235431).

B. A. Bernevig, T. L. Hughes and S.-C. Zhang, *Quantum Spin Hall Effect and Topological Phase Transition in HgTe Quantum Wells*, [Science **314**, 1757 (2006)](http://dx.doi.org/10.1126/science.1133734).

S. Murakami, *Quantum Spin Hall Effect and Enhanced Magnetic Response by Spin-Orbit Coupling*, [Phys. Rev. Lett. **97**, 236805 (2006)](http://dx.doi.org/10.1103/PhysRevLett.97.236805).

C. Liu, T. Hughes, X.-L. Qi, K. Wang and S.-C. Zhang, *Quantum Spin Hall Effect in Inverted Type-II Semiconductors*, [Phys. Rev. Lett. **100**, 236601 (2008)](http://dx.doi.org/10.1103/PhysRevLett.100.236601).

L. Fu, C. L. Kane and E. J. Mele, *Topological Insulators in Three Dimensions*, [Phys. Rev. Lett. **98**, 106803 (2007)](http://dx.doi.org/10.1103/PhysRevLett.98.106803).

L. Fu and C. L. Kane, *Topological insulators with inversion symmetry*, [Phys. Rev. B **76**, 045302 (2007)](http://dx.doi.org/10.1103/PhysRevB.76.045302).

D.-X. Qu, Y. S. Hor, J. Xiong, R. J. Cava and N. P. Ong, *Quantum Oscillations and Hall Anomaly of Surface States in the Topological Insulator Bi₂Te₃*, [Science **329**, 821 (2010)](http://dx.doi.org/10.1126/science.1189792).

F. Xiu *et al.*, *Manipulating surface states in topological insulator nanoribbons*, [Nat. Nano **6**, 216 (2011)](http://dx.doi.org/10.1038/nnano.2011.19).

M. König, S. Wiedmann, C. Brüne, A. Roth, H. Buhmann, L. W. Molenkamp, X.-L. Qi and S.-C. Zhang, *Quantum Spin Hall Insulator State in HgTe Quantum Wells*, [Science **318**, 766 (2007)](http://dx.doi.org/10.1126/science.1148047).

A. Roth, C. Brüne, H. Buhmann, L. W. Molenkamp, J. Maciejko, X.-L. Qi and S.-C. Zhang, *Nonlocal Transport in the Quantum Spin Hall State*, [Science **325**, 294 (2009)](http://dx.doi.org/10.1126/science.1174736).

I. Knez, R.-R. Du and G. Sullivan, *Evidence for Helical Edge Modes in Inverted InAs/GaSb Quantum Wells*, [Phys. Rev. Lett. **107**, 136603 (2011)](http://dx.doi.org/10.1103/PhysRevLett.107.136603).

B. Zhou, H.-Z. Lu, R.-L. Chu, S.-Q. Shen and Q. Niu, *Finite Size Effects on Helical Edge States in a Quantum Spin-Hall System*, [Phys. Rev. Lett. **101**, 246807 (2008)](http://dx.doi.org/10.1103/PhysRevLett.101.246807).

L. B. Zhang, F. Cheng, F. Zhai and K. Chang, *Electrical switching of the edge channel transport in HgTe quantum wells with an inverted band structure*, [Phys. Rev. B **83**, 081402 (2011)](http://dx.doi.org/10.1103/PhysRevB.83.081402).

V. Krueckl and K. Richter, *Switching Spin and Charge between Edge States in Topological Insulator Constrictions*, [Phys. Rev. Lett. **107**, 086803 (2011)](http://dx.doi.org/10.1103/PhysRevLett.107.086803).

F. Dolcini, *Full electrical control of charge and spin conductance through interferometry of edge states in topological insulators*, [Phys. Rev. B **83**, 165304 (2011)](http://dx.doi.org/10.1103/PhysRevB.83.165304).

J. C. Budich, F. Dolcini, P. Recher and B. Trauzettel, *Phonon-Induced Backscattering in Helical Edge States*, [Phys. Rev. Lett. **108**, 086602 (2012)](http://dx.doi.org/10.1103/PhysRevLett.108.086602).

B. Altshuler, D. Khmel'nitzkii, A. Larkin and P. Lee, *Magnetoresistance and Hall effect in a disordered two-dimensional electron gas*, [Phys. Rev. B **22**, 5142 (1980)](http://dx.doi.org/10.1103/PhysRevB.22.5142).

S. Hikami, A. I. Larkin and Y. Nagaoka, *Spin–Orbit Interaction and Magnetoresistance in the Two-Dimensional Random System*, Prog. Theor. Phys. **63**, 707 (1980).

G. Tkachov and E. M. Hankiewicz, *Weak antilocalization in HgTe quantum wells and topological surface states: Massive versus massless Dirac fermions*, [Phys. Rev. B **84**, 035444 (2011)](http://dx.doi.org/10.1103/PhysRevB.84.035444).

H.-Z. Lu, J. Shi and S.-Q. Shen, *Competition between Weak Localization and Antilocalization in Topological Surface States*, [Phys. Rev. Lett. **107**, 076801 (2011)](http://dx.doi.org/10.1103/PhysRevLett.107.076801).
<span style="font-variant:small-caps;">H.-Z. Lu</span> and <span style="font-variant:small-caps;">S.-Q. Shen</span>, [*[Weak localization of bulk channels in topological insulator thin films]{}*]{}, [[Phys. Rev. B](http://dx.doi.org/10.1103/PhysRevB.84.125138)]{} [ **[[84](http://dx.doi.org/10.1103/PhysRevB.84.125138)]{}**]{}, [[125138](http://dx.doi.org/10.1103/PhysRevB.84.125138)]{} [[(2011)](http://dx.doi.org/10.1103/PhysRevB.84.125138)]{}. <span style="font-variant:small-caps;">I. Garate</span> and <span style="font-variant:small-caps;">L. Glazman</span>, [*[Weak Localization and Antilocalization in Topological Insulator Thin Films with Coherent Bulk-Surface Coupling]{}*]{}, arXiv 1206.1239v1 (2012). <span style="font-variant:small-caps;">H.-T. He, G. Wang, T. Zhang, I.-K. Sou, G. Wong, J.-N. Wang, H.-Z. Lu, S.-Q. Shen</span> and <span style="font-variant:small-caps;">F.-C. Zhang</span>, [*[Impurity Effect on Weak Antilocalization in the Topological Insulator Bi\_{2}Te\_{3}]{}*]{}, [[Phys. Rev. Lett.](http://dx.doi.org/10.1103/PhysRevLett.106.166805)]{} [**[[106](http://dx.doi.org/10.1103/PhysRevLett.106.166805)]{}**]{}, [[166805](http://dx.doi.org/10.1103/PhysRevLett.106.166805)]{} [[(2011)](http://dx.doi.org/10.1103/PhysRevLett.106.166805)]{}. <span style="font-variant:small-caps;">M. Liu</span> [*et al.*]{}<span style="font-variant:small-caps;"></span>, [*[Crossover between Weak Antilocalization and Weak Localization in a Magnetically Doped Topological Insulator]{}*]{}, [[Phys. Rev. Lett.](http://dx.doi.org/10.1103/PhysRevLett.108.036805)]{} [ **[[108](http://dx.doi.org/10.1103/PhysRevLett.108.036805)]{}**]{}, [[036805](http://dx.doi.org/10.1103/PhysRevLett.108.036805)]{} [[(2012)](http://dx.doi.org/10.1103/PhysRevLett.108.036805)]{}. <span style="font-variant:small-caps;">E. B. Olshanetsky, Z. D. Kvon, G. M. Gusev, N. N. Mikhailov, S. A. Dvoretsky</span> and <span style="font-variant:small-caps;">J. C. 
Portal</span>, [*[Weak antilocalization in HgTe quantum wells near a topological transition]{}*]{}, [[JETP Letters](http://dx.doi.org/10.1134/S0021364010070052)]{} [ **[[91](http://dx.doi.org/10.1134/S0021364010070052)]{}**]{}, [[347](http://dx.doi.org/10.1134/S0021364010070052)]{} [[(2010)](http://dx.doi.org/10.1134/S0021364010070052)]{}. <span style="font-variant:small-caps;">G. M. Minkov, A. V. Germanenko, O. E. Rut, A. A. Sherstobitov, S. A. Dvoretski</span> and <span style="font-variant:small-caps;">N. N. Mikhailov</span>, [*[Weak antilocalization in HgTe quantum well with inverted energy spectrum]{}*]{}, arXiv:1202.1093 (2012). <span style="font-variant:small-caps;">S. V. Iordanskii, Y. B. Lyanda-Geller</span> and <span style="font-variant:small-caps;">G. E. Pikus</span>, [ *[Weak localization in quantum wells with spin-orbit interaction]{}*]{}, JETP Lett. [**60**]{}, 206 (1994). <span style="font-variant:small-caps;">W. Knap</span> [*et al.*]{}<span style="font-variant:small-caps;"></span>, [*[Weak antilocalization and spin precession in quantum wells.]{}*]{}, Phys. Rev. B [**53**]{}, 3912 (1996). <span style="font-variant:small-caps;">D. G. Rothe, R. W. Reinthaler, C.-X. Liu, L. W. Molenkamp, S.-C. Zhang</span> and <span style="font-variant:small-caps;">E. M. Hankiewicz</span>, [*[Fingerprint of different spin-orbit terms for spin transport in HgTe quantum wells]{}*]{}, [[New J. Phys.](http://dx.doi.org/10.1088/1367-2630/12/6/065012)]{} [ **[[12](http://dx.doi.org/10.1088/1367-2630/12/6/065012)]{}**]{}, [[65012](http://dx.doi.org/10.1088/1367-2630/12/6/065012)]{} [[(2010)](http://dx.doi.org/10.1088/1367-2630/12/6/065012)]{}. <span style="font-variant:small-caps;">M. König, H. Buhmann, L. W. Molenkamp, T. L. Hughes, C.-X. Liu, X.-L. Qi</span> and <span style="font-variant:small-caps;">S.-C. Zhang</span>, [*[The Quantum Spin Hall Effect: Theory and Experiment]{}*]{}, [[J. Phys. Soc. 
Jpn.](http://dx.doi.org/10.1143/JPSJ.77.031007)]{} [ **[[77](http://dx.doi.org/10.1143/JPSJ.77.031007)]{}**]{}, [[31007](http://dx.doi.org/10.1143/JPSJ.77.031007)]{} [[(2008)](http://dx.doi.org/10.1143/JPSJ.77.031007)]{}. <span style="font-variant:small-caps;">R. Landauer</span>, [*[Electrical resistance of disordered one-dimensional lattices]{}*]{}, [[Philosophical Magazine](http://dx.doi.org/10.1080/14786437008238472)]{} [**[[21](http://dx.doi.org/10.1080/14786437008238472)]{}**]{}, [[863](http://dx.doi.org/10.1080/14786437008238472)]{} [[(1970)](http://dx.doi.org/10.1080/14786437008238472)]{}. <span style="font-variant:small-caps;">D. S. Fisher</span> and <span style="font-variant:small-caps;">P. A. Lee</span>, [*[Relation between conductivity and transmission matrix]{}*]{}, Phys. Rev. B [**23**]{}, 6851 (1981). <span style="font-variant:small-caps;">M. V. Berry</span>, [*[Quantal phase factors accompanying adiabatic changes]{}*]{}, [[Proc. R. Soc. Lond. A](http://dx.doi.org/10.1098/rspa.1984.0023)]{} [**[[392](http://dx.doi.org/10.1098/rspa.1984.0023)]{}**]{}, [[45](http://dx.doi.org/10.1098/rspa.1984.0023)]{} [[(1984)](http://dx.doi.org/10.1098/rspa.1984.0023)]{}. <span style="font-variant:small-caps;">M.-C. Chang</span> and <span style="font-variant:small-caps;">Q. Niu</span>, [*[Berry phase, hyperorbits, and the Hofstadter spectrum: Semiclassical dynamics in magnetic Bloch bands.]{}*]{}, [[Phys. Rev. B](http://dx.doi.org/10.1103/PhysRevB.53.7010)]{} [ **[[53](http://dx.doi.org/10.1103/PhysRevB.53.7010)]{}**]{}, [[7010](http://dx.doi.org/10.1103/PhysRevB.53.7010)]{} [[(1996)](http://dx.doi.org/10.1103/PhysRevB.53.7010)]{}. <span style="font-variant:small-caps;">V. Krueckl, M. Wimmer, I. Adagideli, J. Kuipers</span> and <span style="font-variant:small-caps;">K. Richter</span>, [*[Weak Localization in Mesoscopic Hole Transport: Berry Phases and Classical Correlations]{}*]{}, [[Phys. Rev. 
Lett.](http://dx.doi.org/10.1103/PhysRevLett.106.146801)]{} [**[[106](http://dx.doi.org/10.1103/PhysRevLett.106.146801)]{}**]{}, [[146801](http://dx.doi.org/10.1103/PhysRevLett.106.146801)]{} [[(2011)](http://dx.doi.org/10.1103/PhysRevLett.106.146801)]{}. <span style="font-variant:small-caps;">H. Suzuura</span> and <span style="font-variant:small-caps;">T. Ando</span>, [*[Crossover from Symplectic to Orthogonal Class in a Two-Dimensional Honeycomb Lattice]{}*]{}, [[Phys. Rev. Lett.](http://dx.doi.org/10.1103/PhysRevLett.89.266603)]{} [**[[89](http://dx.doi.org/10.1103/PhysRevLett.89.266603)]{}**]{}, [[266603](http://dx.doi.org/10.1103/PhysRevLett.89.266603)]{} [[(2002)](http://dx.doi.org/10.1103/PhysRevLett.89.266603)]{}. <span style="font-variant:small-caps;">E. McCann, K. Kechedzhi, V. I. Fal’ko, H. Suzuura, T. Ando</span> and <span style="font-variant:small-caps;">B. L. Altshuler</span>, [*[Weak-Localization Magnetoresistance and Valley Symmetry in Graphene]{}*]{}, [[Phys. Rev. Lett.](http://dx.doi.org/10.1103/PhysRevLett.97.146805)]{} [**[[97](http://dx.doi.org/10.1103/PhysRevLett.97.146805)]{}**]{}, [[146805](http://dx.doi.org/10.1103/PhysRevLett.97.146805)]{} [[(2006)](http://dx.doi.org/10.1103/PhysRevLett.97.146805)]{}. <span style="font-variant:small-caps;">M. V. Berry</span> and <span style="font-variant:small-caps;">R. J. Mondragon</span>, [*[Neutrino billiards: time-reversal symmetry-breaking without magnetic fields]{}*]{}, [[Proc. R. Soc. Lond. A](http://dx.doi.org/10.1098/rspa.1987.0080)]{} [ **[[412](http://dx.doi.org/10.1098/rspa.1987.0080)]{}**]{}, [[53](http://dx.doi.org/10.1098/rspa.1987.0080)]{} [[(1987)](http://dx.doi.org/10.1098/rspa.1987.0080)]{}. <span style="font-variant:small-caps;">C. W. J. Beenakker</span>, [*[Random-matrix theory of quantum transport]{}*]{}, Rev. Mod. Phys. [**69**]{}, 731 (1997). [^1]: See supplementary material of Ref. [@krueckl2011a].
--- abstract: 'We consider effects of anisotropy on solitons of various types in two-dimensional nonlinear lattices, using the discrete nonlinear Schr[ö]{}dinger equation as a paradigm model. For fundamental solitons, we develop a variational approximation, which predicts that broad quasi-continuum solitons are unstable, while their strongly anisotropic counterparts are stable. By means of numerical methods, it is found that, in the general case, the fundamental solitons and simplest on-site-centered vortex solitons (“vortex crosses") feature enhanced or reduced stability areas, depending on the strength of the anisotropy. More surprising is the effect of anisotropy on the so-called “super-symmetric" intersite-centered vortices (“vortex squares"), with the topological charge $S$ equal to the square’s size $M$: we predict in an analytical form by means of the Lyapunov-Schmidt theory, and confirm by numerical results, that arbitrarily weak anisotropy results in dramatic changes in the stability and dynamics in comparison with the *degenerate*, in this case, isotropic limit.' author: - 'P.G. Kevrekidis$^1$, D.J. Frantzeskakis$^{2}$, R. Carretero-González$^3$, B.A. Malomed$^4$ and A.R. Bishop$^5$' date: 'Submitted to [*Phys. Rev. E*]{}, July 2005' title: Discrete Solitons and Vortices on Anisotropic Lattices --- Introduction and the model ========================== In the last two decades, nonlinear lattice (spatially discrete) systems have been a very rapidly growing area of interest for a variety of applications [@reviews]. Such systems arise in physical contexts encompassing, *inter alia*, beam dynamics in coupled waveguide arrays in nonlinear optics [@reviews1], the time evolution of fragmented Bose-Einstein condensates (BECs) trapped in optical lattices (OLs) [@reviews2], coupled cantilever systems in nano-mechanics [@sievers], denaturation of the DNA double strand in biophysics [@reviews3] and even stellar dynamics in astrophysics [@voglis].
One of the main objectives of the research in this field is to achieve an understanding of intrinsically localized states (discrete solitons). In two-dimensional (2D) lattices, these are fundamental discrete solitons [@solit] and discrete vortices (i.e., localized states with an embedded nonzero phase circulation over a closed lattice contour) [@vort1; @vort2]. Most recently, a substantial effort was dedicated to the experimental creation of both these entities in photonic lattices induced in photorefractive crystals (although these systems are only quasi-discrete). In particular, fundamental and dipole solitons, soliton trains and necklaces, and vector solitons have been reported [@esolit], as well as vortex solitons [@evort]. Parallel developments in the experimental studies of soliton patterns in BECs have also been very substantial, leading to the creation of quasi-1D dark [@becd], bright [@becb] and gap [@becg] solitons. The generation of 2D BEC solitons in OLs has been theoretically demonstrated [@boris] to be feasible with the currently available experimental technology [@old2d]. A paradigm dynamical lattice model that appears in the above-mentioned physical problems is the discrete nonlinear Schr[ö]{}dinger (DNLS) equation. Various applications of the DNLS equation are well documented [@reviews; @reviews1; @reviews2]. Besides being a generic asymptotic form of a whole class of lattice models (for small-amplitude nonlinear excitations), it finds direct applications (where it furnishes an extremely accurate description of the underlying physics) in terms of arrayed (1D) or bunched (2D) nonlinear optical waveguides, BECs trapped in strong OLs, and crystals built of optical or exciton microcavities. An interesting issue in this framework that has not received sufficient attention concerns the influence of anisotropy on the soliton dynamics in 2D lattices.
Some of the settings mentioned above are *inherently* anisotropic, e.g., photorefractive crystals [@solit; @esolit], while others (in particular, the fragmented BECs trapped in strong OLs [@reviews2]) can be easily rendered anisotropic by slight variations of control parameters, such as intensities of laser beams that create two sublattices which together constitute the 2D optical lattice. The aim of this paper is to understand how the lattice anisotropy affects 2D discrete solitons in the DNLS equation. Some findings reported below are surprising, demonstrating that anisotropy effects are not straightforward. The straightforward expectation might be that weak anisotropy is a small perturbation that possibly alters details of parametric dependences of the observed phenomenology but does not change it “structurally" (i.e., essentially the same dynamical features as in the isotropic lattice occur, but at different positions in the parameter space). We find that for the simplest soliton and vortex structures this is indeed the case, while for more sophisticated ones it is not. More specifically, we find that for especially symmetric (so-called “supersymmetric") vortices, with their center set at an intersite position, and the topological charge equal to the size of the vortex square frame (see below for details), the isotropic lattice is a *degenerate* one, therefore even very weak anisotropy fundamentally alters the stability and dynamical properties of such structures. On the other hand, despite the delicate organization of the supersymmetric vortices, they constitute a structurally stable, i.e., physically meaningful, class of objects. 
We take the DNLS equation in the following form: $$i\dot{u}_{n,m}=-\epsilon \Delta _{\alpha }u_{n,m}-|u_{n,m}|^{2}u_{n,m}, \label{deq1}$$where $u_{n,m}(t)$ is the complex, 2D lattice field (the overdot stands for its time derivative), $\epsilon $ is the lattice coupling constant, and $$\label{deq2} \begin{array}{rl} \Delta _{\alpha }u_{n,m}=&\alpha \left( u_{n+1,m}+u_{n-1,m}\right)+u_{n,m+1}\\[2.0ex] &+u_{n,m-1}-2(1+\alpha )u_{n,m}, \end{array}$$ is the *anisotropic* discrete Laplacian, which becomes isotropic for $\alpha =1$. Note that, unlike the continuum limit, no scaling transformation can cast the anisotropic DNLS equation into the isotropic form. Equation (\[deq1\]) conserves two dynamical invariants: the Hamiltonian, $$\begin{array}{rcl} H &=&\sum_{n,m}\left[ \left( u_{n,m+1}^{\ast }u_{n,m}+u_{n,m+1}u_{n,m}^{\ast }\right) \right.\\[2.0ex] && \left. +\alpha \left( u_{n+1,m}^{\ast }u_{n,m}+u_{n+1,m}u_{n,m}^{\ast }\right) \right. \\[2.0ex] &&\displaystyle -\left. \left( \frac{\Lambda }{\epsilon }+2+2\alpha \right) \left\vert u_{n,m}\right\vert ^{2}+\frac{1}{2\epsilon }\left\vert u_{n,m}\right\vert ^{4}\right] , \end{array}$$ and norm, $$N=\sum_{n,m}\left\vert u_{n,m}\right\vert ^{2}, \label{N}$$ where $\Lambda$ is the frequency of the internal mode (equivalently, the chemical potential in the context of BECs, or the propagation constant in the context of optical waveguide arrays). Stationary solutions to Eq. (\[deq1\]) will be sought as $$u_{n,m}=u_{n,m}^{(0)}\exp (i\Lambda t), \label{stationary} $$ which leads to a stationary finite-difference equation, $$\Lambda u_{n,m}^{(0)}=\epsilon \Delta _{\alpha }u_{n,m}^{(0)}+\left\vert u_{n,m}^{(0)}\right\vert ^{2}u_{n,m}^{(0)} \label{stateqn}$$ (generally speaking, the discrete functions $u_{n,m}^{(0)}$ may be complex). In the case of fundamental-soliton solutions, we will apply the variational approximation (VA) to the *real* version of Eq.
(\[stateqn\]), which is based on the fact that it can be derived from the Lagrangian, $$\label{L} \begin{array}{rl} L=&\displaystyle \sum_{n,m}\left[u_{n,m+1}u_{n,m}+\alpha u_{n+1,m}u_{n,m} -\phantom{\frac{A}{A}}\right. \\[4.0ex] &\displaystyle\left.\left( \frac{\Lambda }{2\epsilon }+1+\alpha \right) u_{n,m}^{2}+\frac{1}{4\epsilon }u_{n,m}^{4}\right] . \end{array}$$ After analyzing fundamental solitons by means of the VA, we will construct discrete solitons in the anisotropic model and will study their stability by means of numerical methods. For the numerical procedure, our starting point is always the anti-continuum (AC) limit corresponding to $\epsilon =0$ [@mackay], where configurations of interest can be constructed at will as appropriate combinations of on-site states, which are either $u_{n,m}=\sqrt{\Lambda }\exp (i\Lambda t)$ with $\Lambda >0$ at excited sites, or $u_{n,m}\equiv 0$ at non-excited ones, cf. Eqs. (\[stationary\]) and (\[stateqn\]) for the general case, $\epsilon >0$. The stability of the solitons is then analyzed by linearizing Eq. (\[deq1\]) for perturbations around a stationary solution $u_{n,m}^{(0)}e^{i\Lambda t}$, $$u_{n,m}=\left[ u_{n,m}^{(0)}+\delta \cdot \left( a_{n,m}e^{\lambda t}+b_{n,m}e^{\lambda ^{\ast }t}\right) \right] e^{i\Lambda t}, \label{lambda}$$ where $\delta $ is an infinitesimal amplitude of the perturbation, and $\lambda $ is its eigenvalue. The Hamiltonian nature of the system dictates that if $\lambda $ is an eigenvalue, then so also are $-\lambda $, $\lambda ^{\ast }$ and $-\lambda ^{\ast }$ (in the stable case, $\lambda $ is imaginary, hence this symmetry yields only two different eigenvalues, $\lambda $ and $-\lambda $). Clearly, the stationary solution is unstable if at least one pair of eigenvalues features nonvanishing real parts. It is noteworthy that the instability against perturbations corresponding to purely real eigenvalues $\lambda $ in Eq.
(\[lambda\]) can be predicted by the *Vakhitov-Kolokolov* (VK) criterion [@VK]: a soliton family, characterized by the dependence $N(\Lambda )$ \[recall $N$ is the solution’s norm defined by Eq. (\[N\])\], may be stable under the condition $dN/d\Lambda >0$, and is definitely unstable in the opposite case. In particular, this criterion (as well as the VA) was found to be very useful and quite reliable in the investigation of 2D solitons in the Gross-Pitaevskii equation for BECs in 2D and quasi-1D periodic OL potentials [@Bakhtiyor], and even in 2D quasi-periodic potentials (such as the Penrose tiling among others) [@Bakhtiyor2]. Our study of different states in the anisotropic model and their properties is structured as follows. In Section II, we present the VA for fundamental solitons. In Section III, discrete solitons and vortex crosses with the topological charge $S=1$ are considered, which are only perturbatively (weakly) affected by the anisotropy. In the following two sections, we will define and consider special “super-symmetric" configurations, with $S=1$ and $S=2$, respectively, and compare them with simpler cases. Finally, in Section VI we summarize our findings and present conclusions. Variational approximation for fundamental solitons ================================================== As was shown in Ref. [@MIW] for the one-dimensional DNLS equation (see also Ref. [@MIW2] for a more rigorous variational approach applied to higher-dimensional solitons in the isotropic case), the only analytically tractable variational *ansatz* for stationary fundamental solitons may be based on the following cusp-shaped expression (in the 2D case, it has the shape of a cross cusp),$$u_{n,m}^{(0)}=A\exp \left( -a|n|-b|m|\right) , \label{ansatz}$$with positive parameters $a$ and $b$ that determine the widths of the soliton in the horizontal and vertical directions, and an arbitrary amplitude $A$.
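As a quick numerical consistency check, the cusp ansatz can be evaluated on a large truncated lattice: its norm sums to a closed form, and away from the axes $n=0$, $m=0$ it satisfies the linearized stationary equation, with $\Lambda$ fixed by the decay rates $a$ and $b$. A minimal sketch (the lattice size and the parameter values here are arbitrary choices, not ones used in the paper):

```python
import numpy as np

# Arbitrary test values of the coupling, anisotropy, inverse widths, amplitude
eps, alpha = 1.0, 1.5
a, b, A = 0.7, 1.3, 1.0

n = np.arange(-200, 201)
u = A * np.exp(-a * np.abs(n))[:, None] * np.exp(-b * np.abs(n))[None, :]

# Norm of the ansatz: N = sum u^2 = A^2 coth(a) coth(b), i.e. A^2 = N tanh(a) tanh(b)
N = np.sum(u**2)
assert abs(N - A**2 / (np.tanh(a) * np.tanh(b))) < 1e-10 * N

# Anisotropic discrete Laplacian (periodic wrap is irrelevant at interior sites)
lap = (alpha * (np.roll(u, 1, 0) + np.roll(u, -1, 0))
       + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 2 * (1 + alpha) * u)

# On the tails (n, m > 0) the linearized equation Lambda*u = eps*lap(u) holds,
# with Lambda = 4*eps*[alpha*sinh^2(a/2) + sinh^2(b/2)] from the decay rates
i = 200 + 5                      # index of the site n = m = 5
lam_num = eps * lap[i, i] / u[i, i]
lam_disp = 4 * eps * (alpha * np.sinh(a / 2)**2 + np.sinh(b / 2)**2)
assert abs(lam_num - lam_disp) < 1e-9
```

Both identities hold to machine precision, which makes the ansatz a convenient starting point for the effective Lagrangian computed next.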
Note that expression (\[ansatz\]) is indeed an *exact solution* to the linearized version of Eq. (\[stateqn\]), which describes soliton tails, if $\Lambda $ is linked to $a$ and $b$ by the dispersion relation,$$\Lambda =4\epsilon \left[ \alpha \sinh ^{2}(a/2)+\sinh ^{2}(b/2)\right] . \label{disp}$$ The substitution of ansatz (\[ansatz\]) makes it possible to calculate the corresponding effective Lagrangian explicitly. First of all, it is convenient to eliminate the amplitude in favor of the norm (\[N\]). Indeed, the substitution of the ansatz in the definition of $N$ yields $A^{2}=N\tanh a\cdot \tanh b$. After this, the effective Lagrangian becomes $$\begin{aligned} L_{\mathrm{eff}} &=&N~\left( \alpha \mathrm{sech}a+\mathrm{sech}b\right) -\left( \frac{\Lambda }{2\epsilon }+1+\alpha \right) N \notag \\ &+&\frac{N^{2}}{16\epsilon }\frac{\cosh \left( 2a\right) \cosh \left( 2b\right) \sinh a \sinh b}{\cosh ^{3}(a) \cosh ^{3}(b)}~. \label{Leff}\end{aligned}$$ Variational equations for the stationary profile are obtained from here in the form $$\frac{\partial L_{\mathrm{eff}}}{\partial N}=\frac{\partial L_{\mathrm{eff}}}{\partial a}=\frac{\partial L_{\mathrm{eff}}}{\partial b}=0. \label{general}$$ In the general case, the explicit form of these equations is quite cumbersome (this will be treated numerically, see below). A detailed analysis is possible in two special cases, as specified below. The first is the case of small $a$ and $b$ ($a,b\ll 1$), which implies broad solitons. Then, the expansion of the effective Lagrangian (\[Leff\]) yields $$\begin{aligned} L_{\mathrm{eff}} &\approx &-\frac{\Lambda }{2\epsilon }N+\frac{1}{2}N\left( -b^{2}-\alpha a^{2}+\frac{5}{12}b^{4}+\frac{5\alpha }{12}a^{4}\right) \notag \\ &&+\frac{N^{2}}{16\epsilon }\left( ab+\frac{2}{3}a^{3}b+\frac{2}{3}ab^{3}\right) , \label{expansion}\end{aligned}$$and the variational equations (\[general\]) following from Eq.
(\[expansion\]) generate the following solution: $$\begin{aligned} N &=&16\epsilon \sqrt{\alpha }\left( 1-\frac{7}{8}\frac{\alpha +1}{\alpha } \frac{\Lambda }{\epsilon }\right) , \label{NLambda} \\ a^{2} &=&\frac{\Lambda }{2\epsilon \alpha },~b=\sqrt{\alpha }a. \label{ab}\end{aligned}$$ As follows from these expressions, the underlying assumptions $a,b\ll 1$ indeed hold (i.e., the approximation is self-consistent) under the condition $$\Lambda \ll \left\{ \begin{array}{cc} \alpha \epsilon , & \mathrm{if~}\alpha \lesssim 1, \\ \epsilon , & \mathrm{if~}\alpha \gtrsim 1.\end{array}\right. \label{condLambda}$$ The broad (quasi-continuum) solitons predicted in this approximation are *unstable* according to the VK criterion, as Eq. (\[NLambda\]) immediately shows that $dN/d\Lambda <0$. Note that the expansion of the dispersion relation (\[disp\]) for the same case of small $a$ and $b$ yields $\alpha a^{2}+b^{2}=\Lambda /\epsilon $. It is noteworthy that this relation, although derived independently of the variational equations, is consistent with Eq. (\[ab\]). Another tractable case is that of a *strongly anisotropic* soliton, which is broad (quasi-continuum) in one direction and narrow in the other, i.e., it corresponds to $a\ll 1,b\gg 1$, or vice versa. If $a$ is small and $b$ is large, the variational equations (\[general\]) yield the following results: $$a=\sqrt{\frac{\Lambda }{3\alpha \epsilon }},~\sinh \left( \frac{b}{2}\right) =\sqrt{\frac{\Lambda }{\epsilon }},~N^{2}=\frac{4}{3}\epsilon \alpha \Lambda . \label{aniso}$$ These results are consistent with the underlying assumptions ($a\ll 1,b\gg 1$) under the conditions $$1\ll \Lambda /\epsilon \ll \alpha . \label{cond}$$ In contrast to the broad solitons given above by Eqs. (\[NLambda\]) and (\[ab\]), Eqs. (\[aniso\]) show that the anisotropic solitons are *stable* as per the VK criterion, as they obviously meet the condition $dN/d\Lambda >0$.
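The VK slopes of the two branches obtained so far can be checked in a few lines by simply evaluating the quoted $N(\Lambda)$ expressions on a grid (a sketch; the parameter values are arbitrary points inside the respective validity regions):

```python
import numpy as np

# Broad (quasi-continuum) branch, valid for Lambda << min(1, alpha)*eps
eps, alpha = 1.0, 1.0
Lam = np.linspace(1e-3, 5e-2, 50)
N_broad = 16 * eps * np.sqrt(alpha) * (1 - (7 / 8) * (alpha + 1) / alpha * Lam / eps)
assert np.all(np.diff(N_broad) < 0)   # dN/dLambda < 0: VK-unstable

# Strongly anisotropic branch, valid for 1 << Lambda/eps << alpha
eps2, alpha2 = 0.1, 50.0
Lam2 = np.linspace(0.5, 2.0, 50)
N_aniso = np.sqrt(4 / 3 * eps2 * alpha2 * Lam2)
assert np.all(np.diff(N_aniso) > 0)   # dN/dLambda > 0: VK-stable
```

The opposite signs of the two slopes are exactly the VK-based stability statements made in the text.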
For the opposite strongly anisotropic case, with $a\gg 1$ and $b\ll 1$, the result is$$b=\sqrt{\frac{\Lambda }{3\epsilon }},~\sinh \left( \frac{a}{2}\right) =\sqrt{\frac{\Lambda }{\alpha \epsilon }},~N^{2}=\frac{4}{3}\epsilon \Lambda , \label{aniso2}$$cf. Eqs. (\[aniso\]). These expressions comply with the underlying assumptions $a\gg 1,b\ll 1$ provided that $$\alpha \ll \Lambda /\epsilon \ll 1, \label{cond2}$$cf. Eq. (\[cond\]). Similarly to the solution of Eq. (\[aniso\]), the one of Eq. (\[aniso2\]) obviously meets the VK stability criterion. ![a) Norm of the solution vs. $\Lambda$ for several values of the anisotropy and fixed coupling strength $\epsilon=1$. For all the panels in this figure the anisotropy values are $\alpha=1.5,1.25,1,0.75$, respectively, for each curve from top to bottom. Thick lines (solid and dashed) represent direct numerical results and the thin lines represent the VA. The dashed lines correspond to unstable soliton solutions. Note that the sign of the slope of $N(\Lambda)$ reflects the stability of the soliton solutions as predicted by the VK criterion. b) The norm of the soliton solution as a function of the coupling strength for fixed $\Lambda=1$. Once again, thick lines represent direct numerical results and thin lines illustrate the VA. []{data-label="NvsLambda_fig"}](N_vs_Lamba.ps){width="6.cm" height="4cm"} ![a) Norm of the solution vs. $\Lambda$ for several values of the anisotropy and fixed coupling strength $\epsilon=1$. For all the panels in this figure the anisotropy values are $\alpha=1.5,1.25,1,0.75$, respectively, for each curve from top to bottom. Thick lines (solid and dashed) represent direct numerical results and the thin lines represent the VA. The dashed lines correspond to unstable soliton solutions. Note that the sign of the slope of $N(\Lambda)$ reflects the stability of the soliton solutions as predicted by the VK criterion.
b) The norm of the soliton solution as a function of the coupling strength for fixed $\Lambda=1$. Once again, thick lines represent direct numerical results and thin lines illustrate the VA. []{data-label="NvsLambda_fig"}](N_vs_epsilonB.ps){width="6.cm" height="4cm"} Lastly, inequalities (\[cond\]) and (\[cond2\]) imply that the above solutions indeed pertain to the strongly anisotropic model, as the corresponding parameter $\alpha $ is large in the former case, and small in the latter one. We also notice that the condition (\[cond\]) in the case of large $\alpha $, or its counterpart (\[cond2\]) in the opposite case of small $\alpha $, is incompatible with the respective condition (\[condLambda\]), i.e. (as one would expect), the existence regions of the unstable quasi-continuum solitons and stable strongly anisotropic ones have no overlap. For general $a$ and $b$, the variational equations (\[general\]), with the effective Lagrangian (\[Leff\]), cannot be solved explicitly and one has to find $(N,a,b)$ solutions numerically for each $(\epsilon,\Lambda)$ pair. In Fig. \[NvsLambda\_fig\] we compare the results obtained from the VA with solutions obtained through numerically solving the stationary equation (\[stateqn\]). Fig. \[NvsLambda\_fig\].a depicts the norm of the soliton solutions as a function of the propagation constant $\Lambda$ for several values of the anisotropy parameter $\alpha$ and for constant coupling ($\epsilon=1$). As may be noticed from the figure, the VA (thin lines) provides a good approximation to the actual solution (thick lines). We also checked the stability of the constructed solutions by following the largest real eigenvalue of the linearized problem defined in Eq. (\[lambda\]) (see details below). Stable solutions are depicted with solid lines while unstable solutions correspond to dashed lines.
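The numerical procedure used throughout (Newton iterations continued in $\epsilon$ from a single-site seed, followed by diagonalization of the linearization around the real profile $u^{(0)}$) can be illustrated on a small lattice. The sketch below is an illustration under stated assumptions, not the production code behind the figures; the lattice size, continuation steps, and tolerances are arbitrary choices, and the block matrix is the standard reduction of the perturbation ansatz for real $u^{(0)}$:

```python
import numpy as np

Lam = 1.0                       # fixed frequency, as in the text
alpha = 1.25                    # anisotropy
K = 7                           # lattice spans n, m = -K..K
Np = 2 * K + 1

# Anisotropic discrete Laplacian with zero (Dirichlet) boundaries,
# alpha weighting the shifts in the first (n) index
T = np.diag(np.ones(Np - 1), 1) + np.diag(np.ones(Np - 1), -1)
I = np.eye(Np)
Lap = alpha * np.kron(T, I) + np.kron(I, T) - 2 * (1 + alpha) * np.eye(Np * Np)

def newton(eps, u, tol=1e-12):
    """Solve the real stationary equation Lam*u = eps*Lap*u + u^3."""
    for _ in range(100):
        F = eps * (Lap @ u) + u**3 - Lam * u
        J = eps * Lap + np.diag(3 * u**2 - Lam)
        du = np.linalg.solve(J, F)
        u = u - du
        if np.max(np.abs(du)) < tol:
            break
    return u

# Anti-continuum seed: a single excited site, u = sqrt(Lam), at the center
u = np.zeros(Np * Np)
c = (Np * Np) // 2
u[c] = np.sqrt(Lam)

# Continuation in eps up to eps = 0.3 (well inside the stable region)
for eps in np.linspace(0.0, 0.3, 16)[1:]:
    u = newton(eps, u)

# Linearization: for the amplitudes (a, b*) one gets lambda = -i*mu, where mu
# are the eigenvalues of [[L, U], [-U, -L]], with
# L = Lam - eps*Lap - 2*diag(u^2) and U = diag(u^2)
Lmat = Lam * np.eye(Np * Np) - eps * Lap - 2 * np.diag(u**2)
U = np.diag(u**2)
mu = np.linalg.eigvals(np.block([[Lmat, U], [-U, -Lmat]]))
growth = np.max(np.abs(mu.imag))    # max |Re lambda|

assert growth < 1e-6                # stable here: the spectrum is purely imaginary
assert u[c] > np.sqrt(Lam)          # on-site peak exceeds its AC-limit value
assert u[c + Np] > u[c + 1]         # slower decay along the alpha-weighted axis
```

The last assertion makes the anisotropy of the profile explicit: for $\alpha>1$ the soliton is wider along the more strongly coupled direction, in line with the amplitudes quoted for the bottom panel of Fig. \[dfig1\].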
As is clear from the figure, the slope of $N(\Lambda)$ predicts the stability of the solution according to the VK criterion (see above). Furthermore, since the VA gives a good approximation of $N(\Lambda)$, it is possible to obtain a good estimate for the transition from stable to unstable solutions, as $\Lambda$ is decreased, using the VA together with the VK criterion. Finally, in Fig. \[NvsLambda\_fig\].b we fix $\Lambda=1$ and perform a similar calculation by varying the coupling strength $\epsilon$. Again, the VA (thin lines) approximates remarkably well the norm of the solutions (thick lines). Fundamental solitons and vortex crosses: numerical results ========================================================== We start numerical computations with a single excited site in the AC limit, and continue the solution in $\epsilon $ (for a fixed value of the anisotropy parameter $\alpha $). The objective is to construct regular site-centered discrete solitons, with the anticipation that, as is known for the isotropic model ($\alpha =1$), the solitons will be stable up to a critical value of the coupling constant, i.e., at $\epsilon <\epsilon _{\mathrm{cr}}$ [@MIW2; @2d]. At $\epsilon >\epsilon _{\mathrm{cr}}$, the discrete solitons are found to be unstable due to a real eigenvalue arising in the linearization around the soliton. In the numerical part of the work (unlike the VA considered above), we fix $\Lambda =1$ in Eq. (\[stateqn\]), using the scaling invariance of Eq. (\[deq1\]), and examine how $\epsilon _{\mathrm{cr}}$ is affected by the variation of $\alpha $. The results will be summarized in the form of two-parameter diagrams that chart regions of stable and unstable discrete states. For regular discrete solitons, such a diagram is presented in Fig. \[dfig1\]. The top panel illustrates the fact that the increase of $\alpha $ gradually destabilizes the solitons, i.e., $\epsilon _{\mathrm{cr}}$ decreases with increasing $\alpha $. 
Interestingly, the respective dependence is very well approximated by an empirical relation $\epsilon _{\mathrm{cr}}=1/\sqrt{\alpha }$. More accurately, the best fit to this numerical dependence is given by $\epsilon _{\mathrm{cr}}\approx 0.999\alpha ^{-0.488}$. The middle panel in Fig. \[dfig1\] illustrates in more detail some special cases of this dependence for $\alpha =1$ (solid lines), $\alpha =1.25$ (dashed lines) and $\alpha =0.75$ (dash-dotted lines). We note that, in terms of the general equation (\[deq1\]), the cases of $\alpha <1$ and $\alpha >1$ are equivalent to each other, as one may divide the equation by $\alpha $, mutually rename the vertical and horizontal indices ($n$ and $m$), and then rescale the equation to the form with $\alpha $ replaced by $1/\alpha $. However, this transformation is not possible once we fix $\Lambda \equiv 1$, which is why we report results below for both $\alpha >1$ and $\alpha <1$. For $\alpha =1$, an eigenvalue bifurcates from the edge of the continuous spectrum at $\epsilon \approx 0.445$, and with further increase of $\epsilon $ it moves towards the origin of the spectral plane $(\lambda _{r},\lambda _{i})$ (the subscripts denote the real and imaginary parts of the eigenvalue). The soliton becomes unstable when this eigenvalue reaches the origin, at $\epsilon \approx 1.006$. For $\alpha =1.25$, the first bifurcation occurs at $\epsilon \approx 0.398$, and the instability sets in at $\epsilon \approx 0.896$, whereas for $\alpha =0.75$ the respective critical points (the appearance of the eigenvalue, and its passage into the instability region) are found at $\epsilon \approx 0.511$ and $1.156$, respectively. These results are quite natural since, as $\alpha \rightarrow 0$, the system becomes nearly one-dimensional, hence we expect the destabilization point to approach its 1D counterpart.
Thus, as the 1D discrete solitons are well known to be stable up to the continuum limit, one may expect that $\epsilon _{\mathrm{cr}}\rightarrow \infty $ for $\alpha \rightarrow 0$. The bottom panel of Fig. \[dfig1\] shows an example of a discrete soliton for $\alpha =1.5$ and $\epsilon =1$. Although the anisotropy is hardly noticeable in this case, it can nevertheless be traced; in particular, $u_{1,0}=0.785$, and $u_{0,1}=0.579$. ![The solid line in the top panel shows the critical value of $\protect\epsilon $ (the border between stable and unstable discrete solitons) as a function of $\protect\alpha $; the dashed line is $\protect\epsilon =1/\protect\sqrt{\protect\alpha }$. The middle panel shows how the real and imaginary parts of the stability eigenvalue, $\protect\lambda _{r}$ and $\protect\lambda _{i}$, depend on $\protect\epsilon $ for $\protect\alpha =1.25,~1$, and $0.75$ (dashed, solid, and dash-dotted curves, respectively). The bottom panel shows an example of the discrete soliton found for $\protect\epsilon =1$ and $\protect\alpha =1.5$.[]{data-label="dfig1"}](anis2.ps){width="6.cm" height="5cm"} ![The solid line in the top panel shows the critical value of $\protect\epsilon $ (the border between stable and unstable discrete solitons) as a function of $\protect\alpha $; the dashed line is $\protect\epsilon =1/\protect\sqrt{\protect\alpha }$. The middle panel shows how the real and imaginary parts of the stability eigenvalue, $\protect\lambda _{r}$ and $\protect\lambda _{i}$, depend on $\protect\epsilon $ for $\protect\alpha =1.25,~1$, and $0.75$ (dashed, solid, and dash-dotted curves, respectively).
The bottom panel shows an example of the discrete soliton found for $\protect\epsilon =1$ and $\protect\alpha =1.5$.[]{data-label="dfig1"}](anis1B.ps){width="5.8cm" height="4cm"}   ![The solid line in the top panel shows the critical value of $\protect\epsilon $ (the border between stable and unstable discrete solitons) as a function of $\protect\alpha $; the dashed line is $\protect\epsilon =1/\protect\sqrt{\protect\alpha }$. The middle panel shows how the real and imaginary parts of the stability eigenvalue, $\protect\lambda _{r}$ and $\protect\lambda _{i}$, depend on $\protect\epsilon $ for $\protect\alpha =1.25,~1$, and $0.75$ (dashed, solid, and dash-dotted curves, respectively). The bottom panel shows an example of the discrete soliton found for $\protect\epsilon =1$ and $\protect\alpha =1.5$.[]{data-label="dfig1"}](anis2a.ps "fig:"){width="6cm" height="5.25cm"} Similar results can be obtained for on-site vortices (discrete vortex solitons) with the topological charge $S=1$. In this section, we consider the solitons in the form of the so-called “vortex cross", with $u_{1,0}=1$, $u_{0,1}=\exp (i\pi /2)\equiv i$, $u_{-1,0}=\exp (i\pi )\equiv -1$, $u_{0,-1}=\exp (i3\pi /2)\equiv -i$ (and $u_{0,0}=0$, at the central point), excited in the AC limit [@vort1]. There are interesting differences in this problem, in comparison with the fundamental soliton. In particular, the respective instability mechanism is different, as it is caused by an eigenvalue bifurcating from the origin in the spectral plane for $\epsilon \neq 0$, and eventually (upon parametric continuation) colliding with the edge of the continuous spectrum (or an eigenvalue bifurcating from the continuous spectrum). The collision gives rise to a *quartet* of eigenvalues, through the so-called Hamiltonian-Hopf bifurcation [@hamilton].
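The vortex cross can likewise be continued numerically from its AC-limit seed. Because of the global phase invariance the Jacobian of the stationary problem is singular, so a minimum-norm (least-squares) Newton step is convenient. The sketch below again assumes the stationary form $\Lambda u=\epsilon Lu+|u|^{2}u$ with the vertical bonds weighted by $\alpha $ (a modeling assumption, not necessarily the paper's exact convention):

```python
import numpy as np

def coupling(N, alpha):
    # nearest-neighbor coupling; vertical (m) bonds weighted by alpha
    L = np.zeros((N * N, N * N))
    for n in range(N):
        for m in range(N):
            if n + 1 < N:
                L[n * N + m, (n + 1) * N + m] = L[(n + 1) * N + m, n * N + m] = 1.0
            if m + 1 < N:
                L[n * N + m, n * N + m + 1] = L[n * N + m + 1, n * N + m] = alpha
    return L

def newton_complex(u, L, eps, Lam=1.0, iters=60):
    """Minimum-norm Newton iteration for Lam*u = eps*L@u + |u|^2 u (complex u)."""
    for _ in range(iters):
        F = Lam * u - eps * L @ u - np.abs(u) ** 2 * u
        a, b = u.real, u.imag
        A = Lam * np.eye(len(u)) - eps * L
        # real-form Jacobian of F with respect to (Re u, Im u)
        J = np.block([[A - np.diag(3 * a**2 + b**2), -np.diag(2 * a * b)],
                      [-np.diag(2 * a * b), A - np.diag(a**2 + 3 * b**2)]])
        du = np.linalg.lstsq(J, np.concatenate([F.real, F.imag]), rcond=None)[0]
        u = u - (du[:len(u)] + 1j * du[len(u):])
    return u

N = 11; c = N // 2
L = coupling(N, alpha=1.0)
u = np.zeros(N * N, dtype=complex)
# AC-limit vortex cross: phases 0, pi/2, pi, 3*pi/2 around the empty center
for (dn, dm), ph in zip([(1, 0), (0, 1), (-1, 0), (0, -1)],
                        [0, np.pi / 2, np.pi, 3 * np.pi / 2]):
    u[(c + dn) * N + (c + dm)] = np.exp(1j * ph)
for eps in (0.05, 0.1):
    u = newton_complex(u, L, eps)
phases = [np.angle(u[(c + dn) * N + (c + dm)]) for dn, dm in
          [(1, 0), (0, 1), (-1, 0), (0, -1)]]
print("phases around the cross:", phases)
```

On the isotropic lattice the $\pi /2$ phase steps around the cross survive the continuation exactly, as enforced by the discrete rotational symmetry.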
In the isotropic case ($\alpha =1$), it is known that this instability sets in at $\epsilon _{\mathrm{cr}}\approx 0.39$ [@vort1], while, in the present anisotropic model, we have found that $\epsilon _{\mathrm{cr}} \approx 0.325$ for $\alpha =1.3$ and $\epsilon _{\mathrm{cr}} \approx 0.429$ for $\alpha =0.7$. The respective two-parameter diagram $(\epsilon _{\mathrm{cr}},\alpha )$ is shown in the top panel of Fig. \[dfig2\]. The cases of $\alpha =1.3$, $1$, and $0.7$ (dashed, solid and dash-dotted curves, respectively) are shown in the middle panel. The bottom panel of the figure illustrates the squared-amplitude profile of the discrete vortex for $\alpha =0.2$ and $\epsilon =0.5$. The sites $(1,0)$ and $(0,1)$ have the squared amplitudes $|u_{1,0}|^{2}=1.934$ and $|u_{0,1}|^{2}=2.057$, respectively. Notice also that as $\alpha \rightarrow 0$, a quasi-1D situation is again approached, where the so-called twisted localized mode (TLM) [@TLM] configuration (alias an odd soliton) is a counterpart of the 2D vortex. As one would expect, the critical point of the instability departs from the value $\epsilon _{\mathrm{cr}}^{(\mathrm{2D})} \approx 0.39$, corresponding to the isotropic 2D case, towards the value corresponding to the stability border of the 1D TLM solitons, which is $\epsilon _{\mathrm{cr}}^{(\mathrm{1D})}\approx 0.433$. ![ The top panel shows the critical value of $\protect\epsilon $ separating the stable and unstable discrete vortices (on-site-centered ones, alias *vortex crosses*) with $S=1$ as a function of $\protect\alpha $. The middle panel shows how the real and imaginary parts of the eigenvalue leading to the instability depend on $\protect\epsilon $ for $\protect\alpha =1.3,~1$, and $0.7$ (dashed, solid and dash-dotted curves, respectively). Notice that, for $\protect\alpha =1.3$, there is a secondary instability arising for $\protect\epsilon >0.433$.
The bottom panel shows the squared-absolute-value profile of the discrete vortex for $\protect\epsilon =0.5$ and $\protect\alpha =0.2$.[]{data-label="dfig2"}](anis3n.ps){width="5.6cm" height="4.5cm"} ![ The top panel shows the critical value of $\protect\epsilon $ separating the stable and unstable discrete vortices (on-site-centered ones, alias *vortex crosses*) with $S=1$ as a function of $\protect\alpha $. The middle panel shows how the real and imaginary parts of the eigenvalue leading to the instability depend on $\protect\epsilon $ for $\protect\alpha =1.3,~1$, and $0.7$ (dashed, solid and dash-dotted curves, respectively). Notice that, for $\protect\alpha =1.3$, there is a secondary instability arising for $\protect\epsilon >0.433$. The bottom panel shows the squared-absolute-value profile of the discrete vortex for $\protect\epsilon =0.5$ and $\protect\alpha =0.2$.[]{data-label="dfig2"}](anis4nB.ps){width="5.8cm" height="4cm"}   ![ The top panel shows the critical value of $\protect\epsilon $ separating the stable and unstable discrete vortices (on-site-centered ones, alias *vortex crosses*) with $S=1$ as a function of $\protect\alpha $. The middle panel shows how the real and imaginary parts of the eigenvalue leading to the instability depend on $\protect\epsilon $ for $\protect\alpha =1.3,~1$, and $0.7$ (dashed, solid and dash-dotted curves, respectively). Notice that, for $\protect\alpha =1.3$, there is a secondary instability arising for $\protect\epsilon >0.433$. 
The bottom panel shows the squared-absolute-value profile of the discrete vortex for $\protect\epsilon =0.5$ and $\protect\alpha =0.2$.[]{data-label="dfig2"}](anis4a.ps "fig:"){width="6cm" height="5.25cm"} Fundamental vortex squares ========================== For the discrete solitons examined so far, the difference between the isotropic and anisotropic cases has not been particularly dramatic; the anisotropy chiefly entailed a smooth deformation of the instability-onset scenarios known for the isotropic case. Therefore, the dynamical evolution triggered by the instability is naturally expected to be similar to that in previously studied isotropic cases [@solit; @vort1; @vort2; @2d]. Now we will give an example where the instability scenario and dynamics are *very different* from their isotropic counterparts. We focus, in particular, on the off-site-centered vortex (alias “vortex square") [@vort1; @vort2]. The vortex-square contours are characterized by their size $M$, which is the number of lattice bonds that each side of the square contour contains in the AC-limit pattern, from which the solution family stems. Hence, the vortex square based, in the AC limit, on the set of sites $(0,0),(1,0),(1,1),(0,1)$ is the $M=1$ contour. The configuration with $S=1$ is written on this set by assigning the four sites the phases $0$, $\pi /2$, $\pi $ and $3\pi /2$, respectively. The persistence of such configurations, as was discussed in detail in Ref. [@dep], is determined by whether the secular conditions obtained from the Lyapunov-Schmidt theory [@golub], which require the solution at finite $\epsilon $ to have no projection onto the eigenvectors in the kernel of the linearization at $\epsilon =0$, are satisfied.
In the isotropic case, to leading order ($O$($\epsilon $)), these secular conditions are found to be $$0=f(\theta _{l})\equiv \sin (\theta _{l}-\theta _{l+1})+ \sin (\theta _{l}-\theta _{l-1}) \label{deq3}$$ for $l=1,\dots ,N$ (with periodic boundary conditions), where $N=4M$ is the number of sites participating in the contour and $\theta_l$ are their respective phases \[cf. Eqs. (3.1)–(3.2) of Ref. [@dep]\]. One can then apply similar arguments to the present setting and derive modified persistence criteria for the anisotropic model. For $M=1$, they are $$0=f(\theta _{l})\equiv \left\{ \begin{array}{rcl} \alpha \sin (\theta _{l}-\theta _{l+1})&+&\sin (\theta _{l}-\theta _{l-1}) \\[0.5ex] l&=&2k+1,k=0,1 \\[2.0ex] \sin (\theta _{l}-\theta _{l+1})&+&\alpha \sin (\theta _{l}-\theta_{l-1}) \\[0.5ex] l&=&2k,k=1,2.\end{array}\right. \label{deq3a}$$ While Eqs. (\[deq3a\]) may seem a moderate modification of (\[deq3\]), there is a crucial (for stability purposes) difference. Indeed, consider the linearization around the $S=1$ solution according to Eq. (\[lambda\]). It was proved in Ref. [@dep] that the Jacobian matrix of the reduced set of Eqs. (\[deq3a\]), defined through $J_{lk}=\partial f_{l}/\partial \theta _{k}$, determines leading-order corrections to $N-1$ eigenvalue pairs bifurcating from the origin \[one pair stays at the origin due to the invariance of Eq. (\[deq1\]) with respect to the phase shift\], since these eigenvalues satisfy the equation: $$\lambda _{l}^{2}=2\epsilon \mu _{l}, \label{deq3b}$$with $\mu _{l}$ the corresponding eigenvalues of the reduced $N\times N$ Jacobian $J_{lk}$. It is further easy to check that, for the vortex square with $S=1$ and $M=1$, the *entire* Jacobian matrix consists of *zeros*. More generally, as shown in Ref. [@dep], this is the case for the square vortices of size $M$ with charge $S=M$, which for that reason were termed “super-symmetric" vortices. 
Obviously, to determine the stability of the vortices in this special case, one needs to go to higher-order expansions. Typically, second-order reductions will yield a non-trivial result for the stability of such super-symmetric configurations, leading to eigenvalue dependences $\lambda _{l}\propto \epsilon $ \[rather than $\lambda _{l}\propto \sqrt{\epsilon }$, as dictated by Eq. (\[deq3b\]) in the generic case\]. The key change introduced by the anisotropy is that the matrix $J_{lk}$ has generically non-vanishing elements in the *lowest approximation* for $\alpha \neq 1$; in other words, the isotropic lattice is a *degenerate* one for the supersymmetric solitons, and arbitrarily weak anisotropy *lifts this degeneracy*. As a result, the eigenvalue bifurcations occur, typically, at the leading order, rather than at second order in the perturbation expansion, as was the case in the isotropic model. More strikingly, considering a specific example, $\alpha =1.05$ (a very weak deviation from the isotropic case), we find that the relevant angles (in radians) satisfying the conditions (\[deq3a\]) are $\theta _{1}=-0.0229$, $\theta _{2}=1.8577$, $\theta _{3}=3.4285$, and $\theta _{4}=4.6895$; the corresponding $4\times 4$ Jacobian has two zero eigenvalues (one of which will split to order $O(\epsilon )$, see below) and two nonzero ones, $\pm 0.6403$. From the existence of the positive eigenvalue and from Eq. (\[deq3b\]), it follows that the $S=M=1$ configuration is *immediately unstable* (for all values of $\epsilon $). This is in [*complete*]{} contrast with the super-symmetric vortex in the isotropic model, which has two imaginary eigenvalue pairs (bifurcating at the second-order reduction), $\lambda \approx \pm 2i\epsilon $, and is *linearly stable* for $\epsilon <\epsilon _{c}\approx 0.38$.
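These leading-order statements are straightforward to verify directly from the reduced phase equations, with no lattice computation at all. The sketch below evaluates Eqs. (\[deq3a\]) and the Jacobian $J_{lk}=\partial f_{l}/\partial \theta _{k}$, first at the symmetric isotropic phases (where the whole Jacobian must vanish) and then at the $\alpha =1.05$ angles quoted above:

```python
import numpy as np

def f_and_J(theta, alpha):
    """Persistence conditions of Eqs. (deq3a) for the M=1 contour and their
    Jacobian J_{lk} = d f_l / d theta_k (indices are periodic, four sites)."""
    n = 4
    f = np.zeros(n)
    J = np.zeros((n, n))
    for l in range(n):                 # 0-based; the text's site index is l+1
        if l % 2 == 0:                 # text's odd l: alpha on the forward bond
            p, q = alpha, 1.0
        else:                          # text's even l: alpha on the backward bond
            p, q = 1.0, alpha
        fw, bw = (l + 1) % n, (l - 1) % n
        f[l] = p * np.sin(theta[l] - theta[fw]) + q * np.sin(theta[l] - theta[bw])
        J[l, l] = p * np.cos(theta[l] - theta[fw]) + q * np.cos(theta[l] - theta[bw])
        J[l, fw] = -p * np.cos(theta[l] - theta[fw])
        J[l, bw] = -q * np.cos(theta[l] - theta[bw])
    return f, J

# isotropic super-symmetric vortex: the Jacobian vanishes identically
_, J_iso = f_and_J(np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2]), alpha=1.0)
print("max |J| (isotropic):", np.max(np.abs(J_iso)))

# angles quoted in the text for alpha = 1.05
theta = np.array([-0.0229, 1.8577, 3.4285, 4.6895])
f, J = f_and_J(theta, alpha=1.05)
mu = np.sort(np.linalg.eigvals(J).real)
print("residual:", np.max(np.abs(f)), "eigenvalues:", mu)
# leading-order growth rate at, e.g., eps = 0.025: lambda = sqrt(2*eps*mu_max)
print("predicted lambda at eps=0.025:", np.sqrt(2 * 0.025 * mu[-1]))
```

The script reproduces the two zero eigenvalues and the $\pm 0.6403$ pair, confirming the leading-order instability of the weakly anisotropic vortex square.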
From here, we conclude that the anisotropy can play a critical role in destabilizing configurations that would be very robust in the isotropic limit. Furthermore, this can happen arbitrarily close to the isotropic limit (which turns out to be a very delicate one), given the nature of the argument presented above. We also note in passing that in the anisotropic case examined above, there is yet another real eigenvalue pair, $\lambda \approx \pm 3\epsilon $ for small $\epsilon $ (this pair stems from the higher-order reduction, in agreement with the prediction of the reduced Jacobian). These two eigenvalue pairs eventually collide at $\epsilon =0.057$, resulting in a Hamiltonian Hopf bifurcation to an eigenvalue quartet which is present in the stability spectrum at $\epsilon >0.057$. This phenomenology is shown in Fig. \[dfig2off\]. The leading-order prediction for the most unstable eigenvalue is in good agreement with the full numerical result for small values of $\epsilon $. For higher values of $\epsilon $, the second-order corrections that we do not examine here in detail come into play and lead to the Hamiltonian Hopf bifurcation. ![For the $S=1$ super-symmetric square vortex (one with the size $M=1$), two real stability eigenvalues are shown as functions of $\protect\epsilon $ for $\protect\alpha =1.05$. The numerical and analytical results (see text) are displayed, respectively, by the solid lines and dashed lines.[]{data-label="dfig2off"}](anis6an.ps){width="6.cm" height="5cm"} To directly compare the dynamics in the isotropic and the weakly anisotropic (yet unstable) cases for the super-symmetric vortex, we have performed numerical simulations.
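A minimal version of such a simulation can be set up as follows: continue the $S=M=1$ vortex square from the AC limit, add a small random kick, and integrate the lattice equation in time. The model form $i\,du/dt+\epsilon Lu+|u|^{2}u=0$ (vertical bonds weighted by $\alpha $), the patch size, the perturbation amplitude $10^{-4}$, and the diagnostic thresholds are our own assumptions for this sketch:

```python
import numpy as np

def coupling(N, alpha):
    # nearest-neighbor coupling; vertical (m) bonds weighted by alpha
    L = np.zeros((N * N, N * N))
    for n in range(N):
        for m in range(N):
            if n + 1 < N:
                L[n * N + m, (n + 1) * N + m] = L[(n + 1) * N + m, n * N + m] = 1.0
            if m + 1 < N:
                L[n * N + m, n * N + m + 1] = L[n * N + m + 1, n * N + m] = alpha
    return L

def stationary_vortex_square(L, eps, Lam=1.0, iters=80):
    """Continue the AC-limit S=M=1 vortex square to finite eps with a
    minimum-norm Newton iteration (the Jacobian is phase-invariance singular)."""
    N = int(np.sqrt(L.shape[0])); c = N // 2
    u = np.zeros(N * N, dtype=complex)
    for (dn, dm), ph in zip([(0, 0), (1, 0), (1, 1), (0, 1)],
                            [0, np.pi / 2, np.pi, 3 * np.pi / 2]):
        u[(c + dn) * N + (c + dm)] = np.exp(1j * ph)
    A = Lam * np.eye(N * N) - eps * L
    for _ in range(iters):
        F = Lam * u - eps * L @ u - np.abs(u) ** 2 * u
        a, b = u.real, u.imag
        J = np.block([[A - np.diag(3 * a**2 + b**2), -np.diag(2 * a * b)],
                      [-np.diag(2 * a * b), A - np.diag(a**2 + 3 * b**2)]])
        du = np.linalg.lstsq(J, np.concatenate([F.real, F.imag]), rcond=None)[0]
        u = u - (du[:N * N] + 1j * du[N * N:])
    return u

def evolve(u, L, eps, T, dt=0.01):
    """RK4 integration of du/dt = i*(eps*L u + |u|^2 u)."""
    rhs = lambda v: 1j * (eps * L @ v + np.abs(v) ** 2 * v)
    for _ in range(int(T / dt)):
        k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
        u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

def deviation(u, u0):
    # distance to the orbit of u0 under global phase rotation
    phi = np.angle(np.vdot(u, u0))
    return np.max(np.abs(u * np.exp(1j * phi) - u0))

eps = 0.025
rng = np.random.default_rng(0)
devs = {}
for alpha in (1.05, 1.0):
    L = coupling(11, alpha)
    u0 = stationary_vortex_square(L, eps)
    pert = 1e-4 * rng.uniform(-1, 1, u0.shape)   # same kind of kick as in the text
    devs[alpha] = deviation(evolve(u0 + pert, L, eps, T=60.0), u0)
print("deviations at t=60:", devs)
```

With these assumptions the weakly anisotropic vortex departs from the stationary orbit by orders of magnitude more than its isotropic counterpart, mirroring the contrast in Fig. \[dfig2off2\].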
Detailed simulations are reported in this work only for the super-symmetric cases (see also the next section), since, as discussed above, for all other states the anisotropy operates as a regular perturbation; as a result, instabilities of the other states may be shifted due to the anisotropy, but structurally the phenomenology remains the same. For the delicate super-symmetric vortex square, the dynamics altered by the anisotropy is indeed found to be dramatically different from the isotropic case. This is illustrated by Fig. \[dfig2off2\], for the vortex square with $S=M=1$, carried (in the AC limit) by $4$ sites. The time dynamics of the squared absolute value of the field at the main sites is shown in the figure for a weakly anisotropic model, with $\alpha =1.05$, and its isotropic counterpart (top and bottom panels, respectively). The contrast between the instability developing for $t>50$ in the former case and the complete stability at all times in the latter (isotropic) system is stark (notice the difference in the scales of vertical axes between the two panels). In the linear approximation, these results are well predicted by the above theory. ![The dynamics of an initially very weakly perturbed super-symmetric vortex with $S=M=1$, principally based on four lattice sites that form an elementary cell (the sites are labeled as $1,2,3,4$). The time evolution of the squared absolute value of the fields at these sites is shown in the top panel for a weakly anisotropic model, with $\protect\alpha =1.05$, and for its isotropic counterpart ($\protect\alpha =1$) in the bottom panel. In both cases, the same uniformly distributed, random initial perturbation of amplitude $10^{-4}$ was added to the solution at $t=0$ to excite possible instabilities. Clearly, the vortex on the weakly anisotropic lattice becomes unstable at $t>50$, while in the isotropic case the perturbation remains bounded and small at all times.
In these examples, the intersite lattice coupling constant is $\protect\epsilon =0.025$.[]{data-label="dfig2off2"}](anis7B.ps){width="6.cm" height="6cm"} Higher-order vortices ===================== We now give a summary of results for vortices with higher values of the topological charge. First, we consider the $S=M=2$ super-symmetric vortex populating the sites $(1,0)$, $(1,1)$, $(0,1)$, $(-1,1)$, $(-1,0)$, $(-1,-1)$, $(0,-1)$, $(1,-1)$ in the AC limit, with a phase shift of $\pi /2$ between adjacent sites (in the isotropic model). The latter provides for a total phase gain of $4\pi $ around a closed path surrounding the origin. This type of configuration with $S=M=2$ was identified in Ref. [@dep] as possessing a real eigenvalue pair with $\lambda _{r}=\pm \sqrt{\sqrt{80}-8}\epsilon $, in excellent agreement with numerical computations. However, the presence of a small anisotropy, $\alpha \neq 1$, again strongly affects the vortex for reasons similar to the ones presented above. In this case, the reductions leading to the perturbed dynamics in the anisotropic model are described by the following persistence conditions: $$0=f(\theta _{l})\equiv \left\{ \begin{array}{rcl} \sin (\theta _{l}-\theta _{l+1})&+&\sin (\theta _{l}-\theta _{l-1})\\[0.5ex] l&=&2k+1,k=0,1,2,3, \\[2.0ex] \alpha \sin (\theta _{l}-\theta _{l+1})&+&\sin (\theta _{l}-\theta _{l-1}) \\[0.5ex] l&=&4k+2,k=0,1, \\[2.0ex] \sin (\theta _{l}-\theta _{l+1})&+&\alpha \sin (\theta _{l}-\theta_{l-1}) \\[0.5ex] l&=&4k+4,k=0,1,\end{array}\right. \label{deq10}$$ cf. Eqs. (\[deq3a\]). In this expression $\theta _{l}$ is the phase of the field at each of the eight above-mentioned sites (where, in the order the sites were mentioned, the corresponding index is $l=1,2,\dots ,8$). Furthermore, as discussed above, the analysis performed in Ref. [@dep] can be used to show that the linear stability eigenvalues for such a vortex soliton will be given, to the leading order, by Eq. (\[deq3b\]).
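Given any root of Eqs. (\[deq10\]), the reduced $8\times 8$ Jacobian and, through Eq. (\[deq3b\]), the leading-order eigenvalues follow directly. The sketch below evaluates the conditions and the Jacobian at the $\alpha =1.05$ root quoted below in the text (the $4\pi $ phase gain around the contour drops out of the sine and cosine differences, so no special wrapping is needed):

```python
import numpy as np

def f_and_J(theta, alpha):
    """Persistence conditions (deq10) for the eight-site M=2 contour and the
    Jacobian J_{lk} = d f_l / d theta_k (periodic indices)."""
    n = 8
    f = np.zeros(n)
    J = np.zeros((n, n))
    for l in range(n):                 # 0-based; the text's site index is l+1
        p = q = 1.0                    # text's odd l: isotropic form
        if (l + 1) % 4 == 2:           # text's l = 4k+2: alpha on the forward bond
            p = alpha
        if (l + 1) % 4 == 0:           # text's l = 4k+4: alpha on the backward bond
            q = alpha
        fw, bw = (l + 1) % n, (l - 1) % n
        f[l] = p * np.sin(theta[l] - theta[fw]) + q * np.sin(theta[l] - theta[bw])
        J[l, l] = p * np.cos(theta[l] - theta[fw]) + q * np.cos(theta[l] - theta[bw])
        J[l, fw] = -p * np.cos(theta[l] - theta[fw])
        J[l, bw] = -q * np.cos(theta[l] - theta[bw])
    return f, J

# root of Eqs. (deq10) quoted in the text for alpha = 1.05
theta = np.array([0.218, 1.967, 3.182, 4.397, 6.145, 7.894, 9.109, 11.036])
f, J = f_and_J(theta, alpha=1.05)
mu = np.sort(np.linalg.eigvals(J).real)[::-1]
print("residual:", np.max(np.abs(f)))
print("largest Jacobian eigenvalues:", mu[:4])
```

The three positive eigenvalues of the reduced Jacobian translate, via $\lambda _{l}^{2}=2\epsilon \mu _{l}$, into three real $O(\sqrt{\epsilon })$ eigenvalues of the linearization.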
Using this prediction, even in the weakly anisotropic case (e.g., for $\alpha =1.05$) one finds that the corresponding $8\times 8$ Jacobian possesses three positive eigenvalues, which give rise to three real $O(\sqrt{\epsilon })$ eigenvalues of the linearization and thus result in an instability (contrary to the single real $O(\epsilon )$ eigenvalue in the $\alpha =1$ case). Hence, once again, the anisotropy results in a significant destabilization of the super-symmetric vortex, in comparison to the isotropic model. As a specific example, we show in Fig. \[dfig3\] the situation for $\alpha =1.05$. The solution of Eqs. (\[deq10\]) yields $\theta _{1}=0.218$, $\theta _{2}=1.967$, $\theta _{3}=3.182$, $\theta _{4}=4.397$, $\theta _{5}=6.145$, $\theta _{6}=7.894$, $\theta _{7}=9.109$, $\theta _{8}=11.036$, which, in turn, results in a Jacobian with the three positive eigenvalues $\mu= \{1.0145,0.5357,0.2391\}$. A comparison of the numerically computed eigenvalue dependence on $\epsilon $ with the corresponding analytical prediction (solid and dashed lines, respectively) based on the above results is given in Fig. \[dfig3\], demonstrating a very good agreement between the two. ![For the $S=M=2$ supersymmetric vortex, the three real eigenvalues are displayed as functions of $\protect\epsilon $ for $\protect\alpha =1.05$. The solid and dashed lines depict the numerical and analytical results.[]{data-label="dfig3"}](anis5n.ps){width="6.cm" height="5cm"} To highlight the substantial differences between the dynamics in the isotropic and anisotropic models, we have performed numerical simulations of the super-symmetric vortex with $S=M=2$. In this case, the evolution of the field at the eight basic sites is shown in the top panel of Fig. \[dfig4\] for $\alpha =1.05$, and in the bottom panel for $\alpha =1$.
In the former case, for the coupling strength $\epsilon =0.015$ considered here, the three unstable eigenvalues for $\alpha =1.05$ are $\lambda =0.1688$, $\lambda =0.1258$ and $\lambda =0.0855$, while in the latter case (isotropic model), the only unstable eigenvalue is a much smaller one, $\lambda =0.0146$. Naturally, we observe the instability setting in much earlier in the anisotropic model (at $t\gtrsim 30$) than in the isotropic one (at $t\gtrsim 160$). ![Same as in Fig. \[dfig2off2\] but for the supersymmetric vortex of the $S=M=2$ type. The four lines (solid, dashed, dotted, and dash-dotted) and four symbols (circles, pluses, stars and triangles) are used to denote the squared absolute values of the field at the eight sites carrying the vortex in the anisotropic (top) model and its isotropic counterpart (bottom) for $\protect\epsilon =0.015$.[]{data-label="dfig4"}](anis8.ps){width="6.cm" height="6cm"} One may wonder whether the strong dynamical effect of the weak anisotropy should be attributed to the super-symmetry of the vortex, or merely to the specific type of contour that carries the vortex. To check this, we have also considered the vortex with $S=3$ sitting on the same $M=2$ contour. Given the lack of super-symmetry in the latter case, the bifurcation of the relevant $7$ ($=N-1$) eigenvalue pairs occurs at the leading-order reduction and all of them are proportional to $\pm i\sqrt{\epsilon }$. More specifically, for the largest pair in the isotropic case, the proportionality factor is $2.3784$. In the anisotropic case with $\alpha =1.05$, the seven pairs remain on the imaginary axis, being slightly perturbed due to $\alpha \neq 1$. For instance, the largest one among them is now $\lambda =\pm 2.3943\cdot i\sqrt{\epsilon }$. On the other hand, for $\alpha =0.95$, the largest eigenvalue pair is $\lambda =\pm 2.3647\cdot i\sqrt{\epsilon }$.
This is also in line with our above results on the fundamental discrete soliton and vortex cross, since it indicates that, for $\alpha >1$, the collision of this eigenvalue with the continuous spectrum (which leads to the Hamiltonian Hopf bifurcation) will occur at smaller $\epsilon $, the opposite being true for $\alpha <1$. Hence the stability diagram of the $S=3,M=2$ vortex square is quite similar to that shown for the fundamental soliton and vortex cross in Figs. \[dfig1\] and \[dfig2\] (therefore, it is not shown here). Conclusions =========== In this work, we have examined the effects of anisotropy on lattice nonlinear dynamical systems supporting discrete solitons and vortices. The two-dimensional discrete nonlinear Schrödinger equation was used as a paradigm model. The variational approximation was developed for fundamental solitons, showing (by means of the Vakhitov-Kolokolov criterion) that broad quasi-continuum ones are unstable, while strongly anisotropic solitons are stable. By means of numerical methods, we have found that usual localized states, such as the fundamental discrete solitons and vortex crosses, are only mildly affected by the anisotropy, which results in a modified stability region (reduced when the coupling along one direction is stronger than in the isotropic limit, and enlarged when it is weaker). The general phenomenology of such states is similar to that for their counterparts on the isotropic lattice. The main finding reported in the present work is that the assumption about mild deformation of the stability region induced by weak anisotropy is not valid for the delicate super-symmetric vortex states residing on square contours, in the case when the vorticity $S$ is equal to the contour’s size $M$. In this special case, the degeneracy of the leading-order existence conditions (dictated by Lyapunov-Schmidt theory) specific to the isotropic case is broken by the anisotropy.
This, in turn, results in a dramatically different behavior (as a function of the intersite coupling constant) of the corresponding linear stability eigenvalues, in terms of both the order of their bifurcation and the number of real eigenvalues. As a consequence, the supersymmetric vortex-square structure that was marginally stable in the isotropic case is found to be strongly unstable even on the weakly anisotropic lattice. Similarly, the supersymmetric vortex with $S=M=2$ is found to be much more unstable in the anisotropic case in comparison to its isotropic counterpart. The most natural systems for experimental observation of the results predicted in this work are deep optical lattices trapping BECs, and bundled sets of nonlinear optical waveguides (the latter have been recently created experimentally [@Lederer]). Anisotropic lattices can also be induced in photorefractive media, but this medium should be considered separately, in view of the different (saturable) character of the optical nonlinearity in this case. Such investigations are currently in progress and will be reported elsewhere. A further natural extension of this work would be to examine effects of anisotropy in three-dimensional lattices on discrete solitons, vortices, dipoles and quadrupoles of various types, octupoles, and more exotic localized configurations that were recently investigated for the isotropic case in Refs. [@vortex3d].   ACKNOWLEDGEMENTS {#acknowledgements .unnumbered} ================ The work of B.A.M. was supported in part by the Israel Science Foundation through grant No. 8006/03. P.G.K. gratefully acknowledges support from NSF-DMS-0204585, NSF-CAREER. R.C.G. and P.G.K. also acknowledge support from NSF-DMS-0505663. Work at Los Alamos is supported by the US DoE. [99]{} S. Aubry, Physica D **103**, 201 (1997); S. Flach and C. R. Willis, Phys. Rep. **295**, 181 (1998); D. Hennig and G. Tsironis, Phys. Rep. **307**, 333 (1999); P. G. Kevrekidis, K. [Ø]{}.
Rasmussen, and A. R. Bishop, Int. J. Mod. Phys. B **15**, 2833 (2001). D. N. Christodoulides, F. Lederer and Y. Silberberg, Nature **424**, 817 (2003); Yu. S. Kivshar and G. P. Agrawal, *Optical Solitons: From Fibers to Photonic Crystals*, Academic Press (San Diego, CA, 2003). P. G. Kevrekidis and D. J. Frantzeskakis, Mod. Phys. Lett. B **18**, 173 (2004); V. V. Konotop and V. A. Brazhnyi, Mod. Phys. Lett. B **18**, 627 (2004); P. G. Kevrekidis, R. Carretero-González, D. J. Frantzeskakis, and I. G. Kevrekidis, Mod. Phys. Lett. B **18**, 1481 (2004). M. Sato and A. J. Sievers, Nature **432**, 486 (2004); M. Sato, B. E. Hubbard, A. J. Sievers, B. Ilic and H. G. Craighead, Europhys. Lett. **66**, 318 (2004); M. Sato, B. E. Hubbard, A. J. Sievers, B. Ilic, D. A. Czaplewski, and H. G. Craighead, Phys. Rev. Lett. **90**, 044102 (2003). M. Peyrard, Nonlinearity **17**, R1 (2004). N. Voglis, Mon. Not. Roy. Astr. Soc. **344**, 575 (2003). N. K. Efremidis, S. Sears, D. N. Christodoulides, J. W. Fleischer, and M. Segev, Phys. Rev. E **66**, 046602 (2002); A. A. Sukhorukov, Yu. S. Kivshar, H. S. Eisenberg, and Y. Silberberg, IEEE J. Quantum Electron. **39**, 31 (2003). B. A. Malomed and P. G. Kevrekidis, Phys. Rev. E **64**, 026601 (2001). J. Yang and Z. Musslimani, Opt. Lett. **23**, 2094 (2003); P. G. Kevrekidis, B. A. Malomed, Z. G. Chen, and D. J. Frantzeskakis, Phys. Rev. E **70**, 056612 (2004). J. W. Fleischer, T. Carmon, M. Segev, N. K. Efremidis, and D. N. Christodoulides, Phys. Rev. Lett. **90**, 023902 (2003); H. Martin, E. D. Eugenieva, Z. G. Chen, and D. N. Christodoulides, Phys. Rev. Lett. **92**, 123902 (2004); J. Yang, I. Makasyuk, A. Bezryadina, and Z. Chen, Opt. Lett. **29**, 1662 (2004); Z. G. Chen, H. Martin, E. D. Eugenieva, J. J. Xu, and A. Bezryadina, Phys. Rev. Lett. **92**, 143902 (2004); Z. G. Chen, A. Bezryadina, I. Makasyuk, and J. K. Yang, Opt. Lett. **29**, 1656 (2004); J. Yang, I. Makasyuk, P. G. Kevrekidis, H. Martin, B. A.
Malomed, D. J. Frantzeskakis, and Z. G. Chen, Phys. Rev. Lett. **94**, 113902 (2005). D. N. Neshev, T. J. Alexander, E. A. Ostrovskaya, Yu. S. Kivshar, H. Martin, I. Makasyuk, and Z. G. Chen, Phys. Rev. Lett. **92**, 123903 (2004); J. W. Fleischer, G. Bartal, O. Cohen, O. Manela, M. Segev, J. Hudock, and D. N. Christodoulides, Phys. Rev. Lett. **92**, 123904 (2004). S. Burger, K. Bongs, S. Dettmer, W. Ertmer, K. Sengstock, A. Sanpera, G. V. Shlyapnikov, and M. Lewenstein, Phys. Rev. Lett. **83**, 5198 (1999); J. Denschlag, J. E. Simsarian, D. L. Feder, C. W. Clark, L. A. Collins, J. Cubizolles, L. Deng, E. W. Hagley, K. Helmerson, W. P. Reinhardt, S. L. Rolston, B. I. Schneider, and W. D. Phillips, Science **287**, 97 (2000); B. P. Anderson, P. C. Haljan, C. A. Regal, D. L. Feder, L. A. Collins, C. W. Clark, and E. A. Cornell, Phys. Rev. Lett. **86**, 2926 (2001). K. E. Strecker, G. B. Partridge, A. G. Truscott, and R. G. Hulet, Nature **417**, 150 (2002); L. Khaykovich, F. Schreck, G. Ferrari, T. Bourdel, J. Cubizolles, L. D. Carr, Y. Castin, and C. Salomon, Science **296**, 1290 (2002). B. Eiermann, Th. Anker, M. Albiez, M. Taglieber, P. Treutlein, K.-P. Marzlin, and M. K. Oberthaler, Phys. Rev. Lett. **92**, 230401 (2004). B. B. Baizakov, B. A. Malomed, and M. Salerno, Europhys. Lett. **63**, 642 (2003); B. B. Baizakov, B. A. Malomed and M. Salerno, Phys. Rev. A **70**, 053613 (2004). See, e.g., M. Greiner, I. Bloch, O. Mandel, T. W. Hansch, T. Esslinger, Appl. Phys. B **73**, 769 (2001); M. Greiner, I. Bloch, O. Mandel, T. W. Hansch, T. Esslinger, Phys. Rev. Lett. **87**, 160405 (2001). R. S. MacKay and S. Aubry, Nonlinearity **7**, 1623 (1994). M. G. Vakhitov and A. A. Kolokolov, Izv. Vuz. Radiofiz. **16**, 1020 (1973) \[in Russian; English translation: Sov. J. Radiophys. Quantum Electr. **16**, 783 (1973)\]; L. Berg[é]{}, Phys. Rep. **303**, 260 (1998). B. B. Baizakov, B. A. Malomed, and M. Salerno, Europhys. Lett. **63**, 642 (2003); Phys. Rev. A **70**, 053613 (2004).
B. B. Baizakov, M. Salerno, and B. A. Malomed, in *Nonlinear Waves: Classical and Quantum Aspects*, ed. by F. Kh. Abdullaev and V. V. Konotop, p. 61 (Kluwer Academic Publishers: Dordrecht, 2004); also available at http://rsphy2.anu.edu.au/$\sim$asd124/Baizakov\_2004\_61\_NonlinearWaves.pdf. B. A. Malomed and M. I. Weinstein, Phys. Lett. A **220**, 91 (1996). M. I. Weinstein, Nonlinearity **12**, 673 (1999). S. Flach, K. Kladko and R. S. MacKay, Phys. Rev. Lett. [**78**]{}, 1207 (1997). P. G. Kevrekidis, K. [Ø]{}. Rasmussen and A. R. Bishop, Phys. Rev. E **61**, 2006 (2000); P. G. Kevrekidis, K. [Ø]{}. Rasmussen, and A. R. Bishop, Math. Comp. Simul. **55**, 449 (2001). J.-C. van der Meer, Nonlinearity **3**, 1041 (1990). S. Darmanyan, A. Kobyakov and F. Lederer, Sov. Phys. JETP **86**, 682 (1998); P. G. Kevrekidis, A. R. Bishop and K. [Ø]{}. Rasmussen, Phys. Rev. E **63**, 036603 (2001). D. E. Pelinovsky, P. G. Kevrekidis, and D. J. Frantzeskakis, nlin.PS/0411016. M. Golubitsky and D. G. Schaeffer, *Singularities and Groups in Bifurcation Theory*, vol. 1 (Springer-Verlag, New York, 1985). T. Pertsch, U. Peschel, F. Lederer, J. Burghoff, M. Will, S. Nolte, and A. Tunnermann, Opt. Lett. **29**, 468 (2004). P. G. Kevrekidis, B. A. Malomed, D. J. Frantzeskakis, and R. Carretero-González, Phys. Rev. Lett. **93**, 080403 (2004); R. Carretero-González, P. G. Kevrekidis, B. A. Malomed, and D. J. Frantzeskakis, Phys. Rev. Lett. **94**, 203901 (2005).
--- abstract: 'Wavelengths and transition rates are given for E1 transitions between singlet $^1\! S$, $^1\! P$, $^1\! D$, and $^1\! F$ states, between triplet $^3\! S$, $^3\! P$, and $^3\! D$ states, and between triplet $^3\! P_1$ and singlet $^1\! S_0$ states in ions of astrophysical interest: helium-like carbon, nitrogen, oxygen, neon, silicon, and argon. All possible E1 transitions between states with $J\leq 3$ and $n\leq6$ are considered. Energy levels and wave functions used in calculations of the transition rates are obtained from relativistic configuration-interaction calculations that include both Coulomb and Breit interactions.' author: - 'W. R. Johnson, I. M. Savukov, and U. I. Safronova,' - 'A. Dalgarno' title: 'E1 transitions between states with $n$=1 to 6 in helium-like carbon, nitrogen, oxygen, neon, silicon, and argon' --- Introduction ============ The emission lines resulting from electron capture by multicharged ions colliding with neutral gases provide a powerful diagnostic probe of astrophysical plasmas. X-rays seen in auroras on Jupiter [@ME:83; @WA:94; @CR:95; @KLD:98], in comets [@LI:96; @DET:97; @LI:01], and in the X-ray background [@CR:00; @CR:01] have been attributed to electron capture by heavy ions. Several laboratory studies of the X-ray emissions have been carried out [@GR:00; @BE:00; @Lu:01; @GR:01; @BE:01; @MA:01; @FL:01; @HA:01]. The interpretation of the laboratory data and the astrophysical data requires a reliable description of the radiative cascade that follows the capture into excited states. The cascade lines appearing at extreme ultraviolet wavelengths provide additional probes of the environments in which the multicharged heavy ions are present. For the helium-like ions, interesting differences may occur as the nuclear charge of the ions changes. It has long been known that for ions beyond N$^{5+}$ the spin-forbidden $2\, ^3\!P_1 - 1\, ^1\! S_0$ transition is more probable than the allowed $2\, ^3\!P_1 - 2\, ^3\!
S_1$ transition [@DD:69]. In the most recent National Institute of Standards and Technology (NIST) data compilation [@WFD:96], recommended transition rates for helium-like C, N, and O are based primarily on nonrelativistic calculations by @CT:92, who used explicitly correlated wave functions for singlet and triplet $S$, $P$, and $D$ states with $n\leq 6$ to calculate energies and dipole oscillator strengths for $S-P$ and $P-D$ transitions in helium-like ions with $Z$ from 2 to 10. In this paper, we extend the results of previous calculations using relativistic configuration-interaction (CI) wave functions and energies for $(1snl)\, ^{2S+1}\! L_J$ states of helium-like ions with $J\leq 3$, $n \leq 6$, and $Z=6,\, 7,\, 8,\, 10,\, 14$, and 18. We present results here for the allowed singlet-singlet and triplet-triplet transitions. The rates for triplet-triplet transitions are determined by averaging the calculated rates over fine-structure substates. Additionally, we present data for intercombination transitions between $^3P_1$ and $^1S_0$ states. Relativistic CI calculations of wavelengths and transition probabilities have been carried out previously for E1, M1, and M2 transitions between $n = 1$ and 2 states of helium-like ions [@PJS:95]. Nonrelativistic many-body perturbation theory, treating relativistic corrections perturbatively, has also been used to calculate energies of $(1snl)$ states with $n=2$–5 and $l=0$–2 [@VS:85]. The present calculations extend previous relativistic calculations to $n>2$ and extend previous nonrelativistic calculations to higher values of $Z$. The importance of accurate atomic characteristics for astrophysics was further illustrated by @KD:01, where the variability of cometary X-ray emission induced by solar wind ions was studied. 
Calculation =========== Relativistic CI calculations were introduced to evaluate precise values of energies of $n=1$ and 2 states of helium-like ions [@CCJ:93; @CC:94] and used subsequently to evaluate transition energies and transition rates between states of helium-like ions with $n$ = 1 and 2 [@PJS:95]. In the present paper, we apply the methods developed in these earlier papers to evaluate wavelengths and rates for E1 transitions between $(1snl)$ states with $n\leq 6$ in various helium-like ions of astrophysical interest. The computational method is summarized in the following paragraphs. \[seca\] Wave Functions and Energies ------------------------------------ The wave function describing a state with angular momentum $J, M$ in a two-electron ion may be written as $$\Psi_{JM} = \sum_{i \le j} c_{ij} \Phi_{ij}(JM) , \label{eq1}$$ where the quantities $c_{ij}$ are expansion coefficients and where $\Phi_{ij}(JM)$, the configuration state vectors, are given by $$\Phi_{ij}(JM) = \eta_{ij} \sum_{m_i m_j} \langle j_i m_i , j_j m_j | JM\rangle \; a^{\dagger}_i a^{\dagger}_j |0\rangle , \label{eq2}$$ in second quantization, with $$\eta_{ij} = \left\{ \begin{array}{ll} 1, & i\ne j ,\\ 1/{\sqrt 2}, & i = j . \end{array} \right. \label{eq3}$$ In the above equations, we use subscripts $i$ to designate quantum numbers $(n_i,j_i,l_i,m_i)$ of one-electron states. The quantities $c_{ij}$, $\Phi_{ij}(JM)$, and $\eta_{ij}$ are independent of magnetic quantum numbers $m_i$ and $m_j$. To construct a state of even or odd parity, one requires the sum of orbital angular momenta $l_i + l_j$ to be either even or odd, respectively. From the symmetry properties of the Clebsch-Gordan coefficients, it can be shown that $$\Phi_{ij}(JM) = (-1)^{j_i + j_j + J + 1} \Phi_{ji}(JM) . \label{eq4}$$ This relation, in turn, implies that $\Phi_{ii}(JM)$ vanishes unless $J$ is even. 
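The selection rules stated above — the parity condition on $l_i + l_j$, the triangle rule enforced by the Clebsch-Gordan coefficients, and the vanishing of $\Phi_{ii}(JM)$ for odd $J$ — determine which configuration pairs $(i \le j)$ enter the expansion, together with the normalization factor $\eta_{ij}$ of Eq. (\[eq3\]). A minimal illustrative sketch (the function names are ours, not the authors' code):

```python
from fractions import Fraction
from itertools import combinations_with_replacement

def allowed_pairs(orbitals, J, parity):
    """Enumerate configuration pairs (i <= j) that can contribute to a
    two-electron state of total angular momentum J and given parity.

    orbitals: list of one-electron states (n, l, j), with j a Fraction
              (e.g. Fraction(1, 2) for an s_1/2 orbital).
    parity:   0 for even states (l_i + l_j even), 1 for odd states.
    """
    pairs = []
    for a, b in combinations_with_replacement(orbitals, 2):
        na, la, ja = a
        nb, lb, jb = b
        if (la + lb) % 2 != parity:           # parity selection rule
            continue
        if not abs(ja - jb) <= J <= ja + jb:  # triangle rule for coupling to J
            continue
        if a == b and J % 2 == 1:             # Phi_ii(JM) vanishes unless J is even
            continue
        pairs.append((a, b))
    return pairs

def eta(a, b):
    """Normalization factor eta_ij of Eq. (3): 1 for i != j, 1/sqrt(2) for i == j."""
    return 1.0 if a != b else 2 ** -0.5
```

For example, with a basis of 1s$_{1/2}$, 2p$_{1/2}$, and 2p$_{3/2}$ orbitals, the odd-parity $J=1$ space contains only the two $sp$ pairs, while the even-parity $J=1$ space contains only the mixed 2p$_{1/2}$2p$_{3/2}$ pair, since identical-orbital pairs cannot couple to odd $J$.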
The wave-function normalization condition has the form $$\langle\Psi_{JM}|\Psi_{JM}\rangle = \sum_{i \le j} c_{ij}^2 = 1 . \label{eq5}$$ Substituting $\Psi_{JM}$ into the Schrödinger equation $(H_0 +V) \Psi_{JM} = E \Psi_{JM}$, one obtains the following set of linear equations for the expansion coefficients $c_{ij}$: $$(\epsilon_i + \epsilon_j) c_{ij} + \sum_{k \le l} \eta_{ij} V_J(ij;kl)\, \eta_{kl} \, c_{kl} = E c_{ij} \, . \label{eq6}$$ The potential matrix in Eq. (\[eq6\]) is $$\begin{gathered} V_J(ij;kl) = \\ \sum_L (-1)^{j_j + j_k + L + J} \left\lbrace \begin{array}{ccc} j_i&j_j&J\\ j_l&j_k&L \end{array} \right\rbrace X_L(ijkl) \\ + \sum_L (-1)^{j_j + j_k + L} \left\lbrace \begin{array}{ccc} j_i&j_j&J\\ j_k&j_l&L \end{array} \right\rbrace X_L(ijlk) , \label{eq7}\end{gathered}$$ where the quantities $X_L(ijkl)$ are given by $$X_L(ijkl) = (-1)^L \langle\kappa_i\|C_L\|\kappa_k\rangle \langle\kappa_j\|C_L\|\kappa_l\rangle R_L(ijkl) .\label{eq8}$$ The coefficients $\langle\kappa_i\|C_L\|\kappa_j\rangle$ in the above equation are reduced matrix elements of normalized spherical harmonics, and the quantities $R_L(ijkl)$ are relativistic Slater integrals [@CCJ:93; @PJS:94]. In the present calculation, where both the Coulomb and Breit interactions are included in the Hamiltonian, $$\begin{gathered} X_L(ijkl) \rightarrow X_L(ijkl) \\ + M_L(ijkl) + N_L(ijkl) + O_L(ijkl) \, , \label{eq9}\end{gathered}$$ where $M_L(ijkl)$, $N_L(ijkl)$, and $O_L(ijkl)$ are magnetic Slater integrals [@JBS:88]. Identification of the levels obtained by solving the CI equations was aided by comparison with the online @nist:01. Transition Amplitudes and Rates ------------------------------- Using the CI wave functions discussed in Sec. 
\[seca\] for both the initial and final states and carrying out the sums over magnetic substates, one obtains the following expression for the reduced electric-dipole matrix element $$\begin{gathered} \langle F || D|| I \rangle = - \sqrt{[J_I][J_F]} \sum_{\substack{m\le n\\ r\le s}} \eta_{rs} \, \eta_{mn} \, c^{\scriptscriptstyle (F)}_{rs} \, c^{\scriptscriptstyle (I)}_{mn} \, \times \\ \Biggl\{ \Biggr. (-1)^{j_r + j_s + J_I} \left\{ \begin{array}{ccc} 1 & J_I & J_F \\ j_s & j_r & j_m \end{array} \right \} \langle r || d || m \rangle \delta_{ns} \\ + (-1)^{j_r + j_n} \left\{ \begin{array}{ccc} 1 & J_I & J_F\\ j_s & j_r & j_n \end{array} \right \} \langle r || d || n \rangle \delta_{ms} \\ + (-1)^{J_F + J_I + 1} \left\{ \begin{array}{ccc} 1 & J_I & J_F\\ j_r & j_s & j_m \end{array} \right \} \langle s || d || m \rangle \delta_{nr} \\ + (-1)^{j_r + j_n + J_F} \left\{ \begin{array}{ccc} 1 & J_I & J_F\\ j_r & j_s & j_n \end{array} \right \} \langle s || d || n \rangle \delta_{mr} \Biggl. \Biggr\} , \label{eq10}\end{gathered}$$ where $[J]=2J+1$. The one-electron reduced matrix elements $\langle m \| d \| n \rangle$ in Eq. (\[eq10\]) are given by $$\begin{gathered} \langle\kappa_i||d||\kappa_j\rangle = \frac{3}{k} \langle \kappa_i|| C_1|| \kappa_j \rangle \\ \int_0^\infty \! \! dr \, \biggl\{ j_1(kr) [P_i(r)P_j(r) + Q_i(r)Q_j(r)] \biggr. \\ + j_2(kr) \left[ \frac{\kappa_i - \kappa_j}{2} [P_i(r) Q_j(r) + Q_i(r) P_j(r)] \right. \\ + [P_i(r) Q_j(r) - Q_i(r) P_j(r)] \left. \right] \biggl. \biggr\} \, , \label{dl}\end{gathered}$$ in length form, and $$\begin{gathered} \langle\kappa_i||d||\kappa_j\rangle = \frac{3}{k} \langle \kappa_i|| C_1 || \kappa_j \rangle \\ \int_0^\infty \!\! dr \, \Biggl\{ \Biggr. - \frac{\kappa_i - \kappa_j}{2} \left [ \frac{d j_1(kr)}{d\, kr} + \frac{j_1(kr)}{kr} \right ] \times \\ [P_i(r) Q_j(r) + Q_i(r) P_j(r)]\Biggr. \\ \Biggl. + \frac{j_1(kr)}{kr} \, [P_i(r)Q_j(r) - Q_i(r)P_j(r)] \Biggr\} \, , \label{dv}\end{gathered}$$ in velocity form. In Eqs. 
(\[dl\]) and (\[dv\]), the quantities $P_i(r)$ and $Q_i(r)$ are large- and small-component radial Dirac wave functions for state $i$, and $j_l(kr)$ is a spherical Bessel function of order $l$; $k=2\pi/\lambda$ is the magnitude of the wave vector. These dipole matrix elements are fully retarded. The dipole transition rates are given in terms of the dipole matrix elements by $$A_{FI} = \frac{2.0261\times 10^{18}}{[J_F]\lambda^3} S_{FI}, \label{trans}$$ where $S_{FI} = |\langle F || D|| I \rangle|^2$ is the line strength of the transition (atomic units) and $\lambda$ is the transition wavelength (Å). Discussion of Tables ==================== We solve the CI equation (\[eq6\]) using the method described in [@CCJ:93] to obtain wave functions for $(1snl)\ ^{2S+1}\!L_J$ states with $J \leq 3$, $S= 0$ & 1, and $n \leq 6$ for helium-like ions with $Z$ = 6, 7, 8, 10, 14, and 18. The energies obtained from the CI calculations are in precise agreement with earlier relativistic calculations [@PJS:94] and agree to parts in 10$^4$ with the nonrelativistic CI calculations of @CT:92. The wavelengths for transitions between nearly degenerate levels are less accurate than the level energies owing to cancellation. The present wavelengths agree with NIST tabulations of measured wavelengths [@WFD:96] to better than 0.02% for $\lambda < 200$ Å, to better than 0.2% for $200\leq \lambda < 2000$ Å, and to better than 2% for $2000\leq \lambda < 20000$ Å. Oscillator strengths from the present calculation agree precisely with those given in [@PJS:95] for transitions between states with $n=1$ and 2; they are also in close agreement (typically 0.01%) with values from [@CT:92]. Transition rates agree with the tabulated NIST values [@WFD:96] to better than 0.5% for $\lambda < 200$ Å, to better than 1% for $200\leq \lambda < 2000$ Å, and to better than 2% for $2000\leq \lambda < 20000$ Å. Results of our calculations are given in Tables \[sps\]–\[xsp\].
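The conversion between line strengths and rates in Eq. (\[trans\]) is simple to apply to the tabulated values. A small sketch (the helper names are ours), taking $S$ in atomic units, $\lambda$ in Å, and using the standard E1 conversion constant $2.0261\times 10^{18}$:

```python
E1_CONST = 2.0261e18  # standard E1 conversion constant, s^-1 A^3 per a.u. of line strength

def e1_rate(S_au, wavelength_A, J_F):
    """E1 transition rate A_FI in s^-1, per Eq. (trans).

    S_au:         line strength S_FI in atomic units (e^2 a_0^2)
    wavelength_A: transition wavelength in Angstroms
    J_F:          angular momentum whose degeneracy [J_F] = 2*J_F + 1
                  appears in the denominator of Eq. (trans)
    """
    return E1_CONST * S_au / ((2 * J_F + 1) * wavelength_A ** 3)

def line_strength(A_rate, wavelength_A, J_F):
    """Invert Eq. (trans): recover S_FI (a.u.) from a tabulated rate (s^-1)."""
    return A_rate * (2 * J_F + 1) * wavelength_A ** 3 / E1_CONST
```

Note the strong $\lambda^{-3}$ scaling: at fixed line strength, halving the wavelength increases the rate eightfold, which is why the short-wavelength lines in the tables carry the largest rates.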
Although we quote five significant figures for wavelengths and four significant figures for transition rates in these tables, the reader is cautioned that the accuracy of the wavelengths (and consequently of the transition rates), as determined by the comparisons discussed above, is substantially lower at longer wavelengths. Length-form and velocity-form matrix elements for these transitions are in close agreement; the small residual differences between them are caused by our neglect of contributions from negative-energy states. As discussed in [@PJS:95], these differences, when evaluated perturbatively, contribute only to the velocity-form matrix elements and bring them into precise agreement with the corresponding length-form matrix elements. In the present tabulation, we list transition rates obtained from length-form calculations only. In Tables \[sps\]–\[sdf\], we present wavelengths and transition rates for singlet-singlet transitions of the type $(1snl)\, ^1L -(1sml')\, ^1L'$ with $L$ and $L'$ ranging through $S,\, P,\, D,\, F$. With the exception of the $n\, ^1 P - m\, ^1 S$ transitions listed in Table \[sps\], $n-n$ transitions (which have wavelengths $\sim 10^5\ \text{to}\ 10^6$ Å and transition rates $< 10^5$ s$^{-1}$) are omitted from the tables, since [*ab initio*]{} calculations for such cases are unreliable. In Tables \[tps\]–\[tpd\] we present wavelengths and transition rates for triplet-triplet transitions of the type $(1snl)\, ^3L -(1sml')\, ^3 L'$ with $L$ and $L'$ ranging over $S,\, P,\, D$. In the astrophysical applications mentioned in the introduction, the fine structure of the transitions is unresolved. We therefore average the triplet-triplet rates over individual fine-structure substates $J_I$, $J_F$.
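This degeneracy-weighted averaging over fine-structure substates can be sketched as follows (the helper name and input layout are ours): each line is weighted by the degeneracy $[J_F] = 2J_F + 1$ of its $F$-level substate, and for a triplet these weights sum to $3(2L_F + 1)$.

```python
def multiplet_average_rate(rates, L_F):
    """Average a triplet-triplet multiplet rate over fine-structure substates.

    rates: dict mapping (J_F, J_I) -> transition rate A_FI in s^-1
    L_F:   orbital angular momentum of the F multiplet

    Each line is weighted by [J_F] = 2*J_F + 1; for a triplet the weights
    over J_F = L_F - 1, L_F, L_F + 1 sum to 3 * (2*L_F + 1).
    """
    total = sum((2 * J_F + 1) * A for (J_F, J_I), A in rates.items())
    return total / (3 * (2 * L_F + 1))
```

As a sanity check, if every fine-structure line of a $^3P$ multiplet ($J_F = 0, 1, 2$) carried the same rate, the weighted average reproduces that common rate, since the weights $1 + 3 + 5 = 9 = 3(2L_F + 1)$ for $L_F = 1$.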
The average rates $\bar{A}$ listed in the tables are given by $$\bar{A} = \frac{\sum_{J_I,J_F} [J_F] A_{FI} }{3 [L_F]} .$$ (Note that $3 [L_F] = \sum_{J_F} [J_F]$ for triplet states.) Multiplet-average wavelengths $\bar{\lambda}$ are also listed in the tables. For reasons given in the previous paragraph, we include $n-n$ transition data only for the $n\ ^3 \!P - m\ ^3\!S$ transitions (Table \[tps\]). The intercombination transitions between $n\ ^3\! P_1$ states and $m\ ^1\! S_0$ states, which are comparable in size to allowed transitions for highly-charged ions, are listed in Tables \[xps\] & \[xsp\]. In summary, we present accurate wavelengths and transition rates for allowed singlet-singlet and triplet-triplet transitions and for intercombination transitions between triplet $^3P_1$ and singlet $^1S_0$ states with $n\leq 6$ in helium-like, carbon, nitrogen, oxygen, neon, silicon, and argon for use in the analysis of astrophysical plasmas. The research of WRJ and IS was supported in part by National Science Foundation Grant No. PHY-99-70666. The research of UIS was supported by Grant No.B503968 from Lawrence Livermore National Laboratory. The research of AD was supported by the Division of Astronomy of the National Science Foundation. [lrrrrrrrrrr]{}\ $2\, ^1\! P_1$& 40.266& 8.862\[11\]& 3524.7& 1.672\[07\]& & & & & &\ $3\, ^1\! P_1$& 34.972& 2.551\[11\]& 247.31& 1.277\[10\]& 12160.& 2.440\[06\]& & & &\ $4\, ^1\! P_1$& 33.426& 1.064\[11\]& 186.35& 5.764\[09\]& 711.69& 1.663\[09\]& 29083.& 5.936\[05\]& &\ $5\, ^1\! P_1$& 32.754& 5.420\[10\]& 167.22& 3.000\[09\]& 495.34& 9.318\[08\]& 1543.0& 3.897\[08\]& 57105.& 1.959\[05\]\ $6\, ^1\! P_1$& 32.399& 3.186\[10\]& 158.36& 1.780\[09\]& 424.88& 5.638\[08\]& 1017.4& 2.514\[08\]& 2838.4& 1.288\[08\]\ \ $2\, ^1\! P_1$& 28.786& 1.806\[12\]& 2893.3& 2.098\[07\]& & & & & &\ $3\, ^1\! P_1$& 24.900& 5.149\[11\]& 173.39& 2.690\[10\]& 9972.0& 3.072\[06\]& & & &\ $4\, ^1\! 
P_1$& 23.771& 2.141\[11\]& 130.30& 1.205\[10\]& 498.19& 3.531\[09\]& 23835.& 7.489\[05\]& &\ $5\, ^1\! P_1$& 23.281& 1.089\[11\]& 116.84& 6.259\[09\]& 345.82& 1.965\[09\]& 1079.5& 8.316\[08\]& 46755.& 2.478\[05\]\ $6\, ^1\! P_1$& 23.024& 6.328\[10\]& 110.62& 3.667\[09\]& 296.47& 1.172\[09\]& 710.40& 5.268\[08\]& 1989.4& 2.721\[08\]\ \ $2\, ^1\! P_1$& 21.600& 3.302\[12\]& 2446.6& 2.547\[07\]& & & & & &\ $3\, ^1\! P_1$& 18.627& 9.344\[11\]& 128.24& 5.039\[10\]& 8427.3& 3.738\[06\]& & & &\ $4\, ^1\! P_1$& 17.767& 3.876\[11\]& 96.194& 2.246\[10\]& 368.10& 6.654\[09\]& 20139.& 9.120\[05\]& &\ $5\, ^1\! P_1$& 17.395& 1.970\[11\]& 86.204& 1.164\[10\]& 255.01& 3.683\[09\]& 797.24& 1.571\[09\]& 39507.& 3.018\[05\]\ $6\, ^1\! P_1$& 17.199& 1.146\[11\]& 81.591& 6.828\[09\]& 218.48& 2.198\[09\]& 523.52& 9.925\[08\]& 1468.2& 5.165\[08\]\ \ $2\, ^1\! P_1$& 13.446& 8.853\[12\]& 1852.6& 3.542\[07\]& & & & & &\ $3\, ^1\! P_1$& 11.546& 2.478\[12\]& 78.251& 1.397\[11\]& 6375.9& 5.214\[06\]& & & &\ $4\, ^1\! P_1$& 10.999& 1.024\[12\]& 58.547& 6.182\[10\]& 224.32& 1.859\[10\]& 15231.& 1.274\[06\]& &\ $5\, ^1\! P_1$& 10.764& 5.196\[11\]& 52.427& 3.194\[10\]& 155.00& 1.021\[10\]& 485.61& 4.405\[09\]& 29849.& 4.228\[05\]\ $6\, ^1\! P_1$& 10.639& 3.008\[11\]& 49.607& 1.864\[10\]& 132.70& 6.051\[09\]& 318.10& 2.752\[09\]& 894.55& 1.446\[09\]\ \ $2\, ^1\! P_1$& 6.6466& 3.757\[13\]& 1195.5& 6.276\[07\]& & & & & &\ $3\, ^1\! P_1$& 5.6796& 1.037\[13\]& 37.807& 6.156\[11\]& 4105.5& 9.302\[06\]& & & &\ $4\, ^1\! P_1$& 5.4036& 4.270\[12\]& 28.214& 2.703\[11\]& 108.26& 8.246\[10\]& 9801.1& 2.278\[06\]& &\ $5\, ^1\! P_1$& 5.2846& 2.161\[12\]& 25.246& 1.392\[11\]& 74.607& 4.495\[10\]& 234.26& 1.961\[10\]& 19189.& 7.586\[05\]\ $6\, ^1\! P_1$& 5.2221& 1.244\[12\]& 23.880& 8.076\[10\]& 63.822& 2.644\[10\]& 153.05& 1.211\[10\]& 431.60& 6.426\[09\]\ \ $2\, ^1\! P_1$& 3.9478& 1.071\[14\]& 816.70& 1.133\[08\]& & & & & &\ $3\, ^1\! 
P_1$& 3.3647& 2.931\[13\]& 22.162& 1.790\[12\]& 2793.9& 1.697\[07\]& & & &\ $4\, ^1\! P_1$& 3.1989& 1.203\[13\]& 16.521& 7.826\[11\]& 63.436& 2.402\[11\]& 6661.5& 4.173\[06\]& &\ $5\, ^1\! P_1$& 3.1275& 6.078\[12\]& 14.779& 4.024\[11\]& 43.669& 1.305\[11\]& 137.25& 5.717\[10\]& 13045.& 1.389\[06\]\ $6\, ^1\! P_1$& 3.0900& 3.535\[12\]& 13.976& 2.358\[11\]& 37.335& 7.749\[10\]& 89.522& 3.557\[10\]& 252.45& 1.896\[10\]\ [lrrrrrrrrr]{}\ $3\, ^1\! S_0$& 271.92& 5.707\[09\]& & & & & &\ $4\, ^1\! S_0$& 198.09& 2.292\[09\]& 776.10& 1.565\[09\]& & & &\ $5\, ^1\! S_0$& 176.09& 1.141\[09\]& 521.09& 7.539\[08\]& 1677.3& 5.361\[08\]& &\ $6\, ^1\! S_0$& 166.09& 6.613\[08\]& 442.23& 4.272\[08\]& 1065.6& 2.954\[08\]& 3079.7& 2.234\[08\]\ \ $3\, ^1\! S_0$& 187.92& 1.127\[10\]& & & & & &\ $4\, ^1\! S_0$& 137.23& 4.542\[09\]& 536.18& 3.110\[09\]& & & &\ $5\, ^1\! S_0$& 122.07& 2.265\[09\]& 361.01& 1.505\[09\]& 1158.7& 1.070\[09\]& &\ $6\, ^1\! S_0$& 115.18& 1.301\[09\]& 306.71& 8.458\[08\]& 738.89& 5.865\[08\]& 2132.4& 4.418\[08\]\ \ $3\, ^1\! S_0$& 137.55& 2.013\[10\]& & & & & &\ $4\, ^1\! S_0$& 100.63& 8.131\[09\]& 392.41& 5.583\[09\]& & & &\ $5\, ^1\! S_0$& 89.555& 4.057\[09\]& 264.73& 2.708\[09\]& 847.91& 1.925\[09\]& &\ $6\, ^1\! S_0$& 84.510& 2.336\[09\]& 225.03& 1.527\[09\]& 541.73& 1.060\[09\]& 1559.5& 7.983\[08\]\ \ $3\, ^1\! S_0$& 82.762& 5.227\[10\]& & & & & &\ $4\, ^1\! S_0$& 60.698& 2.117\[10\]& 236.11& 1.460\[10\]& & & &\ $5\, ^1\! S_0$& 54.052& 1.057\[10\]& 159.72& 7.105\[09\]& 510.17& 5.052\[09\]& &\ $6\, ^1\! S_0$& 51.022& 6.066\[09\]& 135.88& 3.995\[09\]& 326.94& 2.783\[09\]& 938.91& 2.094\[09\]\ \ $3\, ^1\! S_0$& 39.416& 2.149\[11\]& & & & & &\ $4\, ^1\! S_0$& 28.981& 8.726\[10\]& 112.47& 6.054\[10\]& & & &\ $5\, ^1\! S_0$& 25.825& 4.356\[10\]& 76.290& 2.954\[10\]& 243.03& 2.100\[10\]& &\ $6\, ^1\! S_0$& 24.385& 2.490\[10\]& 64.957& 1.655\[10\]& 156.22& 1.157\[10\]& 447.49& 8.696\[09\]\ \ $3\, ^1\! S_0$& 22.967& 6.071\[11\]& & & & & &\ $4\, ^1\! 
S_0$& 16.905& 2.467\[11\]& 65.549& 1.715\[11\]& & & &\ $5\, ^1\! S_0$& 15.069& 1.232\[11\]& 44.514& 8.380\[10\]& 141.66& 5.956\[10\]& &\ $6\, ^1\! S_0$& 14.229& 7.123\[10\]& 37.903& 4.753\[10\]& 91.101& 3.323\[10\]& 260.35& 2.495\[10\]\ [lrrrrrrrrr]{}\ $3\, ^1\! D_2$& 267.28& 3.930\[10\]& & & & & &\ $4\, ^1\! D_2$& 197.03& 1.229\[10\]& 760.16& 4.410\[09\]& & & &\ $5\, ^1\! D_2$& 175.67& 5.567\[09\]& 517.36& 2.096\[09\]& 1639.3& 9.415\[08\]& &\ $6\, ^1\! D_2$& 165.88& 3.062\[09\]& 440.75& 1.169\[09\]& 1057.1& 5.468\[08\]& 3009.5& 2.905\[08\]\ \ $3\, ^1\! D_2$& 185.22& 8.137\[10\]& & & & & &\ $4\, ^1\! D_2$& 136.62& 2.555\[10\]& 526.87& 9.117\[09\]& & & &\ $5\, ^1\! D_2$& 121.82& 1.159\[10\]& 358.81& 4.344\[09\]& 1136.4& 1.944\[09\]& &\ $6\, ^1\! D_2$& 115.05& 6.319\[09\]& 305.80& 2.401\[09\]& 733.62& 1.121\[09\]& 2089.1& 5.934\[08\]\ \ $3\, ^1\! D_2$& 135.83& 1.501\[11\]& & & & & &\ $4\, ^1\! D_2$& 100.23& 4.734\[10\]& 386.47& 1.682\[10\]& & & &\ $5\, ^1\! D_2$& 89.394& 2.151\[10\]& 263.33& 8.033\[09\]& 833.67& 3.585\[09\]& &\ $6\, ^1\! D_2$& 84.429& 1.176\[10\]& 224.45& 4.454\[09\]& 538.42& 2.075\[09\]& 1532.3& 1.096\[09\]\ \ $3\, ^1\! D_2$& 81.925& 4.025\[11\]& & & & & &\ $4\, ^1\! D_2$& 60.504& 1.284\[11\]& 233.20& 4.529\[10\]& & & &\ $5\, ^1\! D_2$& 53.972& 5.853\[10\]& 159.02& 2.173\[10\]& 503.17& 9.658\[09\]& &\ $6\, ^1\! D_2$& 50.981& 3.199\[10\]& 135.59& 1.205\[10\]& 325.26& 5.593\[09\]& 925.25& 2.945\[09\]\ \ $3\, ^1\! D_2$& 39.098& 1.628\[12\]& & & & & &\ $4\, ^1\! D_2$& 28.907& 5.316\[11\]& 111.37& 1.860\[11\]& & & &\ $5\, ^1\! D_2$& 25.795& 2.445\[11\]& 76.026& 9.014\[10\]& 240.37& 3.985\[10\]& &\ $6\, ^1\! D_2$& 24.369& 1.339\[11\]& 64.845& 5.010\[10\]& 155.57& 2.318\[10\]& 442.21& 1.215\[10\]\ \ $3\, ^1\! D_2$& 22.791& 4.502\[12\]& & & & & &\ $4\, ^1\! D_2$& 16.864& 1.475\[12\]& 64.938& 5.142\[11\]& & & &\ $5\, ^1\! D_2$& 15.052& 6.801\[11\]& 44.368& 2.501\[11\]& 140.19& 1.102\[11\]& &\ $6\, ^1\! 
D_2$& 14.220& 3.761\[11\]& 37.846& 1.403\[11\]& 90.770& 6.481\[10\]& 257.67& 3.390\[10\]\ [crrrrrr]{}\ $4\, ^1\! P_1$& 745.56& 1.953\[08\]& & & &\ $5\, ^1\! P_1$& 511.51& 8.443\[07\]& 1610.1& 1.077\[08\]& &\ $6\, ^1\! P_1$& 436.72& 4.505\[07\]& 1046.2& 5.503\[07\]& 2954.2& 5.631\[07\]\ \ $4\, ^1\! P_1$& 518.22& 4.100\[08\]& & & &\ $5\, ^1\! P_1$& 355.35& 1.773\[08\]& 1119.3& 2.263\[08\]& &\ $6\, ^1\! P_1$& 303.45& 9.376\[07\]& 727.44& 1.145\[08\]& 2058.9& 1.170\[08\]\ \ $4\, ^1\! P_1$& 381.00& 7.648\[08\]& & & &\ $5\, ^1\! P_1$& 261.14& 3.306\[08\]& 822.94& 4.223\[08\]& &\ $6\, ^1\! P_1$& 222.96& 1.752\[08\]& 534.49& 2.141\[08\]& 1513.0& 2.188\[08\]\ \ $4\, ^1\! P_1$& 230.72& 2.085\[09\]& & & &\ $5\, ^1\! P_1$& 158.03& 9.007\[08\]& 498.38& 1.158\[09\]& &\ $6\, ^1\! P_1$& 134.91& 4.761\[08\]& 323.53& 5.851\[08\]& 916.91& 5.995\[08\]\ \ $4\, ^1\! P_1$& 110.73& 8.650\[09\]& & & &\ $5\, ^1\! P_1$& 75.772& 3.734\[09\]& 239.19& 4.874\[09\]& &\ $6\, ^1\! P_1$& 64.673& 1.967\[09\]& 155.15& 2.454\[09\]& 440.25& 2.530\[09\]\ \ $4\, ^1\! P_1$& 64.821& 2.448\[10\]& & & &\ $5\, ^1\! P_1$& 44.321& 1.057\[10\]& 140.01& 1.375\[10\]& &\ $6\, ^1\! P_1$& 37.811& 5.621\[09\]& 90.688& 6.985\[09\]& 257.25& 7.201\[09\]\ [crrrrrr]{}\ $4\, ^1\! F_3$& 749.60& 6.005\[09\]& & & &\ $5\, ^1\! F_3$& 512.46& 2.101\[09\]& 1619.6& 1.176\[09\]& &\ $6\, ^1\! F_3$& 437.25& 1.026\[09\]& 1049.2& 6.061\[08\]& 2978.7& 3.391\[08\]\ \ $4\, ^1\! F_3$& 520.61& 1.255\[10\]& & & &\ $5\, ^1\! F_3$& 355.90& 4.329\[09\]& 1124.8& 2.407\[09\]& &\ $6\, ^1\! F_3$& 303.70& 2.089\[09\]& 728.86& 1.226\[09\]& 2070.3& 6.846\[08\]\ \ $4\, ^1\! F_3$& 382.50& 2.377\[10\]& & & &\ $5\, ^1\! F_3$& 261.49& 8.103\[09\]& 826.41& 4.477\[09\]& &\ $6\, ^1\! F_3$& 223.12& 3.906\[09\]& 535.46& 2.278\[09\]& 1520.8& 1.269\[09\]\ \ $4\, ^1\! F_3$& 231.38& 6.952\[10\]& & & &\ $5\, ^1\! F_3$& 158.18& 2.333\[10\]& 499.88& 1.276\[10\]& &\ $6\, ^1\! F_3$& 134.98& 1.115\[10\]& 323.92& 6.436\[09\]& 920.05& 3.569\[09\]\ \ $4\, ^1\! 
F_3$& 110.88& 3.441\[11\]& & & &\ $5\, ^1\! F_3$& 75.803& 1.140\[11\]& 239.51& 6.276\[10\]& &\ $6\, ^1\! F_3$& 64.687& 5.402\[10\]& 155.22& 3.138\[10\]& 440.88 & 1.740\[10\]\ \ $4\, ^1\! F_3$& 64.821& 1.066\[12\]& & & &\ $5\, ^1\! F_3$& 44.319& 3.517\[11\]& 140.00& 1.972\[11\]& &\ $6\, ^1\! F_3$& 37.817& 1.671\[11\]& 90.721& 9.884\[10\]& 257.52 & 5.514\[10\]\ [lrrrr]{}\ $ 5\, ^1\! D_2$& 1620.0& 2.149\[07\]& &\ $ 6\, ^1\! D_2$& 1049.0& 9.238\[06\]& 2976.6& 1.794\[07\]\ \ $ 5\, ^1\! D_2$& 1125.0& 4.449\[07\]& &\ $ 6\, ^1\! D_2$& 728.87& 1.893\[07\]& 2070.1& 3.623\[07\]\ \ $ 5\, ^1\! D_2$& 826.54& 8.348\[07\]& &\ $ 6\, ^1\! D_2$& 535.44& 3.553\[07\]& 1520.4& 6.723\[07\]\ \ $ 5\, ^1\! D_2$& 500.03& 2.411\[08\]& &\ $ 6\, ^1\! D_2$& 323.95& 1.022\[08\]& 920.08& 1.902\[08\]\ \ $ 5\, ^1\! D_2$& 239.70& 1.207\[09\]& &\ $ 6\, ^1\! D_2$& 155.29& 5.101\[08\]& 441.19& 9.351\[08\]\ \ $ 5\, ^1\! D_2$& 140.19& 3.850\[09\]& &\ $ 6\, ^1\! D_2$& 90.771& 1.644\[09\]& 257.71& 3.000\[09\]\ [lrrrrrrrr]{} \ $2\, ^3\! P$& 2272.3 & 5.695\[07\] & & & & & &\ $3\, ^3\! P$& 227.17 & 1.360\[10\] &8425.8 & 6.928\[06\]& & & &\ $4\, ^3\! P$& 173.26 & 6.187\[09\] &671.98 & 1.673\[09\]& 20665. & 1.586\[06\]& &\ $5\, ^3\! P$& 156.22 & 3.223\[09\] &472.13 & 9.483\[08\]& 1474.3 & 3.821\[08\]& 41075. &5.087\[05\]\ $6\, ^3\! P$& 148.30 & 1.906\[09\] &406.54 & 5.738\[08\]& 980.37 & 2.493\[08\]& 2731.8 &1.240\[08\]\ \ $2\, ^3\! P$& 1899.1 & 6.865\[07\] & & & & & &\ $3\, ^3\! P$& 161.21 & 2.851\[10\] &7004.7 & 8.445\[06\]& & & &\ $4\, ^3\! P$& 122.42 & 1.285\[10\] &474.41 & 3.565\[09\]& 17138. & 1.943\[06\]& &\ $5\, ^3\! P$& 110.22 & 6.676\[09\] &331.95 & 2.001\[09\]& 1038.5 & 8.214\[08\]& 33998. &6.261\[05\]\ $6\, ^3\! P$& 104.57 & 3.907\[09\] &285.51 & 1.196\[09\]& 688.27 & 5.255\[08\]& 1925.3 &2.652\[08\]\ \ $2\, ^3\! P$& 1627.7 & 8.085\[07\] & & & & & &\ $3\, ^3\! P$& 120.32 & 5.315\[10\] &5979.5 & 1.003\[07\]& & & &\ $4\, ^3\! P$& 91.085 & 2.380\[10\] &352.73 & 6.726\[09\]& 14605. 
& 2.318\[06\]& &\ $5\, ^3\! P$& 81.916 & 1.234\[10\] &246.07 & 3.749\[09\]& 770.85 & 1.559\[09\]& 28950. &7.480\[05\]\ $6\, ^3\! P$& 77.677 & 7.233\[09\] &211.42 & 2.239\[09\]& 509.30 & 9.926\[08\]& 1427.1 &5.063\[08\]\ \ $2\, ^3\! P$& 1256.6 & 1.077\[08\] & & & & & &\ $3\, ^3\! P$& 74.350 & 1.462\[11\] &4588.0 & 1.354\[07\]& & & &\ $4\, ^3\! P$& 56.042 & 6.493\[10\] &216.83 & 1.880\[10\]& 11179. & 3.147\[06\]& &\ $5\, ^3\! P$& 50.328 & 3.356\[10\] &150.65 & 1.038\[10\]& 472.77 & 4.389\[09\]& 22119. &1.020\[06\]\ $6\, ^3\! P$& 47.692 & 1.957\[10\] &129.27 & 6.156\[09\]& 311.20 & 2.759\[09\]& 874.56 &1.428\[09\]\ \ $2\, ^3\! P$& 833.99 & 1.790\[08\] & & & & & &\ $3\, ^3\! P$& 36.439 & 6.386\[11\] &3019.9 & 2.298\[07\]& & & &\ $4\, ^3\! P$& 27.340 & 2.812\[11\] &105.66 & 8.348\[10\]& 7335.0 & 5.380\[06\]& &\ $5\, ^3\! P$& 24.515 & 1.449\[11\] &73.103 & 4.568\[10\]& 229.82 & 1.964\[10\]& 14486. &1.752\[06\]\ $6\, ^3\! P$& 23.215 & 8.408\[10\] &62.637 & 2.690\[10\]& 150.67 & 1.219\[10\]& 424.68 &6.401\[09\]\ \ $2\, ^3\! P$& 590.47 & 2.990\[08\] & & & & & &\ $3\, ^3\! P$& 21.530 & 1.862\[12\] &2123.9 & 3.911\[07\]& & & &\ $4\, ^3\! P$& 16.118 & 8.165\[11\] &62.242 & 2.453\[11\]& 5146.8 & 9.216\[06\]& &\ $5\, ^3\! P$& 14.442 & 4.201\[11\] &43.087 & 1.337\[11\]& 135.22 & 5.794\[10\]& 10158. &3.006\[06\]\ $6\, ^3\! P$& 13.670 & 2.462\[11\] &36.792 & 7.943\[10\]& 88.436 & 3.617\[10\]& 249.32 &1.912\[10\]\ [lrrrrrrrr]{}\ $3\, ^3\! S$& 260.20 & 6.709\[09\] & & & & & &\ $4\, ^3\! S$& 189.29 & 2.607\[09\] & 756.96 & 1.803\[09\] & & & &\ $5\, ^3\! S$& 168.44 & 1.279\[09\] & 506.32 & 8.527\[08\] & 1651.3 & 6.077\[08\] & &\ $6\, ^3\! S$& 159.01 & 7.324\[08\] & 429.72 & 4.775\[08\] & 1044.2 & 3.291\[08\] & 3051.3 & 2.495\[08\]\ \ $3\, ^3\! S$& 180.71 & 1.292\[10\] & & & & & &\ $4\, ^3\! S$& 131.87 & 5.057\[09\] & 524.44 & 3.510\[09\] & & & &\ $5\, ^3\! S$& 117.41 & 2.488\[09\] & 352.07 & 1.671\[09\] & 1142.6 & 1.191\[09\] & &\ $6\, ^3\! 
S$& 110.87 & 1.416\[09\] & 299.15 & 9.307\[08\] & 725.85 & 6.441\[08\] & 2113.7 & 4.867\[08\]\ \ $3\, ^3\! S$& 132.81 & 2.265\[10\] & & & & & &\ $4\, ^3\! S$& 97.125 & 8.917\[09\] & 384.72 & 6.207\[09\] & & & &\ $5\, ^3\! S$& 86.515 & 4.395\[09\] & 258.93 & 2.967\[09\] & 837.34 & 2.114\[09\] & &\ $6\, ^3\! S$& 81.702 & 2.508\[09\] & 220.12 & 1.659\[09\] & 533.29 & 1.151\[09\] & 1547.1 & 8.686\[08\]\ \ $3\, ^3\! S$& 80.410 & 5.742\[10\] & & & & & &\ $4\, ^3\! S$& 58.967 & 2.275\[10\] & 232.31 & 1.590\[10\] & & & &\ $5\, ^3\! S$& 52.552 & 1.124\[10\] & 156.87 & 7.647\[09\] & 504.93 & 5.450\[09\] & &\ $6\, ^3\! S$& 49.637 & 6.407\[09\] & 133.47 & 4.272\[09\] & 322.80 & 2.974\[09\] & 932.62 & 2.242\[09\]\ \ $3\, ^3\! S$& 38.591 & 2.303\[11\] & & & & & &\ $4\, ^3\! S$& 28.376 & 9.200\[10\] & 111.15 & 6.457\[10\] & & & &\ $5\, ^3\! S$& 25.302 & 4.558\[10\] & 75.306 & 3.124\[10\] & 241.20 & 2.226\[10\] & &\ $6\, ^3\! S$& 23.902 & 2.592\[10\] & 64.127 & 1.742\[10\] & 154.78 & 1.217\[10\] & 445.25 & 9.168\[09\]\ \ $3\, ^3\! S$& 22.583 & 6.463\[11\] & & & & & &\ $4\, ^3\! S$& 16.624 & 2.591\[11\] & 64.930 & 1.822\[11\] & & & &\ $5\, ^3\! S$& 14.825 & 1.286\[11\] & 44.057 & 8.840\[10\] & 140.79 & 6.297\[10\] & &\ $6\, ^3\! S$& 14.005 & 7.402\[10\] & 37.519 & 4.991\[10\] & 90.438 & 3.491\[10\] & 259.31 & 2.627\[10\]\ [lrrrrrrrr]{}\ $3\, ^3\! D$& 248.72 & 4.240\[10\] & & & & & &\ $4\, ^3\! D$& 186.72 & 1.412\[10\] & 717.49 & 4.312\[09\] & & & &\ $5\, ^3\! D$& 167.40 & 6.554\[09\] & 497.10 & 2.162\[09\] & 1557.2 & 8.794\[08\] & &\ $6\, ^3\! D$& 158.48 & 3.650\[09\] & 425.91 & 1.231\[09\] & 1022.0 & 5.370\[08\] & 2869.1 & 2.640\[08\]\ \ $3\, ^3\! D$& 173.92 & 8.729\[10\] & & & & & &\ $4\, ^3\! D$& 130.35 & 2.892\[10\] & 501.17 & 8.966\[09\] & & & &\ $5\, ^3\! D$& 116.80 & 1.340\[10\] & 346.61 & 4.472\[09\] & 1087.1 & 1.838\[09\] & &\ $6\, ^3\! D$& 110.56 & 7.387\[09\] & 296.87 & 2.518\[09\] & 712.56 & 1.106\[09\] & 2004.8 & 5.476\[08\]\ \ $3\, ^3\! 
D$& 128.46 & 1.606\[11\] & & & & & &\ $4\, ^3\! D$& 96.148 & 5.304\[10\] & 369.83 & 1.663\[10\] & & & &\ $5\, ^3\! D$& 86.121 & 2.454\[10\] & 255.43 & 8.265\[09\] & 801.82 & 3.422\[09\] & &\ $6\, ^3\! D$& 81.500 & 1.355\[10\] & 218.67 & 4.657\[09\] & 524.82 & 2.056\[09\] & 1478.0 & 1.025\[09\]\ \ $3\, ^3\! D$& 78.294 & 4.328\[11\] & & & & & &\ $4\, ^3\! D$& 58.491 & 1.424\[11\] & 225.09 & 4.538\[10\] & & & &\ $5\, ^3\! D$& 52.360 & 6.580\[10\] & 155.17 & 2.244\[10\] & 487.70 & 9.392\[09\] & &\ $6\, ^3\! D$& 49.538 & 3.622\[10\] & 132.76 & 1.259\[10\] & 318.65 & 5.599\[09\] & 898.87 & 2.815\[09\]\ \ $3\, ^3\! D$& 37.835 & 1.823\[12\] & & & & & &\ $4\, ^3\! D$& 28.206 & 5.994\[11\] & 108.57 & 1.947\[11\] & & & &\ $5\, ^3\! D$& 25.233 & 2.768\[11\] & 74.698 & 9.584\[10\] & 235.08 & 4.058\[10\] & &\ $6\, ^3\! D$& 23.866 & 1.521\[11\] & 63.871 & 5.361\[10\] & 153.30 & 2.404\[10\] & 433.20 & 1.219\[10\]\ \ $3\, ^3\! D$& 22.209 & 5.216\[12\] & & & & & &\ $4\, ^3\! D$& 16.539 & 1.708\[12\] & 63.659 & 5.604\[11\] & & & &\ $5\, ^3\! D$& 14.791 & 7.877\[11\] & 43.756 & 2.752\[11\] & 137.77 & 1.172\[11\] & &\ $6\, ^3\! D$& 13.987 & 4.358\[11\] & 37.397 & 1.549\[11\] & 89.728 & 6.979\[10\] & 253.57 & 3.556\[10\] [lrrrrrr]{}\ $4\, ^3\! P$& 762.96 & 2.928\[08\]& & & &\ $5\, ^3\! P$& 515.31 & 1.248\[08\]& 1651.1 & 1.530\[08\]& &\ $6\, ^3\! P$& 438.15 & 6.627\[07\]& 1055.6 & 7.679\[07\]& 3035.4 & 7.732\[07\]\ \ $4\, ^3\! P$& 528.55 & 5.841\[08\]& & & &\ $5\, ^3\! P$& 357.58 & 2.495\[08\]& 1143.6 & 3.069\[08\]& &\ $6\, ^3\! P$& 304.27 & 1.313\[08\]& 732.93 & 1.531\[08\]& 2106.8 & 1.543\[08\]\ \ $4\, ^3\! P$& 387.61 & 1.049\[09\]& & & &\ $5\, ^3\! P$& 262.56 & 4.486\[08\]& 838.50 & 5.537\[08\]& &\ $6\, ^3\! P$& 223.47 & 2.367\[08\]& 537.98 & 2.770\[08\]& 1543.6 & 2.795\[08\]\ \ $4\, ^3\! P$& 233.88 & 2.734\[09\]& & & &\ $5\, ^3\! P$& 158.69 & 1.170\[09\]& 505.81 & 1.454\[09\]& &\ $6\, ^3\! P$& 135.14 & 6.155\[08\]& 325.18 & 7.260\[08\]& 931.55 & 7.350\[08\]\ \ $4\, ^3\! 
P$& 111.78 & 1.112\[10\]& & & &\ $5\, ^3\! P$& 75.981 & 4.765\[09\]& 241.66 & 5.979\[09\]& &\ $6\, ^3\! P$& 64.738 & 2.499\[09\]& 155.67 & 2.981\[09\]& 445.11 & 3.031\[09\]\ \ $4\, ^3\! P$& 65.268 & 3.139\[10\]& & & &\ $5\, ^3\! P$& 44.400 & 1.346\[10\]& 141.08 & 1.691\[10\]& &\ $6\, ^3\! P$& 37.829 & 7.135\[09\]& 90.907 & 8.522\[09\]& 259.41 & 8.671\[09\]\ [lrrrrrrrrrr]{}\ $2\, ^3\! P_1$& 40.751 &2.696\[07\]& & & & & & & &\ $3\, ^3\! P_1$& 35.086 &7.831\[06\]& 252.41 &4.524\[05\] & & & & & &\ $4\, ^3\! P_1$& 33.477 &3.289\[06\]& 187.57 &1.985\[05\] & 729.09& 6.230\[04\] & & & &\ $5\, ^3\! P_1$& 32.786 &1.699\[06\]& 167.75 &1.035\[05\] & 499.63& 3.397\[04\] & 1584.5 &1.517\[04\] & &\ $6\, ^3\! P_1$& 32.423 &1.086\[06\]& 158.68 &6.608\[04\] & 426.92& 2.187\[04\] & 1028.8 &1.014\[04\] & 2927.8 & 5.502\[03\]\ \ $2\, ^3\! P_1$& 29.095 &1.346\[08\]& & & & & & & &\ $3\, ^3\! P_1$& 24.970 &3.984\[07\]& 176.50 &2.357\[06\] & & & & & &\ $4\, ^3\! P_1$& 23.802 &1.684\[07\]& 131.04 &1.041\[06\] & 508.76& 3.279\[05\] & & & &\ $5\, ^3\! P_1$& 23.301 &8.669\[06\]& 117.15 &5.415\[05\] & 348.40& 1.783\[05\] & 1104.6 &7.979\[04\] & &\ $6\, ^3\! P_1$& 23.038 &5.292\[06\]& 110.80 &3.312\[05\] & 297.61& 1.100\[05\] & 716.78 &5.108\[04\] & 2039.8 & 2.775\[04\]\ \ $2\, ^3\! P_1$& 21.810 &5.352\[08\]& & & & & & & &\ $3\, ^3\! P_1$& 18.674 &1.605\[08\]& 130.28 &9.667\[06\] & & & & & &\ $4\, ^3\! P_1$& 17.788 &6.811\[07\]& 96.676 &4.290\[06\] & 374.99& 1.354\[06\] & & & &\ $5\, ^3\! P_1$& 17.407 &3.506\[07\]& 86.409 &2.231\[06\] & 256.69& 7.360\[05\] & 813.60 &3.298\[05\] & &\ $6\, ^3\! P_1$& 17.208 &2.093\[07\]& 81.710 &1.337\[06\] & 219.24& 4.451\[05\] & 527.78 &2.068\[05\] & 1501.8 & 1.124\[05\]\ \ $2\, ^3\! P_1$& 13.555 &5.253\[09\]& & & & & & & &\ $3\, ^3\! P_1$& 11.570 &1.604\[09\]& 79.265 &9.896\[07\] & & & & & &\ $4\, ^3\! P_1$& 11.010 &6.840\[08\]& 58.785 &4.421\[07\] & 227.74& 1.399\[07\] & & & &\ $5\, ^3\! 
P_1$& 10.769 &3.523\[08\]& 52.528 &2.303\[07\] & 155.83& 7.608\[06\] & 493.68 &3.414\[06\] & &\ $6\, ^3\! P_1$& 10.644 &2.067\[08\]& 49.664 &1.357\[07\] & 133.06& 4.526\[06\] & 320.14 &2.105\[06\] & 910.80 & 1.145\[06\]\ \ $2\, ^3\! P_1$& 6.6878 &1.550\[11\]& & & & & & & &\ $3\, ^3\! P_1$& 5.6884 &4.798\[10\]& 38.172 &3.047\[09\] & & & & & &\ $4\, ^3\! P_1$& 5.4073 &2.054\[10\]& 28.299 &1.369\[09\] & 109.49& 4.337\[08\] & & & &\ $5\, ^3\! P_1$& 5.2867 &1.059\[10\]& 25.282 &7.145\[08\] & 74.902& 2.364\[08\] & 237.15 &1.061\[08\] & &\ $6\, ^3\! P_1$& 5.2236 &6.161\[09\]& 23.900 &4.179\[08\] & 63.950& 1.396\[08\] & 153.77 &6.498\[07\] & 437.31 & 3.536\[07\]\ \ $2\, ^3\! P_1$& 3.9685 &1.782\[12\]& & & & & & & &\ $3\, ^3\! P_1$& 3.3691 &5.486\[11\]& 22.341 &3.548\[10\] & & & & & &\ $4\, ^3\! P_1$& 3.2007 &2.343\[11\]& 16.563 &1.593\[10\] & 64.040& 5.046\[09\] & & & &\ $5\, ^3\! P_1$& 3.1285 &1.206\[11\]& 14.796 &8.305\[09\] & 43.814& 2.748\[09\] & 138.68 &1.234\[09\] & &\ $6\, ^3\! P_1$& 3.0907 &7.001\[10\]& 13.987 &4.848\[09\] & 37.408& 1.620\[09\] & 89.929 &7.542\[08\] & 255.70 & 1.620\[09\]\ [lrrrrrrrr]{}\ $3\, ^1\! S_0$& 252.51 & 1.209\[05\] & & & & & &\ $4\, ^1\! S_0$& 187.60 & 4.961\[04\] & 730.80 & 3.665\[04\] & & & &\ $5\, ^1\! S_0$& 167.76 & 2.484\[04\] & 500.31 & 1.803\[04\] & 1589.25 &1.313\[04\] & &\ $6\, ^1\! S_0$& 158.68 & 1.418\[04\] & 427.37 & 1.010\[04\] & 1030.55 &7.273\[03\] & 2937.4 &5.558\[03\]\ \ $3\, ^1\! S_0$& 176.14 & 6.094\[05\] & & & & & &\ $4\, ^1\! S_0$& 130.85 & 2.502\[05\] & 508.97 & 1.898\[05\] & & & &\ $5\, ^1\! S_0$& 117.00 & 1.253\[05\] & 348.48 & 9.349\[04\] & 1105.9 &6.858\[04\] & &\ $6\, ^1\! S_0$& 110.66 & 7.156\[04\] & 297.66 & 5.239\[04\] & 717.26 &3.803\[04\] & 2042.9 &2.900\[04\]\ \ $3\, ^1\! S_0$& 129.87 & 2.446\[06\] & & & & & &\ $4\, ^1\! S_0$& 96.463 & 1.005\[06\] & 374.80 & 7.768\[05\] & & & &\ $5\, ^1\! S_0$& 86.240 & 5.033\[05\] & 256.61 & 3.828\[05\] & 813.84 &2.823\[05\] & &\ $6\, ^1\! 
S_0$& 81.558 & 2.874\[05\] & 219.17 & 2.145\[05\] & 527.87 &1.566\[05\] & 1502.8 &1.194\[05\]\ \ $3\, ^1\! S_0$& 78.964 & 2.442\[07\] & & & & & &\ $4\, ^1\! S_0$& 58.632 & 1.003\[07\] & 227.482 & 7.953\[06\] & & & &\ $5\, ^1\! S_0$& 52.407 & 5.025\[06\] & 155.721 & 3.920\[06\] & 493.51 &2.911\[06\] & &\ $6\, ^1\! S_0$& 49.556 & 2.870\[06\] & 132.991 & 2.198\[06\] & 320.08 &1.616\[06\] & 910.78 &1.234\[06\]\ \ $3\, ^1\! S_0$& 38.058 & 7.402\[08\] & & & & & &\ $4\, ^1\! S_0$& 28.241 & 3.037\[08\] & 109.41 & 2.461\[08\] & & & &\ $5\, ^1\! S_0$& 25.236 & 1.521\[08\] & 74.869 & 1.213\[08\] & 237.13 &9.057\[07\] & &\ $6\, ^1\! S_0$& 23.859 & 8.682\[07\] & 63.926 & 6.800\[07\] & 153.76 &5.028\[07\] & 437.40 &3.845\[07\]\ \ $3\, ^1\! S_0$& 22.304 & 8.679\[09\] & & & & & &\ $4\, ^1\! S_0$& 16.543 & 3.557\[09\] & 64.050 & 2.881\[09\] & & & &\ $5\, ^1\! S_0$& 14.781 & 1.780\[09\] & 43.818 & 1.420\[09\] & 138.77 &1.058\[09\] & &\ $6\, ^1\! S_0$& 13.973 & 1.016\[09\] & 37.409 & 7.957\[08\] & 89.963 &5.875\[08\] & 255.92 &4.488\[08\]
---
author:
- |
  J. Twamley$^{1\ast}$ and G. J. Milburn$^2$
bibliography:
- 'Zeta\_bib4.bib'
title: The Quantum Mellin transform
---

We uncover a new type of unitary operation for quantum mechanics on the half-line which yields a transformation to “Hyperbolic phase space” $(\eta, p_\eta)$. We show that this new unitary change of basis, from the position $x$ on the half-line to the Hyperbolic momentum $p_\eta$, transforms the wavefunction via a Mellin transform onto the critical line $s=1/2-ip_\eta$. We utilise this new transform to find quantum wavefunctions whose Hyperbolic momentum representations approximate a class of higher transcendental functions, and in particular the Riemann Zeta function. We finally describe possible physical realisations for an indirect measurement of the Hyperbolic momentum of a quantum system on the half-line.

Introduction
============

Higher transcendental functions find many uses in mathematics, engineering, physics and many other sciences. Their efficient numerical evaluation is a highly challenging task, and typically one makes use of pre-compiled library routines [@thompson97]. In this work we present a curious method by which one can [*design*]{} a 1D quantum system in such a way that the resulting wavefunction is proportional to a given transcendental function taken from a certain class of functions. The resulting quantum system displays enormously rich behaviour, and by studying it one might discover new insights into the properties of these transcendental functions. More interestingly, we discover how to execute a new quantum unitary transform, the quantum Mellin transform (similar to the quantum Fourier transform), which takes the Mellin transform of a wavefunction. The quantum Fourier transform has been prominent in many quantum algorithms, and one might suspect that the quantum Mellin transform could also be useful in quantum computation.
The class of transcendental functions we consider is the three-parameter family of higher transcendental functions sometimes known as the *Lerch transcendents* $\Phi(z,s,u)$, which contains the celebrated Riemann Zeta function $\zeta(s)$ ([@bateman53] section 1.11 and [@spanier87] section 64:12). The Lerch transcendent is given as the analytic continuation of the series $$\Phi(z,s,u)=\sum_{n=0}^\infty\, \frac{z^n}{(u+n)^s}\;\;,\qquad u\ne -1, -2, -3, \cdots,$$ which converges for $u\in\mathbb{R}^+$, $z,s\in \mathbb{C}$, with either $(|z|<1,\Re(s)>0)$, or $(|z|=1, \, \Re(s)>1)$. Special cases include the analytic continuations of the *Riemann zeta function* and the *Hurwitz zeta function*, (valid for $\Re(s)>1$), $$\label{fzeta} \zeta(s)=\sum_{k=1}^{\infty} {1 \over k^s}=\Phi(1,s,1),\;\; \zeta(s,u)=\sum_{k=0}^{\infty} {1 \over (u+k)^s}=\Phi(1,s,u),$$ the *alternating zeta function* (also known as *Dirichlet’s eta function*) and the *Dirichlet beta function*, (valid for $\Re(s)>0$), $$\begin{aligned} \label{alter} \zeta^{*}(s)&&= \sum_{k=1}^{\infty} {(-1)^{k-1} \over k^s} = \Phi(-1,s,1),\\ \beta(s)&&=\sum_{k=0}^{\infty} {(-1)^k \over (2k+1)^s}= 2^{-s} \Phi \left(-1,s,{1 \over 2} \right),\end{aligned}$$ the *Polylogarithm*, and the *Lerch zeta function* $${\textrm{Li}_n(z)}=\sum_{k=1}^{\infty} {z^k \over k^n}=z \Phi(z,n,1),\;\; L(\lambda,\alpha,s)=\Phi(\exp(2 \pi i \lambda),s,\alpha).$$ In the following we will be particularly interested in the evaluation of the Lerch transcendent on the [*critical line*]{}, $s=1/2+it$, where the defining series converges only for $|z|<1$ or, conditionally, in the alternating cases with $z=-1$. Thus the special cases of (\[fzeta\]) do not converge there, while those of (\[alter\]) do.
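Both convergence regimes above can be explored numerically: direct summation of the series for $|z|<1$, and Euler acceleration of the conditionally convergent alternating series of (\[alter\]). A minimal sketch in plain Python (our own illustration, not from the paper; the location of the first nontrivial zeta zero, $t_1\approx 14.134725$, is an assumed reference value quoted from standard tables):

```python
import math

def lerch_phi(z, s, u, terms=400):
    # Direct summation of Phi(z,s,u) = sum_{n>=0} z^n/(u+n)^s; geometric
    # convergence for |z| < 1.
    return sum(z ** n / (u + n) ** s for n in range(terms))

def eta(s, n=100):
    # zeta*(s) = Phi(-1,s,1): conditionally convergent alternating series,
    # summed by repeated averaging of partial sums (Euler acceleration).
    partial, total = [], 0
    for k in range(1, 2 * n + 1):
        total += (-1) ** (k - 1) * k ** (-s)
        partial.append(total)
    for _ in range(n):
        partial = [(a + b) / 2 for a, b in zip(partial, partial[1:])]
    return partial[-1]

li2_half = 0.5 * lerch_phi(0.5, 2, 1)   # = Li_2(1/2) = pi^2/12 - ln(2)^2/2
# eta(1) = ln 2; on the critical line eta vanishes exactly at the zeta
# zeros, e.g. near s = 1/2 + i*14.134725 (first zero, assumed value).
```

The same routine sums $\beta(s)=2^{-s}\Phi(-1,s,1/2)$; only the alternating cases remain usable on the critical line, mirroring the convergence remarks above.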
It is known that the Lerch transcendent can be expressed as a Mellin transform: if ($|z|<1$, $\Re(s)>0$), or ($z=1$, $\Re(s)>1$), then $$\label{mellinphi} \Delta(z,s,u)\equiv\Gamma(s)\Phi(z,s,u)= \int_{0}^{\infty} {e^{-(u-1)t} \over e^t-z} t^{s-1} dt, \label{mellinlerch}$$ and making use of the relation $\zeta^{*}(s)=(1-2^{1-s})\zeta(s)$, we have the very interesting special case in which the Riemann Zeta function itself is expressed as a Mellin transform, $$(1-2^{1-s})\zeta(s)=\zeta^*(s)=\Phi(-1,s,1)=\frac{1}{\Gamma(s)}\int_{0}^{\infty} \frac{1} {e^t+1} t^{s-1} dt.\label{alterzeta}$$ In summary, we proceed below to uncover a new unitary change of basis in the case of quantum mechanics on $\mathbb{R}^+$, which corresponds to the evaluation of the Mellin transform of a wavefunction when $s=1/2+it$. This transform corresponds to moving from the $|x\rangle$, $(x\in\mathbb{R}^+$), basis to the $|p_\eta\rangle$, ($p_\eta\in\mathbb{R}$), basis, where we call the latter the Hyperbolic momentum basis. We then deduce suitable quantum mechanical potentials and boundary conditions that yield ground state wavefunctions which, when viewed in the Hyperbolic momentum basis, give the left hand side of (\[mellinphi\]) when $s=1/2+it$, which includes the Riemann Zeta function as a special case. For the Riemann Zeta function case we find that the relevant wavefunction $\psi_\zeta(x),\;x\in\mathbb{R}^+$, is quite simple and corresponds to the ground (bound) state of a deceptively simple 1D potential. Such a potential might be physically engineered in a variety of systems, e.g. neutral-atom mirror traps or trapped-ion systems. As a second case we can also find [*unbounded*]{} wavefunctions in the $|x\rangle$ basis which, when viewed in the $|p_\eta\rangle$ basis, give the Riemann Zeta function $\zeta(1/2-ip_\eta)$ precisely. In this case both the original and transformed states are unbounded.
However, as the basis transform is unitary, and thus preserves inner products, we would conjecture that the analytic continuation of these states gives results identical to the analytic continuation of the Riemann Zeta function onto the critical line. We then show how one can obtain bounded wavefunctions which approximate the Riemann Zeta function in a more systematic (though somewhat complicated) fashion. Finally, we present a scheme to measure the Hyperbolic momentum in a physical system based on the Degenerate Parametric Oscillator. One interesting result of this work is that we are able to find a physical quantum system whose ground state wavefunction, when viewed in the appropriate basis, possesses zeros whose locations exactly match those of the Riemann Zeta function on the critical line. This tantalising result can be compared with the Hilbert-Polya formulation of the Riemann Hypothesis: to find a self-adjoint linear operator whose spectrum matches the locations of the zeros of the Riemann Zeta function on the critical line [@montgomery74]. Evidence for the latter seems quite strong following extensive numerical analysis of the spacings between adjacent zeros of the Riemann Zeta function and their comparison with the statistics of the Gaussian Unitary Ensemble (GUE, i.e. the eigenvalues of random Hermitian matrices) [@montgomery74; @odlyzko89; @rudnick96; @katz99; @NoteAmerMath03]. It is not clear to us how our construction bears on the Hilbert-Polya formulation of the Riemann Hypothesis, but the existence of relatively “simple” quantum wavefunctions with randomly distributed zeros seems intriguing.

Quantum Mechanics on $\mathbb{R}^+$ {#QMinhalfline}
===================================

We now turn to the description of quantum mechanics on the half-line $x\in \mathbb{R}^+$. There have been numerous expositions of the quantum mechanics of a particle moving in $\mathbb{R}^+$.
Almost without exception, however, these analyses are executed in the position basis $|x\rangle$. Here we instead seek to make a closer correspondence between the conjugate phase-space operators $\hat{x},\,\hat{p}$, $[\hat{x},\hat{p}]=i\hbar$, for a particle moving on the full-line $x\in\mathbb{R}$, and analogous conjugate phase-space operators on the half-line. As the latter must represent physical observables, they must be represented by Hermitian and, moreover, self-adjoint operators. To determine whether an operator $\hat{A}$ is Hermitian one must examine not only the actions of the operator $\hat{A}$ and its adjoint $\hat{A}^\dagger$, but also the $L^2$ domains ($\psi(x) \in L^2[0,\infty]$, if $\int_0^\infty|\psi(x)|^2dx$ is finite) of these operators, $D(\hat{A})$ and $D(\hat{A}^\dagger)$. If the actions of $\hat{A}$ and $\hat{A}^\dagger$ are identical, i.e. $(\hat{A}^\dagger\phi,\psi)=(\phi,\hat{A}\psi),\; \forall \phi,\psi\in D(\hat{A})$, then the operator $\hat{A}$ is Hermitian provided that $D(\hat{A})\subseteq D(\hat{A}^\dagger)$. Further, it is self-adjoint if the actions match and so too do the domains of the operator and its adjoint, $D(\hat{A}^\dagger)=D(\hat{A})$. An operator can be Hermitian but not self-adjoint, and this gives rise to unphysical situations in that the action of $\hat{A}^\dagger$ may change the domain of the wavefunction, e.g. impose different boundary conditions. Here we follow closely the work of [@twamley98] and [@milburn94], to find that the obvious candidate for the momentum operator $\hat{p}_x\sim -i\hbar d/dx$, conjugate to $x\in\mathbb{R}^+$, is not a self-adjoint operator. Moreover, there is no self-adjoint extension possible for this operator. This is to be contrasted with the case of a particle confined to a finite interval $x\in[a,b]$ [@carreau90; @daluz94], or on the entire real line with delta-function potentials or with the origin removed [@araujo04; @bonneau01].
We instead proceed to define a new momentum operator $\hat{p}_\eta$, the “Hyperbolic momentum”, which corresponds to dilations of the $\hat{x}$ coordinate or displacements of the “Hyperbolic position” coordinate $\eta\equiv \ln x$. Before examining the case of $\hat{p}_x$, let us examine the somewhat easier case of the kinetic energy operator on the half-line, $$H_0=-\frac{\hbar^2}{2m}\frac{d^2}{dx^2}\;\;.$$ As mentioned above we have to pay particular attention to the domain of functions on which $H_0$ is allowed to act. We define $\psi(x)\in \Omega\equiv C_2^\infty([0,\infty])$, to be the class of functions which are infinitely differentiable (absolutely continuous, i.e. $f(x)=\int_0^x(df/dx)dx+f(0)$), and have finite $L^2$ norm on the interval $[0,\infty]$. If we now consider two functions $\phi_1, \phi_2\in\Omega$, we can examine the Hermiticity of $H_0$ by integrating by parts, $$\begin{aligned} (H_0^\dagger\phi_2,\phi_1)-(\phi_2,H_0\phi_1)&=&\phi_2^*(0)\left.\frac{d\phi_1}{dx}\right|_{x=0}- \left.\frac{d\phi_2^*}{dx}\right|_{x=0}\phi_1(0)\;\;,\\ &=&\phi^*_2(0)\phi_1(0)\left[\frac{\phi_1^\prime}{\phi_1}-\frac{\phi_2^{*\,\prime}}{\phi_2^*}\right]_{x=0}\;\;.\label{herm1}\end{aligned}$$ If one chooses boundary conditions (A) $\mathbb{C}_A\equiv\phi_2^*(0)=\phi_1(0)=0$, or (B) $\mathbb{C}_B\equiv \phi_1(0)=d\phi_1/dx|_0=0$, then $H_0$ is Hermitian. However, in case (A) the domains of $H_0$ and $H_0^\dagger$ are fixed to be identical by $\mathbb{C}_A$, and thus $H_0$ is also self-adjoint. In case (B) $D(H_0)$ is reduced to $\phi_1\in\Omega \cap \mathbb{C}_B $, while $D(H_0^\dagger)=\Omega$, and thus the operator $H_0$ is not self-adjoint. From (\[herm1\]), a more general class of boundary conditions which gives an Hermitian and self-adjoint $H_0$ is $\frac{\phi_1^\prime(0)}{\phi_1(0)}=\kappa\in\mathbb{R}$. This choice of boundary condition extends the operator $H_0$ into a one-parameter class of operators, and this process is known as a self-adjoint extension.
If we now repeat the above analysis for the operator $\hat{p}_x=-i\hbar d/dx$, again assuming $\phi_1,\phi_2 \in \Omega$, we have $$\begin{aligned} (\hat{p}_x^\dagger \phi_2,\phi_1)-(\phi_2,\hat{p}_x\phi_1)&=&i\hbar\int_0^\infty (\phi_2^{*\,\prime}\phi_1+\phi_2^*\phi_1^{\prime})dx\;\;,\\ &=&i\hbar\left[\left.\phi_2^*\phi_1\right|_\infty-\left.\phi_2^*\phi_1\right|_0\right]\;\;.\label{P1}\end{aligned}$$ The first term in (\[P1\]) vanishes due to the $L^2$ nature of $\Omega$, and thus for the second term to vanish one must set either $\phi_1(0)=0$, [*or*]{} $\phi_2(0)=0$. In either case the resulting operator $\hat{p}_x$ will be Hermitian but will not be self-adjoint. It is not obvious that we can choose alternative boundary conditions that will yield a self-adjoint extension of this non-self-adjoint operator. Fortunately there is a theorem of von Neumann [@vonneumann29; @bonneau01], which tells us whether such an extension exists or not. The method is quite simple and involves comparing the dimensions of the so-called “deficiency subspaces”, ${\cal N}_\pm$, of the operator in question, e.g. $\hat{A}$. These spaces are defined by $${\cal N}_\pm=\left\{\psi\in D(\hat{A}^\dagger),\;\;\hat{A}^\dagger\psi=z_\pm \psi,\;\;\Im(z_\pm) \gtrless\, 0\right\}\;\;,$$ and to find $(n_+,n_-)$, the dimensions of these spaces, one just looks for the number of normalisable independent solutions to $\hat{A}^\dagger \psi=\pm i\gamma \psi$, for $\gamma$ real and positive. For $\hat{p}_x=-i\hbar d/dx$, we easily have $\psi_\pm\sim \exp(\mp\gamma x/\hbar)$, and for $x\in\mathbb{R}^+$ only $\psi_+$ is normalisable, giving $(n_+,n_-)=(1,0)$. From [@vonneumann29], if the deficiency indices are different then the operator is not self-adjoint and moreover [*no self-adjoint extension is possible*]{}. If we instead had considered the case of the whole real line then $(n_+,n_-)=(0,0)$, i.e.
neither wavefunction is normalisable on the entire real line, and in this case not only is $\hat{p}_x$ self-adjoint, it is termed [*essentially self-adjoint*]{} as its domain is the whole of $L^2[-\infty,+\infty]$.

Hyperbolic Coordinates: Dilations $\rightarrow$ Displacements
=============================================================

That the typical expression for the momentum operator $\hat{p}_x=-i\hbar d/dx$, does [*not*]{} correspond to a physical operator on $\mathbb{R}^+$, seems to have generated little attention in the literature. On the other hand, as $[\x,\px]=i\hbar$, the action of $\hat{p}_x$ on $\hat{x}$ yields a displacement, $$e^{i\alpha \px/\hbar}\x e^{-i\alpha\px/\hbar}=\x+\alpha\;\;,$$ and this cannot represent a physical operation as it alters the domain $x\in\mathbb{R}^+$. Instead we examine the operator which generates dilations of the $\x$ operator: $$\begin{aligned} \hat{p}_\eta \equiv \frac{1}{2}\left[ \x\px+\px\x\right]&=&\x\px -i\frac{\hbar}{2}\;\;,\\ &=&-i\hbar\left[ x\frac{d}{dx}+\frac{1}{2}\right]\;\;,\label{pdilaton_x}\end{aligned}$$ where the last line is $\peta$ evaluated in the $\x$ representation. This operator, as a generator of spatial dilations, has already been studied as part of a larger space-time conformal transformation in certain non-relativistic quantum mechanical problems by de Alfaro, Fubini and Furlan [@dealfaro76], and Jackiw [@Jackiw91]. The operator also appears in work by van Winter [@vanwinter98], and Berry and Keating [@berry99], where they take this operator to be the Hamiltonian of a system. Indeed, although connections between the Riemann Zeta function and eigenfunctions of the operator $H=\x\px$ were made in [@berry99; @berry99a], our treatment differs significantly in that we consider the operator $\hat{p}_\eta=\frac{1}{2}(\x\px+\px\x)$ to be a new conjugate variable describing dilation/conformal momentum.
From $[\x,\peta]=i\hbar\x$, the Sack algebra [@sack58], we can see that the action of this new momentum on $\x$, is as expected: $$e^{i\mu\peta/\hbar}\x e^{-i\mu\peta/\hbar}=\x e^{\mu}\;\;.$$ To test for Hermiticity we examine $$K=(\peta^\dagger\phi_1,\phi_2)-(\phi_1,\peta\phi_2)\;\;,$$ and by setting $\phi_1=U/\sqrt{x},\;\phi_2=V/\sqrt{x}$, we find $$K=\left. U^*V\right|_0^\infty\;\;,$$ which vanishes if $V(0)=U(0)=0$, and thus $\peta$ is Hermitian for $\phi\sim U(x)/\sqrt{x},\; \{\phi\in \bar{\Omega}\equiv L^2[\mathbb{R}^+, dx, U(0)=0]\}$. To see if the operator is self-adjoint we use the von Neumann test and examine $\peta\psi=\pm i\gamma\psi$. We find $\psi\sim V/\sqrt{x}$, but $V(x)=A_\pm x^{\mp \gamma/\hbar}$, neither of which are in $\bar{\Omega}$, and thus we have the deficiency indices $(n_+,n_-)=(0,0)$, indicating that $\peta$ is an [*essentially self-adjoint*]{} operator. Defining $\hat{\eta}=\ln \x$, (this will be made more precise later on), we can recover the standard Heisenberg algebra and displacement operation, albeit in the “exponent space”: $[\et, \peta]=i\hbar$, and $$e^{i\alpha\peta/\hbar}\et e^{-i\alpha\peta/\hbar}=\et+\alpha\;\;.$$ Thus by making the unitary transformation of the quantum mechanics on the half line described by the “conjugate operators”, $(\x,\peta)$, to the exponential representation on the full-line, $(\ln \x, \peta)$, we can regain the familiar Heisenberg algebra. Dilations of $\x$ now become displacements of $\eta\equiv \ln \x$, and one can formulate all of the standard QM on $\mathbb{R}^+$, in this Hyperbolic phase-space. We now evaluate the transition matrix elements $\la p_\eta |x\ra,\,\la p_\eta |\eta\ra,\,\la p_\eta |\psi\ra$, and resolutions of unity in the new basis in order to obtain the correct measures. We will also obtain a displacement operator $D(\lambda,\mu)$, in the $(\eta,\peta)$, phase-space and from this a Wigner function pseudo-probability representation of a wavefunction in this phase-space. 
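The dilation action just described can be made concrete numerically. On position wavefunctions the unitary $e^{-i\mu\peta/\hbar}$ acts as $\psi(x)\mapsto e^{-\mu/2}\psi(e^{-\mu}x)$: the factor $e^{-\mu/2}$ preserves the norm on $\mathbb{R}^+$, while $\la \x\ra$ is rescaled by $e^{\mu}$, consistent with $e^{i\mu\peta/\hbar}\x e^{-i\mu\peta/\hbar}=\x e^{\mu}$. A sketch in plain Python with an assumed test state $\psi(x)=2xe^{-x}$ (our own choice, not from the paper):

```python
import math

def norm_and_mean(f, xmax=60.0, h=1e-3):
    # Trapezoidal estimates of <f|f> and <f|x|f> on the half-line.
    n = int(xmax / h)
    norm = mean = 0.0
    for i in range(n + 1):
        x = i * h
        w = 0.5 if i in (0, n) else 1.0
        f2 = f(x) ** 2
        norm += w * f2 * h
        mean += w * x * f2 * h
    return norm, mean / norm

psi = lambda x: 2.0 * x * math.exp(-x)      # normalised on (0, inf)
mu = 0.7                                     # dilation parameter
dilated = lambda x: math.exp(-mu / 2) * psi(math.exp(-mu) * x)

n0, m0 = norm_and_mean(psi)        # norm 1, <x> = 3/2
n1, m1 = norm_and_mean(dilated)    # norm preserved, <x> scaled by e^mu
```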
Armed with these tools we will then be in a position to seek wavefunctions where $\la p_\eta | \psi\ra$ are related to the Lerch transcendent (and Riemann Zeta function in particular), on the critical line. First we look at the eigenstates of $\peta$ by taking $\la x | \peta |p_\eta\ra=p_\eta\la x |p_\eta\ra$, and where $\psi^{p_\eta}(x)=V^{p_\eta}(x)/\sqrt{x}$, to obtain $$\la x|p_\eta\ra=\frac{1}{\sqrt{2\pi}}x^{ip_\eta/\hbar-\frac{1}{2}}\;\;.\label{p_eigen1}$$ As $[\et,\peta]=i\hbar$, we can define the eigenstates of the coordinate conjugate to $\peta$ via the Fourier transform $$|\eta\ra=\frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{+\infty}\,dp_\eta\, e^{-ip_\eta\eta/\hbar}|p_\eta\ra\;\;,$$ which gives $$\begin{aligned} \la x|\eta\ra&=&\frac{1}{2\pi\hbar}\int_{-\infty}^{+\infty}dp_\eta\,e^{-ip_\eta\eta}\frac{x^{ip_\eta/\hbar}}{\sqrt{x}}\;\;,\\ &=&\frac{e^{-u/2}}{2\pi\hbar}\int_{-\infty}^{+\infty}dp_\eta\, e^{-ip_\eta(\eta-u)/\hbar} =e^{-u/2}\delta(\eta-u)=\sqrt{x}\delta(x-e^\eta)\;\;,\label{x_inner}\end{aligned}$$ where we have set $u=\ln x$, and have used $\delta(u)/g^\prime(u)=\delta(x), \, x=g(u)$. We see that the $\et$ eigenstates are not merely rescalings of the $\x$ eigenstates but also include a square-root weighting. From (\[x\_inner\]), we see that $$\la \eta | \psi\ra=\int_0^\infty dx \,\psi(x) \sqrt{x}\delta(x-e^\eta)=e^{\eta/2} \psi(x=e^{\eta})\;\;.\label{psi_x_inner}$$ Let us define the following ket so as to absorb the square-root factor: $$\overline{|\eta\ra}\equiv \sqrt{e^\eta}|x=e^\eta\ra_x\;\;,$$ where we have inserted the subscript $x$ to emphasise that the ket in question is evaluated in the $x$ basis but at the value where $x=\exp(\eta)$. 
Using this notation, (\[p\_eigen1\]), and the properties of the $\delta-$function we can show $$\begin{aligned} \la p_\eta^1|p_\eta^2\ra&=&\int_{-\infty}^{+\infty}d\eta\, \la p_\eta^1|\overline{|\eta\ra}\overline{\la \eta|}|p_\eta^2\ra\;\;,\\ &=&\delta(p_\eta^1-p_\eta^2)\;\;.\end{aligned}$$ Thus $\mathbb{I}=\int_{-\infty}^{+\infty}\, \overline{|\eta\ra}\overline{\la \eta|}d\eta$, while from (\[x\_inner\]), we have $$\begin{aligned} \overline{\la \eta_1| \eta_2\ra}&=&\overline{\la \eta_1|}\left(\int_0^\infty dx\,|x\ra\la x|\right) \overline{|\eta_2\ra}\;\;,\\ &=&\int_0^\infty dx\, e^{\eta_1/2+\eta_2/2}\delta(x-e^{\eta_1})\delta(x-e^{\eta_2})\;\;,\\ &=& e^{\eta_1/2+\eta_2/2}\delta(e^{\eta_1}-e^{\eta_2})\;\;,\\ &=& \delta(\eta_1-\eta_2)\;\;.\end{aligned}$$ One can then easily show that the new coordinate kets $\overline{|\eta\ra}$, (where now $\eta\in\mathbb{R}$), are completely analogous to $|x\ra$, on the entire real line, and the dilatonic momentum $\peta$, (which generates a dilation of $x$), generates displacements of $\eta$: $$\overline{\la \eta|} \peta |\psi\ra=-i\hbar\frac{d}{d\eta}\overline{\la \eta|}\psi\ra\;\;,$$ and now we can make the definition of $\hat{\eta}$, (which we previously had as $\hat{\eta}\sim \ln \hat{x}$), more precise: $$\hat{\eta}=\int_{-\infty}^{+\infty}d\eta\, \eta \overline{|\eta\ra}\overline{\la \eta|}\;\;.$$ Armed with the conjugate exponential operators, $\hat{\eta}$, and $\peta$, where $[\et, \peta]=i\hbar$, and the transition matrix elements $\la \eta |x\ra$, $\la p_\eta | x\ra$, we can take any quantum state represented in the $x$ coordinates and represent it in the dilation coordinates: $\la x_1|\rho| x_2\ra\rightarrow \overline{\la \eta_1 |}\rho\overline{|\eta_2\ra}$. 
Mellin transforms and the Lerch Transcendent
============================================

As we mentioned above, one is typically familiar with the Fourier transform appearing in quantum mechanics representing the unitary change in basis $|x\ra\rightarrow|p\ra,\;\;x,p\in\mathbb{R}$. To the best of our knowledge the Mellin transform has never appeared in the literature playing a similar role. However, as we will see, the above change in basis $|x\ra\rightarrow |p_\eta\ra$ yields a Mellin transform. The Mellin transform of a function $f$, and its inverse, are given by $$\begin{aligned} \left\{{\cal M}f\right\}(s)&=&\phi(s)=\int_0^\infty\,x^sf(x)\frac{dx}{x}\;\;,\\ \left\{{\cal M}^{-1}\phi\right\}(x)&=&f(x)=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\,x^{-s}\phi(s)ds\;\;.\end{aligned}$$ The Mellin transform may also be defined in terms of the Fourier transform as $$\left\{{\cal M}f\right\}(s)=\left\{{\cal F}f(e^{-x})\right\}(-is)\;\;.$$ From (\[p\_eigen1\]), we see that the representation of a given wavefunction $\la x|\psi\ra$, in the $|p_\eta\ra$ basis is $$\begin{aligned} \la p_\eta |\psi\ra &=& \int_0^\infty dx\, \la p_\eta |x\ra\la x|\psi\ra\;\;,\\ &=&\frac{1}{\sqrt{2\pi}}\int_0^\infty dx\, \psi(x) x^{-1/2-ip_\eta}\;\;,\\ &=&\frac{1}{\sqrt{2\pi}}\left\{{\cal M}\psi\right\}(s=1/2-ip_\eta)\;\;.\label{mellineval}\end{aligned}$$ Returning to our previous expression for the Lerch transcendent (\[mellinlerch\]), we see that $$\begin{aligned} \Delta(z,s=1/2-ip_\eta,u)&\equiv&\Gamma(s=1/2-ip_\eta)\Phi(z,s=1/2-ip_\eta,u)\\ &=&\int_0^\infty dt\, \frac{e^{-(u-1)t}}{e^t-z}\,t^{-1/2-ip_\eta}\;\;,\\ &=&\left\{{\cal M} \chi\right\}(z,s=1/2-ip_\eta,u)\;\;,\end{aligned}$$ where $$\chi(z,t,u)=\frac{e^{-(u-1)t}}{e^t-z}\;\;.\label{integrand}$$ Thus, to simulate the three parameter family of Lerch transcendents $\Delta(z,1/2-ip_\eta,u)$, we take the Mellin transform of $\chi(z,t,u)$, evaluated on the critical line $s=1/2-ip_\eta$.
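Equation (\[mellineval\]) can be checked numerically: under $x=e^\eta$ the critical-line Mellin transform becomes an ordinary Fourier transform of $e^{\eta/2}\psi(e^\eta)$ (cf. (\[psi\_x\_inner\])), and the unitarity of the change of basis appears as Parseval's identity. A sketch in plain Python with the assumed test state $\psi(x)=2xe^{-x}$ (normalised on $\mathbb{R}^+$):

```python
import cmath, math

def mellin_critical(psi, p, eta_min=-30.0, eta_max=8.0, h=0.02):
    # <p_eta|psi> = (1/sqrt(2 pi)) * integral_0^inf psi(x) x^(-1/2 - i p) dx,
    # evaluated as a Fourier integral over eta = ln x (trapezoidal rule).
    n = int((eta_max - eta_min) / h)
    acc = 0j
    for i in range(n + 1):
        eta = eta_min + i * h
        w = 0.5 if i in (0, n) else 1.0
        g = math.exp(eta / 2) * psi(math.exp(eta))   # e^(eta/2) psi(e^eta)
        acc += w * g * cmath.exp(-1j * p * eta) * h
    return acc / math.sqrt(2 * math.pi)

psi = lambda x: 2.0 * x * math.exp(-x)   # integral of |psi|^2 over (0, inf) is 1

# Unitarity (Parseval): the p_eta-space norm reproduces the x-space norm.
dp = 0.1
norm_p = sum(abs(mellin_critical(psi, -10.0 + k * dp)) ** 2 * dp
             for k in range(int(20.0 / dp) + 1))
```

In contrast to the full-line Fourier case, it is the $\sqrt{x}$ weight hidden in $\la x|p_\eta\ra$ that places this change of basis exactly on the critical line $\Re(s)=1/2$.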
However, by (\[mellineval\]), this can be viewed as the $|p_\eta\ra$ representation of a wavefunction $\psi$, where $$\la x|\psi(z,u)\ra=\psi_{z,u}(x)= N(z,u)\frac{e^{-ux}}{1-ze^{-x}}\;\;,\label{wavef}$$ with the normalization factor $N(z,u)$, and where $x\in\mathbb{R}^+$. We can now ask about the quantum system where the wavefunction (\[wavef\]), could appear. More specifically, can we find a potential $V(z,u; x)$, for which (\[wavef\]) is an eigenstate of the half-line Schroedinger equation, $$\left[-\frac{\hbar^2}{2}\frac{d^2}{dx^2}+V(x)\right]\psi_{z,u}(x)=E\psi_{z,u}(x)\;\;. \label{schroe}$$ As we have noted in section (\[QMinhalfline\]), the Hamiltonian/Schroedinger operator is only Hermitian and self-adjoint if one restricts the domain of the wavefunctions to satisfy $\psi_{z,u}^\prime/\psi_{z,u}|_{x=0}=\kappa\in\mathbb{R}$. From (\[wavef\]), we can find $$\lim_{x\rightarrow 0^+}\,\frac{\psi_{z,u}(x)^\prime}{\psi_{z,u}(x)}=\kappa=-u+\frac{z}{z-1}\;\;.$$ Inserting (\[wavef\]), into (\[schroe\]), and letting $R=ze^{-x}/(1-ze^{-x})$, we can solve for the potential function, $$V(x)=E+\frac{1}{2}\left[ (u+R)^2+R(R+1)\right] \;\;.\label{eigen1}$$ From the form of (\[eigen1\]), we can add a constant to $E$, without changing the condition that (\[wavef\]) solves (\[schroe\]). We add $-(E+u^2/2)$, so that $\lim_{x\rightarrow\infty} V(x)=0$, and thus we can set $$\bar{V}(x)=+\frac{1}{2}\left[ (u+R)^2-u^2+R(R+1)\right] \;\;,\label{eigen2}$$ where now $E=-u^2/2$, and $\lim_{x\rightarrow\infty}\bar{V}(x)=0$. We note that $$\lim_{x\rightarrow 0^+} \bar{V}(x)=-\frac{1}{2}\frac{uz}{z-1}+\frac{1}{2}\frac{z(1+z)}{(z-1)^2}\;\;,\label{Lerchpot}$$ and this becomes singular when $z=1$. This singularity at $z=1$ is to be expected as our analogy breaks down since (\[mellinlerch\]) holds for $z=1$, only when $\Re(s)>1$, but above we assumed $s=1/2-ip_\eta$. 
A particularly interesting case is when $z=-1,\;u=1$, as the Mellin transform of the wavefunction $$\la x|\psi_{\zeta}\ra={\cal N}\frac{1}{1+e^x}\;\;,$$ where the normalisation factor ${\cal N}^{-2}=-1/2+\ln 2$, gives, via (\[alterzeta\]), the Riemann Zeta function $$\la p_\eta|\psi_\zeta\ra={\cal N}(1-2^{1/2+ip_\eta})\Gamma(1/2-ip_\eta)\zeta(1/2-ip_\eta)/\sqrt{2\pi}\;\;.\label{RZpsi}$$ However the amplitude of this wavefunction, now represented in the Hyperbolic momentum coordinate $p_\eta$, falls off rapidly with $p_\eta$ due to the $\Gamma$-function in (\[RZpsi\]). From (\[Lerchpot\]), we see that $\la x |\psi_\zeta\ra$ is an eigenfunction of the potential $$V_{\zeta}(x)=-\frac{1}{2}\left[1-\frac{e^x}{e^x+1}\tanh \frac{x}{2}\right]\;\;,\;\;x\in\mathbb{R}^+\;\;,\label{eigenpot}$$ with the boundary condition $$\lim_{x\rightarrow 0^+}\frac{\psi_\zeta(x)^\prime}{\psi_\zeta(x)}=-\frac{1}{2}\;\;.\label{boundaryc}$$ The prefactors appearing in (\[RZpsi\]) possess no zeros, and thus the locations of the zeros of the wavefunction $\la p_\eta|\psi_\zeta\ra$ correspond exactly with those of the Riemann Zeta function. However these prefactors, $(1-2^{1-s})\Gamma(s)$ with $s=1/2-ip_\eta$, damp as $\exp(-\pi|p_\eta|/2)$. Thus the resulting wavefunction is mostly peaked around $p_\eta\sim 0$, and the finer details of the zeros of the wavefunction are very difficult to observe for large $p_\eta$. For clarity, since $\la p_\eta|\psi_\zeta\rangle\sim \zeta(1/2-ip_\eta)\,e^{-\pi|p_\eta|/2}$, in order to distinguish the zeros of the wavefunction we plot in Fig. \[fig:fig1\] the magnitude of the unnormalised wavefunction (on a logarithmic scale), rescaled by $p_\eta$. We see perfect correspondence between the wavefunction and the Zeta function zeros. Below we will show other forms for $\psi_\zeta(x)$ which yield much closer approximations to the Zeta function, i.e. where the prefactors vary more slowly with the Hyperbolic momentum $p_\eta$.
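The statements above can each be checked numerically: the normalisation ${\cal N}^{-2}=-1/2+\ln 2$, the boundary condition (\[boundaryc\]), and the fact that $\psi_\zeta\propto 1/(1+e^x)$ solves the Schroedinger equation with the potential (\[eigenpot\]) at energy $E=-u^2/2=-1/2$ (in units $\hbar=m=1$). A sketch:

```python
import math

psi = lambda x: 1.0 / (1.0 + math.exp(x))        # unnormalised psi_zeta

# Normalisation: integral of psi^2 over (0, inf) equals ln 2 - 1/2.
h, norm = 1e-4, 0.0
for i in range(int(50.0 / h)):
    x = (i + 0.5) * h                            # midpoint rule
    norm += psi(x) ** 2 * h

# Boundary condition: psi'/psi -> -1/2 as x -> 0+.
d = 1e-6
slope = (psi(d) - psi(0.0)) / d / psi(0.0)

# Eigenfunction check: -psi''/2 + V_zeta psi = E psi, with E = -1/2.
V = lambda x: -0.5 * (1.0 - math.exp(x) / (math.exp(x) + 1.0) * math.tanh(x / 2))
E = -0.5

def residual(x, d=1e-4):
    d2 = (psi(x + d) - 2.0 * psi(x) + psi(x - d)) / d ** 2
    return -0.5 * d2 + (V(x) - E) * psi(x)
```

The same residual check applies to the general potential (\[eigen2\]) at other values of $(z,u)$.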
Exact quantum simulation of the $\zeta$ function
================================================

From Fig. \[fig:fig1\], we see good correspondence between $\psi_\zeta(p_\eta)$ and the Riemann Zeta function on the critical line. However, the correspondence is marred by the exponential damping given by the $\Gamma$-function appearing in (\[alterzeta\]). It would be preferable if one could find a wavefunction in the $x-$representation $\la x|\psi\ra$ whose representation in the Hyperbolic momentum, $\la p_\eta|\psi\ra$, gives a closer match to the Riemann Zeta function than afforded by (\[alterzeta\]). Surprisingly, a wavefunction which yields a perfect match can be found. To describe this wavefunction we make reference to a set of qubit basis states proposed by Gottesman, Kitaev and Preskill (GKP) [@gottesman01], for quantum information processing using continuous variables. On the entire line, $x\in\mathbb{R}$, these states are defined by $$\begin{aligned} \overline{\overline{|0\ra}}&=&\sum_{n=-\infty}^{+\infty}\,|x=2\theta n\ra =\sum_{n=-\infty}^{+\infty}\,|p=\pi\theta^{-1}n\ra\;\;,\\ \overline{\overline{|1\ra}}&=&\sum_{n=-\infty}^{+\infty}\,|x=2\theta n+\theta\ra =\sum_{n=-\infty}^{\infty}\,(-1)^n|p=\pi\theta^{-1}n\ra\;\;, \end{aligned}$$ $\theta\in[0,1]$, which are infinite superpositions of infinitely squeezed states (i.e. eigenstates of $\hat{x}$ and $\hat{p}$), which are unnormalised and also possess infinite energy. Such continuous variable states have been proposed to encode a qubit $|q\ra=a\overline{\overline{|0\ra}}+b\overline{\overline{|1\ra}}$ [@gottesman01], and recently an error analysis [@glancy06], and various physical methods for their synthesis [@giovannetti01; @pinard05; @travaglione02; @pirandola04; @pirandola06a; @pirandola06b], have appeared in the literature.
As we are concerned with quantum mechanics on the half-line we set $\theta=1/2$, and only consider the $\delta$-function sum in $\overline{\overline{|0\ra}}$, over a strictly positive domain, $$\widetilde{|0\ra}=\sum_{n=1}^{+\infty}\, |x=n\ra\;\;.$$ Following from (\[p\_eigen1\]), we can easily find $$\la p_\eta \widetilde{|0\ra} \sim \sum_{n=1}^\infty\,\frac{1}{n^s}\;\;,$$ where $s=1/2-ip_\eta$, which is precisely the Riemann Zeta function evaluated on the critical line. However, just as the state $\widetilde{|0\ra}$, in the $x-$representation is unbounded and possesses infinite energy, the Riemann Zeta series for $\Re(s)<1$ is also formally unbounded, and the function is defined there via analytic continuation. Thus although formally we have a wavefunction whose $p_\eta$-representation is precisely the Zeta function, the unboundedness poses severe difficulties in its physical interpretation. Intermediate simulation ======================= We have seen a wavefunction whose $p_\eta$-transform yields the Riemann Zeta function up to a rapidly damped multiplicative function and another wavefunction whose $p_\eta$-transform yields the Riemann Zeta function precisely. We now instead search for an intermediate wavefunction in the $x-$representation which does not possess any pathologies, such as unbounded norm or infinite energy, and which gives the Zeta function up to some multiplicative factor which damps far more slowly than above, i.e. slower than $\Gamma(1/2-ip_\eta)\sim \exp(-p_\eta)$. As we shall see, there are an infinite number of such examples. For the most part however, we will be unable to give simple analytic formulae for these wavefunctions. To find these wavefunctions we return to Eqn.
(\[alterzeta\]), $$(1-2^{1-s})\zeta(s)\Gamma(s)=\int_{0}^{\infty} \frac{1} {e^t+1} t^{s-1} dt\;\;.\label{alterzeta1}$$ This is a special case of a more general Mellin transform formula [@sneddon72], $$\left\{{\cal M}\left(\sum_{n=1}^\infty\,(-1)^{n-1}f(xn)\right)\right\}(s)=(1-2^{1-s})\left\{{\cal M} f\right\}(s) \zeta(s)\;\;,\label{snedsum}$$ where $f(x)$ is any function. We see that the particular case of (\[alterzeta1\]), is obtained when we choose $$f(x)=\exp(-x)\;\;,\qquad \left\{{\cal M} e^{-x}\right\}(s)=\Gamma(s)\;\;.$$ We thus seek a function, $g$ say, whose Mellin transform $\{{\cal M}g\}(s)$ does not decay rapidly with $t$ for $s=1/2+it$. An example of such a function is the one parameter family of functions $$g(x,\phi)=N(\phi)\frac{1+x\cos(\phi)}{1+2x\cos(\phi)+x^2}\;\;,\label{myf1}$$ where $N(\phi)$ is a normalisation factor. This function has the Mellin transform [@sneddon72], $$\begin{aligned} \Xi(s,\phi)&\equiv&\left\{{\cal M} g(\phi)\right\}(s)\\ &=&\frac{N(\phi)\pi}{2\sin (\pi s)}\left[(\cos \phi-i|\sin \phi|)^{s}+(\cos \phi +i|\sin \phi|)^{s}\right]\;\;.\label{melling}\end{aligned}$$ The sum, $$\Sigma(x,\phi)\equiv -\sum_{n=1}^\infty\,(-1)^n g(nx,\phi)\;\;,\label{Sigma}$$ cannot be easily expressed in analytic form, but it represents a normalisable wavefunction on $x\in\mathbb{R}^+$, and thus we choose the normalisation factor $N(\phi)$ such that $\int_0^\infty|\Sigma|^2dx=1$. We plot the behaviour of (the unnormalised) $\Sigma(x,\phi)$ in Fig. \[fig:fig2\]. As $\phi\rightarrow\pi$, $\Sigma(x,\phi)$ develops an infinite number of oscillations within the domain $x\in[0,1]$, but in the following we will only consider $\phi\in[0,3]$, and here the function $g(x,\phi)$ and its Mellin transform $\Xi(s,\phi)$ are well behaved. We see that the Mellin transform damps quickly with increasing $t$ for low values of $\phi\in[0,2.5]$, but as $\phi\rightarrow 3$, this function damps more slowly.
We expect that by increasing the value of the parameter $\phi\rightarrow \pi$, we can extend the reach of the Mellin transform $\Xi(1/2+it,\phi)$ to arbitrarily large $t$-values, at the expense of using highly oscillatory wave-functions $\Sigma(x,\phi)$. If we now follow this argument and choose a wavefunction in the $x-$domain to be $\la x|\chi\ra=N(\phi=3)\Sigma(x,\phi=3)$, we can move to the $p_\eta-$representation to find $$\la p_\eta | \chi\ra=N(3)(1-2^{1/2+ip_\eta})\zeta(1/2-ip_\eta)\Xi(1/2-ip_\eta,3)\;\;,\label{finalzeta}$$ and we plot $|\la p_\eta|\chi\ra|$ vs. $-p_\eta$ in Fig. \[fig:fig1\] (II). Compared with Fig. \[fig:fig1\] (I), we can clearly see that the wavefunction now has greater extent over the $-p_\eta$ coordinate and more details of the zeros/nodes of this wavefunction are apparent. Thus although we have shown that one can find legitimate normalisable wavefunctions on the $x\in\mathbb{R}^+$ axis whose $p_\eta$-representation gives closer and closer approximations to the Riemann Zeta function, these wavefunctions become more and more pathological, e.g. $\sim \sin(1/x)$. Wigner Functions in the $(\eta,p_\eta)$ phase space =================================================== The Wigner quasi-probability distribution function $W(\eta,p_\eta)$ is a standard tool for visualising the quantum aspects of a quantum state, i.e. the regions where $W$ assumes negative values. The $W$ function, however, is typically defined on the phase space $(x,p)$, where $x,p\in \mathbb{R}$. For quantum mechanics on the half-line $x\in\mathbb{R}^+$, we have shown above that one cannot define a self-adjoint momentum operator and thus we cannot construct an associated Wigner function.
However in the Hyperbolic representation $(\eta, p_\eta)$, where now both $\eta,p_\eta\in \mathbb{R}$, and $[\hat{\eta},\hat{p}_\eta]=i\hbar$, we can (as in [@twamley98]), define a Wigner function to be $$W_D(\eta,p_\eta)\equiv \frac{1}{2\pi}\int_{-\infty}^{+\infty}d\eta^\prime\, \overline{\la \eta+\frac{1}{2}\eta^\prime|} \rho \overline{|\eta- \frac{1}{2}\eta^\prime\ra}\,e^{i\eta^\prime p_\eta}\;\;.$$ For the two quantum states (\[RZpsi\]), and (\[finalzeta\]), we have plotted the associated Hyperbolic phase-space Wigner function in Figs. \[fig:fig7\]-\[fig:fig10\]. Although zeros of the wavefunction do not normally correspond to zeros of the Wigner function, from Figure \[fig:fig8\] and Figure \[fig:fig10\], some correspondence seems present. Moreover the $p_\eta$-marginal probability, $|\psi_\zeta(p_\eta)|^2=\int W(\eta,p_\eta)d\eta$, exhibits the engineered Riemann-Zeta zeros. Physical Realisation ==================== Above we have given a mathematical derivation of a new type of unitary transformation which transforms quantum mechanics on the positive half line to a Hyperbolic representation and in the process, executes a Mellin transform $\psi(x)\rightarrow \left\{{\cal M}\psi\right\}(s=1/2-ip_\eta)/\sqrt{2\pi}$. By choosing $\psi(x)$, carefully (such that it is the eigenstate of a particular potential $V(x)$, (\[eigenpot\]), subject to the boundary condition (\[boundaryc\])), the transformed wavefunction $\psi(p_\eta)$, has a nodal structure identical to the Riemann Zeta function on the critical line. The boundary condition (\[boundaryc\]), is not typical. However as shown in [@pazma89; @filop02], one can closely approximate such a boundary condition by a short-range potential $V_{bndry}(x)$, near the origin. For the boundary condition (\[boundaryc\]) to hold independent of energy, $V_{bndry}$, limits to a hard-wall infinite potential with infinitesimal structure [@filop02] which enforces (\[boundaryc\]). 
Possible physical systems which might admit the engineering of such short-range boundary potentials are multi-frequency evanescent-wave mirrors used for trapping and reflecting ultra-cold neutral atoms [@cote03]. A more challenging task is the physical realisation of the measurement of Hyperbolic phase $\la p_\eta | \psi\ra$. We propose using an indirect measurement scheme [@milburn83; @imoto85], where the system we wish to measure is coupled to a quantum probe via an interaction Hamiltonian $H_I$ such that the Hyperbolic phase of the system drives a linear displacement of the probe’s quantum state. Since $[\eta,p_\eta]=1$, we can formulate Hyperbolic creation and annihilation operators $\eta\equiv (\tilde{a}+\tilde{a}^\dagger)/2$, $p_\eta=(\tilde{a}-\tilde{a}^\dagger)/2i$, with $[\tilde{a},\tilde{a}^\dagger]=1$. Taking the probe as a bosonic field described by operators $b,\,b^\dagger,\;[b,b^\dagger]=1$ (i.e. on the full line rather than the half line and these quadrature operators $x_{pb},\,p_{pb}$, are now well defined self-adjoint operators), then the indirect measurement generated by the interaction $H_I^A=\chi(\tilde{a}b^\dagger+\tilde{a}^\dagger b)=2i\chi(x_{pb}p_\eta-p_{pb}\eta)$, will drive displacements of the probe field generated by the system’s Hyperbolic quadratures. Unfortunately, although (from (\[pdilaton\_x\])), $p_\eta=[ \x\px+\px\x]/2$, the Hyperbolic position is effectively the logarithm of the position of the system on the half line, $\eta\sim \ln \x$, and since $\ln x \approx \ln a -\sum_{i=1}^\infty\,(-1)^i(x-a)^i/(ia^i)$, $x\le 2a$, the interaction Hamiltonian $H_I^A$ involves coupling the probe to a highly nonlinear function (involving all powers of $\x$), of the system’s position. 
Instead we consider the alternative coupling Hamiltonian $$\begin{aligned} H_I^B&=&2\chi\left[\xpb(\xs^2-\pxs^2)+\ppb(\xs\pxs+\pxs\xs)\right]\nonumber\\ &=&2\chi\left[\xpb(\xs^2-\pxs^2)+2\ppb p_\eta\right]\nonumber\\ &=&\chi(\b^\dagger\a^2+\b\a^{\dagger\,2})\;\;,\label{indirect}\end{aligned}$$ where $\xs,\,\pxs,\,\a,\,\a^\dagger$ now describe a bosonic mode defined on the entire real line. We note however that (\[indirect\]) acts disjointly on the two half-line “super-selection” sectors $x\in\mathbb{R}^+$ and $x\in\mathbb{R}^-$. From this we see that the probe’s momentum is coupled to the Hyperbolic momentum in each super-selection sector, but the probe’s position is no longer coupled to the Hyperbolic position. Moreover, (\[indirect\]), is the Hamiltonian for the Degenerate Parametric Oscillator, and has been studied by many authors [@hillery84; @hillery94; @agarwal97; @chaturvedi02; @agarwal06], though mostly in the case where the probe field is treated semiclassically. To get a feeling for how (\[indirect\]) provides us with an indirect measurement of the Hyperbolic momentum of the system mode, we examine the quantum Heisenberg equations of motion and their semiclassical c-number approximations, $$\begin{aligned} \frac{d\a}{dt}&=-2i\chi\b\a^\dagger\;\;,\;\;&\frac{d\alpha}{dt}=-2i\chi\beta\alpha^{*}\;\;,\label{H1}\\ \frac{d\b}{dt}&=-i\chi\a^2\;\;,\;\;&\frac{d\beta}{dt}=-i\chi\alpha^2\;\;,\label{H2}\end{aligned}$$ given the initial probe state to be the vacuum $|\psi_{pb}\ra=|0\ra$, and the system state only within $\mathbb{R}^+$, i.e. $\la x|\psi_{sys}\ra=0\,\; \forall x\le0$.
Setting $\tau=4\chi t$, quadratures for the probe and system $2\beta=\xb+i\yb,\;2\alpha=\xa+i\ya$, and introducing the new variables $2u\equiv \xa^2-\ya^2,\;2w\equiv \xa^2+\ya^2,\;v\equiv \xa\ya$, where now $v$ is the semiclassical analogue of the Hyperbolic momentum of the system, equations (\[H1\],\[H2\]) can be recast as $$\frac{d}{d\tau}\left[\begin{array}{c} \xb \\ v\\ w\\ u\\ \yb \end{array}\right]=\left[ \begin{array}{ccccc} 0& 1/2&0&0&0\\ 0&0&-\xb& 0&0\\ 0&-\xb&0&\yb&0\\ 0&0&\yb&0&0\\ 0&0&0&-1/2&0 \end{array}\right]\left[\begin{array}{c} \xb \\ v\\ w\\ u\\ \yb \end{array}\right]\;\;. \label{bigmat}$$ From this one can prove that $(\xb^2+\yb^2)+w=K$ is a constant of the motion, while in addition $\ddot{x}_{pb}/\xb=\ddot{y}_{pb}/\yb$. Using this constant of the motion one can arrive at coupled nonlinear dynamics for the probe mode alone, $$2\ddot{x}_{pb}=-\xb[K-(\xb^2+\yb^2)]\;\;,\;\;2\ddot{y}_{pb}=-\yb[K-(\xb^2+\yb^2)]\;\;,$$ with the initial conditions $\xb=\yb=0,\;2\dot{\xb}=\xa\ya,\;4\dot{\yb}=-(\xa^2-\ya^2)$. This is a conservative system of nonlinear ordinary differential equations which corresponds precisely to the kinematics of a particle moving in two spatial dimensions $(\xb,\yb)$, under the central potential $$V(\xb, \yb)=\frac{K}{4}r^2-\frac{1}{8}r^4\;\;,$$ where $r^2=\xb^2+\yb^2$. The dynamics can be solved exactly in terms of elliptic functions. One can find a series solution for small times to be $$\begin{aligned} \xb(\tau)&=&\frac{1}{2}\xa\ya\tau\left[ 1 -\frac{1}{24}(\xa^2+\ya^2)\tau^2+\cdots\right]\;\;,\\ \yb(\tau)&=&-\frac{1}{4}(\xa^2-\ya^2)\tau\left[1-\frac{1}{24}(\xa^2+\ya^2)\tau^2+\cdots\right]\;\;.\end{aligned}$$ Qualitatively, the particle initially moves away from the origin with nearly constant velocity (as the force vanishes near the origin), but after a time $t^*\sim \sqrt{\frac{3}{4 w(t=0)}}/\chi$, both $\xb$ and $\yb$ asymptote to fixed values as the particle comes to rest at the maximum of the potential $V$.
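The small-time series and the constant of the motion can be checked by integrating the semiclassical system (\[bigmat\]) directly. The sketch below uses a hand-rolled fourth-order Runge–Kutta integrator; the values of the initial system quadratures and of $\tau$ are arbitrary illustrative choices.

```python
# State ordering matches Eq. (bigmat): (x_pb, v, w, u, y_pb).
def deriv(state):
    xb, v, w, u, yb = state
    return (0.5 * v, -xb * w, -xb * v + yb * u, yb * w, -0.5 * u)

def rk4(state, tau, steps):
    """Classical fourth-order Runge-Kutta integration over [0, tau]."""
    h = tau / steps
    for _ in range(steps):
        k1 = deriv(state)
        k2 = deriv(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
        k3 = deriv(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
        k4 = deriv(tuple(s + h / 6.0 * 0 + h * k for s, k in zip(state, k3)))
        state = tuple(s + h / 6.0 * (a + 2 * b + 2 * c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state

xa, ya, tau = 1.0, 0.5, 0.1            # arbitrary initial system quadratures
K = 0.5 * (xa ** 2 + ya ** 2)          # w(0); (xb^2 + yb^2) + w stays equal to K
init = (0.0, xa * ya, K, 0.5 * (xa ** 2 - ya ** 2), 0.0)
xb, v, w, u, yb = rk4(init, tau, 1000)

# Small-time series solution quoted in the text
xb_series = 0.5 * xa * ya * tau * (1 - (xa ** 2 + ya ** 2) * tau ** 2 / 24)
yb_series = -0.25 * (xa ** 2 - ya ** 2) * tau * (1 - (xa ** 2 + ya ** 2) * tau ** 2 / 24)
```

For $\tau=0.1$ the neglected terms in the series are $O(\tau^5)$, so the integrated values agree with the series to high accuracy, and the invariant $(\xb^2+\yb^2)+w$ is conserved to integrator precision.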
The initial energy of the system $w(t=0)$ drives the probe up the potential, and in the long-time limit all of the system’s energy is expended in displacing the probe. However the dependence of the probe’s displacement on the initial system parameters $u$ and $v$ is linear only in the small-time limit $t\ll t^*$. To summarise, from the semiclassical Heisenberg equations of motion we have good reason to believe that for short interaction times the system’s Hyperbolic momentum, via $H_I^B$, induces a linear response in the $\xb$ quadrature. Although Degenerate Parametric Oscillators are typically studied in optical settings, it is also possible to engineer (\[indirect\]) between the vibrational modes of an ion in an ion trap [@agarwal97]. Conclusion ========== We have found a new type of unitary transformation for quantum mechanics on the half-line which maps it into the Hyperbolic representation. The effect of this transformation is to execute a Mellin transform on the wavefunction. Although this work has no immediate impact on the Riemann hypothesis, it is interesting that we are able to present bounded quantum wavefunctions whose nodal properties match exactly those of the Riemann Zeta function on the critical line. It is also possible that, just as the quantum Fourier transform has found numerous applications in quantum computation, this new quantum Mellin transform might lead to new quantum applications and algorithms. This work was supported in part by the European Commission Integrated Project QAP under Contract No. 015848. ![Comparison of $|\zeta(s=1/2-ip_\eta)|$ (blue) and the wavefunction $|\psi_\zeta(p_\eta)|$ (red), in (I) from Eq. (\[RZpsi\]) and in (II) from Eq. (\[finalzeta\]). In (I) we plot $[\log |\psi_\zeta(p_\eta)|]/(-p_\eta)$ (red) to overcome the exponential suppression of (\[RZpsi\]) generated by the $\Gamma(1/2-ip_\eta)$ function, while in (IIA) we plot $|\psi_\zeta(p_\eta)|$ (red) on a linear vs. linear scale and in (IIB) we repeat on a log vs. linear scale. In (IIA) and (IIB), the details of the zeros of the wavefunction are clearly visible for a larger range of $p_\eta$; all horizontal axes show $-p_\eta$. \[fig:fig1\]](Fig1_72.eps){width="15.2cm" height="5cm"} ![Graphs of (A) the sum $\Sigma(x,\phi)$ from Eq. (\[Sigma\]), and (B) the Mellin transform $|\Xi(s,\phi)|$ from Eq. (\[melling\]), for $s=1/2+it$. For low values of $\phi\sim 2$, $\Sigma(x,\phi)$ possesses very little structure, while for $\phi\sim 3$ this function develops more complex oscillations. For low values of $\phi\sim 0$ the Mellin transform $\Xi$ is a highly damped function of $t$, while for $\phi\sim 3$ the transform damps much more slowly, thus allowing extended explorations of the Zeta function on the $-p_\eta$ axis. \[fig:fig2\]](phi_plot_waterfall_72.eps){width="7cm" height="6cm"} ![Hyperbolic phase-space Wigner function $W(\eta, p_\eta)$ of the Riemann-Zeta wavefunction in Eq. (\[RZpsi\]); the adjacent panels (A)–(D) display $W(\eta,p_\eta)$, the marginal $\ln \int W(\eta,p_\eta)\,d\eta$, and $|\zeta(1/2-ip_\eta)|$.[]{data-label="fig:fig7"}](Wigner1_72.eps){width="8cm" height="8cm"} ![Hyperbolic phase-space Wigner function $W(\eta, p_\eta)$ of the Riemann-Zeta wavefunction in Eq. (\[finalzeta\]); the adjacent panels (A)–(D) display $W(\eta,p_\eta)$, the marginal $\ln \int W(\eta,p_\eta)\,d\eta$, and $|\zeta(1/2-ip_\eta)|$.[]{data-label="fig:fig9"}](Wigner2_72.eps){width="8cm" height="8cm"}
--- abstract: | Live video-streaming platforms such as Twitch enable top content creators to reap significant profits and influence. To that effect, various behavioral norms are recommended to new entrants and those seeking to increase their popularity and success. Chief among them are to simply put in the effort and to promote on social media outlets such as Twitter, Instagram, and the like. But does following these behaviors indeed have a relationship with eventual popularity? In this paper, we collect a corpus of Twitch streamer popularity measures—spanning social and financial measures—and their behavior data on Twitch and third-party platforms. We also compile a set of community-defined behavioral norms. We then perform a temporal analysis to identify the additional predictive value that a streamer’s future behavior contributes to predicting future popularity. At the population level, we find that behavioral information improves the prediction of relative growth that exceeds the median streamer. At the individual level, we find that although it is difficult to quickly become successful in absolute terms, streamers that put in considerable effort are more successful than the rest, and that creating social media accounts to promote oneself is effective irrespective of when the accounts are created. Ultimately, we find that studying the popularity and success of content creators in the long term is a promising and rich research area. author: - | Robert Netzorg, Lauren Arnett, Augustin Chaintreau, Eugene Wu\ Columbia University in the City of New York\ September 2018 bibliography: - 'mainnew.bib' date: September 2018 title: 'PopFactor: Live-Streamer Behavior and Popularity ' --- =1 Introduction {#s:intro} ============ Live-streaming platforms have recently grown to be of tremendous interest around the world. One example is Twitch[^1], a popular U.S.-based live-streaming platform focused on video game streaming.
Content creators, called streamers, create channels to broadcast live videos of themselves playing video games to multitudes of interested followers. Top video game streaming channels on the Twitch platform have been viewed over 1 billion times, and the website attracts over 15M viewers per day. In 2014, Twitch was purchased by Amazon for \$970M. A key aspect of live-streaming platforms is that there are tremendous social and financial incentives to grow in popularity. Viewers can support a particular streamer in a variety of ways. Users can follow a streamer and be notified when the streamer starts a broadcast. Streamers that have on average $\ge3$ concurrent viewers per broadcast gain additional benefits: users can pay for monthly subscriptions to gain exclusive access to additional content and social features, or spend $\$0.01$ to [*cheer*]{} on a streamer during a broadcast. Further, the audience can directly donate to a streamer on third-party platforms such as Patreon and TipeeeStream; in 2016, over half a million Twitch viewers donated a total of \$80M to their favorite streamers. When combined with product sponsorships, the top streamers can make upwards of \$4M per year. In short, Twitch—along with comparable platforms in the U.S. (e.g., Youtube Live) and around the world (e.g., Meitu, Chushou)—are hugely popular and emerging as large economic forces. To this end, the Twitch streamer community has curated a set of behavioral norms for how new streamers can quickly grow their audience. These behaviors vary from the variety of games to play and how long to stream, to the ways in which streamers can promote their channel on third-party social networks such as Twitter, Instagram, and others. 
While there is ample official [@twitchadvice] and unofficial [@twitchreddit; @twitchguide] advice on how streamers can alter their behavior to become more successful, it is unclear whether or not different behavior indeed affects success, and, if so, whether all of the recommended advice applies equally. Adding to this uncertainty, there are an increasing number of articles in the popular media that describe the difficulty of growing a successful streaming career [@redditfulltimetwitch; @twitchnoone], and the considerable work it takes to maintain such a career [@getrichnewyorker; @youtubeburn; @iceposeidon]. However, there is a lack of quantitative analysis of how content creator behavior relates to short-term and longer-term growth in popularity. Understanding this dynamic could provide a basis for guidelines about how new streamers should choose to focus their time and resources, and for these platforms to develop tools to aid their content providers. Towards this goal, we have collected a corpus of Twitch streamers that joined Twitch in 2016, and actively broadcast over the course of two years. This data contains the streamer activity on Twitch[^2] as well as third-party social media platforms such as Twitter and YouTube. It also contains several popularity measures that represent general social popularity (number of followers), active popularity (number of concurrent viewers during a broadcast), and financial popularity (number of cheers). In addition, we surveyed and categorized community recommended behaviors into 6 groups of “rules” that are believed to aid in streamer success (e.g., produce more content, promote on Twitter). Based on these two datasets, this paper studies the temporal dynamics of Twitch streamer popularity growth during their first year of broadcasting—where the primary growth in popularity occurs.
We seek to understand [*whether*]{} following behavioral rules established by the Twitch community is related to increased popularity (as defined by different popularity measures), and if so, [*how*]{}. To this end, we model this as an inference task: given a streamer’s information at time $t$, predict the streamer’s popularity at a future time $t+\delta$. Simply formulating the prediction task is challenging—a naive model that merely predicts future success would confound past streamer behaviors and success with future behavior that the streamer has control over. We instead propose to analyze the [*difference*]{} in predictive power between a baseline model that [*only*]{} uses past streamer information (e.g., before or at time $t$), and a behavioral model that augments the baseline with behavior information between $t$ and $t+\delta$. The predictive power gained from adding the latter information is a strong indicator of future behavioral effects[^3]. Using this procedure, we analyze behavioral effects on predicting future popularity in absolute terms (ranking in the top-10% of a measure), as well as predicting future relative growth that is higher than the median growth (Section \[s:prediction\])—in other words, the fast-growing streamers. We find that across the three popularity measures, future behavior does not contribute to more accurately predicting future popularity in absolute terms. However, future behavior does contribute to predicting future relative growth, which is arguably important for an individual streamer that is deciding how to rapidly grow her audience over the next several months. We also find that it is simply very difficult to identify streamers whose financial success will rapidly grow in either the short or long term. We also study how streamers may individually grow. We find that it is very difficult to predict streamers that can grow at a rate to reach Twitch Partner status after 2 years (100 average concurrent viewers).
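The baseline-versus-augmented comparison can be illustrated with a toy example. The sketch below uses entirely synthetic data; the feature names, coefficients, and the simple median-threshold "classifier" are illustrative assumptions, not our actual features or models. Because future growth depends on both past information and future behavior, a predictor that also sees the behavior window outperforms a baseline that only sees the past.

```python
import random

random.seed(0)

# Synthetic streamers: "past" popularity known at time t, and a
# "behaviour" score for the window (t, t+delta] (hypothetical features).
n = 2000
past = [random.gauss(0, 1) for _ in range(n)]
behaviour = [random.gauss(0, 1) for _ in range(n)]
growth = [0.3 * p + 1.0 * b + random.gauss(0, 0.3)
          for p, b in zip(past, behaviour)]
median = sorted(growth)[n // 2]
label = [g > median for g in growth]   # "grows faster than the median"

def accuracy(scores, labels):
    """Threshold the scores at their median and compare with the labels."""
    thresh = sorted(scores)[len(scores) // 2]
    return sum((s > thresh) == l for s, l in zip(scores, labels)) / len(labels)

base_acc = accuracy(past, label)                       # past information only
aug_acc = accuracy([0.3 * p + b for p, b in zip(past, behaviour)], label)
```

The gap between `aug_acc` and `base_acc` plays the role of the "predictive power gained" in our procedure: it isolates the contribution of the behavior window from what past information already explains.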
In contrast to popular media [@twitchnoone], we find that few streamers broadcast consistently to an empty audience, and that streamers that broadcast as a full-time job ($\ge40$hrs/week) are considerably more successful than the rest. In addition, any time is a good time to start publicizing on third-party social media accounts. Overall, we find that studying content creator behavior as a predictor of future popularity growth is a promising and impactful research direction, and we discuss future directions in Section \[s:conclusion\]. Related Work {#s:related} ============ Popularity has been broadly studied across many disciplines, including business, marketing, and social networks. Here, we survey relevant work on quantitative analysis and prediction of social media popularity. Much prior work has studied content features that lead to social network virality, for instance models to predict Facebook photo re-shares [@cheng2014can], Twitter re-tweets [@hong2011predicting], Twitter hashtag usage [@ma2013predicting], Digg story up-votes [@szabo2010predicting], and hourly volume of news phrases [@yang2010modeling]. This area of work identifies both content-specific features (e.g., of a potential tweet) and user-specific features (e.g., popularity, network characteristics) that are predictive of the content’s eventual popularity (i.e., \# of shares). Although these studies may use user popularity as a predictive feature, it is unclear [*how the user became popular*]{}. In contrast, we specifically investigate which community-accepted behaviors are predictive of popularity growth over time. Work on social network user popularity has primarily focused on predicting a given user’s popularity based on network characteristics, and on whether such popularity measures are indicative of influence.
For instance, popular Instagram users exhibit broader topical interests [@ferrara2014online], form reciprocal relationships with other users, and often share common followers [@Kim2017]. Similarly, popular Twitter users tend to create more original tweets, and retweet less [@fu2016online]. Other studies have employed information diffusion models [@yang2010modeling] to measure the extent to which popularity results in network-level influence, and found that popular users are not necessarily the top influencers of hashtag propagation on Twitter. Ultimately, these studies focus on users that are already popular. There has been recent work studying the Twitch ecosystem to understand the intrinsic motivations of Twitch streamers and viewers, and how streamers adopt personas to fit the live-streaming medium. Others have studied the high volume of chat content that forms during streamer broadcasts, and their community characteristics [@HamiltonPlay]. In terms of popularity, Kaytoue et al. [@Kaytoue2012] predict viewership dynamics within a given broadcasting session. Our work builds on this body of research by proposing predictive models of streamer popularity growth based on streamer behaviors. Unlike predicting the popularity of content, which focuses on predicting popularity in the near future, our goal is to study the process of *becoming popular* on a social network by observing behavioral characteristics over long spans of time. To the best of our knowledge, there are many community-based anecdotes about effective behavior, and relatively few quantitative or longitudinal studies. Cha et al. suggest that behavior may be a factor in growing social network influence; Hutto et al. find that how Twitter users interact with their social network affects their follower count [@hutto2013longitudinal]; Chang et al. find that diverse content can help increase followings on Pinterest [@chang2014specialization].
We extend these ideas by examining a broad set of behaviors derived from the Twitch community and quantitatively studying their ability to predict future popularity growth for varying time ranges. Twitch Data and Popularity {#s:data} ========================== We collected behavioral and popularity information for Twitch streamers that created accounts throughout 2016 and remained active for a year. The behavioral data includes their broadcasting activity on Twitch. We also collected activity on third-party platforms such as Twitter, YouTube, and Instagram, if those accounts were linked from a streamer’s Twitch profile. The Twitch-specific data was provided by the Twitch data science team. This section describes our data collection as well as statistics of our sample population. Data Collection --------------- This study concerns the population of Twitch streamers who began streaming at some point in 2016 and continued streaming consistently (at least once every two months) for at least a year. Our dataset consists of 17,682 users and 4 million broadcasts. As we received this data from Twitch, no data cleaning was required. Due to the selection process for our dataset, however, we cannot be totally sure that the popularity dynamics after the first year of streaming are what would have occurred had the streamers consistently continued to stream into the second year. Therefore, our analysis in Section \[s:prediction\] focuses solely on the first year of streaming. Although Twitch is focused on gamers, it allows non-gaming streamers (e.g., cooking). Only 1.5% of all broadcasts in our corpus were non-gaming related, and we do not find that they bias our results. Thus, we keep them in the dataset. [**Third-Party Social Media Data:**]{} To more completely understand a streamer’s presence on the internet, we use links on streamers’ profiles to other social media accounts and scrape data that is publicly available from Twitter, YouTube, and Instagram.
These three platforms provide temporal insight into a streamer’s behavior on external social media accounts—for instance, whether the streamer advertises an upcoming broadcast on Twitter. It also allows us to study how the streamer’s follower community has developed on other platforms. Using the Twitter, YouTube, and Instagram APIs, we collected the entire posting history for streamers with third-party accounts. While there are other third-party platforms, such as Patreon or Snapchat, that streamers also link in their profiles, those platforms do not provide access to historical information, so we did not collect data from them. We also did not collect data from platforms that some Twitch streamers use, such as Discord, TipeeeStream, and Facebook groups, because they do not provide public APIs. Popularity Measures ------------------- Content creators may have different motivations for broadcasting on Twitch [@maslow1943theory; @weiner1972theories]—it may be for financial gain, to seek popularity and fame, for social interaction, or because they simply enjoy it. For this reason, we studied multiple measures of streamer popularity related to total popularity, active viewership, and financial gain. `Follower` counts measure the number of Twitch users that want to be notified when a streamer begins a new broadcast; concurrent viewer counts (`Conc. Viewer`) measure the average number of users that watch a streamer’s broadcasts for at least a few minutes; cumulative viewer counts (`Cum. Viewer`) measure the total number of users that watched a streamer’s broadcasts in a month (for any amount of time); and `Cheers` measure the number of $\$0.01$ donations during a streamer’s broadcasts. ![Popularity of streamers is highly skewed.
The top 10% for each measure is dashed.[]{data-label="f:popdist"}](figures/popularity_distribution_users.pdf){width=".9\columnwidth"} Figure \[f:popdist\] plots the cumulative popularity of top streamers for each popularity measure one year after creating their account on Twitch. The dashed portion of the line represents the top 10% of streamers for each popularity measure. For instance, the top 10% of streamers account for 72% of the total number of followers at the end of one year. Other popularity measures are even more skewed, with the top 10% receiving above 80% of views and 90% of cheers. Indeed, nearly 45% of streamers never receive a single cheer after one year. The curves for concurrent viewers and cumulative views are nearly identical, indicating that after one year, ranking streamers based on historical popularity or recent viewership audience does not affect the distribution. However, in subsequent sections we will find that the growth dynamics on a per-streamer basis for these two measures are quite different. From Behavioral Norms to Features {#s:rules} ================================= To help new streamers grow their followers, the Twitch community has curated effective behavioral norms into a set of “rules” believed to be indicative of popularity growth. We describe our categorization of these rules and how we translate them into features used in our prediction models. Community-Recognized Behaviors ------------------------------ We surveyed the popular Twitch subreddit[^4] and community-developed guides [@twitchguide; @twitchpopularitybook]. We then classified them into six general [*rules*]{}. Due to space constraints, we summarize each rule in terms that are not Twitch-specific, and describe the features we extracted to represent each rule. A comprehensive list of our features and their descriptions appears in Appendix \[a:features\].
To the best of our ability, we avoided information leakage by excluding behavior features strongly correlated with popularity. For instance, the audience size during a streaming session and the number of Twitter followers are indicative of popularity. Also, we did not include features based on video stream content (e.g., narration style, emotion) because we lacked access to historical video streams; we leave the analysis of such rules to future work. [**R1: Produce More Content:**]{} Broadcasting more is considered a core component of becoming popular. Popular streamers tend to stream for 4-8 hours, 5-7 days a week, and new streamers are recommended to stream $>2-4$ hours per weekday [@twitchguide]. We computed 4 features measuring the number, frequency, and total length of broadcast content. As an example, Broadcast\_Gap computes the average time between broadcasts, i.e., how long a user is effectively *inactive*. A user will be said to follow the rule “produce more content” if she keeps that time small (see Section \[ss:translate\] for more details). [**R2: Release Content Regularly:**]{} Adhering to a consistent broadcasting schedule is considered a vital part of audience growth. We computed 2 features that measure whether or not the streamer keeps a schedule and to what extent she follows it. [**R3: Don’t Release Overcrowded or Obscure Content:**]{} Twitch recommends streamers based on popularity, and community wisdom suggests that streamers playing overly popular games will be drowned out by already popular streamers. On the other hand, overly obscure games will not be interesting to potential followers. We computed 2 features that trace whether or not a streamer plays an overcrowded game and how long she spends playing it. [**R4: Have a Social Media Presence:**]{} Linking to, and promoting on, other social media accounts is believed to help build a follower community. We use links on streamer profiles to see whether the streamer has a YouTube or Instagram account.
We computed 9 features for third-party social media accounts related to YouTube video and Instagram post metadata. [**R5: Twitter is Best for Promotion:**]{} Twitter is highlighted as one of the best ways to advertise content before and after each broadcast. We measured general Twitter activity and activity in relation to broadcasts. We computed 7 features related to Twitter activity and temporal correspondence with broadcasts. [**R6: Diversify Your Content:**]{} Based on prior analysis of Pinterest [@chang2014specialization], diversified content may appeal to a broader audience. However, streamers typically stream one game, and occasionally mix in secondary games. We computed the number of games played during each broadcast and averaged across each month. The scope of our rules was limited by our dataset. For instance, we did not collect data about the streamer’s chat or in-video interactions with the audience during a broadcast [@lurkers]. Despite this, our analysis finds that behavior improves the prediction of future follower growth, and we expect the inclusion of additional behavioral rules would strengthen these findings. Translating Rules into Temporal Features {#ss:translate} ---------------------------------------- We now describe how we distill the question [*“did streamer $u$ obey rule $r$ during time interval $[t, t+\delta]$?”*]{} into a single binary feature that can be used in our prediction model. We will use Tweet\_Num as the running example; others are simpler or defined similarly. The process requires addressing a number of nuanced challenges. The first is that the features measured for the above rules are not temporally aligned. Some are per-broadcast while others are per-Tweet. Second, what does it mean to obey a rule? Is it relative to the streamer’s previous actions, to the rest of the streamer community, or to the streamers that actually succeed?
Third, how do we reduce a streamer’s rule following, which may change over time, into a single binary value while losing as little information as possible? To address the first challenge, we compute the feature’s average value over the time interval $w=[t, t+\delta]$. For instance, let $B_u = [b_{u,1},\cdots,b_{u,m}]$ be the sequence of $m$ tweets for streamer $u$, and let $b_{u,j}.t$ be the timestamp for the $j^{th}$ tweet. We define $f_{u,w} = count(\{b_{u,j} | b_{u,j}.t \in w\})$ as the number of tweets in the time interval $w$. For the second challenge, we observe that the community rules are typically described in relation to the behavior of popular streamers (e.g., broadcast as long as popular streamers) or the streamer community at large (e.g., you should avoid playing the game everyone is playing). For this reason, we interpret obeying a rule as following it [*more*]{} than the general community, *in a way that imitates popular users*. To address our third challenge, we have to find a binary value that captures as much information as possible. This means that the cutoff $C_f$ needs to be feature specific: $u$ obeys the rule if her feature value compares to the cutoff in the way that popular users’ values do. Therefore, a user $u$ where $f_{u,w}>C_f$ is given a value of 1 and 0 otherwise. To quantify this principle, the cutoff is chosen as the value $C_f$ at which the fraction of streamers following the rule differs the most between popular (top-10% streamers) and unpopular (bottom-90% streamers), i.e., $$C_f=\operatorname*{arg\,max}_{0\leq k\leq1}\left(\left|pop_{k}-unpop_{k}\right|\right)$$ where $pop_k$ is the fraction of popular streamers whose feature value $f_{u,w}$ is above the $k^{th}$ percentile among all streamers, and $unpop_k$ is defined accordingly for unpopular streamers.
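The cutoff selection above can be sketched as follows (a minimal illustration of the definition, not the paper's actual code; `choose_cutoff` and its argument names are hypothetical):

```python
import numpy as np

def choose_cutoff(feature_values, is_popular, n_grid=101):
    """Pick the cutoff C_f maximizing |pop_k - unpop_k| over percentiles k.

    feature_values: per-streamer window averages f_{u,w}.
    is_popular: boolean mask marking the top-10% streamers.
    """
    feature_values = np.asarray(feature_values, dtype=float)
    is_popular = np.asarray(is_popular, dtype=bool)
    best_gap, best_cutoff = -1.0, None
    for k in np.linspace(0, 100, n_grid):
        cutoff = np.percentile(feature_values, k)
        # Fraction of popular / unpopular streamers above this cutoff.
        pop_k = np.mean(feature_values[is_popular] > cutoff)
        unpop_k = np.mean(feature_values[~is_popular] > cutoff)
        if abs(pop_k - unpop_k) > best_gap:
            best_gap, best_cutoff = abs(pop_k - unpop_k), cutoff
    return best_cutoff

def obeys_rule(f_uw, cutoff):
    """Binary rule-following indicator: 1 iff f_{u,w} exceeds C_f."""
    return int(f_uw > cutoff)
```

For a heavily skewed feature (e.g., most streamers tweet zero times), this grid search lands on the percentile that best separates the two subpopulations rather than on a degenerate median.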
Note that, since we will later use a feature to study popularity either in terms of followers, views, or cheers, we redefine the top-10% of streamers for each case, which means the cutoff will be different (albeit very close) for different prediction tasks. An alternative method is to pick cutoffs that ignore popular users and their different behaviors, for instance choosing $C_f$ to be the median of $f$ among all streamers. Although for many features this resembles our method[^5], it has two drawbacks: first, the median (or another arbitrarily chosen percentile) is often degenerate because some features are heavily skewed. Second, when a simple median cutoff exhibits negative results (indicating that behaviors are not adding accuracy in prediction), one can still ask whether behavior could help with a more *informative* cutoff. As an example, Figure \[f:broad\_perc\] plots the distribution of Tweet\_Num for the popular (top-10% streamers in terms of follower count) and unpopular (the other streamers) subpopulations. The median over the whole population (i.e., 0 Tweets) is not informative enough to separate popular from unpopular streamers based on that behavior alone. In contrast, the cutoff choice we present avoids that trap unless, of course, the feature is entirely uninformative. Using the above procedure, we can compute one feature vector for each user in each time interval for each popularity measure. In the next section, we define different measures of popular/unpopular and time interval in order to understand the temporal dynamics of streamer behavior on popularity. ![Example of feature and cutoff choices for rule “Tweet More”.
The vertical line indicates the dynamic cutoff chosen by our method, while the median (0 Tweets) offers less information.[]{data-label="f:broad_perc"}](figures/percentile_example.pdf){width="\columnwidth"} Population-Level Dynamics {#s:prediction} ========================= [.49]{} ![Percentage of streamers (y-axis) with same or more (a) followers, (b) average concurrent viewers, (c) cheers, and (d) cumulative views at different account ages (lines)[]{data-label="f:ccdf"}](figures/ccdf_follower_growth.pdf "fig:"){width="\columnwidth"} [.49]{} ![Percentage of streamers (y-axis) with same or more (a) followers, (b) average concurrent viewers, (c) cheers, and (d) cumulative views at different account ages (lines)[]{data-label="f:ccdf"}](figures/ccdf_views_growth.pdf "fig:"){width="\columnwidth"} \ [.49]{} ![Percentage of streamers (y-axis) with same or more (a) followers, (b) average concurrent viewers, (c) cheers, and (d) cumulative views at different account ages (lines)[]{data-label="f:ccdf"}](figures/ccdf_cheers_growth.pdf "fig:"){width="\columnwidth"} [.49]{} ![Percentage of streamers (y-axis) with same or more (a) followers, (b) average concurrent viewers, (c) cheers, and (d) cumulative views at different account ages (lines)[]{data-label="f:ccdf"}](figures/ccdf_cummulative_ccu_growth.pdf "fig:"){width="\columnwidth"} This section analyzes streamer behaviors that are correlated with the four popularity measures described in Section \[s:data\]. We begin with Figure \[f:ccdf\], which shows a population-level view of these measures at different months since account creation (streamer age). For measures such as follower count and cumulative views, growth appears consistent across months for users of all percentiles. The median user gains 102 and 105 followers in months 1 and 2. However, measures such as concurrent viewership and cheers are more elusive.
Even after 2 years, very few streamers reach 100 concurrent viewers or earn more than \$10 in cheers per month (one cheer is \$0.01). The question then becomes: what behaviors delineate the unpopular streamer from her popular counterpart? The rest of this section studies this question by using the behavioral factors defined by the community rules and rule-following (Section \[s:rules\]). Temporal Analysis Methodology ----------------------------- The rest of this section studies how behavior at age $t$, measured as the degree of rule-following described above, is correlated with future popularity at age $t+\delta$. Intuitively, this is challenging because popularity, behavior, and time are intricately connected. In particular, a naive popularity prediction task could confound factors such as current status with behavioral factors. Without access to randomized experiments and artifacts that can help infer causality, we present a temporal analysis method to minimize the effects of confounding factors and isolate behavioral effects. The main idea is to use a strong baseline model $F_{cur}$ that uses all relevant information at time $t$ to predict eventual popularity at $t+\delta$, and compare it with a behavioral model $F_{cur+b}$ that [*additionally includes behavioral features*]{}. The difference in predictive accuracy between the two models describes the additional predictive power that behavior accounts for. We now define the two models. [**Strong Baseline ($F_{cur}$):**]{} We formulate a binary inference task. The model input includes *all* information on a streamer’s popularity and actions, including on third-party platforms, up to age $t$ (e.g., all data prior to age of 4 months). The goal, or output, is to accurately predict whether the streamer was among the top 10% of a given popularity measure (e.g., top 10% most followers) by the end of the interval $t+\delta$ (e.g., age of 6 months if $\delta=2$).
We call this [*Absolute Popularity*]{}, as it measures popularity in absolute terms. In contrast, an individual streamer may simply care about growing rapidly. Thus we also define [*Relative Popularity Growth*]{} by whether or not the streamer’s popularity measure increases more than the median streamer’s growth. For instance, if the follower count grew 10% over 2 months and the median only grew 5% during the same period, then the streamer had high relative follower count growth and the model should predict $1$. If not, then the model should predict $0$. We evaluate both absolute and relative popularity in the following experiments. Note that $F_{cur}$ carefully accounts for the effect of age. It uses supervised training to interpolate a growth trajectory using past information until $t$ to estimate the expected outcome at $t+\delta$. For a fixed age interval size (e.g., $\delta=3$), we pool the intervals at each monthly starting age (e.g., \[1m-4m\],\[2m-5m\],$\cdots$), and report test AUC[^6] using an 80-20 train-test split (performed on the entire dataset before the temporal partition; for each window $[t, t+\delta]$, 20% of users are held out at random). We use a logistic regression model because the contribution of each behavioral feature can be interpreted by the weights of the model. Formally, the prediction task is as follows. Let $X^t$ and $y^t$ be the set of features and binary popularity outcomes at time $t$ across all users, and let $\delta$ be the time interval size. The task is to learn a set of linear feature coefficients $A$ that minimize the non-regularized logistic regression (cross-entropy) loss: $$A^* = \operatorname*{arg\,min}_{A} \sum_{t\in [1, 12-\delta]} \ell\left(y^{t+\delta}, \sigma(AX^t)\right)$$ where $\sigma$ is the logistic function and $\ell$ is the cross-entropy loss, summed over users. [**Behavior Model ($F_{cur+b}$):**]{} The behavior model augments the inputs with behavioral features observed *during* the age interval $[t,t+\delta]$ as defined in Section \[ss:translate\]. Since it has more information, it is expected to return a higher AUC.
However, note that all past behaviors and popularity of that streamer were previously included, so the [*only new information*]{} concerns the unexpected/unpredictable changes in behavior. By focusing on the *AUC gain* $F_{cur+b}-F_{cur}$ rather than absolute model accuracies, we can more confidently isolate how changes in behavior affect the prediction of popularity. Hence, a large difference between the two models is less likely to be due to a pre-existing factor. [0.25]{} ![image](figures/window_experiments/new_dataset_absolute_pop.pdf){width="0.25\paperwidth"} [0.25]{} ![image](figures/window_experiments/new_dataset_relative_growth.pdf){width="0.25\paperwidth"} [0.25]{} ![image](figures/window_experiments/two_month_window_experiment.pdf){width="0.25\paperwidth"} Behavior and Follower Growth ---------------------------- To start, we study the predictiveness of behaviors on absolute and relative popularity in terms of number of followers (Figure \[f:window\_follower\]). Figure \[f:abswindows\] shows that over short time intervals (2 months), the baseline model can predict absolute popularity with nearly 0.87 AUC. This is because the most popular users typically maintain their status in the short term. In contrast, over longer periods (1 year), the AUC decreases substantially to as low as $0.65$. Knowing future behavior ($F_{cur+b}$) actually decreases the AUC over the long term to nearly as low as $0.6$, which is slightly above the random chance of $0.5$. For the absolute popularity task, prior popularity goes a long way in identifying the future highly popular from the rest (i.e., 55% of users who are in the top 10% most followed in the first month end the year in the top 10%), explaining why behavior would not provide much of a predictive boost. Figure \[f:relwindows\] shows that predicting relative growth using $F_{cur}$ shares a similar trend with absolute popularity, but is generally harder.
Incorporating future behavior provides a considerable boost in AUC—by $0.2$ over a 1 year interval. This is consistent with community expectations that streamer behavior can affect the rate of growth. More surprisingly, the AUC for $F_{cur+b}$ is almost flat as the time interval increases. This suggests that behavior may be a strong contributor to a streamer’s rate of follower growth over both short and long term—there is potential to control one’s popularity in a predictable manner. To account for streamer age, Figure \[f:twomonthrel\] reports the AUC for relative growth, but fixes the interval size to $\delta=2$ months and varies the age at the start of the interval (x-axis). We find that behavior is indeed important throughout the first year (16% gain on average), and is highest at 4 months (23% gain). We note that the first interval is dramatically higher than the other intervals due to sampling biases. For example, many professional gamers and previously-popular streamers bring their fans when they create their Twitch account, which exacerbates the distinctions between seemingly high and low growth streamers, making the prediction problem simpler in the first interval. Interestingly, even then, behavior matters. 
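The $F_{cur}$ versus $F_{cur+b}$ comparison above can be sketched with scikit-learn on synthetic data (illustrative only; the real models use the features of Section \[s:rules\] and pooled monthly windows, and a very large $C$ approximates the non-regularized fit):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# X_cur: popularity/activity features up to age t (F_cur inputs).
X_cur = rng.normal(size=(n, 5))
# X_beh: binary rule-following indicators during [t, t+delta].
X_beh = rng.integers(0, 2, size=(n, 6)).astype(float)
# Synthetic growth label that depends on both current state and behavior.
logits = X_cur[:, 0] + 1.5 * X_beh[:, 0] - 0.5
y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)

def window_auc(X, y):
    """Train on 80% of users, report AUC on the held-out 20%."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    # Large C approximates a non-regularized logistic regression.
    model = LogisticRegression(C=1e6, max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

auc_cur = window_auc(X_cur, y)                        # F_cur
auc_cur_b = window_auc(np.hstack([X_cur, X_beh]), y)  # F_cur+b
print(f"AUC gain from behavior: {auc_cur_b - auc_cur:.3f}")
```

On this synthetic data, where the outcome depends on behavior by construction, the behavioral model attains a higher test AUC; the gain is the quantity of interest, not either model's absolute accuracy.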
[0.3]{} ![image](figures/window_experiments/absolute_pop_window_views.pdf){width="\columnwidth"} [0.3]{} ![image](figures/window_experiments/absolute_pop_window_cheers.pdf){width="\columnwidth"} [0.3]{} ![image](figures/window_experiments/absolute_pop_window_cummulative_views.pdf){width="\columnwidth"} \ [0.3]{} ![image](figures/window_experiments/relative_growth_window_views.pdf){width="\columnwidth"} [0.3]{} ![image](figures/window_experiments/relative_pop_window_cheers_filtered.pdf){width="\columnwidth"} [0.3]{} ![image](figures/window_experiments/relative_pop_window_cummulative_views.pdf){width="\columnwidth"} Additional Popularity Measures ------------------------------ As discussed in Section \[s:data\], the Twitch ecosystem offers other definitions of popularity beyond follower count. For instance, the average concurrent viewership measures how many users concurrently watch a streamer’s average broadcast for longer than a few minutes. This measure is important because followers may not necessarily watch the streamer. Many streamers broadcast on Twitch in the hope of making money, and the number of Cheers is a monetary measure of popularity. A third measure is the cumulative total views, which measures the total number of times a streamer’s broadcasts have been viewed. This is a cumulative statistic similar to followers, and although it is not used by Twitch, other platforms such as TikTok and YouTube report it. Finally, views and followers are moderately correlated (0.44), cheers and followers are weakly correlated (0.26), and cumulative views and followers are highly correlated (0.88). Unlike follower count, concurrent views and cheers are much more volatile metrics of popularity, and more difficult to attain. After streaming for 2 years, only 55% of streamers receive even a single cheer, 19% earn \$100, and 4% earn \$1,000.
Further, we note that unlike the other measures, average concurrent viewers does not grow monotonically and can fluctuate considerably from month to month, and broadcast to broadcast. We find that the difficulty of attaining concurrent viewers (the median user has $\approx6$ concurrent viewers) impacts the overall accuracy of the predictive models. [**Absolute Popularity:**]{} The first row of figures in Figure \[f:window\_popularity\] reports the AUC curves for $F_{cur}$ and $F_{cur+b}$ using the absolute popularity of the three measures. We find that the curves for concurrent viewership and cumulative views are consistent with the followers measure in Figure \[f:abswindows\]. In contrast, there are so few streamers with more than a single cheer that both models perform near randomly, although behavior contributes a slight gain over the 1 year interval. [**Relative Popularity Growth:**]{} The second row reports AUC curves for relative popularity growth of the three measures. The cumulative views curves in Figure \[f:rel\_cumviews\] are nearly identical to the corresponding followers curves in Figure \[f:relwindows\]. This similarity makes sense in light of the fact that both measures are monotonic and highly correlated. The strong performance of $F_{cur+b}$ on cumulative views suggests that, across interval sizes, behavior helps distinguish highly viewed streamers from seldom viewed ones. In contrast, the curves for concurrent views and cheers are considerably different. Both $F_{cur}$ models perform nearly randomly, and although behavior features increase the AUC by nearly 0.1, the overall accuracy is still very low (around 0.6 for both measures). This suggests that community-accepted behavioral rules may not be enough if a streamer is focused on monetary or viewership success.
[Table \[t:coefs\] here: per-feature coefficients for the Followers, Cum. Views, and Concur. Views models; rows include \# Broadcast, Broadcast Len, Sched Regularity, \# Days, \# Games, \# Popular Game, Gap Btwn Broadcasts, and Unique Games.] Comparing Feature Coefficients ------------------------------ Table \[t:coefs\] summarizes each feature’s coefficients in the $F_{cur+b}$ relative popularity growth models for each popularity measure; we exclude cheers because the extreme skew in which streamers receive cheers caused the model to perform poorly. We use $^*$ and $^{**}$ to denote significance at the $<0.1$ and $<0.05$ levels. The p-value for each feature was computed separately using a two-tailed t-test. For convenience, we summarize how the coefficients change between each pair of models in the final three columns. In order to assure the robustness of our coefficient estimates, we ran a correlation analysis to see if our model contained a set of features that were possibly collinear with one another. Several features, namely Instagram length, Instagram num, and Tags num, are highly correlated with one another in the follower task. We removed these features and reran the model, finding that the significance and coefficient magnitudes remained the same. Because removing the features did not impact the results, we include them here to provide a full analysis of the feature set. We find that most features have a positive correlation with follower growth. In particular, regularly broadcasting more often, and for longer periods of time (Broadcast \#, Broadcast Len, Sched Regularity) are all highly correlated with follower growth. In fact, Broadcast Gap has a large negative coefficient: long periods between consecutive broadcasts predict low growth.
In addition, advertising on different social media platforms by linking to upcoming streams (Instagram Adv, Youtube Adv, Twitter Adv), and by simply posting (Tweet \#), are highly correlated. There is a slight negative correlation with longer Tweets and YouTube descriptions. Thus in general, simply increasing the volume of activity appears to correlate highly with follower growth. These results appear similar for the cumulative views model as well. Although a small number of features, such as Twitter advertisements, the number of games played, and YouTube posts become negatively correlated, the coefficients are not statistically significant. We highlight the features whose coefficients flipped from positive in the followers model to negative, or vice versa. The model for concurrent viewers is far more difficult to predict in terms of AUC than the preceding two measures, and this is also reflected in the discrepancy between its coefficients and the coefficients for the followers model. For instance, very few features have coefficients that are statistically significant—broadcasting more and longer continue to be the primary features. Other features, such as advertising on third-party platforms, switch to having no or slightly negative coefficients. In fact, most third-party features have negligible coefficients. In line with the norms and recommendations of the community, our features tend to predict high growth rather than the opposite. Our results reveal that not all behaviors are as predictive as the community would believe them to be. For all of the behaviors, Table \[t:coefs\] indicates that the community was either right or overconfident about the predictiveness of a particular feature, but never wrong in the sense that a feature believed to predict high growth actually predicts low growth.
Rules such as Activity, Twitter Promotion, and Regularity seem to hold their weight in terms of importance, but other community-defined rules like Social Media usage or Avoid Playing Popular Games do not seem to matter as much for the growth task. Streamer-Centric Analysis {#s:investigation} ========================= The previous section studies the relationship between behavior and popularity as compared to the entire sample population. However, an individual streamer may simply want to understand how behavior is related to individual popularity irrespective of other streamers. This section performs streamer-centric analyses in terms of growing fast enough to reach a fixed level of success, the amount of effort streamers put in, and the effects of creating third-party accounts. Self-Growth Towards Partner Status ---------------------------------- The previous section studied models that predict whether a streamer would grow faster relative to the population. While this was useful to identify and distinguish high-growth streamers, an individual streamer may simply want to improve at a steady rate in order to achieve a fixed goal. In this case, the streamer is more interested in growing faster than a base rate. To this end, we extended our previous temporal analysis to an outcome variable that measures “self-growth”. We define this based on qualifying for the Twitch Partnership Program after two years, which requires around 100 average concurrent viewers per broadcast. Thus the base rate of growth is to gain 4 concurrent viewers per month, and the binary outcome variable measures whether this rate of growth has been achieved over a given time interval. ![Predicting whether streamer grows at a rate of $\ge4$ concurrent viewers per month.[]{data-label="f:self-growth"}](figures/window_experiments/self_growth.pdf){width=".7\columnwidth"} Figure \[f:self-growth\] highlights the difficulty of sustained self-growth.
Even with knowledge about a streamer’s past success and actions, as well as her future behavior, both the $F_{cur}$ and $F_{cur+b}$ models perform near-randomly in the short and long term. Streamer Effort --------------- Recent media coverage [@twitchnoone] suggests that many Twitch streamers spend considerable time broadcasting to no one, and that the amount of effort put in is not worth it. Further, the preceding study suggests that behaviors, including effort, are almost uncorrelated with growing concurrent viewership at the rate needed to reach Partner status. Yet, Table \[t:coefs\] showed that many of the highest feature coefficients were related to sheer broadcasting volume. To better understand these dynamics, we now study the amount of effort that streamers put into growing their popularity. Twitch requires members of their affiliates program [@twitchaffiliate] to broadcast at least 500 minutes ($8.3$hrs) per month. Thus, we use the total hours broadcasted per month as a crude measure of streamer effort. Figure \[f:effort\_box\] shows that 92% of streamers broadcast more than the affiliates minimum. In fact, the median streamer broadcasts for more than 24 hours per month. We studied streamers that treat broadcasting as a full-time job, as defined by broadcasting more than 40 hours per week (160hrs/month). 6% of streamers treat Twitch as a job. We then compared these streamers with the rest of the population by running 3 Welch Two Sample t-tests under the null hypothesis that their popularity measures are not different. We found statistical significance for followers (effect: 5642, p-value: 1.9E-10), concurrent viewers (effect: 94.3, p-value: 8.0E-4), and cheers (effect: 171.62, p-value: 3.7E-5). Further studies are needed to establish a causal relationship between full-time effort and success.
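A Welch test of the kind used above can be reproduced with `scipy` (a sketch on synthetic, hypothetical samples; `equal_var=False` selects Welch's unequal-variance variant of the two-sample t-test):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical follower counts: "full-time" streamers (>160 hrs/month)
# versus the rest of the population, drawn as heavy-tailed samples.
full_time = rng.lognormal(mean=6.0, sigma=1.0, size=300)
others = rng.lognormal(mean=4.5, sigma=1.0, size=4700)

# equal_var=False gives Welch's two-sample t-test, which does not
# assume the two groups share the same variance.
t_stat, p_value = stats.ttest_ind(full_time, others, equal_var=False)
effect = full_time.mean() - others.mean()
print(f"effect: {effect:.1f}, p-value: {p_value:.2e}")
```

Welch's variant is the appropriate choice here because the full-time group is both much smaller and far more dispersed than the rest of the population.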
![Box Plot of the Amount of Time Spent Streaming in a Given Month[]{data-label="f:effort_box"}](figures/effort_boxplots.pdf){width="\columnwidth"} We then studied “failed” streamers that broadcast to an empty audience, and found encouraging results. As shown in Figure \[f:failure\_density\], only 1.3% of streamers spend more than 25% of their broadcasts without an audience. In fact, the majority of streamers (80%) have less than 5% empty broadcasts. This result suggests that it is natural to spend some amount of time broadcasting to an empty room [@redditmotivated], and most streamers that start off broadcasting to no one tend to grow out of this phase. ![CDF: % of Broadcasts with 0 Viewers. The median streamer (y-axis) has $\le1\%$ empty broadcasts (x-axis).[]{data-label="f:failure_density"}](figures/effort_dist.pdf){width=".8\columnwidth"} When to Create Social Media Accounts? ------------------------------------- We found that social media presence is correlated with follower and cumulative viewership growth. However, a streamer that is starting out without a social media presence may wonder whether creating a social media account is still worth it. Does the timing of when an account is created have a relationship with eventual popularity? Or is a late creator already at a disadvantage? Figure \[f:soc\_start\] groups streamers based on when they started their YouTube, Twitter, or Instagram accounts relative to their Twitch account (x-axis). For example, 5 in Figure \[f:youtube\_follows\] means that the YouTube account was created 5 months after the Twitch account. For each group, we compute the mean and standard error of the [*peak follower count*]{} (top row) and [*peak monthly concurrent viewership*]{} (bottom row) across the first year of the streamer’s lifespan. We run Welch Two Sample t-tests to compare the populations of streamers who had an active social media presence before their streaming with those who created their social media accounts after.
We find no statistical significance on peak follower count for each of YouTube (p-value: 0.058), Twitter (p-value: 0.128), or Instagram (p-value: 0.855). Testing again on peak concurrent views, we still find no statistical significance for YouTube (p-value: 0.09), Twitter (p-value: 0.935), and Instagram (p-value: 0.587). Across the different platforms and the different popularity measures, when the social media presence begins appears to have no effect on the peak popularity a user can achieve. Streamers who begin broadcasting with a pre-existing social media presence do not appear to have an advantage over those who start without one, or even those who develop their social media presence months later. Even though the timing does not directly affect popularity, having a social media account in itself is correlated with popularity growth. [.32]{} ![image](figures/social_third_party/youtube.pdf){width="\columnwidth"} [.32]{} ![image](figures/social_third_party/twitter.pdf){width="\columnwidth"} [.32]{} ![image](figures/social_third_party/insta.pdf){width="\columnwidth"} [.32]{} ![image](figures/social_third_party/youtube-ccu.pdf){width="\columnwidth"} [.32]{} ![image](figures/social_third_party/twitter-ccu.pdf){width="\columnwidth"} [.32]{} ![image](figures/social_third_party/insta-ccu.pdf){width="\columnwidth"} Limitations and Future Work {#s:discussion} =========================== We now describe limitations and future work. [**Beyond Modeling:**]{} There is a broader set of questions regarding the content creator community in general. How does the Twitch community identify and misidentify behavioral suggestions? Is it based on intuition? Or survivor bias based on recommendations from successful streamers? Furthermore, it is equally important to understand how streamers themselves choose which rules to follow—it is likely that some rules are simply easier to understand, less resource-consuming, or less risky.
Models that rely on behavioral data to make predictions (e.g., Google Flu Trends [@googleflu]) can directly alter and thus diverge from user behavior. Similarly, as streamers learn about in/effective behaviors from modeling research, does their shift in behavior invalidate or alter our findings? Revisiting these results at regular intervals may lead to interesting patterns. [**Beyond Twitch:**]{} In this paper we have focused on Twitch; however, it is unclear how our specific findings generalize to other live-streaming platforms such as YouTube Live, or more broadly, other social-media platforms. For instance, how can a new artist, musician, or writer distinguish herself? Closer to home, do academics, who produce research and publications as their primary form of content, exhibit similar characteristics? Should academics self-promote on Twitter as well (but not too much)? It is tempting to believe that simply producing content at a high volume and high frequency may correlate with success; however, this remains to be studied. Despite this, we believe that our analysis methodology—to compare $F_{cur+b}$ and $F_{cur}$ on a temporal prediction task—is applicable to other studies of behavior. Further, we believe our focus on studying the relationship between content creator behavior and long term success is both timely and important beyond Twitch. Conclusion {#s:conclusion} ========== In this paper, we study the relationship between streamer behavior and popularity growth on the Twitch live-video streaming service. We surveyed community-recommended behaviors, and grouped them into 6 overarching rules. Through careful experimental design, we seek to isolate the degree to which future behavior, which streamers can control, increases our ability to predict future popularity growth.
At the population level, we find that although behavior does not better predict how one rises through the ranks in absolute terms, it is highly informative for identifying streamers whose relative growth is faster than the median. From this study, we find that not all community recommendations are predictive of rapid growth; however, they do not appear to harm growth either. At the individual level, we find that it is extremely difficult to predict whether a streamer will grow at a rate to reach Twitch Partner after 2 years. More positively, few streamers broadcast to an empty audience, creating and advertising on social media accounts is effective irrespective of when the accounts are created, and streamers that treat broadcasting as a job are more popular than the rest of the streamer population. Ultimately, we find that the effect of streamer behavior on the popularity growth of content creators is a rich and deep research area, and we point towards promising directions for future work in this area. Detailed List of Features {#a:features} ========================= The following is a list of features and their descriptions for each of the behavior rules used in this paper. The features are computed with respect to a given time interval $[t, t+\delta]$. Intervals are at least one month. [**User features:**]{} These features were computed from attributes in the Twitch-provided dataset. They encapsulate rules 1, 2, 3, and 6. - Broadcast Gap: The average amount of time between consecutive broadcasts. - \# Broadcast: The number of broadcasts. - \# Games: The average number of games per broadcast. - Broadcast Len: The average length of a streamer’s broadcast. - \# Popular Game: The average number of popular games played per broadcast. A game is popular if it is a top-10% most-viewed game on Twitch. - \# Days: The average number of days per week a streamer broadcasts. - Sched Regularity: A measure of how consistently a streamer broadcasts on specific weekdays.
For each day of week $d$, we count the number of weeks the streamer broadcasts on that weekday, $N_d$. We then compute $\sum_{d\in [0,7)} \max(N_d-1, 0)$. - Unique Games: The total number of unique games played. [**Twitter features:**]{} These features describe Rule 5, and are computed using data collected from Twitter. If a streamer doesn’t have a Twitter account, the feature is set to $0$. - \# Tweet: The total number of tweets. - Twitter Live: Number of Twitter posts containing the word “live”. Streamers often advertise that they are “going live” before a broadcast. - Tweet Before Gap: The average amount of time between a broadcast and its immediately preceding Twitter post. - Tweet After Gap: The average amount of time between the end of a broadcast and its immediately succeeding Twitter post. - Twitter Adv: The number of Twitter posts containing a Twitch URL. - Tweet Len: The average character length of Twitter posts. - \# Twitter Replies: The number of Twitter posts that are replies to another post. [**Third-Party Features:**]{} These features are computed using the YouTube and Instagram data. If the streamer does not have an account, the feature is set to $0$. These features describe Rule 4. - \# YouTube Posts: The number of YouTube videos. - YouTube Desc Len: The average length of a YouTube video’s description text. - YouTube Title Len: The average length of a YouTube video’s title. - YouTube Video Length: The average length of a YouTube video. - YouTube Adv: The number of YouTube video descriptions containing a URL to Twitch. - \# Instagram: The number of Instagram posts. - \# Tags/Instag. Post: The average number of tags used in an Instagram post. - Instagram Adv: The number of Instagram posts containing a URL to Twitch. - Instagram Len: The average length of an Instagram post. [**Acknowledgements:**]{} We thank Twitch for contributing the datasets, and the Twitch data science team for helpful discussions.
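The Sched Regularity feature defined above can be sketched directly from broadcast dates; a minimal illustration with hypothetical dates (the paper's actual feature pipeline is not shown, and the function name is ours):

```python
from collections import Counter
from datetime import date, timedelta

def sched_regularity(broadcast_dates):
    """Sum over weekdays d of max(N_d - 1, 0), where N_d is the number of
    distinct weeks in which the streamer broadcast on weekday d."""
    weeks_per_weekday = Counter()
    seen = set()
    for day in broadcast_dates:
        key = (day.weekday(), day.isocalendar()[:2])  # (weekday, (year, week))
        if key not in seen:
            seen.add(key)
            weeks_per_weekday[day.weekday()] += 1
    return sum(max(n - 1, 0) for n in weeks_per_weekday.values())

# A streamer who broadcasts every Tuesday and Friday for four weeks:
start = date(2014, 1, 7)  # a Tuesday
regular = [start + timedelta(weeks=w) for w in range(4)]
regular += [start + timedelta(weeks=w, days=3) for w in range(4)]  # Fridays
print(sched_regularity(regular))  # two weekdays, four weeks each -> 2*(4-1) = 6
```

A streamer with the same number of broadcasts scattered over eight different weekdays would score 0, so the feature rewards consistency rather than volume.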
[^1]: <http://www.twitch.com> [^2]: The Twitch-specific data was provided by the Twitch data science team. [^3]: Note that we do not imply causal relationships between observed streamer behaviors and eventual success, due to possible confounders not present in the dataset. This is why we report and emphasize the difference in predictive accuracy. [^4]: <https://www.reddit.com/r/Twitch/> [^5]: Even when running the analysis using the median, our results regarding the rules remain the same, although the feature Twitter Live does become significant for the follower task. [^6]: Area Under the Curve (AUC) of the ROC Curve describes a model’s predictive power and is indifferent to class imbalances, a common problem when using accuracy. 0.5 AUC signifies random guessing, while 1.0 AUC signifies a perfect classifier. The AUC can be interpreted as the probability that the model ranks a randomly chosen positive example above a randomly chosen negative one.
--- abstract: 'We study the zero temperature phase diagram of a class of two-dimensional SU$(N)$ antiferromagnets. These models are characterized by having the same type of SU$(N)$ spin placed at each site of the lattice, and share the property that, in general, more than two spins must be combined to form a singlet. An important motivation to study these systems is that they may be realized naturally in Mott insulators of alkaline earth atoms placed on optical lattices; indeed, such Mott insulators have already been obtained experimentally, although the temperatures are still high compared to the magnetic exchange energy. We study these antiferromagnets in a large-$N$ limit, finding a variety of ground states. Some of the models studied here have a valence bond solid ground state, as was found in prior studies, yet we find that many others have a richer variety of ground states. Focusing on the two-dimensional square lattice, in addition to valence cluster states (which are analogous to valence bond states), we find both Abelian and non-Abelian chiral spin liquid ground states, which are magnetic counterparts of the fractional quantum Hall effect. We also find a “doubled” chiral spin liquid ground state that preserves time reversal symmetry. These results are based on a combination of rigorous lower bounds on the large-$N$ ground state energy, and a systematic numerical ground state search. We conclude by discussing whether experimentally relevant SU$(N)$ antiferromagnets – away from the large-$N$ limit – may be chiral spin liquids.' 
author: - Michael Hermele - Victor Gurarie bibliography: - 'sun-long.bib' title: 'Topological liquids and valence cluster states in two-dimensional ${\rm SU}(N)$ magnets' --- Introduction {#sec:intro} ============ Since their discovery nearly thirty years ago, fractional quantum Hall (FQH) liquids continue to be a rich source of novel and exciting physics.[@tsui82; @laughlin83] FQH liquids belong to an intriguing class of quantum states of matter: they do not fall into the conventional classification in terms of broken symmetry, electron band structure, and Fermi liquid theory, but instead are characterized by the notion of topological order. [@wen90] $^{\!\!\! \text{,} }$ [^1] In FQH liquids, topological order is directly responsible for the celebrated properties of fractional charge, fractional and non-Abelian statistics, and gapless chiral edge states. While the phenomenon of topological order is not limited to FQH systems in principle, they remain its only known experimental realization, recent progress with rotating cold atomic condensates notwithstanding.[@Gemelke10] It is thus important to ask in which other systems we might find topologically ordered states of matter. Over the past several years there has been considerable progress identifying model quantum spin systems exhibiting topological order.[@moessner01; @balents02; @senthil02; @motrunich02; @kitaev02; @freedman04; @levin05; @kitaev06] The ground states of these models typically have no spontaneously broken symmetries, and are thus referred to as quantum spin liquids; these are concrete realizations of Anderson’s idea of resonating valence bonds.[@anderson73] The models that can be shown to exhibit topological order are generally not very realistic, but many are built from realistic degrees of freedom without any special symmetries, making it clear that there is no in-principle obstacle for topological order to exist in real quantum spin systems. 
Despite this progress, we know of no candidate materials for a topologically ordered spin liquid. While a few solid state quantum magnets are quantum spin liquid candidates,[@shimizu03; @helton07; @ofer06; @mendels07; @okamoto07; @itou08] all these systems seem to have gapless excitations and are thus not natural candidates for topological order. Here, we discuss a class of spin systems with topologically ordered spin liquid ground states. While the systems we study are not realistic for solid state materials, they can be realized naturally – without complicated engineering of a special Hamiltonian – using fermionic ultracold alkaline earth atoms (AEA) in optical lattices.[@gorshkov10] While most ultracold atom experiments to date involve alkali atoms, AEA are promising systems to study many-body physics, and experiments in this direction are progressing rapidly.[@fukuhara07a; @fukuhara07b; @fukuhara09; @escobar09; @stellmer09; @desalvo10; @taie10; @tey10; @stellmer10; @sugawa10] An important feature of these systems is the presence – without fine-tuning – of a large ${\rm SU}(N)$ spin-rotation symmetry, where $N = 2I + 1$, and $I$ is the nuclear spin.[@gorshkov10; @cazalilla09] The nuclear spin can be as large as $I = 9/2$ for $^{87}$Sr, so $N$ can be as large as 10. The focus of this paper is primarily on the models themselves, and not their cold atom realizations; nonetheless, for completeness we review in Appendix \[sec:atomic\] the realization of the spin systems of interest using AEA. Further information along these lines can be found in Ref. . The models we study are two-dimensional ${\rm SU}(N)$ antiferromagnets where the ${\rm SU}(N)$ representation (*i.e.* type of spin) is the same on every lattice site – the simplest case is the $N$-dimensional ${\rm SU}(N)$ fundamental representation. More generally, we consider spins in the ${\rm SU}(N)$ irreducible representation labeled by a $m \times n_c$ Young tableau with $m < N$ rows and $n_c$ columns (Fig. 
\[fig:rect\_youngtab\]). We will refer to such a representation as the $m \times n_c$ representation. These models differ crucially from solid state ${\rm SU}(2)$ magnetism, in that, in general, more than two spins are required to form a ${\rm SU}(N)$ singlet. This means that singlets are not two-site valence bonds, but rather are multi-site “valence clusters.” The cases $n_c = 1$ and $n_c = 2$ can both be realized as AEA Mott insulators (Appendix \[sec:atomic\]). While a variety of values of $m$ can be realized, $m=1$ is of greatest interest because it best avoids issues of three-body and other losses. For ${\rm SU}(2)$ spins, $m = 1$ and $n_c = 2S$, so that $n_c = 1,2$ correspond to $S = 1/2, 1$, respectively. It should be noted that these models are distinct from a much-studied class of ${\rm SU}(N)$ spin models, where (inequivalent) conjugate representations occupy the two sublattices of a bipartite lattice.[@read89a; @read89b] ![The Young tableau corresponding to the $m \times n_c$ irreducible representation of ${\rm SU}(N)$. We consider spin models where the spin at each lattice site transforms in this representation.[]{data-label="fig:rect_youngtab"}](rect_youngtab.eps){width="1.5in"} In Ref. , together with A. M. Rey, we considered the semiclassical limit $m=1$ and $n_c \to \infty$. This is analogous to the large-$S$ limit for ${\rm SU}(2)$ spins, and, indeed, reduces to it when $N = 2$. The limit $n_c \to \infty$ is biased toward magnetically ordered ground states, because the spins become *classical* $N$-component complex vectors. However, it turns out that the ground state manifold is in general extensively degenerate (precisely, its dimension is proportional to the number of sites in the system) – on the square lattice with nearest-neighbor exchange this occurs for $N \geq 3$, with the degree of extensive degeneracy growing with $N$. 
This situation sometimes occurs in geometrically frustrated magnets, where a common consequence is that magnetic order is strongly suppressed, and sometimes even destroyed, by large low-energy fluctuations.[@moessner98] Given that this occurs even in a limit which is deliberately biased in favor of magnetic order, non-magnetic ground states are likely for the $n_c = 1,2$ cases of greatest interest. (We note that recent work has given strong evidence that a magnetically ordered ground state does occur for $m = n_c = 1$ and $N = 3$ on the square lattice.[@toth10] For $N=4$, while prior exact diagonalization[@bossche00] and variational wavefunction[@wangf09] studies favored a non-magnetic ground state, a very recent study employing both projected entangled pair states and exact diagonalization found evidence in favor of magnetic order.[@corboz11] These results are consistent with the expectation that non-magnetic ground states are more likely for larger values of $N$, where the extensive degeneracy in the semiclassical limit is larger.) This paper is concerned with the ground states of these ${\rm SU}(N)$ antiferromagnets, in a solvable large-$N$ limit suitable for addressing the competition among non-magnetic states.[@affleck88; @marston89] With Rey in Ref. , we studied the case $n_c = 1$ on the square lattice in the large-$N$ limit, and announced a number of results. Here, we study the case of arbitrary $n_c$ on general lattices, with a focus on the square lattice for $n_c = 1,2$. We also provide more detail on the results already reported in Ref. . In the large-$N$ limit, $N$ and $m$ are taken to infinity, while $N/m = k$ and $n_c$ are held fixed. The parameter $k$, which we choose to be an integer greater than unity, plays a very important role in our analysis: $k$ *is the minimum number of spins needed to form a* ${\rm SU}(N)$ *singlet*.
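These representation-theoretic statements can be checked numerically with the hook content formula for the dimensions of ${\rm SU}(N)$ irreducible representations (a standard result, used here purely as an illustration; the function name is ours):

```python
from fractions import Fraction
from math import prod

def dim_rect(N, m, nc):
    """Dimension of the SU(N) irrep whose Young tableau is an m x nc
    rectangle, via the hook content formula:
    dim = prod over boxes (i,j) of (N + j - i) / hook(i,j)."""
    return int(prod(Fraction(N + j - i, (nc - j) + (m - i) + 1)
                    for i in range(1, m + 1)
                    for j in range(1, nc + 1)))

assert dim_rect(10, 1, 1) == 10  # fundamental of SU(10)
assert dim_rect(2, 1, 2) == 3    # SU(2) with n_c = 2S: the S = 1 triplet
assert dim_rect(6, 6, 1) == 1    # a full column of N boxes is the singlet
```

The last assertion illustrates the role of $k = N/m$: antisymmetrizing $k$ spins, each carrying an $m$-row single-column tableau, can fill out the $N$-box column, which is the unique one-dimensional (singlet) representation.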
Given this physical interpretation of $k$, the large-$N$ limit can be thought of as a solvable generalization of the model with ${\rm SU}(k)$ symmetry, $m=1$, and the same fixed $n_c$. Readers primarily interested in the implications of our results for real AEA Mott insulators can interpret our large-$N$ results as a prediction for the ground state of these physically realizable $m = 1$ models. This bold prediction will need to be tested further in future work; see Sec. \[sec:discussion\] for further discussion along these lines. In general, the ${\rm SU}(N)$ singlets are $k$-site valence clusters. Based on the observation that, when $k = 2$, singlets are 2-site valence bonds, the case $k = 2$ has been studied as a solvable large-$N$ generalization of ${\rm SU}(2)$ antiferromagnetism.[@affleck88; @marston89; @read89b; @rokhsar90] For the same reason, the $k > 2$ case does not provide a good generalization of ${\rm SU}(2)$ antiferromagnetism. Under very general conditions in the $k=2$ large-$N$ limit, the ground state is a valence-bond solid (VBS) that spontaneously breaks lattice symmetries.[@rokhsar90] One of the striking results of this paper (and Ref. ) is that the large-$N$ ground states are much richer in the less-studied case $k > 2$. While ${\rm SU}(N)$ spin models with the same representation on every lattice site have not received extensive attention (except in the case of self-conjugate representations, *i.e.* $k = 2$), there have been several earlier studies. While our focus is primarily on two dimensions, we note that the one-dimensional chain with $m = n_c = 1$ was solved exactly for all $N$,[@sutherland75] and the effective field theories of it and other chains were also studied.[@affleck88a] In two dimensions, most work focused on the $m = n_c = 1$ model with either $N = 3$ or $N = 4$.
The former case arises as a special point of a $S = 1$ spin model with bilinear and biquadratic exchange terms,[@lauchli06; @toth10] while the latter is a highly symmetric point of a $S = 1/2$ Mott insulator with an additional two-fold orbital degeneracy,[@pokrovskii72; @li98; @bossche00; @wangf09] or a special point of a model with ${\rm Sp}(4)$ symmetry.[@wucj03] We also note a further very recent study of the $N=4$, $m = n_c = 1$ model on the square lattice.[@corboz11] Models on the cubic lattice have been studied in high-temperature series expansion,[@fukushima05] and a class of exactly solvable models with $n_c > 1$ was studied in Ref. . Finally, effective models of valence cluster degrees of freedom – analogous to more familiar quantum dimer models – have been studied.[@pankov07; @xuc08] Returning for a moment to the ultracold atom realization of our models, it should be mentioned that high-spin quantum magnets can also be realized using alkali atoms, and in that context also have spin symmetry enhanced above ${\rm SU}(2)$.[@wucj03] However for an $N$-component system, the symmetry is generically less than ${\rm SU}(N)$. For example, in a spin-$3/2$ alkali system, the spin symmetry is expected generically to be ${\rm Sp}(4)$ and not ${\rm SU}(4)$.[@wucj03] While these systems share with ${\rm SU}(2)$ magnets the property that two spins can be combined to form a singlet, they are also likely to be fertile ground for the realization of a variety of interesting ground states.[@wucj03; @chen05; @lecheminant05; @wucj05; @wucj06; @szirmai10; @wucj10] In close relation to quantum magnetism, half-filled repulsive ${\rm SU}(N)$ Hubbard models have also been studied in the context of ultracold atoms.[@honerkamp03; @xuc10] Very recently, the repulsive ${\rm SU}(3)$ Hubbard model was studied for arbitrary filling.[@rapp11] We now summarize our results for the square lattice with $n_c = 1,2$. (A graphical summary for $n_c = 1$ can be found in the phase diagram of Fig. 
\[fig:phasediagram\], discussed in Sec. \[sec:discussion\].) Depending on $k$, we find valence cluster states (VCS) that break lattice symmetries and are formed by tiling the lattice with multi-site singlet clusters, and three distinct types of topologically ordered spin liquids. Two of the spin liquids are chiral spin liquid (CSL) states.[@kalmeyer87; @kalmeyer89; @wen89] The CSL is a spin system analog of an FQH state; it spontaneously breaks parity and time-reversal symmetry (symmetries that are broken explicitly by the magnetic field in FQH systems),[^2] supports excitations with fractional quantum numbers and statistics, and has gapless chiral edge states that carry spin. We find both an Abelian chiral spin liquid (ACSL) (for $n_c = 1$ and $k \geq 5$), and a non-Abelian chiral spin liquid (nACSL) (for $n_c = 2$ and $k \geq 6$). The ACSL is described at low energies by a ${\rm U}(1)_N$ Chern-Simons theory and has fractional statistics. The nACSL, on the other hand, is described by a ${\rm U}(1)_{2 N} \times {\rm SU}(2)_N$ Chern-Simons theory, and supports non-Abelian statistics. The third state we find is distinct from these CSL states in that it preserves time-reversal symmetry. At the mean-field level it appears as two copies of the ACSL, with opposite chiralities, and we thus dub it a doubled chiral spin liquid (dCSL); its low-energy theory is a ${\rm U}(1)$ mutual Chern-Simons theory. A more concrete way of describing all these states is in terms of Gutzwiller-projected trial wavefunctions, as described in Sec. \[sec:top\]. The dCSL is found for $n_c = 2$ and has the same energy as the nACSL in the $N \to \infty$ limit.
Presumably $1/N$ corrections select one of these states as the ground state; we have not computed these, since in our view the large-$N$ limit is primarily useful as a tool to determine *likely* ground states of physically realizable models, and ultimately the issue of whether the dCSL or nACSL (or some other state) is lower in energy will need to be determined by directly studying those models (occurring at finite $N$). We find VCS states for $n_c = 1,2$ and $2 \leq k \leq 4$, as well as a more complicated inhomogeneous ground state when $n_c = 2$ and $k = 5$. These results are obtained via a combination of exact lower bounds on the large-$N$ ground state energy (generalizing the results of Ref. ), and a systematic numerical search. Current interest in states that support excitations with non-Abelian statistics is fueled by the expectation that they could be used to build a topologically protected quantum computer. [@kitaev02] The simplest non-Abelian statistics is described by an ${\rm SU}(2)_2$ Chern-Simons theory. It is believed to be realized in the quantum Hall effect at filling fraction $5/2$, [@moore91; @willett10] as well as in a variety of setups involving Majorana fermions. [@read00; @FuKane3d; @Lutchyn10; @Oreg10; @Sau2DEG; @Alicea2DEG; @Alicea10] However, it is not rich enough to support universal quantum computation. [@nayak08] Non-Abelian statistics described by ${\rm SU}(2)_N$ Chern-Simons theory for $N>2$ is significantly richer, and in fact becomes richer as $N$ increases. In particular, $N=3$ or $N \ge 5$ is known to be sufficient for universal quantum computation. [@freedman02] Some fractional quantum Hall states in the first excited Landau level are believed to realize these types of non-Abelian statistics for moderate $N$, at least for $N=3$. [@read99; @pan08] We observe that the non-Abelian statistics proposed here can be as high as ${\rm SU}(2)_{10}$ (in the case of $^{87}$Sr), and is thus inherently very rich.
We note that solvable spin models with CSL ground states have been found previously.[@khveshchenko89; @khveshchenko90; @yao07; @schroeter07; @greiter09; @thomale09; @greiter11] One of these[@yao07] is a generalization of the Kitaev model on a decorated honeycomb lattice. The models of Refs.  involve long-range 6-spin interactions. Refs.  found that a CSL was the large-$N$ ground state of a ${\rm SU}(N)$ spin model of a different type from those considered here, where in addition a four-spin ring exchange term which explicitly broke time-reversal invariance was added to the Hamiltonian. Very recently, also motivated by ultracold alkaline earth atoms, Szirmai *et al.* studied the same type of spin model discussed in this paper, for $n_c = 1$ and $k = 6$ on the honeycomb lattice, and found a CSL ground state in the large-$N$ limit.[@szirmai11] We now give an outline of our paper. In Sec. \[sec:models\], we define a broad class of SU$(N)$ Heisenberg models in terms of slave fermions; this is convenient for understanding the large-$N$ limit, which is also described in this section. In Sec. \[sec:top\] we discuss the properties of the Abelian chiral spin liquid, non-Abelian chiral spin liquid, and doubled chiral spin liquid, including their wavefunctions and edge states. We spend the rest of the paper arguing that these states indeed appear in the large-$N$ limit of the appropriate models. In particular, in Sec. \[sec:genlat\] we discuss the solution to the large-$N$ limit of our models on general lattices. We give examples of lattices where the large-$N$ solution can be proven to be a VCS, by generalizing results of Ref.  to $k > 2$. The principal tool of analysis is a rigorous lower bound on the large-$N$ ground state energy, which is saturated by certain VCS states. We also give general arguments that VCS are not the only states which are possible on generic lattices, and other states, including spin liquid states, should naturally appear in appropriate cases.
In Sec. \[sec:bipartite\] we specialize to bipartite lattices, showing that a stricter lower bound on the energy can be obtained in this case (when $k > 2$). Finally, in Sec. \[sec:square\] we further specialize to the square lattice. Using the rigorous lower bounds, we show that the large-$N$ ground state is a VCS for $k = 2,3,4$, for both $n_c = 1$ and $n_c = 2$. Next, employing a numerical analysis we show that the large-$N$ ground state at $n_c=1$ on the square lattice is the ACSL for $5 \le k \le 8$. Moreover, at $n_c=2$ we show that the nACSL and dCSL are the degenerate ground states at $k=6, 7$. Closely tied to these results is the discussion of Appendix \[app:largek\], where we discuss the possible ground states in the limit of large $k$. In particular, for $n_c = 1$, we show that the ACSL wins over VCS states as well as a trial uniform gapless state, giving us ammunition to conjecture that the ACSL is the ground state for all $k \ge 5$. The same analysis carries over to $n_c = 2$ and leads us to conjecture that the nACSL and dCSL are degenerate ground states for all $k \ge 6$. We would not have studied these models were it not for the strong potential to realize them in systems of AEA on optical lattices. The paper concludes with a discussion in Sec. \[sec:discussion\], focusing on the prospects to find chiral spin liquids in those spin models that can be realized in cold atom experiments. In particular, we discuss the phase diagram in the $k$-$m$ plane (Fig. \[fig:phasediagram\]). Finally, we mention some directions for future study; one such direction is to understand how fractional or non-Abelian particles may be localized and braided in these systems, with an eye toward detection of fractional or non-Abelian statistics. In Appendix \[app:carriers\] we further discuss some ideas in this direction, describing how fractional *holons* (which carry conserved atom number but not spin) may be localized by applying an external potential.
In Appendix \[sec:atomic\], we review some aspects of the cold atom realizations of these systems. Moreover, starting from the Hubbard model describing AEA on an optical lattice in the large-$U$ limit, we derive the appropriate Heisenberg models using degenerate perturbation theory. Some technical details are given in Appendices \[app:1site-irrep\], \[app:2site-exact\] and \[app:kcluster\]. Models and large-$N$ limit {#sec:models} ========================== Here we introduce the ${\rm SU}(N)$ spin models and construct the solvable large-$N$ limit, which allows us to address the competition among non-magnetic ground states. We shall define the models in terms of the fermionic spinon operators $f^\dagger_{\br a \alpha}$. Here $\br$ labels lattice sites, $\alpha = 1,\dots,N$ is the ${\rm SU}(N)$ spin index, and $a = 1,\dots,n_c$ will be called a “color” index. The $\alpha$ index transforms in the fundamental representation of ${\rm SU}(N)$ spin rotations; that is, a global ${\rm SU}(N)$ rotation acts by $$f_{\br a \alpha} \to U_{\alpha \beta} f_{\br a \beta} \text{,}$$ where $U$ is an arbitrary ${\rm SU}(N)$ matrix. (Here, and throughout the paper, summation over repeated indices is implied. This does not apply to repeated site labels $\br$.) Similarly, the $a$ index transforms in the fundamental representation of ${\rm SU}(n_c)$ color rotations. It is important to distinguish the spinons from the physical fermions of an underlying Hubbard model (as in Appendix \[sec:atomic\]); we elaborate on this distinction and its importance below. Before defining the Hamiltonian, we must specify the type of spin at each lattice site.
This is accomplished by a pair of local constraints, $$\begin{aligned} f^\dagger_{\br a \alpha} f^{\vphantom\dagger}_{\br a \alpha} &=& n_c m \label{eqn:num-constraint} \\ f^\dagger_{\br a \alpha} T^A_{a b} f^{\vphantom\dagger}_{\br b \alpha} &=& 0 \label{eqn:color-constraint} \text{.}\end{aligned}$$ Here, $A = 1,\dots,n_c^2 - 1$ labels the traceless, Hermitian $n_c \times n_c$ matrices $T^A$ that generate infinitesimal ${\rm SU}(n_c)$ rotations. The proper interpretation of these constraints is that, for each lattice site, we restrict to the subspace of the fermion Hilbert space spanned by eigenstates of the left-hand sides of Eqs. (\[eqn:num-constraint\],\[eqn:color-constraint\]), with eigenvalues given by the right-hand sides. The first constraint specifies a fixed number of fermions on each lattice site, and the second constraint dictates that each site is a color singlet. The second constraint is omitted when $n_c = 1$. These constraints project out the “charge” (conserved number) and color degrees of freedom of the spinons, which are not physical at the microscopic level but are important for understanding the low-energy effective theories obtained in the large-$N$ limit. While the constraint Eq. (\[eqn:color-constraint\]) may appear mysterious, in the case $n_c = 2$ it arises naturally in the large-$U$ limit of the Hubbard model describing one type of AEA Mott insulator, as described in Appendix \[sec:atomic\]. Taken together, the constraints imply that the spin at each site transforms in the ${\rm SU}(N)$ irreducible representation with a $m \times n_c$ rectangular Young tableau – this is shown in Appendix \[app:1site-irrep\]. Since *all* physical operators must commute with these local constraints, which together form a ${\rm U}(n_c)$ algebra, in this choice of variables, there is a local ${\rm U}(n_c)$ redundancy. 
This is intimately related to the fact that, in the large-$N$ limit, the low-energy effective theory is a ${\rm U}(n_c)$ gauge theory; we shall see this below. This type of slave particle representation has been employed before.[@read89b; @freedman04; @xuc10] The ${\rm SU}(N)$ spin operators are defined to be $$S_{\alpha \beta}(\br) = \sum_a f^\dagger_{\br a \alpha} f^{\vphantom\dagger}_{\br a \beta} \text{,}$$ and the Hamiltonian is $$\label{eqn:hspin2} {\cal H} = \sum_{( \br, \br' )} J_{\br \br'} S_{\alpha \beta}(\br) S_{\beta \alpha} (\br') \text{.}$$ Here, the sum is over all pairs of sites $(\br, \br')$. The cases $n_c = 1,2$ are realizable with alkaline earth atoms, as discussed in Appendix \[sec:atomic\], and most of our analysis is focused on these cases. We shall always consider $m = N / k$, where $k \geq 2$ is an integer. The parameter $k$, as introduced in Sec. \[sec:intro\], is the minimum number of spins required to form a ${\rm SU}(N)$ singlet. This model becomes exactly solvable in the limit where $N$ and $m$ are taken to infinity, while $k$ and $n_c$ are held fixed. For technical convenience, we also write $J_{\br \br'} = {\cal J}_{\br \br'} / N$ and hold ${\cal J}_{\br \br'}$ fixed; this corresponds merely to multiplication of the Hamiltonian by a constant. The case of greatest experimental interest is $m=1$, and the large-$N$ limit with fixed $k$ and $n_c$ should be thought of as a solvable limit of the model with $m=1$ and $N = k$; the minimum number of spins required to form a singlet is the same as in this model. We now describe in detail the large-$N$ solution, which follows the work of Affleck and Marston[@affleck88; @marston89], and also Read and Sachdev.[@read89b] Affleck and Marston studied the case $n_c = 1$ and $m = N/2$, while Read and Sachdev generalized their results to arbitrary $n_c$ while still fixing $m = N/2$. 
The formal structure of the large-$N$ solution is the same as in the earlier works, but, as is discussed in the following sections, the nature of the ground states is dramatically different. We first consider separately the case $n_c = 1$ for its greater simplicity. The starting point is the imaginary-time functional integral for the partition function $$\label{eqn:pf} Z = \int {\cal D} f {\cal D} \bar{f} {\cal D} \chi {\cal D} \lambda \exp\big( - S(f, \bar{f}, \chi, \lambda) \big) \text{,}$$ where the action is $$\begin{aligned} \label{eq:hsa} S &=& \int_0^\beta d\tau \sum_{\br} \Big[ \bar{f}_{\br \alpha} \partial_{\tau} f_{\br \alpha} + i \lambda_{\br} \big( \bar{f}_{\br \alpha} f_{\br \alpha} - m \big) \Big]\label{eqn:nc1action} \\ &+& \int_0^\beta d\tau\prsum \frac{N}{{\cal J}_{\br \br'} } | \chi_{\br \br'} |^2 \nonumber \\ &+& \int_0^\beta d\tau \prsum \big( \chi_{\br \br'} \bar{f}_{\br \alpha} f_{\br' \alpha} + \text{H.c.} \big) \nonumber \text{.}\end{aligned}$$ Here, the fermionic variables $f_{\br \alpha}(\tau)$ and $\bar{f}_{\br \alpha}(\tau)$ are the usual Grassmann variables. $\lambda_{\br}(\tau)$ is a real Lagrange multiplier field that implements the constraint $f^\dagger_{\br \alpha} f^{\vphantom\dagger}_{\br \alpha} = m$. The primed sum $\sum'_{(\br, \br')}$ in the last two terms is over only those bonds $(\br, \br')$ where ${\cal J}_{\br \br'} \neq 0$, and $\chi_{\br \br'}(\tau)$ is a complex field defined on the same set of bonds. Upon integrating out $\chi$ one obtains the Hamiltonian Eq. (\[eqn:hspin2\]), which is quartic in fermion operators. We focus on the zero-temperature limit $\beta \to \infty$. 
We can formally integrate out the fermions and obtain an effective action $$\begin{aligned} S_{{\rm eff}}(\chi, \lambda) &=& \int_0^\beta d\tau\prsum \frac{N}{{\cal J}_{\br \br'} } | \chi_{\br \br'} |^2 - i m \int_0^\beta d\tau \sum_{\br} \lambda_{\br} \nonumber \\ &+& N \operatorname{Tr} \operatorname{ln} Q(\chi, \lambda) \text{,}\end{aligned}$$ where $Q$ is the quadratic form characterizing the fermionic part of the action Eq. (\[eqn:nc1action\]). Since $m = N/k$, $S_{{\rm eff}}$ has a prefactor of $N$ and no other $N$-dependence, implying that when $N \to \infty$ the functional integral over $\chi$ and $\lambda$ can be done exactly using the saddle point approximation. We therefore replace $\chi$ and $\lambda$ by non-fluctuating fields $$\begin{aligned} \chi_{\br \br'} &\to& \bar{\chi}_{\br \br'} \\ \lambda_{\br} &\to& i \mu_{\br} \text{,}\end{aligned}$$ which are substituted into Eq. (\[eqn:nc1action\]) to obtain a theory of non-interacting fermions subject to the mean-field Hamiltonian $$\begin{aligned} {\cal H}_{{\rm MFT}} &=& \prsum \frac{N}{{\cal J}_{\br \br'} } | \bar{\chi}_{\br \br'} |^2 + m \sum_{\br} \mu_{\br} \\ &+& \prsum \big( \bar{\chi}_{\br \br'} f^\dagger_{\br \alpha} f^{\vphantom\dagger}_{\br' \alpha} + \text{H.c.} \big) - \sum_{\br} \mu_{\br} f^\dagger_{\br \alpha} f^{\vphantom\dagger}_{\br \alpha} \nonumber \text{.}\end{aligned}$$ The imaginary saddle point $\lambda_{\br} \to i \mu_{\br}$ is needed for the mean-field Hamiltonian to be Hermitian. We emphasize that despite the appearance of the term “mean-field,” and the appearance of mean-field equations and a mean-field Hamiltonian, we are not making any sort of mean-field *approximation*. That is, in the $N \to \infty$ limit, the specific mean-field decoupling we consider – and *only* this decoupling – becomes exact. The results we present are thus exact for the Heisenberg spin model in the $N \to \infty$ limit. 
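At a saddle point, $\bar{\chi}_{\br \br'}$ must reproduce the bond correlator of its own mean-field ground state, with $\mu_{\br}$ enforcing the filling; in practice such solutions are found by fixed-point iteration. A minimal sketch for $n_c = 1$ on a uniform ring (the ring size, coupling, and $k$ are illustrative choices, not from the text; per spin flavor the update is $\bar\chi \to -{\cal J}\,\langle f^\dagger_{\br'} f^{\vphantom\dagger}_{\br}\rangle$, since the $N$ identical flavors cancel the $1/N$):

```python
import numpy as np

def bond_correlation(chi, L, filling):
    """Per-flavor bond correlator <f_{r+1}^dag f_r> in the T = 0 ground
    state of the hopping Hamiltonian chi * sum_r (f_r^dag f_{r+1} + h.c.)
    on an L-site ring."""
    H = np.zeros((L, L))
    for r in range(L):
        H[r, (r + 1) % L] = chi
        H[(r + 1) % L, r] = chi
    _, V = np.linalg.eigh(H)
    n_occ = int(round(filling * L))        # lowest (m/N) L levels per flavor
    rho = V[:, :n_occ] @ V[:, :n_occ].T    # rho[i, j] = <f_j^dag f_i>
    return rho[1, 0]

# Fixed-point iteration of chi = -J <f^dag f>.  L = 20 sites at filling
# 1/k with k = 4 is a closed shell, so the ground state is unique and
# the iteration is well defined.
J, L, k = 1.0, 20, 4
chi = 0.5                                  # arbitrary nonzero initial guess
for _ in range(50):
    chi = -J * bond_correlation(chi, L, 1.0 / k)
```

The converged $\chi$ solves the self-consistency condition to machine precision; on this closed-shell ring it settles after a single update, since the occupied set depends only on the sign of $\chi$.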
In order for this to be a legitimate saddle point, we must satisfy the extremum condition $$\frac{\delta}{\delta \chi_{\br \br'}} S_{{\rm eff}} \Big|_{\chi \to \bar{\chi}, \lambda \to i \mu} = \frac{\delta}{\delta \lambda_{\br}} S_{{\rm eff}} \Big|_{\chi \to \bar{\chi}, \lambda \to i \mu} = 0 \text{.} \label{eqn:excond}$$ In the low-temperature limit, the ground state energy $E_{{\rm MFT}}$ of ${\cal H}_{{\rm MFT}}$ satisfies $S_{{\rm eff}}(\chi \to \bar{\chi}, \lambda \to i \mu) = \beta E_{{\rm MFT}}$, so satisfying Eq. (\[eqn:excond\]) is equivalent to extremizing the ground state energy. The saddle point equations of Eq. (\[eqn:excond\]) are equivalent to the more convenient expressions $$\begin{aligned} \label{eqn:nc1-spe-chi} \bar{\chi}_{\br \br'} &=& - \frac{{\cal J}_{\br \br'}}{N} \langle f^{\dagger}_{\br' \alpha} f^{\vphantom\dagger}_{\br \alpha} \rangle \\ \label{eqn:nc1-spe-constraint} m &=& \langle f^{\dagger}_{\br \alpha} f_{\br \alpha} \rangle \text{.}\end{aligned}$$ Here, the expectation values are calculated using the non-interacting Hamiltonian ${\cal H}_{{\rm MFT}}$. Now that we have discussed the simpler case $n_c = 1$, we discuss the general case $n_c > 1$, describing only those aspects that differ from $n_c = 1$, and making some definitions that will be useful later on. The principal difference is that now the fields $\chi$ and $\lambda$ are $n_c \times n_c$ matrices: $\chi^{a b}_{\br \br'}(\tau)$ is a general complex $n_c \times n_c$ matrix, and $\lambda^{a b}_{\br}(\tau)$ is an $n_c \times n_c$ Hermitian matrix. The partition function is of the same form as Eq. (\[eqn:pf\]), where the integration over $\lambda$ is understood to be over the restricted space of Hermitian matrices.
The action is now $$\begin{aligned} S &=& \int_0^\beta d\tau \sum_{\br} \Big[ \bar{f}_{\br a \alpha} \partial_{\tau} f_{\br a \alpha} + i \big( \lambda^{b a}_{\br} \bar{f}_{\br a \alpha} f_{\br b \alpha} - m \operatorname{tr}(\lambda_{\br}) \big) \Big] \nonumber \\ &+& \int_0^\beta d\tau \prsum \frac{N}{{\cal J}_{\br \br'}} \operatorname{tr} ( \chi^\dagger_{\br \br'} \chi^{\vphantom\dagger}_{\br \br'} ) \label{eqn:nc2action} \\ &+& \int_0^\beta d\tau \prsum \big[ \chi^{a b}_{\br \br'} \bar{f}_{\br a \alpha} f_{\br' b \alpha} + \text{H.c.} \big] \text{.} \nonumber\end{aligned}$$ The traces in this expression are in the color space. The field $\lambda_{\br}$ is again a Lagrange multiplier, now implementing both the constraints of Eq. (\[eqn:num-constraint\]) and (\[eqn:color-constraint\]). Again, integrating out $\chi_{\br \br'}$ we obtain the Hamiltonian Eq. (\[eqn:hspin2\]). The saddle point values of the fields take the form $$\begin{aligned} \chi^{a b}_{\br \br'} &\to& \bar{\chi}^{a b}_{\br \br'} \\ \lambda^{a b}_{\br} &\to& i \mu^{a b}_{\br} \text{,}\end{aligned}$$ where $\mu_{\br}$ is a Hermitian matrix. The mean-field Hamiltonian is $$\begin{aligned} {\cal H}_{{\rm MFT}} &=& \prsum \frac{N}{{\cal J}_{\br \br'} } \operatorname{tr} ( \bar{\chi}^\dagger_{\br \br'} \bar{\chi}^{\vphantom\dagger}_{\br \br'} ) + m \sum_{\br} \operatorname{tr}(\mu_{\br}) \label{eqn:hmft} \\ &+& {\cal H}_K + {\cal H}_V \text{,} \nonumber\end{aligned}$$ where $$\begin{aligned} \label{eqn:hk} {\cal H}_K &=& \prsum \big( \bar{\chi}^{a b}_{\br \br'} f^\dagger_{\br a \alpha} f^{\vphantom\dagger}_{\br' b \alpha} + \text{H.c.} \big) \\ \label{eqn:hv} {\cal H}_V &=& - \sum_{\br} \mu^{b a}_{\br} \hat{n}^{a b}_{\br} \text{.}\end{aligned}$$ Here, we have defined the color density $$\hat{n}^{a b}_{\br} = f^\dagger_{\br a \alpha} f^{\vphantom\dagger}_{\br b \alpha} \text{.}$$ Note that we can also write ${\cal H}_V = - \sum_{\br} \operatorname{tr} ( \mu_{\br} \hat{n}_{\br} )$. 
The saddle point equations are now $$\begin{aligned} \bar{\chi}^{a b}_{\br \br'} &=& - \frac{{\cal J}_{\br \br'}}{N} \langle f^{\dagger}_{\br' b \alpha} f^{\vphantom\dagger}_{\br a \alpha} \rangle \label{eqn:spe-chi} \\ m \, \delta^{a b} &=& \langle \hat{n}^{a b}_{\br} \rangle \label{eqn:spe-constraints} \text{.}\end{aligned}$$ When analyzing the mean-field Hamiltonian we shall always work in the canonical ensemble for the conserved fermion number. This discussion shows that finding the ground state in the large-$N$ limit reduces to finding the saddle point with lowest energy $E_{{\rm MFT}}$. In general this task, while a great deal simpler than finding the ground state of the original quantum problem, is still nontrivial. We shall make progress below using a combination of exact lower bounds on $E_{{\rm MFT}}$, and numerical methods to search for ground states. As mentioned above, it is important to recognize that the $f^\dagger_{\br a \alpha}$ spinon operators are not the same as the physical fermions of the Hubbard model discussed in Appendix \[sec:atomic\]. We are describing a spin model, and there are only ${\rm SU}(N)$ spin degrees of freedom. In addition to spin degrees of freedom the physical alkaline earth atom fermions have degrees of freedom associated with their conserved number, as well as with their $^{1}S_0$ and $^{3}P_0$ electronic states. These degrees of freedom are not present in the model; this is appropriate for a low-energy description of the Mott insulating states we are describing, where excitations associated with these degrees of freedom are gapped. (To describe such gapped excitations, one must return to the original Hubbard model.) The difference between the spinons and the physical fermions is manifest when we consider the fluctuations about a mean-field saddle point. This allows us to construct a low-energy effective theory, which goes beyond mean-field theory for a given saddle point. 
The spinons in this effective theory should not be interpreted as a microscopic representation of the spins, but as low-energy effective degrees of freedom. As we shall see below, the spinons are minimally coupled to a fluctuating ${\rm U}(n_c)$ gauge field. On the other hand, the physical fermions of the underlying Hubbard model are uncharged under this ${\rm U}(n_c)$ gauge field and do not couple to it directly. We consider fluctuations of the form $$\begin{aligned} \lambda^{a b}_{\br}(\tau) &=& i \mu^{a b}_{\br} + a^{a b}_{\br \tau}(\tau) \\ \chi^{a b}_{\br \br'}(\tau) &=& \Big[ \bar{\chi}_{\br \br'} \exp \big( i a_{\br \br'}(\tau) \big) \Big]^{a b} \text{,}\end{aligned}$$ where $a_{\br \br'}(\tau)$ is an $n_c \times n_c$ Hermitian matrix, so that $e^{i a_{\br \br'} }$ is unitary. While other fluctuations are typically trivially massive (*e.g.* amplitude fluctuations in $\chi$), these fluctuations take the form of a ${\rm U}(n_c)$ gauge field minimally coupled to the spinons. Specifically, $a_{\br \tau}$ and $a_{\br \br'}$ form the time and space components, respectively, of the fluctuating ${\rm U}(n_c)$ vector potential. Gauge fluctuations can and do dramatically modify the properties of the mean-field state, and therefore should in general not be neglected. For example, if the gauge field is in a confining phase, then the spinons will not be the good quasiparticle excitations that a naive mean-field analysis would suggest – this indeed occurs in the VCS ground states. On the other hand, in the CSL and dCSL phases, Chern-Simons terms for the gauge field are present; this not only prevents spinon confinement, it converts the spinons from fermions into anyons. Properties of topological liquid ground states {#sec:top} ============================================== Anticipating the results on energetics discussed below, in this section we discuss the properties of the three topological liquid ground states on the square lattice.
Since the main focus of this paper is on energetics, we shall content ourselves primarily with deriving low-energy effective field theories for each state, and shall not discuss the resulting properties in detail. Abelian chiral spin liquid {#sec:acsl} -------------------------- The Abelian chiral spin liquid (ACSL) occurs for $n_c = 1$, and corresponds to a mean-field saddle point $$\begin{aligned} \label{eq:uniformfield} \bar{\chi}_{\br \br'} &=& \chi e^{i a^0_{\br \br'} } \\ \mu_{\br} &=& 0 \text{,} \end{aligned}$$ where $\chi$ is real and positive, and $a^0_{\br \br'}$ is chosen so that $2 \pi / k$ magnetic flux pierces each plaquette of the square lattice. The band structure consists of $k$ bands, of which the lowest is full and the others are empty, resulting in a Hall conductance (for the mean-field fermions) of $\sigma_{x y} = N$. To understand the properties of this state it is necessary to go beyond mean-field level, and couple the fermions to the fluctuating ${\rm U}(1)$ gauge field. However, some properties can already be understood at mean-field level. In particular, we see that parity (*i.e.* reflection) and time reversal symmetries are spontaneously broken, while the other symmetries of the square lattice (as well as ${\rm SU}(N)$ spin rotation) are preserved. To see this, it is important to recall that in a slave-particle gauge theory such as this one, symmetry operations act projectively on the fermions.[@wen02] For example, if $S : \br \to S(\br)$ is a space-group operation, then, acting on a fermion, it may be supplemented by a general space-dependent ${\rm U}(1)$ gauge transformation: $$S : f_{\br \alpha} \to e^{i \lambda^S_{\br} } f_{ S(\br) \alpha } \text{.}$$ An operation $S$ is a symmetry if and only if it is possible to find a gauge transformation $\lambda^S_{\br}$ such that the above transformation leaves the mean-field Hamiltonian invariant.
For the CSL saddle point, this is nothing but the familiar magnetic translation group (expanded to include all symmetries, not only translations). Because reflections and time reversal both change the sign of the gauge-invariant magnetic flux through each plaquette, they are spontaneously broken in the ACSL. Other operations leave the flux invariant and are indeed symmetries of the ACSL. To go beyond mean-field theory, we couple the fermions to the fluctuating ${\rm U}(1)$ gauge field. Since the fermions are gapped we can integrate them out, resulting in the following imaginary-time continuum effective action for the gauge field: $$\label{eq:chernsimons} S = \int d\tau d^2\br \Big[ \frac{i N}{4 \pi} \epsilon_{\mu \nu \lambda} a_{\mu} \partial_{\nu} a_{\lambda} + \frac{1}{2 e^2} ( \sum_{\mu} \epsilon_{\mu \nu \lambda} \partial_{\nu} a_{\lambda} )^2 \Big] \text{.}$$ This is simply Maxwell-Chern-Simons theory. The coefficient of the Chern-Simons term is determined by $\sigma_{x y} = N$, while the coefficient of the Maxwell term is non-universal (in the large-$N$ limit, it is determined by details of the fermion band structure). Various properties of the ACSL can be derived from this effective action – notably, it implies that the fermions are converted via flux attachment into anyons with statistics angle $\pi \pm \pi / N$. A different – and particularly concrete – route beyond mean-field theory is to construct a wavefunction for the ACSL. One starts with the ground state $| \psi_0 \rangle$ of the mean-field Hamiltonian, which for the ACSL is simply an integer quantum Hall state with the lowest (lattice) Landau level filled. One applies the Gutzwiller projection operator ${\cal P}$, which simply projects onto the subspace with exactly $m$ fermions on every lattice site. By construction, $| \psi \rangle = {\cal P} | \psi_0 \rangle$ satisfies the local constraint Eq. (\[eqn:num-constraint\]) and is thus a legitimate wavefunction for the spin model.
Such Gutzwiller projected wavefunctions have been studied and discussed in a variety of contexts (for a few examples, see Refs. ), and properties of such wavefunctions can be computed numerically using a Monte Carlo technique.[@gros89] It is reasonable to expect that $| \psi \rangle$ should correctly capture the properties of the corresponding low-energy effective gauge theory; for example, it has been shown that a class of projected wavefunctions associated with an effective $Z_2$ gauge theory captures the expected $Z_2$ topological order.[@ivanov02; @paramekanti05] However, this expectation will need to be tested by future detailed studies of the wavefunction. In the present case, the projected wavefunction may be a useful tool for future microscopic analysis away from the large-$N$ limit, in particular to help assess the prospects for the ACSL in physically realizable models. Another important property of the ACSL is the presence of gapless chiral edge states, which are described by a chiral ${\rm SU}(N)_1$ Wess-Zumino-Witten (WZW) model. A simple argument for this can be given following Ref. : Rather than consider the low-energy effective field theory of fermions coupled to a gauge field, we consider the projected wavefunction described above. Before projection, the edge mode consists simply of $N$ chiral fermions. Using non-Abelian bosonization, the edge theory can be cast as two decoupled theories: a chiral ${\rm SU}(N)_1$ WZW model, and a chiral ${\rm U}(1)$ Luttinger liquid.[@witten84; @knizhnik84] This is an instance of spin-charge separation, where the ${\rm SU}(N)$ spin degrees of freedom are associated with the ${\rm SU}(N)_1$ WZW model, and the fermion “charge” with the Luttinger liquid. Upon projection, the “charge” degrees of freedom are removed, and hence so is the ${\rm U}(1)$ Luttinger liquid, while the spin degrees of freedom and the ${\rm SU}(N)_1$ WZW model survive.
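The mean-field input to these arguments — that the filled lowest band carries one unit of Hall conductance per spin flavor, giving $\sigma_{xy} = N$ — can be checked directly. A sketch using a Harper (Landau-gauge) realization of flux $2\pi/k$ per plaquette and the lattice field-strength method of Fukui, Hatsugai, and Suzuki for the Chern number (the Hamiltonian conventions here are our own illustrative choices; the resulting Chern number is fixed up to an overall sign convention):

```python
import numpy as np

def harper_bloch(kx, ky, q):
    """q x q Bloch matrix for a square lattice with flux 2*pi/q per
    plaquette (Landau gauge; the corner phase makes H periodic for
    kx in [0, 2*pi/q) and ky in [0, 2*pi))."""
    H = np.zeros((q, q), dtype=complex)
    for m in range(q):
        H[m, m] = 2.0 * np.cos(ky + 2.0 * np.pi * m / q)
    for m in range(q - 1):
        H[m, m + 1] = H[m + 1, m] = 1.0
    H[q - 1, 0] += np.exp(1j * q * kx)
    H[0, q - 1] += np.exp(-1j * q * kx)
    return H

def chern_lowest_band(q, L=24):
    """Chern number of the lowest band via plaquette link variables
    (Fukui-Hatsugai-Suzuki) on an L x L grid of the magnetic BZ."""
    kxs = 2.0 * np.pi / q * np.arange(L) / L
    kys = 2.0 * np.pi * np.arange(L) / L
    u = np.empty((L, L, q), dtype=complex)
    for i, kx in enumerate(kxs):
        for j, ky in enumerate(kys):
            _, V = np.linalg.eigh(harper_bloch(kx, ky, q))
            u[i, j] = V[:, 0]                 # lowest-band eigenvector
    C = 0.0
    for i in range(L):
        for j in range(L):
            ux  = np.vdot(u[i, j], u[(i + 1) % L, j])
            uy  = np.vdot(u[(i + 1) % L, j], u[(i + 1) % L, (j + 1) % L])
            ux2 = np.vdot(u[i, (j + 1) % L], u[(i + 1) % L, (j + 1) % L])
            uy2 = np.vdot(u[i, j], u[i, (j + 1) % L])
            C += np.angle(ux * uy / (ux2 * uy2))  # field strength per plaquette
    return C / (2.0 * np.pi)
```

For $q = k = 3$ the lowest band carries Chern number $\pm 1$ (the sign depends on the orientation of the flux); multiplying by the $N$ spin flavors reproduces $\sigma_{xy} = N$.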
Finally, we mention an alternate route to construct a low-energy effective theory for the ACSL that does not require integrating out the fermions. This approach is based on the Chern-Simons effective theory for Abelian quantum Hall states.[@wen95] In this approach, one pays the price that the ${\rm SU}(N)$ symmetry is broken down to ${\rm U}(1)^{N-1}$, but this is not expected to affect any topological properties of the state. Before coupling to the gauge field, each spin species of fermion is in an integer quantum Hall state, and the current of the fermions of spin $\alpha$ (where $\alpha = 1,\dots,N$) can be represented in terms of a ${\rm U}(1)$ gauge field: $$\label{eqn:current-rep-with-gauge-field} J^{\alpha}_{\mu} = \frac{1}{2\pi} \epsilon_{\mu \nu \lambda} \partial_{\nu} b^\alpha_{\lambda} \text{.}$$ The corresponding integer quantum Hall state is captured by a Chern-Simons term for $b^\alpha$, which gives the following contribution to the real-time Lagrangian: $${\cal L}_{\alpha} = \frac{1}{4\pi} \epsilon_{\mu \nu \lambda} b^{\alpha}_{\mu} \partial_{\nu} b^{\alpha}_{\lambda} \text{.}$$ Moreover, the coupling of the fermions of spin $\alpha$ to the gauge field $a_{\mu}$ is simply given by $$a_{\mu} J^{\alpha}_{\mu} = \frac{1}{2\pi} \epsilon_{\mu \nu \lambda} a_{\mu} \partial_{\nu} b^{\alpha}_{\lambda} \text{.}$$ Finally, the $a_{\mu}$ gauge field has no bare Chern-Simons term – it is a Lagrange-multiplier field whose role is to make the total ${\rm U}(1)$ fermion current vanish. (The Chern-Simons term derived above for $a_{\mu}$ came from integrating out the fermions.) 
Combining the above results, we have the Lagrangian in $K$-matrix form, $${\cal L} = \frac{1}{4\pi} K_{I J} A^I_{\mu} \epsilon_{\mu \nu \lambda} \partial_{\nu} A^J_{\lambda} \text{,}$$ where $I = 1,\dots, N+1$, $A^1_{\mu} = a_{\mu}$, $A^I_{\mu} = b^{I - 1}_{\mu}$ for $I > 1$, and the $(N+1)\times(N+1)$ $K$-matrix is $$K = \left( \begin{array}{cc} 0 & {\cal I}^T \\ {\cal I} & {\bf 1}_{N \times N} \end{array} \right) \text{.}$$ Here, ${\bf 1}_{N \times N}$ is the $N \times N$ identity matrix, and ${\cal I}^T = (1, \dots, 1)$ is an $N$-element vector. Following Ref. , both bulk and edge topological properties can be deduced from this effective theory. We note that the $K$-matrix has $N$ positive eigenvalues and one negative eigenvalue, and thus gives rise to $N$ co-propagating edge modes and one counter-propagating mode. The counter-propagating mode, and one of the co-propagating modes, are singlets under ${\rm U}(1)^{N-1}$ spin rotations, and these singlet modes generically are expected to acquire a gap, leaving $N-1$ gapless co-propagating modes – this is nothing but the free boson description of the ${\rm SU}(N)_1$ chiral WZW model. Non-Abelian chiral spin liquid ------------------------------ The non-Abelian chiral spin liquid (nACSL) occurs for $n_c = 2$, and corresponds to a mean-field saddle point $$\begin{aligned} \bar{\chi}^{a b}_{\br \br'} &=& \chi e^{i a^0_{\br \br'} } \delta^{a b} \\ \mu^{a b}_{\br} &=& 0 \text{,} \end{aligned}$$ where $\chi$ is real and positive, and again $a^0_{\br \br'}$ is chosen so that $2 \pi / k$ magnetic flux pierces each plaquette of the square lattice. This state has a ${\rm U}(2) = {\rm U}(1) \times {\rm SU}(2)$ gauge structure, and upon going beyond mean-field theory the fermions are coupled to a ${\rm U}(2)$ gauge field. The background magnetic flux is a ${\rm U}(1)$ flux – the background ${\rm SU}(2)$ flux is zero.
The band structure can be thought of as $k$ $2N$-fold degenerate bands, where the lowest band is filled and all others are empty. The mean-field fermions have a Hall conductance $\sigma_{x y} = 2 N$. As above, parity and time reversal are spontaneously broken, while other symmetries are preserved. In the large-$N$ limit, the ground state energy of the nACSL is precisely twice that of the ACSL. This occurs because, at the mean-field level, the nACSL is simply two decoupled copies of the $n_c = 1$ ACSL, each with the same magnetic flux. Upon integrating out the fermions, we obtain the following action: $$\begin{aligned} S &=& \frac{2 N i}{4 \pi} \int d\tau d^2 \br \, \epsilon_{\mu \nu \lambda} a_{\mu} \partial_{\nu} a_{\lambda} \nonumber \\ &+& \frac{i N}{4 \pi} \int d\tau d^2 \br \, \epsilon_{\mu \nu \lambda} \operatorname{tr} \Big[ \alpha_{\mu} \partial_{\nu} \alpha_{\lambda} - \frac{ 2 i }{3} \alpha_{\mu} \alpha_{\nu} \alpha_{\lambda} \Big] \text{.} \end{aligned}$$ Here $a_{\mu}$ is the ${\rm U}(1)$ gauge field, $\alpha_{\mu} = \sum_{i = 1}^{3} \alpha^i_{\mu} \sigma^i$ is the ${\rm SU}(2)$ gauge field ($\sigma^i$ are the usual Pauli matrices), and we omitted the Maxwell terms that are also present. The second term is the level-$N$ Chern-Simons term for the ${\rm SU}(2)$ gauge field, which gives rise to the non-Abelian statistics of the nACSL. As above for the ACSL, one can construct a wavefunction for the nACSL. One proceeds as above, but now must apply a projection operator to enforce both the local constraints Eq. (\[eqn:num-constraint\]) and Eq. (\[eqn:color-constraint\]). The chiral edge states of the nACSL can be understood in terms of an argument very similar to that given above for the ACSL. In mean-field theory, there are $2N$ chiral fermions on the edge of the system. 
Following Affleck,[@affleck86] this free fermion theory can be bosonized to a chiral ${\rm SU}(N)_2$ WZW model (carrying spin excitations), a chiral ${\rm SU}(2)_N$ WZW model (carrying color), and a chiral ${\rm U}(1)$ Luttinger liquid. Now the projection removes both the “charge” and color degrees of freedom of the fermions, leaving only the chiral ${\rm SU}(N)_2$ WZW model. Doubled chiral spin liquid {#sec:dcsl} -------------------------- The doubled chiral spin liquid (dCSL) occurs for $n_c = 2$, and corresponds to a mean-field saddle point $$\begin{aligned} \bar{\chi}^{a b}_{\br \br'} &=& \chi \left( \begin{array}{cc} e^{i a^0_{\br \br'}} & 0 \\ 0 & e^{-i a^0_{\br \br'}} \end{array}\right) \label{eqn:dcsl-chi} \\ \mu^{a b}_{\br} &=& 0 \text{,} \end{aligned}$$ where $\chi$ and $a^0_{\br \br'}$ are as above. In contrast to the nACSL, there is now a ${\rm SU}(2)$ background magnetic flux, but no ${\rm U}(1)$ flux. Following the reasoning of Ref. , the presence of the nontrivial ${\rm SU}(2)$ flux breaks the ${\rm SU}(2)$ gauge structure down to ${\rm U}(1)$. More precisely, the $\alpha^1$ and $\alpha^2$ components of the ${\rm SU}(2)$ gauge field acquire a mass due to the presence of the flux, while the $\alpha^3$ component is unaffected. Therefore, for the purposes of understanding the low-energy physics, we can drop the $\alpha^1$ and $\alpha^2$ components of the ${\rm SU}(2)$ gauge field, and consider a theory of fermions coupled to the two ${\rm U}(1)$ gauge fields $a_{\mu}$ and $\alpha^3_{\mu}$. It should be noted that the special role of $\alpha^3_{\mu}$, as compared to $\alpha^1_{\mu}$ and $\alpha^2_{\mu}$, is determined by the choice of gauge made in writing Eq. (\[eqn:dcsl-chi\]) – a global ${\rm SU}(2)$ gauge transformation can be made to select any desired preferred axis.
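Two defining features of this saddle point are easy to verify explicitly: the plaquette holonomy of the link field has unit determinant (no ${\rm U}(1)$ flux) but is a nontrivial ${\rm SU}(2)$ element, and conjugation by $i\sigma^2$ in color space undoes complex conjugation of the link field, which is the mechanism behind the time-reversal invariance discussed below. A small numerical check (the flux value $2\pi/k$ with $k = 4$ and the test link are illustrative choices):

```python
import numpy as np

T = np.array([[0.0, 1.0], [-1.0, 0.0]])      # i * sigma^2 (a real matrix)

def dcsl_link(chi, a):
    """dCSL link matrix: the two colors carry opposite phases."""
    return chi * np.diag([np.exp(1j * a), np.exp(-1j * a)])

k = 4
W = dcsl_link(1.0, 2.0 * np.pi / k)          # holonomy around one plaquette

u1_flux_free  = bool(np.isclose(np.linalg.det(W), 1.0 + 0.0j))  # det W = 1
su2_nontrivial = not np.allclose(W, np.eye(2))                  # W != identity

# Complex conjugation combined with the color rotation i*sigma^2
# leaves a generic dCSL link matrix invariant:
link = dcsl_link(0.7, 1.23)
tr_invariant = bool(np.allclose(T @ link.conj() @ T.T, link))
```

Conjugating a diagonal matrix by $i\sigma^2$ swaps its two entries, which exactly compensates the sign reversal of the fluxes under complex conjugation.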
In the large-$N$ limit, the ground state energy of the dCSL is again precisely twice that of the ACSL, because again, at the mean-field level, the dCSL is two decoupled copies of the $n_c = 1$ ACSL, but now with opposite magnetic fluxes. This means that in the $N \to \infty$ limit the dCSL and nACSL have exactly the same energy. This degeneracy is expected to be lifted by $1/N$ corrections that can in principle be computed; this is left for future work. The dCSL actually respects time reversal symmetry, which is implemented by the operation $${\cal T} : f_{\br a \alpha} \to (i \sigma^2)_{a b} f_{\br b \alpha} \text{.}$$ (This operation can be supplemented as well with a ${\rm SU}(N)$ rotation, but due to the ${\rm SU}(N)$ symmetry this is not essential.) The crucial point is that the gauge-rotation in the color space compensates for the fact that complex conjugation reverses the flux. Reflection symmetry ${\cal R} : \br \to \br'$, where $\br' = (-r_x, r_y)$, is similarly preserved, and $${\cal R} : f_{\br a \alpha} \to (i \sigma^2)_{a b} f_{\br' b \alpha} \text{.}$$ The other symmetries (lattice translations and rotations, and ${\rm SU}(N)$ spin rotations) are preserved in the dCSL as they are in the above two states. The dCSL therefore does not spontaneously break *any* symmetries, in contrast to the ACSL and nACSL. Upon integrating out the fermions, we obtain the following mutual Chern-Simons action: $$S = \frac{i N}{\pi} \int d\tau d^2\br \, \epsilon_{\mu \nu \lambda} \, a_{\mu} \partial_{\nu} \alpha^3_{\lambda} \text{.}$$ Here we have again omitted the Maxwell terms that will also be present; the mutual Chern-Simons term fully gaps out both gauge fields, and the Maxwell terms play only the quantitative role of setting the scale of the gap to gauge field excitations. It should be noted that similar spin liquid states, but with an additional non-Abelian gauge structure, were considered in Ref. . 
(There, however, the analog of the $\alpha^3_{\mu}$ gauge field was incorrectly dropped, and therefore a ${\rm U}(1)$ mutual Chern-Simons term was missed.) It can be seen that this term also converts the mean-field fermionic excitations into anyons with statistics angle $\pi \pm \pi / N$, which can occur in a time-reversal invariant fashion due to the color index. We note that the same procedure described for the nACSL can be applied here to produce a wavefunction for the dCSL. Because the dCSL respects time reversal symmetry, it lacks chiral edge states. However, it is interesting to note that – when $N$ is odd – the edge states are protected at the mean-field level, because the mean-field Hamiltonian has a nontrivial $Z_2$ topological invariant[@kane05b] for odd $N$. It is therefore conceivable that topologically protected edge states could survive coupling of the mean-field fermions to the fluctuating gauge fields, and it would be interesting to study this question. Presumably such protection, if it occurs, would only hold if one assumes that no spontaneous breaking of time-reversal symmetry occurs at the edge. Finally, we can also construct a $K$-matrix Lagrangian for the dCSL as above for the ACSL. We let $A^1_{\mu} = a_{\mu}$ and $A^2_{\mu} = \alpha^3_{\mu}$. Next, for $I = 3, \dots, N + 2$, $A^I_{\mu}$ represents the current of fermions with $a = 1$ and spin $\alpha = I - 2$ (as in Eq. (\[eqn:current-rep-with-gauge-field\])), while for $I = N+3, \dots, 2 N + 2$, $A^I_{\mu}$ represents the current of fermions with $a = 2$ and spin $\alpha = I - (N+2)$. Following essentially the same reasoning as in Sec.
\[sec:acsl\], we have the $(2 N + 2) \times (2 N + 2)$ $K$-matrix $$K = \left( \begin{array}{cccc} 0 & 0 & {\cal I}^T & {\cal I}^T \\ 0 & 0 & {\cal I}^T & - {\cal I}^T \\ {\cal I} & {\cal I} & {\bf 1}_{N \times N} & {\bf 0}_{N \times N} \\ {\cal I} & - {\cal I} & {\bf 0}_{N \times N} & - {\bf 1}_{N \times N} \end{array} \right) \text{,}$$ where ${\bf 0}_{N \times N}$ is the $N \times N$ matrix of zeros. Large-$N$ limit: General lattices {#sec:genlat} ================================= In Ref. , Rokhsar derived an exact lower bound on the large-$N$ ground state energy $E_{{\rm MFT}}$, for the case $n_c = 1$ and $k = 2$. He further showed that this bound is saturated by a VBS state under conditions that are satisfied for the great majority of lattices one encounters. More precisely, let us define ${\cal J}_{{\rm max}}$ to be the largest of the exchange couplings ${\cal J}_{\br \br'}$. (Note that we do *not* restrict to only nearest-neighbor exchange.) Following Rokhsar we say that a lattice is dimerizable with respect to ${\cal J}_{{\rm max}}$ when it is possible to partition the lattice into 2-site dimers, such that the two sites $(\br, \br')$ in each dimer have ${\cal J}_{\br \br'} = {\cal J}_{{\rm max}}$. Each lattice site must belong to precisely one dimer. For a fixed partition into dimers, let the set of bonds $(\br, \br')$ that connect the two sites of a dimer be $B$. Rokhsar considered the VBS saddle point defined by $$\begin{aligned} \begin{array}{ll} \chi_{\br \br'} = \chi \neq 0 & \text{, } (\br, \br') \in B \\ \chi_{\br \br'} = 0 & \text{, } (\br, \br') \notin B \end{array} \text{,}\end{aligned}$$ and showed that it is a ground state (its energy saturates the bound on $E_{{\rm MFT}}$). 
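Whether a given lattice is dimerizable with respect to ${\cal J}_{{\rm max}}$ is a perfect-matching question for the graph whose edges are the maximal-coupling bonds. For small clusters it can be decided by brute-force backtracking; a sketch (the helper and the example graphs are ours, not from the text):

```python
def dimerizable(n_sites, max_bonds):
    """Can sites {0, ..., n_sites-1} be partitioned into 2-site dimers
    using only the bonds (i, j) with J_ij = J_max?  Brute-force
    backtracking search; adequate for small clusters."""
    adj = [set() for _ in range(n_sites)]
    for i, j in max_bonds:
        adj[i].add(j)
        adj[j].add(i)

    def match(free):
        if not free:
            return True
        i = min(free)                 # this site must be paired next
        return any(match(free - {i, j}) for j in adj[i] & free)

    return match(frozenset(range(n_sites)))
```

A 4-site square ring is dimerizable (two opposite bonds form a dimer covering), while a triangle, or any cluster with an isolated site, is not. For large lattices one would replace the backtracking search with a polynomial-time matching algorithm (e.g. blossom), but translation symmetry usually makes dimerizability obvious by inspection.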
Except in the case of disordered systems lacking translation symmetry, most familiar lattices (and associated sets of ${\cal J}_{\br \br'}$) are dimerizable with respect to ${\cal J}_{{\rm max}}$.[@rokhsar90] Therefore, when $k=2$, one has to consider a relatively unusual lattice to find anything other than a VBS ground state in the large-$N$ limit. (See Ref.  for an example of a lattice that is not dimerizable with respect to ${\cal J}_{{\rm max}}$.) Here, we generalize Rokhsar’s bound to the case of arbitrary $k$ and $n_c$ (Sec. \[sec:bound\]). In Sec. \[sec:saturation\] we derive necessary and sufficient conditions to saturate the bound. Next, in Sec. \[sec:simplex\], we show that the analog of Rokhsar’s VBS saddle point is a $k$-simplex VCS state, where the lattice is decomposed into $k$-site simplices ($k$-simplices for short), in which every site is connected to the other $k-1$ sites by an exchange coupling ${\cal J}_{\br \br'} = {\cal J}_{{\rm max}}$. As soon as $k > 2$, *many* lattices cannot be decomposed into $k$-simplices, and for $k \geq 5$ we show that no lattice can be decomposed into $k$-simplices without fine-tuning of the exchange couplings. Therefore it becomes more and more difficult to saturate the bound as $k$ increases. Derivation of the bound {#sec:bound} ----------------------- Our starting point is the mean-field Hamiltonian ${\cal H}_{{\rm MFT}}$ for general $n_c$ \[Eq. (\[eqn:hmft\])\], where $\bar{\chi}^{a b}_{\br \br'}$ and $\mu^{a b}_{\br}$ are chosen to satisfy the saddle-point equations Eqs. (\[eqn:spe-chi\],\[eqn:spe-constraints\]). The bound is derived in two steps: first we will show $E_{{\rm MFT}} \geq E'_{{\rm MFT}}$ (defined below), then we will show $E'_{{\rm MFT}} \geq E_{{\rm bound}}$. The first step was actually omitted in Ref. . While this step should not be omitted even in the special case considered there, none of the results of Ref.  are affected by this omission. 
Recalling the definitions of ${\cal H}_{{\rm MFT}}$ in Eq. (\[eqn:hmft\]) and ${\cal H}_K$ in Eq. (\[eqn:hk\]), we begin by defining $${\cal H}'_{{\rm MFT}} = \prsum \frac{N}{{\cal J}_{\br \br'} } \operatorname{tr} ( \bar{\chi}^\dagger_{\br \br'} \bar{\chi}^{\vphantom\dagger}_{\br \br'} ) + {\cal H}_K \text{,}$$ where $\bar{\chi}^{a b}_{\br \br'}$ is the same as in ${\cal H}_{{\rm MFT}}$. That is, we obtain ${\cal H}'_{{\rm MFT}}$ by starting with ${\cal H}_{{\rm MFT}}$ and setting $\mu^{a b}_{\br}$ to zero. The ground state energy of ${\cal H}'_{{\rm MFT}}$ is $E'_{{\rm MFT}}$. Note that, in general, the ground state of ${\cal H}'_{{\rm MFT}}$ will not satisfy the saddle point equations. Now, $E_{{\rm MFT}} = \langle {\cal H}_{{\rm MFT}} \rangle$, where the expectation value is taken using the ground state of ${\cal H}_{{\rm MFT}}$. Using Eq. (\[eqn:spe-constraints\]), we note that $\langle {\cal H}_V \rangle = - m \sum_{\br}\operatorname{tr} (\mu_{\br}) $; this cancels the second term in ${\cal H}_{{\rm MFT}}$, so we have $$E_{{\rm MFT}} = \prsum \frac{N}{{\cal J}_{\br \br'} } \operatorname{tr} ( \bar{\chi}^\dagger_{\br \br'} \bar{\chi}^{\vphantom\dagger}_{\br \br'} ) + \langle {\cal H}_{K} \rangle \text{.}$$ Letting $E_{K}$ be the ground state energy of ${\cal H}_K$, we have $\langle {\cal H}_K \rangle \geq E_{K}$, and so $$E_{{\rm MFT}} \geq E'_{{\rm MFT}} = \prsum \frac{N}{{\cal J}_{\br \br'} } \operatorname{tr} ( \bar{\chi}^\dagger_{\br \br'} \bar{\chi}^{\vphantom\dagger}_{\br \br'} ) + E_{K} \text{.}$$ This is the first of the two desired inequalities. We shall now deal with ${\cal H}'_{{\rm MFT}}$ and $E'_{{\rm MFT}}$, and establish a lower bound on $E'_{{\rm MFT}}$. To do this, we generalize Rokhsar’s argument[@rokhsar90] to the case of general $m$ and $n_c$. Let $N_s$ be the number of sites of our lattice. We label the single-particle energy levels of ${\cal H}_K$ by an index $q$; the energies are $\epsilon_q$.
${\cal H}_K$ is specified by the $n_c N N_s \times n_c N N_s$ Hermitian matrix $$(H_K)_{\br a \alpha ; \br' b \beta} = \delta_{\alpha \beta} \chi^{a b}_{\br \br'} \text{,}$$ where $$\chi_{\br' \br} = \chi^\dagger_{\br \br'} \text{.}$$ Because this is traceless (all diagonal entries are zero), we have $$\sum_q \epsilon_q = 0 \text{.} \label{eqn:sum-to-zero}$$ The ground state of ${\cal H}_{K}$ (and hence of ${\cal H}'_{{\rm MFT}}$) is obtained by filling the lowest $n_c m N_s$ energy levels with fermions. We call the set of such energy levels ${\cal L}$. The other $n_c(N-m)N_s$ levels, which we denote by the set ${\cal U}$, are empty. It will be useful to define averages over the sets of levels ${\cal L}$ and ${\cal U}$: $$\begin{aligned} \left[ \epsilon \right]_{\cal L} &=& \frac{1}{n_c m N_s} \sum_{q \in {\cal L}} \epsilon_{q} \\ \left[ \epsilon \right]_{\cal U} &=& \frac{1}{n_c (N - m) N_s} \sum_{q \in {\cal U}} \epsilon_q \text{.}\end{aligned}$$ We also denote the average of $\epsilon^2_q$ over the two sets by $[ \epsilon^2]_{\cal L}$ and $[ \epsilon^2 ]_{\cal U}$, and the average of $\epsilon^2_q$ over *all* states is written $[ \epsilon^2 ]$. Equation (\[eqn:sum-to-zero\]) implies $$\label{eqn:zerotrace} [ \epsilon ]_{\cal U} = - \frac{m}{N - m} [ \epsilon ]_{\cal L} \text{.}$$ The bound originates from the pair of inequalities $$\begin{aligned} \label{eqn:seL} [ \epsilon ]_{\cal L}^2 &\leq& [ \epsilon^2 ]_{\cal L} \\ \label{eqn:seU} [ \epsilon ]_{\cal U}^2 &\leq& [ \epsilon^2 ]_{\cal U} \text{,}\end{aligned}$$ which just express the fact that a variance is nonnegative. These inequalities are saturated (become equalities) if and only if $\epsilon_q$ is constant over each of the sets ${\cal L}$ and ${\cal U}$. Multiplying Eq. (\[eqn:seL\]) by $m/N$, Eq. (\[eqn:seU\]) by $(N-m)/N$, and adding the two, we have $$\frac{m}{N} [ \epsilon ]_{\cal L}^2 + \frac{(N-m)}{N} [ \epsilon ]_{\cal U}^2 \leq [ \epsilon^2 ] \text{.}$$ Using Eq. 
(\[eqn:zerotrace\]) and the fact that $[ \epsilon]_{\cal L} < 0$, we have $$\label{eqn:Laverage} [\epsilon]_{\cal L} \geq - \sqrt{\frac{N-m}{m}} \sqrt{ [\epsilon^2 ] } \text{.}$$ Now, $E_{K} = n_c m N_s [\epsilon]_{\cal L}$, so we have shown $$E_{K} \geq - n_c m N_s \sqrt{\frac{N-m}{m}} \sqrt{[\epsilon^2]} \text{.}$$ The next step is to get a simple expression for $[\epsilon^2]$. We have $$\begin{aligned} [\epsilon^2] &=& \frac{1}{n_c N N_s} \sum_{\alpha} \epsilon^2_{\alpha} = \frac{1}{n_c N N_s} \operatorname{tr} [ H_K^2 ] \nonumber \\ &=& \frac{2}{n_c N_s} \sum_{(\br, \br')} \operatorname{tr} ( \chi^\dagger_{\br \br'} \chi^{\vphantom\dagger}_{\br \br'} ) \text{.}\end{aligned}$$ Therefore we have the inequality $$\begin{aligned} E'_{{\rm MFT}} &\geq& N \prsum \sum_{a, b} \frac{| \chi^{a b}_{\br \br'}|^2}{{\cal J}_{\br \br'} } \label{eqn:non-min-bound} \\ &-& n_c m N_s \sqrt{\frac{N-m}{m}} \sqrt{ \frac{2}{n_c N_s} \sum_{(\br, \br')} \sum_{a, b} | \chi^{a b}_{\br \br'}|^2 } \text{.} \nonumber\end{aligned}$$ The next step is to minimize this lower bound, which we do by taking the derivative of the right-hand side of Eq. (\[eqn:non-min-bound\]) with respect to $| \chi^{a b}_{\br \br'}|$ and setting it to zero: $$0 = 2 N \frac{|\chi^{a b}_{\br \br'}|}{{\cal J}_{\br \br'}} - \frac{ 2 m \sqrt{(N-m)/m} | \chi^{a b}_{\br \br'}| }{\sqrt{ \frac{2}{n_c N_s} \sum_{(\br'', \br''')} \sum_{c,d} | \chi^{c d}_{\br'' \br'''}|^2 } } \text{.}$$ For a given bond $(\br, \br')$, this equation implies that either $|\chi^{a b}_{\br \br'} | = 0$ for all $a,b$, or $$\label{eqn:mineq} \frac{2}{n_c N_s} \sum_{( \br'', \br''')} \sum_{c, d} | \chi^{c d}_{\br'' \br'''}|^2 = \frac{m (N-m)}{N^2} {\cal J}_{\br \br'}^2 \text{.}$$ Now, the left-hand side of Eq. (\[eqn:mineq\]) is independent of the bond $(\br , \br')$, and so we must have $$\frac{2}{n_c N_s} \sum_{( \br'', \br''')} \sum_{c, d} | \chi^{c d}_{\br'' \br'''}|^2 = \frac{m (N-m)}{N^2} {\cal J}^2_{*}$$ for some constant ${\cal J}_{*}$. 
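The chain of inequalities leading from tracelessness to the bound on $[\epsilon]_{\cal L}$ can be checked numerically. The sketch below is illustrative only and is not part of the derivation: a generic random traceless Hermitian matrix stands in for $H_K$, and `fill_fraction` plays the role of $m/N$; all names are ours.

```python
import numpy as np

def check_spectral_bound(dim, fill_fraction, seed=0):
    """For a random traceless Hermitian matrix (a stand-in for H_K),
    compare the mean energy of the filled (lowest-lying) levels with
    the bound [eps]_L >= -sqrt((1-f)/f) * sqrt([eps^2]), f = m/N."""
    rng = np.random.default_rng(seed)
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    h = (a + a.conj().T) / 2
    h -= (np.trace(h).real / dim) * np.eye(dim)   # enforce sum_q eps_q = 0
    eps = np.linalg.eigvalsh(h)                   # ascending order
    n_fill = int(fill_fraction * dim)
    avg_L = eps[:n_fill].mean()                   # [eps]_L
    bound = -np.sqrt((1 - fill_fraction) / fill_fraction) \
            * np.sqrt((eps ** 2).mean())
    return avg_L, bound

avg_L, bound = check_spectral_bound(dim=60, fill_fraction=1 / 3)
```

Since only tracelessness is used, the inequality holds for any such matrix; saturation would require the spectrum to take only two values, one on ${\cal L}$ and one on ${\cal U}$.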
Moreover, this implies that, for a given bond, unless ${\cal J}_{\br \br'} = {\cal J}_{*}$, we must have $\chi^{a b}_{\br \br'} = 0$ (for all $a,b$). Therefore $$\begin{aligned} \prsum \sum_{a,b} \frac{| \chi^{a b}_{\br \br'}|^2}{{\cal J}_{\br \br'} } &=& \frac{1}{{\cal J}_{*}} \sum_{(\br, \br')} \sum_{a,b} | \chi^{a b}_{\br \br'} |^2 \\ &=& \frac{n_c N_s}{2} \frac{m (N-m)}{N^2} {\cal J}_{*} \text{.}\end{aligned}$$ Putting these results into Eq. (\[eqn:non-min-bound\]), we have $$E'_{{\rm MFT}} \geq - \frac{n_c N_s}{2} \frac{m (N-m)}{N} {\cal J}_{*} \text{.}$$ The global minimum is clearly achieved when ${\cal J}_{*} = {\cal J}_{{\rm max}}$, the largest of the ${\cal J}_{\br \br'}$. Therefore $$\label{eqn:bound} E_{{\rm MFT}} \geq E'_{{\rm MFT}} \geq - \frac{n_c N_s}{2} \frac{m (N- m)}{N} {\cal J}_{{\rm max}} \text{.}$$ Putting $m = N/k$, so that $m(N-m)/N = N(k-1)/k^2$, we have $$\label{eqn:rokhsar-bound} E_{{\rm MFT}} \geq - n_c N N_s \frac{ (k-1)}{2 k^2} {\cal J}_{{\rm max}} \text{,}$$ which reduces to Rokhsar’s result when $k = 2$ and $n_c = 1$. Necessary and sufficient conditions to saturate the bound {#sec:saturation} --------------------------------------------------------- Here, we show that the bound Eq. (\[eqn:rokhsar-bound\]) is saturated if and only if the following two conditions hold: (1) $\epsilon_q$ is constant over each of the sets ${\cal L}$ and ${\cal U}$. That is, all the filled states have the same energy, and all empty states have the same energy. (2) The color density $\tilde{n}^{a b}_{\br}$ calculated using ${\cal H}_K$ satisfies the condition $$\sum_{\br} \operatorname{tr} (\mu_{\br} \tilde{n}_{\br} ) = 0 \text{.}$$ This color density is defined by $$\tilde{n}^{a b}_{\br} = \langle \hat{n}^{a b}_{\br} \rangle_K \text{,}$$ where the expectation value is taken using the ground state of ${\cal H}_K$. Note that $\tilde{n}_{\br}$ in general does not satisfy the constraint Eq. (\[eqn:spe-constraints\]). 
These conditions for saturation are very restrictive, as we discuss below. There are two separate inequalities that must both be turned into equalities for the bound to be saturated. The first is $E'_{{\rm MFT}} \geq E_{{\rm bound}}$, and the second is $E_{{\rm MFT}} \geq E'_{{\rm MFT}}$. Saturation of the first and second inequalities leads to conditions (1) and (2) above, respectively. It is trivial to show that the first inequality is saturated if and only if $\epsilon_q$ is constant over each of the sets ${\cal L}$ and ${\cal U}$. We now show that condition (2) is equivalent to saturation of the second inequality. It will be useful to define a continuous family of Hamiltonians parametrized by $\alpha \in [0, 1]$: $${\cal H}_{\alpha} = {\cal H}_K + \alpha {\cal H}_V \text{.}$$ This interpolates between ${\cal H}_K$ at $\alpha = 0$ and ${\cal H}_K + {\cal H}_V$, the fermionic part of ${\cal H}_{{\rm MFT}}$, at $\alpha = 1$. The ground state of ${\cal H}_{\alpha}$ with energy $E_{\alpha}$ is denoted by $| \psi_{\alpha} \rangle$. Because we work in the canonical ensemble for the fermion number, we are free to make a constant shift $\mu^{a b}_{\br} \to \mu^{a b}_{\br} + c \delta^{a b}$ so that $\sum_{\br} \operatorname{tr} (\mu_{\br} ) = 0$ (note that this shift does not change $E_{{\rm MFT}}$). With this choice for $\mu_{\br}$, we have $$\langle \psi_1 | {\cal H}_V | \psi_1 \rangle = 0 \text{.}$$ We also have $E_{{\rm MFT}} - E'_{{\rm MFT}} = E_1 - E_0$. In particular, $E_{{\rm MFT}} = E'_{{\rm MFT}}$ if and only if $E_0 = E_1$. The variational principle implies $\langle \psi_{\alpha} | {\cal H}_{\alpha'} | \psi_{\alpha} \rangle \geq E_{\alpha'}$. 
The left-hand side of this inequality can be written $$\langle \psi_{\alpha} | {\cal H}_{\alpha'} | \psi_{\alpha} \rangle = E_{\alpha} + (\alpha' - \alpha) \langle \psi_{\alpha} | {\cal H}_V | \psi_{\alpha} \rangle \text{.}$$ We have thus shown $$E_{\alpha} + (\alpha' - \alpha) \langle \psi_{\alpha} | {\cal H}_V | \psi_{\alpha} \rangle \geq E_{\alpha'} \text{.}$$ If we put $\alpha = 1$, this gives $E_1 \geq E_{\alpha}$. On the other hand, putting $\alpha' = 1$ gives instead $$E_{\alpha} + (1 - \alpha) \langle \psi_{\alpha} | {\cal H}_V | \psi_{\alpha} \rangle \geq E_1 \text{.}$$ Combining these together, $$\label{eqn:ineq1} E_{\alpha} + (1 - \alpha) \langle \psi_{\alpha} | {\cal H}_V | \psi_{\alpha} \rangle \geq E_1 \geq E_{\alpha} \text{,}$$ which immediately implies $$\langle \psi_{\alpha} | {\cal H}_V | \psi_{\alpha} \rangle \geq 0 \text{.}$$ A special case of Eq. (\[eqn:ineq1\]) is $$E_0 + \langle \psi_0 | {\cal H}_V | \psi_0 \rangle \geq E_1 \geq E_0 \text{.}$$ From this it follows that if $\langle \psi_0 | {\cal H}_V | \psi_0 \rangle = 0$, then $E_1 = E_0$. Now suppose the converse, *i.e.* suppose $E_0 = E_1$. Note that first-order perturbation theory gives us $$\frac{d E}{d \alpha} = \langle \psi_{\alpha} | {\cal H}_V | \psi_{\alpha} \rangle \text{,}$$ and so $$E_1 - E_0 = \int_0^1 d\alpha \langle \psi_{\alpha} | {\cal H}_V | \psi_{\alpha} \rangle \text{.}$$ By assumption this integral is equal to zero. Since the integrand is nonnegative, then we must have $\langle \psi_{\alpha} | {\cal H}_V | \psi_{\alpha} \rangle = 0$, and in particular for $\alpha = 0$. Therefore we have shown that $E_1 = E_0$ if and only if $\langle \psi_0 | {\cal H}_V | \psi_0 \rangle = 0$, and hence $E_{{\rm MFT}} = E'_{{\rm MFT}}$ if and only if $\langle \psi_0 | {\cal H}_V | \psi_0 \rangle = 0$. Since $\langle \psi_0 | {\cal H}_V | \psi_0 \rangle = \sum_{\br} \operatorname{tr} ( \mu_{\br} \tilde{n}_{\br} )$, we have established condition (2) as desired. 
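The interpolation argument can be illustrated with a small numerical example. The sketch below is illustrative only: random real symmetric $8 \times 8$ matrices stand in for ${\cal H}_K$ and ${\cal H}_V$, and the lowest eigenvalue plays the role of the many-body ground state energy; it checks the first-order perturbation-theory identity $E_1 - E_0 = \int_0^1 d\alpha\, \langle \psi_\alpha | {\cal H}_V | \psi_\alpha \rangle$ by numerical quadrature.

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 8

def rand_herm(d):
    """Random real symmetric matrix, a stand-in for a Hamiltonian."""
    a = rng.normal(size=(d, d))
    return (a + a.T) / 2

hk, hv = rand_herm(dim), rand_herm(dim)

def ground(alpha):
    """Lowest eigenvalue and eigenvector of H_alpha = H_K + alpha*H_V."""
    eps, psi = np.linalg.eigh(hk + alpha * hv)
    return eps[0], psi[:, 0]

# E_1 - E_0 versus the integral of <psi_a|H_V|psi_a> over alpha
alphas = np.linspace(0.0, 1.0, 2001)
integrand = [ground(a)[1] @ hv @ ground(a)[1] for a in alphas]
lhs = ground(1.0)[0] - ground(0.0)[0]
da = alphas[1] - alphas[0]
rhs = da * (sum(integrand) - 0.5 * (integrand[0] + integrand[-1]))  # trapezoid
```

The two sides agree to the accuracy of the quadrature; the sign structure of the integrand used in the text relies on $\langle \psi_1 | {\cal H}_V | \psi_1 \rangle = 0$ and does not hold for generic random matrices, so only the integral identity is checked here.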
Both conditions derived above for saturation of the bound are highly restrictive. Condition (1) dictates that there be only two energies in the spectrum; we should expect this to occur only when $\chi^{a b}_{\br \br'}$ is such that the lattice is broken into clusters, so that the spectrum consists of perfectly flat bands. Condition (2) is also very restrictive. An easy way to satisfy (2) is simply to have a saddle point where $\mu_{\br} = 0$. Suppose instead that $\mu_{\br} \neq 0$; then generically we should expect that $\tilde{n}_{\br}$ is non-uniform and does not satisfy Eq. (\[eqn:spe-constraints\]). It is useful to imagine starting from $\alpha = 0$ and turning on ${\cal H}_V$ by increasing $\alpha$. The $\mu_{\br}$ need to be chosen to “even out” the color density, so that it satisfies Eq. (\[eqn:spe-constraints\]) once $\alpha = 1$. Naively, a choice of $\mu_{\br}$ accomplishing this will cost energy at each lattice site; that is, $$\langle \psi_0 | \big[ - \operatorname{tr} (\mu_{\br} \hat{n}_{\br} ) \big] | \psi_0 \rangle > 0 \text{.}$$ This would imply $$\sum_{\br} \operatorname{tr} ( \mu_{\br} \tilde{n}_{\br} ) < 0 \text{,}$$ which is in conflict with condition (2). This discussion indicates that satisfying condition (2) when $\mu_{\br} \neq 0$ is unlikely. Saturation of the bound and $k$-simplex VCS states {#sec:simplex} -------------------------------------------------- The necessary and sufficient conditions derived above still leave open the questions of what kind of saddle points saturate the bound, and whether saturation is possible for a given lattice and set of exchange couplings ${\cal J}_{\br \br'}$. Saturation is not always possible – for example, on any bipartite lattice with $k > 2$, the stricter bound derived in Sec. \[sec:bipartite\] shows that saturation of Eq. (\[eqn:rokhsar-bound\]) is impossible. Here, we will show that, when they exist, $k$-simplex VCS states saturate the bound and are thus the analogs of the VBS states for $k = 2$. 
In striking contrast to VBS states, many commonly encountered lattices do not admit any $k$-simplex VCS states for $k > 2$. Moreover, for $k > d+1$ there is no $d$-dimensional lattice that admits a $k$-simplex state without fine-tuning of the exchange couplings. The implication is that for $k > 2$ a much wider range of ground states is possible in the large-$N$ limit, including spin liquid states. Unless stated otherwise, when discussing specific lattices we consider the case of nearest-neighbor exchange only. We shall first discuss $k$-simplex VCS states for the simpler case $n_c = 1$, and then generalize to arbitrary $n_c$. In the large-$N$ limit, by VCS state we mean a saddle point where $\chi_{\br \br'}$ is chosen to decompose the lattice into clusters. Each lattice site belongs to exactly one cluster, and any two sites in the same cluster are connected by $\chi_{\br \br'} \neq 0$ along some path of bonds (they need not be directly connected). Each cluster must contain some multiple of $k$ lattice sites, since otherwise the cluster will not be a singlet. In a $k$-cluster state, every cluster contains exactly $k$ sites. A $k$-simplex state is a $k$-cluster state where, within each cluster, each site is (directly) connected to every other site by a single bond with ${\cal J}_{\br \br'} = {\cal J}_{{\rm max}}$ (see Fig. \[fig:tri\]). Just as for VBS states, on a given lattice there can be many different $k$-simplex states with the same $N \to \infty$ energy. It is expected that $1/N$ corrections will select a particular ordered pattern out of this degenerate manifold, again precisely as for VBS states.[@read89b] ![Illustration of 3-cluster and 3-simplex VCS states on the triangular lattice (for $n_c = 1$). $\chi_{\br \br'}$ is nonzero on the highlighted bonds and zero elsewhere. Both states (a) and (b) are 3-cluster states. State (b) is a 3-simplex state and is an $N = \infty$ ground state of the $k=3$ triangular lattice model. 
State (a) is not a 3-simplex state and therefore has higher energy than (b) following the discussion in the text.[]{data-label="fig:tri"}](tri.eps){width="3in"} To generalize $k$-cluster states to $n_c > 1$, we consider only *diagonal* $\chi^{a b}_{\br \br'}$ (and $\mu^{a b}_{\br}$). That is, we consider $$\begin{aligned} \label{eqn:diagonal} \chi^{a b}_{\br \br'} &=& \delta^{a b} \chi^a_{\br \br'} \text{ (no sum).} \\ \mu^{a b}_{\br} &=& \delta^{a b} \mu^a_{\br} \text{ (no sum).}\end{aligned}$$ For each $a = 1,\dots,n_c$, $\chi^a_{\br \br'}$ is chosen to give a $k$-cluster decomposition of the lattice, resulting in $n_c$ different $k$-cluster decompositions. A $k$-simplex state occurs where each $k$-cluster decomposition is also a decomposition into $k$-simplices; an example of an $n_c > 1$ $k$-simplex state is given in Fig. \[fig:kag\]. Such states were considered for $k = 2$ in Ref. , and also as exact ground states of special models for a variety of $n_c$ and $N$ in Ref. . ![An example $k$-simplex state with $n_c = 2$ and $k=3$, on the kagome lattice. Simplices of one color are the triangles marked with solid lines (red online), and those of the other color are triangles marked with dashed lines (blue online). This state was discussed (for $N=3$) in Ref. .[]{data-label="fig:kag"}](kagome_simplex_state.eps){width="1.6in"} Focusing on a single color (say, $a = 1$) and a single cluster, and choosing $\mu^{a b}_{\br} = 0$ and $\chi^1_{\br \br'} \to -\chi$ (for bonds within a cluster), the fermionic part of the mean-field Hamiltonian in a $k$-simplex state is $${\cal H}^{k-{\rm simplex}}_{F} = - \chi \sum_{\br \neq \br'} f^\dagger_{\br 1 \alpha} f^{\vphantom\dagger}_{\br' 1 \alpha} \text{.}$$ The lowest single-particle energy is $\epsilon_{{\cal L}} = -(k-1)\chi$; the $k-1$ other eigenvalues are degenerate and take the value $\epsilon_{{\cal U}} = \chi$. 
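This spectrum is immediate to verify: on a single $k$-simplex the hopping matrix is $-\chi({\bf J}-{\bf 1})$, with ${\bf J}$ the $k \times k$ all-ones matrix, whose eigenvalues are $k$ (once) and $0$ ($k-1$ times). A short numerical check (illustrative; the function name is ours):

```python
import numpy as np

def simplex_spectrum(k, chi=1.0):
    """Single-particle spectrum of the hopping problem on one
    k-simplex: H = -chi * (J - I), with J the k x k all-ones matrix."""
    h = -chi * (np.ones((k, k)) - np.eye(k))
    return np.linalg.eigvalsh(h)          # ascending order

eps = simplex_spectrum(k=4)
```

One finds a single level at $-(k-1)\chi$ and a $(k-1)$-fold degenerate level at $+\chi$, so filling the lowest level in each simplex realizes the two-valued spectrum demanded by condition (1).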
The ground state is obtained by filling the lowest level in all clusters, and it is easy to see that in this state the saddle-point condition $\langle \hat{n}^{a b}_{\br} \rangle = m \delta^{a b}$ is satisfied. This state satisfies both conditions for saturation of the lower bound Eq. (\[eqn:rokhsar-bound\]), as is easily verified by direct computation of the energy. More general $k$-cluster states do not saturate the bound. To illustrate this, consider for simplicity $n_c = 1$ and a lattice where either ${\cal J}_{\br \br'} = {\cal J}_{{\rm max}}$, or ${\cal J}_{\br \br'} = 0$. Consider a $k$-cluster state where all the clusters are identical and each cluster contains $N_b$ bonds with nonzero exchange. It can be shown that the energy of each cluster is $E_c = - N N_b {\cal J}_{{\rm max}} / k^2$ (Appendix \[app:kcluster\]), so the total energy is then $$E_{{\rm MFT}} = \frac{N_s}{k} E_c = - \frac{N {\cal J}_{{\rm max}} N_s N_b}{k^3} \text{.}$$ This attains the bound only if the number of bonds is maximal, that is, $N_b = k (k-1)/2$ – but this is precisely the condition that each cluster is a $k$-simplex. As mentioned above, while most lattices admit a VBS state, this is not the case for $k$-simplex states with $k > 2$. For example, the square and honeycomb lattices admit VBS states but no $k$-simplex states with $k \geq 3$. The triangular (Fig. \[fig:tri\]) and kagome lattices admit both VBS and 3-simplex states, but lack $k$-simplex states for $k \geq 4$. The three-dimensional pyrochlore lattice of corner-sharing tetrahedra admits 4-simplex states, but no $k$-simplex states for $k \geq 5$. Going beyond specific examples, for a $d$-dimensional lattice, $k$-simplex states with $k > d + 1$ are impossible, unless the exchange couplings are fine-tuned. To see this, consider the $k$ points of a simplex in $d$-dimensional space. 
Any pair $(\br, \br')$ of these points must have ${\cal J}_{\br \br'} = {\cal J}_{{\rm max}}$; this can only be achieved without fine-tuning if space group symmetry forces all the exchange couplings to be equal. This can occur only if the points of the simplex are mutually equidistant, and there can be at most $d+1$ mutually equidistant points in $d$-dimensional space. While on a given lattice there may be other states that saturate the bound even when no $k$-simplex VCS states exist, for large enough $k$ saturation is impossible. To illustrate this, consider again a lattice where either ${\cal J}_{\br \br'} = {\cal J}_{{\rm max}}$, or ${\cal J}_{\br \br'} = 0$, and let ${\cal N}_b$ be the total number of bonds in the lattice with nonzero exchange. We can obtain a lower bound on the energy by treating each bond as an isolated system, calculating the resulting two-site ground state energy, and summing over bonds. In Appendix \[app:2site-exact\] it is shown that the ground state energy of an isolated bond is $- n_c N {\cal J}_{{\rm max}} / k^2$, so we have $$\label{eqn:bond-gs-bound} E_{{\rm MFT}} \geq - \frac{n_c N {\cal N}_b {\cal J}_{{\rm max}}}{k^2} \text{.}$$ This bound is more strict than Eq. (\[eqn:rokhsar-bound\]) when $k > 2 {\cal N}_b / N_s + 1$, so saturation of Eq. (\[eqn:rokhsar-bound\]) is impossible for such values of $k$. Large-$N$ limit: Bipartite lattices {#sec:bipartite} =================================== Bipartite lower bound {#sec:bipartite-bound} --------------------- We now derive a stricter lower bound on the mean-field energy that holds for bipartite lattices. As in Sec. \[sec:bound\], we consider the mean-field Hamiltonian at general $n_c$, but now on a bipartite lattice. Precisely, we divide the lattice into two sublattices $A$ and $B$ of equal size so that ${\cal J}_{\br \br'}$ is only nonzero when $\br$ and $\br'$ lie in different sublattices. We first use the inequality $E_{{\rm MFT}} \geq E'_{{\rm MFT}}$ precisely as in Sec. 
\[sec:bound\]. The bipartite structure allows us to obtain a stricter bound on $E'_{{\rm MFT}}$. We recall that $${\cal H}'_{{\rm MFT}} = \prsum \frac{N}{{\cal J}_{\br \br'} } \operatorname{tr} ( \bar{\chi}^\dagger_{\br \br'} \bar{\chi}^{\vphantom\dagger}_{\br \br'} ) + {\cal H}_K \text{.}$$ The crucial observation is that, for a bipartite lattice, ${\cal H}_{K}$ obeys sublattice symmetry, where ${\cal H}_K \to - {\cal H}_K$ under the operation $$\begin{aligned} f_{\br a \alpha} \to \left\{ \begin{array}{ll} f_{\br a \alpha} & \br \in A \\ - f_{\br a \alpha} & \br \in B \end{array} \right. \text{.}\end{aligned}$$ Again we let ${\cal L}$ be the set of $n_c m N_s$ occupied levels. Now, however, we define the set ${\cal U}$ to be the image of ${\cal L}$ under the sublattice operation. The set ${\cal U}$ clearly contains only empty levels. We denote the set of the remaining $n_c (N - 2 m) N_s$ levels by ${\cal M}$. Levels in ${\cal M}$ are empty and have energies intermediate between those in ${\cal L}$ and ${\cal U}$. We define averages of $\epsilon_q$ and $\epsilon^2_q$ over these sets as before. As in Sec. \[sec:bound\], we have $E_K = n_c m N_s [ \epsilon]_{{\cal L}}$, and we need to relate $[\epsilon]_{\cal L}$ to $[\epsilon^2]$. We have $$\begin{aligned} [ \epsilon^2 ] &=& \frac{m}{N} [\epsilon^2]_{\cal L} + \frac{m}{N} [ \epsilon^2]_{\cal U} + \frac{(N - 2 m)}{N} [ \epsilon^2]_{{\cal M}} \\ &=& \frac{2 m}{N} [\epsilon^2]_{\cal L} + \frac{(N - 2 m)}{N} [ \epsilon^2]_{{\cal M}} \\ &\geq & \frac{2 m}{N} [\epsilon^2]_{\cal L} \geq \frac{2 m}{N} [\epsilon]^2_{\cal L} \text{.}\end{aligned}$$ Since $[\epsilon]_{\cal L}$ is negative, this implies $$[\epsilon]_{\cal L} \geq - \sqrt{ \frac{N}{2m} } \sqrt{ [\epsilon^2] } \text{.}$$ From this point, we can precisely follow the steps of Sec. \[sec:bound\] to minimize the lower bound on $E'_{{\rm MFT}}$. 
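The role of sublattice symmetry in the preceding chain of inequalities can be checked numerically. The sketch below is illustrative only: a random block off-diagonal Hermitian matrix stands in for $H_K$ on a bipartite lattice, and `f` plays the role of the fill fraction $m/N \leq 1/2$; it verifies that the spectrum is symmetric about zero and that $[\epsilon]_{\cal L} \geq -\sqrt{N/2m}\,\sqrt{[\epsilon^2]}$.

```python
import numpy as np

rng = np.random.default_rng(1)
ns = 40                                   # sites per sublattice
b = rng.normal(size=(ns, ns)) + 1j * rng.normal(size=(ns, ns))
h = np.block([[np.zeros((ns, ns)), b],    # hopping only between A and B
              [b.conj().T, np.zeros((ns, ns))]])
eps = np.linalg.eigvalsh(h)               # ascending order

f = 1 / 4                                 # fill fraction m/N <= 1/2
n_fill = int(f * len(eps))
avg_L = eps[:n_fill].mean()               # [eps]_L
bound = -np.sqrt(1 / (2 * f)) * np.sqrt((eps ** 2).mean())
```

Only the block off-diagonal (bipartite) structure is used, so the check holds for any such matrix; the sharper prefactor relative to the general bound comes from the mirror set ${\cal U}$ contributing an equal share of $[\epsilon^2]$.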
In this case we obtain the stricter bound $$\label{eqn:bipartite-bound} E_{{\rm MFT}} \geq - \frac{1}{4 k} n_c N N_s {\cal J}_{{\rm max}} \text{.}$$ This bound is equivalent to Eq. (\[eqn:rokhsar-bound\]) when $k = 2$, and is stricter when $k > 2$. Saturation of the bipartite bound {#sec:bipartite-saturation} --------------------------------- Here we state the necessary and sufficient conditions to saturate the bipartite bound, and give examples of $k$-cluster VCS states that achieve saturation. The bound Eq. (\[eqn:bipartite-bound\]) is saturated if and only if each of the following two conditions holds: (1) $\epsilon_q$ is constant over each of the sets ${\cal L}$ and ${\cal U}$, and $\epsilon_q = 0$ in ${\cal M}$. (2) The color density $\tilde{n}^{a b}_{\br}$ calculated using ${\cal H}_K$ satisfies the condition $$\sum_{\br} \operatorname{tr} (\mu_{\br} \tilde{n}_{\br} ) = 0 \text{.}$$ The proof of this statement follows that given for the more general bound in Sec. \[sec:saturation\]. As before, condition (2) comes from saturation of the inequality $E_{{\rm MFT}} \geq E'_{{\rm MFT}}$; since nothing in this inequality depends on the bipartite structure, the proof of condition (2) is identical to that given before. As before, it is trivial to see that $E'_{{\rm MFT}} = E_{{\rm bound}}$ if and only if condition (1) holds. As before, saturation of the bipartite bound is impossible for large enough $k$. Again we consider a lattice where either ${\cal J}_{\br \br'} = {\cal J}_{{\rm max}}$ or ${\cal J}_{\br \br'} = 0$, and let ${\cal N}_b$ be the total number of bonds in the lattice with nonzero exchange. For $k > 4 {\cal N}_b / N_s$, the bound Eq. (\[eqn:bond-gs-bound\]) is stricter than Eq. (\[eqn:bipartite-bound\]), so saturation is impossible for such values of $k$. ![Cluster states ($n_c = 1$) with energies saturating the lower bound Eq. (\[eqn:bipartite-bound\]) on the square lattice, for $k = 2$ (a), $k = 3$ (b) and $k = 4$ (c). 
$\chi_{\br \br'}$ has constant magnitude on the dark bonds and is zero on the others. In the $k = 3$ state, the flux through each six-site plaquette is $\pi$, while it is zero for each four-site plaquette in the $k = 4$ state. For each value of $k$, in the $N = \infty$ limit, every tiling of the square lattice by the type of clusters shown is a ground state. This large degeneracy is expected to be lifted upon computing perturbative $1/N$ corrections to the ground state energy.[@read89b][]{data-label="fig:states"}](states.eps){width="3in"} Since a flat energy spectrum of the mean-field Hamiltonian is necessary to saturate the bipartite bound, we expect that it will only be saturated by VCS states. VCS states saturating the bound on the square lattice for $n_c = 1$ are shown in Fig. \[fig:states\] and were also reported in Ref. . For $k=2$ the bound is saturated by any dimer state, and for $k=4$ it is saturated by 4-cluster states of the type shown. For $k=3$ the bound is actually saturated by a class of 6-cluster states. Whenever a given lattice admits an $n_c = 1$ cluster state saturating the bound, it is easy to see that the same lattice (*i.e.* same set of exchange couplings ${\cal J}_{\br \br'}$) also admits $n_c > 1$ cluster states saturating the bound. These $n_c > 1$ states have diagonal $\chi^{a b}_{\br \br'}$ as in Eq. (\[eqn:diagonal\]), and each $\chi^a_{\br \br'}$ is chosen to give a cluster decomposition of the type that saturates the bound for $n_c = 1$. Examples of such states (for $k = 4$ and $n_c = 2$) are illustrated for the square lattice in Fig. \[fig:nc2-states\]. ![Illustration of two $N = \infty$ cluster ground states on the square lattice for $n_c = 2$ and $k = 4$, which saturate the lower bound Eq. (\[eqn:bipartite-bound\]). Square clusters of one color are marked with solid lines (red online), while those of the other color are marked with dashed lines (blue online). 
Any configuration where clusters of the two colors separately tile the lattice is an $N = \infty$ ground state – as in the $n_c = 1$ case, the degeneracy among these states is expected to be lifted upon computing perturbative $1/N$ corrections to the ground state energy.[]{data-label="fig:nc2-states"}](nc2_states.eps){width="3in"} Large-$N$ results on square lattice and numerical ground state search {#sec:square} ===================================================================== In this section we focus on the square lattice, and in particular on the case $k \geq 5$. The discussion of Sec. \[sec:bipartite-saturation\] above establishes that, for $k = 2,3,4$, the large-$N$ ground states on the square lattice are VCS states, of the type shown in Figs. \[fig:states\] and \[fig:nc2-states\]. We know of no cluster states that can saturate the bound for $k \geq 5$ on the square lattice, and we conjecture that saturation is impossible for such values of $k$. In this situation it is very challenging to rigorously determine the large-$N$ ground state, a problem we do not currently know how to solve. Instead, we employ a systematic numerical search for ground states, which, while not foolproof, allows us to determine the ground state with some confidence. Here we first describe our numerical self-consistent minimization (SCM) procedure, which we developed and employed in Ref.  for the case $n_c = 1$. A very similar procedure was later used by M. Foss-Feig and A. M. Rey to study the Kondo lattice model, in collaboration with one of us (M.H.),[@fossfeig10a] and subsequently with both of us.[@fossfeig10b] Due to the local constraint, the SCM procedure is not simply a trivial iteration of a self-consistent equation, and to our knowledge it has not been used previously by others; therefore, we shall describe the SCM procedure here in some detail. Following this discussion, we shall describe the results of SCM on the square lattice for $n_c =1, 2$. 
Self-consistent minimization procedure -------------------------------------- We first describe the SCM algorithm in the simpler case of $n_c = 1$; modifications in the $n_c = 2$ case are described below. The basic idea is simply to iterate the self-consistency condition Eq. (\[eqn:nc1-spe-chi\]). However, if this is all one does, then the fermion density will be non-uniform and Eq. (\[eqn:nc1-spe-constraint\]) will be violated. Instead, the idea is to iterate Eq. (\[eqn:nc1-spe-chi\]) within a constrained set of $\chi_{\br \br'}$ and $\mu_{\br}$, so that Eq. (\[eqn:nc1-spe-constraint\]) is always satisfied. To accomplish this, the algorithm proceeds as follows: (1) An initial $\chi_{\br \br'}$ is chosen randomly. In our calculations, we chose $\chi_{\br \br'} = |\chi_{\br \br'} | e^{i \phi_{\br \br'}}$, where $|\chi|$ was chosen in the interval \[0.03, 0.18\] and $\phi$ in the interval $[0, 2\pi]$, both with a uniform distribution. (2) Given $\chi_{\br \br'}$, the potential $\mu_{\br}$ is chosen so that Eq. (\[eqn:nc1-spe-constraint\]) is satisfied. We describe below how this is done. (3) A new set of $\chi$ fields is calculated by $$\chi'_{\br \br'} = - \frac{{\cal J}_{\br \br'} }{N} \langle f^\dagger_{\br' \alpha} f^{\vphantom\dagger}_{\br \alpha} \rangle \text{.}$$ (4) We return to step 2, and iterate until the ground state energy converges. In practice, we run this procedure for 500 iterations, by which time the convergence is observed to be excellent. To improve the efficiency of the algorithm as well as its convergence behavior, it is desirable to restrict $\chi_{\br \br'}$ and $\mu_{\br}$ to vary within a unit cell, which is then periodically repeated to form a larger lattice, with periodic boundary conditions. The translation symmetry generated by the unit cell primitive vectors allows us to exploit Bloch’s theorem, further increasing the efficiency. 
Since different unit cells can accommodate different candidate ground states, a variety of different cells need to be considered separately. SCM is indeed a minimization procedure for the ground state energy $E_{{\rm MFT}}$, as the energy is non-increasing for each iteration. To see this, suppose we have some $\chi_{\br \br'}$ and $\mu_{\br}$ obtained after step 2. In general this is not a saddle point, but Eq. (\[eqn:nc1-spe-constraint\]) is satisfied. We let $H_{{\rm MFT}}$ be the mean-field Hamiltonian defined in terms of $\chi$ and $\mu$. We let $\chi'_{\br \br'}$ and $\mu'_{\br}$ be the fields obtained at the next step of the SCM procedure, and $H'_{{\rm MFT}}$ is the mean-field Hamiltonian defined in terms of the primed fields. We have $$\chi'_{\br \br'} = - \frac{{\cal J}_{\br \br'}}{N} \langle f^\dagger_{\br' \alpha} f^{\vphantom\dagger}_{\br \alpha} \rangle \text{,}$$ where the expectation value $\langle \rangle$ is taken in the ground state of $H_{{\rm MFT}}$. The potential $\mu'_{\br}$ is chosen so that $\langle f^\dagger_{\br \alpha} f^{\vphantom\dagger}_{\br \alpha} \rangle' = m$, where the primed expectation value is taken in the ground state of $H'_{{\rm MFT}}$. We have $$E_{{\rm MFT}} = \langle H_{{\rm MFT}} \rangle = N \prsum \frac{1}{{\cal J}_{\br \br'}} \Big[ | \chi_{\br \br'}|^2 - ( \chi^*_{\br \br'} \chi'_{\br \br'} + \text{c.c.} ) \Big] \text{.}$$ Next, we have the variational upper bound $$E'_{{\rm MFT}} = \langle H'_{{\rm MFT}} \rangle' \leq \langle H'_{{\rm MFT}} \rangle = - N \prsum \frac{ | \chi'_{\br \br'} |^2 }{{\cal J}_{\br \br'} } \text{.}$$ Therefore the change in energy satisfies $$E'_{{\rm MFT}} - E_{{\rm MFT}} \leq -N \prsum \frac{ | \chi_{\br \br'} - \chi'_{\br \br'} |^2 }{ {\cal J}_{\br \br'} } \leq 0 \text{;}$$ that is, the energy is non-increasing for every step of the SCM procedure. 
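The iteration and its monotonicity can be illustrated in a minimal setting. The sketch below is ours, not the production code: it treats $n_c = 1$, $k = 2$ on a six-site ring, where at half filling particle-hole symmetry keeps the density uniform for any real $\chi_{\br \br'}$, so step 2 can be skipped and $\mu_{\br} = 0$ throughout. It also assumes the sign conventions $(H_K)_{\br \br'} = \chi_{\br \br'}$ and $\chi'_{\br \br'} = -{\cal J}_{\br \br'} \langle f^\dagger_{\br'} f_{\br} \rangle / N$, consistent with the equations above, and draws initial magnitudes from the window quoted in step 1.

```python
import numpy as np

# n_c = 1, k = 2 SCM on a 6-site ring with mu_r = 0 (see lead-in)
NS, J = 6, 1.0
bonds = [(r, (r + 1) % NS) for r in range(NS)]

def scm_step(chi):
    """One SCM iteration: build H_K from chi, fill the lowest NS/2
    single-particle levels, and return (E_MFT / N, updated chi).
    chi maps bonds to real amplitudes."""
    h = np.zeros((NS, NS))
    for (r, rp), c in chi.items():
        h[r, rp] = h[rp, r] = c
    eps, psi = np.linalg.eigh(h)
    filled = psi[:, : NS // 2]                 # lowest half of the levels
    p = filled @ filled.T                      # single-flavor <f+_r' f_r>
    e = sum(c * c / J for c in chi.values()) + eps[: NS // 2].sum()
    chi_new = {bond: -J * p[bond] for bond in chi}   # step 3
    return e, chi_new

rng = np.random.default_rng(0)
chi = {bond: -rng.uniform(0.03, 0.18) for bond in bonds}   # step 1
energies = []
for _ in range(200):                            # iterate steps 3-4
    e, chi = scm_step(chi)
    energies.append(e)
```

In such a run the recorded energy per flavor is non-increasing, exactly as the argument above guarantees, and remains above the $k=2$ bound $-N_s {\cal J}_{{\rm max}}/8$ per flavor.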
This means that when the procedure converges (in practice it almost always does), it converges to a saddle point which is a local minimum of the energy. There is no guarantee, however, of a global minimum, so, in order to have any confidence that a particular state is the global minimum, it is necessary to run the procedure many times with different random initial states. While the other steps of the algorithm are very simple, choosing the potential $\mu_{\br}$ in step 2 requires a more detailed discussion. The basic idea is to use linear response theory to find the change in potential needed to achieve a desired change in the fermion density. Going into step 2, we have fields $\chi_{\br \br'}$ and $\mu_{\br}$, which can be used to construct $H_{{\rm MFT}}$. The density will not in general be uniform, and we define $$n_{\br 0} = \langle f^\dagger_{\br \alpha} f^{\vphantom\dagger}_{\br \alpha} \rangle \text{,}$$ where the expectation value is taken using the ground state of $H_{{\rm MFT}}$. Suppose the potential is changed by $\mu_{\br} \to \mu_{\br} + \delta \mu_{\br}$. To first order in $\delta\mu_{\br}$, the change in the density is $$\label{eqn:linresp} \delta n_{\br} = \sum_{\br'} X_{\br \br'} \delta \mu_{\br'} \text{,}$$ where $X_{\br \br'} = X_{\br' \br} = X^*_{\br \br'}$ is (by definition) the density response function evaluated in real space and at zero frequency, which, using standard results of linear response theory, can be calculated from the single-particle wavefunctions and energies of $H_{{\rm MFT}}$. While straightforward, calculation of $X_{\br \br'}$ is the most computationally expensive step of the algorithm, and must be implemented with attention to efficiency. At this point, the idea is to set $\delta n_{\br} = m - n_{\br 0}$ (the deviation between the original and desired densities), and invert Eq. (\[eqn:linresp\]) to find $\delta\mu_{\br}$. 
In practice $X_{\br \br'}$ is not invertible, because the density does not change under a uniform shift of $\mu_{\br}$ in the canonical ensemble. Instead we proceed by diagonalizing $X_{\br \br'}$: $$\sum_{\br'} X_{\br \br'} u_{\br' \alpha} = x_\alpha u_{\br \alpha} \qquad \text{(no sum on }\alpha\text{).}$$ Here $x_{\alpha}$ are the eigenvalues of $X_{\br \br'}$, labeled by $\alpha$, and $u_{\br \alpha}$ are the orthonormal eigenvectors. If we expand $\delta n_{\br}$ and $\delta\mu_{\br}$ in the basis of eigenvectors, we can rewrite Eq. (\[eqn:linresp\]) as $$\delta n_{\alpha} = x_{\alpha} \delta\mu_{\alpha} \qquad \text{(no sum on }\alpha\text{).}$$ We invert this by simply ignoring eigenvectors with $x_{\alpha} = 0$, and choosing $$\delta \mu_{\alpha} = \left\{ \begin{array}{ll} \delta n_{\alpha} / x_{\alpha} & \text{, } x_{\alpha} \neq 0 \\ 0 & \text{, } x_{\alpha} = 0 \end{array} \right. \text{.}$$ This is easily converted back to a result for $\delta\mu_{\br}$. What we have obtained is a linear extrapolation for $\delta\mu_{\br}$, and the basic idea at this point is to proceed by replacing $\mu_{\br} \to \mu_{\br} + \delta \mu_{\br}$, and iterating the procedure until the density is uniform. This is, in fact, just a multi-dimensional Newton’s method for finding a zero of $(n_{\br} - m) = F_{\br}[ \{ \mu_{\br} \}]$. While such a method has good local convergence properties (*i.e.* starting sufficiently close to the zero), the global convergence properties are poor. However, this can be improved by very simple modifications.[@numericalrecipes] We define the merit function ${\cal E} = \sum_{\br} (n_{\br} - m)^2$, and demand that each change in $\mu_{\br}$ decrease ${\cal E}$. If $\mu_{\br} \to \mu_{\br} + \delta \mu_{\br}$ actually increases ${\cal E}$, then we try the smaller step $\mu_{\br} \to \mu_{\br} + \lambda \delta \mu_{\br}$ where $0 < \lambda < 1$. 
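The zero-mode-safe inversion described above can be sketched as follows (`solve_dmu` is a hypothetical helper name); the demo builds a synthetic symmetric response matrix whose null space is exactly the uniform shift:

```python
import numpy as np

# Invert  dn = X dmu  by eigendecomposition, discarding the zero mode(s)
# of X (e.g. the uniform shift of mu, which leaves the density unchanged).
def solve_dmu(X, dn, tol=1e-10):
    x, u = np.linalg.eigh(X)            # X real symmetric
    dn_a = u.T @ dn                     # expand dn in the eigenbasis
    dmu_a = np.zeros_like(dn_a)
    keep = np.abs(x) > tol
    dmu_a[keep] = dn_a[keep] / x[keep]  # dmu_alpha = dn_alpha / x_alpha
    return u @ dmu_a                    # back to real space

# Demo on a synthetic symmetric X with a uniform zero mode.
rng = np.random.default_rng(2)
n = 5
S = rng.standard_normal((n, n)); S = S + S.T
P = np.eye(n) - np.ones((n, n)) / n     # projector orthogonal to (1,...,1)
Xm = P @ S @ P                          # symmetric, Xm @ ones = 0
dn = Xm @ rng.standard_normal(n)        # a density change actually achievable
dmu = solve_dmu(Xm, dn)
assert np.allclose(Xm @ dmu, dn)        # reproduces the requested dn
assert abs(dmu.sum()) < 1e-8            # no component on the discarded mode
```

This is just the Moore-Penrose pseudoinverse applied in the eigenbasis, which is exactly the prescription $\delta\mu_\alpha = \delta n_\alpha / x_\alpha$ for $x_\alpha \neq 0$ and $\delta\mu_\alpha = 0$ otherwise.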
This is guaranteed to decrease ${\cal E}$ for sufficiently small $\lambda$; in practice, we use the sequence $\lambda = 1, 0.5, 0.4, 0.3, 0.2, 0.1, 0.09, 0.08, \dots, 0.01, 0.009, \dots$, and give up (simply moving on to step 3) after 1000 attempts. In practice it is only rarely necessary to give up; even when it is necessary, step 2 is successful in later iterations, and convergence still occurs. For each iteration of step 2, this process of choosing a new $\delta\mu_{\br}$ by linear extrapolation is repeated 10 times, or until ${\cal E} < 10^{-20}$. This tolerance for ${\cal E}$ is usually achieved after only a small number of iterations, and is virtually always achieved by the end of a run (500 iterations). Rarely, it happens that ${\cal E}$ is of order unity after a substantial number of iterations, and the algorithm either converges extremely slowly or fails to converge. To avoid this problem, when ${\cal E} \geq 1$ any time after 10 iterations, we abort the calculation and start over with a new random initial condition. We now describe how the SCM procedure is modified to handle $n_c = 2$. The initial set of $\chi^{a b}_{\br \br'}$ is chosen making use of the singular value decomposition $$\chi = U \left( \begin{array}{cc} d_1 & 0 \\ 0 & d_2 \end{array}\right) V \text{.}$$ Here $d_1$ and $d_2$ are each chosen in the interval $[0, 0.2]$ with a uniform distribution. $U$ and $V$ are both random ${\rm U}(2)$ matrices, chosen from a uniform distribution on the ${\rm U}(2)$ manifold. The algorithm itself proceeds via the same four steps outlined above. Only in step 2 are the modifications at all nontrivial: We have to choose $\mu^{a b}_{\br}$ to satisfy $\langle \hat{n}^{a b}_{\br} \rangle = m \delta^{a b}$. 
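The random initialization of $\chi^{a b}_{\br \br'}$ just described can be implemented as below; the QR-with-phase-fix recipe for sampling Haar-uniform unitaries is a standard construction and an assumption here, since the text does not specify how the uniform ${\rm U}(2)$ matrices are drawn:

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_u2():
    """A Haar-random U(2) matrix via QR of a complex Gaussian matrix,
    with the standard phase fix so the distribution is uniform."""
    Z = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))  # rescale column phases

def random_chi():
    """Initial 2x2 chi = U diag(d1, d2) V with d_i uniform on [0, 0.2]."""
    d = rng.uniform(0.0, 0.2, size=2)
    return haar_u2() @ np.diag(d) @ haar_u2()

chi = random_chi()
U = haar_u2()
assert np.allclose(U @ U.conj().T, np.eye(2))        # U is unitary
s = np.linalg.svd(chi, compute_uv=False)
assert np.all(s >= 0) and np.all(s <= 0.2 + 1e-12)   # singular values in [0, 0.2]
```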
We proceed precisely as above using linear response theory, except that now the necessary linear response equation has a matrix structure and is $$\label{eqn:nc2-response-eqn} \delta n^{a b}_{\br} = \sum_{\br'} \sum_{c, d} X^{a b ; c d}_{\br \br'} \delta \mu^{d c}_{\br'} \text{.}$$ Since both $\delta n$ and $\delta \mu$ are Hermitian, it is convenient to expand them in a basis of Hermitian matrices labeled by $A, B = 0, \dots, 3$ – a convenient basis is the identity matrix ($A=0$) plus the three Pauli matrices ($A = 1,2,3$). This allows one to recast Eq. (\[eqn:nc2-response-eqn\]) in the form $$\delta n^A_{\br} = \sum_{\br'} \sum_{B} X^{A B}_{\br \br'} \delta \mu^B_{\br'} \text{.}$$ Here, it can be shown that $X^{A B}_{\br \br'} = X^{B A}_{\br' \br}$ and is real, so the response function can be diagonalized as described for $n_c = 1$. Finally, the merit function ${\cal E}$ also needs to be modified, and we choose $${\cal E} = \sum_{\br} (n^0_{\br} - m)^2 + \sum_{\br} \sum_{A = 1}^3 ( n^A_{\br} )^2 \text{.}$$ Results of SCM -------------- We now describe the results of SCM on the square lattice for both $n_c = 1,2$. The $n_c = 1$ results were reported in Ref . Following the protocol described below, we studied $k = 5,6,7,8$ for $n_c = 1$, and $k = 5,6,7$ for $n_c = 2$. (The numerics become more time-consuming with increasing $k$ and $n_c$.) We also checked that SCM indeed produces exact ground states (guaranteed by saturation of lower bounds) for smaller values of $k$. For each value of $k$ and $n_c$ noted above, we considered all unit cells of rectangular geometry containing $k^2$ or fewer lattice sites, excluding cells of unit width for technical reasons. A unit cell of dimensions $\ell_x \times \ell_y$ is periodically repeated to fill the lattice using Bravais lattice vectors $\ell_x \bx$ and $\ell_y \by$. (Note that other choices of Bravais lattice vectors are possible – we made this restriction for the sake of simplicity and limited computation time.)
The lattice itself has periodic boundary conditions and dimensions $L_x \times L_y$. Letting $L = \operatorname{min} (L_x, L_y)$, we always considered $L \geq 40$ for $k = 5$, $L \geq 36$ for $k = 6$, $L \geq 42$ for $k = 7$, and $L \geq 40$ for $k = 8$. In some cases we also considered larger system sizes; a more systematic study of finite-size effects would be desirable, but we have left this for future work. For each unit cell size, we ran the SCM procedure 30 times, using a different random initial condition each time. For $n_c = 1$ and $5 \leq k \leq 8$, we found the ACSL to be the ground state.[@hermele09] For $n_c = 2$ and $k = 5$, we found the ground state to be a rather complicated inhomogeneous state that we have not fully characterized. On the other hand, for $k = 6,7$ we found that the nACSL and dCSL are degenerate ground states. Discussion {#sec:discussion} ========== We analyzed a variety of SU$(N)$ symmetric Heisenberg models in two dimensions on the square lattice and gave arguments that topologically ordered spin liquids are among their ground states. In view of their potential realization with alkaline earth atoms in optical lattices, we now summarize what we know about realistically achievable SU$(N)$ Heisenberg models. Following that discussion, we conclude by mentioning some directions for future study. The Heisenberg models with $n_c=1$ can be obtained simply as a large-$U$ limit (Mott insulator phase) of a Hubbard model representing alkaline earth atoms hopping on a lattice with $m$ atoms (in their ground electronic state $g$) per site. Such Heisenberg models are within the reach of experiment.[@taie10; @sugawa10] The main issue is temperature, since the temperature achieved in experiments is in the range $t^2/U < k_B T < U$, and not in the range $k_B T < t^2/U$ ($t$ is the Hubbard hopping) necessary for observing effects of magnetic exchange.
Yet this is similar to the issues encountered in studying the SU$(2)$ Hubbard model with cold alkali atoms, and currently a significant amount of effort is being spent trying to devise techniques to lower the temperature of Mott insulators. Assuming this is done, the study of the $n_c=1$ Heisenberg model will be possible in the future. We summarize what we know about the $n_c=1$ Heisenberg model in Fig. \[fig:phasediagram\]. On the horizontal axis of this figure, we plot $m$, the number of atoms in the same electronic state $g$ per site. On the vertical axis we plot $k$, which is $k=N/m$. The dashed-dotted line represents roughly the curve $km=10$. The significance of this curve lies in the fact that $km=N$ and $N=10$ is the largest experimentally achievable $N$. Therefore, all the points on the plot which lie above the curve $km=10$ cannot be reached experimentally, while those below the curve can. The actual curve on Fig. \[fig:phasediagram\] is corrected to take into account that $k$ and $m$ are integers. ![Phase diagram of the SU$(N)$ Heisenberg model in two dimensions on the square lattice with $n_c=1$ and with $N=mk$. In terms of an underlying Hubbard model, $m$ is the number of fermions per site, while $k$ is the inverse filling. Regions where there is substantial evidence for a given ground state – or where the ground state is known – are shaded. The Abelian chiral spin liquid (ACSL) and valence cluster state (VCS) regions on the right are established by our large-$N$ analysis; the boundary between these regions at large $N$ is shown by a dashed line. For $k=2$, $m=1$ the Néel state is the well-known ground state.
There is also evidence for magnetic order at $k=3$, $m=1$[@toth10] and $k=4$, $m=1$.[@corboz11] Valence-bond solid (VBS) order (which is a type of VCS) was found for $k=2$ and $m=3,4$.[@assaad05] The dashed-dotted line separates the range of parameters beyond the reach of current experiments (above and to the right of the line) from the range within the reach of the experiments (below and to the left of the line). The experimentally relevant part of the phase diagram with the greatest potential for novel ground states – in particular, the Abelian chiral spin liquid – is indicated with a question mark.[]{data-label="fig:phasediagram"}](phase_diagram.eps){width="3.5in"} We emphasize that any $N \le 10$ is within reach of an experiment. Indeed, working with $^{87}$Sr, for example, one can selectively populate its nuclear spin states so that only a subset of them is populated, with the number of populated states equal to $N$.[@gorshkov10] At the same time, we expect that the $m=1$ and $m=2$ columns of the figure are easiest to reach, as higher $m$ will likely experience losses due to 3-body recombination. At $m=1$ and $k=2$, the ground state is of course the Néel state. There is also evidence for magnetic order at $m=1$, $k=3$[@toth10] and at $m=1$ and $k=4$.[@corboz11] For $k=2$ and $m \geq 3$, it is believed that the ground state is a valence bond solid. This is established by quantum Monte Carlo for $m = 3,4$,[@assaad05] and is proven in the limit $m \to \infty$.[@rokhsar90] In addition, in this paper we proved that for $m \rightarrow \infty$ and $k < 5$, the ground states are valence cluster states, of which the valence bond solid is a particular example. Finally, we have shown that for $m \rightarrow \infty$ and $k \geq 5$ (at least up to $k = 8$, and possibly beyond), the ground state is the Abelian chiral spin liquid. The rest of the phase diagram remains to be filled in.
Of course other phases not discussed here may well be present, and there is some evidence this is the case, in particular at $k=2$, $m=2$.[@assaad05] The experiments will be conducted at $m=1$ or $m=2$, and at $k$ as large as 10. The ground state of the Heisenberg model under these conditions is not known; this is represented by a question mark in Fig. \[fig:phasediagram\]. We believe it is unlikely that the Néel state can survive to large $k$, even at $m=1$. Indeed, as discussed earlier, the amount of frustration increases with increasing $k$.[@hermele09] What happens in this region needs to be investigated further. Unfortunately, numerical study is difficult, especially since these models \[except when $k=2$ (Ref. )\] suffer from the quantum Monte Carlo minus sign problem, even on bipartite lattices, in both world-line and fermion determinantal approaches. However, it may be possible to obtain useful information from analytical and density matrix renormalization group studies of quasi-one-dimensional systems. Ultimately, experiment will need to tell us what happens in this part of the phase diagram. An intriguing possibility is that the phase boundary which lies between $k=4$ and $k=5$ extends all the way from large $m$ to $m=1$, thus making the experimentally accessible $m=1$, $k>4$ regime a chiral spin liquid. We note that, while we only considered integer $k$, some non-integer values of $k$ are possible. For example, $m = 2$ and $N = 5$ correspond to $k = 5/2$, and a well-defined large-$N$ limit with $k = 5/2$ certainly exists. We did not consider such values of $k$, first for simplicity, and second because non-integer $k$ requires $m \geq 2$, making experimental accessibility somewhat less favorable. Nonetheless, it would be interesting to study the large-$N$ limit for non-integer values of $k$ in future work.
A similar phase diagram can be discussed at $n_c=2$, where the Abelian chiral spin liquid will be replaced by the non-Abelian chiral spin liquid (or by the doubled chiral spin liquid). Supposing that some of the topological liquids discussed here do indeed occur for physically realizable ${\rm SU}(N)$ spin models, it will be an interesting question how to actually observe fractional or non-Abelian statistics in these systems. This is especially so given the intense interest in topological quantum computation using non-Abelian particles. We expect that holes, the excitations obtained by removing an atom from the system, should split into spinons and holons. The holons may be localized near a given site by an external potential and, at the same time, they obey fractional or non-Abelian statistics depending on which topological liquid we are considering. Therefore braiding may be achieved by manipulating the holons via the external potential, and this is a route by which fractional and non-Abelian statistics may be observed. While some further details along these lines are given in Appendix \[app:carriers\], many questions remain open, and we feel this constitutes an interesting direction for future work. Other directions for future study include investigation of the projected wavefunctions for the various topological liquids, which we discussed only briefly in Sec. \[sec:top\]. Given the difficulty of unbiased numerical study in these systems, such wavefunctions may be useful as variational states to gain understanding of the phase diagram away from the large-$N$ limit. Finally, another potentially interesting problem is a careful study of the dCSL edge states, which may be topologically protected as mentioned in Sec. \[sec:dcsl\]. We are grateful to Gang Chen, Charles Kane, Andreas Läuchli, Hao Song and Ashvin Vishwanath for useful discussions, and are especially grateful to Ana Maria Rey both for numerous useful discussions and for ongoing related collaborations.
This research is supported by DOE award no. DE-SC0003910 (M.H.), and NSF grants no. DMR-0449521 (V.G.) and PHY-0904017 (V.G.). Alkaline earth atom Hubbard and spin models {#sec:atomic} =========================================== Here we briefly review the Hubbard model describing fermionic alkaline earth atoms in optical lattices. We focus on two kinds of Mott insulating states, in which the spin models we study are the simplest description capturing the essential physics. A more extensive and detailed discussion of fermionic alkaline earth atoms in optical lattices, and the rich variety of strong correlation physics that can be realized in these systems, can be found in Ref. . A single alkaline earth atom has a $^{1}S_0$ electronic ground state. (Recall that the subscript on the right is $J$, the electronic angular momentum, so this state has $J = 0$.) The nuclear spin can be as large as $I = 9/2$ in the case of $^{87}$Sr. Other important examples are $^{171}$Yb and $^{173}$Yb, with $I = 1/2$ and $I = 5/2$, respectively. While Yb is not an alkaline earth, it has the same configuration of outer electrons, and all the discussion here applies equally to alkaline earths and to Yb. Also important for our purposes is the $^{3}P_0$ lowest electronic excited state, which has a very long lifetime on the order of $100\, {\rm s}$. These two electronic states can be subjected to optical lattices of different strength.[@daley08] Interactions between two atoms in any combination of these electronic states, which arise from collisions in the $s$-wave channel, are expected to respect a large ${\rm SU}(N)$ spin rotation symmetry, where $N = 2 I + 1$ is the number of nuclear spin levels per atom.[@gorshkov10; @cazalilla09] The symmetry arises because such atoms have $J = 0$, and due to the resulting quenching of hyperfine coupling, the nuclear spin is essentially a spectator in the collision between two atoms, and only participates via Fermi statistics.
The ${\rm SU}(N)$ symmetry is not exact but is expected to hold to an excellent approximation. A rough estimate is that, for two ground state atoms, ${\rm SU}(N)$-breaking effects are $10^{-9}$ times the strength of the ${\rm SU}(N)$-symmetric interaction.[@gorshkov10] For two excited state atoms, the strength of ${\rm SU}(N)$-breaking is estimated to be $10^{-3}$. We now suppose that the atoms are subjected to an optical lattice potential deep enough that a description in terms of a one-band Hubbard model is appropriate. We introduce creation operators $c^\dagger_{\br g \alpha}$ and $c^\dagger_{\br e \alpha}$ for the ground state ($g$) and excited state ($e$) atoms, respectively. Here, $\br$ labels the lattice site, and $\alpha = 1,\dots,N$ labels the $z$-component of nuclear spin. (We shall find this notation more convenient than $I_z = -I, \dots, I$, due to the ${\rm SU}(N)$ symmetry.) To describe the system we consider the most general Hubbard model with ${\rm SU}(N)$ symmetry, nearest-neighbor hopping, and on-site interactions. It is also important to note that the numbers of ground state and excited state fermions are separately conserved, due to the long lifetime (treated here as infinite) of the excited state fermions, and energy conservation. The Hamiltonian is[@gorshkov10] $$\begin{aligned} \label{eq:gorshkov} H &=& -t_g \sum_{\langle \br \br' \rangle} ( c^\dagger_{\br g \alpha} c^{\vphantom\dagger}_{\br' g \alpha} + \text{H.c.} ) - t_e \sum_{\langle \br \br' \rangle} ( c^\dagger_{\br e \alpha} c^{\vphantom\dagger}_{\br' e \alpha} + \text{H.c.} ) \nonumber \\ &+& \sum_{\br} \big( \frac{U_{gg}}{2} n^2_{\br g} + \frac{U_{e e}}{2} n^2_{\br e} + U_{e g} n_{\br g} n_{\br e} \big) \nonumber \\ &-& J_{eg} \sum_{\br} S^g_{\alpha \beta}(\br) S^e_{\beta \alpha} (\br) \text{.} \label{eqn:hubb}\end{aligned}$$ The sums in the first two terms are over nearest-neighbor bonds. 
We have introduced the following number and spin operators: $$\begin{aligned} n_{\br g} &=& \sum_{\alpha} c^\dagger_{\br g \alpha} c^{\vphantom\dagger}_{\br g \alpha} \\ S^g_{\alpha \beta}(\br) &=& c^\dagger_{\br g \alpha} c^{\vphantom\dagger}_{\br g \beta} \text{,}\end{aligned}$$ with corresponding expressions for $n_{\br e}$ and $S^e_{\alpha \beta}(\br)$. The on-site interaction parameters $U_{gg}$, $U_{ee}$, $U_{eg}$ and $J_{eg}$ are proportional to linear combinations of the four independent $s$-wave scattering lengths characterizing collisions among the atoms.[@gorshkov10] The ${\rm SU}(N)$ spin symmetry acts on the fermions as follows: $$\begin{aligned} c^\dagger_{\br g \alpha} &\to& U_{\alpha \beta} c^\dagger_{\br g \beta} \nonumber \\ c^\dagger_{\br e \alpha} &\to& U_{\alpha \beta} c^\dagger_{\br e \beta} \text{.}\end{aligned}$$ Here, $U$ is an arbitrary ${\rm SU}(N)$ matrix. The fermions thus transform in the fundamental representation of ${\rm SU}(N)$. We shall consider $U_{gg} > 0$, which is known to be the case for $^{87}$Sr and $^{173}$Yb. In both cases the corresponding scattering length is about $100 \, a_0$, which corresponds to rather large repulsive interactions.[@enomoto08; @escobar08] The sign of the interspecies exchange interaction $J_{e g}$ is not yet known and may be either ferromagnetic (positive) or antiferromagnetic (negative); this is likely to depend on the atomic species. If one ground state atom and one excited state atom share the same site, antiferromagnetic (ferromagnetic) $J_{eg}$ favors antisymmetric (symmetric) combinations of their nuclear spins. We consider two types of Mott insulators. The simpler of the two is realized using only ground state atoms, at an integer filling of $m$ atoms per site. While $m=1$ best avoids issues of three-body loss, we consider general $m$ because it is needed for the large-$N$ limit. 
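As a quick consistency check of the ${\rm SU}(N)$ invariance described above, the sketch below builds the on-site part of Eq. (\[eqn:hubb\]) for $N = 2$ on a single site and verifies that it commutes with every component of the total spin $S_{\alpha \beta} = S^g_{\alpha \beta} + S^e_{\alpha \beta}$; the coupling values are arbitrary illustrative numbers, not physical scattering parameters.

```python
import numpy as np
from functools import reduce

# On-site part of the Hamiltonian for N = 2 on a single site; couplings are
# arbitrary illustrative numbers.
def fermion_ops(nmodes):
    """Annihilation operators via the Jordan-Wigner construction."""
    a = np.array([[0, 1], [0, 0]], complex)
    Z = np.diag([1.0, -1.0]).astype(complex)
    I = np.eye(2, dtype=complex)
    return [reduce(np.kron, [Z] * j + [a] + [I] * (nmodes - j - 1))
            for j in range(nmodes)]

c = fermion_ops(4)                 # mode order: (g,1), (g,2), (e,1), (e,2)
cd = [m.conj().T for m in c]
idx = {('g', 0): 0, ('g', 1): 1, ('e', 0): 2, ('e', 1): 3}

def S(species, al, be):
    """Spin operator S^{species}_{alpha beta} = c†_{species,alpha} c_{species,beta}."""
    return cd[idx[(species, al)]] @ c[idx[(species, be)]]

ng = S('g', 0, 0) + S('g', 1, 1)
ne = S('e', 0, 0) + S('e', 1, 1)
Ugg, Uee, Ueg, Jeg = 1.0, 0.7, 0.3, 0.2
H = 0.5 * Ugg * ng @ ng + 0.5 * Uee * ne @ ne + Ueg * ng @ ne
for al in range(2):
    for be in range(2):
        H -= Jeg * S('g', al, be) @ S('e', be, al)   # -J_eg S^g_{ab} S^e_{ba}

# Every component of the total spin commutes with H.
for al in range(2):
    for be in range(2):
        Stot = S('g', al, be) + S('e', al, be)
        assert np.linalg.norm(H @ Stot - Stot @ H) < 1e-12
```

The hopping terms are flavor-diagonal bilinears and commute with $S_{\alpha \beta}$ for the same reason, so the full Hamiltonian is ${\rm SU}(N)$ invariant.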
In this case, the Hubbard model contains only the $t_g$ and $U_{gg}$ terms, and when $t_{g} \ll U_{gg}$ the standard degenerate perturbation theory[@auerbachbook] gives the spin model $$\label{eqn:hspin} H_{{\rm spin}} = J \sum_{\langle \br \br' \rangle} S_{\alpha \beta}(\br) S_{\beta \alpha}(\br') \text{,}$$ where $J = 2 t_g^2 / U_{gg}$, and we have defined $$S_{\alpha \beta}(\br) = S^g_{\alpha \beta}(\br) + S^e_{\alpha \beta}(\br) \text{.}$$ In this case, since no excited state atoms are present, $S_{\alpha \beta}(\br) = S^g_{\alpha \beta}(\br)$. The spin at each site transforms in the $m \times 1$ irreducible representation of ${\rm SU}(N)$ (Fig. \[fig:rect\_youngtab\]); this simply expresses the fact that the nuclear spins of the identical fermions are combined antisymmetrically. For $m = 1$, the spin transforms in the fundamental representation of ${\rm SU}(N)$, and when $N=2$ this is simply a $S = 1/2$ spin. The second type of Mott insulator is related to $S = 1$ Mott insulators of ${\rm SU}(2)$ spins. It is realized with $m$ ground state atoms *and* $m$ excited state atoms on each site. We consider $J_{eg} > 0$, in which case the single-site ground state is associated with a $m \times 2$ tableau. This can be seen by first viewing the ground state atoms as forming a spin transforming in the $m \times 1$ representation, which is required simply by Fermi statistics, and similarly for the excited state atoms. These two spins are then coupled by the $J_{eg}$ exchange term, and formally we need to solve a two-site problem, which is done in Appendix \[app:2site-exact\]. 
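The superexchange scale $J = 2 t_g^2 / U_{gg}$ quoted above can be checked by exact diagonalization of the two-site, $N = 2$, $m = 1$ Hubbard problem: the splitting between the antisymmetric (singlet) ground state and the symmetric (triplet) states should approach $2J = 4 t_g^2 / U_{gg}$ at large $U_{gg}/t_g$. A minimal sketch, assuming the $U n^2/2$ on-site convention of Eq. (\[eqn:hubb\]) and illustrative values $t = 1$, $U = 50$:

```python
import numpy as np
from functools import reduce

def fermion_ops(nmodes):
    """Annihilation operators via the Jordan-Wigner construction."""
    a = np.array([[0, 1], [0, 0]], complex)
    Z = np.diag([1.0, -1.0]).astype(complex)
    I = np.eye(2, dtype=complex)
    return [reduce(np.kron, [Z] * j + [a] + [I] * (nmodes - j - 1))
            for j in range(nmodes)]

t, U = 1.0, 50.0
c = fermion_ops(4)           # modes: (site 1, a), (site 1, b), (site 2, a), (site 2, b)
n = [ci.conj().T @ ci for ci in c]
nsite = [n[0] + n[1], n[2] + n[3]]
H = (U / 2) * (nsite[0] @ nsite[0] + nsite[1] @ nsite[1])   # U n^2 / 2 convention
for sp in range(2):          # hopping for both nuclear-spin flavors
    H += -t * (c[sp].conj().T @ c[2 + sp] + c[2 + sp].conj().T @ c[sp])

# Restrict to the two-particle sector and diagonalize.
Ntot = n[0] + n[1] + n[2] + n[3]
sector = np.where(np.isclose(np.diag(Ntot).real, 2))[0]
evals = np.linalg.eigvalsh(H[np.ix_(sector, sector)])
gap = evals[1] - evals[0]
assert abs(gap - 4 * t**2 / U) / gap < 0.01   # gap ~ 2J = 4 t^2/U at large U
```

For these parameters the exact gap is $(\sqrt{U^2 + 16 t^2} - U)/2$, which agrees with $4 t^2/U$ to better than one percent.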
We do not consider antiferromagnetic interspecies exchange, because this gives a single-site ground state with a $2m \times 1$ tableau, which is of the same type obtained with only ground state atoms. In the simple case $m=1$, the single-site ground states are of the form $$| \psi_{\br} \rangle = (c^\dagger_{\br g \alpha} c^\dagger_{\br e \beta} + c^\dagger_{\br g \beta} c^\dagger_{\br e \alpha} ) | 0 \rangle \text{;} \label{eqn:ge-single-site}$$ that is, the nuclear spins of the two fermions are combined symmetrically. When $N = 2$ and $m=1$, this is simply an $S = 1$ spin. More generally, the single-site ground states can be obtained from the highest-weight state $$| \psi^{{\rm hw}}_{\br} \rangle = c^\dagger_{\br g 1} c^\dagger_{\br e 1} \cdots c^\dagger_{\br g m} c^\dagger_{\br e m} | 0 \rangle$$ by repeated action with appropriate components of $S_{\alpha \beta}(\br)$. That is, they are linear combinations of states of the form $S_{\alpha \beta}(\br) | \psi^{{\rm hw}}_{\br} \rangle$, $S_{\alpha \beta}(\br) S_{\gamma \delta}(\br) | \psi^{{\rm hw}}_{\br} \rangle$, and so on. Again, $m=1$ best avoids issues of three-body loss, but we shall consider general $m$. Another potentially important loss mechanism is inelastic losses in collisions between two excited state atoms. This can be minimized by making the lattice for the excited state atoms very deep, effectively setting $t_e = 0$. In Sec. \[sec:models\], the type of ${\rm SU}(N)$ spin is specified by the two local constraint equations Eq. (\[eqn:num-constraint\]) and Eq. (\[eqn:color-constraint\]), and in Appendix \[app:1site-irrep\] it is shown that these two constraints imply that the spin transforms in the $m \times n_c$ representation.
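The claim that the nuclear spins in Eq. (\[eqn:ge-single-site\]) are combined symmetrically (equivalently, that the state is an orbital singlet) can be verified directly. The sketch below (a hypothetical minimal check) builds the four fermion modes for $N = 2$, $m = 1$ and confirms that $|\psi_{\br}\rangle$ is annihilated by the orbital generators $T^i = \frac{1}{2} c^\dagger_{a \alpha} \sigma^i_{a b} c^{\vphantom\dagger}_{b \alpha}$:

```python
import numpy as np
from functools import reduce

# Verify that |psi> = (c†_{g,1} c†_{e,2} + c†_{g,2} c†_{e,1})|0> is an orbital
# singlet: annihilated by T^i = (1/2) sum_{a,b,alpha} c†_{a,alpha} sigma^i_{ab} c_{b,alpha}.
def fermion_ops(nmodes):
    """Annihilation operators via the Jordan-Wigner construction."""
    a = np.array([[0, 1], [0, 0]], complex)
    Z = np.diag([1.0, -1.0]).astype(complex)
    I = np.eye(2, dtype=complex)
    return [reduce(np.kron, [Z] * j + [a] + [I] * (nmodes - j - 1))
            for j in range(nmodes)]

c = fermion_ops(4)                        # mode order: (g,1), (g,2), (e,1), (e,2)
cd = [m.conj().T for m in c]
cre = [[cd[0], cd[1]], [cd[2], cd[3]]]    # cre[orbital][alpha], orbital 0 = g, 1 = e
ann = [[c[0], c[1]], [c[2], c[3]]]

vac = np.zeros(16, complex); vac[0] = 1.0
psi = (cre[0][0] @ cre[1][1] + cre[0][1] @ cre[1][0]) @ vac
assert np.isclose(np.linalg.norm(psi), np.sqrt(2))   # the state is nonzero

sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]
T = []
for s in sigma:
    Ti = np.zeros((16, 16), complex)
    for a in range(2):
        for b in range(2):
            for al in range(2):
                Ti += 0.5 * s[a, b] * cre[a][al] @ ann[b][al]
    T.append(Ti)

assert all(np.linalg.norm(Ti @ psi) < 1e-12 for Ti in T)  # orbital singlet
```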
To make contact with that discussion, we now show that single-site ground states of the present Hubbard model, transforming in the $m \times 2$ representation, satisfy the constraint Eq. (\[eqn:color-constraint\]), that is $$\label{eqn:appendix-color-constraint} T^i_{\br} | \psi_{\br} \rangle = 0\text{,}$$ where $$T^i_{\br} = \frac{1}{2} c^\dagger_{\br a \alpha} \sigma^i_{a b} c^{\vphantom\dagger}_{\br b \alpha} \text{.}$$ Here, $a,b = e,g$, and we formally consider the $e,g$ labels as an index transforming in an ${\rm SU}(2)$ “orbital” space. Moreover, $\sigma^i$ are the $2 \times 2$ Pauli matrices ($i = 1,2,3$), and $| \psi_{\br} \rangle$ is a single-site ground state for the site $\br$. \[Since there are $2 m$ fermions on each site, the constraint Eq. (\[eqn:num-constraint\]) is obviously satisfied.\] The constraint Eq. (\[eqn:appendix-color-constraint\]) is obviously satisfied for the $m =1$ state given in Eq. (\[eqn:ge-single-site\]); the wavefunction is antisymmetric under interchange $e \leftrightarrow g$ and is thus an orbital singlet. The same holds for the highest-weight state $|\psi^{{\rm hw}}_{\br} \rangle$, since it is built as a product of orbital singlets $c^\dagger_{\br g \alpha} c^\dagger_{\br e \alpha}$ (no sum on $\alpha$). Because $[ S_{\alpha \beta}(\br) , T^i_{\br} ] = 0$, this immediately implies that Eq. (\[eqn:appendix-color-constraint\]) holds for all single-site ground states. When $t_e = 0$ and $t_g \ll U_{gg} , J_{eg}$, the spin Hamiltonian is given by the same form Eq. (\[eqn:hspin\]), only now $J = t_g^2/ [ 2 (U_{g g} + J_{eg}) ]$. The degenerate perturbation theory calculation needed to establish this, unlike in the case of only ground state atoms, is not simply a trivial generalization of the familiar calculation for the $S = 1/2$, ${\rm SU}(2)$ Hubbard model. While the end result of this calculation appeared in Ref. , the details were not presented, so we now present them here. 
First we consider a single lattice site, and note that the energy of $|\psi^{{\rm hw}}_{\br} \rangle$ (neglecting hopping) is $$E_0 = \frac{1}{2} U_{gg} m^2 + \frac{1}{2} U_{ee} m^2 + U_{eg} m^2 - J_{eg} m \text{.}$$ By ${\rm SU}(N)$ symmetry, this holds for any single-site ground state. Moreover, we note that $$\label{eqn:dpt-identity} c^\dagger_{\br g \alpha} c^{\vphantom\dagger}_{\br e \alpha} | \psi^{{\rm hw}}_{\br} \rangle = c^\dagger_{\br e \alpha} c^{\vphantom\dagger}_{\br g \alpha} | \psi^{{\rm hw}}_{\br} \rangle = 0 \text{,}$$ which also holds for any single-site ground state by ${\rm SU}(N)$ symmetry. Now we consider second-order degenerate perturbation theory for a two-site problem with adjacent lattice sites $\br_1$ and $\br_2$. (In second-order perturbation theory, there is no need to consider more than two sites.) We construct the effective Hamiltonian by building up its action on an arbitrary state $| \psi^1_{\br_1} \rangle | \psi^2_{\br_2} \rangle$ in the low-energy manifold (that is, $| \psi^1_{\br_1} \rangle$ and $| \psi^2_{\br_2} \rangle$ are arbitrary single-site ground states). The energy of the initial state is $2 E_0$. The intermediate state is obtained by hopping a single ground state fermion from $\br_1$ to $\br_2$ or vice versa. We consider hopping from $\br_1$ to $\br_2$ so the intermediate state is $$| \psi_{{\rm int}} \rangle = \sum_{\alpha} | \phi^1_{\br_1 \alpha} \rangle | \phi^2_{\br_2 \alpha} \rangle \text{,}$$ where $$\begin{aligned} | \phi^1_{\br_1 \alpha} \rangle &=& c^{\vphantom\dagger}_{\br_1 g \alpha} | \psi^1_{\br_1} \rangle \\ | \phi^2_{\br_2 \alpha} \rangle &=& c^\dagger_{\br_2 g \alpha} | \psi^2_{\br_2} \rangle \text{.}\end{aligned}$$ Acting on the intermediate state with the on-site part of the Hamiltonian and using the identity Eq. 
(\[eqn:dpt-identity\]) to evaluate the action of the $J_{eg}$ exchange term, the energy of the intermediate state is found to be $$E_{{\rm int}} = 2 E_0 + U_{gg} + J_{eg} \text{.}$$ Since the intermediate state is an eigenstate with energy independent of the initial state, the effective Hamiltonian is $$\begin{aligned} H_{{\rm eff}} &=& \frac{ - t^2_g }{ U_{gg} + J_{eg} } \Big[ {\cal P} c^\dagger_{\br_1 g \alpha} c^{\vphantom\dagger}_{\br_2 g \alpha} (1 - {\cal P} ) c^\dagger_{\br_2 g \beta} c^{\vphantom\dagger}_{\br_1 g \beta} {\cal P} \nonumber \\ &+& (\br_1 \leftrightarrow \br_2) \Big] \text{,}\end{aligned}$$ where ${\cal P}$ is the usual projector onto the ground state manifold, and the second term in the square brackets accounts for the process where a fermion first hops from $\br_2$ to $\br_1$. Because a single hopping process always leaves the ground state manifold (since it changes the fermion number on each site), we can drop the $(1 - {\cal P})$ factor and write $$\begin{aligned} H_{{\rm eff}} &=& \frac{ - t^2_g }{ U_{gg} + J_{eg} } \Big[ {\cal P} c^\dagger_{\br_1 g \alpha} c^{\vphantom\dagger}_{\br_2 g \alpha}c^\dagger_{\br_2 g \beta} c^{\vphantom\dagger}_{\br_1 g \beta} {\cal P}+ (\br_1 \leftrightarrow \br_2) \Big] \nonumber \\ &=& \frac{ 2 t^2_g}{U_{gg} + J_{eg} } \Big[ {\cal P} S^g_{\alpha \beta}(\br_1) S^g_{\beta \alpha} (\br_2) {\cal P} \Big] \text{,}\end{aligned}$$ where we dropped an additive constant in going to the second line. Now, $$S^g_{\alpha \beta}(\br) = \frac{1}{2} S_{\alpha \beta}(\br) + \frac{1}{2} \big[ S^g_{\alpha \beta}(\br) - S^e_{\alpha \beta}(\br) \big] \text{,}$$ where the second term transforms as a triplet in the orbital space. 
But the projector ${\cal P}$ forces every lattice site to be an orbital singlet, and therefore $${\cal P} S^g_{\alpha \beta}(\br_1) S^g_{\beta \alpha} (\br_2) {\cal P} = \frac{1}{4} S_{\alpha \beta}(\br_1) S_{\beta \alpha}(\br_2) \text{,}$$ so that $$H_{{\rm eff}} = \frac{ t^2_g}{2 (U_{gg} + J_{eg} )} S_{\alpha \beta}(\br_1) S_{\beta \alpha}(\br_2) \text{,}$$ the result we claimed above. Determining the irreducible representation of ${\rm SU}(N)$ spin from local constraints {#app:1site-irrep} ======================================================================================= Here, we show that the local constraints Eqs. (\[eqn:num-constraint\],\[eqn:color-constraint\]) imply that each spin transforms in the $m \times n_c$ irreducible representation of ${\rm SU}(N)$. Another way to state this fact is that the Hilbert space of a single lattice site, subject to the local constraints, transforms irreducibly under ${\rm SU}(N)$ in the $m \times n_c$ representation. To see this, it is helpful to think of spin and color rotations as a subgroup ${\rm SU}(N) \times {\rm SU}(n_c) \subset {\rm SU}(n_c N)$, where the fermions transform in the fundamental of ${\rm SU}(n_c N)$. By fermion antisymmetry, the first constraint \[Eq. (\[eqn:num-constraint\])\] implies that each site transforms in the $n_c m \times 1$ representation of ${\rm SU}(n_c N)$. ![Illustration of Young tableaux occurring in the decomposition of the $n_c m \times 1$ representation of ${\rm SU}(n_c N)$ into irreducible representations of the ${\rm SU}(N) \times {\rm SU}(n_c)$ subgroup, for the case $n_c = m = 2$. If, as described in the text, we project out the ${\rm SU}(n_c)$ irreducible representation corresponding to the tableau on the left, then the corresponding ${\rm SU}(N)$ tableau is as shown on the right. 
Note that the rows (columns) of the ${\rm SU}(n_c)$ tableau become the columns (rows) of the ${\rm SU}(N)$ tableau.[]{data-label="fig:youngexample"}](youngexample.eps){width="2in"} To understand the role of the second constraint \[Eq. (\[eqn:color-constraint\])\], we need to understand how this representation decomposes into irreducible representations of ${\rm SU}(N) \times {\rm SU}(n_c)$. The decomposition has the general form $$\label{eqn:decomp} ( n_c m \times 1)_{{\rm SU}(n_c N) } = \sum_i r^i_{{\rm SU}(N) } \otimes r^i_{{\rm SU}(n_c) }$$ This equation expresses the fact that the $(n_c m \times 1)$ representation of ${\rm SU}(n_c N)$ is a direct sum of irreducible representations of ${\rm SU}(N) \times {\rm SU}(n_c)$, labeled by $i$. We will first show that, for each term in this decomposition, $r^i_{{\rm SU}(N)}$ *uniquely* determines $r^i_{{\rm SU}(n_c)}$, and vice versa. Focusing on a single lattice site and dropping the site label for fermion operators, we consider the following (overcomplete) basis states for the $( n_c m \times 1)_{{\rm SU}(n_c N) }$ representation: $$| a_1, \alpha_1 ; \dots ; a_{n_c m}, \alpha_{n_c m} \rangle \equiv f^\dagger_{a_1 \alpha_1} \dots f^\dagger_{a_{n_c m} \alpha_{n_c m}} | 0 \rangle \text{.}$$ If $P(i)$ is a permutation of the integers $i = 1,\dots,n_c m$, then fermion antisymmetry implies $$\begin{aligned} && \qquad | a_{P(1)}, \alpha_{P(1)} ; \dots ; a_{P(n_c m)}, \alpha_{P(n_c m)} \rangle = \nonumber \\ && \operatorname{sgn} P | a_1, \alpha_1 ; \dots ; a_{n_c m}, \alpha_{n_c m} \rangle \text{,} \label{eqn:antisymmetry}\end{aligned}$$ where $\operatorname{sgn} P$ is the sign of the permutation. Suppose we want to project out a particular representation of ${\rm SU}(n_c)$. We do this by forming a ${\rm SU}(n_c)$ Young tableau with $n_c m$ boxes, and associating each box with a color index $a_i$. 
We then follow the usual procedure of first antisymmetrizing the $a_i$ indices occupying the same column, and second symmetrizing those occupying the same row. Because of the overall antisymmetry expressed in Eq. (\[eqn:antisymmetry\]), when in the first step we antisymmetrize the $a_i$ indices in a given column, we also simultaneously *symmetrize* the corresponding set of $\alpha_i$ indices. Similarly, the second step antisymmetrizes those $\alpha_i$ indices corresponding to a given row. This means that, in the process of projecting out a desired ${\rm SU}(n_c)$ representation, we have also automatically projected out a corresponding ${\rm SU}(N)$ representation. The tableau of the ${\rm SU}(N)$ representation is given by interchanging the role of rows and columns of the ${\rm SU}(n_c)$ tableau – see Fig. \[fig:youngexample\] for an example that clarifies the meaning of this statement. The constraint Eq. (\[eqn:color-constraint\]) dictates that we keep only the terms in the decomposition where $r^i_{{\rm SU}(n_c)}$ is the singlet representation $0_{{\rm SU}(n_c)}$. Since we have to form the corresponding tableau using $n_c m$ boxes, the only possible ${\rm SU}(n_c)$ tableau is $n_c \times m$, and the above discussion implies that the corresponding ${\rm SU}(N)$ tableau is $m \times n_c$. It can be seen by directly constructing a highest weight state that the representation $(m \times n_c)_{{\rm SU}(N) } \otimes 0_{{\rm SU}(n_c)}$ occurs only once in the decomposition. Therefore the constraint gives $$( n_c m \times 1)_{{\rm SU}(n_c N) } \to (m \times n_c)_{{\rm SU}(N)} \otimes 0_{{\rm SU}(n_c) } \text{,}$$ the desired result. Exact ground state energy of two-site problem {#app:2site-exact} ============================================= Here we consider the problem of two spins at $\br_1$ and $\br_2$, coupled by the Hamiltonian Eq. (\[eqn:hspin2\]).
We write $J_{\br_1 \br_2} = {\cal J} / N$, so that $${\cal H} = \frac{{\cal J}}{N} S_{\alpha \beta}(\br_1) S_{\beta \alpha}(\br_2) \text{.}$$ We shall calculate the exact (*i.e.* not large-$N$) ground state energy for arbitrary $N$, $m = N/k$ and $n_c$. It is convenient to define the Hermitian spin operators $$\hat{T}^{{\cal A}}_{\br} = f^\dagger_{\br a \alpha} T^{\cal A}_{\alpha \beta} f^{\vphantom\dagger}_{\br a \beta} \text{,}$$ where ${\cal A} = 1, \dots, N^2 - 1$ labels the ${\rm SU}(N)$ generators $T^{{\cal A}}$. These are chosen to satisfy the orthonormality condition $$\operatorname{tr} ( T^{\cal A} T^{\cal B} ) = \frac{1}{2} \delta^{ {\cal A} {\cal B} } \text{,}$$ and can be shown to satisfy the completeness identity (with ${\cal A}$ summed) $$T^{\cal A}_{\alpha \beta} T^{\cal A}_{\gamma \delta} = \frac{1}{2} \Big( \delta_{\alpha \delta} \delta_{\beta \gamma} - \frac{1}{N} \delta_{\alpha \beta} \delta_{\gamma \delta} \Big) \text{.} \label{eqn:sun-gen-identity}$$ Equation (\[eqn:sun-gen-identity\]) can be used to show $$\begin{aligned} {\cal H} &=& \frac{2 {\cal J}}{N} \hat{T}^{\cal A}_{\br_1} \hat{T}^{\cal A}_{\br_2} + \frac{ {\cal J} n_c^2 m^2}{N^2} \\ &=& \frac{ {\cal J}}{N} \Big[ ( \hat{T}^{\cal A}_{\br_1} + \hat{T}^{\cal A}_{\br_2} )^2 - ( \hat{T}^{\cal A}_{\br_1} )^2 - (\hat{T}^{\cal A}_{\br_2})^2 \Big] + \frac{ {\cal J} n_c^2 m^2}{N^2} \text{.} \nonumber\end{aligned}$$ Now, $(\hat{T}^{\cal A})^2 = \hat{T}^{\cal A} \hat{T}^{\cal A}$ is the quadratic Casimir of ${\rm SU}(N)$. In a given irreducible representation $r$ this operator is proportional to the identity, and its eigenvalue $C_2(r)$ can be computed from the structure of the Young tableau using a formula given in Ch. 19 of Ref. , which we now reproduce. Suppose the Young tableau has $n_{row}$ rows, each with length $b_i$ ($i = 1,\dots,n_{row}$) and $n_{col}$ columns, each with length $a_i$ ($i = 1, \dots, n_{col}$), and a total of $\ell$ boxes.
Then the eigenvalue of the Casimir is given by $$\label{eqn:casimir} C_2(r) = \frac{1}{2} \Big[ \ell ( N - \ell / N) + \sum_{i = 1}^{n_{row}} b_i^2 - \sum_{i = 1}^{n_{col}} a_i^2 \Big] \text{.}$$ Since each spin transforms in the $m \times n_c$ representation, we can use Eq. (\[eqn:casimir\]) to evaluate $ ( \hat{T}^{\cal A}_{\br_1} )^2 = (\hat{T}^{\cal A}_{\br_2})^2$. Moreover, by examining the Young tableaux appearing in the tensor product $(m \times n_c) \otimes (m \times n_c)$, and using Eq. (\[eqn:casimir\]) to evaluate $ ( \hat{T}^{\cal A}_{\br_1} + \hat{T}^{\cal A}_{\br_2} )^2$ for each tableau, we find that the two-spin ground state is the $2m \times n_c$ tableau, and that the corresponding ground state energy is $$E_0 = - \frac{ {\cal J} n_c m^2}{N} = - \frac{ n_c N {\cal J}}{k^2} \text{.}$$ Energy of $k$-cluster states {#app:kcluster} ============================ Here we compute the large-$N$ ground state energy of a single isolated $k$-cluster, a result which is used in the discussion of Sec. \[sec:simplex\]. We consider a spin model defined on an arbitrary connected graph with $k$ sites labeled by $s$, and with links labeled by $\ell$. The exchange energy is taken to be equal on all links and is $J = {\cal J} / N$. The mean-field Hamiltonian is $$H_{{\rm MFT}} = \frac{N}{\cal J} \sum_{\ell} \operatorname{tr} ( \chi^\dagger_{\ell} \chi^{\vphantom\dagger}_{\ell} ) + m \sum_s \operatorname{tr} (\mu_s) + {\cal H}_F \text{,}$$ where ${\cal H}_F = {\cal H}_K + {\cal H}_V$, and the latter two operators are constructed as in Eqs. (\[eqn:hk\],\[eqn:hv\]). We consider the following ansatz: $$\begin{aligned} \chi^{a b}_{\ell} &=& - \delta^{a b} \chi \\ \mu^{a b}_s &=& - \delta^{a b} z_s \chi \text{.} \label{eq:kclusteransatz}\end{aligned}$$ Here, $z_s$ is the coordination number of the site $s$. We shall see that $\chi > 0$ upon minimizing the energy with respect to $\chi$. 
With this choice, fixing the color and spin quantum numbers, the one-particle Hamiltonian that can be read off from ${\cal H}_F$ is proportional to the Laplacian matrix of the graph (with positive coefficient). Therefore the single particle ground state (for fixed color and spin) has zero energy, is unique, and its wavefunction is a constant. The unique many-particle ground state of ${\cal H}_F$ is obtained by filling this state with $k m n_c = n_c N$ fermions, one in each of the $n_c N$ possible combinations of color and spin states. The mean-field energy is therefore given entirely by the constant terms in $H_{{\rm MFT}}$, and is $$\begin{aligned} E_{{\rm MFT}} &=& \frac{n_c N N_b}{{\cal J}} \chi^2 - m n_c \chi \sum_s z_s \\ &=& \frac{n_c N N_b}{{\cal J}} \chi^2 - 2 m n_c N_b \chi \text{,}\end{aligned}$$ where $N_b$ is the number of links in the graph. Minimizing with respect to $\chi$, we find $$\label{eqn:kcluster-energy} E_{{\rm MFT}} = - \frac{n_c N N_b {\cal J}}{k^2} \text{.}$$ We know this must be the ground state energy of the isolated $k$-cluster because it saturates the bound Eq. (\[eqn:bond-gs-bound\]) provided by the ground state energy of the two-site problem. We note that this result also holds at any finite $N$. Schematically, this can be seen by noting that the ground state is the unique singlet that can be formed from the $k$ spins, which can be thought of as a $N \times n_c$ Young tableau, which is obtained by vertically stacking the $m \times n_c$ tableaux for each of the $k$ sites. Any pair of spins can then be seen to transform in the $2m \times n_c$ representation, which, by the discussion of Appendix \[app:2site-exact\], implies that the two-site Hamiltonian on the link connecting those sites is in its ground state. So the ground state energy is just the sum of the two-site ground state energies for each link in the graph, which again gives Eq. (\[eqn:kcluster-energy\]). 
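The closed-form results of the last two appendices — the Casimir formula Eq. (\[eqn:casimir\]), the two-site ground-state energy, and the minimized $k$-cluster energy Eq. (\[eqn:kcluster-energy\]) — are easy to cross-check with exact rational arithmetic. The sketch below is our own illustration (the function names are not from any established package); it sets ${\cal J} = 1$ and represents a tableau by its list of row lengths:

```python
from fractions import Fraction

def casimir(rows, N):
    """Quadratic Casimir C_2(r) of an SU(N) irrep from its Young-tableau
    row lengths b_i, via Eq. (casimir):
    C_2 = (1/2) [ ell (N - ell/N) + sum_i b_i^2 - sum_j a_j^2 ]."""
    ell = sum(rows)
    # column lengths a_j follow from transposing the tableau
    cols = [sum(1 for b in rows if b > j) for j in range(max(rows))]
    return Fraction(1, 2) * (ell * (N - Fraction(ell, N))
                             + sum(b * b for b in rows)
                             - sum(a * a for a in cols))

def two_site_E0(N, k, nc):
    """Exact two-site ground-state energy in units of the coupling J:
    E_0 = (1/N) [ C_2(2m x nc) - 2 C_2(m x nc) ] + nc^2 m^2 / N^2,
    with m = N/k; this should reduce to -nc * m^2 / N."""
    m = N // k
    single = casimir([nc] * m, N)        # the m x nc irrep of one site
    pair = casimir([nc] * (2 * m), N)    # the 2m x nc two-site ground state
    return Fraction(1, N) * (pair - 2 * single) + Fraction(nc**2 * m**2, N**2)

def cluster_energy(edges, k, N, nc):
    """Minimized mean-field energy (units of J) of an isolated k-site
    cluster with Nb = len(edges) equal bonds: the quadratic
    E(chi) = nc N Nb chi^2 - 2 m nc Nb chi is minimal at chi = m/N = 1/k,
    reproducing Eq. (kcluster-energy), E = -nc N Nb / k^2."""
    Nb = len(edges)
    m = Fraction(N, k)
    chi = m / N
    return nc * N * Nb * chi**2 - 2 * m * nc * Nb * chi
```

For instance, `two_site_E0(4, 2, 1)` returns $-1$, matching $-{\cal J} n_c m^2 / N$ with $N = 4$, $m = 2$, $n_c = 1$, and a triangular three-site cluster reproduces $-n_c N N_b {\cal J}/k^2$.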
Chiral spin liquid in large-$k$ limit {#app:largek} ===================================== Our demonstration that a constant magnetic field with a flux of $2\pi/k$ per plaquette is the lowest energy solution to the saddle point equations – on the square lattice, for $n_c = 1$ and $5 \leq k \leq 8$ – is purely numerical. A natural question which arises in this context is whether this can be supplemented by analytic arguments. The solution to the problem of a particle hopping on a square lattice in a constant magnetic field of flux $2\pi/k$ cannot be found analytically. Yet it is well known that the spectrum consists of $k$ bands (Landau levels). [@hofstadter76] Since the fermions we work with are at a filling fraction $1/k$ (filling all the bands would correspond to $N$ particles per site, while we have instead $m=N/k$ particles per site), they fill precisely the lowest Landau level. Yet the energy of a filled Landau level is not known analytically, except at very large $k$ where the problem becomes effectively continuous. Therefore, let us calculate the energetics of a saddle point solution with a flux of $2\pi/k$ per plaquette (corresponding to ACSL) in the limit of very large $k$ and compare it with other possible states at this $k$. Our analysis will be for the case $n_c = 1$, but also applies immediately to $n_c = 2$, since any $n_c = 1$ saddle point can be extended to a $n_c = 2$ saddle point of the diagonal form $$\begin{aligned} \chi^{a b}_{\br \br'} &=& \delta^{a b} \chi^a_{\br \br'} \text{ (no sum)} \\ \mu^{a b}_{\br} &=& \delta^{a b} \mu^a_{\br} \text{ (no sum),}\end{aligned}$$ where each pair $(\chi^a_{\br \br'}, \mu^a_{\br})$ is a $n_c = 1$ saddle point solution. The energy is simply a sum of energies of the $n_c = 1$ saddle points. We can obtain the nACSL and dCSL saddle points in this fashion from the ACSL saddle point, by choosing $\chi^1$ and $\chi^2$ to have the same or opposite magnetic fields, respectively.
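The statement that the spectrum at flux $2\pi/k$ consists of $k$ bands, with a nearly flat lowest band close to the continuum Landau-level energy, can be checked by diagonalizing the Bloch (Harper) Hamiltonian directly. The sketch below is a minimal numerical illustration of our own (assuming $\chi = 1$), not the production calculation used for the saddle-point search:

```python
import numpy as np

def harper_bloch(kx, ky, k, chi=1.0):
    """k x k Bloch Hamiltonian (Landau gauge) for a particle hopping with
    amplitude chi on the square lattice at flux 2*pi/k per plaquette."""
    H = np.zeros((k, k), dtype=complex)
    for n in range(k):
        H[n, n] = -2.0 * chi * np.cos(ky + 2.0 * np.pi * n / k)
        H[n, (n + 1) % k] += -chi * np.exp(1j * kx)   # hop n -> n+1
        H[(n + 1) % k, n] += -chi * np.exp(-1j * kx)  # Hermitian conjugate
    return H

def lowest_band(k, chi=1.0, nk=24):
    """Mean energy of the lowest of the k magnetic bands, sampled over
    the magnetic Brillouin zone kx in [0, 2pi/k), ky in [0, 2pi)."""
    es = [np.linalg.eigvalsh(harper_bloch(kx, ky, k, chi))[0]
          for kx in np.linspace(0.0, 2.0 * np.pi / k, nk, endpoint=False)
          for ky in np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)]
    return float(np.mean(es))
```

For $k = 10$ the result lies close to the continuum estimate $-4\chi + 2\pi\chi/k - \pi^2\chi/(2k^2)$ derived below in this appendix, and the near-flatness of the band confirms that its broadening can be neglected in the energetics.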
The other $n_c = 1$ states we consider can similarly be straightforwardly extended to $n_c = 2$ states in this fashion. We start by choosing the hoppings $\chi_{\br \br'}$ according to Eq. (\[eq:uniformfield\]). The first term in the mean-field energy is easy to calculate: $$\label{eq:bondssL} \frac{N}{\cal J} \sum_{\left< \br \br' \right>} \left| \chi_{\br \br' } \right|^2 = \frac{2 N_s N \chi^2}{{\cal J} }.$$ Here, as before, $N_s$ is the total number of sites on the lattice, and $2N_s$ is the total number of bonds. Now let us find the energy of a fully filled Landau level. A particle hopping on a lattice with a hopping strength $\chi$ without the magnetic field has the spectrum $$\begin{aligned} \label{eq:dispersion} \epsilon(k_x,k_y) & = & -2 \chi \cos(k_x ) - 2\chi \cos(k_y ) \approx \cr && -4 \chi + \chi \left(k_x^2 +k_y^2 \right) - \frac \chi {12} \left(k_x^4+k_y^4 \right)\end{aligned}$$ where a small $k_x$, $k_y$ expansion was performed (lattice spacing is taken to be unity). Looking at the quadratic term, we read off the effective mass of the particle $m^* = 1/(2 \chi)$. This gives the cyclotron frequency $$\omega = \frac{B}{m^*} = \frac{4\pi \chi}{k},$$ since the magnetic field is $B=2\pi/k$. The energy of the lowest Landau level is then $$\label{eq:landaulevel} E_L = -4 \chi + \frac 1 2 \omega = - 4 \chi + \frac{2 \pi \chi}{k}.$$ For what follows we would also like to calculate the $1/k^2$ correction to this result. The corrections come from the quartic term in the dispersion, which takes into account the deviation of the lattice from the continuum limit.
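As a quick sanity check of the expansion in Eq. (\[eq:dispersion\]), the exact dispersion and its small-momentum form can be compared numerically (a sketch we add here, with $\chi = 1$ assumed):

```python
import numpy as np

def dispersion(kx, ky, chi=1.0):
    """Exact lattice dispersion: -2 chi (cos kx + cos ky)."""
    return -2.0 * chi * (np.cos(kx) + np.cos(ky))

def small_k(kx, ky, chi=1.0):
    """Small-momentum expansion: -4 chi + chi k^2 - (chi/12)(kx^4 + ky^4)."""
    return -4.0 * chi + chi * (kx**2 + ky**2) - (chi / 12.0) * (kx**4 + ky**4)
```

At $k_x = 0.1$, $k_y = 0.2$ the two expressions agree to better than $10^{-6}$; the residual is controlled by the neglected $k^6$ terms.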
The correction to the Hamiltonian describing the motion of a particle in a magnetic field in the continuum due to this term in the dispersion can be found by minimal substitution (for example, in Landau gauge), and gives $$V = - \frac{\chi}{12} \left[ \left( -i \frac{\partial}{\partial x} + \frac{2 \pi y}{k} \right)^4 +\frac{\partial^4}{\partial y^4} \right].$$ Treating this as a perturbation, the unperturbed wave function is given by [@LandauLifshitzQuantum] $$\psi(k_x) = \left( \frac 2 k \right)^{\frac 1 4} e^{i k_x x} \exp \left( - \frac \pi k \left(y+ \frac{k k_x}{2 \pi} \right)^2 \right).$$ Calculating the matrix element $\left< \psi(k_x) \right| V \left| \psi(k_x) \right>$ we find $$E_L = -4 \chi + \frac{2\pi \chi}k - \frac{\pi^2\chi}{2 k^2}.$$ This is the energy of the lowest Landau level up to terms of order $1/k^2$. Notice that the Landau level remains flat, that is, $k_x$ independent. It is easy to see that it will remain flat up to arbitrary order in $1/k$. This means that the broadening of the Landau level is smaller than any power of $1/k$ (exponentially small in $k$) and can be ignored for the purposes of this calculation. The total number of particles filling the Landau level is $N N_s/k$, so the mean field energy becomes $$E_{\rm MFT} = \frac{2 N_s N \chi^2}{\cal J} + \frac{N N_s E_L}{k}.$$ Minimizing this with respect to $\chi$ we find $$\label{eq:magneticfields} E_{\rm MFT} = - \frac{{\cal J} N N_s}{k^2} \left( 2 - \frac{2 \pi}{k} + \frac{ \pi^2}{k^2} + \dots \right).$$ Now let us consider alternative states. One alternative state is a Fermi surface state, where all hoppings are real and equal to $\chi$. The energy of such a state is straightforward to calculate. We take particles moving with the dispersion given by Eq.
(\[eq:dispersion\]), fill all the states at an appropriate density up to the Fermi energy, to find the total energy per particle to be $$E_{F} = -4 \chi + \frac{2\pi \chi}k - \frac{\pi^2\chi}{3 k^2}.$$ This energy is slightly higher than the energy of the Landau level given in Eq. (\[eq:landaulevel\]). Therefore the energy after minimization with respect to $\chi$ is also slightly higher: $$\label{eq:fermisurface} E_{\rm MFT} = - \frac{{\cal J} N N_s}{k^2} \left( 2 - \frac{2 \pi}{k} + \frac{5 \pi^2}{6 k^2} + \dots \right).$$ Clearly, the energy in Eq. (\[eq:magneticfields\]) is lower than that in Eq. (\[eq:fermisurface\]), so the state with the uniform magnetic field wins. A second alternative state one might consider is a VCS state. Suppose the lattice is covered by clusters of exactly $k$ sites each, each containing $N_b$ bonds. Within each cluster $\chi_{\br \br' } $ are constant and equal to $\chi$, which is real, and $\chi_{\br \br' }$ for bonds connecting the clusters are zero. In Appendix \[app:kcluster\] it is found that the energy of a single cluster is given by Eq. (\[eqn:kcluster-energy\]). Since the cluster energies simply add, and the number of clusters is $N_s / k$, the total energy is $$E_{{\rm MFT}} = - \frac{{\cal J} N N_s N_b}{k^3} \text{.}$$ Now we can define $N_{be}$ by $$\frac{N_s N_{b}}{k} = 2 N_s - N_{be} \text{,}$$ so that $N_{be}$ is the total number of bonds not contained inside some cluster. We have $$E_{\rm MFT} = - {\cal J} \frac{2 N N_s}{k^2} + {\cal J} \frac{N N_{be}}{ k^2}.$$ $N_{be}$ scales with the total perimeter of all clusters. Since the perimeter of a single large cluster goes like $\sqrt{k}$, $N_{be}$ is proportional to $\sqrt{k}$ times the total number of clusters $N_s / k$, or $$N_{be} = c \frac{N_s}{\sqrt{k}},$$ where $c$ is some constant. This gives $$E_{\rm MFT} = - \frac{ {\cal J} N N_s}{k^2} \left(2 - \frac{c}{ \sqrt{k}} \right).$$ Comparing this with the uniform magnetic mean-field energy Eq.
(\[eq:magneticfields\]) as well as the uniform hopping given by Eq. (\[eq:fermisurface\]), we see that the magnetic field mean-field energy is again the lowest at large $k$. The arguments presented here do not prove that the uniform magnetic field is the lowest energy solution. That is demonstrated instead by the numerical solution of the mean-field equations. However, they do give some intuition as to why this solution wins over some of the possible alternatives. They also support the idea that chiral spin liquids are good ground states not just for a few intermediate values of $k$, but also for larger $k$. Therefore we conjecture that the ACSL is the large-$N$ ground state for $n_c = 1$ and all $k \geq 5$, and that the nACSL and dCSL are the degenerate large-$N$ ground states for $n_c = 2$ and all $k \geq 6$. Localization and braiding of fractional and non-Abelian particles {#app:carriers} ================================================================= One of the most striking properties of the topological liquid states discussed in this paper is the presence of particles with fractional and non-Abelian statistics. It is therefore interesting to discuss how, in principle, such particles may be localized and braided, especially in view of the intense interest in topological quantum computation using non-Abelian particles. Our intent here is not to develop a detailed and realistically achievable proposal to carry out such a braiding experiment in a cold atom system, but simply to discuss in principle how such braiding may be achieved, and point out some of the issues that arise. Development of more detailed proposals is an interesting subject for future work. It would also be interesting if our discussion could be sharpened by appropriate calculations. For ease of presentation, we focus on the case $n_c = m = 1$; generalization to other cases is straightforward.
We shall discuss how one may localize a particle called a holon that is spinless but carries the conserved atom number. The reason we consider holons rather than spinons is that holons may be localized simply by modifying the optically generated single-particle potential for the atoms. To do this, we need to go beyond the Heisenberg spin model, and for simplicity we consider a $t$-$J$ model where strong correlation restricts the number of atoms per site to be less than or equal to one. The Hamiltonian is $$\begin{aligned} H_{tJ} &=& - t_g \sum_{\left<\br \br' \right>} {\cal P} \left( c^\dagger_{\br' \alpha } c_{\br \alpha} + {\rm H.c.} \right) {\cal P} \nonumber \\ &+& J \sum_{\left<\br \br' \right>} c^\dagger_{\br' \alpha} c_{\br' \beta} c^\dagger_{\br \beta} c_{\br \alpha} \text{,}\end{aligned}$$ where $c^\dagger_{\br \alpha}$ creates a ground state atom in spin state $\alpha$ on site $\br$, and ${\cal P}$ is a projector onto the subspace with one or fewer atoms on each site. $J_{\br \br'}$ has been replaced by $J$ on every bond, and the sum in the first term is over nearest-neighbor bonds of the square lattice. $c_{\br \alpha}$ is said to insert a hole (with spin $\alpha$) at site $\br$. When there are no holes present, this model reduces to the Heisenberg spin model with $n_c = m = 1$. Below, we rely on the approach of Lee and Nagaosa to discuss this model.[@nagaosa92] To make contact with the description of the topological liquid states, we decompose the hole insertion operator as $$\label{eqn:tj-decomposition} c_{\br \alpha} = f_{\br \alpha} b^\dagger_{\br} \text{,}$$ where $f^\dagger_{\br \alpha}$ creates a spinon and $b^\dagger_{\br}$ is a bosonic creation operator creating a holon.
Spinon and holon densities obey the local constraint $$\label{eqn:tj-constraint} f^\dagger_{\br \alpha} f^{\vphantom\dagger}_{\br \alpha} + b^\dagger_{\br} b^{\vphantom\dagger}_{\br} = 1 \text{.}$$ Assuming the system (without holes) has an ACSL ground state, the spinons are low-energy quasiparticles, which couple to the Chern-Simons gauge field and thus acquire fractional statistics. The holon also carries gauge charge, and thus also acquires fractional statistics. It should be noted that, in the equations above, the holon and spinon are formal objects used to microscopically represent the $t$-$J$ model, and these formal objects are not the same as the low-energy quasiparticle degrees of freedom. Holons and spinons emerge as low-energy degrees of freedom when we study the $t$-$J$ model starting from an appropriate mean-field theory, [@nagaosa92] and then including fluctuations. Since the discussion here is only qualitative, and since the needed mean-field theory is very closely related to that introduced in Sec. \[sec:models\], we shall not introduce it here. It suffices to note that, at the mean-field level, both spinons and holons are free particles, which are minimally coupled to the fluctuating gauge field upon going beyond mean-field theory. We now consider introducing the external potential $$\delta H_{tJ} = - \sum_{\br} U(\br) c^\dagger_{\br \alpha} c^{\vphantom\dagger}_{\br \alpha} \text{,} \label{eqn:ext-pot}$$ and adding a single hole into the system. The sign in Eq. (\[eqn:ext-pot\]) is chosen so that a negative $U(\br)$ is an attractive potential for the added hole. Up to an additive constant, we may use Eq. (\[eqn:tj-constraint\]) to re-express the potential as $$\delta H_{tJ} = \sum_{\br} U(\br) b^\dagger_{\br } b^{\vphantom\dagger}_{\br } \text{.}$$ We could have equally chosen the potential to couple to the spinons and not the holons; the above choice is convenient, but is purely a convention.
For example, at the mean-field level, a change in the saddle point value of the Lagrange multiplier field enforcing the local constraint will apportion the effect of $U(\br)$ between holons and spinons. Therefore the system dynamically determines the effect of the physical external potential $U(\br)$ on holons and spinons. When the hole is added, Eq. (\[eqn:tj-decomposition\]) tells us that we both add a holon and remove one spinon. (The removed spinon should really be called a spinon hole, but for ease of discussion we will simply call it a spinon.) We suppose that $U(\br)$ is negative, appreciable only in a small spatial region, and just strong enough to bind a particle. Because $U(\br)$ couples to the conserved density, we expect it to bind a particle carrying atom number $-1$, but it is not obvious whether this particle will be a hole or a holon. To understand this, note that the added holon and spinon interact via some short-ranged potential. This potential may be attractive or repulsive. If the holon-spinon potential is attractive enough, the holon will be bound to the spinon, and they will be localized together by the external potential $U(\br)$. In this case we have localized a hole, which is not a fractional particle. On the other hand, if the holon-spinon potential is repulsive enough, a holon will be localized. In the latter case, we can then manipulate the fractional holon by adiabatically changing the external potential $U(\br)$. Multiple holons could be created by choosing $U(\br)$ to be a sum of several localized potentials. Since the goal is to create and manipulate a fractional particle, what should be done if a hole is localized by the external potential? One solution is to apply a time-varying Zeeman magnetic field, which will couple to the localized spinon and can be used to excite it to a delocalized state, leaving behind a localized holon.
If we do this to create a state with several localized holons, the delocalized spinon excitations will induce some errors when the holons are braided. However, these spinon excitations can be made to relax by whatever cooling mechanism was used to prepare the state of several localized holes in the first place. (Finding a cooling mechanism capable of achieving this for cold atom Mott insulators is a significant unsolved problem. Solving it is a prerequisite for *any* experiment probing fractional or non-Abelian statistics in such systems, which would have to be carried out at temperatures well below the bulk gap.) Some spinons may relax back into the localized states and re-form holon-spinon bound states; however, because these states are localized, the rates for other relaxation processes (for instance, relaxation into low-energy edge excitations) are expected to dominate. [^1]: Topological band insulators are characterized by topological properties of electron band structure, which is distinct from topological order as the term is used here. [^2]: The CSL is still referred to as a spin liquid, even though it spontaneously breaks parity and time-reversal.
--- abstract: | Results obtained from 9 X-ray observations of 3C 273 performed by [*ASCA*]{} are presented (for a total exposure time of about 160 000 s). The analysis and interpretation of the results are complicated by the fact that 4 of these observations were used for on-board calibration of the CCDs' spectral response. In particular, we had to pay special attention to the low energy band and 5–6 keV energy range where systematic effects could distort a correct interpretation of the data. The present standard analysis shows that, in agreement with official recommendations, a conservative systematic error (at low energies) of $\sim$ 2–3 $\times$ 10$^{20}$ cm$^{-2}$ must be assumed when analyzing [*ASCA*]{} SIS data. A soft-excess with variable flux and/or shape has been clearly detected, as well as flux and spectral variability that confirm previous findings with other observatories. An anti-correlation is found between the spectral index and the flux in the 2-10 keV energy range. With the old response matrices, an iron emission line feature with EW $\sim$ 50–100 eV was initially detected at $\sim$ 5.6-5.7 keV ($\sim$ 6.5-6.6 keV in the quasar frame) in 6 observations and, on two occasions, the line was resolved ($\sigma \sim$ 0.2-0.6 keV). Comparison with the Crab spectrum indicates, however, that this feature was mostly due to remaining calibration uncertainties between 5–6 keV. Indeed, fitting the data with the latest publicly available calibration matrices, we find that the line remains unambiguously significant in (only) the two observations with the lowest fluxes, where it is weak (EW $\sim$ 20-30 eV), narrow and consistent with being produced by Fe K$_{\alpha}$ emission from neutral matter.
Overall, the observations are qualitatively consistent with a variable, non-thermal X-ray continuum emission, i.e., a power law with $\Gamma$ $\sim$ 1.6 (possibly produced in the innermost regions of the radio-optical jet), plus underlying “Seyfert-like” features, i.e., a soft-excess and Fe K$_{\alpha}$ line emission. The data are consistent with some contribution (up to a level of a few tens of percent in the [*ASCA*]{} energy band) from a “Seyfert-like” direct continuum emission, i.e. a power law with $\Gamma$ $\sim$ 1.9 plus a reflection component, as well. When the continuum (jet) emission is in a low state, the spectral features produced by the Seyfert-like spectrum (soft-excess, iron line and possibly a steep power law plus a reflection continuum) are more easily seen. author: - 'M. Cappi$^{1,2}$, M. Matsuoka$^1$, C. Otani$^1$ & K.M. Leighly$^{1,3}$' title: 'The Complex X-ray Spectrum of 3C 273: [*ASCA*]{} Observations' --- Accepted for publication in P.A.S.J. Introduction ============ The remarkable discovery by EGRET on-board [*CGRO*]{} that blazars (i.e. BL Lacertae objects and flat-spectrum radio quasars) are strong $\gamma$-ray emitters has in recent years drawn the attention of the astronomical community to this class of objects. Observations indicate that the overall energy distribution of blazars shows the signature of two different types of emission mechanisms: beamed non-thermal jet radiation producing the overall broad band (from radio to $\gamma$-rays) continuum emission common to all blazars, and quasi-isotropic thermal radiation by an accretion-disk producing a “UV Bump” observed in a large number of quasars and Seyfert galaxies but absent in BL Lac objects (e.g. Sambruna, Maraschi & Urry, 1996, Elvis et al. 1994). The non-thermal continuum consists of IR-optical and $\gamma$-ray peaks.
The first peak is interpreted in terms of synchrotron emission and the second peak in terms of inverse Compton emission (see Urry & Padovani 1995 for a review on the subject). The different spectral characteristics observed from object to object may be due to the relative importance of one emission mechanism with respect to the other which, in turn, is likely to be related to the amount of beaming in one object or the other (e.g. Dondi & Ghisellini 1995). One of the most well-studied and characteristic examples of blazars is the bright quasar 3C 273 (z$\simeq$0.158). It is a good example where both non-thermal and thermal emission mechanisms might be observed because its broad band energy distribution exhibits two large peaks, one peaking in the IR-optical and one peaking in the $\gamma$-rays with a “UV bump” superimposed on it (Courvoisier et al. 1987, Lichti et al. 1995, von Montigny et al. 1997). The study of its X-ray properties may provide important clues to understanding the origin of both emission mechanisms because a) soft-X-ray excess emission has been observed and interpreted as the high-energy tail of the UV bump (Turner et al. 1985, Courvoisier et al. 1987, Walter et al. 1994, Leach, Mc Hardy & Papadakis 1995) and b) the 2-10 keV spectrum which is most likely associated with the $\gamma$-ray emission is known to be variable in time and shape (Turner et al. 1985), thus allowing for tests of different X- and $\gamma$-ray emission models. To clarify the mechanism responsible for the X-ray emission, high quality data are first necessary to disentangle the contributions from the different spectral components, namely the jet and Seyfert components. In this paper, we report on observations with [*ASCA*]{} during the first year of the mission. The source spectral properties are shown with particular attention to calibration uncertainties, most relevant in this source because it was used for on-board calibration of the CCDs.
We show evidence of complex spectral features (soft-excess, Fe K emission line, correlated flux and spectral variability) and discuss their possible interpretation. Throughout the analysis we use $H_{0}$ = 50 km s$^{-1}$ Mpc$^{-1}$ and $q_{0}$ = 0. Observations and Data Reduction =============================== Over the period from 1993 June to 1993 December, 3C 273 was observed 9 times with the gas imaging spectrometer (GIS) and solid-state imaging spectrometer (SIS) on-board the [*ASCA*]{} satellite (Tanaka, Inoue & Holt 1994). The observation log is given in Table 1. The SIS was operating in 1 CCD Faint mode during all the observations. Observations 2, 3, 5 and 6 have been used to calibrate each chip of the SIS (Dotani et al. 1996) and the remaining pointings are part of a multi-wavelength campaign on 3C 273 (von Montigny et al., 1997). Dark frame error (DFE) and echo corrections (Otani & Dotani 1994) have been applied to all the data. After removing hot and flickering pixels, standard selection criteria were used to select good observational intervals. The most relevant were elevation and bright-Earth angles greater than 5$^{\circ}$ and 25$^{\circ}$, respectively, and a magnetic cutoff rigidity greater than 8 GeV/c for SIS and 7 GeV/c for GIS. For all observations, source counts were extracted using a circular region centered on the source, of radius 6$^{\prime}$ for the GIS and 3$^{\prime}$ for the SIS. Hard X-ray emission was detected with [*SIGMA*]{} from a location about 15$^{\prime}$ away from 3C 273 (at R.A.\[1950\]=12$^h$27$^m$20$^s$, DEC\[1950\]=02$^\circ$30$^\prime$ with an associated error of $\sim$ 5$^\prime$) (Jourdain et al. 1992). We find no trace of such a source in the [*ASCA*]{} fields of view. Background spectra were obtained from the edges of the chips in the SIS and from the blank sky files in the GIS.
The GIS background region was always chosen in regions uncontaminated by NGC 6552 and at a similar off-axis distance to the source region (see Appendix A of Cappi et al. 1997 for a detailed discussion). Owing to the high count-rate of this bright source and the fact that the background typically contributed less than a few percent to the average count-rates in each observation, different choices of backgrounds (e.g. blank sky backgrounds for SIS or local backgrounds for GIS) did not introduce significant differences in the spectral results reported below. In total, [*ASCA*]{} collected about half a million counts per detector for an effective exposure time of $\sim$ 160 ks for GIS and $\sim$ 130 ks for SIS. Data preparation and spectral analysis have been done using version 1.2 of the XSELECT package and version 9.0 of the XSPEC program (Arnaud et al. 1991). Official EA/PSF-V1.0 files, gis\[23\]v4\_0.rmf and rsp1.1alphaPI1.6 matrices (released in 1994) were used for all the observations reported in the following. Newer XRT calibration files (EA/PSF-V2.0) and related GIS and SIS responses were also used in §3.4 for the study of the iron line emission. Results ======= GIS2/3 and SIS0/1 pulse-height spectra were binned so as to have at least 200 counts per bin, in the energy ranges 0.7-10 keV and 0.4-10 keV, respectively. Light curves were accumulated for all observing periods, but none of these indicated significant variability (although variations as high as $\sim$ 10-20% per observation period cannot be excluded). All photons were thus accumulated observation-by-observation for the spectral analysis. It is emphasized that 3C 273 was used to calibrate the SIS response function, imposing consistency with the GIS results (GIS was assumed to be well calibrated by fitting the Crab data with a single absorbed power law model; see Dotani et al. 1996 for details).
Therefore, great care must be taken when interpreting the SIS results since systematic effects could be present. As a rule, [*absolute*]{} values obtained from the SIS (only) could not, in principle, be trusted. However, [*relative*]{} measurements (like flux and/or spectral variability) obtained from comparing different observations are likely to give valuable information since, in this case, systematic effects should cancel each other. Moreover, all the observations were taken during the first year of the mission, and the last 8 observations were performed within a period of 12 days. Thus, we assumed that the instrument did not change significantly from one observation to the other. Flux Variability ---------------- First, GIS and SIS data were fitted separately with a single absorbed power law model, with the column density, $N_{\rm H}$, free to vary. Throughout the analysis, the column density is calculated in the observer frame (i.e. z=0). This model gives an acceptable description of all the spectra. The best-fitting model results are given in Table 2. It is emphasized that the 2-10 keV flux measurements (column 4 in Table 2) do not vary significantly even if $N_{\rm H}$ is fixed at the Galactic value (§3.3) or if an iron emission line is added to the model (§3.4). Figure 1 clearly shows that the flux varied in time by up to $\sim$ 60% on a time-scale of $\sim$ 200 days. Significant (20–30%) shorter-term variations on a time-scale of $\sim$ 1 day are also evident in the light-curve. The luminosity varied, accordingly, from L$_{(2-10 \rm keV)}$ $\sim$ 1.4 $\times$ 10$^{46}$ erg s$^{-1}$ in observation 1 to L$_{(2-10 \rm keV)}$ $\sim$ 2.1 $\times$ 10$^{46}$ erg s$^{-1}$ in observation 9. Note that the photon indices obtained from the GIS and SIS are in excellent agreement, while there are slight, but significant, discrepancies in the measurement of the low-energy absorption column. This effect will be considered in more detail in the following section.
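The luminosities quoted above follow from the observed 2-10 keV fluxes and the adopted cosmology ($H_{0}$ = 50 km s$^{-1}$ Mpc$^{-1}$, $q_{0}$ = 0). A minimal numerical sketch, assuming z = 0.158 for 3C 273 (a standard value, not stated explicitly in the text) and representative fluxes at the low and high ends of Table 2:

```python
# Hypothetical sketch: flux-to-luminosity conversion under the paper's
# adopted cosmology (H0 = 50 km/s/Mpc, q0 = 0); z = 0.158 is assumed here.
import math

H0 = 50.0          # Hubble constant, km s^-1 Mpc^-1 (as adopted in the paper)
C_KM_S = 2.998e5   # speed of light, km s^-1
MPC_CM = 3.086e24  # cm per Mpc
Z = 0.158          # redshift of 3C 273 (assumed, not given in the text)

# For q0 = 0 the luminosity distance is D_L = (c/H0) * z * (1 + z/2).
d_l_cm = (C_KM_S / H0) * Z * (1.0 + Z / 2.0) * MPC_CM

def luminosity(flux_cgs):
    """Isotropic luminosity (erg/s) from an observed flux (erg/cm^2/s)."""
    return 4.0 * math.pi * d_l_cm**2 * flux_cgs

# Representative observed 2-10 keV fluxes (erg cm^-2 s^-1) near the low
# (observation 1) and high (observation 9) ends of Table 2.
L_low = luminosity(1.1e-10)   # ~1.4e46 erg/s, matching the quoted value
L_high = luminosity(1.7e-10)  # ~2.1e46 erg/s, matching the quoted value
```

With these inputs the sketch reproduces the quoted $\sim$ 1.4 and $\sim$ 2.1 $\times$ 10$^{46}$ erg s$^{-1}$; the exact fluxes used by the authors (GIS or SIS, absorbed or unabsorbed) are not specified, so the agreement is only at the $\sim$10% level.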
The 2–10 keV flux measured with the SIS is also systematically $\sim$ 15–20% higher than that obtained from the GIS. The discrepancy is currently attributed to cross calibration uncertainties that have been largely reduced to less than 10% with the introduction of new (EA/PSF-V2.0) XRT calibration files. With the matrices used in the present analysis, the SIS flux can be considered to be the most reliable with an absolute uncertainty of less than $\sim$ 10% (K. Arnaud et al., [*ASCA*]{} calibration uncertainties, http:$//$heasarc.gsfc.nasa.gov$/$docs$/$asca$/$cal$\_$probs.html). The absolute value and long time-scale variations of the 2-10 keV flux of 3C 273 are roughly consistent with previous observations with [*EXOSAT*]{} and [*Ginga*]{} satellites (Turner et al. 1990). However, the better quality and temporal sampling of the present observations also reveal for the first time day-to-day variations of the 2–10 keV X-ray flux as large as $\sim$ 20%. Soft Excess and Excess Absorption --------------------------------- 90% confidence contours for the column densities versus photon indices obtained from the SIS spectra with the above absorbed power law model (Table 2) are shown in Fig. 2. The vertical lines represent the Galactic absorption (full line) and associated estimated errors (dotted lines) as obtained from measurements at 21 cm (Dickey & Lockman 1990). Fig. 2 clearly illustrates three interesting results: (1) The best-fit column density was significantly lower than the Galactic value during the first observation. This result can be promptly interpreted as evidence for a soft-excess in the data; (2) in all other observations, absorption was significantly, and systematically, higher than the Galactic value; (3) small, but significant, variations in the photon index are measured from observation to observation. Note that, though the GIS is less sensitive than the SIS at low energies, the soft-excess is also detected in the GIS data. 
Indeed, the GIS spectrum gives an upper limit for $N_{\rm H}$ of $\sim$ 0.65 $\times$ 10$^{20}$ cm$^{-2}$ during the first observation (significantly lower than $N_{\rm Hgal}$ $\sim$ 1.79 $\times$ 10$^{20}$ cm$^{-2}$) while all other observations give $N_{\rm H}$ values consistent with the Galactic absorption. Although this measurement is in agreement with that reported by Yaqoob et al. (1994), these authors did not report the detection of a soft-excess for this observation, probably because, in their work, SIS spectra were consistent with the Galactic absorption. This discrepancy is likely attributable to the fact that Yaqoob et al. (1994) used older response matrices obtained from the preliminary calibrations of the GIS and SIS instruments. The evidence for a soft-excess during observation 1 is further strengthened by the ratios of the pulse height spectra of observation 1 to those of observation 9, shown in Fig. 3 for the summed SIS0 and SIS1 spectra (upper panel) and the summed GIS2 and GIS3 spectra (lower panel). Similar results were obtained dividing observation 1 by the other observations, thus excluding the possibility that the steepening of the ratios below about 2 keV could be due to a slight increase of the absorption measured during observation 9. Since pulse height spectral ratios directly compare the raw count-rates from two datasets, problems related to the calibration uncertainties of the instrumental response are in principle removed. These ratios further confirm the presence of a continuum soft-excess emission at energies below $\sim$ 1.5 keV. In order to parameterize the measured soft-excess, two-component emission models were fitted to the GIS and SIS data of observation 1 with the absorption fixed at the Galactic value. Normalizations from the two detectors were free to vary independently. Results from the model-fittings are given in Table 3.
Attempts to model the soft-excess with a Raymond & Smith (1977) plasma model and a warm absorber model, as proposed in previous work, are not reported here since both models gave poor fits to the data, consistent with the absence of any blend of emission lines around 0.8–0.9 keV (quasar frame) or absorption features around 0.7–1.0 keV (quasar frame). Acceptable fits (Table 3) were obtained from a black-body (with kT $\sim$ 100 eV) plus power law model, a bremsstrahlung (with kT $\sim$ 240 eV) plus power law model, or a double power law model (with $\Gamma_{\rm soft} \sim$ 3). It is interesting to note that the improvement in $\chi^2$ obtained with a double power law model is statistically significant when compared to the black-body ($\Delta \chi^2 = 10$) or bremsstrahlung ($\Delta \chi^2 = 5$) models, at $\sim$ 99.8% and $\sim$ 96% confidence levels, respectively. This is similar to the findings of Leach et al. (1995) obtained from high quality [*ROSAT*]{} PSPC data and confirms that the soft-excess is better modeled by a power law than by a thermal emission model. This issue will be further addressed in §4.2. Moreover, it is likely that this soft-excess is intrinsically variable with time (in shape or in intensity) because our simulations indicate that, if constant, it should have been detected at least during observation 7, when the source was only $\sim$ 15% brighter than in observation 1. The following 4 calibration observations and 4 AO-I observations are, instead, all consistent with a constant absorption column of $\sim$ 3–4 $\times$ 10$^{20}$ cm$^{-2}$. This value is significantly higher than the Galactic absorption, by $\sim$ 2-3 $\times$ 10$^{20}$ cm$^{-2}$. However, a physical interpretation of this result cannot be made because the SIS responses were calibrated from some of these observations (Table 1) assuming that 3C 273 was absorbed by the Galactic column only.
Instead, this column should, by definition, be taken as a systematic error of the [*ASCA*]{} SIS response function for a standard analysis like the one presented in this work. Similar results have been reported from independent work on 3C 273 (Hayashida et al. 1995) and the Coma cluster (Dotani et al. 1996; Hashimotodani, private communication). Photon Index Variations ----------------------- Figure 2 also suggests small but significant variations of the power law photon index, at least from observation 2 to observation 9. However, because of the presence of the soft-excess and extra-absorption mentioned above, these variations may be due to an incorrect modeling of the low energy absorption. To avoid such complications, only the data above 2 keV were considered. Since the photon indices obtained by separately fitting the GIS and SIS data were all consistent at $\sim$ 90% confidence level, the data from the two instruments were fitted simultaneously tying the fitting parameters together but allowing the relative normalizations to be free. A single power law model plus absorption fixed at the Galactic value was used in all fits. It is pointed out that since the SIS best-fit spectra normally require a value of $N_{\rm H}$ systematically higher than the Galactic value when fitted between $\sim$ 0.4–10 keV (see §3.2), by fixing the absorption at the Galactic value, a systematic error will be introduced on the [*absolute*]{} value of the SIS photon indices. However, this (small) effect will not affect the measurements of [*relative*]{} values presented below. GIS results are not affected by these uncertainties. Best-fitting results are given in Table 4. The variations of the 2–10 keV photon indices with the observed 2–10 keV fluxes are shown in Fig. 4 for the (a) GIS+SIS data (all observations) and (b) GIS+SIS data, excluding calibration observations n. 2, 3 and 5. 
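The linear and rank correlation coefficients referred to in Table 5 are standard Pearson and Spearman statistics. As an illustration (not a reproduction of the paper's test, which uses the 2-10 keV fits of Table 4), the sketch below applies them to the GIS fluxes and photon indices listed in Table 2:

```python
# Illustration of the linear (Pearson) and rank (Spearman) correlation
# tests discussed in the text, in plain Python.  The inputs are the GIS
# fluxes (1e-10 cgs) and photon indices from Table 2 of this paper; the
# paper's actual correlation uses the >2 keV fits of Table 4, so the
# coefficients here only illustrate the sign of the trend.
import math

flux  = [1.04, 1.33, 1.54, 1.56, 1.45, 1.31, 1.22, 1.30, 1.57]
gamma = [1.63, 1.62, 1.60, 1.62, 1.53, 1.58, 1.59, 1.56, 1.54]

def pearson(x, y):
    """Linear (Pearson) correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(x):
    """Rank the data (1 = smallest), averaging ranks over ties."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

r_lin = pearson(flux, gamma)                 # linear correlation
r_rank = pearson(ranks(flux), ranks(gamma))  # Spearman rank correlation

# Both coefficients come out negative: the photon index decreases (the
# spectrum hardens) as the source brightens, the trend discussed below.
```

The significance levels quoted in the text then follow from comparing these coefficients against their null distributions for 9 points, which is why the two statistics give different confidence levels.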
The reason we ignored observations 2, 3 and 5 in dataset (b) is that these were performed using chips 0, 2 and 3 of SIS0 and chips 0, 1, 2 of SIS1, in which the gains (relative to the standard chip n. 1 of SIS0 and chip n. 3 of SIS1) are uncertain by as much as $\sim$ 2% (Dotani et al. 1996). This could introduce systematic errors in the spectral slope of as much as $\sim$ 0.05, which are much larger than our statistical errors and could, therefore, distort any significant statistical correlation. Figure 4 shows evidence for a spectral index change in 3C 273. Though the variations are not large ($\Delta \Gamma \lsimeq$ 0.1), they are significant in both datasets (Table 5). This point is further illustrated in Figure 3, which shows that, though the data are somewhat noisy above $\sim$ 8 keV, the 2-10 keV pulse height spectrum of observation 1 is steeper than that of observation 9, for both the GIS and SIS data. Table 5 shows the results from $\chi^2$-tests against constancy, linear correlation coefficients and rank correlation coefficients for the two datasets considered. The most interesting result obtained from this analysis is that there is a trend (most noticeable in Fig. 4) for the 2–10 keV photon index to be anti-correlated with the flux, i.e. a spectral flattening as the source brightens. This result is significant at a $\sim$ 99.95% and $\sim$ 81% confidence level for dataset (a) and at a $\sim$ 99.999% and $\sim$ 89% confidence level for dataset (b), for a linear and non-parametric correlation respectively (Table 5). It is emphasized that the same trend was also observed in the SIS and GIS instruments taken individually, though with somewhat lower significance. However, it should be noted that the significance of this result depends mostly on the 2 data points from observations 1 and 9. If real, this behavior is opposite to that commonly seen in Seyfert galaxies (e.g.
Mushotzky, Done & Pounds 1993 and references therein) and, on the contrary, provides an interesting analogy with BL Lac objects as discussed in §4. Iron Emission Line ------------------ The spectra have been inspected for the presence of an iron emission line. Our first analysis clearly reveals the presence of a broad ($\sigma \sim$ 0.4 keV) Fe K line with EW $\sim$ 90–100 eV during observations 1 and 7 (Table 6). Confidence contours in the Fe K parameter space $\sigma$–E and EW–E (observer frame) obtained from fitting the observation 1 data are shown in Fig. 5 as an example. A narrow line is also significant in 4 out of 7 of the remaining observations. The emission lines are all consistent with a neutral or mildly ionized iron emission line (at 6.4 keV in the quasar’s frame) at the $\sim$ 90% confidence level. We find no significant correlation between the equivalent width of the Fe K line and the X-ray flux, but it is interesting that the 2 strongest lines are detected while 3C 273 was in its lowest flux states (i.e. observations 1 and 7 reported in Table 6). However, as is evident from the dashed contours shown in Fig. 5, the GIS spectra of the Crab which were used to calibrate the GIS exhibit a miscalibration at similar energies, which can be modeled with a broad ($\sigma \simeq$ 0.4 $\pm$ 0.05 keV) line at E $\simeq$ 5.7 $\pm$ 0.05 keV with an EW $\simeq$ 50 $\pm$ 15 eV (see also Ebisawa et al. 1996 for more details about this miscalibration in the Crab’s GIS spectra). New calibration matrices (ascaarf v2.62 with version 2.0 of the XRT matrices) have, however, recently been distributed to the [*ASCA*]{} guest observers through the [*ASCA*]{} Goddard Guest Observer Facility (K. Ebisawa, http:$//$heasarc.gsfc.nasa.gov$/$docs$/$asca$/$xrt$\_$new$\_$response$\_$announce$/$announce.html).
The new calibrations reduce the systematic differences in flux between the GIS and SIS (see §3.1) and correct the GIS and SIS response matrices for the $\sim$ 5–6 keV spectral feature detected in the Crab spectra (Fukazawa, Ishida & Ebisawa 1997, but see also Gendreau & Yaqoob 1997). It is emphasized that the use of this improved calibration does not affect our conclusions on flux variability, soft-excess and spectral variability because, as stated in §3, only relative measurements have been considered. It gives, however, significantly different results on the iron emission line measurements. Now, spectral fitting yields narrower ($\sigma$ $<$ 0.21 keV and $<$ 0.6 keV) and weaker (EW $\sim$ 25 eV and 27 eV, observer frame) lines during observations 1 and 7, respectively (see Table 7). The line is also reduced in all other observations, yielding only upper limits ranging between $\sim$ 10–30 eV. Analysis with the latest available calibration suggests, therefore, that the line detected during observations 1 and 7 is very likely to be real but that, contrary to our first results with the previous calibration, it is consistent with being narrow and with having an EW $\sim$ 20-30 eV (observer frame). In the quasar’s frame, the Fe K line best-fit parameters are E$\simeq$6.47$^{+0.09}_{-0.07}$ keV, 6.31$^{+0.06}_{-0.06}$ keV and EW$\simeq$29$^{+23}_{-15}$ eV, 31$^{+18}_{-18}$ eV for observations 1 and 7, respectively. These results agree with the emission of a fluorescent narrow Fe K$_{\alpha}$ line from neutral matter (Makishima 1986). Discussion ========== On the X-ray Continuum Emission: Signatures of a Jet Component ? ---------------------------------------------------------------- The [*ASCA*]{} observations have shown that the 2-10 keV flux is variable with time by up to $\sim$ 60% on a time-scale of $\sim$ 200 days and shows day-to-day variations as large as $\sim$ 20%.
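The $\sim$ 20% day-to-day variations just quoted translate into a causality limit on the size of the hard X-ray source via the doubling time-scale. A minimal numerical sketch of that light-crossing argument (z = 0.158 for 3C 273 is assumed here, since the redshift is not stated in the text):

```python
# Sketch of the light-crossing-time limit on the hard X-ray source size,
# following the doubling-time estimate used in the discussion:
#   dt ~ (F_init / dF) * (dt_obs / (1 + z)),  R < c * dt.
# z = 0.158 for 3C 273 is assumed (not stated in the text).
C_CM_S = 2.998e10  # speed of light, cm/s
Z = 0.158          # redshift of 3C 273 (assumed)
DAY_S = 86400.0    # one day, in seconds

# A ~20% variation (F_init/dF = 5) observed over one day gives the
# source-frame doubling time-scale.
dt = (1.0 / 0.20) * DAY_S / (1.0 + Z)  # ~3.7e5 s

# Causality limit on the (unbeamed) source size.
R = C_CM_S * dt  # ~1e16 cm, as quoted in the text
```

The result, R of order 10$^{16}$ cm, is what makes the contrast with the kpc-scale radio-optical jet (about six orders of magnitude larger) discussed in the text.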
Thus, unless it is relativistically beamed, the hard X-ray source must be smaller than R = c $\times$ $\Delta$t $\sim$ 10$^{16}$cm where the source doubling time-scale, i.e. the time necessary for the source to vary by a factor of two, can roughly be estimated as $\Delta$t $\sim$ ${F_{\rm init} \over{\Delta F}}$$\times$${\Delta t_{\rm obs}\over{1+z}}$, where z is the source redshift. It is well known that 3C 273 shows a one-sided jet which is clearly observed at both radio and optical wavelengths (Bahcall et al. 1995 and references therein) which is most likely produced by synchrotron emission from relativistic electrons (3C 273 exhibits superluminal motion with $\beta_{\rm app} \simeq$ 8.0 $\pm$ 1.0, e.g. Vermeulen & Cohen 1994). The main jet consists of a number of bright, elongated, knots extending from $\sim$ 30 kpc to $\sim$ 50 kpc from the quasar with kpc-scale transverse dimensions in both the radio and optical images. Therefore, the dimensions of the radio-optical jet are about 6 orders of magnitude larger than the X-ray source dimension. This, together with the high X-ray luminosity of $\sim$ 1.5-2 $\times$ 10$^{46}$ erg s$^{-1}$ inferred from the X-ray data, excludes the possibility that the bulk of the X-ray emission is produced by the jet’s knots only (whatever the emission mechanism is). Therefore, if it is related to the jet, the 2-10 keV X-ray emitting region must be located in the innermost regions of the jet. These conclusions are consistent with high resolution X-ray imaging (e.g. [*ROSAT*]{} HRI) of the jet emission of 3C 273 (Röser et al. 1996). For the first time in this source, we find evidence for a statistically significant anti-correlation between the 2-10 keV flux and spectral index, implying hardening of the spectrum as the source brightens. This is opposite to lower luminosity AGNs (e.g. Seyfert 1 galaxies) which often show a positive correlation (Grandi et al. 
1992, Mushotzky, Done & Pounds 1993, and references therein) but similar to the anti-correlation between flux and spectral index often observed in BL Lacertae objects (at least in X-ray selected BL Lac objects; e.g. Giommi et al. 1990, Urry et al. 1996). This suggests that the bulk of the X-ray emission in 3C 273 is produced by a mechanism similar to the one invoked in BL Lac objects, i.e. a process related to the existence of a beam of relativistic particles (e.g. Fichtel 1995). This is consistent with the recent results of von Montigny et al. (1997), which show that the multi-wavelength spectrum (from radio to $\gamma$-rays) of 3C 273 can be explained by any of the most prominent theoretical models proposed to explain the high-energy $\gamma$-ray emission of BL Lac objects, e.g. synchrotron self-Compton (SSC; Maraschi, Ghisellini & Celotti 1992), inverse Compton on external photons from an accretion disk or a broad-line region (EC; Sikora, Begelman & Rees 1994), or synchrotron emission from ultra-relativistic electrons and positrons in a proton-induced cascade (Mannheim & Biermann 1992). On the basis of the present [*ASCA*]{} data, it is not possible, however, to distinguish between the different theoretical models since all of these could explain the observed spectral variability. Synchrotron losses, for example, have often been invoked to explain a steepening with decreasing flux (e.g. in PKS 2155-304, Sembay et al. 1993; in H0323+022, Kohmura et al. 1994; in Mkn 421, Takahashi et al. 1996). In these cases, the X-ray spectra were interpreted as the high energy tail of synchrotron emission, while the X-ray spectrum of 3C 273 is more likely due to Compton emission (von Montigny et al. 1997). However, one could expect that a similar behavior may also hold in the “Compton” bump, though perhaps on a different time-scale because of a possibly different emission region. Moreover, hysteresis “clockwise” flux-index relations were reported in these BL Lac objects.
Further observations are clearly necessary to test this hypothesis in 3C 273. Alternatively, there might be a mixture of, say, SSC and EC contributing to the X-ray emission, the spectrum being harder when the EC flux increases. Or else, as discussed next, there might be a contribution from a Seyfert-like spectrum with a steeper spectral slope, e.g. $\Gamma$ $\sim$ 1.7 as commonly seen in Seyfert galaxies (Mushotzky, Done & Pounds 1993), which steepens the spectrum as the source becomes fainter. A small contribution of $\sim$10-20% in the 2-10 keV band could steepen the observed spectrum by $\Delta \Gamma$ $\sim$ 0.1, thus explaining the flux-index anti-correlation. On the X-ray Spectral Features: Signatures of a Seyfert-like Component ? ------------------------------------------------------------------------ Another interesting result from our analysis is that the spectra intermittently show clear evidence for a separate soft component below $\sim$ 2 keV. This component was observed only during the first observation, when the source was faintest. The best fit values for this component (see §3.2) are in good agreement with previous [*EXOSAT*]{}, [*EINSTEIN*]{}, [*GINGA*]{} and [*ROSAT*]{} findings (e.g. Courvoisier et al. 1987, Wilkes & Elvis 1987, Turner et al. 1990, Staubert 1992, Leach et al. 1995), in particular if one allows for cross-calibration uncertainties and considers that previous observations usually found, or fixed, the 2-10 keV photon index at a slightly flatter value ($\Gamma \sim$ 1.5) than found with [*ASCA*]{}. The issue of interpreting the soft continuum cannot, however, be addressed in more detail with the present [*ASCA*]{} data since, as explained in $\S 3$, absolute values obtained from the SIS cannot in principle be trusted because 3C 273 was used to calibrate the SIS response (see Dotani et al. 1996 for details on the calibration procedure).
For example, because of the systematic excess absorption found in §3.2, which is most likely attributable to a calibration uncertainty, we cannot trust absolute values of the best-fit parameters for the soft-excess. Only relative measurements can be considered, which tell us that a soft excess component is indeed required by the data during observation 1 and not during the following observations. It should be pointed out, however, that as found by Leach et al. (1995) from the [*ROSAT*]{} PSPC data, the soft component is modeled better by a single power law with absorption at the Galactic value ($\Delta \chi^{2}$ = 10 and 5 compared to the black-body and bremsstrahlung models, respectively). This may indicate that the data are not well described by a model with a concave shape but rather prefer a straight or convex model. At this point, it is interesting to note that a recent [*Beppo-SAX*]{} observation of 3C 273 performed in July 1996, when the source was at a very low flux level ($\sim$ 7.1 $\times$ 10$^{-11}$ erg cm$^{-2}$ s$^{-1}$), shows evidence for both a soft excess emission (below $\sim$ 0.3 keV) and a strong absorption structure at $\sim$ 0.5 keV, indicating, possibly, the presence of a warm absorber in 3C 273 (Grandi et al. 1997). [*ASCA*]{} observed 3C 273 simultaneously with [*Beppo-SAX*]{} on that occasion. Results of this observation will be presented elsewhere (Yaqoob et al., in preparation). Fitting the [*ASCA*]{} observation 1 with a warm absorber model only does not, however, improve the spectral fitting significantly since, clearly, the data require some extra emission below $\sim$ 2 keV that cannot be explained with absorption features alone. Addition of an absorption edge at $\sim$ 0.5 keV to the double power law model increases the quality of the fit slightly but not significantly. The [*ASCA*]{} spectra also reveal evidence for iron line emission in 3C 273 on two different occasions, during observations 1 and 7, when the source was faintest.
The best-fit parameters of the Fe K line obtained from the fitting of the SIS data only with the latest available response matrices are consistent with the line being narrow and weak (EW $\sim$ 20-30 eV in both observations 1 and 7). These values are in reasonable agreement with the [*GINGA*]{} results of Turner et al. (1990) and the [*Beppo-SAX*]{} results of Grandi et al. (1997), who both found a significant iron line in 3C 273 when the source was in a low (F$_{(2-10 \rm keV)}$ $\lsimeq$ 1 $\times$ 10$^{-10}$ erg cm$^{-2}$ s$^{-1}$) state, similar to the results reported here. The line emission and the soft excess emission suggest that, at least during observations 1 and 7, a Seyfert-like component underlying a dominant simple power-law (jet ?) component contributes significantly to the X-ray spectrum of 3C 273. In order to give a rough quantitative estimate of the contribution from the Seyfert component and to test this overall picture, we tried to fit the broad-band ($\sim$0.4–300 keV) [*ASCA*]{} plus [*OSSE*]{} spectrum with a model consisting of the sum of: a power-law with $\Gamma$ $\simeq$ 1.5-1.6; a “typical” Seyfert-like spectrum (Nandra & Pounds 1994), i.e. a power-law (with $\Gamma$=1.9) plus a reflection component (with R=${\rm normalization\ of\ the\ reflected\ component\over{normalization\ of\ the\ direct\ component}}$=1, corresponding to a 2$\pi$ coverage of the reflector) and an associated iron line (with an equivalent width fixed at 150 eV with respect to the Seyfert continuum, e.g. George & Fabian 1991); and a black-body. The [*ASCA*]{} spectra are from observation 1 and the [*OSSE*]{} spectrum is from the observation performed during 1991 June 15-28 (viewing period n.3 reported in Johnson et al. 1995). As demonstrated by the unfolded spectrum and residuals shown in Fig.
6, an acceptable fit is obtained with a black-body temperature of kT $\sim$ 100 eV, slightly reduced compared to the results given in Table 3 because of the Seyfert power law contribution, and with the Seyfert-like spectrum contributing about 30% at 1 keV and about 10-20% in the 2-10 keV energy band to the total spectrum. However, it is stressed that while the present data are consistent with this picture, there is actually no direct evidence for a steep “Seyfert-like” power-law, since its contribution cannot be unambiguously disentangled from the softer black-body component. On the other hand, we have also shown in §3.2 that the soft-excess emission is likely to be variable in time and, as discussed above, is better fitted with a model with a convex shape rather than a concave one. These may be indications that some direct, steep, Seyfert-like component contributes partly to the soft-excess emission of 3C 273. An alternative explanation for the Fe K line emission may be reflection, off optically thick matter (e.g. broad-line region blobs and/or a molecular torus and/or an accretion disk, located near the jet region), of the jet continuum itself. In such a case, the estimation of the expected iron line intensity is complicated by the possibility that the X-ray and hard X-ray radiation could be beamed in the direction of, or away from, the reflecting matter, and its precise evaluation is beyond the scope of this paper. The effect of the unknown effective covering factor of the reflector should also be taken into account. Conclusions =========== To date, [*ASCA*]{} has observed 3C 273 10 times. Results from the first 9 observations, all performed during the first year of the mission, have been presented here. These confirm and expand the evidence that the X-ray emission of 3C 273 is complex, with different spectral components contributing to its X-ray emission.
Because 4 of the 9 observations were used for the on-board calibration of the CCDs, great care had to be taken when interpreting the observational results for this source. As a rule, [*absolute*]{} values, in particular those obtained from the SIS, require a detailed estimate and knowledge of the instrumental systematic errors to be trusted. [*Relative*]{} measurements (like flux and/or spectral variability) obtained from comparing different observations are, however, more reliable since they should not be affected by calibration uncertainties. With this caveat in mind, it is found that: 1. A conservative systematic error at low energies, corresponding to an extra-absorption column of $\sim$ 2–3 $\times$ 10$^{20}$ cm$^{-2}$, is found for the SIS response, consistent with the ASCA Team’s official prescriptions. 2. 2–10 keV flux variations by up to $\sim$ 60% on a time-scale of $\sim$ 200 days and day-to-day variations as large as $\sim$ 20% were observed. 3. Extra soft X-ray emission is required by the data during the first observation, when the source was at its lowest flux level. 4. Flux and spectral slope variations are clearly detected as well and, for the first time, there is statistically significant evidence that the index and flux are anti-correlated. 5. Iron line emission is detected in (only) the two observations with the lowest flux levels. In both cases the line is weak (EW $\sim$ 20-30 eV) but statistically significant at more than the 99% confidence level, narrow, and consistent with Fe K$_{\alpha}$ emission from neutral matter. We then speculate that all the above observable properties of the X-ray spectrum of 3C 273 can be interpreted in terms of the sum of two emission mechanisms.
These are a non-thermal emission from the innermost regions of the jet, which dominates the 2-10 keV region and whose signatures are the spectral variability and a flat ($\Gamma \sim 1.6$) power law continuum (that extrapolates well into higher energies), plus a diluted Seyfert-like spectrum whose signatures are the soft-excess and iron line emission. The newly discovered index-flux anti-correlation may be interpreted either by intrinsic variations of the jet power law index or by some contribution of the Seyfert-like continuum spectrum (say, a power law with $\Gamma \sim 1.9$) as the jet component varies. The overall scenario predicts that when the (dominant) jet component is in a low flux state, the spectral features produced by the Seyfert-like spectrum should be more easily detected (allowing for variability of the Seyfert-like spectrum itself). ACKNOWLEDGEMENTS {#acknowledgements .unnumbered} ================ We are grateful to the [*ASCA*]{} team in ISAS for their operation of the satellite and to the [*ASCA*]{} GOF at NASA/GSFC for their assistance in data analysis. M.C. acknowledges colleagues in the Institute of Physical and Chemical Research (RIKEN) for their warm hospitality and the Italian Space Agency (ASI) for financial support. C.O. acknowledges the Special Postdoctoral Researchers Program of RIKEN for support. KML gratefully acknowledges support through NAG5-3307 ([*ASCA*]{}). We thank T. Yaqoob, T. Dotani, K. Gendreau and T. Kotani for very helpful discussions on the calibration-related issues and H. Kubo and G. Ghisellini for useful comments. Arnaud, K.A., Haberl, F., & Tennant, A., 1991, XSPEC User’s Guide, ESA TM-09 Bahcall, J.N., Kirhakos, S., Schneider, D.P., Davis, R.J., Muxlow, T.W.B., Garrington, S.T., Conway, R.G., & Unwin, S.C., 1995, ApJL, 452, 91 Cappi, M., Matsuoka, M., Comastri, A., Brinkmann, W., Elvis, E., Palumbo, G.G.C., & Vignali, C., 1997, ApJ, 478, 492 Courvoisier, T.J.L., et al., 1987, A&A, 176, 197 Dickey, J.M.
& Lockman, F.J., 1990, ARA&A, 28, 215 Dondi, L., & Ghisellini, G., 1995, MNRAS, 273, 583 Dotani, T., et al., 1996, [*ASCA*]{} News n. 4, 3 Ebisawa, K., Ueda, Y., Inoue, H., Tanaka, Y., & White, N.E., 1996, ApJ, 467, 419 Elvis, M., et al., 1994, ApJS, 95, 1 Fichtel, C.E., 1994, ApJS, 90, 917 Fukazawa, Y., Ishida, M., & Ebisawa, K., 1997, [*ASCA*]{} News n. 5, 3 Gendreau, K., & Yaqoob, T., 1996, [*ASCA*]{} News n. 5, 8 George, I.M., & Fabian, A.C., 1991, MNRAS, 249, 352 Giommi, P., Barr, P., Pollock, A.M.T., Garilli, B., & Maccagni, D., 1990, ApJ, 356, 432 Grandi, P., Sambruna, R.M., Maraschi, L., Matt, G., Urry, M., & Mushotzky, R.F., 1997, ApJ, 487, 636 Hayashida, K., Miura, N., Hashimotodani, K., & Murakami, S., 1995, ISAS Internal Report Johnson, W.N., et al., 1995, ApJ, 445, 182 Jourdain, E., et al., 1992, ApJL, 395, 69 Kohmura, Y., Makishima, K., Tashiro, M., Ohashi, T., & Urry, C.M., 1994, PASJ, 46, 131 Leach, C.M., McHardy, I.M., & Papadakis, I.E., 1995, MNRAS, 272, 221 Lichti, G.G., et al., 1995, A&A, 298, 711 Makishima, K., 1986, in Lecture Notes in Physics, Vol. 266, The Physics of Accretion onto Compact Objects, ed. K.O. Mason, M.G. Watson & N.E. White (Berlin: Springer), 249 Mannheim, K., & Biermann, P.L., 1992, A&A, 253, L21 Maraschi, L., Ghisellini, G., & Celotti, A., 1992, ApJ, 397, L5 Mushotzky, R.F., Done, C., & Pounds, K.A., 1993, ARA&A, 31, 717 Nandra, K., & Pounds, K.A., 1994, MNRAS, 268, 405 Otani, C., & Dotani, T., 1994, [*ASCA*]{} News n. 2, 25 Raymond, J.C., & Smith, B.W., 1977, ApJS, 35, 419 Röser, H.-J., Meisenheimer, K., Neumann, M., & Conway, R.G., 1996, in Proc. Röntgenstrahlung from the Universe, Würzburg, Germany, ed. H.-U. Zimmermann, J.E. Trümper & H. Yorke, MPE Report 263, 499 Sambruna, R.M., Maraschi, L., & Urry, C.M., 1996, ApJ, 463, 444 Sembay, S., Warwick, R.S., Urry, C.M., Sokoloski, J., George, I.M., Makino, F., Ohashi, T., & Tashiro, M., 1993, ApJ, 404, 112 Sikora, M., Begelman, M.C.
& Rees, M., 1994, ApJ, 421, 153 Staubert, J., 1992, in “X-ray emission from Active Galactic Nuclei and the Cosmic X-ray Background”, Ed. W. Brinkmann & J. Trümper (MPE Report 235), p 42 Takahashi, T., et al., 1996, ApJL, 470, 89 Tanaka, Y., Inoue, H., & Holt, S.S., 1994, PASJ, 46, 37 Turner, M.J.L., et al. 1990, MNRAS, 244, 310 Urry, C.M., et al., 1996, ApJ, 463, 424 Urry, C.M., & Padovani, P., 1995, PASP, 107, 803 Vermeulen, R.C., & Cohen, M.H., 1994, ApJ, 430, 467 von Montigny, C., et al., 1997, ApJ, 483, 161 Walter, R., Orr, A., Courvoisier, T.J.L., Fink, H.H., Makino, F., Otani, C., & Wamsteker, W., 1994, A&A, 285, 119 Wilkes, B.J., & Elvis, M., 1987, ApJ, 323, 243 Yaqoob, T., et al. 1994, PASJ, 46, L49 --- ---------------- ------- ------- ------ ------ ------ ----------------------- -- 1 08/06/93 20:11 33300 24945 2.25 3.25 1CCD PV (Yaqoob et al. 94) 2 15/12/93 12:34 18130 14207 2.85 4.10 1CCD PV calib s0c2 3 15/12/93 23:53 21230 17033 3.88 4.96 1CCD PV calib s0c0 4 16/12/93 12:11 13295 9488 3.67 4.44 1CCD AOI 5 19/12/93 23:45 19867 16136 3.04 4.17 1CCD PV calib s0c3 6 20/12/93 10:45 19390 15497 2.88 5.32 1CCD PV calib s0c1 7 20/12/93 22:03 12291 10398 2.75 3.69 1CCD AOI 8 23/12/93 23:35 11605 10325 2.84 3.72 1CCD AOI 9 27/12/93 13:58 10493 7169 3.39 4.37 1CCD AOI --- ---------------- ------- ------- ------ ------ ------ ----------------------- -- : Observations log $^{*}$ Exposure and count-rates are average for GIS and SIS detectors. 
--- ----- ------------------------ ------------------------ ------ -----------
1   GIS   $<$ 0.65                 1.63$_{-0.01}^{+0.01}$   1.04   1.34/518
    SIS   $<$ 0.19                 1.64$_{-0.01}^{+0.01}$   1.23   1.62/350
2   GIS   $<$ 1.86                 1.62$_{-0.02}^{+0.02}$   1.33   1.04/412
    SIS   4.31$_{-0.59}^{+0.57}$   1.60$_{-0.02}^{+0.01}$   1.53   1.62/296
3   GIS   $<$ 1.80                 1.60$_{-0.02}^{+0.01}$   1.54   1.05/546
    SIS   4.48$_{-0.56}^{+0.52}$   1.60$_{-0.01}^{+0.02}$   1.70   1.73/358
4   GIS   3.09$_{-1.84}^{+1.83}$   1.62$_{-0.02}^{+0.03}$   1.56   0.95/382
    SIS   4.18$_{-0.68}^{+0.64}$   1.59$_{-0.02}^{+0.02}$   1.71   1.02/445
5   GIS   $<$ 1.50                 1.53$_{-0.01}^{+0.02}$   1.45   1.16/463
    SIS   3.54$_{-0.55}^{+0.54}$   1.53$_{-0.02}^{+0.02}$   1.58   1.94/319
6   GIS   2.69$_{-1.65}^{+1.73}$   1.58$_{-0.02}^{+0.02}$   1.31   1.105/443
    SIS   3.67$_{-0.59}^{+0.60}$   1.56$_{-0.02}^{+0.02}$   1.51   1.30/300
7   GIS   $<$ 2.44                 1.59$_{-0.02}^{+0.03}$   1.22   0.96/263
    SIS   3.55$_{-0.71}^{+0.68}$   1.58$_{-0.03}^{+0.02}$   1.44   1.22/430
8   GIS   $<$ 3.61                 1.56$_{-0.02}^{+0.03}$   1.30   1.09/259
    SIS   3.89$_{-0.71}^{+0.70}$   1.55$_{-0.02}^{+0.02}$   1.48   1.08/435
9   GIS   3.57$_{-2.10}^{+2.15}$   1.54$_{-0.02}^{+0.03}$   1.57   1.01/272
    SIS   5.01$_{-0.85}^{+0.83}$   1.52$_{-0.03}^{+0.02}$   1.82   1.04/391
--- ----- ------------------------ ------------------------ ------ -----------

: Results Using a Single Absorbed Power Law Model

$^{a}$ Absorption column density in units of $10^{20}$ cm$^{-2}$.
$^{b}$ Observed flux in units of $10^{-10}$ erg cm$^{-2}$ s$^{-1}$.
Note: Errors are 90% confidence for 2 interesting parameters ($\Delta \chi^2$=4.61).

------------------------ -------------------------- -------------------------- ------ ----------
black body + power law   $119^{+16}_{-16}$          ${1.64^{+0.02}_{-0.02}}$   1.07   1.46/869
bremss. + power law      ${281^{+89}_{-70}}$        ${1.64^{+1.66}_{-1.62}}$   2.38   1.45/869
Two power laws           ${2.99^{+0.84}_{-0.85}}$   ${1.59^{+0.08}_{-0.09}}$   5.70   1.44/869
------------------------ -------------------------- -------------------------- ------ ----------

: SIS – Two Component Models Fitted to the Observation 1 Data

$^{a}$ Calculated from only the soft component and the SIS normalization. Values with the GIS normalization were approximately 15% lower.
Note: Intervals are at 90% confidence for 2 interesting parameters.

--- ------------------------ ------ ----------
1   1.62$_{-0.01}^{+0.01}$   1.22   1.50/454
2   1.58$_{-0.02}^{+0.02}$   1.48   1.47/334
3   1.59$_{-0.01}^{+0.01}$   1.67   1.40/498
4   1.56$_{-0.02}^{+0.02}$   1.69   1.05/434
5   1.52$_{-0.01}^{+0.01}$   1.56   1.22/403
6   1.57$_{-0.01}^{+0.01}$   1.49   1.15/371
7   1.57$_{-0.02}^{+0.02}$   1.41   1.24/370
8   1.55$_{-0.02}^{+0.02}$   1.47   1.02/378
9   1.52$_{-0.02}^{+0.02}$   1.77   1.06/355
--- ------------------------ ------ ----------

: Fits between 2-10 keV with a Single Absorbed Power Law Model - $N_{\rm H} \equiv N_{\rm Hgal}$

$^{a}$ Observed SIS flux in units of $10^{-10}$ erg cm$^{-2}$ s$^{-1}$. GIS flux was typically $\sim$15-20% lower.
Note: Intervals are at 90% confidence for 1 interesting parameter ($\Delta \chi^2$=2.71).

--------------- --- ------ ------------------------ --------- ----------------------------- --------- ------
GIS+SIS (all)   9   35.1   5.7 $\times$ 10$^{-5}$   $-0.64$   $\sim$ 0.05                   $-0.48$   0.19
GIS+SIS$^d$     6   20.4   2.3 $\times$ 10$^{-3}$   $-0.89$   $\sim$ 1 $\times$ 10$^{-3}$   $-0.71$   0.11
--------------- --- ------ ------------------------ --------- ----------------------------- --------- ------

: $\Gamma_{\rm 2-10 keV}$ – F$_{\rm X}$(2-10 keV) correlations

$^{a}$ $\chi^2$ value and corresponding probability (p) for a $\chi^2$ test against constancy.
$^{b}$ Linear correlation coefficient (r) and corresponding probability (p) for a linear correlation of the data.
$^{c}$ Spearman rank-order coefficient (r$_{\rm s}$) and corresponding probability (p) for a non-parametric correlation of the data.
$^{d}$ GIS plus SIS data excluding observations 2, 3 and 5, which were used to calibrate the “non-standard” SIS chips.

[cccccc]{} & & & & &\
& & & & &\
\
Obs. 1 & 1.63$^{+0.01}_{-0.01}$ & 5.59$^{+0.04}_{-0.04}$ & 0 (fixed) & 40$^{+10}_{-15}$ & 1.44/454\
& 1.63$^{+0.01}_{-0.01}$ & 5.67$^{+0.10}_{-0.12}$ & 0.43$^{+0.23}_{-0.19}$ & 100$^{+48}_{-34}$ & 1.43/453\
Obs. 7 & 1.57$^{+0.02}_{-0.02}$ & 5.46$^{+0.04}_{-0.04}$ & 0 (fixed) & 38$^{+17}_{-17}$ & 1.21/372\
& 1.57$^{+0.02}_{-0.02}$ & 5.44$^{+0.16}_{-0.15}$ & 0.28$^{+0.25}_{-0.10}$ & 81$^{+29}_{-29}$ & 1.20/371\
\
Obs. 1 & 1.56$^{+0.01}_{-0.01}$ & 5.59$^{+0.08}_{-0.06}$ & 0 (fixed) & 25$^{+20}_{-13}$ & 1.26/454\
& 1.56$^{+0.01}_{-0.01}$ & 5.59$^{+0.15}_{-0.10}$ & $<$ 0.21 & 27$^{+50}_{-12}$ & 1.26/453\
Obs. 7 & 1.52$^{+0.04}_{-0.03}$ & 5.45$^{+0.05}_{-0.05}$ & 0 (fixed) & 27$^{+16}_{-15}$ & 1.23/372\
& 1.52$^{+0.04}_{-0.04}$ & 5.45$^{+0.07}_{-0.06}$ & $<$ 0.59 & 34$^{+11}_{-24}$ & 1.23/371\

$^{*}$ The continuum emission was fitted between 2-10 keV with a single absorbed power law with $N_{\rm H}$=$N_{\rm Hgal}$=1.79 $\times$ 10$^{20}$ cm$^{-2}$.
Note: Intervals are at 90% confidence for 1 interesting parameter ($\Delta \chi^2$=2.71).
---
abstract: 'The Internet-of-Things (IoT) envisions integrating and coordinating real-world objects so that they communicate and collaborate to perform daily tasks in a more intelligent and efficient manner. To realise this vision, this paper studies the design of a large-scale IoT system for a smart grid application, which comprises a large number of home users and requires fast response times. In particular, we focus on the messaging protocol of a universal IoT home gateway, where our cloud-enabled system consists of a backend server, a unified home gateway (UHG) at the end users, and a user interface for mobile devices. We discuss the features such an IoT system needs to support a large-scale deployment with a UHG and real-time residential smart grid applications. Based on these requirements, we design an IoT system using the XMPP protocol and implement it in a testbed for energy management applications. To show the effectiveness of the designed testbed, we present some results using the proposed IoT architecture.'
author:
- 'Sanjana Kadaba Viswanath, Chau Yuen,  Wayes Tushar,  Wen-Tai Li,  Chao-Kai Wen,  Kun Hu, Cheng Chen, and Xiang Liu,  [^1] [^2] [^3] [^4]'
title: 'System Design of Internet-of-Things for Residential Smart Grid'
---

Introduction {#sec:introduction}
============

The smart grid, with its two-way information and power flow capabilities through ubiquitous interconnection of equipment in power networks, enables the internet-of-things (IoT) to control and coordinate smart devices, and thus paves the way towards energy management of large-scale systems [@Ruilong-TII:2015; @Tushar-TIE:2014; @Liu:2014]. However, to achieve such seamless deployment and form the IoT, the smart devices need a set of capabilities including communication and cooperation, addressability, identification, sensing, actuation, interoperability, embedded information processing, and user interfacing.
Further, to achieve these capabilities, a number of protocols are required to define the communication patterns and software features between the devices, and approaches must be determined that support the Internet’s end-to-end functionality in order to create a user-friendly smart technology with IoT [@Liu-TSG:2015]. This motivates thorough research on setting up an IoT testbed that targets residential consumers, provides them with real-time information, and keeps the latency of controlling IoT devices to a minimum. In this respect, this paper focuses on the IoT elements, the protocols, and the testbed setup for IoT environments, along with the software designs that have been used to monitor and control consumers’ energy usage patterns. We have deployed smart home technology in a real-world scenario where each housing unit has three bedrooms and a living room and can accommodate 6-9 consumers. Each unit consists of sensors, actuators, smart meters, smart plugs, and a Universal Home Gateway (UHG), which together establish a home area network (HAN). Each of these smart devices communicates with the UHG through a different communication protocol. The UHG, in turn, interacts with the cloud server, where most of the processing is done. The implemented system can control and manage energy based on published dynamic-pricing information and can act as an energy management system. These capabilities of the UHG enable automatic demand response, such as avoiding appliance usage during peak hours based on price signals and peak-load shaving; third-party engagement in performing different tasks, including energy and water monitoring, security and fire control for users; participation of residents in the real-time energy market; and building smart homes. Further, we have developed an Android mobile application to provide remote services to the consumers.
Due to the profound interaction of smart devices, we mainly emphasise the system design, IoT protocols, and software implementation, which are essential for such a deployment.

State of the Art {#sec:state-of-art}
================

For the last few years, there has been significant research interest in exploring the potential of IoT system design and its application to different aspects of our day-to-day life. Examples of such studies can be found in [@IoTServey:2010] and [@IoTServeystandard:2013]. Here, [@IoTServey:2010] provides a very broad overview of IoT and shows how different technologies and communication technologies are merged together to benefit everyday life. This study also depicts how different scientific domains view the applications of IoT from their own perspectives, including:

1. Transportation and logistics, such as logistics \[Karpischek et al. (NFC-2009)\], mobile ticketing \[Broll et al. (IEEE IC-2009)\] and efficient supply chains \[Ilic et al. (IEEE PC-2009)\].

2. The healthcare domain \[Vilamovska et al. (RAND Europe-2009)\].

3. The smart environment domain, including homes and offices \[Buckl et al. (WAINA-2009)\] and commercial buildings \[Spiess et al. (IEEE ICWS-2009)\].

4. The personal and social domain \[Welbourne et al. (IEEE IC-2009)\].

Please note that all the above-mentioned references can be found in [@IoTServey:2010]; we skip them here due to the magazine’s constraint on the number of references. With a view to meeting the important criteria of power efficiency, reliability, and Internet connectivity, the authors in [@IoTServeystandard:2013] discuss different IoT standards of the IEEE $802.15.4$ (Standard for Information Technology Std., September 2006) and IETF working groups[^5].
The authors introduce and relate the key requirements of the power-efficient IEEE $802.15.4-2006$ PHY layer, the power-saving and reliable IEEE $802.15.4e$ MAC layer, the IETF $6$LoWPAN adaptation layer enabling universal Internet connectivity, the IETF ROLL routing protocol enabling availability, and finally the IETF CoAP enabling seamless transport and support of Internet applications. Further discussion of different aspects of IoT can also be found in [@Ruilong-TII:2015], [@RongYu:2015], and in studies, e.g., \[Chang et al. (SmartGridComm-2012)\] on IEEE standards[^6] in metering communications. Based on the above discussion, our proposed study differs from the existing studies in the following ways.

- Although there are studies on different applications of IoT and on the standards in the literature, there is no detailed study on any particular aspect related to the smart grid. In this paper, we zoom into the residential energy management aspect of the smart grid and explore the potential of IoT in managing energy for residential customers.

- Most of the works emphasize the theoretical side of IoT systems, and little has been done so far on the implementation aspect. We complement the existing studies on IoT by developing a testbed at SUTD and demonstrating its real-life applications. To this end, we discuss the details of an IoT testbed, especially the idea of a unified home gateway (UHG) that we are currently prototyping. Based on the technology used, the proposed system setup can also handle a large number of users while possessing a fast response time.

Elements of IoT System {#sec:ElementIoT}
======================

An IoT system needs a number of elements, as explained below, to form a complete system.

![image](layers){width="\textwidth"}

IoT Layers
----------

IoT can be divided into four major layers: the device layer, network layer, cloud management layer, and application layer, as shown in Fig. \[fig:iot\].
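The four-layer decomposition above can be summarised compactly as a mapping from layers to example components. This is an illustrative sketch only; the component names are examples drawn from this paper, not an exhaustive inventory.

```python
# Illustrative mapping of example components to the four IoT layers.
# Layer and component names follow the decomposition described in the text.
IOT_LAYERS = {
    "device": ["temperature sensor", "smart plug", "smart meter", "UHG"],
    "network": ["Wi-Fi", "cellular uplink"],
    "cloud_management": ["data storage", "authentication", "user management"],
    "application": ["demand response", "dynamic pricing", "home security"],
}

def layer_of(component: str) -> str:
    """Return the layer a component belongs to, or raise KeyError."""
    for layer, components in IOT_LAYERS.items():
        if component in components:
            return layer
    raise KeyError(component)
```

Keeping such a table explicit makes the later discussion easier to follow: every element introduced below belongs to exactly one of these layers.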
### Device Layer

This comprises two sub-layers. The things layer consists of sensors, actuators, smart plugs, and smart meters, and is responsible for sensing the environment, collecting data, and controlling home appliances. The gateway layer hosts the micro-controller, communication module, local storage, and display, to which components from the things layer are connected.

### Network Layer

This layer connects the device layer to the application layer.

### Cloud Management Layer

The cloud services layer is where all data storage and information retrieval take place. The management layer is where authentication, user management, and data management are done.

### Application Layer

This layer is responsible for providing services to the end users (home owners or the smart grid). It hosts DRM and dynamic pricing for the smart grid, as well as energy management and home security services for consumers. The services can also be provided by a third party, e.g., a home security management company.

Comparison of IoT Protocols
---------------------------

Since IoT comprises a large number of devices and a communication-intensive architecture, standardized software protocols are needed to enable all devices to communicate with one another with various features.

### RESTful HTTP

The Representational State Transfer (REST) is an architectural style, not a protocol or standard. RESTful HTTP works on top of the HTTP protocol while following REST principles. Hence, RESTful HTTP is lightweight, has a simple HTTP request format, and is very easy to implement. REST is best suited for applications where periodic communication is required. To increase end-user privacy, HTTPS can be used.

### CoAP

The Constrained Application Protocol (CoAP) is a specialized web transfer protocol for use with constrained devices and networks (e.g., low power, lossy).
CoAP provides a request/response interaction model between application end points, supports built-in discovery of services and resources, and includes key concepts of the web. CoAP is designed to interface easily with HTTP for integration with the web, while meeting specialised requirements such as multicast support, very low overhead, and simplicity.

### MQTT

Message Queue Telemetry Transport (MQTT) is a publish/subscribe, extremely simple and lightweight messaging protocol, designed for constrained devices and low-bandwidth, high-latency, or unreliable networks. Its design principles are to minimise network bandwidth and device resource requirements whilst also attempting to ensure reliability and some degree of assurance of delivery. These principles also make the protocol ideal for the emerging IoT world of connected devices, with the objective of collecting data from multiple devices and transporting them to the IT infrastructure.

### XMPP

The Extensible Messaging and Presence Protocol (XMPP) is an application profile of the Extensible Markup Language (XML) that enables near-real-time exchange of structured yet extensible data between any two or more network entities [@TusharWCM:2016]. XMPP is highly extensible through multiple XMPP Extension Protocols (XEPs). XEPs provide various unique capabilities to XMPP, such as interoperability, provisioning, security, scalability, and low latency, which make interaction with smart devices feasible and seamless. XMPP handles both simple and complex encodings of IoT data, realising interoperability to a great extent, and provides an interoperable way to publish control parameters and perform control operations. XMPP also enables publishing to multiple things under a single XMPP address through integration with subsystems.

IoT Features for Universal Home Gateway {#sec:section4}
---------------------------------------

In this section, we describe the features required for the universal home gateway in an IoT system.
### Security

The pervasive nature of IoT systems demands security as a vital feature. As the use of smart devices grows at an exponential pace, the need for strong security becomes more important. For example, in 2013, a security researcher demonstrated the security vulnerability of Philips IoT gadgets by hacking their secured Hue lighting system. As a result of such security attacks, consumers might receive wrong pricing information on their energy usage, or a malicious controller, other than the authorised grid, may take control of their scheduled loads. Therefore, strong security is required for connected devices to ensure data confidentiality, data integrity, and user/device authenticity. Hence, a large number of research efforts address privacy [@Xu2015] and security, in terms of both application-layer data perturbation and secure communication [@Keoh2014].

### Provisioning

Due to the great number of interconnected devices in IoT, making rules for device communication and their access to services, data, and resources via provisioning is very important for the management of energy. The success of a number of important aspects, such as demand management with dynamic pricing, the home automation system, and, more importantly, security [@Keoh2014], relies significantly on proper provisioning of the smart devices within IoT.

### Interoperability

Interoperability is a vital concept of IoT, especially in a smart home implementation, which lets the devices in a network connect through a common platform in order to work together. The smart grid is composed of a large number of smart homes, where each home has a number of intelligent devices that can operate, communicate, and interact autonomously. Users may purchase an off-the-shelf device, e.g., a sensor like a camera or an actuator like a light controller, and integrate it into the system. Interoperability of IoT is critical in making these smart devices work together.
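One common way to realise this kind of interoperability in a gateway is to hide each radio protocol behind a uniform device interface, so that the rest of the gateway code never cares whether a device speaks Zigbee, Z-wave, or Bluetooth. The following is a minimal sketch under that assumption; the class names and in-memory state are hypothetical, not the actual UHG code.

```python
from abc import ABC, abstractmethod

class DeviceAdapter(ABC):
    """Uniform interface the gateway uses for every device protocol."""

    @abstractmethod
    def read(self) -> dict: ...

    @abstractmethod
    def send_command(self, command: dict) -> None: ...

class ZWavePlugAdapter(DeviceAdapter):
    """Stand-in for a Z-wave smart plug (state kept in memory here)."""

    def __init__(self, node_id: int):
        self.node_id = node_id
        self.on = False

    def read(self) -> dict:
        # A real adapter would poll the Z-wave radio module instead.
        return {"node": self.node_id, "on": self.on}

    def send_command(self, command: dict) -> None:
        if "on" in command:
            self.on = bool(command["on"])

class ZigbeeSensorAdapter(DeviceAdapter):
    """Stand-in for a read-only Zigbee temperature sensor."""

    def __init__(self, node_id: int, value: float = 24.0):
        self.node_id = node_id
        self._value = value

    def read(self) -> dict:
        return {"node": self.node_id, "temperature_c": self._value}

    def send_command(self, command: dict) -> None:
        raise NotImplementedError("sensors are read-only")

# The gateway treats both devices identically through the common interface.
devices = [ZWavePlugAdapter(1), ZigbeeSensorAdapter(2)]
readings = [d.read() for d in devices]
```

The design choice here is the classic adapter pattern: adding support for a new radio protocol means writing one new adapter class, with no change to the polling or command-dispatch logic.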
### Latency

IoT protocols need minimum latency for information retrieval and for executing control messages, as most IoT services related to energy management rely on real-time services and control information. For instance, in the event of an unexpected outage of a scheduled generating plant, the control signal to turn off the flexible loads needs to arrive within 8 seconds, 30 seconds, or 10 minutes, depending on whether primary, secondary, or tertiary reserve power is used, respectively[^7]. However, achieving near-real-time communication is challenging. Nevertheless, it can be achieved with a fully dedicated bidirectional, asynchronous communication channel with the capability of native push to the client.

### Massive number of devices

Particularly in DRM, the utility can send information on its dynamic pricing, or a load-control command, e.g., an air conditioning (AC) system’s thermal set point, to a large number of consumers to encourage them to avoid consumption during peak hours, or to force their loads to operate at a certain power rating during emergencies. Similarly, an energy user with a distributed renewable source can communicate with the grid in order to determine the amount of energy it wants to buy from or sell to the grid at a particular time of the day. Further, the ability to broadcast should be considered, as it can deliver a message to all the devices in one single transmission and hence not jam the network. Thus, support for a massive number of devices, along with broadcast messaging, is essential for a successful energy management application.

### Scalability

Different features of IoT need to be highly scalable in order to provide services to large-scale systems, i.e., the software and protocols should be designed such that new devices can easily be added to the system later while still fulfilling strict quality-of-service constraints such as delay requirements.
This allows the smart grid network to extend beyond a specific geographic area and number of smart devices, and avoids constraints on the number of smart devices that can be installed within a smart grid system for the desired outcomes.

![image](sov){width="\textwidth"}

Testbed Design for Residential Energy Management {#sec:section5}
================================================

In this section, we describe the design of the testbed, focusing on residential households.

Overall System Setup
--------------------

### Architecture

The overall architecture of the testbed targeting energy management applications is shown in Fig. \[fig:arch\]. The smart grid/smart home setup comprises a cloud server where we store data as well as provide services for both gateways and consumers. The application layer and cloud management layer are implemented within the cloud server to simplify data sharing between the two layers. A number of services are developed to cater to energy management and smart grid applications. The grid service handles all the commands from our smart grid and sends notifications about DRM or dynamic pricing to consumers and gateways. The energy management service is responsible for monitoring and giving energy-related information to consumers. The home automation service is activated when a control command from a consumer is received. We have implemented push capability on top of the Openfire server on the device side and Google Cloud Messaging (GCM) on the mobile-apps side. Openfire’s inability to send real-time notifications to the mobile app while it is offline drove us to implement a dual XMPP server with GCM and Openfire working together. We have grouped a number of homes (i.e., gateways and families) into an area according to their geographical location, with an Area ID (AID). Each family, comprising a number of consumers, is assigned a Family ID (FID).
Each of their smart phones is registered with a unique Device ID (DID) issued by GCM, which is used to push notifications to the individual user. Each smart phone is also assigned a unique User ID (UID) at the cloud server, which can be the user’s phone number and which GCM is not aware of. Along with the DID, the UID can be used for authentication purposes. Thus, only authorised consumers can control their smart devices and monitor their own energy usage. Each UHG is assigned a unique Gateway ID (GID) by the cloud server and a unique Jabber ID (JID) by Openfire. Each device in the home network is assigned a Node ID (NID), through which commands are sent from the UHG to the end devices. The GID is our cloud server’s reference to the gateway, and Openfire is not aware of it. The JID is Openfire’s reference to the gateway, and the cloud server uses it to push control commands during home automation, where control is pushed only to a particular gateway (e.g., a user can only switch on/off a plug in his own house). Furthermore, when broadcasting messages for demand response management (DRM), the cloud server publishes a message for a particular area, using its AID, to Openfire, which then broadcasts the message to all the UHGs corresponding to that AID.

### Network Protocols (XMPP and RESTful HTTP)

In the testbed, RESTful HTTP is selected for periodic uploading of sensor data from the gateway to the cloud. Although there are many desirable properties of an IoT system, the proposed system mainly focuses on improving the response time and on handling a large number of users. Further, there is also a need to push control commands towards the end users for different applications such as DRM, which requires an asynchronous communication model. Please note that XMPP is not only capable of sending asynchronous requests but can also support a massive number of users at the same time.
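The two messaging patterns described above, area-wide broadcast via AID and one-to-one push via JID, can be sketched as a small routing registry. This is an illustrative stand-in for the Openfire pubsub service, not the actual implementation; the IDs are toy examples.

```python
# Toy routing registry: AID = area, JID = gateway, following the ID
# scheme described in the text. Illustrative only.
class Router:
    def __init__(self):
        self.area_gateways = {}  # AID -> set of JIDs subscribed to that area

    def register_gateway(self, aid: str, jid: str) -> None:
        """Subscribe a gateway (JID) to its area's pubsub node (AID)."""
        self.area_gateways.setdefault(aid, set()).add(jid)

    def broadcast(self, aid: str, message: str) -> dict:
        """DRM/dynamic-pricing style push: one message to every gateway in an area."""
        return {jid: message for jid in self.area_gateways.get(aid, set())}

    def unicast(self, jid: str, message: str) -> dict:
        """Home-automation style push to a single gateway only."""
        return {jid: message}

router = Router()
router.register_gateway("AID1", "JID1")
router.register_gateway("AID1", "JID2")
router.register_gateway("AID2", "JID3")
```

The key property this sketch captures is that the sender addresses an area, not the individual gateways: the grid publishes once to `AID1`, and fan-out to every subscribed JID happens in the messaging layer.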
Further, XMPP has a fast response time and provides the security, provisioning, and interoperability features required by the energy management application [@Guo2015]. Furthermore, both the iOS and Android mobile operating systems provide built-in support for XMPP, which facilitates the connection to end users through the mobile apps. In this context, XMPP is selected as the network protocol to be used in the proposed system. Please note that there could be a number of other protocol choices, as explained in Section \[sec:ElementIoT\], and selecting the right protocol is never an easy task. The reason for not choosing CoAP in this work is that CoAP is UDP based, and typically UDP traffic is not as firewall friendly as TCP. Besides, due to the lack of TCP’s retransmission mechanisms, packet loss is more likely to happen when using CoAP. Hence, we decided to use a pub/sub-based TCP protocol, which is important for the broadcast/multicast needs in the smart grid application. While MQTT and XMPP are very similar to one another, we simply chose XMPP over MQTT because most mobile operating systems (e.g., Android or iOS) support XMPP for messaging. Our thought is thus to have a single messaging platform, especially since it is important to push messages to users’ mobile devices for the home gateway/smart grid application. Nonetheless, we claim no performance superiority of XMPP over CoAP and MQTT.

Gateway Side
------------

### Openfire for massive number of devices, low latency and provisioning

Openfire uses a pubsub mechanism to push information to subscribers but does not provide configuration settings for pubsub services. To overcome this, we have designed and implemented our own configuration for provisioning devices and defining permissions for each of the devices, e.g., distinguishing publishers from subscribers. This was achieved by building a component in our cloud that is an XMPP client. The design goals of our component are threefold.
Firstly, our component must be able to provision devices and assign different roles and responsibilities to each of them, specifying their accessibility. Secondly, the component should listen to any incoming messages. Finally, it needs to send messages to the pubsub service for publishing to all its subscribers (e.g., by broadcasting the message to all UHGs in an area) or to a single UHG (e.g., one-to-one communication for home automation). Our setup, where all UHGs of the same area subscribe to a single area node, enables the grid to broadcast DRM or dynamic pricing messages to a massive number of residential households with a short delay.

### UHG for interoperability and scalability

The UHG plays an important role in our implementation of the smart home, where interoperability and scalability are key. Each smart home unit accommodates a UHG, which communicates with all devices and sensors and has room for adding and deleting devices from its network. Furthermore, the UHG is responsible for transferring collected data to the cloud through the network layer. In our testbed, the UHG is built with a Raspberry Pi computer (Model-B Rev 1) that can communicate with sensors, actuators, smart plugs, and smart meters through different communication protocols, including Zigbee, Z-wave, and Bluetooth, realising interoperability as shown in Fig. \[fig:msg\]. Please note that the gateway, which is an IP-based system and runs the HTTP and XMPP protocols, serves as a translator and can communicate with the other non-IP-based devices (not limited to BLE, Z-wave, and Zigbee) in the system. Smart plugs, which act as power outlets for different appliances, communicate with the UHG through Z-wave. Energy usage is pushed by the UHG at regular intervals, and control commands are pushed to the plug when necessary. Sensors such as temperature, humidity, motion, light, and noise sensors update contextual and environmental data, and actuators such as IR blasters and LED potentiometers enable control systems.
They communicate with the UHG through a Zigbee communication module and are powered by an Arduino Fio micro-controller. While the traditional Zigbee protocol allows the Zigbee end device to push readings to the Zigbee gateway (i.e., the UHG in our case), we have implemented a Z-wave-like protocol to pull/push readings and control commands to the Zigbee end devices, in order to better manage the devices across different protocols and to better control the delay of control commands. We do similarly for Bluetooth-based sensors. The UHG establishes a two-way communication channel, where it listens for control commands from the cloud on one thread and sends the periodic data collected by the smart devices on another. When control information is received by the UHG, it is pushed to the end devices, addressed by their NIDs, through their respective communication protocols, enabling the device with the desired setting (e.g., set the temperature of the AC to $24^o$ C). Furthermore, while receiving asynchronous messages from the Openfire server, we use a mutex to prioritise asynchronous messages over periodic data polling, and thus prevent the smart devices from being confused or overwhelmed with requests. The use of a UHG per household eliminates the need for the cloud server and Openfire to manage and control a large number of smart devices directly, which improves the scalability of the system. Local storage and a display can also be added to the UHG, such that consumers can retrieve historical energy usage directly from the UHG instead of from the cloud. This reduces the burden on the cloud and improves scalability significantly.

![image](uhg){width="\textwidth"}

Consumer Side with Mobile Apps
------------------------------

### GCM for real-time notifications

We have used GCM to realise push on the mobile side. GCM facilitates the implementation of collaborative real-time applications for Android. The GCM server manages real-time push notifications to the consumers even when they are offline (i.e., the app is not active on the smart phone).
When a consumer installs the app, they are registered with GCM and assigned a unique DID. This DID is recorded at our cloud server. When there is any update, either on energy consumption or an emergency, it is pushed to the app for the specified user. For instance, when energy usage reaches a pre-set threshold, the user gets notified. For notifying consumers about dynamic pricing, we use the HTTP multicast addressing supported by GCM and send messages to the consumers.

### Mobile Application for user interface

To let consumers use their personal smart phones or tablets to control and monitor their own smart devices in our IoT testbed, an Android mobile application was developed and installed on all consumers’ smart phones. To display real-time information (e.g., dynamic pricing) and to let users send control commands (e.g., switch on/off a plug) to the UHG, native push to smart phone devices through GCM was implemented.

Case Study, Results and Discussion {#sec:section6}
==================================

Case Study
----------

We now demonstrate several case studies of how our platform performs certain applications related to energy management.

### Demand Response Management

Essential functionalities of DRM include load balancing during peak demand periods and locating and driving power to emergency-hit areas. In Fig. \[fig:dp\], we illustrate our communication pattern for achieving DRM. In case of high energy demand, we cut down power consumption: our energy management system notifies the grid service on the cloud, which in turn publishes a command to all the gateways through Openfire, and locates and routes power around trouble spots to manage the situation. For instance, when the smart grid detects a high demand and wants to reduce the excess demand, it will notify the DRM module on the cloud server, where the module may decide to have all AC units in Area 1 adjusted 2 degrees higher.
This control message is forwarded to Openfire, which notifies all UHGs in Area 1 through the broadcast id AID1. Each UHG then sends the necessary control command to its AC unit. ![image](casestudies){width="\textwidth"} ### Dynamic Pricing To shape demand and encourage energy awareness among consumers, our smart home system sends dynamic pricing information to consumers and smart meters. This helps consumers reduce energy usage during peak periods, enabling demand shaping. Fig. \[fig:dp\] gives an overview of the communication pattern for dynamic pricing in Area 2. The smart grid notifies our cloud, and the cloud in turn notifies the XMPP servers via the grid service to reach the consumers and smart meters, encouraging consumers to run load-intensive tasks during off-peak periods. As Fig. \[fig:dp\] illustrates, the dynamic pricing module in the cloud server broadcasts real-time pricing information to the affected area (AID2 in this case) once notified by the smart grid. At the same time, the cloud server also notifies users on their mobiles through GCM (here DID5 - DID9, the users in AID2). ### Energy Monitoring For energy monitoring, we focus on household appliances and track their usage to better understand the demand characteristics of the household, and of the grid as a whole, before making smart decisions on load scheduling and pricing. Fig. \[fig:dp\] shows our mobile application for energy monitoring, where User 1 from Family 1 monitors his energy consumption through the graphical data the application provides. ### Home Automation Multiple consumers in a family can control their set of appliances through the home automation services. For instance, a user at his office may want to switch the AC in his home to a preferred temperature before his arrival.
For this purpose, he (User 1 of Family 2) first sends a control command with the desired AC setting via the Android application. Upon receiving this command, the home automation service on the cloud processes it and pushes the message, along with the JID of the corresponding UHG, to Openfire. As soon as Openfire receives this information, it pushes it to that UHG, which processes the message and applies the control setting to the respective AC. A similar automation technique applies when a user controls other smart devices in the house. Fig. \[fig:dp\] illustrates home automation, where User 1 from Family 1 sends a command to the Home Automation module on the cloud server to control his AC system remotely; the server in turn sends the message with the JID (JID2) to Openfire, which notifies the UHG to control the AC system. We stress that the case study on home automation could also be evaluated in terms of other metrics such as energy savings. However, demonstrating such metrics requires a lengthy discussion of related topics such as algorithms for peak shaving [@Wen-TaiAccess:2015], real-time control algorithms, and user study and incentive design [@Naveed-Energies:2013; @NaveedAccess:2015], which are beyond the scope of this paper. ### Home Security For home security, users inform our cloud when they plan to go on holiday. If an intrusion is detected during that period by the data management service, it activates the home security service, which in turn sends a warning message to all consumers in the family and instructs the UHG to start a camera recording the home environment. Users can then take action. Results and Discussion ---------------------- ![image](results){width="\textwidth"} We performed delay and stress tests to measure the time taken by each component in our system to deliver and process messages.
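Each per-component figure reported below is a mean over repeated round trips; the measurement loop can be sketched as follows (the callable `op` is a stand-in for one request/acknowledgement cycle, e.g. mobile app to cloud server, or Openfire to UHG to node):

```python
import time
import statistics

def mean_latency(op, n=1000):
    """Return the mean round-trip delay of `op` over n repetitions, in seconds.

    `op` is any callable modelling one request/acknowledgement cycle;
    time.perf_counter gives a monotonic, high-resolution clock.
    """
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        op()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples)
```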
We performed the test with 1000 simultaneous client requests and recorded the delays in seconds. In our experimental setup, illustrated in Fig. \[fig:dp\], the client (mobile app), cloud server and Openfire run on one Local Area Network (LAN) and the UHG on another LAN, with all the nodes connected to the gateway through Z-wave. Our aim is to get a rough idea of the relative timing requirements of each portion; the results can be extended to a wide area network by adding the corresponding delay. In Fig. \[fig:res\], we can see that the latency of sending a message from the client (mobile app) to the cloud server is low, averaging $0.07106363$s. However, the average time required to control a node from the gateway and receive an acknowledgement is $0.7852$s; this higher latency is due to the Z-wave communication module and the limited processing speed of the Raspberry Pi gateway. Further, a broadcast message (through an AID) and a one-to-one message (through a JID) from Openfire to a UHG take $0.34234637$s and $0.1615937$s respectively. The individual delays are thus small, ranging from milliseconds to fractions of a second, and in total remain below $8$ seconds. We have shown that our system keeps latencies low even when there are $1000$ simultaneous requests[^8]; however, latencies might be higher if the system were connected over public networks. ![Demonstration of peak load shaving through controlling loads of the homes using the proposed IoT architecture.[]{data-label="fig:fig2DRM"}](figDRMv){width="\columnwidth"} Now, to demonstrate the usefulness of the proposed IoT architecture for energy management in the smart grid, we set up an experiment with 10 homes in a small residential community, where each home is equipped with a UHG. The lighting units of each home are considered as the controllable loads and are assumed to have on/off control. The base load is assumed to be a random curve based on the reports of the National Electricity Market of Singapore (NEMS).
The daily usage patterns of the controllable loads are generated from their probabilities of turning on. Once the grid system detects demand exceeding the peak level, it informs the DRM unit to reduce the consumption loads so as to keep the total demand below the maximum available limit. The DRM unit then decides which lights should be turned off by solving an optimization problem, whose details can be found in [@Wen-TaiAccess:2015]. For this experiment, the maximum allowable peak demand is assumed to be $33$ kW, and the DRM unit is notified to control the loads whenever the total consumption exceeds $33.5$ kW. Fig. \[fig:fig2DRM\] shows the proposed IoT system controlling the light bulbs at each house to reduce the peak demand in the smart grid through DRM[^9]. In Fig. \[fig:fig2DRM\], the $24$-hour duration is divided into $8640$ time slots, i.e., the smart grid samples the total demand every 10 seconds. The green and blue zones of the figure denote the load profile with and without demand management respectively, while the grey zone indicates the base load. The red line shows the maximum allowable peak demand threshold ($33$ kW). As demonstrated in Fig. \[fig:fig2DRM\], when the total demand is under the peak limit, the smart grid does not need to do any controlling, and hence no difference is observed between the green and blue zones. However, once the total demand exceeds the peak limit, the DRM unit promptly turns some of the lights off to reduce the demand, as indicated during $9$:$00$-$18$:$00$. Thus, the result in Fig. \[fig:fig2DRM\] clearly shows that our IoT platform is effective in performing energy management applications. We stress, however, that at some instants the total load exceeds the peak limit during DRM. This is because the smart power meter samples the total load at each $10$-second interval.
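In place of the full optimisation of [@Wen-TaiAccess:2015], the DRM unit's light-switching decision at each 10-second sampling instant can be approximated by a greedy sketch (the thresholds match the experiment; the greedy rule itself is only illustrative):

```python
def shed_loads(base_kw, lights_kw, peak_kw=33.0, trigger_kw=33.5):
    """Greedy stand-in for the DRM optimisation: once total demand exceeds
    trigger_kw, switch off the largest controllable lighting loads until
    demand is at or below peak_kw. Returns indices of the lights turned off.
    """
    total = base_kw + sum(lights_kw)
    if total <= trigger_kw:
        return []  # below the trigger: no control needed this interval
    off = []
    # shed the largest loads first, so as few lights as possible go off
    for i in sorted(range(len(lights_kw)), key=lambda i: -lights_kw[i]):
        if total <= peak_kw:
            break
        total -= lights_kw[i]
        off.append(i)
    return off
```

For example, with a base load of $32$ kW and lights of $1.0$, $0.5$ and $0.8$ kW, the total of $34.3$ kW exceeds the trigger; the two largest lights are shed, bringing demand to $32.5$ kW, below the $33$ kW threshold.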
If demand increases rapidly within these $10$ seconds, the server receives the total load data with a delay before it can perform DRM, so the total load sometimes exceeds the peak limit. Furthermore, while some appliances are being controlled to manage the excess demand, users may switch on new loads that also contribute to the overall demand, again pushing the total load above the threshold. However, the loads fall back below the peak limit promptly after DRM is performed. Conclusion {#sec:section7} ========== In this paper we have given an overview of IoT elements and architectural layers, compared IoT protocols such as RESTful HTTP, CoAP, MQTT, and XMPP, and discussed IoT features customised to smart grid applications, such as security, provisioning, interoperability, latency and scalability. Following these design principles, protocols and features, we have developed a testbed where each housing unit is equipped with sensors, actuators, smart plugs, smart meters and a UHG. We have also highlighted the implementation of our applications in DRM, energy monitoring, home automation, dynamic pricing and home security, which are critical for efficient energy management and a smarter lifestyle for consumers. R. Deng, Z. Yang, M.-Y. Chow, and J. Chen, “A survey on demand response in smart grids: [M]{}athematical models and approaches,” *IEEE Trans. Ind. Informat.*, vol. 11, no. 3, pp. 570–582, June 2015. W. Tushar, B. Chai, C. Yuen, D. B. Smith, K. L. Wood, Z. Yang, and H. V. Poor, “Three-party energy management with distributed energy resources in smart grid,” *IEEE Trans. Ind. Electron.*, vol. 62, no. 4, pp. 2487–2498, Apr. 2015. Y. Liu, C. Yuen, S. Huang, N. Ul Hassan, X. Wang, and S. Xie, “Peak-to-average ratio constrained demand-side management with consumer’s preference in residential smart grid,” *IEEE J. Sel.
Topics Signal Process.*, vol. 8, no. 6, pp. 1084–1097, Dec 2014. Y. Liu, C. Yuen, R. Yu, Y. Zhang, and S. Xie, “Queuing-based energy consumption management for heterogeneous residential demands in smart grid,” *IEEE Trans. Smart Grid*, vol. Pre-print, pp. 1–10, 2015, (DOI: 10.1109/TSG.2015.2432571). L. Atzori, A. Iera, and G. Morabito, “[The internet-of-things: A survey]{},” *Computer Networks*, vol. 54, no. 15, pp. 2787–2805, Oct. 2010. M. R. Palattella, N. Accettura, X. Vilajosana, T. Watteyne, L. A. Grieco, G. Boggia, and M. Dohler, “[Standard protocol stack for the internet of (important) things]{},” *IEEE Commun. Surveys Tuts.*, vol. 15, no. 3, pp. 1389–1406, Third Quarter 2013. R. Yu, J. Ding, W. Zhong, Y. Zhang, S. Gjessing, A. Vinel, and M. Jonsson, “Price-based energy control for [V2G]{} networks in the industrial smart grid,” in *Proc. International Conference on Industrial Networks and Intelligent Systems (INISCom)*, Tokyo, Japan, Mar. 2015, pp. 107–112. W. Tushar, C. Yuen, B. Chai, S. Huang, K. L. Wood, S. G. Kerk, and Z. Yang, “Smart grid testbed for demand focused energy management in end user environments,” *IEEE Wireless Commun.*, vol. Pre-print, pp. 1–10, 2016, (available: http://arxiv.org/abs/1603.06756). L. Xu, C. Jiang, Y. Chen, Y. Ren, and K. J. R. Liu, “Privacy or utility in data collection? [A]{} contract theoretic approach,” *IEEE J. Sel. Topics Signal Process.*, vol. 9, no. 7, pp. 1256–1269, Oct. 2015. S. L. Keoh, S. S. Kumar, and H. Tschofenig, “Securing the internet of things: [A]{} standardization perspective,” *IEEE Internet Things J.*, vol. 1, no. 3, pp. 265–275, June 2014. L. Guo, J. Wu, Z. Xia, and J. Li, “Proposed security mechanism for [XMPP]{}-based communications of [ISO/IEC/IEEE]{} 21451 sensor networks,” *IEEE Sensors J.*, vol. 15, no. 5, pp. 2577–2586, May 2015. W.-T. Li, C. Yuen, N. U. Hassan, W. Tushar, and C.-K. 
Wen, “Demand response management for residential smart grid: [F]{}rom theory to practice,” *IEEE Access (Special issue on Smart Grids: A Hub of Interdisciplinary Research)*, vol. 3, pp. 2431–2440, Nov. 2015. N. U. Hassan, M. A. Pasha, C. Yuen, S. Huang, and X. Wang, “Impact of scheduling flexibility on demand profile flatness and user inconvenience in residential smart grid system,” *Energies*, vol. 6, no. 12, pp. 6608–6635, Dec 2013. A. Naeem, A. Shabbir, N. U. Hassan, C. Yuen, A. Ahmed, and W. Tushar, “Understanding customer behavior in multi-tier demand response management program,” *IEEE Access (Special issue on Smart Grids: A Hub of Interdisciplinary Research)*, vol. 3, pp. 2613–2625, Nov. 2015. N. U. Hassan, Y. Khalid, C. Yuen, and W. Tushar, “Customer engagement plans for peak load reduction in residential smart grids,” *IEEE Trans. Smart Grid*, vol. 6, no. 6, pp. 3029–3041, Nov. 2015. [^1]: S. K. Viswanath, C. Yuen, W. Tushar and W.-T. Li are with the Singapore University of Technology and Design (SUTD), Somapah Road, Singapore 487372 (Email: {sanjana, wayes\_tushar, yuenchau, wentai\_li}@sutd.edu.sg). [^2]: C.-Kai Wen is with the National Sun Yat-Sen University, Taiwan (Email: chaokai.wen@mail.nsysu.edu.tw). [^3]: Kun Hu, Cheng Chen and Xiang Liu are with the School of Software and Microelectronics, Peking University, Beijing, China (Email: {hk19900116, chengchen8901}@163.com, xliu@ss.pku.edu.cn). [^4]: This work is supported in part by the Singapore University of Technology and Design through the Energy Innovation Research Program Singapore under Grant NRF2012EWT-EIRP002-045, in part by the SUTD-MIT International Design Center, Singapore under Grant IDG31500106, and in part by the Grant NSFC 61550110244. [^5]: Available online in: http://www.ietf.org/dyn/wg/charter/core-charter.html. 
[^6]: Examples of smart metering standards are available online at https://standards.ieee.org/findstds/standard/1377-2012.html and http://standards.ieee.org/develop/msp/smartgrid.pdf [^7]: http://www.e-control.at/en/marketplayers/electricity/electricitymarket/ balancin%20energy. [^8]: We assume one request at a time to be processed by the server and received by the UHG, as the processing speed depends heavily on the processor power and other background load. Since our experiment focuses on delay (in particular, on each contribution to the delay), handling one request at a time removes other interruptions. [^9]: Please note that shifting appliances from peak hours to off-peak hours could be another potential way to perform DRM. Examples of such schemes can be found in [@Wen-TaiAccess:2015], [@NaveedAccess:2015] and [@Naveed:2015]. However, this is beyond the scope of this paper.
--- abstract: 'We study the moduli of trigonal curves. We establish the exact upper bound of ${36(g+1)}/(5g+1)$ for the slope of trigonal fibrations. Here, the slope of any fibration $X\rightarrow B$ of stable curves with smooth general member is the ratio $\delta_B/\lambda_B$ of the restrictions of the boundary class $\delta$ and the Hodge class $\lambda$ on the moduli space $\overline{\mathfrak{M}}_g$ to the base $B$. We associate to a trigonal family $X$ a canonical rank two vector bundle $V$, and show that for Bogomolov-semistable $V$ the slope satisfies the stronger inequality ${\delta_B}/{\lambda_B}\leq 7+{6}/{g}$. We further describe the rational Picard group of the [trigonal]{} locus $\overline{\mathfrak T}_g$ in the moduli space $\overline{\mathfrak{M}}_g$ of genus $g$ curves. In the even genus case, we interpret the above Bogomolov semistability condition in terms of the so-called Maroni divisor in $\overline{\mathfrak T}_g$.' author: - 'Zvezdelina E. Stankova-Frenkel' title: Moduli of Trigonal Curves --- 1. Introduction {#introduction .unnumbered} =============== \[introduction\] In this paper $\overline{\mathfrak M}_g$ denotes the Deligne-Mumford compactification of the moduli space of smooth curves over $\mathbb{C}$ of genus $g\geq 2$. The boundary locus $\Delta$ of $\overline{\mathfrak M}_g$ consists of nodal curves with finite automorphism groups, which together with the smooth curves are referred to as [*stable*]{} curves. The locus of hyperelliptic curves will be denoted by $\overline{\mathfrak{I}}_g$, and the closure of the locus of trigonal curves will be denoted by $\overline{\mathfrak{T}}_g$. The main objects of our study will be families of genus $g$ stable curves, whose general members are smooth. Associated to any such [*flat and proper*]{} family $f\!:\!X\!\rightarrow\! B$ are three basic invariants $\lambda|_B$, $\delta|_B$ and $\kappa|_B$. 
We define these in Section \[definition\] as divisors on $B$, but for most purposes one can think of them as integers by considering their respective degrees. The invariant $\delta|_B$ counts, with appropriate multiplicities, the number of singular fibers of $X$. The self-intersection of the relative dualizing sheaf $\omega_f$ on $X$ defines $\kappa|_B$, while the pushforward of $\omega_f$ to $B$ is a rank $g$ locally free sheaf whose determinant is $\lambda|_B$. The basic relation $12\lambda|_B=\delta|_B+\kappa|_B$ and the positivity of the three invariants for non-isotrivial families force the [*slope*]{} $\displaystyle{\frac{\delta|_B}{\lambda|_B}}$ of $X/\!_{\displaystyle{B}}$ to fall into the interval $[0,12)$ (cf. Sect. \[slope-non-isotrivial\]). In fact, Cornalba-Harris and Xiao establish for this slope an exact upper bound of $8+4/g$, which is achieved only for certain hyperelliptic families (cf. Theorem \[CHX\]). However, if the base curve $B$ passes through a [*general*]{} point of $\overline{\mathfrak{M}}_g$, Mumford-Harris-Eisenbud give the better bound of $6+{\operatorname}{o}(1/g)$ (cf. Theorem \[generalbound1\]). The families violating this inequality are entirely contained in the closure $\overline{{\mathcal}{D}}_k$ of the locus of $k$-sheeted covers of ${{\mathbf P}}^1$, for a suitably chosen $k$. In particular, for $k=2$ we recover the [hyperelliptic]{} locus $\overline{\mathfrak{I}}_g$, for $k=3$ the [trigonal]{} locus $\overline{\mathfrak{T}}_g$, etc. Therefore, a slope higher than the above “generic” ratio can be obtained only for families with special linear series, such as $g^1_2$, $g^1_3$, etc. These observations clearly raise the following [**Question.**]{} [*According to the possession of special linear series, is there a stratification of $\overline{\mathfrak M}_g$ which would give successively smaller slopes $\delta/\lambda$?
What would be the successive upper bounds with respect to such a stratification?*]{} The following result, whose proof will be given in the paper, answers this question by giving an exact upper bound for families with a linear series $g^1_3$. [**Theorem I.**]{} [*If $f\!:\!X{\rightarrow} B$ is a trigonal nonisotrivial family with smooth general member, then the slope of $X/\!_{\displaystyle{B}}$ satisfies: $$\frac{\delta|_B}{\lambda|_B}\leq \frac{36(g+1)}{5g+1}\cdot$$ Equality is achieved if and only if all fibers are irreducible, $X$ is a triple cover of a ruled surface $Y$ over $B$, and a certain divisor class $\eta$ on $X$ is numerically zero.*]{} To understand the importance of this result and the above question, consider Mumford’s alternative description of the basic invariants (cf.  Sect. \[linebundles\]): $\lambda|_B$, $\delta|_B$ and $\kappa|_B$ are restrictions of certain rational divisor classes $\lambda, \delta, \kappa\in {\operatorname}{Pic}_{\mathbb{Q}}\overline{\mathfrak{M}}_g$. Specifically, $\delta=\delta_0+\cdots+\delta_{[g/2]}$, where $\delta_i$ is the class of the boundary divisor $\Delta_i$ of $\overline{\mathfrak{M}}_g$, and ${\operatorname}{Pic}_{\mathbb{Q}}\overline{\mathfrak{M}}_g$ is freely generated by the Hodge class $\lambda$ and the boundary classes $\delta_i$ for $g\geq 3$ (cf. [@Ha2]). Thus, our question about a stratification of $\overline{\mathfrak{M}}_g$ translates into a question about the relations among the fundamental classes of various subvarieties defined by geometric conditions in $\overline{\mathfrak{M}}_g$. Moreover, such a stratification would provide a link between the [*global*]{} invariant $\lambda$ (the degree of the Hodge bundle on $\overline{\mathfrak M}_g$) and the [*locally defined*]{} invariant $\delta$ of the singularities of our families.
In the process of estimating the ratio $\delta / \lambda$ we hope to understand the geometry of interesting loci in $\overline{\mathfrak M}_g$, and describe their rational Picard groups. Such a program for the hyperelliptic locus $\overline{\mathfrak{I}}_g$ is completed by Cornalba-Harris (cf. Theorem \[theoremCHPic\]), who exhibit generators and relations for ${\operatorname}{Pic}_{\mathbb{Q}}{\overline{\mathfrak{I}}_g}$. The typical examples of families with maximal ratio of $8+{4}/{g}$ are constructed as blow-ups of pencils of hyperelliptic curves, embedded in the same ruled surface. Similar examples for [trigonal families]{} yield the slope $7+{6}/{g}$, but as Theorem I shows, this ratio is [*not*]{} an upper bound. This happens because of an “extra” [*Maroni*]{} locus in $\overline{\mathfrak{T}}_g$ (cf.  Sect. 12). While a general trigonal curve embeds in ${\mathbf F}_0={{\mathbf P}}^1\times {{\mathbf P}}^1$ or in the blow-up ${\mathbf F}_1$ of ${{{\mathbf P}}^2}$ at a point, the remaining trigonal curves embed in other rational ruled surfaces and comprise a closed subset in $\overline{\mathfrak{T}}_g$, called the Maroni locus. The proof of Theorem II, stated below, implies that all trigonal families achieving the maximal bound lie entirely in the Maroni locus, and moreover, their members are embedded in ruled surfaces “as far as possible from the generic” ruled surfaces ${\mathbf F}_0$ and ${\mathbf F}_1$. The ratio $7+{6}/{g}$, though not the “correct” maximum, plays a significant role in understanding the geometry of the trigonal locus, and in describing its rational Picard group. In particular, a linear relation is established between the Hodge class, the boundary classes on $\overline{\mathfrak{T}}_g$, and a canonically defined vector bundle $V$ of rank 2 on a ruled surface $\widehat{Y}$ (cf. Sect.
9): [**Theorem II.**]{} [*Let $\delta_0$ denote the boundary class in $\overline {\mathfrak{T}}_g$ corresponding to irreducible singular curves, and let $\delta_{k,i}$ be the remaining boundary classes. For any trigonal non-isotrivial family with general smooth member, we have $$(7g+6)\lambda|_B=g\delta_0|_B+\sum_{k,i} \widetilde{c}_{k,i}\delta_{k,i}|_B +\frac{g-3}{2}(4c_2(V)-c_1^2(V)),$$ where $\widetilde{c}_{k,i}$ is a quadratic polynomial in $i$ with linear coefficients in $g$, and it is determined by the geometry of $\delta_{k,i}$.*]{} For example, $\widetilde{c}_{1,i}=3(i+2)(g-i)/2$ corresponds to the boundary divisor $\Delta{\mathfrak{T}}_{1,i}$, whose general member is the join in three points of two trigonal curves of genera $i$ and $g-i-2$, respectively (cf. Fig. \[Delta-k,i\]). Recall that the vector bundle $V$ is called [*Bogomolov semistable*]{} if its Chern classes satisfy $4c_2(V)\geq c_1^2(V)$ (cf. [@Bo; @Re]). We show in Section 9 the following [**Theorem III.**]{} [*For any trigonal nonisotrivial family $X\rightarrow B$ with general smooth member, if $V$ is Bogomolov semistable, then $$\frac{\delta|_B}{\lambda|_B}\leq 7+\frac{6}{g}\cdot$$*]{} In the even genus case, the Maroni locus is in fact a divisor on $\overline{\mathfrak{T}}_g$, whose class we denote by $\mu$. We further recognize the “Bogomolov quantity” $4c_2(V)-c_1^2(V)$ as counting roughly four times the number of Maroni fibers in $X$, and deduce [**Theorem IV.**]{} [*For even $g$, ${\operatorname}{Pic}_{\mathbb{Q}}\overline{\mathfrak{T}}_g$ is freely generated by all boundary divisors $\delta_0$ and $\delta_{k,i}$, and the Maroni divisor $\mu$. 
The class of the Hodge bundle on $\overline{\mathfrak{T}}_g$ is expressed in terms of these generators as the following linear combination: $$(7g+6)\lambda|_{\overline{\mathfrak{T}}_g}=g\delta_0+ \sum_{k,i}\widehat{c}_{k,i}\delta_{k,i}+2(g-3){\mu}.$$*]{} Consequently, the condition $\eta\equiv 0$ in Theorem I can be interpreted as a relation among the number of irreducible singular curves and the “Maroni” fibers in our family: $(g+2)\delta_0|_B=-72(g+1)\mu|_B$, and hence maximal slope families are entirely contained in the Maroni locus of $\overline{\mathfrak{T}}_g$ (cf. Theorem \[maximalmaroni\]). The stated theorems complete the program for the trigonal locus $\overline{\mathfrak{T}}_g$, which was outlined earlier in this section. An important interpretation of these results can be traced back to [@MHE], where it is shown that the moduli space $\mathfrak{M}_g$ is of [*general type*]{}. The $k$-gonal locus $\overline{{\mathcal}{D}}_k$ is realized in terms of the generating classes as: $[\overline{{\mathcal}{D}}_k]=a\lambda-b\delta-{\mathcal}{E}$ for some $a,b>0$, and an effective boundary combination ${\mathcal}{E}$. Restricting to a general curve $B\subset \overline{\mathfrak{M}}_g$, we have $\overline{{\mathcal}{D}}_k|_B> 0$, and hence $a\lambda|_B-b\delta|_B>0$. Because of Seshadri’s criterion for ampleness of line bundles, in effect, we are asking for all positive numbers $a$ and $b$ such that the linear combination $a\lambda-b\delta$ is ample on $\overline{\mathfrak{M}}_g$. The smaller the ratio $a/b$ is, the stronger the result we obtain. In other words, we are aiming at a maximal bound of $\delta /\lambda$, when we think of these classes as restricted to any curve $B\subset \overline{\mathfrak{M}}_g$. In view of this, part of this paper can be described as looking for all [*ample*]{} divisors on $\overline{\mathfrak{T}}_g$ of the form $a\lambda-b\delta$ with $a,b>0$.
Theorem I then gives the necessary condition $(5g+1)a\geq 36(g+1)b$ (compare with [@M2; @MHE; @CH]). Some of the results can be applied to the study of the discriminant loci of a certain type of triple covers of surfaces. The methods and ideas for the trigonal case are in principle extendable to more general families of $k$-gonal curves. We refer the reader to Sect. 13 for a general maximal bound for tetragonal curves (for $g$ odd), and conjectures for the maximal and general bounds for any $d$-gonal and other families of stable curves. [Acknowledgments]{} This paper is based on my Ph.D. thesis at Harvard University. Joe Harris, my advisor, introduced me to the problem of finding a stratification of the moduli space with respect to a descending sequence of slopes of one–parameter families. I am very grateful to him for his advice and support throughout my work on the present thesis. I would like to thank Fedor Bogomolov, David Eisenbud, Benedict Gross, Brendan Hassett, David Mumford, Tony Pantev and Emma Previato for the helpful discussions that I have had with them at different stages of the project, as well as Kazuhiro Konno for providing me with his recent results on trigonal families. A source of inspiration and endless moral support has been my husband, Edward Frenkel, to whom goes my gratitude and love. 2. Preliminaries {#preliminaries .unnumbered} ================ \[preliminaries\] Definition of $\lambda|_B,\,\,\delta|_B$ and $\kappa|_B$ in Pic$B$ {#definition} ------------------------------------------------------------------ Let $f:X\rightarrow B$ be a flat proper one-parameter family of stable curves of genus $g$, where $B$ is a smooth projective curve. Assume in addition that the general member of $X$ is [*smooth*]{} (cf. Fig. \[family\]). $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=family.ps,width=1.5in,height=1.5in}} \hspace{-1mm}\end{array}}$$ Let $\omega_f=\omega_X\otimes f^*\omega_B^{-1}$ be the relative dualizing sheaf of $f$.
Its pushforward $f_*(\omega_f)$ is a locally free sheaf on $B$ of rank $g$, and we set $$\lambda|_B=\lambda_X:=\wedge^gf_*(\omega_f)\in {\operatorname}{Pic}B$$ to be its determinant. The sheaf $f_*(\omega_f)$ is known as the “Hodge bundle” on $B$, and $\lambda|_B$ as the “Hodge class” of $B$. In a similar way, we set $\kappa|_B$ to be the self-intersection of $\omega_f$: $$\kappa|_B=\kappa_X:=f_*(c_1^2(\omega_f))\in{\operatorname}{Pic}B.$$ The definition of $\delta|_B$, on the other hand, is local and requires some notation. Let $q$ be any singular point of a fiber $X_b$, $b\in B$. Since the general fiber of $X$ is smooth, the total space of $X$ near $q$ is given locally analytically by $xy=t^{m_q}$, where $x$ and $y$ are local parameters on $X_b$, $t$ is a local parameter on $B$ near $b$, and $m_q\geq 1$. (This follows from the one-dimensional versal deformation space of the nodal singularity at $q$.) For any other point $q$ of $X$ we set $m_q=0$. Now we can define $$\delta|_B=\delta_X:=f_*(\sum_{q\in X}m_qq)\in {\operatorname}{Pic}B.$$ By abuse of notation, we shall use the same letters for the line bundles $\lambda|_B, \,\,\kappa|_B$ and $\delta|_B$ and for their respective degrees, e.g. $\lambda|_B={\operatorname}{deg}\lambda|_B$. [**Remark 2.1.**]{} It is possible to define the three basic invariants for a wider variety of families. In particular, dropping the assumption of smoothness of the general fiber roughly means that the base curve $B$ is contained entirely in the boundary locus of $\overline{\mathfrak M}_g$. Since such families are not discussed in our paper, we shall not give these definitions here. The existence, however, of such invariants for any one-parameter family of stable curves will follow from the description of $\lambda,\,\,\delta$ and $\kappa$ as “global” classes in ${\operatorname}{Pic}_{\mathbb Q}\overline{\mathfrak M}_g$ (cf. Sect. \[linebundles\]).
[**Remark 2.2.**]{} It is also possible to consider families whose special members are not stable, e.g. cuspidal, tacnodal and other types of singular curves. One reduces to the above cases by applying [*semistable reduction*]{} (cf. [@FM]). The line bundles $\lambda,\,\,\delta$ and $\kappa$ in Pic$_{\mathbb Q}\overline{\mathfrak M}_g$ {#linebundles} ----------------------------------------------------------------------------------------------- Another way to interpret the classes $\lambda|_B,\,\,\delta|_B$ and $\kappa|_B$ is to think of them as rational divisor classes on $\overline{\mathfrak{M}}_g$. In fact, Mumford shows that such invariants, defined for [*any*]{} proper flat family $X\rightarrow S$ and natural under base change, induce line bundles in Pic$_{\mathbb Q}\overline{\mathfrak M}_g$. Here follows a rough sketch of the argument (cf. [@M2]). Consider ${\operatorname}{Hilb}^{p(x)}_r$, the Hilbert scheme parametrizing all closed subschemes of ${\mathbf P}^r$ with Hilbert polynomial $p(x)=dx-g+1$ for some $d=2n(g-1)\gg 0$ and $r=d-g$. Let ${\mathcal}H\subset {\operatorname}{Hilb}^{p(x)}_r$ be the locally closed smooth subscheme of $n$-canonical stable curves of genus $g$. Then $\overline{\mathfrak M}_g$ is the GIT-quotient of ${\mathcal}H$ by ${{\operatorname}{PGL}}_r$. Let $$\rho:{\mathcal}H\rightarrow \overline{\mathfrak M}_g={\mathcal}H/{{\operatorname}{PGL}}_r$$ be the natural surjection, and let $({\operatorname}{Pic}{\mathcal}H)^{{{\operatorname}{PGL}}_r}$ be the subgroup of isomorphism classes of line bundles on ${\mathcal}H$ invariant under the action of ${{\operatorname}{PGL}}_r$. Consider also ${\operatorname}{Pic}_{{\operatorname}{fun}}\overline{\mathfrak M}_g$, the group of line bundles on the [*moduli functor*]{}. 
An element $L$ of ${\operatorname}{Pic}_{{\operatorname}{fun}}\overline{\mathfrak M}_g$ consists of the following data: for any proper flat family $f:X\rightarrow S$ of stable curves a line bundle $L_S$ on $S$ natural under base change. Two such elements are declared isomorphic under the obvious compatibility conditions. Naturally, a line bundle on $\overline{\mathfrak M}_g$ gives rise by pull-back to a line bundle on the moduli functor. In fact, this map is an inclusion with a torsion cokernel, and ${\operatorname}{Pic}_{{\operatorname}{fun}}\overline{\mathfrak M}_g$ is torsion free and isomorphic to $({\operatorname}{Pic}{\mathcal}H)^{{{\operatorname}{PGL}}_r}$: $${\operatorname}{Pic}\overline{\mathfrak M}_g\stackrel{\rho^*}{\hookrightarrow} {\operatorname}{Pic}_{{\operatorname}{fun}}\overline{\mathfrak M}_g\cong ({\operatorname}{Pic}{\mathcal}H)^{{{\operatorname}{PGL}}_r}.$$ Hence we may regard all these groups as sublattices of ${\operatorname}{Pic}_{\mathbb Q}\overline{\mathfrak M}_g$. In particular, $${\operatorname}{Pic}_{{\operatorname}{fun}}\overline{\mathfrak M}_g\otimes {\mathbb Q}\cong {\operatorname}{Pic}_{\mathbb Q}\overline{\mathfrak M}_g,$$ and any line bundle on the moduli functor can be thought of as a rational class on $\overline{\mathfrak M}_g$. 
These identifications allow us to make the following [**Definition 2.1.**]{} In ${\operatorname}{Pic}_{\mathbb Q} \overline{\mathfrak M}_g$ we define the line bundles $\lambda,\,\,\kappa$ and $\delta$ by $$\lambda=\det\pi_*(\omega_{{\mathcal}{C}/{\mathcal}H}),\,\, \kappa=\pi_*c_1(\omega _{{\mathcal}{C}/{\mathcal}H})^2,\,\, \delta={\mathcal}{O}_{{\mathcal}H}(\Delta{\mathcal}H),$$ where ${{\mathcal}{C}}\subset {\mathcal}H\times {\mathbb P}^r$ is the universal curve over ${\mathcal}H$, $\pi:{\mathcal}C \rightarrow {\mathcal}H$ is the projection map, $\omega_{{\mathcal}{C}/{\mathcal}H}$ is the relative dualizing sheaf of $\pi$, and $\Delta{\mathcal}H\subset {\mathcal}H$ is the divisor of singular curves on ${\mathcal}H$. As defined, $\lambda,\kappa$ and $\delta$ lie in ${\operatorname}{Pic}_{\mathbb Q}\overline{\mathfrak M}_g$, and as such they are only [*rational*]{} Cartier divisors on $\overline{\mathfrak M}_g$. In [@MHE] one can find examples where $\lambda$ does [*not*]{} descend to a line bundle on $\overline{\mathfrak M}_g$. On the other hand, it is obvious from which divisor on $\overline{\mathfrak M}_g$ our $\delta$ comes: $\delta={\mathcal}O_{\overline{\mathfrak M}_g}(\Delta)$, where $\Delta$ denotes the divisor on $\overline{\mathfrak M}_g$ comprised of all singular stable curves. Again, due to singularities of the total space of $\overline{\mathfrak M}_g$, $\Delta$ is only a [*rational*]{} Cartier divisor. In fact, the only locus of $\overline{\mathfrak{M}}_g$ on which $\lambda,\,\,\delta$ and $\kappa$ are necessarily [*integer*]{} divisor classes is $(\overline{\mathfrak{M}}_g)_0$ - the locus of automorphism-free curves. We can further define the [*boundary*]{} classes $\delta_0, \delta_1,...,\delta_{[\frac{g}{2}]}$ in ${\operatorname}{Pic} _{\mathbb Q}\overline {\mathfrak M}_g$. 
Let $\Delta_i$ be the $\mathbb Q$–Cartier divisor on $\overline {\mathfrak M}_g$ whose general member is an irreducible nodal curve with a single node (if $i=0$), or the join of two irreducible smooth curves of genera $i$ and $g-i$ intersecting transversally in one point (if $i>0$). Setting $\delta_i={{\mathcal}O}_{\overline {\mathfrak M}_g}(\Delta_i)$, we have $\Delta=\sum_i\Delta_i$ and $\delta=\sum_i\delta_i$. As the following result of Harer [@Ha1; @Ha2] suggests, in order to describe the geometry of the moduli space $\overline{\mathfrak{M}}_g$, it will be useful to study the divisor classes defined above, and to understand the relations between them. The Hodge class $\lambda$ and the boundary classes $\delta_0,\delta_1,...,\delta_{[\frac{g}{2}]}$ generate ${\operatorname}{Pic}_{\mathbb Q} \overline{\mathfrak M}_g$, and for $g\geq 3$ they are linearly independent. It is easy to recognize the restrictions of $\lambda,\,\,\delta$ and $\kappa$ to a curve $B$ in $\overline{\mathfrak{M}}_g$ as the previously defined $\lambda|_B,\,\,\delta|_B$ and $\kappa|_B$. For example, the restriction of $\delta$ to the base curve $B$ counts, with appropriate multiplicities, the number of intersections of $B$ with the boundary components $\Delta_i$ of $\overline{\mathfrak{M}}_g$. As a final remark, applying the Grothendieck-Riemann-Roch theorem (GRR) to the map $\pi:{\mathcal}C \rightarrow {\mathcal}H$ and the sheaf $\omega_{{\mathcal}{C}/{\mathcal}H}$ yields the basic relation: $$12\lambda=\kappa+\delta. \label{GRR}$$ Slope of non-isotrivial families {#slope-non-isotrivial} -------------------------------- Let $f:X\rightarrow B$ be our family of stable curves with a smooth general member. By definition, $\delta|_B\geq 0$. Further, all locally free quotients of the Hodge bundle $f_*(\omega_f)$ have non-negative degrees [@key13]. If $X$ is a non-isotrivial family, then $\lambda|_B>0$, and since the relative canonical divisor $K_{X/B}$ is nef, $\kappa|_B>0$ [@key32].
In particular, we can divide by $\lambda|_B$. [**Definition 2.2.**]{} The [*slope*]{} of a non-isotrivial family $f:X\rightarrow B$ of stable curves with a smooth general member is the ratio $${\operatorname}{slope}(X/\!_{\displaystyle{B}}):=\frac{\delta|_B}{\lambda|_B} \cdot$$ Suppose we make a base change $B_1\rightarrow B$ of degree $d$, and set $X_1=X\times_{B}B_1$ to be the pull-back of our family over the new base $B_1$ (cf. Fig. \[basechange\]). Then the three invariants on $B$ will pull back to the corresponding invariants on $B_1$, and their degrees will be multiplied by $d$, e.g. $\lambda|_{B_1}=d\lambda|_B$, etc. In particular, the slope of $X/_{\displaystyle{B}}$ will be preserved. (1.8,1.8)(-0.2,3.9) (0,4)[$B_1\stackrel{d}{\longrightarrow} B$]{} (0,5.1)[$X_1\,\,{\longrightarrow}\,X$]{} (0.2,5)(1.25,0)[2]{}[(0,-1)[0.6]{}]{} In view of (\[GRR\]), restricting to the base curve $B$ yields $$12\lambda|_B=\kappa|_B+\delta|_B.$$ From the positivity conditions above, we deduce that $0\leq {\operatorname}{slope}(X/\!_{\displaystyle{B}})<12.$ Statement of the problem and what is known {#statement} ------------------------------------------ It is natural to ask whether we can find a better estimate for the slope of $X$. The first fundamental result in this direction is the following Let $f:X\rightarrow B$ be a non-isotrivial family with smooth general member. Then the slope of the family satisfies: $$\frac{\delta|_B}{\lambda|_B}\leq 8+\frac{4}{g}\cdot \label{8+4/g}$$ Equality holds if and only if the general fiber of $f$ is hyperelliptic, and all singular fibers are irreducible. \[CHX\] Note that the upper bound is achieved only for hyperelliptic families. Such families are of very special type since the hyperelliptic locus $\overline{\mathfrak I}_g$ has codimension $g-2$ in $\overline{\mathfrak M}_g$. On the other hand, if the base curve $B$ is general enough, a much better estimate can be shown (cf.
[@MHE]): If $B$ passes through a general point $[C]\in \overline{\mathfrak{M}}_g$, then $$\frac{\delta|_B}{\lambda|_B}\leq 6+{\operatorname}{o}(\frac{1}{g})\cdot \label{6+o(1/g)}$$ \[generalbound1\] For example, when $g$ is odd, we can set $k=(g+1)/2$ and define the divisor $\overline{{\mathcal}{D}}_k$ in ${\overline{\mathfrak M}}_g$ as the closure of the locus of $k$-sheeted covers of ${{\mathbf P}}^1$: $$\overline{{\mathcal}{D}}_k=\overline{\{C\in{\mathfrak{M}}_g\,\,|\,\,C\,\, {\operatorname}{has}\,\,{g}^1_k\}}.$$ $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=general.ps,width=1.5in,height=1.3in}} \hspace{-1mm}\end{array}}$$ If our family is not entirely contained in $\overline{{\mathcal}{D}}_k$, or equivalently, if $B$ passes through a point $[C]\not\in\overline{{\mathcal}{D}}_k$ (cf. Fig. \[generalcurve\]), $$\frac{\delta|_B}{\lambda|_B}\leq 6+\frac{12}{g+1}\cdot \label{6+12/(g+1)}$$ A slope higher than the “generic” ratio can therefore be obtained only for a very special type of family: those entirely contained in $\overline{{\mathcal}{D}}_k$, and hence possessing ${g}^1_2$, ${g}^1_3$, etc. ### The rational Picard group of the hyperelliptic locus $\overline{\mathfrak{I}}_g$ {#rationalhyper} In proving the maximal bound $8+4/g$, Cornalba-Harris also describe ${\operatorname}{Pic}_{\mathbb Q}\overline{\mathfrak I}_g$ by exhibiting generators and relations (cf. [@CH]). Here we briefly discuss their result. Recall the irreducible divisors $\Delta_i$ on $\overline{\mathfrak M}_g$.
For $i=1,...,[g/2]$, $\Delta_i$ cuts out an irreducible divisor on $\overline{\mathfrak I}_g$, while the intersection $\Delta_0\cap\overline{\mathfrak I}_g$ breaks up into several components: $$\Delta_0\cap\overline{\mathfrak I}_g=\Xi_0\cup\Xi_1\cup\cdots \cup\Xi_{[\frac{g-1}{2}]}.$$ Set $\xi_i={\mathcal}O_{\overline{\mathfrak I}_g}(\Xi_i)$ for the class of $\Xi_i$ in $\overline{\mathfrak I}_g$, and retain the symbols $\lambda$ and $\delta_i$ for their corresponding restrictions to ${\operatorname}{Pic}_{\mathbb Q}\overline{\mathfrak I}_g$. Thus, $\delta_i:={\mathcal}O_{\overline{\mathfrak I}_g}(\Delta_i\cap\overline{\mathfrak I}_g)$ for all $i$. Finally note that the class $\delta_0$ is realised in ${\operatorname}{Pic}_{\mathbb Q}\overline{\mathfrak I}_g$ as the sum $$\delta_0=\xi_0+2\xi_1+\cdots+2\xi_{[\frac{g-1}{2}]}.$$ The coefficient $2$ roughly means that $\Delta_0$ is [*double*]{} along $\Xi_i$, for $i>0$. The classes $\xi_0,\cdots,\xi_{[\frac{g-1}{2}]}$ and $\delta_1,\cdots,\delta_{[\frac{g} {2}]}$ freely generate ${\operatorname}{Pic}_{\mathbb Q}{\overline{\mathfrak I}_g}$. The Hodge class $\lambda\in {\operatorname}{Pic}_{\mathbb Q}{\overline{\mathfrak I}_g}$ is expressed in terms of these generators as the following linear combination: $$(8g+4)\lambda=g\xi_0+\sum_{i=1}^{[(g-1)/2]}2(i+1)(g-i)\xi_i +\sum_{j=1}^{[g/2]}4j(g-j)\delta_j. \label{CHPic}$$ \[theoremCHPic\] For a specific family $f:X\rightarrow B$ of hyperelliptic stable curves this relation reads: $$(8g+4)\lambda|_B=g\xi_{0}|_B+\sum_{i=1}^{[(g-1)/2]}2(i+1)(g-i)\xi_{i}|_B +\sum_{j=1}^{[g/2]}4j(g-j)\delta_{j}|_B$$ $$\Rightarrow\,\, (8+4/g)\lambda|_B\geq \xi_{0}|_B+\sum_i2\xi_{i}|_B+ \sum_j2\delta_{j}|_B=\delta|_B.$$ This yields the desired $8+4/g$ inequality for the slope of a hyperelliptic family, and shows that the maximum can be obtained exactly when all $\xi_1,\cdots,\xi_{[\frac{g-1}{2}]},\delta_1,\cdots,\delta_{[\frac{g}{2}]}$ vanish on $B$.
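The coefficient estimates implicit in the last implication are elementary; we record them here for convenience. Dividing (\[CHPic\]) by $g$, one needs each boundary coefficient to dominate $2$: $$\frac{2(i+1)(g-i)}{g}\geq 2\,\Longleftrightarrow\, i(g-i-1)\geq 0, \qquad \frac{4j(g-j)}{g}\geq 2\,\Longleftrightarrow\, 2j(g-j)\geq g,$$ and both right-hand conditions clearly hold in the stated ranges $1\leq i\leq[\frac{g-1}{2}]$ and $1\leq j\leq[\frac{g}{2}]$. Since the restricted classes $\xi_i|_B$ and $\delta_j|_B$ are non-negative for a family of hyperelliptic stable curves, discarding the excess gives the displayed inequality.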
In other words, the singular fibers of $X$ belong only to the boundary divisor $\Xi_0$, and hence are irreducible. In the Appendix we review the description of the divisors $\Xi_i$ via admissible covers, and give an alternative proof of Theorem \[theoremCHPic\]. ### Example of a hyperelliptic family with maximal slope {#example} We present here a typical example in which the upper bound $8+4/g$ is achieved, and show how to calculate explicitly the basic invariants $\lambda|_B$ and $\delta|_B$ for this family. [**Example 2.1.**]{} Consider a pencil ${\mathcal}{P}$ of hyperelliptic curves of genus $g$ on ${{\mathbf P}}^1\!\times \!{{\mathbf P}}^1$. Because of genus considerations, its members must be of type $(2,g+1)$. Our family $f\!:\!X\!\rightarrow \!{{\mathbf P}}^1$ will be obtained by blowing-up ${{\mathbf P}}^1\!\times\! {{\mathbf P}}^1$ at the $4(g+1)$ base points of the pencil in order to separate its members (cf. Fig. \[ratio8+4/g\]). Hence, $\chi(X)=\chi({{\mathbf P}}^1\!\times\! {{\mathbf P}}^1)+4(g+1)$ for the corresponding topological Euler characteristics. The Riemann-Hurwitz formula for the map $f$ gives a second relation: $\chi(X)=\chi({{\mathbf P}}^1)\chi(X_b)+\delta|_B$, where ${{\mathbf P}}^1$ is the base $B$ and $X_b$ is the general fiber of $X$. Putting these together, $\delta|_B=8g+4.$ (1,3)(0,3.1) (-0.7,5.5)[$X\,\,\hookrightarrow\,\,{{\mathbf P}}^1\! \!\times\! {{\mathbf P}}^1\!\!\times\!{{\mathbf P}}^1$]{} (-0.5,5.4)[(1,-1)[1]{}]{} (1.7,5.4)[(-1,-1)[1]{}]{} (0,3.9)[${{\mathbf P}}^1\!\!\times\! {{\mathbf P}}^1$]{} (0.4,2.7)[${{\mathbf P}}^1$]{} (0.6,3.8)[(0,-1)[0.6]{}]{} The total space of $X$ is a divisor on ${{\mathbf P}}^1\!\!\times\! {{\mathbf P}}^1\!\!\times\! {{\mathbf P}}^1$ of type $(2,g+1,1)$, and the map $f\!:\!X\!\rightarrow \!{{\mathbf P}}^1$ is the restriction to $X$ of the third projection $\pi_3\!:\!{{\mathbf P}}^1\!\!\times\! {{\mathbf P}}^1\!\!\times\!{{\mathbf P}^1}\!\rightarrow \!{{\mathbf P}}^1$.
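Spelling out the arithmetic in the Euler-characteristic count above: $$\chi(X)=\chi({{\mathbf P}}^1\!\times\! {{\mathbf P}}^1)+4(g+1)=4g+8, \qquad \chi(X)=\chi({{\mathbf P}}^1)\chi(X_b)+\delta|_B=2(2-2g)+\delta|_B,$$ whence $\delta|_B=(4g+8)-(4-4g)=8g+4$. Here each of the $4(g+1)$ blow-ups raises the Euler characteristic by $1$, and each node of a singular fiber contributes $1$ to both $\chi(X)$ and $\delta|_B$.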
Using standard methods, we compute $h^0( (f_*(\omega_f))(-2))=0$. From the positivity of all free quotients of the Hodge bundle on ${{\mathbf P}}^1$, $f_*(\omega_f)$ splits as a direct sum $\bigoplus_{i=1}^g {{\mathcal}O}_{{{\mathbf P}}^1}(a_i)$ for some $a_i>0$. Then, for $f_*(\omega_f)(-2) =\bigoplus_i{\mathcal}{O}_{{\mathbf P}^1}(a_i-2)$ to have no sections, all $a_i$’s must be at most $1$. Finally, $$f_*(\omega_f)=\bigoplus_{i=1}^g{{\mathcal}O}_{{{\mathbf P}}^1}(+1),\,\,\lambda|_B=g, \,\,{\operatorname}{and}\,\,\frac{\delta|_B}{\lambda|_B}=8+\frac{4}{g} \cdot$$ ### The Trigonal Locus $\overline{\mathfrak{T}}_g$ {#trigonallocus} In a vein similar to the example above, we consider pencils of trigonal curves on ruled surfaces, and obtain the slope $7+6/g$. One might reasonably expect that this is the maximal ratio. Recall that a bundle ${\mathcal}{E}$ on a curve $B$ is [*semistable*]{} if for any proper subbundle ${\mathcal}{F}$, we have $q({\mathcal}{F})\leq q({\mathcal}{E})$, where $q$ is the quotient of the degree by the rank of the corresponding bundle. Following Xiao’s approach in the proof of Theorem \[CHX\], Konno shows that for non-hyperelliptic fibrations of genus $g$ with semistable Hodge bundle $f_*\omega_{f}$ (cf. [@HN; @key5]): $$\frac{\delta|_B}{\lambda|_B}\leq 7+\frac{6}{g}\cdot \label{7+6/g}$$ For arbitrary trigonal families, he establishes the inequality (cf. [@key6]): $$\frac{\delta|_B}{\lambda|_B}\leq \frac{22g+26}{3g+1}\sim 7\frac{1}{3}+{\operatorname}{o} (\frac{1}{g})\cdot$$ Examples of trigonal families achieving this ratio were not found, which suggested that this bound might be too large. On the other hand, in trying to disprove the smaller bound $7+6/g$, we naturally arrived at counterexamples pointing to a third intermediate ratio (cf.
Theorem \[maximal bound2\]): $$\frac{36(g+1)}{5g+1}\sim 7\frac{1}{5}+{\operatorname}{o} (\frac{1}{g})\cdot$$ The difference between the last two estimates may seem negligible, but this would not be so when both $\lambda|_B$ and $\delta|_B$ become large and we attempt to bound $\lambda|_B$ from below by $\delta|_B$. What is more important, the second ratio is in fact [*exact*]{}, and we give equivalent conditions for it to be achieved (cf. Sect. \[whenmaximal\], \[Maroni-maximal\]). This maximal bound confirms Chen’s result for genus $g=4$ in [@Chen]. The reader may ask why the “generic” examples for the maximum in the hyperelliptic case fail to provide the maximum in the trigonal case as well. As we noted in the Introduction, the answer is closely related to the so-called [*Maroni*]{} locus in $\overline{\mathfrak{T}}_g$. More precisely, if ${\mathbf F}_k={{\mathbf P}}({{\mathcal}O}_{{{\mathbf P}}^1}\oplus {{\mathcal}O}_{{{\mathbf P}}^1}(k))$ denotes the corresponding rational ruled surface, a general curve $C$ embeds in ${\mathbf F}_0$ if $g$ is even, and in ${\mathbf F}_1$ if $g$ is odd. The Maroni locus consists of those curves that embed in ${\mathbf F}_k$ with $k\geq 2$. The number $k/2$ is referred to as the [*Maroni invariant*]{} of $C$. In these terms, the examples of pencils of trigonal curves on ${\mathbf F}_0$ and ${\mathbf F}_1$ have the lowest possible constant Maroni invariant, and we shall see that the maximum bound can be obtained only for families entirely contained in the Maroni locus, and having very high Maroni invariants. The “semistable” bound $7+6/g$ appears in Theorem \[7+6/g Bogomolov2\], where we give instead a sufficient [*Bogomolov-semistability*]{} condition $4c_2(V)-c_1^2(V)\geq 0$ for a rank 2 vector bundle $V$ canonically associated to $X$.
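A routine comparison, recorded here for convenience, makes the ordering of the three ratios above precise: $$\frac{22g+26}{3g+1}-\frac{36(g+1)}{5g+1}=\frac{2(g+5)(g-1)}{(3g+1)(5g+1)}>0, \qquad \frac{36(g+1)}{5g+1}-\Big(7+\frac{6}{g}\Big)=\frac{(g-3)(g+2)}{g(5g+1)},$$ so Konno’s bound strictly exceeds the maximal bound $36(g+1)/(5g+1)$ for all $g\geq 2$, while the latter exceeds the “semistable” bound $7+6/g$ exactly when $g>3$; for instance, at $g=4$ these are $60/7\approx 8.57$ versus $8.5$.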
The rational Picard group of $\overline{\mathfrak{T}}_g$ is described in terms of generators and relations in Section \[generators\], thus providing in the trigonal case an analog of Theorem \[theoremCHPic\]. Note the apparent similarity of the coefficients $\widetilde{c}_{k,i}$ of the trigonal boundary classes and the coefficients of the hyperelliptic boundary classes. This is not coincidental. In fact, the $\widetilde{c}_{k,i}$’s are in a sense the “smallest” coefficients that could have been associated to the corresponding classes $\delta_{k,i}$ (cf. Fig. \[Delta-k,i\]): they are symmetric with respect to the two genera of the components in the general member of $\delta_{k,i}$. A crucial role in the proof of Theorem \[Pic trigonal\] is played by the interpretation of the above Bogomolov semistability condition in terms of the Maroni locus in $\overline{\mathfrak{T}}_g$ (cf. Sect. \[interpretation\]). The idea of the proof {#idea} --------------------- Let $f:X\rightarrow B$ be a family of stable curves, whose general member $X_b$ is a smooth trigonal curve. By definition, $X_b$ is a triple cover of ${{\mathbf P}}^1$. We would like to study how this triple cover varies as $X_b$ moves in the family $X$. Thus, it would be desirable to represent $X$, by analogy with $X_b$, as a triple cover of a ruled surface $Y$ swept out by the image lines ${{\mathbf P}}^1$. Unfortunately, due to the existence of hyperelliptic and other special singular fibers, this is not always possible.
(3,4.9)(-1,2) (0,4)[$\widetilde{X}\,\stackrel{\widetilde{\phi}} {\longrightarrow}\, \widetilde{Y}$]{} (0,5.1)[$\widehat{X}\,\stackrel{\widehat{\phi}}{\longrightarrow}\, \widehat{Y}$]{} (0.2,3.85)(-1.3,-0.7)[2]{}[(1,-1)[0.5]{}]{} (1.4,3.85)[(-1,-1)[0.5]{}]{} (0.5,3)(-0.5,0.85)[2]{}[(-2,-1)[0.8]{}]{} (0.6,2.9)[$\widetilde{B}$]{} (-1.3,3.3)[$X$]{} (-0.7,2.3)[$B$]{} (0.1,5)(1.5,0)[2]{}[(0,-1)[0.6]{}]{} (1.6,6.2)[(0,-1)[0.6]{}]{} (1.35,6.3)[${{\mathbf P}}V$]{} (0.2,5.6)[(2,1)[1.2]{}]{} ### The basic construction The “closest” model of such a triple cover can be obtained after a finite number of birational transformations on $X$, and a possible base change over the base $B$. This way we construct a [*quasi-admissible*]{} cover $\widetilde{\phi}:{\widetilde{X}} \rightarrow {\widetilde{Y}}$ over a new base ${\widetilde{B}}$ (cf. Prop. \[propquasi\]). Here ${\widetilde{Y}}$ is a [*birationally*]{} ruled surface over $\widetilde{B}$ with reduced, but not necessarily irreducible, special fibers: $\widetilde{Y}$ allows for [*pointed stable*]{} rational fibers, i.e. trees of ${{\mathbf P}}^1$’s with points marked in a certain (stable) way. The map $\widetilde{\phi}$ expresses any fiber $\widetilde{X}_ b$ as a triple quasi-admissible cover of the corresponding [*pointed stable*]{} rational curve $\widetilde{Y}_b$. To calculate effectively our invariants $\lambda,\delta$ and $\kappa$, we need that $\widetilde{\phi}$ be [*flat*]{}, which could force a few additional blow-ups on $\widetilde{X}$ and $\widetilde{Y}$. We end up with a flat proper triple cover $\widehat{\phi}: \widehat{X}\rightarrow \widehat{Y}$, where certain fibers of $\widehat{X}$ and $\widehat{Y}$ are allowed to be [*non-reduced*]{}: these are the scheme-theoretic preimages under the blow-ups on $\widetilde{X}$ and $\widetilde{Y}$. We call such covers $\widehat{\phi}$ [*effective*]{}. We observe next that any smooth trigonal curve $C$ can be naturally embedded in a ruled surface ${\mathbf F}_k$ over ${{\mathbf P}}^1$.
If $\alpha:C\rightarrow {{{\mathbf P}}^1}$ is the corresponding triple cover, there is an exact sequence of locally free sheaves on ${{{\mathbf P}}^1}$: $$0\rightarrow {V}\rightarrow {\alpha}_*{{\mathcal}O}_{C}\stackrel {{\operatorname}{tr}}{\rightarrow}{{\mathcal}O}_{{{\mathbf P}}^1}\rightarrow 0.$$ The projectivization ${\mathbf P}V$ of the rank 2 vector bundle $V$ is the ruled surface ${\mathbf F}_k$. This construction can be extended as $C$ moves in the effective cover $\widehat{\phi}:\widehat{X}\rightarrow \widehat{Y}$. The flatness of $\widehat{\phi}$ forces the pushforward ${\widehat\phi}_*{{\mathcal}O}_{\widehat{X}}$ to be a locally free sheaf of rank 3 on $\widehat{Y}$, and the finiteness of $\widehat{\phi}$ ensures the existence of a [*trace map*]{} ${\operatorname}{tr}:{\widehat\phi}_*{{\mathcal}O}_{\widehat{X}}\rightarrow {{\mathcal}O}_{\widehat{Y}}$. Again, the kernel $V$ of ${\operatorname}{tr}$ is the desired rank 2 vector bundle on $\widehat{Y}$, in whose projectivization, ${{\mathbf P}}V$, we embed $\widehat{X}$ (cf. Fig. \[Basic construction idea\]). ### Chow Rings Calculations We can now use the relations in the Chow rings ${\mathbb{A}}({\mathbf P}V)$, $\mathbb{A}(\widehat{Y})$ and $\mathbb{A}(\widehat{X})$ to calculate the invariants $\lambda_{\widehat{X}}$ and $\delta_{\widehat{X}}$, appropriately defined for the new family $\widehat{X}\rightarrow {\widetilde{B}}$ of semistable and occasionally non-reduced fibers. Then, of course, we translate $\lambda_{\widehat{X}}$ and $\delta_{\widehat{X}}$ into $\lambda_{{X}}$ and $\delta_{{X}}$ with the necessary adjustments from the birational transformations on $X$ and the base change on $B$. We compare the resulting expressions to obtain a relation between $\lambda_X$ and $\delta_X$.
### Boundary of the Trigonal Locus As we vary the base curve $B$ inside $\overline{\mathfrak{T}}_g$, we actually obtain a relation among the restrictions of $\lambda$ and $\delta$ in ${\operatorname}{Pic}_{\mathbb{Q}}\overline{\mathfrak{T}}_g$, rather than just between $\lambda|_B=\!\lambda_X$ and $\delta|_B=\!\delta_X$ in ${\operatorname}{Pic}B$. How have we thus represented and linked $\lambda|_{\overline{\mathfrak{T}}_g}$ and $\delta|_{\overline {\mathfrak{T}}_g}$? To answer this question, we need first to understand the boundary divisors of the trigonal locus $\overline{\mathfrak{T}}_g$. As we shall see, there are seven types of such divisors, denoted by $\Delta{\mathfrak{T}}_{0}$ and $\Delta{\mathfrak{T}}_{k,i}$ for $k=1,...,6$. Each type is determined by the specific geometry of its general member. For example, $\Delta{\mathfrak{T}}_0$ is the closure of all irreducible trigonal curves with one node, while $\Delta{\mathfrak{T}}_{2,i}$ corresponds to joins at two points of a trigonal and a hyperelliptic curve with genera $i$ and $g-1-i$, respectively (cf. Fig. \[Delta-k,i\]). Naturally, we derive an expression for the restriction of the divisor class $\delta\in{\operatorname}{Pic}_{\mathbb{Q}}\overline{\mathfrak{M}}_g$ to $\overline{\mathfrak{T}}_g$: $$\delta|_{\displaystyle{\overline{\mathfrak{T}}_g}}=\delta_0+ \sum_{i=1}^{\scriptscriptstyle{[(g-2)/2]}}3\delta_{1,i} +\sum_{i=1}^{\scriptscriptstyle{g-2}}2\delta_{2,i} +\sum_{i=1}^{\scriptscriptstyle{[g/2]}}\delta_{3,i} +\sum_{i=1}^{\scriptscriptstyle{[(g-1)/2]}}3\delta_{4,i}+ \sum_{i=1}^{\scriptscriptstyle{g-1}}\delta_{5,i} +\sum_{i=1}^{\scriptscriptstyle{[g/2]}}\delta_{6,i}. \label{delta}$$ Here $\delta_0$ and $\delta_{k,i}$ are the divisor classes of $\Delta{\mathfrak{T}}_0$ and $\Delta{\mathfrak{T}}_{k,i}$ in ${\operatorname}{Pic}_{\mathbb{Q}}\overline{\mathfrak{T}}_g$.
### Relations among $\lambda$ and $\delta$ For a fixed family $X\rightarrow B$ with a smooth trigonal general member, we establish a relation among the Hodge class $\lambda|_B$, the boundary classes $\delta_{k,i}|_B$, and the Bogomolov quantity $4c_2(V)-c_1^2(V)$ for the associated vector bundle $V$: $$(7g+6)\lambda|_B=g\delta_0|_B+\sum_{k,i}\widetilde{c}_ {k,i}\delta_{k,i}|_B+\frac{g-3}{2}(4c_2(V)-c_1^2(V)). \label{tobelifted}$$ The polynomial coefficients $\widetilde{c}_{k,i}$ dominate $g$ times the corresponding coefficients of the boundary divisors in the expression for $\delta|_{\displaystyle{\overline{\mathfrak{T}}_g}}$. As a result, we rewrite (\[tobelifted\]) as $$(7g+6)\lambda|_B=g\delta|_B+{\mathcal}{E}|_B+\frac{g-3}{2}(4c_2(V)-c_1^2(V)), \label{E-argument}$$ where ${\mathcal}{E}$ is an effective combination of the boundary classes on $\overline{\mathfrak{T}}_g$. In particular, if $V$ is Bogomolov semistable, the slope satisfies (cf. Theorem \[7+6/g Bogomolov2\]): $${\operatorname}{slope}(X/_{\displaystyle{B}})\leq 7+\frac{6}{g}. \label{idea7+6/g}$$ Further, we describe ${\operatorname}{Pic}_{\mathbb{Q}}\overline{\mathfrak{T}}_g$ as freely generated by the restriction $\lambda|_{\overline{\mathfrak{T}}_g}$ and the boundary classes of $\overline{\mathfrak{T}}_g$. In the even genus $g$ case, we can replace $\lambda|_{\overline{\mathfrak{T}}_g}$ by a geometrically defined class $\mu$, corresponding to the so-called Maroni divisor in $\overline{\mathfrak{T}}_g$. This, of course, means that the Hodge class $\lambda|_{\overline{\mathfrak{T}}_g}$ must be some linear combination of the boundary classes and $\mu$.
The Bogomolov quantity is interpreted as $$4c_2(V)-c_1^2(V)=4\mu|_B+0\cdot \delta_0|_B+\sum_{k,i}\alpha_{k,i} \delta_{k,i}|_B,$$ which in turn “lifts” (\[tobelifted\]) to the wanted relation in ${\operatorname}{Pic}_{\mathbb{Q}}\overline{\mathfrak{T}}_g$: $$(7g+6)\lambda|_{\overline{\mathfrak{T}}_g}=g\delta_0+ \sum_{k,i}\widehat{c}_{k,i}\delta_{k,i}+2(g-3){\mu}.$$ We have not yet computed explicitly all coefficients $\widehat{c}_{k,i}$. In the cases which we have completed ($\Delta_{0}\mathfrak{T}_g$ and $\Delta_{1,i}\mathfrak{T}_g$), these coefficients again turn out to be sufficiently large so that we can repeat the argument in (\[E-argument\]). Thus, if $X$ has at least one non-Maroni fiber, and its singular fibers belong to $\Delta_{0}\mathfrak{T}_g\cup\Delta_{1,i}\mathfrak{T}_g$, then $\mu|_B\geq 0$, and hence the stronger bound of (\[idea7+6/g\]) holds (cf. Prop. \[Maroni inequality\] and Conj. \[Maroni-conj\]). ### Maximal Bound Since the Bogomolov semistability condition $4c_2(V)-c_1^2(V)\geq 0$ is not always satisfied, the above discussion shows that $7+6/g$ is [*not*]{} the maximal bound for the slope of trigonal families. Therefore, we need another, more subtle, estimate. The expressions for $\lambda|_B$ and $\delta|_B$ suggest that any maximal bound would be equivalent to an inequality involving $c_1^2(V)$, $c_2(V)$, and possibly some other invariants. We construct a specific divisor class $\eta$ on $\widetilde{X}$, for which the [*Hodge Index*]{} theorem implies $\eta^2\leq 0$, and we translate this into $9c_2(V)-2c_1^2(V)\geq 0$ (cf. Prop. \[genindex\]). We notice that the only reasonable way to replace Bogomolov’s condition $4c_2(V)-c_1^2(V)\geq 0$ by the newly found inequality is by subtracting the following quantities: $$36(g+1)\lambda|_B-(5g+1)\delta|_B= {\mathcal}{E}^{\prime}|_B+(g-3)\big(9c_2(V)- 2c_1^2(V)\big), \label{maximum1}$$ so that the “left-over” linear combination of boundary divisors ${\mathcal}{E}^{\prime}$ is again effective (cf.
Theorem \[maximal relation2\]). Hence, we conclude that for [*all*]{} trigonal families: $${\operatorname}{slope}(X/_{\displaystyle{B}})\leq \frac{36(g+1)}{5g+1} \cdot$$ The organization of the paper {#organization} ----------------------------- The presentation of the [*Basic Construction*]{} is done in several stages. Fig. \[stages\] shows schematically the connection between the three types of covers, admissible, quasi-admissible and effective, in relation to the original family $X\rightarrow B$ of stable curves. $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=stages.ps,width=1.6in,height=1.35in}} \hspace{-1mm}\end{array}}$$ We start in Section \[hurwitz\] by introducing a compactification $\overline{{\mathcal}{H}}_{d,g}$ of the Hurwitz scheme, parametrizing [*admissible*]{} $d$-uple covers of stable pointed rational curves. Using its coarse moduli properties, we show in Section  \[admissible\] the existence of admissible covers of surfaces $X^a\rightarrow Y^a$ associated to the original family $f\!:\!X\!\rightarrow \!B$. Next we modify these covers to [*quasi-admissible*]{} covers $\widetilde{\phi}:\widetilde{X} \rightarrow \widetilde{Y}$ (cf. Prop. \[propquasi\]), and further to [*effective*]{} covers $\widehat{\phi}:\widehat{X}\rightarrow \widehat{Y}$ in order to resolve the technical difficulties arising from the non-flatness of $\widetilde{\phi}$ (cf. Sect. \[effectivecovers\]). We devote Section 4 to the study of the boundary components of the trigonal locus $\overline{\mathfrak{T}}_g$ inside the moduli space $\overline{\mathfrak{M}}_g$, and express the restriction $\Delta|_{\overline{\mathfrak{T}}_g}$ as a linear combination of the boundary divisors (cf. Prop. \[divisorrel\]). In Section 6 we complete the Basic Construction by embedding the effective cover $\widehat{X}$ in a rank 1 projective bundle ${{\mathbf P}}V$ over $\widehat{Y}$. 
For the convenience of the reader, the proofs of the maximal $36(g+1)/(5g+1)$ and the semistable $7+6/g$ bounds are presented first in the special, but fundamental case when the original family $f\!:\!X\!\rightarrow \!B$ is already an effective triple cover of a ruled surface $Y$ (cf. Sect. 7). The discussion results in finding the coefficients of $\delta_0$ in two different expressions of $\lambda|_{\overline{\mathfrak{T}}_g}$, but, as it turns out, the knowledge of these coefficients is enough to determine the desired two bounds. We refer to this as the [*global*]{} calculation. The Hodge Index Theorem and Nakai-Moishezon criterion on $X$ complete the global calculation in Sect. \[indextheorem\]. A discussion of maximal bound examples can be found in Section \[whenmaximal\]. The [*local*]{} calculations in Sections 8-10 compute the contributions of the other boundary classes $\delta_{k,i}$, and express $\lambda|_{\overline{\mathfrak{T}}_g}$ in terms of these contributions and the Chern classes of the rank 2 vector bundle $V$ on $\widehat{Y}$. For clearer exposition, the proofs of the two bounds are shown first for a [*general*]{} base curve $B$ (i.e. $B$ intersects the boundary components transversally at general points), and then in Section 11 the results are extended to [*any*]{} base curve $B$. We develop the necessary notation and techniques for the local calculations in Section  \[conventions\]. Section 12 discusses the relation between the Bogomolov semistability condition and the Maroni locus, and describes the structure of ${\operatorname}{Pic}_{\mathbb{Q}}\overline{\mathfrak{T}}_g$. In Section \[Maroni-maximal\] we give another interpretation of the conditions for the maximal bound. We present further results and conjectures for $d$-gonal families in Section 13.
In the Appendix, we give another proof of the $8+4/g$ bound in the hyperelliptic case and show an application of the maximal trigonal bound to the study of the discriminant locus of certain triple covers. 3. Quasi-Admissible Covers of Surfaces {#quasi-admissible-covers-of-surfaces .unnumbered} ====================================== \[quasi-admissible\] We first review briefly the theory of admissible covers. For more details, we refer the reader to [@MHE; @HM]. The Hurwitz scheme $\overline{{\mathcal}H}_{d,g}$ {#hurwitz} ------------------------------------------------- Let ${{\mathcal}H}_{d,g}$ be the [*small Hurwitz scheme*]{} parametrizing the pairs $(C,\phi)$, where $C$ is a smooth curve of genus $g$ and $\phi:C\rightarrow {{\mathbf P}}^1$ is a cover of degree $d$, simply branched over $b=2d+2g-2$ distinct points. Since $C\in {\mathfrak M}_g$, there is a natural map ${{\mathcal}H}_{d,g}\rightarrow {\mathfrak M}_g$, whose image contains an open dense subset of ${\mathfrak M}_g$. The theory of admissible covers provides the commutative diagram in Fig. \[Hurwitz figure\]. (5,3.5)(-0.8,2.2) (0,4)[${{\mathcal}H}_{d,g}\hookrightarrow \overline{{\mathcal}H}_{d,g}$]{} (0.4,3.85)[(1,-1)[0.9]{}]{} (1.9,3.85)[(1,-1)[0.9]{}]{} (0.4,4.2)[(1,1)[0.9]{}]{} (1.9,4.2)[(1,1)[0.9]{}]{} (2.4,4.5)[${pr}_1$]{} (2.4,3.5)[${pr}_2$]{} (1.1,2.5)[${\mathfrak P}_{0,b}\hookrightarrow \overline{\mathfrak P}_{0,b}$]{} (1.1,5.2)[${\mathfrak M}_g\hookrightarrow \overline{\mathfrak M}_g$]{} There ${\mathfrak P}_{0,b}$ (resp. $\overline{\mathfrak P}_{0,b}$) is the moduli space of $b$-pointed ${{\mathbf P}}^1$’s (resp. of stable $b$-pointed rational curves), and $\overline{{\mathcal}H}_{d,g}$ is a compactification of the Hurwitz scheme.
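The count $b=2d+2g-2$ of branch points is dictated by the Riemann-Hurwitz formula: for a degree-$d$ cover $\phi:C\rightarrow {{\mathbf P}}^1$ with only simple branching, each branch point is the image of a single ramification point with $e_p=2$, so $$2g-2=d\,(2\cdot 0-2)+\sum_{p\in C}(e_p-1)=-2d+b.$$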
The points of $\overline{{\mathcal}H}_{d,g}$ correspond to triples $(C,(P;p_1,...,p_b),\phi)$, where $C$ is a connected reduced nodal curve of genus $g$, $(P;p_1,...,p_b)$ is a stable $b$-pointed rational curve, and $\phi:C\rightarrow P$ is a so-called [*admissible cover*]{}. [**Definition 3.1.**]{} Given the curves $C$ and $P$ as above, an [*admissible cover*]{} $\phi:C\rightarrow P$ is a regular map satisfying the following conditions: (A1) $\phi^{-1}(P_{{\operatorname}{sm}})=C_{{\operatorname}{sm}}$ and $\phi:C_{{\operatorname}{sm}} \rightarrow P_{{\operatorname}{sm}}$ is simply branched over the distinct points $p_1,...,p_b\in P_{{\operatorname}{sm}}$; (A2) for every $q\in C_{{\operatorname}{sing}}$ lying over a node $p\in P$, the two branches through $q$ map with the same ramification index to the two branches through $p$. $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=nonstable.ps,width=1in,height=1in}} \hspace{-1mm}\end{array}}$$ Note that $C$ is not necessarily a stable curve, but contracting its destabilizing rational chains yields the corresponding stable curve $pr_1(C)\in \overline{\mathfrak M}_g$. In such a case, we say that $C$ is the “admissible model” for $pr_1(C)$ (cf. Fig. \[nonstable\]). Harris and Mumford have shown that the compactification $\overline{{\mathcal}H}_{d,g}$ is in fact a [*coarse moduli space*]{} for the admissible covers $\phi:C\rightarrow P$. $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=admfamilies.ps,width=3.4in,height=1.4in}} \hspace{-1mm}\end{array}}$$ Local properties of admissible covers {#localproperties} ------------------------------------- When we vary the admissible covers of curves in families, the local structure of the corresponding total spaces becomes apparent. Let $\phi:{\mathcal}C\rightarrow {\mathcal}P$ be a proper flat family (over a scheme ${\mathcal}B$) of admissible covers of curves (cf. Fig. \[admfamilies\]).
Assume that $\phi$ is étale everywhere except over the nodes of the fibers of ${\mathcal}P/_ {\textstyle{{\mathcal}B}}$, and except over some sections $\sigma_i:{\mathcal}B\rightarrow {\mathcal}C$ and their images $\omega_i:{\mathcal}B\rightarrow {\mathcal}P$: there $\phi$ is simply branched along $\sigma_i$ over $\omega_i$ for all $i$. If $q\in {{\mathcal}C}_b$ is a point lying above a node $p\in {{\mathcal}P}_b$ for some $b\in {\mathcal}B$, then ${\mathcal}C_b$ has a node at $q$, and locally analytically we can describe ${\mathcal}C,{\mathcal}P$ and $\phi$ near $q$ and $p$ by: $$\left\{\begin{array}{lll} {\mathcal}C: & xy=a, &x,y\,\,{\operatorname}{generate}\,\,\widehat{\mathfrak m}_{q,{\mathcal}C_b},\,\, a\in \widehat{{\mathcal}O}_{b,{\mathcal}B},\\ {\mathcal}P: & uv=a^n, &u,v\,\,{\operatorname}{generate}\,\,\widehat{\mathfrak m}_{p,{\mathcal}P_b},\\ \phi: & u=x^n,v=y^n. \end{array}\right.$$ One can see that the description is consistent, $uv=x^ny^n=(xy)^n=a^n$, that $n$ is the index of ramification of $\phi$ at $q$, and that fiberwise ${\mathcal}C_b\rightarrow {\mathcal}P_b$ is an admissible cover (of curves). From now on, by [*admissible covers*]{} we mean, more generally, families ${\mathcal}C\rightarrow {\mathcal}P$ over ${\mathcal}B$ with the above description. The local properties of the admissible cover $\phi:{\mathcal}C\rightarrow {\mathcal}P$ over the nodes in ${\mathcal}P_b$ force singularities on the total spaces of ${\mathcal}C$ and ${\mathcal}P$. Since we will be interested only in the cases when the base ${\mathcal}B$ is a smooth projective curve $B$ and the general fiber of ${\mathcal}C$ is smooth, we can always pick a generator $t$ for $\widehat{{\mathcal}O}_{b,B}$, and express $a=t^l$ for some $l\in {\mathbb N}$. $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=singular.ps,width=1.2in,height=1.2in}} \hspace{-1mm}\end{array}}$$ [**Example 3.1.**]{} Let the triple admissible cover $\phi\!:\!{\mathcal}C\!\rightarrow\!{\mathcal}P$ contain the fiber ${\mathcal}C_b$ as in Fig. \[singular\].
At $q$, ${{\mathcal}C}$ is given by $\,\,xy=t^{l}$, and at $p$, ${\mathcal}P$ is given by $uv=t^{2l}$, where $u=x^2,\,\,v=y^2$. This forces at $r$ the local equation $xy=t^{2l}$ ($u=x,\,\,v=y$). Even if ${\mathcal}C$ is smooth at $q$ ($l=1$), ${\mathcal}C$ and ${\mathcal}P$ will be singular at $r$ and $p$, respectively ($xy=t^2,\,\,uv=t^2$). Compare this with the non-flat cover of ramification index 1 in Fig. \[mult4\]. Recall that a rational double point $s$ on a surface $S$ is of type $A_{l-1}$ if locally analytically $S$ is given at $s$ by the equation $xy=t^l$. Thus, $r$ and $p$ above are rational double points on ${\mathcal}C$ and ${\mathcal}P$, respectively, of type $A_{2l-1}$. [**Remark 3.1.**]{} In the sequel, we use the fact that the projection $pr_2\!:\!\overline{{\mathcal}H}_{d,g}\!\rightarrow \!\overline{\mathfrak P}_{0,b}$ is a [*finite*]{} map. From the weak valuative criterion for properness, this means that given a family of admissible covers $\phi:{{\mathcal}C}^*\rightarrow {{\mathcal}P}^*$ over the punctured disc ${{\operatorname}{Spec}}\,{\mathbb C}((t))$, there is some $n\in {\mathbb N}$ for which $\phi$ extends to a family $\phi_n:{{\mathcal}C}_n\rightarrow {{\mathcal}P}_n$ of admissible covers over ${{\operatorname}{Spec}}\,{\mathbb C}[[t^{1/n}]]$. In particular, if the base for the admissible cover $\phi:X^*\rightarrow Y^*$ is an open set $B^*$ of a smooth projective curve $B$, modulo a finite base change, we can extend this to a family of admissible covers $X^a\rightarrow Y^a$ over the whole curve $B$. Admissible covers of surfaces {#admissible} ----------------------------- Consider a family $f:X\rightarrow B$ of stable curves of genus $g$, whose general member is smooth and $d$-gonal. Let $\psi:B\rightarrow {\overline{\mathfrak M}_g}$ be the canonical map, and let $\overline B$ denote the fiber product $B\times_{\overline{\mathfrak M}_g} \overline{{\mathcal}H}_{d,g}$.
(5,4)(-0.7,1.9) (-2,4)[$\overline{B}_0\subset \overline B\stackrel{\eta}{\longrightarrow} \overline{{\mathcal}H}_{d,g}$]{} (-0.8,3.85)(1.4,0)[2]{}[(0,-1)[0.9]{}]{} (-1,2.5)[$B\stackrel{\psi}{\longrightarrow} \,\overline{\mathfrak M}_{g}$]{} (-2.5,3.3)[$X$]{} (0.7,3.35)[$\scriptstyle{pr_1}$]{} (-3.5,4.8)[$\overline{X}$]{} (-3.2,4.7)(1.5,-0.8)[2]{}[(2,-3)[0.7]{}]{} (-2.1,3.4)(-1,1.5)[2]{}[(3,-2)[1.1]{}]{} (3,3.7)[$B^*\subset B\stackrel{\eta}{\longrightarrow} \overline{{\mathcal}H}_{d,g}$]{} (4.2,5.05)(1.4,-1.5)[2]{}[(0,-1)[0.9]{}]{} (3.2,5.05)[(0,-1)[0.9]{}]{} (5.4,2.2)[$\overline{\mathfrak M}_{g}$]{} (5.7,3.1)[$\scriptstyle{pr_1}$]{} (3,5.2)[$X^*\subset X$]{} (4.3,3.6)[(1,-1)[1]{}]{} (4.8,3.1)[$\scriptstyle{\psi}$]{} If the general member of $X$ has infinitely many ${g}_d^1$’s, the variety $\overline B$ will have dimension $\geq 2$. We can resolve this by considering an intersection of the appropriate number of hyperplane sections of $\overline B$, and picking a one-dimensional component $\overline B_0$ dominating $B$. The curve $\overline B_0$ might be singular, but by normalizing it and pulling $X$ over it, we get another family of stable curves (cf. Fig. \[map eta\]): $$\overline X=X\times_B(\overline B_0)^{{\operatorname}{norm}} \rightarrow (\overline B_0)^{{\operatorname}{norm}}.$$ Since the two families have the same basic invariants, we can replace the original with the new one, and assume the existence of a map $\eta:B\rightarrow \overline{{\mathcal}H}_{d,g}$ compatible with $\psi: B\rightarrow \overline{\mathfrak M}_g$. In other words, $\eta$ associates to every fiber $C$ of $X$ a specific $g^1_d$ on $C$ or, possibly, a $g^1_d$ on an admissible model $C^a$ of $C$. Let $B^*$ be the open subset of $B$ over which [*all*]{} fibers are smooth and $d$-gonal. For simplicity, assume for now that all the fibers over $B^*$ can be represented as admissible covers of ${{\mathbf P}}^1$ via the chosen $g^1_d$’s, i.e. 
they are [*simply branched*]{} covers of ${{\mathbf P}}^1$ over $b$ distinct points of ${{\mathbf P}}^1$. Denote by $X^*$ the restriction of $X$ over $B^*$ (cf. Fig. \[map eta\]). The map $\eta:B^*\rightarrow {{\mathcal}H}_{d,g}$ induces a section $$\sigma:B^*\rightarrow {{\operatorname}{Pic}}^d(X^*/B^*),$$ where ${{\operatorname}{Pic}}^d(X^*/B^*)$ is the [*relative degree $d$ Picard variety*]{} of $X^*$ over $B^*$. ${{\operatorname}{Pic}}^d(X^*/B^*)$ parametrizes the line bundles on $X^*$ of relative degree $d$. The image $\sigma(B^*)\subset {{\operatorname}{Pic}}^d(X^*/B^*)$ is a class of line bundles on $X^*$ whose fiberwise restrictions are the chosen $g^1_d$’s. Let ${\mathcal}L$ be a representative of this class, and let $Y^*$ be the ruled surface ${{\mathbf P}}((f_*{{\mathcal}L})^{\widehat {\phantom{n}}})$ over $B^*$. The map $\phi:X^*\rightarrow Y^*$ induced by ${\mathcal}L$ defines an admissible cover over $B^*$, as shown in Fig. \[construction Y\*\]. (5,2.2)(-0.3,2.6) (0,4)[$X^*\stackrel{\phi}{\longrightarrow} Y^*={{\mathbf P}}((f_*{{\mathcal}L})^ {\widehat{\phantom{n}}})$]{} (0.2,3.85)[(1,-1)[0.5]{}]{} (1.4,3.85)[(-1,-1)[0.5]{}]{} (0.6,2.9)[$B^*$]{} (0.05,3.4)[$f$]{} (1.3,3.4)[$h$]{} From Remark 3.1, $\phi$ extends to a family of admissible covers ${\phi}^a:{X}^a\rightarrow {Y}^a$ over the whole base $B$. Since ${X}^a$ and $X$ are isomorphic over $B^*$, they are birational to each other. In other words, the fibers $C$ of $X$, over which ${\mathcal}L$ does not extend to the base-point free linear series $g^1_d=\sigma(b)$, are modified by blow-ups and blow-downs so as to arrive at their admissible models in ${X}^a$. We have thus proved the following Let $f:X\rightarrow B$ be a family of stable curves, whose general member over an open subset $B^*\subset B$ is a smooth $d$-uple admissible cover of ${{\mathbf P}}^1$.
Then, modulo a finite base change, there exists an admissible cover of surfaces ${X}^a\rightarrow {Y}^a$ over $B$ such that ${X}^a$ is obtained from $X$ by a finite number of birational transformations performed on the fibers over $B-B^*$. \[quasicov\] Quasi-admissible covers {#quasi-covers} ----------------------- In case the general member of $X$ is [*not*]{} an admissible cover of ${{\mathbf P}}^1$, e.g. it is trigonal with a total point of ramification, we have to modify the above construction. To start with, we cannot expect to obtain an [*admissible*]{} cover $X^*\rightarrow Y^*$, even modulo a finite base change. This leads us to consider a different kind of cover, which we call [*quasi-admissible*]{}. [**Definition 3.2.**]{} A [*quasi-admissible cover*]{} $\widetilde{\phi}: C\rightarrow P$ of a nodal curve $C$ over a semistable pointed rational curve $P$ is a regular map which behaves like an admissible cover over the singular locus of $P$, i.e. for any $q\in C$ lying over a node $p\in P$ the two branches through $q$ map with the same ramification index to the two branches through $p$. $$\vspace*{5mm}{\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=quasi.ps,width=4.5in,height=0.5in}} \hspace{-1mm}\end{array}}$$ Quasi-admissible covers differ from admissible covers in allowing more diverse behavior of $C$ over $P_{{\operatorname}{sm}}$, e.g. having singularities, higher ramification points and multiple simple ramification points. Fig. \[quasicovers\] displays several degree 3 quasi-admissible covers over ${{\mathbf P}}^1$. However, any quasi-admissible cover can be obtained from an admissible cover $\phi^a\!:\!C^a\!\rightarrow\! P^a$ by simultaneous contractions of components in $P^a$ and their (rational) inverse images on $C^a$. [**Definition 3.3.**]{} A quasi-admissible cover $\widetilde{\phi}:C\rightarrow P$ is [*minimal*]{} if it is minimal with respect to the number of components of $P$.
In other words, one cannot apply more simultaneous contractions on $C\rightarrow P$ and end with another quasi-admissible cover. [**Example 3.2.**]{} A smooth trigonal curve $C$ with a total point of ramification $q$ is a minimal quasi-admissible cover of $P={{\mathbf P}}^1$. Blowing up $q$ on $C$ and $p=\widetilde{\phi}(q)\in P$ gives an admissible cover $C^a=C\cup C_1\rightarrow P\cup P_1$, where $C_1\cong {{\mathbf P}}^1$ maps three-to-one onto $P_1\cong {{\mathbf P}}^1$ with a total point of ramification $q=C_1\cap C$ (cf. Fig. \[quasi/adm\]). $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=quasiadm.ps,width=2.4in,height=0.8in}} \hspace{-1mm}\end{array}}$$ The motivation for using [*minimal*]{} quasi-admissible covers, instead of just admissible or quasi-admissible covers, is that the former are the closest covers to the original families $X\rightarrow B$ of stable curves, and calculations on them will yield the best possible estimate for the ratio $\delta_X/\lambda_X$ (cf. Fig. \[stages\]). ### Quasi-admissible covers for families with higher ramification sections Now let us consider the remaining case of a family $X\rightarrow B$, whose general member over $B^*$ is smooth and $d$-gonal, but [*not*]{} an admissible cover of ${{\mathbf P}}^1$. After a possible base change, we still have the map (cf. Fig. \[map eta\]) $$\eta:B\longrightarrow {\overline {{\mathcal}H}}_{d,g}.$$ It associates to every fiber $C$ a $g^1_d$ on its admissible model $C^a$. Let $C^a\rightarrow P^a$ be the corresponding admissible cover. Since $C$ itself is $d$-gonal, and by assumption it does not possess a $g^1_e$ with $e<d$, $C$ must be a $d$-uple cover of some component of $P^a$. In particular, the $g^1_d$ on $C^a$ restricts to a $g^1_d$ on $C$. Thus, in effect, $\eta$ gives again a section $\sigma:B^*\rightarrow {{\operatorname}{Pic}}^d(X^*/B^*)$.
As before, we obtain a degree $d$ finite map $\phi:X^*\rightarrow Y^*$ to the ruled surface ${\mathbf P}((f_*{{\mathcal}L})^{\widehat{\phantom{n}}})$ over $B^*$. Note that this is a family of [*minimal quasi-admissible*]{} covers. We extend $\phi$ over the curve $B$ as follows. For simplicity, assume that $d=3$. Let $R$ be the ramification divisor of $\phi$ in $X^*$. By hypothesis, there is a component $R_0$ of $R$ which passes through total ramification points and dominates $B^*$. Letting $\overline{R}_0$ be the closure of $R_0$ in $X$, we can normalize it and pull the family $X$ over it. So we may assume that $\overline{R}_0$ is a section of $X\rightarrow B$. If there are some other components $R_1,R_2,...,R_l$ of the ramification divisor $R$ passing through higher ramification points, we repeat the same procedure for them, until we “straighten out” all $\overline{R}_i$’s into sections of $X\rightarrow B$. Let $E_i=\phi(R_i)$ be the corresponding sections of $Y^*$ over $B^*$. We can shrink $B^*$ in order to exclude any fibers with isolated higher ramification points. Consider a fiber $C$ in $X^*$. Let $\{r_i=C\cap R_i\}$ be its total ramification points, and let $\{p_i=\phi(r_i)\}$ be their images on $P=\phi(C)$ in $Y^*$. It is clear that blowing up all $r_i$’s and $p_i$’s will give an admissible triple cover $C^a= {\operatorname}{Bl}_{\{r_i\}}(C)\rightarrow P^a={\operatorname}{Bl}_{\{p_i\}}(P)$. The $g^1_d$ giving this cover is the original one assigned by $\eta:B^*\rightarrow {\overline{{\mathcal}H}}_{d,g}$. We globalize this construction by blowing up the sections ${R}_i$ on $X^*$ and $E_i$ on $Y^*$. As above, we obtain a triple admissible cover of surfaces $\phi^*:{\operatorname}{Bl}_{\cup R_i}(X^*)\rightarrow {\operatorname}{Bl}_{\cup E_i} (Y^*)$ over $B^*$.
The properness of $pr_1: \overline{{\mathcal}H}_{d,g}\rightarrow \overline{\mathfrak M}_g$ allows us to extend this to an admissible cover ${\phi}^a: \overline{{\operatorname}{Bl}_{\cup R_i}(X^*)} \rightarrow\overline{{\operatorname}{Bl}_{\cup E_i}(Y^*)}$ over $B$ (cf.  Fig. \[blowing up\]). (3,5)(4.3,-0.5) (2,3.7)[$\overline{{{\mathcal}R}_i}\subset \overline{{\operatorname}{Bl}_{\cup R_i}(X^*)}\stackrel{{\phi}^a} {\longrightarrow} \overline{{\operatorname}{Bl}_{\cup E_i}(Y^*)}\supset \overline{{{\mathcal}E}_i}$]{} (2,2.2)[${{{\mathcal}R}_i}\subset {{\operatorname}{Bl}_{\cup R_i}(X^*)} \stackrel{\phi^*}{\longrightarrow} {{\operatorname}{Bl}_{\cup E_i}(Y^*)}\supset{{{\mathcal}E}_i}$]{} (2,0.7)[$R_i\hspace{0.5mm}\subset \hspace{6.6mm} X^*\hspace{6mm}\stackrel{\phi}{\longrightarrow}\hspace{7.3mm}Y^* \hspace*{5.6mm}\supset E_i$]{} (2.2,3.5)(6.1,0)[2]{}[(0,-1)[0.9]{}]{} (3.9,3.5)(2.8,0)[2]{}[(0,-1)[0.9]{}]{} (2.2,2)(6.1,0)[2]{}[(0,-1)[0.9]{}]{} (3.9,2)(2.8,0)[2]{}[(0,-1)[0.9]{}]{} (3,0)(6.7,-2.3) (7.8,2.2)[${X}^q\stackrel{\phi^q}{\longrightarrow} {Y}^q$]{} (8,2)(1.4,0)[2]{}[(0,-1)[0.9]{}]{} (8.1,1.5)(1.4,0)[2]{}[$\wr$]{} (7.8,0.7)[$X^*\longrightarrow Y^*$]{} (8,0.5)[(1,-1)[0.5]{}]{} (9.4,0.5)[(-1,-1)[0.5]{}]{} (8.5,-0.4)[$B^*$]{} \[over B\*\] Denote by ${{\mathcal}R}_i$ the component of ${\operatorname}{Bl}_{\cup R_i}(X^*)$, obtained by blowing up ${R}_i\subset X^*$, and let $\overline{{{\mathcal}R}_i}$ be its closure in $\overline{{\operatorname}{Bl}_{\cup R_i}(X^*)}$. Define similarly ${{\mathcal}E}_i\subset {\operatorname}{Bl}_{\cup E_i}(Y^*)$ and $\overline{{\mathcal}E}_i \subset \overline{{\operatorname}{Bl}_{\cup E_i}(Y^*)}$. 
The admissible cover $\phi^a$ maps $\overline{{\mathcal}R}_i$ to $\overline{{\mathcal}E}_i$, so that after removing all the $\overline{{\mathcal}R}_i$’s and $\overline{{\mathcal}E}_i$’s we still have a triple cover $${\phi}^q:{X}^q=\overline{{\operatorname}{Bl}_{\cup R _i} (X^*)}-\cup\overline{{\mathcal}R_i}\longrightarrow {Y}^q=\overline{{\operatorname}{Bl}_{\cup E_i} (Y^*)}-\cup\overline{{\mathcal}E_i}.$$ Note that ${X}^q\cong X^*$ and ${Y}^q\cong Y^*$ over the open set $B^*$, and that ${Y}^q$ is a birationally ruled surface over $B$ (cf. Fig. \[over B\*\]). Finally, note that from the quasi-admissible cover ${\phi}^q:{X}^q\rightarrow {Y}^q$ we obtain a family $\widetilde{\phi}:\widetilde{X}\rightarrow \widetilde{Y}$ of [*minimal*]{} quasi-admissible covers: simply contract the unnecessary rational components in the fibers of ${X}^q$ and ${Y}^q$, and observe that the triple map ${\phi}^q$ restricts to the corresponding triple map $\widetilde{\phi}$. This completes the construction of minimal quasi-admissible covers for any family $X\rightarrow B$ with general smooth trigonal member. The cases $d>3$ are only notationally more difficult. One has to keep track of the possibly different higher multiplicities in $C$ and multiple double points in $C$ over the same $p\in P$. The construction of an admissible cover ${X}^a\rightarrow {Y}^a$ goes through with minimal modifications. We combine the results of this section in the following Let $f:X\rightarrow B$ be a family of stable curves, whose general member over an open subset $B^*\subset B$ is smooth and $d$-gonal. Then, modulo a finite base change, there exists a minimal quasi-admissible cover of surfaces $\widetilde{X}\rightarrow \widetilde{Y}$ over $B$ such that $\widetilde{X}$ is obtained from $X$ by a finite number of birational transformations performed on the fibers over $B-B^*$. \[propquasi\] 4.
The Boundary $\Delta{\mathfrak{T}}_g$ of the Trigonal Locus $\overline{\mathfrak{T}}_g$ {#the-boundary-deltamathfrakt_g-of-the-trigonal-locus-overlinemathfrakt_g .unnumbered} ========================================================================================== \[boundarycomponents\] Description and notation for the boundary of $\overline{\mathfrak{T}}_g$ {#description} ------------------------------------------------------------------------ In this section we shall see that there are [*seven types*]{} of boundary divisors of $\overline{\mathfrak{T}}_g$, each denoted by $\Delta{\mathfrak{T}}_{k,i}$ for $k=0,1,...,6$. The second index $i$ is determined in the following way. Let $C=C_1\cup C_2$ be the general member of $\Delta{\mathfrak{T}}_{k,i}$, where $C_1$ and $C_2$ are smooth curves. If $C_1$ and $C_2$ are both trigonal or both hyperelliptic, then we set $i$ to be the smaller of the two genera $p(C_1)$ and $p(C_2)$. If, say, $C_1$ is trigonal but $C_2$ is hyperelliptic, then we set $i$ to be the genus of the trigonal component $C_1$. The only exception to this rule occurs when $C$ is irreducible (and hence of genus $g$ with exactly one node). We denote this boundary component by $\Delta{\mathfrak{T}}_0$. When we view a general member $C$ as a triple admissible cover of a chain of ${{\mathbf P}}^1$’s in the Hurwitz scheme (consider a point of the preimage $pr_1^{-1}[C]\subset\overline{{\mathcal}{H}}_{3,g}$), the cover may or may not be ramified over the node of the base. If there is no such ramification, then $C$ lies in one of the first four types of trigonal boundary divisors $\Delta{\mathfrak{T}}_{k,i}$, $k=0,1,2,3$. Ramification index 1 over the node characterizes the general members of $\Delta{\mathfrak{T}}_{4,i}$ and $\Delta{\mathfrak{T}}_{5,i}$, and in the case of $\Delta{\mathfrak{T}}_{6,i}$ the ramification index is 2 (cf. Fig. \[Delta-k,i\]). There is an alternative description of the boundary components $\Delta{\mathfrak{T}}_{k,i}$’s of $\overline{\mathfrak{T}}_g$.
If one such $\Delta{\mathfrak{T}}_{k,i}$ lies in the restriction $\Delta_0\big|_{\displaystyle{\overline{\mathfrak{T}}_g}}$ of the divisor $\Delta_0$ in $\overline{\mathfrak{M}}_g$, then $\Delta{\mathfrak{T}}_{k,i}$ is one of $\Delta{\mathfrak{T}}_0,\,\, \Delta{\mathfrak{T}}_{1,i},\,\,\Delta{\mathfrak{T}}_{2,i},$ or $\Delta{\mathfrak{T}}_{4,i}$. The partial normalization of their general members $C$ is still connected, i.e. $C$ is either irreducible, or the join of two smooth curves meeting in at least two points. Correspondingly, for the general member $C$ of the remaining three types of boundary components, $\Delta{\mathfrak{T}}_{3,i},\,\,\Delta{\mathfrak{T}}_{5,i}$ and $\Delta{\mathfrak{T}}_{6,i}$, the irreducible components of $C$ intersect transversely in exactly one point, so that the normalization of $C$ is disconnected. $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=boundary.ps,width=4.5in,height=1in}} \hspace{-1mm}\end{array}}$$ (6,1)(2.7,-1.4) (-0.7,2.45)[$\Delta{\mathfrak{T}}_{0} \hspace{19mm}\Delta{\mathfrak{T}}_{1,i} \hspace{25mm}\Delta{\mathfrak{T}}_{2,i} \hspace{23mm}\Delta{\mathfrak{T}}_{3,i}$]{} (2.8,1.2)[$\scriptstyle{i=1,2,...,}\left[\frac{g-2}{2}\right]\hspace{17mm} \scriptstyle{i=1,2,..., g-2}\hspace{17mm}\scriptstyle{i=1,2, ...,} \left[\frac{g}{2}\right]$]{} (-0.2,0.5)[$\Delta{\mathfrak{T}}_{4,i}\hspace{31mm}\Delta{\mathfrak{T}}_{5,i} \hspace{31mm}\Delta{\mathfrak{T}}_{6,i}$]{} (01.1,-0.6)[$\scriptstyle{i=1,2,...,\left[\frac{g-1}{2}\right]\hspace{24mm} i=1,2,...,g-1 \hspace{22mm}i=1,2,...,\left[\frac{g}{2}\right]}$]{} The boundary divisors of $\overline{\mathfrak{T}}_g$ can be grouped in seven types: $\Delta{\mathfrak{T}}_{0}$ and $\Delta{\mathfrak{T}}_{k,i}$ for $k=1,...,6$. Their general members and range of index $i$ are shown in Fig. \[Delta-k,i\]. 
The boundary of $\overline{\mathfrak{T}}_g$ consists of $\Delta{\mathfrak{T}}_{0}$, $\Delta{\mathfrak{T}}_{k,i}$, and the codimension 2 component $\overline{\mathfrak{I}}_g$ of hyperelliptic curves. \[boundary\] Consider the projection map $pr_1:\overline{{\mathcal}{H}}_{3,g}\rightarrow \overline{\mathfrak{M}}_g$, whose image is the trigonal locus $\overline{\mathfrak{T}}_g$. Thus, the inverse image of each boundary divisor $\Delta{\mathfrak{T}}_{k,i}$ will be a boundary divisor $\Delta{{\mathcal}{H}}_{k,i}$ in $\overline{{\mathcal}{H}}_{3,g}$. The converse, however, is not always true, i.e. certain boundary divisors of $\overline{{\mathcal}{H}}_{3,g}$ contract under $pr_1$ to smaller subschemes of $\overline{\mathfrak{T}}_g$, e.g. the hyperelliptic locus $\overline{\mathfrak{I}}_g$. With the description of the Hurwitz scheme $\overline{{\mathcal}{H}}_{3,g}$, given in Section 3, it is easier to determine first $\overline{{\mathcal}{H}}_{3,g}$’s boundary divisors. Thus, we postpone the proof of Proposition \[boundary\] until the end of the next subsection. ### The Boundary of $\overline{{\mathcal}{H}}_{3,g}$. {#admissibleaboundary} The boundary divisors of $\overline{{\mathcal}{H}}_{3,g}$ can be grouped in six types: $\Delta{{\mathcal}{H}}_{k,i}$ for $k=1,...,6$. Their general members and range of index $i$ are shown in Fig. \[admissible-k,i\]. 
$${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=admissible.ps,width=4.5in,height=1in}} \hspace{-1mm}\end{array}}$$ (5,1)(3,-2) (-0.9,1.85)[$\Delta{{\mathcal}{H}}_{1,i} \hspace{33mm}\Delta{{\mathcal}{H}}_{2,i} \hspace{32mm}\Delta{{\mathcal}{H}}_{3,i}$]{} (0.3,0.7)[$\scriptstyle{i=1,2,...,}\left[\frac{g-2}{2}\right]\hspace{29mm} \scriptstyle{i=1,..., g-1}\hspace{24mm}\scriptstyle{i=0,1, ...,} \left[\frac{g}{2}\right]$]{} (-0.9,0.15)[$\Delta{{\mathcal}{H}}_{4,i}\hspace{33mm}\Delta{{\mathcal}{H}}_{5,i} \hspace{32mm}\Delta{{\mathcal}{H}}_{6,i}$]{} (0.3,-1.2)[$\scriptstyle{i=1,2,...,\left[\frac{g-1}{2}\right]\hspace{29mm} i=1,2,...,g-1 \hspace{23mm}i=1,2,...,\left[\frac{g}{2}\right]}$]{} \[boundary2\] A general member $A$ of the boundary $\Delta{{\mathcal}{H}}$ is a triple admissible cover of a chain of [*two*]{} ${{\mathbf P}}^1$’s. (From the dimension calculations that follow it will become clear that an admissible cover of a chain of three or more ${\mathbf P}^1$’s will generate a subscheme in $\overline{{\mathcal}{H}}_{3,g}$ of codimension $\geq 2$.) Note that [*three*]{} connected components of $A$ over one ${{\mathbf P}}^1$ means that each maps isomorphically to it, so they are all smooth ${{\mathbf P}}^1$’s themselves, and hence they can all be contracted simultaneously, leaving us with a smooth trigonal curve, or with a hyperelliptic curve with an attached ${{{\mathbf P}}}^1$, neither of which cases by dimension count corresponds to a [*general*]{} member of a boundary component $\Delta{{\mathcal}{H}}_{k,i}$. Considering all combinations of one or two connected components of $A$ over each ${{\mathbf P}}^1$, we generate a list of the possible general members of the boundary divisors $\Delta{{\mathcal}{H}}_{k,i}$. To see which of these are indeed of codimension 1 in $\overline{{\mathcal}{H}}_{3,g}$, we do the following calculation.
First we note that, for a fixed set of $2i+4$ branch points in ${{\mathbf P}}^1$, there are finitely many covers of degree $3$ and genus $i$, that is, $${\operatorname}{dim}\overline{\mathfrak{T}}_i=2i+4-3=2i+1.$$ Subtracting $3$ takes into account the $3$-dimensional automorphism group of ${{\mathbf P}}^1$, which identifies projectively equivalent configurations of branch points. In particular, ${\operatorname}{dim}\overline{\mathfrak{T}}_g=2g+1$. A similar argument (with $2i+2$ branch points) shows that for the hyperelliptic locus: $${\operatorname}{dim}\overline{\mathfrak{I}}_i=2i+2-3=2i-1.$$ These computations are valid for $i>0$; for $i=0$ we set ${\operatorname}{dim}\overline{\mathfrak{T}}_0={\operatorname}{dim}\overline{\mathfrak{I}}_0=0$. Thus, to compute the dimensions of the six types of subschemes of $\overline{{\mathcal}{H}}_{3,g}$, one adds the corresponding dimensions of $\overline{\mathfrak{T}}_i$ and $\overline{\mathfrak{I}}_j$, making the necessary adjustments for the choice of intersection points on the components of each curve $A$. For example, when $i>0$ the dimension of the subscheme with general member $A$, shown in Fig. \[admissible-k,i\], is $${\operatorname}{dim}\overline{\mathfrak{T}}_i+{\operatorname}{dim}\overline{\mathfrak{T}}_{g-i-2}+1+1=2g.$$ The final 1’s account for the choice of triples of points in the $g^1_3$’s on each component. We conclude that for $i=1,2,...,[(g-2)/2]$ the join at three points of two trigonal curves, one of genus $i$ and the other of genus $g-i-2$, is the general member of a boundary component of $\overline{{\mathcal}{H}}_{3,g}$. We denote it by $\Delta{{\mathcal}{H}}_{1,i}$. The range of $i$ stops at $[(g-2)/2]$ for symmetry considerations. When $i=0$, the corresponding subscheme has a smaller dimension of $2g-2$ and hence no boundary divisor is generated by such curves. As another example, consider the fifth sketch in Fig. \[admissible-k,i\].
It corresponds to the join at one point of a trigonal curve $C_1$ of genus $i$, a hyperelliptic curve $C_2$ of genus $g-i$, and an attached ${{\mathbf P}}^1$ to $C_2$ to make the whole curve a triple cover. Note that $C_1$ and $C_2$ intersect transversally at a point $q$, but when presented as covers of ${{\mathbf P}}^1$ they both have ramification of index 1 at $q$. On all such curves $C_1$ and $C_2$ the total number of ramification points over ${{\mathbf P}}^1$ is finite, and hence their choice does not affect the dimension of our subscheme. Thus, $${\operatorname}{dim}\overline{\mathfrak{T}}_i+{\operatorname}{dim}\overline{\mathfrak{I}}_{g-i}=(2i+1)+(2(g-i)-1)= 2g.$$ Therefore, this subscheme is in fact a divisor in $\overline{{\mathcal}{H}}_{3,g}$, which we denote by $\Delta {{\mathcal}{H}}_{5,i}$. The cases of $i=0$ or $i=g$ lead to contractions of unstable rational components ($C_1$ or $C_2$), and do not yield the necessary dimension of $2g$. Hence, $i=1,2,...,g-1$. In the case of $\Delta{{\mathcal}{H}}_{6,i}$, the two components $C_1$ and $C_2$ meet transversally in one point $q$, but both have ramification of index $2$ at $q$ as triple covers of ${{\mathbf P}}^1$. Smooth trigonal curves of genus $i$ with such high ramification form a codimension 1 subscheme of the trigonal locus ${\mathfrak{T}}_i$, hence the dimension of $\Delta{{\mathcal}{H}}_{6,i}$ is $${\operatorname}{dim}\overline{\mathfrak{T}}_i-1+{\operatorname}{dim}\overline{\mathfrak{T}}_{g-i}-1= (2i+1)-1+(2(g-i)+1)-1=2g.$$ Thus, $\Delta{{\mathcal}{H}}_{6,i}$ is a boundary divisor in $\overline{{\mathcal}{H}}_{3,g}$ for $i=1,2,...,[g/2]$. The case of $i=0$ yields dimension $2g-1$, and hence we disregard it. The remaining cases are treated similarly. We conclude that $\overline{{\mathcal}{H}}_{3,g}$ has six types of boundary divisors, $\Delta{{\mathcal}{H}}_{k,i}$, whose general members and range of indices are indicated in Fig. \[admissible-k,i\]. ### Boundary of $\overline{\mathfrak{T}}_g$.
Proof of Proposition \[boundary\] {#trigonalboundary} Having described the boundary of $\overline{{\mathcal}{H}}_{3,g}$, it remains to check which of the divisors $\Delta{\mathcal}{H}_{k,i}$ preserve their dimension under the map $pr_1$ and hence map into divisors of $\overline {\mathfrak{T}}_g$. The only “surprises” can be expected where $pr_1$ contracts unstable ${{\mathbf P}}^1$’s, such as in $\Delta{{\mathcal}{H}}_{2,i}$, $\Delta{{\mathcal}{H}}_{3,i}$, and $\Delta{{\mathcal}{H}}_{5,i}$. In fact, only $\Delta{{\mathcal}{H}}_{2,g-1}$ and $\Delta{{\mathcal}{H}}_{3,0}$ diverge from the common pattern; in all other cases, we set $\Delta{\mathfrak{T}}_{k,i}:= pr_1\left(\Delta{{\mathcal}{H}}_{k,i}\right)$ to be the corresponding boundary divisor in $\overline{\mathfrak{T}}_g$. The map $pr_1$ contracts the three rational components of the general member of $\Delta{{\mathcal}{H}}_{3,0}$, leaving only a smooth hyperelliptic curve of genus $g$. Thus, the image $pr_1\left(\Delta{{\mathcal}{H}}_{3,0}\right)$ is the hyperelliptic locus $\overline{\mathfrak{I}}_g$, which is of dimension $2g-1$. Hence $\Delta{{\mathcal}{H}}_{3,0}$ does not yield a divisor in $\overline{\mathfrak{T}}_g$, but a boundary component of codimension 2. Finally we consider $\Delta{{\mathcal}{H}}_{2,g-1}$. After we contract its two rational components, we arrive at an [*irreducible nodal*]{} trigonal curve with exactly one node. The dimension of the subscheme of such curves is $${\operatorname}{dim}\overline{\mathfrak{T}}_{g-1}+1=2(g-1)+1+1=2g,$$ where the final $1$ indicates the choice of a triple of points on a smooth trigonal curve (belonging to the $g^1_3$), two of which will be identified as a node.
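The dimension counts of this and the preceding subsection are elementary arithmetic; the following sketch checks them mechanically (the helper names are ours, introduced purely for illustration, and the degenerate $i=0$ cases are left out, matching the exclusions in the text):

```python
# Mechanical check of the dimension counts for the boundary divisors
# of the Hurwitz scheme H_{3,g} (illustrative only; helper names are ours).

def dim_trig(i):
    """dim of the trigonal locus T_i: 2i+4 branch points modulo PGL(2)."""
    return 2 * i + 1 if i > 0 else 0

def dim_hyp(i):
    """dim of the hyperelliptic locus I_i: 2i+2 branch points modulo PGL(2)."""
    return 2 * i - 1 if i > 0 else 0

g = 11  # any genus works; the identities below are linear in g

for i in range(1, (g - 2) // 2 + 1):
    # Delta H_{1,i}: two trigonal curves joined at three points,
    # plus one choice of g^1_3-triple on each side
    assert dim_trig(i) + dim_trig(g - i - 2) + 1 + 1 == 2 * g

for i in range(1, g):
    # Delta H_{5,i}: trigonal of genus i joined to hyperelliptic of genus g-i
    assert dim_trig(i) + dim_hyp(g - i) == 2 * g

for i in range(1, g // 2 + 1):
    # Delta H_{6,i}: both components totally ramified at the common point
    # (each such condition cuts codimension 1 in the trigonal locus)
    assert (dim_trig(i) - 1) + (dim_trig(g - i) - 1) == 2 * g

# Delta H_{2,g-1} maps to the irreducible nodal divisor Delta T_0:
assert dim_trig(g - 1) + 1 == 2 * g
```

Each boundary type indeed has dimension $2g$, i.e. codimension 1 in the $(2g+1)$-dimensional $\overline{{\mathcal}{H}}_{3,g}$.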
Correspondingly, we obtain another divisor in $\overline{\mathfrak{T}}_g$, which we denote by $\Delta{\mathfrak{T}}_0.$ Multiplicities of the boundary divisors $\Delta{\mathfrak{T}}_{k,i}$ in the restriction $\delta|_{\overline{\mathfrak{T}}_g}$ {#multiplicities} ----------------------------------------------------------------------------------------------------------------------------- By abuse of notation, we will denote by $\delta_0$ and $\delta_{k,i}$ the classes in ${\operatorname}{Pic}_{\mathbb{Q}}\overline{\mathfrak{T}}_g$ of $\Delta{\mathfrak{T}}_0$ and $\Delta{\mathfrak{T}}_{k,i}$, respectively. The divisor class $\delta\in{\operatorname}{Pic}_{\mathbb{Q}} \overline{\mathfrak{M}}_g$ restricts to $\overline{\mathfrak{T}}_g$ as the following linear combination of the boundary classes in $\overline{\mathfrak{T}}_g$: $$\delta|_{\displaystyle{\overline{\mathfrak{T}}_g}}=\delta_0+ \sum_{i=1}^{\scriptscriptstyle{[(g-2)/2]}}3\delta_{1,i} +\sum_{i=1}^{\scriptscriptstyle{g-2}}2\delta_{2,i} +\sum_{i=1}^{\scriptscriptstyle{[g/2]}}\delta_{3,i} +\sum_{i=1}^{\scriptscriptstyle{[(g-1)/2]}}3\delta_{4,i}+ \sum_{i=1}^{\scriptscriptstyle{g-1}}\delta_{5,i} +\sum_{i=1}^{\scriptscriptstyle{[g/2]}}\delta_{6,i}. \label{divisorrel}$$ [*Proof.*]{} Let us rewrite equation (\[divisorrel\]) in the form $$\delta|_{\displaystyle{\overline{\mathfrak{T}}_g}}=({\operatorname}{mult}_{\delta} \delta_0)\delta_0+\sum_{k,i}({\operatorname}{mult}_{\delta}\delta_{k,i})\delta_{k,i},$$ and call ${\operatorname}{mult}_{\delta}\delta_{k,i}$ the [*multiplicity*]{} of $\delta_{k,i}$ in $\delta|_{\overline{\mathfrak{T}}_g}$. This linear relation simply counts the contribution of each singular curve of a specific boundary type in $\Delta\mathfrak{T}_g$ to the degree of $\delta$. 
Recall that for any trigonal family $f:X\rightarrow B$: $${\operatorname}{deg}\delta|_B=\sum_{q\in X}m_q.$$ Here $m_q$ denotes the local analytic multiplicity of the total space of $X$ nearby $q$ measured by the equation $xy=t^{m_q}$, where $x$ and $y$ are local parameters on the singular fiber $X_b$, and $t$ is a local parameter on $B$ near $b=f(q)$. For each boundary class $\Delta{\mathfrak{T}}_{k,i}$ of $\overline{\mathfrak{T}}_g$, we consider its general member $C\!=\!C_1\cup C_2$, and a base curve $B$ in $\overline{\mathfrak{T}}_g$ which intersects transversally $\Delta{\mathfrak{T}}_{k,i}$ in $[C]$. In the corresponding one-parameter trigonal family $f:X\rightarrow B$, we must find the sum of the multiplicities $m_q$ of the singularities of $C$. Thus, $${\operatorname}{mult}_{\delta}\delta_{k,i}=\sum_{\,\,q\in C_{{\operatorname}{sing}}}\!\!m_q.$$ For most of the divisor classes, this sum is quite straightforward. For example, the general member $[C]\in \Delta{\mathfrak{T}}_{3,i}$ is the join of two smooth hyperelliptic curves $C_1$ and $C_2$, which intersect transversally in one point $q$. The family $X$, constructed as above, will be given locally analytically nearby $q$ by $xy=t$, and hence ${\operatorname}{mult}_{\delta}\delta_{3,i}=m_q=1$.
A similar situation occurs in the cases of $\Delta\mathfrak{T}_0, \Delta\mathfrak{T}_{5,i}$ and $\Delta\mathfrak{T}_{6,i}$: there is one point of transversal intersection (or one node) forcing $${\operatorname}{mult}_{\delta}\delta_0= {\operatorname}{mult}_{\delta}\delta_{k,i}=1\,\,{\operatorname}{for}\,\, k=3,5,6.$$ In the cases of $\Delta\mathfrak{T}_{2,i}$ and $\Delta\mathfrak{T}_{1,i}$ there are correspondingly two or three points of transversal intersection, forcing $${\operatorname}{mult}_{\delta}\delta_{2,i}=2\,\,{\operatorname}{and}\,\, {\operatorname}{mult}_{\delta}\delta_{1,i}=3.$$ This can also be interpreted via the fact that $\Delta\mathfrak{T}_{2,i}$ and $\Delta\mathfrak{T}_{1,i}$ lie entirely in the divisor $\Delta_0$ in $\overline{\mathfrak{M}}_g$, with $\Delta_0$ being [*double*]{} along $\Delta\mathfrak{T}_{2,i}$ and [*triple*]{} along $\Delta\mathfrak{T}_{1,i}$. A slightly more complex situation occurs in the case of $\Delta\mathfrak{T}_{4,i}$. The general member $C$ consists of two curves $C_1$ and $C_2$, meeting transversally in two points $q$ and $r$ (see Fig. \[mult4\]). But, as in an admissible triple cover of two ${{\mathbf P}}^1$’s, the points $q$ and $r$ behave differently: at one of them, say $r$, the triple cover is [*not*]{} ramified, while at $q$ there is ramification of index $1$. In the local analytic rings of $p$, $q$ and $r$, the generators of $\widehat{{\mathcal}{O}}_{Y,p}$ map into the squares of the generators of $\widehat{{\mathcal}{O}}_{X,q}$: $u\mapsto x^2, v\mapsto y^2$, and of course, $t\mapsto t$, so that the local equation of $Y$ near $p$ is $uv=t^2$, and that of $X$ near $q$ is $xy=t$. But since the triple cover is a local isomorphism of $\widehat{{\mathcal}{O}}_{Y,p}$ into $\widehat{{\mathcal}{O}} _{X,r}$, the total space of $X$ near $r$ is given locally analytically by $zw=t^2$ ($u\mapsto z, v\mapsto w, t\mapsto t$).
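The compatibility of these local models can be checked symbolically. The sketch below (SymPy; the coordinates $u,v,t$ and $x,y$ are the hypothetical local parameters from the discussion above, not new notation) verifies that the substitution $u\mapsto x^2$, $v\mapsto y^2$ carries the downstairs equation $uv=t^2$ into the upstairs relation $xy=t$, and records the resulting local multiplicities:

```python
import sympy as sp

x, y, u, v, t = sp.symbols('x y u v t')

# Hypothetical local coordinates as in the text: the triple cover acts by
# u -> x^2, v -> y^2, t -> t, with the relation x*y = t upstairs near q.
pullback = {u: x**2, v: y**2, t: x*y}

# The downstairs equation uv = t^2 pulls back to x^2*y^2 - (xy)^2 = 0,
# so Y is uv = t^2 near p exactly when X is xy = t near q.
assert sp.expand((u*v - t**2).subs(pullback)) == 0

# Local multiplicities are read off as the exponents in xy = t^m:
m_q, m_r = 1, 2          # xy = t at q,  zw = t^2 at r
assert m_q + m_r == 3    # total contribution of the two nodes of C
```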
Therefore, $m_q=1$, but $m_r=2$, and $${\operatorname}{mult}_{\delta}\delta_{4,i}=m_q+m_r=3.\,\,\,\qed$$ $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=mult4.ps,width=2.8in,height=1.3in}} \hspace{-1mm}\end{array}}$$ The hyperelliptic locus $\overline{\mathfrak{I}}_g$ inside $\overline{\mathfrak{T}}_g$ {#hyperelliptic locus} -------------------------------------------------------------------------------------- Although the relations proved in this paper will be valid on the Picard group ${\operatorname}{Pic}_{\mathbb Q}\overline{\mathfrak{T}}_g$, it will be interesting to check what happens with the hyperelliptic curves inside the trigonal locus $\overline{\mathfrak{T}}_g$. We noted that $\overline{\mathfrak{I}}_g$ is the only boundary component of $\overline{\mathfrak{T}}_g$ of codimension 2. It is obtained as the image $pr_1(\Delta{{\mathcal}H}_{3,0})$. By blowing up a point on a smooth hyperelliptic curve $C$, we add a ${\mathbf P}^1$–component to $C$ to make it a triple cover $C^{\prime}$ (cf. Fig. \[smoothhyper\]). In terms of quasi-admissible covers, such a $C^{\prime}$ behaves exactly as an irreducible singular trigonal curve in $\Delta{\mathfrak{T}}_0$. However, $C$ does not contribute to the invariant $\delta|_B$, as defined in Section \[definition\]; in fact, in a certain sense, it even decreases $\delta|_B$. To simplify the exposition, we postpone the discussion of families with hyperelliptic fibers until Section 11, where we explain the behavior of trigonal families with finitely many hyperelliptic fibers in terms of the exceptional divisor $\Delta{{\mathcal}H}_{3,0}$ of the projection $pr_1$. A similar phenomenon occurs with the boundary component $\Delta\mathfrak{T}_{1,0}=pr_1(\Delta{\mathcal}{H}_{1,0})$, but it does not make sense to exclude its members from our discussion, since they behave exactly as members of the boundary divisors $\Delta\mathfrak{T}_{1,i}$ for $i\geq 1$.
The invariants $\mu(C)$ {#The invariants} ----------------------- In the transition from the original family $X\rightarrow B$ to the minimal quasi-admissible family $\widetilde{X}\rightarrow \widetilde{Y}$ over $\widetilde{B}$, certain changes occur in the calculation of the basic invariants. To start with, it is easy to redefine $\lambda_{\widetilde{X}},\kappa_{\widetilde{X}}$ and $\delta_{\widetilde{X}}$ for $\widetilde{X}\rightarrow \widetilde{B}$: simply use the corresponding definitions from Section \[definition\]. Since we are interested in the slope of the family, which is invariant under base change, we may assume that $\widetilde{B}:=B$ and that $X$ is the pull-back over the new base $\widetilde{B}$. Now the difference between $X$ and $\widetilde{X}$ is reduced to several “quasi-admissible” blow-ups on $X$. Blowing up smooth or rational double points on a surface does not affect its structure sheaf. Therefore, the degrees of the Hodge bundles on the two surfaces $X$ and $\widetilde{X}$ will be the same: $\lambda_{\widetilde{X}}=\lambda_X$. On the other hand, blowing up a smooth point on a surface decreases the square of its dualizing sheaf by 1, while there is no effect when blowing up a rational double point. Each type of singular fiber $C$ in $X$ requires a priori different quasi-admissible modifications (or no modifications at all), and thus decreases $\kappa_X$ by some nonnegative integer, denoted by $\mu(C)$: $$\kappa_X=\kappa_{\widetilde{X}}+\sum_{C}\mu(C).$$ Thus, $\mu(C)$ counts the number of “smooth blow-ups” on $C$, which are needed to obtain the minimal quasi-admissible cover $\widetilde{C}\rightarrow C$ within the surface quasi-admissible cover $\widetilde{\phi}: \widetilde{X}\rightarrow \widetilde{Y}$. In the following lemma, we compute the invariants $\mu(C)$ only for the general members of the boundary $\Delta{\mathfrak{T}}_g$ (cf. Fig. \[Delta-k,i\]).
The remaining, more special, singular curves in $\Delta{\mathfrak{T}}_g$ will be linear combinations of these $\mu(C)$’s (cf. Sect. 11). If $\mu_{k,i}$ denotes the invariant $\mu(C)$ for a general curve $C\in\Delta\mathfrak{T}_{k,i}$, then $$\begin{aligned} {\operatorname}{(a)}&&\mu_0=\mu_{1,i}=\mu_{4,i}=\mu_{6,i}=0;\\ {\operatorname}{(b)}&&\mu_{2,i}=1;\\ {\operatorname}{(c)}&&\mu_{3,i}=\mu_{5,i}=2.\end{aligned}$$ \[mu(C)\] [*Proof.*]{} The general members of the boundary $\Delta{{\mathcal}H}$ are in fact the minimal quasi-admissible covers associated to the general members of the boundary $\Delta{\mathfrak{T}}$, except for $\Delta_0$, which has $\mu_0=0$. Thus, we trace the blow-ups necessary to transform the curves in Fig. \[Delta-k,i\] to the curves in Fig. \[admissible-k,i\]. For example, no blow-ups are needed in the case of $\Delta_{1,i}$, so that $\mu_{1,i}=0$, while we need 2 blow-ups in the case of $\Delta_{3,i}$, and hence $\mu_{3,i}=2$. The only interesting situation occurs for $\Delta_{5,i}$. At first glance, there is only [*one*]{} added component ${\mathbf P}^1$ to the original $C\in\Delta_{5,i}$, but the lemma states that $\mu_{5,i}=2$. The difference comes from the fact that near the intersection $r=C\cup{\mathbf P}^1$ the surface $\widetilde{X}$ has equation $xy=t^2$, i.e. $r$ is a rational double point on $\widetilde{X}$ of type $A_1$ (a similar situation occurred in Fig. \[mult4\]). To obtain such a point $r$ in place of a smooth point $r_1$ on $X$, we first blow up $r_1$, and then on the obtained exceptional divisor we blow up another point $r_2$, so as to end with a [*chain of two*]{} ${\mathbf P}^1$’s (cf. Fig. \[mu-5,i\]). Finally, we blow down the first ${\mathbf P}^1$, producing the required rational double point $r$. As a result, we have made two “smooth” blow-ups and one “singular” one, which implies $\mu_{5,i}=2.$ $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=mu.ps,width=3in,height=1in}} \hspace{-1mm}\end{array}}$$ 5.
Effective Covers {#effective-covers .unnumbered} =================== \[effectivecovers\] In this section we construct the final type of triple covers in the Basic Construction. These will not be necessary for the global calculation in Section 7, so the reader may wish to skip this more technical part on a first reading, and assume in Section 6 that all covers are flat. Construction of effective covers $\widehat{X}\rightarrow\widehat{Y}$ {#constructioneffective} -------------------------------------------------------------------- Consider a quasi-admissible cover $\widetilde{\phi}:\widetilde{X}\rightarrow \widetilde{Y}$, as given in Prop. \[propquasi\]. In order to use the fact that the pushforward $\widetilde{\phi}_*{{\mathcal}{O}_{\widetilde{X}}}$ is locally free on $\widetilde{Y}$, we need to ensure that the map $\widetilde{\phi}$ is [*flat*]{}. Unfortunately, there are certain points on $\widetilde{X}$ where this fails to be true: exactly where the fibers of $\widetilde{X}$ are ramified as triple covers of the corresponding fibers of $\widetilde{Y}$. The situation can be resolved by several further blow-ups. Namely, we work locally analytically near the points of $\widetilde{X}$ of ramification index 1 or 2, and consider the two cases separately. ### Case of ramification index 1 {#caseram1} This case involves members of the boundary divisors $\Delta{\mathfrak{T}}_{4,i}$ and $\Delta{\mathfrak{T}}_{5,i}$. Let $q$ be the point of ramification in the fiber of $\widetilde{X}$ over the point $p$ in the fiber of $\widetilde{Y}$ (cf. Fig. \[ram\]).
We use the pull-back of the map $\widetilde{\phi}$ to study the embedding of the completion of the local ring of $p$ into that of $q$: $$\widehat{{\mathcal}{O}}_{\widetilde{Y}\!,p}=\mathbb{C}[[u,v,t]] \big/_{\displaystyle{(uv-t^2)}} \stackrel{\widetilde{\phi}^{\#}} {\hookrightarrow}\widehat{{\mathcal}{O}}_{\widetilde{X}\!,q}= \mathbb{C}[[x,y,t]]\big/_{\displaystyle{(xy-t)}}.$$ $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=ram.ps,width=2.7in,height=1.2in}} \hspace{-1mm}\end{array}}$$ Therefore, as an $\widehat{{\mathcal}{O}}_{\widetilde{Y}\!,p}$-module, $$\widehat{{\mathcal}{O}}_{\widetilde{X}\!,q}= \widehat{{\mathcal}{O}}_{\widetilde{Y}\!,p}+ \widehat{{\mathcal}{O}}_{\widetilde{Y}\!,p}x+ \widehat{{\mathcal}{O}}_{\widetilde{Y}\!,p}y.$$ However, this is not a locally free $\widehat{{\mathcal}{O}}_{\widetilde{Y},p}$-module: for instance, one relation among the generators is $(v-t)x+(u-t)y=0$. Alternatively, the fiber of $\widetilde{\phi}$ over $p$ is supported at $q$, but it is of degree 3 rather than 2, which is what flatness of the local degree-$2$ map would require. Indeed, as $\mathbb{C}$-vector spaces: $$\widehat{{\mathcal}{O}}_{\widetilde{X}\!,q}\otimes_{\widehat{{\mathcal}{O}}_ {\widetilde{Y}\!,p}} {\operatorname}{Spec}k(p)\cong \widehat{{\mathcal}{O}}_{\widetilde{X}\!,q}\big/_{\displaystyle {\widehat{\mathfrak{m}}_{Y\!,p}\widehat{{\mathcal}{O}}_{\widetilde{X}\!,q}}}\cong \mathbb{C}[[x,y]]\big/_{\displaystyle{(x^2,y^2,xy)}}=\mathbb{C}\oplus \mathbb{C}x\oplus\mathbb{C}y.$$ In Fig. \[ram\] one can visually observe the two distinct tangent directions at $q$ making it a [*fat*]{} point of degree $3$. We conclude that $\widetilde{\phi}$ is indeed non-flat at $q$. To resolve this, we blow up $\widetilde{Y}$ at $p$ and $\widetilde{X}$ at $q$, denoting the new surfaces by $\widehat{Y}$ and $\widehat{X}$.
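Both failures of flatness noted above admit a quick symbolic check. The sketch below (SymPy; it uses the same hypothetical local model $u\mapsto x^2$, $v\mapsto y^2$, $t\mapsto xy$ as before) verifies the stated relation among the generators $1,x,y$ and confirms that the scheme-theoretic fiber $\mathbb{C}[[x,y]]/(x^2,y^2,xy)$ has length 3:

```python
import sympy as sp

x, y = sp.symbols('x y')
# Hypothetical local model from the text: u -> x^2, v -> y^2, t -> xy.
u, v, t = x**2, y**2, x*y

# The stated relation among the generators 1, x, y over O_{Y,p}:
assert sp.expand((v - t)*x + (u - t)*y) == 0

# The fiber over p is C[[x,y]]/(x^2, y^2, xy): its monomial basis is
# {1, x, y}, a fat point of degree 3 rather than the 2 flatness would need.
I = [x**2, y**2, x*y]
fiber_basis = [m for m in [sp.Integer(1), x, y, x**2, x*y, y**2]
               if sp.reduced(m, I, x, y)[1] == m]
assert fiber_basis == [1, x, y]
```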
It is easy to see that they fit into the following commutative diagram: $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=blow1.ps,width=3.5in,height=1.4in}} \hspace{-1mm}\end{array}}$$ In order to keep the map $\widehat{\phi}:\widehat{X}\rightarrow \widehat{Y}$ of degree 3, we need to blow up one further point on $\widetilde{X}$: if the inverse image of $p$ is $\{q,r\}$, we blow up $r$, and thus we add the necessary component to $\widehat{X}$ to make it a triple cover of $\widehat{Y}$ (cf. Fig. \[coef2.fig\]). ### Case of ramification index 2 {#caseram2} The only boundary component where ramification index 2 occurs is $\Delta{\mathfrak{T}}_{6,i}$. As above, $\widetilde{\phi}: \widetilde{X}\rightarrow \widetilde{Y}$ is non-flat at $q$. Indeed, $\widehat{{\mathcal}{O}}_{\widetilde{X}\!,q}$ is generated as an $\widehat{{\mathcal}{O}}_{\widetilde{Y}\!,p}$-module by $1,x,y,x^2,y^2$, but not freely, due to the relation $u\cdot x+v\cdot y-t\cdot x^2-t\cdot y^2=0$. To resolve the non-flatness of $\widetilde{\phi}$, we can blow up $\widetilde{X}$ and $\widetilde{Y}$ once, at $q$ and $p$ respectively, but this is not sufficient. In fact, we must make further blow-ups on each surface, as Fig. \[resolve2\] suggests: two more on $\widetilde{X}$ and one more on $\widetilde{Y}$. $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=blow2.ps,width=5.3in,height=1.4in}} \hspace{-1mm}\end{array}}$$ In both cases of ramification index 1 or 2, the new map $\widehat{\phi}:\widehat{X}\rightarrow \widehat{Y}$ is obtained from $\widetilde{\phi}$ by a base change, and hence $\widehat{\phi}$ is [*proper*]{} and [*finite*]{}, and by construction, also a [*flat*]{} morphism. We call such covers [*effective*]{}. The above considerations combined with Prop. \[quasicov\] imply the existence of effective covers for our families of trigonal curves: Let $X/\!_{\displaystyle{B}}$ be a family of trigonal curves with smooth general member.
After several blow-ups (and possibly modulo a base change) we can associate to it an effective cover $\widehat{\phi}:\widehat{X}\rightarrow \widehat{Y}$. \[effexist\] Here $\widehat{Y}$ is a birationally ruled surface over $B$. If the base curve $B$ is [*not*]{} tangent to the boundary divisors $\Delta {\mathfrak{T}}_{k,i}$, then the resulting surfaces $\widehat{X}$ and $\widehat{Y}$ will have smooth total spaces. If, moreover, $B$ intersects the $\Delta_{k,i}$’s only in their general points (as given in Fig. \[Delta-k,i\]), then the special fibers of $\widehat{Y}$ and $\widehat{X}$ are easy to describe (cf. Fig. \[coef1.fig\]-\[coef3.fig\]). For example, $\widehat{Y}$’s special fibers are either chains of two or three reduced projective lines, or chains of five smooth rational curves with non-reduced middle component of multiplicity two. The special fibers of $\widehat{X}$ can also contain nonreduced components (of multiplicity 2 or 3), and this occurs only in the ramification cases discussed above ($\Delta{\mathfrak{T}}_{k,i}$ for $k=4,5,6$). Change of $\lambda_X,\kappa_X$ and $\delta_X$ in the effective covers {#change} --------------------------------------------------------------------- This is analogous to the discussion in Section \[The invariants\]. After the necessary base changes we again identify, without loss of generality, the new base curve $\widetilde{B}$ with $B$, and the pull-back of $X$ over $\widetilde{B}$ with $X$, and we redefine the basic invariants $\lambda_{\widehat{X}}$ and $\kappa_{\widehat{X}}$ for the effective family $\widehat{X}$ over $\widetilde{B}$. (It doesn’t make sense to define $\delta_{\widehat{X}}$ directly, because of the nonreduced fiber components in $\widehat{X}$. We could, of course, set $\delta_{\widehat{X}}=12\lambda_{\widehat{X}}-\kappa_{\widehat{X}}$, but we will not need this in the sequel.) Now the original $X$ and the effective $\widehat{X}$ differ by “quasi-admissible” and “effective” blow-ups.
The connections between the invariants of $X$, $\widetilde{X}$ and $\widehat{X}$ are expressed by the following With the above notation, $$\begin{aligned} {\operatorname}{(a)}&\!\!\!\!&\displaystyle{\lambda_X}=\lambda_{\widetilde{X}}= \lambda_{\widehat{X}};\\ {\operatorname}{(b)}&\!\!\!\! &\kappa_X=\kappa_{\widetilde{X}}+\sum_{C}\mu(C);\\ {\operatorname}{(c)}&\!\!\!\! &\displaystyle{\kappa_{\widetilde{X}}= \kappa_{\widehat{X}}+\sum_{{\operatorname}{ram}1}1+\sum_{{\operatorname}{ram}2}3}.\end{aligned}$$ \[changeinv\] In view of Lemma \[mu(C)\], the first and the second statements are obvious. Obtaining a flat cover $\widehat{X}\rightarrow \widehat{Y}$ requires blowing up on $\widetilde{X}$ one smooth point for each ramification index 1, and three smooth points for each ramification index 2. Hence the relation between $\kappa_{\widehat{X}}$ and $\kappa_{\widetilde{X}}.$ 6. Embedding $\widehat{X}$ in a Projective Bundle over $\widehat{Y}$ {#embedding-widehatx-in-a-projective-bundle-over-widehaty .unnumbered} ==================================================================== \[embedding\] Given the effective degree $3$ map $\widehat{\phi}:\widehat{X}\rightarrow \widehat{Y}$, our next step is to embed $\widehat{X}$ into a projective bundle ${\mathbf P}V$ of rank $1$ over the birationally ruled surface $\widehat{Y}$. We shall consider a degree $3$ morphism $\widehat{\phi}$, but the same discussion is valid for any degree $d$. Trace map {#tracemap} --------- Since $\widehat{\phi}$ is flat and finite, the pushforward $\widehat{\phi}_*({\mathcal}O_{\widehat{X}})$ is a locally free sheaf on $\widehat{Y}$ of rank $3$. Define the [*trace*]{} map $${\operatorname}{tr}:\widehat{\phi} _*({\mathcal}O_{\widehat{X}})\rightarrow {\mathcal}O_{\widehat{Y}}$$ as follows.
The finite field extension $K(\widehat{X})$ of $K(\widehat{Y})$ induces the [*algebraic*]{} trace map ${\operatorname}{tr^\#}:K(\widehat{X})\rightarrow K(\widehat{Y})$, defined by ${\operatorname}{tr^\#}(a)=\textstyle{\frac{1}{3}}(a_1+a_2+a_3)$. Here the $a_i$’s are the conjugates of $a$ over $K(\widehat{Y})$ in an algebraic closure of $K(\widehat{X})$. The restriction ${\operatorname}{tr^\#}{|}_{K(\widehat{Y})}={\operatorname}{id}_{K(\widehat{Y})}$. Over an affine open $U={{\operatorname}{Spec}}\,A\subset \widehat{Y}$ and its affine inverse image $\widehat{\phi}^{-1}(U)= {{\operatorname}{Spec}}\,B\subset \widehat{X}$, $B$ is the integral closure of $A$ in $K(\widehat{X})$. Therefore, the trace map restricts to an $A$-module homomorphism ${\operatorname}{tr^\#}:B\rightarrow A$. We have a commutative diagram: $$\begin{array}{ccc} B & \hookrightarrow & K(\widehat{X})\\[3pt] {\scriptstyle{\operatorname}{tr}^{\#}}\Big\downarrow & & \Big\downarrow{\scriptstyle{\operatorname}{tr}^{\#}}\\[3pt] A & \hookrightarrow & K(\widehat{Y}) \end{array} \qquad\qquad \begin{array}{c} \widehat{\phi}^{-1}(U)={\operatorname}{Spec}\,B\\[3pt] \Big\downarrow{\scriptstyle\widehat{\phi}}\\[3pt] U={\operatorname}{Spec}\,A \end{array}$$ The local maps ${\operatorname}{tr}:\widehat{\phi}_*{\mathcal}O_{{\operatorname}{Spec}\,B} \twoheadrightarrow {\mathcal}O_{{\operatorname}{Spec}\,A}$ so defined patch up to give a global trace map ${\operatorname}{tr}:\widehat{\phi}_*{\mathcal}O_{\widehat{X}} \twoheadrightarrow {\mathcal}O_{\widehat{Y}}.$ Let ${\mathcal}V$ be the kernel of ${\operatorname}{tr}$: $$0\rightarrow {{\mathcal}V}\rightarrow {\widehat{\phi}} _*{{\mathcal}O}_{\widehat X}\stackrel {{\operatorname}{tr}}{\rightarrow}{{\mathcal}O}_{\widehat Y}\rightarrow 0.
\label{splitting}$$ Note that ${\mathcal}V$ is locally free of rank $2$. The natural inclusion ${\mathcal}O_{\widehat{Y}}\hookrightarrow \widehat{\phi} _*{\mathcal}O_{\widehat{X}}$, composed with ${\operatorname}{tr}$, is the identity on ${\mathcal}O_{\widehat{Y}}$, hence the exact sequence splits: $${\widehat{\phi}}_*{{\mathcal}O}_{\widehat X}={{\mathcal}O}_{\widehat Y}\oplus {{\mathcal}V}. \label{directsum}$$ ### Geometric interpretation of the trace map It is useful to interpret the trace map geometrically in terms of the corresponding vector bundles $\widehat{\phi}_*{O_{\widehat X}}$, $O_{\widehat Y}$ and $V$ associated to the sheaves $\widehat{\phi}_*{{\mathcal}O}_{\widehat X}$, ${{\mathcal}O}_{\widehat Y}$ and ${\mathcal}V$. We again localize over affine opens, and if necessary, we shrink $U={\operatorname}{Spec}\,A$ so that $\widehat{\phi}_* {\mathcal}O_{\widehat X}$ becomes a [*free*]{} ${\mathcal}O_{\widehat Y}$-module. Let $p$ be a closed point in ${\operatorname}{Spec}\,A$ with maximal ideal $\mathfrak{p} \subset A$, having three distinct preimages $q,r,s\in{\operatorname}{Spec}\,B$ with maximal ideals $\mathfrak{q,r,s}\subset B$. Since $B$ is a free $A$-module, the quotient $B/{\mathfrak{p}}B$ is a 3-dim’l algebra over the ground field ${\operatorname}{k}(p)=A/{\mathfrak{p}}$, i.e. a 3-dim’l vector space over ${\mathbb C}$. On the other hand, from $\mathfrak{qrs}=\mathfrak{q}\cap \mathfrak{r}\cap \mathfrak{s}$ and the Chinese Remainder Theorem, it is clear that $B/{\mathfrak{p}}B\cong B/{\mathfrak{q}} \oplus B/{\mathfrak{r}} \oplus B/{\mathfrak{s}}\cong {\mathbb C}\overline{f}_q\oplus{\mathbb C}\overline{f}_r\oplus{\mathbb C}\overline{f}_s.$ The generators $\overline{f}_q,\overline{f}_r,\overline{f}_s$ are chosen as usual: $\overline{f}_q$, for instance, is the residue in ${\operatorname}{k}(q)$ of a function $f_q\in B$ such that $f_q\equiv 1({\operatorname}{mod}\mathfrak{q}),\,\, f_q\equiv 0({\operatorname}{mod}\mathfrak{r,s})$. 
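The local description of the trace can be illustrated on a toy cover. The example below (SymPy) is not taken from the paper: it takes the hypothetical degree-3 cover $u=x^3-3x$ of affine lines, and checks that the algebraic trace $\operatorname{tr}^{\#}$ read off from the minimal polynomial agrees with the fiberwise average over the three preimages $q,r,s$:

```python
import sympy as sp

x, u = sp.symbols('x u')

# Hypothetical degree-3 cover Spec B -> Spec A: A = Q[u], B = Q[x], u = x^3 - 3x.
cover = x**3 - 3*x

# Algebraic trace: the minimal polynomial of x over K(Y) is x^3 - 3x - u, so
# the three conjugates of x sum to minus its x^2-coefficient, i.e. to 0.
minpoly = sp.Poly(cover - u, x)
assert -minpoly.coeff_monomial(x**2) == 0

# Fiberwise trace over p = {u = 0}: average a sample function f over the
# three preimages (the roots 0, sqrt(3), -sqrt(3) of x^3 - 3x).
f = x**2 + 1
preimages = sp.solve(cover, x)
tr_p = sp.Rational(1, 3) * sum(f.subs(x, a) for a in preimages)
assert sp.simplify(tr_p) == 3      # (1 + 4 + 4)/3

# tr restricted to pull-backs of functions regular at p is the identity: tr(1) = 1.
assert sp.Rational(1, 3) * sum(sp.Integer(1) for a in preimages) == 1
```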
In the Grothendieck style, the vector bundle over $\widehat Y$ associated to $\widehat{\phi}_*{\mathcal}O_{\widehat X}$ is ${\operatorname}{Spec}\,{\operatorname}{S}(B_A)$, where ${\operatorname}{S}(B_A)$ is the symmetric algebra of $B$ over $A$. Its fiber over $p$ is the pull-back ${\operatorname}{Spec}\,{\operatorname}{S}(B_A)\times _{{\operatorname}{Spec}\,A}{\operatorname}{Spec}\,{\operatorname}{k}(p) = {\operatorname}{Spec}({\operatorname}{S}(B_A)\otimes_A A/\mathfrak {p})={\operatorname}{Spec}\,{\operatorname}{S}(B/{\mathfrak{p}}B).$ We prefer to work with the dual $\widehat{\phi}_*{O_{\widehat X}}$ of this bundle, and the same goes for projectivizations: we projectivize the 1-dim’l subspaces of $\widehat{\phi}_*{O_{\widehat X}}$ rather than its 1-dim’l quotients. In view of this convention, the fiber of the bundle $\widehat{\phi}_*{O_{\widehat X}}$ is canonically identified as $$(\widehat{\phi}_*{O_{\widehat X}})_p=B/{\mathfrak p}B\cong {\mathbb C}\overline{f}_q \oplus{\mathbb C}\overline{f}_r\oplus{\mathbb C}\overline{f}_s \cong {\bf A}^3_{\mathbb C}.$$ The generators $\overline{f}_q,\overline{f}_r,\overline{f}_s$ span three lines in ${\bf A}^3_{\mathbb C}$, which can be canonically described: the line $\Lambda_q= {\mathbb C}\overline{f}_q$, for example, corresponds to all functions regular at $q,r$ and $s$, and vanishing at $r$ and $s$. Similarly, the vector bundle $O_{\widehat Y}$ associated to ${\mathcal}O_{\widehat Y}$ has fiber $(O_{\widehat Y})_p=A/{\mathfrak {p}} \cong {\mathbb C}\overline{h}_p$, where $h_p$ is a function near $p$ having residue $h_p(p)=1$ in ${\operatorname}{k}(p)$.
The trace map ${\operatorname}{tr}:\widehat{\phi}_*{\mathcal}O_{\widehat{X}} \twoheadrightarrow {\mathcal}O_{\widehat{Y}}$ translates fiberwise in terms of the vector bundles $\widehat{\phi}_*{O_{\widehat X}}$ and $O_{\widehat Y}$ as: $${\operatorname}{tr}_p:{\mathbb C}\overline{f}_q\oplus{\mathbb C}\overline{f}_r\oplus{\mathbb C}\overline{f}_s \rightarrow {\mathbb C}\overline{h}_p,\,\,\overline{f}\mapsto \frac{1}{3} (f(q)+f(r)+f(s)).$$ Finally, the locally free subsheaf ${{\mathcal}V}={\operatorname}{Ker(tr)}\subset \widehat{\phi}_*{{\mathcal}O}_{\widehat X}$ is associated to a vector bundle $V$ with fiber $V_p=\{\overline{f}\,\,|\,\,f(q)+f(r)+f(s)=0\} \subset (\widehat{\phi}_*{O_{\widehat X}})_p.$ Equivalently, from the direct sum (\[directsum\]), $V_p=(\widehat{\phi}_*{O_{\widehat X}})_p \big{/}_{\textstyle{\Lambda}}$, where the line $\Lambda=\{\overline{f}\,\,|\,\,f(q)=f(r)=f(s)\}$ corresponds to pull-backs of functions regular at $p$. $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=embedding.ps,width=3in,height=1.88in}} \hspace{-1mm}\end{array}}$$ Since the four lines $\Lambda_q,\Lambda_r,\Lambda_s$ and $\Lambda$ are in general position in the fiber $(\widehat{\phi} _*{O_{\widehat X}})_p$, modding out by $\Lambda$ yields three distinct lines in the fiber $V_p$ (cf. Fig. \[geometric\]). Therefore, projectivizing $V_p$ naturally induces three distinct points $Q,R,S$ in the fiber ${{\mathbf P}}^1$ of ${{\mathbf P}}V$. Going the other way around the diagram, we first projectivize $(\widehat{\phi}_*{O_{\widehat X}})_p\cong{\bf A}^3$, and then we project from the point $[\Lambda]$ onto the fiber of ${{\mathbf P}}V$. In other words, $\pi_{[\Lambda]} :{{\mathbf P}}^2\dashrightarrow {{\mathbf P}}^1$ is well-defined at $[\Lambda_q],[\Lambda_r]$ and $[\Lambda_s]$. This completes the interpretation of the trace map in the case of three distinct preimages $q,r,s$ in $\widehat{X}$.
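The linear algebra in the fibers can be made concrete. The sketch below (NumPy; the identification of the fiber with $\mathbb{C}^3$ via the basis $\overline f_q,\overline f_r,\overline f_s$ follows the text, while the particular basis chosen for $V_p$ is an arbitrary illustration) checks that $V_p=\ker\operatorname{tr}_p$ is 2-dimensional and that the three lines $\Lambda_q,\Lambda_r,\Lambda_s$ stay distinct after modding out by $\Lambda$:

```python
import numpy as np

# Fiber of phi_* O_X at p, with coordinates f(q), f(r), f(s).
e = np.eye(3)                       # spans of f_q, f_r, f_s: the lines Lambda_q,r,s
tr = np.ones(3) / 3                 # tr_p(f) = (f(q) + f(r) + f(s)) / 3

# V_p = ker(tr_p) is 2-dimensional, e.g. spanned by:
V_basis = np.array([[1.0, -1.0, 0.0], [0.0, 1.0, -1.0]])
assert np.allclose(V_basis @ tr, 0)

# Modding out by Lambda = span(1,1,1): project along Lambda onto ker(tr_p)
# by subtracting the mean value.
proj = lambda f: f - f.mean()
images = [proj(v) for v in e]

# The three images are pairwise non-proportional: three distinct points in P(V_p).
for i in range(3):
    for j in range(i + 1, 3):
        assert np.linalg.matrix_rank(np.vstack([images[i], images[j]])) == 2
```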
In the case of only one or two preimages of $p$ in $\widehat{X}$, one modifies the above interpretation accordingly. $\widehat{X}$ embeds naturally in ${\mathbf P}V$ over $\widehat{Y}$ {#naturally} ------------------------------------------------------------------- We construct the map $i:\widehat{X}\hookrightarrow {\mathbf P}V$ using an invertible relative dualizing sheaf $\omega_{\widehat X/ \widehat Y}$. Its existence imposes a mild technical condition on the schemes $\widehat X$ and $\widehat Y$: we assume that they are Gorenstein, i.e. Cohen-Macaulay with invertible dualizing sheaves $\omega_{\widehat X/{\mathbb C}}$ and $\omega_{\widehat Y/{\mathbb C}}$. In our situation this will be sufficient. As we noted in Section \[constructioneffective\], when the base curve $B$ is [*not*]{} tangent to the boundary divisors $\Delta{\mathfrak{T}}_{k,i}$, then $\widehat{X}$ and $\widehat{Y}$ are smooth surfaces. The remaining cases are “local” base changes of these, and the construction carries over. Let $\widehat{\phi}:\widehat X\rightarrow \widehat Y$ be a flat and finite degree $d$ morphism of Gorenstein schemes, with $\widehat Y$ integral. Then $\widehat{\phi}$ factors through a natural embedding of $\widehat X$ into the projective bundle ${\mathbf P}V$, followed by the projection $\pi: {\mathbf P}V\rightarrow \widehat Y$ (cf. Fig. \[basicconstruction\]). \[propembedding\] For easier referencing in the sequel, we have kept the notation $\widehat X$ and $\widehat Y$, but the statement is true for [*any*]{} schemes satisfying the Gorenstein condition. For another proof of Prop. \[propembedding\], see [@embedding]. [*Proof of Prop. \[propembedding\]*]{}. Here we construct the map $i:\widehat X\rightarrow {\mathbf P}V$, prove its regularity, and indicate how to show its injectivity.
$$\begin{array}{ccc} {\mathbf P}({{\mathcal}O}_{\widehat Y})\,\,\stackrel{{\operatorname}{tr}^{\#}}{\hookrightarrow}\,\, & {\mathbf P}({\widehat{\phi}}_*{{\mathcal}O}_{\widehat X}) & \,\,\stackrel{\rho}{\dashrightarrow}\,\,{\mathbf P}V\\[3pt] & {\scriptstyle\psi}\Big\uparrow\,\,{\nearrow}{\scriptstyle\,i} & \\[3pt] & \widehat X & \end{array}$$ ### Construction of the embedding map According to Prop. II.7.12 [@Hartshorne], to give a morphism $\psi:\widehat X\rightarrow {{\mathbf P}}(\widehat{\phi} _*({\mathcal}O_{\widehat X}))$ over $\widehat Y$ is equivalent to giving an invertible sheaf ${\mathcal}L$ on $\widehat X$ and a surjective map of sheaves $\widehat{\phi}^*(\widehat{\phi}_*({\mathcal}O_{\widehat X}){\textstyle {\widehat{\phantom{n}}}})\twoheadrightarrow {\mathcal}L$. Recall from [*relative Serre duality*]{} that $(\widehat{\phi}_*{\mathcal}O_{\widehat X}){\textstyle {\widehat{\phantom{n}}}}\cong \widehat{\phi} _*\omega_{\widehat X/\widehat Y}$, and let ${\mathcal}L=\omega_{\widehat X/\widehat Y}$. The natural morphism $$\sigma:\widehat{\phi}^*\widehat{\phi} _*\omega_{\widehat X/\widehat Y}\rightarrow\omega_{\widehat X/\widehat Y}$$ is [*surjective*]{}. This is in fact true for any quasicoherent sheaf ${\mathcal}F$ on $\widehat X$. Indeed, restricting to the affine open sets $\widehat{\phi}:{\operatorname}{Spec}\,B\rightarrow {\operatorname}{Spec}\,A$, we have ${\mathcal}F=M^{\sim}$ for some finitely generated $B$-module $M$, and $\widehat{\phi}^*\widehat{\phi} _*{{\mathcal}F}=\widehat{\phi}^*(M_A)^{\sim}=(M_A\otimes_A B)^{\sim}.$ The surjective $B$-module homomorphism $M_A\otimes_A B \twoheadrightarrow M$, given by $m\otimes b \mapsto b\circ m$, induces $\widehat{\phi}^*\widehat{\phi}_*{{\mathcal}F}\twoheadrightarrow {\mathcal}F$. Thus, the above map $\sigma$ naturally defines a morphism $\psi:\widehat X\rightarrow {{\mathbf P}}(\widehat{\phi}_*({\mathcal}O_ {\widehat X}))$ over $\widehat Y$.
Projectivizing $0\rightarrow {{\mathcal}O}_{\widehat Y} \rightarrow {\widehat{\phi}} _*{{\mathcal}O}_{\widehat X}\rightarrow {{\mathcal}V} \rightarrow 0$ gives a sequence of projective bundles, as in Fig. \[construction of i\]. The map $\rho$ is undefined exactly on the image of ${\operatorname}{tr} ^{\#}$. Composing $\rho$ with the map $\psi$ yields the map $i:\widehat{X}\dashrightarrow {\mathbf P}V$, which we claim is a regular map. ### Regularity and injectivity of $i$. To see regularity, we show that the restriction $\sigma|_{\widehat{\phi}^*({\mathcal}V{{\widehat{\phantom{n}}}})} :{\widehat{\phi}^*({\mathcal}V{{\widehat{\phantom{n}}}})} \rightarrow \omega_{\widehat X/\widehat Y}$ is also surjective. Indeed, we again work locally, and let $B=A\oplus C$ be the decomposition of $B$ via the trace map as a free $A$-module, where $C=A\cdot b_1\oplus A\cdot b_2$ with ${\operatorname}{tr}b_1={\operatorname}{tr}b_2=0$. Let $\omega_{\widehat X/\widehat Y}=M^{\sim}$ for some finitely generated $B$-module $M$. Recall that $\widehat{\phi}_*\omega_{\widehat X/\widehat Y}\cong (\widehat{\phi}_*{\mathcal}O_{\widehat X}){\textstyle{\widehat{\phantom{n}}}}$, so that as $A$-modules: $M\cong (B_A){{\widehat{\phantom{n}}}}={\operatorname}{Hom}_A (B,A)$, and $B$ acts on $M$ by $$(b\circ f)(x)=f(bx)\,\,\,{\operatorname}{for}\,\,\,f\in {\operatorname}{Hom}_A(B,A),\,x\in B.$$ Naturally, the sheaf ${\mathcal}V=C^{\sim}$, and $\widehat{\phi}^* ({\mathcal}V{{\widehat{\phantom{n}}}})=({\operatorname}{Hom}_A(C,A)\otimes_A B)^{\sim}$, where we think of $f\in {\operatorname}{Hom}_A(C,A)$ as an element of ${\operatorname}{Hom}_A(B,A)$ by extending it via $f(1)=0$. Our statement is equivalent to showing that the $B$-module homomorphism $$\sigma:{\operatorname}{Hom}_A(C,A)\otimes_A B \rightarrow \big{(}{\operatorname}{Hom}_A(B,A) \big{)}_B,\,\,f\otimes b \mapsto b\circ f,$$ is surjective. In fact, it suffices to show that the trace map is in the image of $\sigma$, i.e.
to find $c_1,c_2\in B$ such that $${\operatorname}{tr}\equiv c_1\circ {\pi_1}+c_2\circ {\pi_2}. \label{trace equation}$$ Here $\pi_j:B\rightarrow A$ gives the $b_j$-coordinate of $b\in B$, $j=1,2$. Set $c_1=b_1-\pi_1(b_1^2)$ and $c_2=-\pi_1(b_1b_2)$. Evaluating both sides of (\[trace equation\]) at $1,b_1$ and $b_2$ yields the same result, hence the identity is established, and $\sigma|_{\widehat{\phi}^*({\mathcal}V{{\widehat{\phantom{n}}}})} :{\widehat{\phi}^*({\mathcal}V{{\widehat{\phantom{n}}}})} \rightarrow \omega_{\widehat X/\widehat Y}$ is surjective. We have shown that the composition $\rho\circ \psi= i:\widehat{X}\dashrightarrow {\mathbf P}V$ is a regular map on $\widehat{X}$. Alternatively, one could employ the geometric interpretation of the trace map. A [*general*]{} point $p\in {\widehat Y}$ has three preimages $q,r,s$ in $\widehat X$, each of which defines canonically a distinct point $[\Lambda_q],[\Lambda_r]$ or $[\Lambda_s]$ in the fiber of ${{\mathbf P}}(\widehat{\phi}_*{{\mathcal}O}_{\widehat X})$. As we pointed out above, the projection $\pi_{[\Lambda]} :{{\mathbf P}}^2\dashrightarrow {{\mathbf P}}^1$ is well-defined at $[\Lambda_q],[\Lambda_r]$ and $[\Lambda_s]$. But $\pi_{[\Lambda]}$ is precisely the fiberwise restriction of ${\mathbf P}({\widehat{\phi}}_* {{\mathcal}O}_{\widehat X}) \stackrel{\rho}{\dashrightarrow} {\mathbf P}V$, which shows again that the composition $i=\rho\circ\psi:\widehat{X}\dashrightarrow {\mathbf P}V$ is regular on an open set of $\widehat X$. One makes the necessary modifications in the cases of fewer preimages of $p$ in $\widehat{X}$. Finally, one can show, using similar methods (either algebraically or geometrically), that the map $i$ is also an embedding.
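Identity (\[trace equation\]) can be tested on a concrete triple cover. In the sketch below (SymPy), the ring $B=\mathbb{Q}[x]/(x^3-x)$ over $A=\mathbb{Q}$ and the basis $b_1=x$, $b_2=x^2-\operatorname{tr}(x^2)$ are hypothetical choices, not taken from the paper; both sides of the identity are evaluated on the basis $1,b_1,b_2$:

```python
import sympy as sp

x = sp.symbols('x')
mod = x**3 - x      # hypothetical B = Q[x]/(x^3 - x), free over A = Q

def tr(f):
    # tr(f) = (1/3)(f(0) + f(1) + f(-1)): average over the three points of Spec B
    return sp.Rational(1, 3) * sum(f.subs(x, a) for a in (0, 1, -1))

one, b1 = sp.Integer(1), x
b2 = x**2 - tr(x**2)                  # adjusted so that tr(b1) = tr(b2) = 0
assert tr(b1) == 0 and tr(b2) == 0

def coords(f):
    # coordinates of f in the A-basis {1, b1, b2} of B
    r = sp.rem(sp.expand(f), mod, x)
    a2 = r.coeff(x, 2)
    r = sp.expand(r - a2*b2)
    return (r.coeff(x, 0), r.coeff(x, 1), a2)

pi1 = lambda f: coords(f)[1]          # the b1-coordinate
pi2 = lambda f: coords(f)[2]          # the b2-coordinate

c1 = b1 - pi1(b1**2)
c2 = -pi1(b1*b2)
# tr = c1 o pi1 + c2 o pi2, where (b o f)(z) = f(b z), checked on the basis:
for z in (one, b1, b2):
    assert tr(z) == pi1(c1*z) + pi2(c2*z)
```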
[**Remark 6.1.**]{} Since the general fiber $C$ of $\widehat{X}$ is a smooth trigonal curve, the restriction $i|_{\displaystyle{C}}$ embeds $C$ in a ruled surface ${\mathbf F}_k$ over the corresponding fiber $F_{\widehat{Y}}= {\mathbf P}^1$ of $\widehat{Y}$, where ${\mathbf F}_k={\mathbf P}(V|_{F_{\widehat{Y}}})$. In Section \[Maroniinvariant\] we will describe how the ruled surface ${\mathbf F}_k$ varies as the fiber $C$ varies in $\widehat{X}$. 7. Global Calculation on a Triple Cover $X\rightarrow Y$ {#global-calculation-on-a-triple-cover-xrightarrow-y .unnumbered} ======================================================== \[triplecover\] In this section we consider the simplest case of effective covers, namely, when the original family $X$ is itself a triple cover of a [*ruled surface*]{} $Y$ over the base curve $B$. This happens exactly when all fibers of $X$ are irreducible, and the linear system of $g^1_3$’s on the smooth fibers extends over the singular fibers to base-point free line bundles of degree 3 with two linearly independent sections. As we saw in Section \[quasi-admissible\], we can patch together all these $g^1_3$’s in a line bundle ${{\mathcal}L}$ on the total space of $X$. Thus, $X$ will map to ${{\mathbf P}}(H^0(X,{{\mathcal}L})^{\widehat{\phantom{n}}})$ via $\phi_{{\mathcal}L}$, and this map will factor through our ruled surface $Y$ over $B$: $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=triple.ps,width=2.7in,height=1.31in}} \hspace{-1mm}\end{array}}$$ Equivalently, we can describe such a family $X\rightarrow B$ via the classification of the boundary components of the trigonal locus in Section 4: in $\overline{\mathfrak T}_g$ the base curve $B$ meets only the boundary component $\Delta{\mathfrak{T}}_0$ of irreducible curves ($\delta_0|_B>0$), and there are no hyperelliptic fibers in $X$ ($B\cap \overline{\mathfrak I}_g=\emptyset$). 
Global versus local calculation {#global} ------------------------------- As it turns out, the calculation of the slope $\delta_X /\lambda_X$ in this basic case yields the actual maximal bound $\frac{36(g+1)}{5g+1}$: any addition of singular fibers belonging to other boundary components of $\overline{\mathfrak T}_g$ will only decrease the ratio. Henceforth, we distinguish between two types of calculation: [*global*]{} and [*local*]{}. The [*global*]{} calculation refers to finding the coefficients of $\delta_0$ and the Hodge bundle $\lambda|_{\overline{\mathfrak T}_g}$ in a relation in ${\operatorname}{Pic}_{\mathbb Q}\overline{\mathfrak T}_g$ involving [*all*]{} boundary classes. The [*local*]{} calculation provides the remaining coefficients by considering [*local invariants*]{} of each individual boundary class (cf. Sect. 8). The basic construction {#basic} ---------------------- For the remainder of this section, we consider a family $X\rightarrow B$ such that, as above, $X$ is a triple cover of the corresponding ruled surface $Y$, and we carry out the proposed global calculation. 0.11in (10,9)(25,21) (26.5,24)[$X$]{} (28.4,24.2)[(1,0)[3.1]{}]{} (32,24)[$Y$]{} (29,20)[$B$]{} (27.3,23.5)[(1,-1)[2]{}]{} (32.3,23.5)[(-1,-1)[2]{}]{} (26.6,22)[$f$]{} (31.75,22)[$h$]{} (29.45,24.7)[$\phi$]{} (27.3,25.4)[(1,1)[2]{}]{} (30.3,27.4)[(1,-1)[2]{}]{} (29,27.8)[${{\mathbf P}}V$]{} (32,26.4)[$\pi$]{} (27.4,26.4)[$i$]{} Recall that the pushforward of the structure sheaf ${{\mathcal}O}_X$ to $Y$ is a locally free sheaf of rank $3$. In the exact sequence: $$0\rightarrow E\rightarrow {\phi}_*{{\mathcal}O}_X\stackrel {{\operatorname}{tr}}{\rightarrow}{{\mathcal}O}_Y\rightarrow 0,$$ the kernel of the trace map ${\operatorname}{tr}$ is a vector bundle $E$ on $Y$ of rank $2$, and $X$ naturally embeds in the rank 1 projective bundle ${{\mathbf P}}V$ over $Y$, where $V=E\,\widehat{\phantom{n}}$.
Any rank 2 vector bundle $E$ has the same projectivization as its dual bundle $V$ since $E\cong \bigwedge^2E\otimes V$, where $\bigwedge^2E$ is a line bundle. For easier notation, in the trigonal case we use the dual $V$ instead of $E$ from Section \[embedding\]. A basis for ${\operatorname}{Pic}Y$ can be chosen by letting $F_Y$ be the fiber of $Y$, and $B^{\prime}$ be any section of $Y\rightarrow B$. Hence ${\operatorname}{Pic}Y={\mathbb Z}B^{\prime}\oplus {\mathbb Z}F_Y$. We normalize by replacing $B^{\prime}$ with the ${\mathbb Q}$-linear combination $B_0=B^{\prime}-\displaystyle{\frac{(B^{\prime})^2}{2}}{F_Y}$, and provide a $\mathbb{Q}$-basis for ${\operatorname}{Pic}_{\mathbb Q}Y$: $${\operatorname}{Pic}_{\mathbb Q}Y={\mathbb Q}B_0\oplus {\mathbb Q}F_Y\,\,\,{\operatorname}{with}\,\,\, B_0^2=F_Y^2=0\,\,\,{\operatorname}{and}\,\,\,B_0\cdot F_Y=1. \label{normalize}$$ Let $\zeta$ denote the class of the hyperplane line bundle ${{\mathcal}O}_{{{\mathbf P}}V}(+1)$ on ${{\mathbf P}}V$, and let $c_1(V)$ and $c_2(V)$ be the Chern classes of $V$ on $Y$. The Chow ring $A({{\mathbf P}}V)$ is generated as a $\pi^*(A(Y))$-algebra by $\zeta$ with the only relation: $$\zeta^2+\pi^*c_1(V)\zeta+\pi^*c_2(V)=0. \label{zeta-relation}$$ In particular, for the Picard groups: $${\operatorname}{Pic}_{\mathbb Q}({{\mathbf P}}V)=\pi^*({\operatorname}{Pic} _{\mathbb Q}Y)\oplus {\mathbb Q}\zeta. \label{Q-basis}$$ As a divisor on ${{\mathbf P}}V$, the surface $X$ meets the fiber $F_{\pi}$ of $\pi$ generically in three points ($X$ maps three-to-one onto $Y$). Thus in the Chow ring $A({{\mathbf P}}V)$ we have $[X]\cdot [F_{\pi}]=3$, which simply means that $X$ can be expressed as $$X\sim 3\zeta + \pi^*D$$ for some divisor $D$ on $Y$ (see (\[Q-basis\])). With respect to the chosen basis for ${\operatorname}{Pic}_{\mathbb Q}Y$: $$D\sim aB_0+bF_Y\,\,{\operatorname}{and}\,\, c_1(V)\sim cB_0+dF_Y \label{define D,c1(V)}$$ for some $a,b,c,d\in {\mathbb Z}$.
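The normalization (\[normalize\]) amounts to a two-line computation with the intersection form; as an independent sanity check (a sketch in Python with sympy, not part of the original argument, where the symbol $n$ stands for the arbitrary self-intersection $(B^{\prime})^2$):

```python
import sympy as sp

n = sp.symbols('n')  # n = (B')^2, the self-intersection of the chosen section

def dot(u, v):
    # Intersection form on Pic_Q(Y) in the basis (B', F_Y):
    # (B')^2 = n, B'.F_Y = 1, F_Y^2 = 0.
    return u[0]*v[0]*n + u[0]*v[1] + u[1]*v[0]

B_prime = (1, 0)
F_Y = (0, 1)
B0 = (1, -n/2)  # B_0 = B' - ((B')^2 / 2) F_Y

assert sp.simplify(dot(B0, B0)) == 0    # B_0^2 = 0
assert sp.simplify(dot(B0, F_Y)) == 1   # B_0 . F_Y = 1
assert sp.simplify(dot(F_Y, F_Y)) == 0  # F_Y^2 = 0
```

The check confirms that the basis $(B_0,F_Y)$ has the stated intersection numbers regardless of which section $B^{\prime}$ was chosen.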
Note that $\deg(D|_{B_0})=b$ and $\deg(c_1(V)|_{B_0})=d$. Relation among the divisor classes $D$ and $c_1(V)$ {#relation} --------------------------------------------------- It is evident that the divisors $D$ and $c_1(V)$ cannot be independent on the ruled surface $Y$, since both are canonically determined by the surface $X$. The relation is in fact quite straightforward. With the above notation for the triple cover $\phi:X\rightarrow Y$, we have $D=2c_1(V)$ in ${\operatorname}{Pic}Y$. \[D=2c\_1(V)\] We start with the standard exact sequence for the divisor $X$ on ${{\mathbf P}}V$: $$0\rightarrow {\mathcal}{O}_{{{\mathbf P}}V}(-X)\rightarrow {\mathcal}{O}_{{{\mathbf P}}V} \rightarrow {\mathcal}{O}_X\rightarrow 0. \label{X-divisorsequence}$$ Pushing forward to $Y$ yields: $$0 \!\rightarrow \!\pi_*{\mathcal}{O}_{{{\mathbf P}}V}(-X) \!\rightarrow\! \pi_*{\mathcal}{O}_{{{\mathbf P}}V}\!\rightarrow\! \pi_*{\mathcal}{O}_X \!\rightarrow \! R^1\pi_*{\mathcal}{O}_{{{\mathbf P}}V}(-X) \!\rightarrow \!R^1 \pi_*{\mathcal}{O}_{{{\mathbf P}}V} \!\rightarrow \cdots \label{pushforward}$$ It is easy to show that $R^1\pi_*{\mathcal}{O}_{{{\mathbf P}}V}=0$ and $\pi_*{\mathcal}{O}_{{{\mathbf P}}V}(-X)=0$. This follows from Grauert’s theorem [@Hartshorne]: $h^1({\mathcal}{O}_{{{\mathbf P}}V}|_{F_{\pi}})=h^1({\mathcal}O_{{{\mathbf P}}^1})=0,$ and $$h^0({\mathcal}{O}_{{{\mathbf P}}V}(-X)|_{F_{\pi}})=h^0({\mathcal}{O}_{{{\mathbf P}}V} (-3\zeta-\pi^*D)|_{F_{\pi}})=h^0({\mathcal}{O}_{{{\mathbf P}}^1}(-3))=0.$$ Furthermore, $\pi_*{\mathcal}{O}_{{{\mathbf P}}V}= {\mathcal}{O}_Y$ and $\pi_*{\mathcal}{O}_X=\phi_*{\mathcal}{O}_X$, so that (\[pushforward\]) becomes $$0\rightarrow {\mathcal}{O}_Y\rightarrow \phi_*{\mathcal}{O}_X\rightarrow R^1\pi_*{\mathcal}{O}_{{{\mathbf P}}V}(-X)\rightarrow 0. \label{remainingnonzero}$$ From relative Serre-duality, and using the first Chern class of the relative dualizing sheaf, $c_1(\omega_{\pi})$ (cf.
(\[omega-pi\])): $$R^1\pi_*{\mathcal}{O}_{{{\mathbf P}}V}(-X) \cong \big(\pi_*(\omega_{\pi}\otimes {\mathcal}{O}_{{{\mathbf P}}V}(X))\big)\widehat{\phantom{t}}= \big(\pi_*{\mathcal}{O}_{{{\mathbf P}}V}(\zeta+\pi^*D-\pi^*c_1(V))\big) \widehat{\phantom{t}}.$$ Since $\pi_*{\mathcal}{O}_{{{\mathbf P}}V}(\zeta)=V\widehat{\phantom{t}}$ (cf.   [@BPV]), $$R^1\pi_*{\mathcal}{O}_{{{\mathbf P}}V}(-X) \cong \big[V\widehat{\phantom{t}} \otimes {\mathcal}{O}_Y(D-c_1(V))\big]\widehat{\phantom{t}}. \label{tranformedsequence}$$ Finally, combining (\[tranformedsequence\]) with (\[remainingnonzero\]) and $\phi_*{\mathcal}{O}_X/{\mathcal}{O}_Y=V\widehat{\phantom{t}}$, we arrive at $$V\cong V\widehat{\phantom{t}}\otimes {\mathcal}{O}_Y(D-c_1(V))\,\,\Rightarrow\,\, {\mathcal}{O}_Y(D-c_1(V))\cong \bigwedge ^2 V\cong{\mathcal}{O}_Y(c_1(V)),$$ and hence $D=2c_1(V)$ in ${\operatorname}{Pic}Y. \,$ Global calculation of $\lambda_X$ and $\kappa_X$. {#globalcalculation} ------------------------------------------------- In the following proposition we express $\lambda_X$ and $\kappa_X$ in terms of $\deg(c_1(V)|_{B_0})=d$ and the Chern class polynomial $c_1^2(V)-4c_2(V)$, which can both be made independent of the choice of the vector bundle $V$. Indeed, if we twist $V$ by a line bundle $M$ on $Y$ and set $V^{\prime}=V\otimes M$, then $$c_1(V^{\prime})=c_1(V)+2c_1(M),\,\, c_2(V^{\prime})=c_2(V)+c_1(V)c_1(M)+c_1(M)^2,$$ $$\Rightarrow\,\,c_1(V^{\prime})^2-4c_2(V^{\prime})=c_1(V)^2-4c_2(V).$$ Recall the notation of (\[define D,c1(V)\]). In order to make $d$ also invariant, we use $b=2d$ from Lemma \[D=2c\_1(V)\] and rewrite $d$ in the invariant form $d=2b-3d$. Now, if we replace ${{\mathbf P}}V$ with its isomorphic ${{\mathbf P}}V^{\prime}$ (cf. Fig. \[invariance\]), and set $\zeta^{\prime}=i^* \zeta\otimes(\pi^{\prime})^*M^{-1}$ to be the new hyperplane bundle, then in ${\operatorname}{Pic}({{\mathbf P}}V)$: $X\sim 3\zeta^{\prime}+(\pi^{\prime})^*D^{\prime}$ with $D^{\prime}\sim D+3c_1(M)$.
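The twist-invariance statements above are purely polynomial identities in the Chern classes, so they can be double-checked by treating the classes as commuting symbols (a quick sketch in Python with sympy, not part of the original argument; the symbol $m$ plays the role of $c_1(M)$):

```python
import sympy as sp

c1, c2, D, m = sp.symbols('c1 c2 D m')  # c1(V), c2(V), the class D, and m = c1(M)

# Chern classes after the twist V' = V (x) M:
c1p = c1 + 2*m
c2p = c2 + c1*m + m**2

# The Chern class polynomial c1^2 - 4 c2 is unchanged by the twist:
assert sp.expand((c1p**2 - 4*c2p) - (c1**2 - 4*c2)) == 0

# With D' = D + 3 c1(M), the combination 2D - 3 c1(V) is unchanged as well,
# so its degrees on B_0 satisfy 2b' - 3d' = 2b - 3d:
Dp = D + 3*m
assert sp.expand((2*Dp - 3*c1p) - (2*D - 3*c1)) == 0
```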
Hence $$2D^{\prime}-3c_1(V^{\prime})=2D+6c_1(M)-3c_1(V)-6c_1(M)=2D-3c_1(V),$$ and equating their degrees on $B_0$, we obtain $2b^{\prime}-3d^{\prime}=2b-3d$. 0.11in (10,7)(25,20.5) (26,24)[${{\mathbf P}}V^{\prime}$]{} (28.4,24.2)[(1,0)[3.1]{}]{} (32,24)[${{\mathbf P}}V$]{} (29,20)[$Y$]{} (27.3,23.5)[(1,-1)[2]{}]{} (32.3,23.5)[(-1,-1)[2]{}]{} (26.5,22)[$\pi^{\prime}$]{} (31.75,22)[$\pi$]{} (29.45,24.7)[$i$]{} In other words, the following formulas for $\lambda_X$ and $\kappa_X$ would be valid for any vector bundle $V^{\prime}$ in place of the canonically defined $V$, as long as the diagram of the basic construction (cf. Fig. \[basicconstruction\]) is satisfied, and as long as we adjust the degree $d=\deg(c_1(V)|_{B_0})$ by its invariant form $2b-3d= 2\deg(D|_{B_0})-3\deg(c_1(V)|_{B_0})$. Let $\phi:X\rightarrow Y$ be a degree 3 map from the original family $X$ of trigonal curves to the ruled surface $Y$ over $B$. The invariants $\lambda_X$ and $\kappa_X$ are given by the formulas: $$\begin{aligned} \lambda_X&=&\displaystyle{\frac{g}{2}\deg\big(c_1(V)|_{B_0}\big)+\frac{1}{4} \big(c_1(V)^2-4c_2(V)\big)},\\ \kappa_X&=&\displaystyle{\frac{5g-6}{2}\deg\big(c_1(V)|_{B_0}\big)+\frac{3}{4} \big(c_1(V)^2-4c_2(V)\big)}.\end{aligned}$$ \[lambda\_X,kappa\_X\] We defer the proof of Prop. \[lambda\_X,kappa\_X\] to Subsections 7.4.2-3. ### Notation and Basic Tools. {#notation} The proof of Prop. \[lambda\_X,kappa\_X\] consists of two calculations in the Chow ring of ${{\mathbf P}}V$; one uses versions of the Riemann-Roch theorem on $X$ and ${{\mathbf P}}V$, and the other uses the adjunction formula on ${{\mathbf P}}V$ for the divisor $X$. Here we discuss these statements and set up the necessary notation. In order to work in ${\mathbb A}({{{\mathbf P}}}V)$, we express the Chern classes of ${{\mathbf P}}V$ in terms of the hyperplane class $\zeta$ and the Chern classes of $Y$. We first recall that $\pi_*{\mathcal}{O}_{{{\mathbf P}}V}(+1)\cong V\widehat{\phantom{t}}$.
In the Euler sequence on ${{{\mathbf P}}V}$: $$0\rightarrow {\mathcal}{O}_{{{\mathbf P}}V} \rightarrow {\mathcal}{O}_{{{\mathbf P}}V}(+1)\otimes \pi^*V\rightarrow {\mathcal}{T}_{\pi} \rightarrow 0, \label{Eulersequence}$$ we compare the Chern polynomials $c_t({\mathcal}{O}_{{{\mathbf P}}V}(+1)\otimes \pi^*V)=c_t({\mathcal}{T}_{\pi})$, and obtain: $$\begin{aligned} K_{{{\mathbf P}}V}&\!\!\!=&\!\!\!\!-2\zeta-\pi^*c_1(V)+\pi^*K_Y, \label{K_PV}\\ c_1(\omega_{\pi})&\!\!\!=&\!\!\!\!-2\zeta-\pi^*c_1(V), \label{omega-pi}\label{omega_pi}\\ c_2({{\mathbf P}}V)&\!\!\!=&\!\!\!\!-2\zeta\pi^*K_Y+\pi^*\big(c_2(Y)-c_1(V)K_Y\big). \label{c_2(PV)}\end{aligned}$$ Here ${\mathcal}{T}_{\pi}$ and $\omega_{\pi}$ are, respectively, the relative tangent and the relative dualizing sheaves of $\pi$, while $K_{{{\mathbf P}}V}$ is the class of the canonical sheaf on ${{\mathbf P}}V$. On the ruled surface $Y$ over the curve $B$ of genus $g_{\scriptscriptstyle B}$ we similarly have $$\begin{aligned} \hspace{9mm}K_Y&\!\!\!=&\!\!\!\!-2B_0+h^*(K_B) \equiv -2B_0+(2g_{\scriptscriptstyle B}-2)F_Y \label{K_Yglobal},\\ \hspace{9mm}c_2(Y)&\!\!\!=&\!\!4(1-g_{\scriptscriptstyle B}). \label{c_2(Y)global}\end{aligned}$$ Now let $C$ be the general fiber of $X$, i.e. a smooth trigonal curve of genus $g$. Assuming the Basic construction for the triple cover $X\rightarrow Y$ (cf. Fig. \[basicconstruction\]), we have the following lemmas. If $\chi({\mathcal}{E})$ denotes the holomorphic Euler characteristic of any sheaf ${\mathcal}{E}$, then the invariant $\lambda_X$ is expressible as $\lambda_X=\chi({{\mathcal}O}_X)-\chi({{\mathcal}O}_B)\cdot \chi({{\mathcal}O}_C).$ \[Euler\] [*Proof.*]{} From the Grothendieck-Riemann-Roch theorem for the map $f:X\rightarrow B$, $${\operatorname}{ch}(f_{!}{\mathcal}{O}_X).{\operatorname}{td}{\mathcal}{T}_B= f_*({\operatorname}{ch}{\mathcal}{O}_X.{\operatorname}{td}{\mathcal}{T}_X),$$ where ${\mathcal}{T}_X$ and ${\mathcal}{T}_B$ are the corresponding tangent sheaves.
Since the fibers of $f$ are one-dimensional, $f_{!}{\mathcal}{O}_X= f_*{\mathcal}{O}_X-R^1\!f_*{\mathcal}{O}_X={\mathcal}{O}_B-(f_*\omega_f)\widehat{\phantom{t}}$. Substituting: $$\big(1-g+c_1(f_*\omega_f)\big).\big(1-\frac{1}{2}K_B\big)= f_*\big(1-\frac{1}{2}K_X+\frac{1}{12}(K^2_X+c_2(X))\big),$$ $$\Rightarrow\,\, c_1(f_*\omega_f)=\frac{1}{12}f_*(K^2_X+c_2(X))-\frac{g-1}{2}K_B,$$ $$\,\,\,\,\,\,\,\,\,\,\Rightarrow\,\, \lambda_X={\operatorname}{deg}(f_*\omega_f)=\chi({\mathcal}{O}_X)-\chi({\mathcal}{O}_B)\cdot \chi({\mathcal}{O}_C).\,\,\,\qed$$ Note the similarity between this formula and the formula for $\delta_B$ in Example 2.1. Both quantities are expressed as differences of the Euler characteristic (holomorphic or topological) on the total space of $X$ and the product of the corresponding Euler characteristics on the base $B$ and the general fiber $C$. Lemma \[Euler\] suggests that in order to calculate $\lambda_X$, we must have control over $\chi({\mathcal}{O}_X)$. In the Chow ring of ${{\mathbf P}}V$: $$\chi({{\mathcal}O}_X)=\frac{1}{12}X\big[\big(X+K_{{{\mathbf P}}V}\big) \big(2X+K_{{{\mathbf P}}V}\big)+ c_2({{\mathbf P}}V)\big]$$ \[holomorphicEuler\] From the standard exact sequence (\[X-divisorsequence\]) for the divisor $X$ on ${{\mathbf P}}V$ we have $\chi({{\mathcal}O}_X)=\chi({{\mathcal}O}_{{{\mathbf P}}V})-\chi({{\mathcal}O}_{\mathbf {P}V}(-X))$. On the other hand, Hirzebruch-Riemann-Roch claims that for any sheaf ${\mathcal}E$ on ${{\mathbf P}}V$: $\chi({\mathcal}{E})={\operatorname}{deg}\big({\operatorname}{ch}({\mathcal}{E}).{\operatorname}{td}{\mathcal}{T}_{{{\mathbf P}}V} \big)_3$. Applying this to the line bundles ${\mathcal}{O}_{{{\mathbf P}}V}$ and ${{\mathcal}O}_{{{\mathbf P}}V}(-X)$, and subtracting the results completes the proof of the lemma. 
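Since all the classes involved commute in the Chow ring, the identity of Lemma \[holomorphicEuler\] is a purely polynomial consequence of Hirzebruch-Riemann-Roch applied to ${\mathcal}{O}_{{{\mathbf P}}V}$ and ${\mathcal}{O}_{{{\mathbf P}}V}(-X)$, and can be sanity-checked symbolically (a sketch in Python with sympy, not part of the original proof; the symbols stand for the classes $X$, $K_{{{\mathbf P}}V}$ and $c_2({{\mathbf P}}V)$):

```python
import sympy as sp

X, K, c2 = sp.symbols('X K c2')  # commuting stand-ins for the Chow ring classes

def chi_of_divisor(D):
    # Degree-3 part of ch(O(D)).td(T) on a smooth threefold with K = -c1(T),
    # omitting the common constant term chi(O), which cancels in the difference below:
    return D**3/6 - D**2*K/4 + D*(K**2 + c2)/12

# chi(O_X) = chi(O_PV) - chi(O_PV(-X)):
lhs = chi_of_divisor(sp.Integer(0)) - chi_of_divisor(-X)
rhs = sp.Rational(1, 12) * X * ((X + K)*(2*X + K) + c2)

assert sp.expand(lhs - rhs) == 0
```

Expanding both sides gives $\frac{1}{6}X^3+\frac{1}{4}X^2K+\frac{1}{12}X(K^2+c_2)$, as claimed.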
The reader may have noticed that all quantities discussed in the above lemmas are elements of the third graded piece ${\mathbb A}^3({{\mathbf P}}V)\otimes \mathbb{Q}$ of the Chow ring ${\mathbb A}({{\mathbf P}}V)\otimes \mathbb{Q}$. Hence they are cubic polynomials in the class $\zeta$, whose coefficients are appropriate products of pull-backs from ${\mathbb A}(Y)\otimes \mathbb{Q}$. The higher powers $\zeta^3$ and $\zeta^2$ can be reduced using the basic relation (\[zeta-relation\]), while $\zeta$ itself can be altogether eliminated by noticing that for any $\vartheta\in {\mathbb A}^2(Y)$: $$\zeta.\pi^*({\operatorname}{point})=\zeta.F_{\pi}=1\,\, \Rightarrow\,\,\zeta . \pi^*\vartheta={\operatorname}{deg}\vartheta. \label{trivial}$$ It is also useful to remember the trivial fact that for any divisors $D_i$ on $Y$, ${\operatorname}{dim}Y=2$ implies $D_1.D_2.D_3=0=D_i.c_2(V)$. $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=intersection.ps,width=1.8in,height=1.8in}} \hspace{-1mm}\end{array}}$$ The canonical bundle $\omega_Z$ of a smooth divisor $Z$ on the smooth variety $T$ can be expressed as $\omega_Z\cong \omega_T\otimes {\mathcal}{O}_T(Z)\otimes {\mathcal}{O}_Z$. Consequently, $$K_X^2=\big(K_{{{\mathbf P}}V}+X\big)^2X\,\,\,{\operatorname}{and}\,\,\, g+2={\operatorname}{deg}c_1(V)|_{F_Y}.$$ \[adjunction\] [*Proof.*]{} For the general statement of the adjunction formula see [@Hartshorne]. The expression for $K_X^2$ is a straightforward application to the divisor $X$ on ${{\mathbf P}}V$: $K_X=\big(K_{{{\mathbf P}}V}+X\big)\big|_X$, which is then squared in ${\mathbb A}({{\mathbf P}}V)$. As for the genus $g$ of the general member $C$ of our family, we consider a general fiber $F_Y$ of $Y$ (cf. Fig. \[intersection\]). Its pullback $\pi^*F_Y$ is a rational ruled surface $\mathbf F$ over $F_Y$, embedded in the 3-fold ${{\mathbf P}}V$. The intersection of $\mathbf F$ with the surface $X$ is the trigonal fiber $C=X\cdot\pi^*F_Y= (3\zeta+2\pi^*c_1(V))|_{\pi^*F_Y}$.
From the adjunction formulas for $C\subset \pi^*F_Y$ and $\pi^*F_Y\subset {{\mathbf P}}V$: $$\begin{aligned} 2g-2&\!\!=&\!\!(K_{\pi^*F_Y}+C)\cdot C=\big((K_{{{\mathbf P}}V}+\pi^*F_Y)\big|_{\pi^*F_Y} +X\big|_{\pi^*F_Y}\big)\cdot X\big|_{\pi^*F_Y}\\ &\!\!=&\!\!\big(\zeta+\pi^*c_1(V)+\pi^*K_Y+\pi^*F_Y\big)\big( 3\zeta+2\pi^*c_1(V)\big)\cdot \pi^*F_Y\\ &\!\!=&\!\!(2c_1(V)+3K_Y)\cdot F_Y=2{\operatorname}{deg}c_1(V)\big|_{F_Y}-6. \,\,\,\qed\end{aligned}$$ ### Global Calculation of $\lambda_X$. {#globallambda} We substitute in Lemma \[holomorphicEuler\] the expressions (\[K\_PV\]–\[c\_2(PV)\]) for $X,\,\, K_{{{\mathbf P}}V}$ and $c_2({{\mathbf P}}V)$, as well as the identity $D=2c_1(V)$: $$\chi({\mathcal}{O}_X)=\frac{3\zeta+2\pi^*c_1(V)}{12}\big[ \big(\zeta+\pi^*c_1(V)+\pi^*K_Y\big) \big(4\zeta+3\pi^*c_1(V)+\pi^*K_Y\big)$$ $$-2\zeta\pi^*K_Y-\pi^* c_1(V)\pi^*K_Y+\pi^*c_2(Y)\big].$$ Applying the necessary reductions, we arrive at: $$\chi({{\mathcal}O}_X)=\frac{1}{2}\big(c_1^2(V)-2c_2(V)\big)+\frac{1}{2}c_1(V)K_Y+ \frac{1}{4}\big(K_Y^2+c_2(Y)\big).$$ We expect our formula for $\lambda_X$ to be independent of the base curve $B$. The contribution of $g_{\scriptscriptstyle B}$ in $\chi({\mathcal}{O}_X)$ can be written as: $(g_{\scriptscriptstyle B}-1){\operatorname}{deg}c_1(V)\big|_{F_Y}+3\chi({\mathcal}{O}_Y)= (g_{\scriptscriptstyle B}-1)(g-1)$, but this is precisely the adjustment $\chi({\mathcal}{O}_B)\chi({\mathcal}{O}_C)$ given in Lemma \[Euler\]. Thus, $$\lambda_X=\frac{1}{2}\big(c_1^2(V)-2c_2(V)\big)-{\operatorname}{deg}c_1(V)\big|_{B_0}.$$ It remains to notice that $c_1^2(V)=2{\operatorname}{deg}c_1(V)|_{F_Y} {\operatorname}{deg}c_1(V)|_{B_0}=2(g+2){\operatorname}{deg}c_1(V)|_{B_0}$ and rewrite $\lambda_X$ in the form $$\lambda_X=\frac{1}{4}\big (c_1^2(V)-4c_2(V)\big)+\frac{g}{2}{\operatorname}{deg}c_1(V)\big|_{B_0}.\,\,\,\qed$$ ### Global Calculation of $\kappa_X$.
{#globalkappa} Since $\omega_f= \omega_X\otimes f^*\omega_B^{-1}$, $$\kappa_X=(K_X-f^*K_B)^2=K_X^2-8(g_{\scriptscriptstyle B}-1)(g-1). \label{kappa}$$ From Lemma \[adjunction\] we calculate $$\begin{aligned} K_X^2&\!\!=\!\!\!&(K_{{{\mathbf P}}V}+X)^2X=\big(\zeta+\pi^*c_1(V)+\pi^*K_Y\big)^2 (3\zeta+2\pi^*c_1(V))\\ &\!\!=\!\!\!\!&2c_1^2(V)-3c_2(V)+4c_1(V)K_Y+3K_Y^2.\end{aligned}$$ We calculate the contribution of $g_{\scriptscriptstyle B}$ in $K_X^2$: $8(g_{\scriptscriptstyle B}-1){\operatorname}{deg}c_1(V)|_{F_Y}+ 24(1-g_{\scriptscriptstyle B})=8(g_{\scriptscriptstyle B}-1)(g-1)$, which is exactly the necessary adjustment for $\kappa_X$ in (\[kappa\]). Therefore, $$\begin{aligned} \kappa_X&\!\!\!=\!\!\!&2c_1^2(V)-3c_2(V)-8{\operatorname}{deg}c_1(V)\big|_{B_0}\\ &\!\!\!=\!\!\!&\frac{3}{4}\big(c_1^2(V)-4c_2(V)\big)+\frac{5}{2} {\operatorname}{deg}c_1(V)\big|_{B_0}{\operatorname}{deg}c_1(V)\big|_{F_Y}-8{\operatorname}{deg}c_1(V)\big|_{B_0}\\ &\!\!\!=\!\!\!&\frac{3}{4}(c_1(V)^2-4c_2(V))+\frac{5g-6}{2} {\operatorname}{deg}c_1(V)\big|_{B_0}.\,\,\,\qed\end{aligned}$$ Index theorem on the surface $X$. {#indextheorem} --------------------------------- Now that we have completed the proof of Prop. \[lambda\_X,kappa\_X\], we notice that any bound on the ratio $\delta_X/\lambda_X$ would be equivalent to some inequality involving the genus $g$ and the two invariants discussed earlier: ${\operatorname}{deg}c_1(V)|_{B_0}$ and the quantity $c_1(V)^2\!-\!4c_2(V)$. This inequality should be a fairly general one, since the only relevant information in our situation is that $X$ is a triple cover of a ruled surface $Y$. One way of obtaining such general inequalities in ${\mathbb A}^2(X)\otimes \mathbb{Q}$ is via the Hodge Index Theorem: Let $H$ be an ample divisor on the smooth surface $X$, and let $\eta$ be a divisor on $X$, not numerically equivalent to $0$. If $\eta \cdot H=0$, then $\eta^2<0$.
\[Index\] The question here, of course, is how to find suitable divisors $H$ and $\eta$ that would yield our result for the maximal slope bound. For that, we make use of the triple cover $\phi:X\rightarrow Y$. If $H$ is any [*ample*]{} divisor on $Y$, then its pullback $\phi^*H$ to $X$ is also ample. This follows from the Nakai-Moishezon criterion: A divisor $A$ on the smooth surface $X$ is ample if and only if $A^2>0$ and $A\cdot C>0$ for all irreducible curves $C$ in $X$. \[Nakai\] Since $H$ is ample itself, $(\phi^*H)^2=3H^2>0$ and $(\phi^*H)\!\cdot\! C=H\!\cdot\!\phi_*(C)>0$ for any curve $C$ on $X$, so that $\phi^*H$ is also ample on $X$. Now, if we find a divisor $\eta$ on $X$ such that $\eta\cdot\phi^*{\operatorname}{Pic} Y=0$, we will have ensured that $\eta\cdot\phi^*H=0$, and then the Index theorem will assert $\eta^2\leq 0$. As $X$ is a divisor itself on ${{\mathbf P}}V$, its Picard group naturally contains the restriction of ${\operatorname}{Pic}{{\mathbf P}}V$ to $X$. We look for $\eta$ inside this subgroup, and for our purposes we may write it in the form $\eta=\big(\zeta+\pi^*C_1\big)\big|_X$ for some $C_1\in {\operatorname}{Pic}_{\mathbb Q}Y$. Let $C$ be any divisor class in ${\operatorname}{Pic}_{\mathbb {Q}}Y$. We compute $$\eta\cdot \phi^*C=\big(\zeta+\pi^*C_1\big)\big(3\zeta+2\pi^*c_1(V)\big) \pi^*C=C(3C_1-c_1(V)).$$ We want this to be zero for all $C$, so we naturally take $C_1={\displaystyle\frac{1}{3}}c_1(V)\in{\operatorname}{Pic}_{\mathbb {Q}}Y$. We summarize the above discussion in the following proposition: The divisor class $\eta=\big(\zeta+\frac{1}{3}\pi^*c_1(V)\big)\big|_X$ on $X$ satisfies $\eta\cdot \phi^*{\operatorname}{Pic}(Y)=0$. In particular, for an ample divisor $H$ on $Y$, the pullback $\phi^*H$ is also ample on $X$ and $\eta \cdot \phi^*H=0$. Consequently, $\eta^2\leq 0$ with equality if and only if $\eta$ is numerically equivalent to $0$ on $X$.
\[Eta\] We have shown that $$0\geq3\eta^2=3\big(\zeta+\frac{1}{3}\pi^*c_1(V)\big)^2 \big(3\zeta+2\pi^*c_1(V)\big)=2c_1^2(V)-9c_2(V), \label{eta}$$ or equivalently, $$2(g+2){\operatorname}{deg}c_1(V)\big|_{B_0}-9\big(c_1^2(V)-4c_2(V)\big)\geq 0. \label{indexinequality}$$ We are now ready to find a maximal bound for the slope of $X$. Recall the formulas for $\lambda_X$ and $\kappa_X$ (cf. Prop. \[lambda\_X,kappa\_X\]), and write $$\delta_X=12\lambda_X-\kappa_X=\displaystyle{ \frac{7g+6}{2}{\operatorname}{deg}c_1(V)\big|_{B_0} +\frac{9}{4}\big(c_1^2(V)-4c_2(V)\big)}.$$ In view of the type of bound for the ratio $\delta_X/\lambda_X$, which we aim to achieve, we have to eliminate any extra terms and use inequality (\[eta\]). Our only choice is to subtract $$\begin{aligned} 36(g+1)\lambda_X-(5g+1)\delta_X&\!\!\!=\!\!\!& \frac{1}{2}\big(36(g+1)g-(5g+1)(7g+6)\big) {\operatorname}{deg}c_1(V)\big|_{B_0}+\\ && +\frac{1}{4}\big(36(g+1)-9(5g+1)\big)\big(c_1^2(V)-4c_2(V)\big)\\ &\!\!\!=\!\!\!& \frac{1}{2}(g^2-g-6){\operatorname}{deg}c_1(V)\big|_{B_0}-\frac{9}{4}(g-3) \big(c_1^2(V)-4c_2(V)\big)\\ &\!\!\!=\!\!\!&\frac{g-3}{4}\big[2(g+2){\operatorname}{deg}c_1(V)\big|_{B_0}- 9\big(c_1^2(V)-4c_2(V)\big)\big]\\ &\!\!\!=\!\!\!&(g-3)(9c_2(V)-2c_1^2(V))\geq 0.\end{aligned}$$ As a result, we establish an exact maximal bound for the slopes of our triple covers: Given a triple cover $\phi:X\!\rightarrow \!Y$ as in the Basic construction, the slope of $X$ satisfies $$\frac{\delta_X}{\lambda_X}\leq \frac{36(g+1)}{5g+1}\cdot$$ Equality is achieved if and only if $g=3$, or $g>3$ and $\eta\equiv 0$ on $X$. \[maintheorem\] When is the maximal bound achieved?
{#whenmaximal} ----------------------------------- ### The branch divisor of $\phi$ From the Grothendieck-Riemann-Roch theorem, applied to $\phi:X\rightarrow Y$ and the sheaf ${\mathcal}{O}_X$, we obtain a description of $c_1(V)$: $${\operatorname}{ch}(\phi_{!}{\mathcal}{O}_X).{\operatorname}{td}{\mathcal}{T}_Y= \phi_*({\operatorname}{ch}{\mathcal}{O}_X.{\operatorname}{td}{\mathcal}{T}_X),$$ $${\operatorname}{ch}(\phi_*{\mathcal}{O}_X)\big(1-\frac{1}{2}K_Y+\frac{1}{12}(K_Y^2+c_2(Y))\big)=\phi_*\big(1-\frac{1}{2}K_X+\frac{1}{12}(K_X^2+c_2(X))\big)$$ $$\Rightarrow c_1(\phi_*{\mathcal}{O}_X)=-\frac{1}{2}\big(\phi_*K_X-3K_Y\big).$$ For the [*ramification*]{} divisor $R$ on $X$ we know $K_X=\phi^*K_Y+R$, so that $\phi_*K_X=3K_Y+\phi_*R$. Hence $c_1(V)=-c_1(\phi_*{\mathcal}{O}_X)= \frac{1}{2}\phi_* R$. In other words, from Lemma \[D=2c\_1(V)\] we conclude that $c_1(V)$ is half of the [*branch*]{} divisor $D$ on $Y$. On the other hand, we can rewrite the condition $\eta\equiv 0$ in the following way: $$0 \equiv 3\eta=\big(3\zeta+\pi^*c_1(V)\big)\big|_X= \big(X-\pi^*c_1(V)\big)\big|_X= c_1\big({\mathcal}{O}_{{{\mathbf P}}V}(X)\big|_X\big)-\pi^*c_1(V)\big|_X$$ $$\Leftrightarrow\,\,c_1\big({\mathcal}{O}_{{{\mathbf P}}V}(X)\big|_X\big)\equiv \frac{1}{2}\phi^*D.$$ The self-intersection of $X$ on ${{\mathbf P}}V$ satisfies (cf.  [@self-intersection]) $$i^*i_*(1_X)=c_1({\mathcal}{N}_{X/{{\mathbf P}}V})\,\,\Rightarrow\,\, X\cdot X=i_*c_1({\mathcal}{N}_{X/{{\mathbf P}}V}).$$ In particular, our condition $\eta\equiv 0$ can be expressed as $\displaystyle {c_1({\mathcal}{N}_{X/{{\mathbf P}}V})\equiv \displaystyle{\frac{1}{2}}\phi^*D}$. ### Examples of the maximal bound Constructing examples of families achieving the maximal bound is not so easy, considering that the condition $\eta\equiv 0$ is not useful in practice.
Instead, we start from the Basic construction and attempt to find a ruled surface $Y$ and a rank 2 vector bundle $V$ on it satisfying the equality in (\[indexinequality\]), as well as the “genus condition” given in Lemma \[adjunction\]. The former will ensure the maximal ratio $\delta/\lambda = 36(g+1)/(5g+1)$, while the latter will imply that the fibers of our family are indeed of genus $g$. The remaining question is whether the linear system $|3\zeta+\pi^*D|$ has an irreducible member with at most rational double points as singularities, which would serve as the total space of our family $X$. It is hard to work with the canonically defined bundle $V= \big(\phi_*{\mathcal}{O}_X/{\mathcal}{O}_Y\big)\widehat{\phantom{t}}$, since not every vector bundle $W$ of rank $2$ on $Y$ is of this form for some surface $X$. But any $W$ differs from such a $V$ only by a twist by an appropriate line bundle $M$: $V=W\otimes M$, and ${{\mathbf P}}V\cong {{\mathbf P}}W$. So, it seems reasonable to start with $W$ rather than $V$, and use the invariant forms of our required equalities (cf. Sect. \[globalcalculation\]). This means replacing the degrees of $c_1(V)$ on $B_0$ and $F_Y$ by the corresponding invariant degrees of $2D-3c_1(V)$. Thus, we need for some divisor $\widehat{D}$ on $Y$: $$2(g+2)(2{\operatorname}{deg}\widehat{D}\big|_{B_0}-3{\operatorname}{deg}c_1(W)\big|_{B_0}) =9\big(c_1^2(W)-4c_2(W)\big), \label{condition1}$$ $$g+2=2{\operatorname}{deg}\widehat{D}\big|_{F_Y}-3{\operatorname}{deg}c_1(W)\big|_{F_Y}. \label{condition2}$$ For a general fiber $F_Y$ of $Y$ consider the rational ruled surface (cf. Fig.
\[intersection\]): $${\mathbf F}_e=\pi^*F_Y={{\mathbf P}}(W|_{F_Y})= {{\mathbf P}}({\mathcal}{O}_{{{\mathbf P}}^1}\oplus {\mathcal}{O}_{{{\mathbf P}}^1}(e)),\,\,\,{\operatorname}{with}\,\, e\geq 0.$$ Let $S^{\prime}$ be the section in ${\mathbf F}_e$ with self-intersection $(S^{\prime})^2=-e$, and let $F_{\pi}$ be the fiber of ${\mathbf F}_e$ (in terms of the map $\pi:{\mathbf P}V\rightarrow Y$, $F_{\pi}=\pi^*({\operatorname}{pt})$). Since a general fiber $C$ of our family is embedded in ${\mathbf F}_e$, the linear system $$|C|=\big|3S^{\prime}+\frac{g+2+3e}{2}F_{\pi}\big|$$ has an irreducible nonsingular member. Equivalently, $C\cdot S^{\prime}\geq 0$, i.e. $$e\leq (g+2)/3\,\,\,{\operatorname}{and}\,\,\, e\equiv g({\operatorname}{mod}2) \label{e-conditions}$$ (compare with Lemma \[gentrig\]). This forces three types of extremal examples according to the remainder of $g({\operatorname}{mod}3)$. [**Example 7.1 ($g\equiv 0({\operatorname}{mod}3)$).**]{} Let $g=3e$ for some $e\in \mathbb{N}$. Set the base curve $B={{\mathbf P}}^1$, and the ruled surface $$Y={{\mathbf P}}({\mathcal}{O}_B\oplus {\mathcal}{O}_B(6))={\mathbf F}_6.$$ Let $B^{\prime}$ be the section in $Y$ with smallest self-intersection: $(B^{\prime})^2=-6$, thus $B_0=B^{\prime}+3F_Y$ with $B_0^2=0$. Set $Q=B^{\prime}+6F_Y$, and choose two divisors $\widehat{D}$ and $E$ on $Y$ as follows: $$\widehat{D}=(g+1)Q\,\,\,{\operatorname}{and}\,\,\,E=eB^{\prime}+2(g+1)F_Y.$$ For the vector bundle $W$ on $Y$ we set $W={\mathcal}{O}_Y\oplus {\mathcal}{O}_Y(E)$ so that $c_1(W)=E$ and $c_2(W)=0$. We claim that the linear system $L=|3\zeta+\pi^*\widehat{D}|$ on the 3-fold ${{\mathbf P}}W$ contains an irreducible smooth member, which we set to be our surface $X$ with maximal ratio $\delta/\lambda$. $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=maximal1.ps,width=2.6in,height=1.2in}} \hspace{-1mm}\end{array}}$$ Indeed, it is trivial to check conditions (\[condition1\])–(\[condition2\]).
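The intersection-number bookkeeping behind conditions (\[condition1\]) and (\[condition2\]) for this example is plain integer arithmetic; a short numerical check (a sketch in Python, not part of the original text, hard-coding the intersection form of ${\mathbf F}_6$ in the basis $(B^{\prime},F_Y)$) runs through the first few values of $e$:

```python
def dot(u, v):
    # Intersection form on Y = F_6: (B')^2 = -6, B'.F_Y = 1, F_Y^2 = 0
    return -6*u[0]*v[0] + u[0]*v[1] + u[1]*v[0]

for e in range(1, 6):
    g = 3*e
    B0 = (1, 3)                    # B_0 = B' + 3 F_Y
    F_Y = (0, 1)
    D_hat = (g + 1, 6*(g + 1))     # D-hat = (g+1)(B' + 6 F_Y)
    E = (e, 2*(g + 1))             # E = e B' + 2(g+1) F_Y
    c1W_sq, c2W = dot(E, E), 0     # W = O_Y + O_Y(E): c1(W) = E, c2(W) = 0

    assert dot(B0, B0) == 0                            # B_0^2 = 0
    # genus condition: g + 2 = 2 deg(D-hat)|_{F_Y} - 3 deg(c1(W))|_{F_Y}
    assert 2*dot(D_hat, F_Y) - 3*dot(E, F_Y) == g + 2
    # index equality: 2(g+2)(2 deg(D-hat)|_{B_0} - 3 deg(c1(W))|_{B_0}) = 9(c1(W)^2 - 4 c2(W))
    assert 2*(g + 2)*(2*dot(D_hat, B0) - 3*dot(E, B0)) == 9*(c1W_sq - 4*c2W)
```

Both conditions hold identically in $e$, since $E^2=2e(g+2)$ and $2\deg\widehat{D}|_{B_0}-3\deg E|_{B_0}=3g$.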
Further, for [*any*]{} fiber $F_Y$ of $Y$: $$\pi^*F_Y={{\mathbf P}}\big({\mathcal}{O}_{{{\mathbf P}}^1}\oplus {\mathcal}{O}_{{{\mathbf P}}^1} (E\cdot F_Y)\big)={{\mathbf P}}\big({\mathcal}{O}_{{{\mathbf P}}^1}\oplus {\mathcal}{O}_{{{\mathbf P}}^1} (e)\big)={\mathbf F}_e,$$ so that $e=g/3$ satisfies the required conditions (\[e-conditions\]). The only nontrivial fact is the existence of the desired member $X$ in the linear system $L$ on ${{\mathbf P}}W$. Consider two sections $\Sigma_0$ and $\Sigma_1$ of ${{\mathbf P}}W$ corresponding to the subbundles ${\mathcal}{O}_Y$ and ${\mathcal}{O}_Y(E)$ of $W$, respectively: $\Sigma_0\in |\zeta|,\,\,\,\Sigma_1\in |\zeta+\pi^*(E)|,$ so that $\Sigma_1\sim \Sigma_0+\pi^*E$ (cf. Fig. \[maximal1.fig\]). Note that $\Sigma_0\cdot \Sigma_1=0$ and $\Sigma_0\cdot L=\Sigma_0\cdot \pi^*B^{\prime}$. In other words, if $G=\pi^*B^{\prime}$ is the ruled surface over $B^{\prime}$, then $\Sigma_0$ intersects every irreducible member of $L$ in the curve $R=\Sigma_0\cap G$. On the other hand, if a member of $L$ meets $\Sigma_0$ in a point outside $R$, then this member contains all of $\Sigma_0$. Thus, $L$ does not distinguish the points on $\Sigma_0$, and $R$ is in the base locus of $L$. Similarly, the restriction $L|_G=|3\Sigma_0|_G|=|3R|$ has exactly one member on $G$, namely $3R$. Again it follows that $L$ does not distinguish the points on $G$. Away from the closed subset $Z=\Sigma_0\cup G$, the linear system $L$ is in fact very ample. This can be checked by showing directly that $L$ separates points and tangent vectors on ${\mathbf P}W-Z$. Therefore, $L$ defines a rational map $$\phi_L:{{\mathbf P}}W \dashrightarrow {\mathbf P}(H^0(L)^{\widehat{ \phantom{o}}})={{\mathbf P}}^N.$$ The map $\phi_L$ is regular on ${{\mathbf P}}W-R$, embeds ${{\mathbf P}}W-Z$, and contracts $\Sigma_0-R$ and $G-R$ to two distinct points $p$ and $q$ in ${{\mathbf P}}^N$. By Bertini’s theorem (cf.
[@Hartshorne]), the [*general*]{} member of $L$ is [*smooth*]{} away from the base locus $R$. Let $H$ be a general hyperplane in ${{\mathbf P}}^N$ not passing through $p$ and $q$. Pulling $H$ back to ${{\mathbf P}}W$ yields a member $X$ of $L$ not containing $\Sigma_0$ or $G$, and hence irreducible. It remains to check the singularities of the total space of $X$ along the curve $R$. Since the member $3\Sigma_1+G\in|L|$ is smooth along $R$, the general member of $|L|$ must be smooth along $R$. Hence our surface $X$ has, in fact, a smooth total space. This concludes the construction of our maximal bound family of trigonal curves. [**Example 7.2 ($g\equiv 1({\operatorname}{mod}3)$).**]{} Set $g=3e-2$ for $e\in{\mathbb N}$. Then $e$ satisfies the requirements of our construction: $e=(g+2)/3$ and $e\equiv g ({\operatorname}{mod}2)$. For the ruled surface $Y$ we choose $Y={\mathbf P}^1\times{\mathbf P}^1$. Let $E$ and $\widehat{D}$ be the following divisors on $Y$: $E=eB_0+fF_Y\,\,{\operatorname}{and}\,\,\widehat{D}=3E,$ where $f\in{\mathbb N}$. The vector bundle $W$ on $Y$ is then defined by $W={\mathcal}{O}_Y\oplus{\mathcal}{O}_Y(E)$. Finally, we identify the total space of the surface $X$ with an irreducible smooth member of the linear system $L=|3\zeta+\pi^*\widehat{D}|$ on the 3-fold ${\mathbf P}W$. The verification of this construction is similar to the previous example. Here $L$ is very ample everywhere on ${\mathbf P}W$ except on the section $\Sigma_0$, which is contracted to a point under the map $\phi_{L}$. This example, in a somewhat different context, appears in [@small]. [**Remark 7.1**]{} The case of $g\equiv 2({\operatorname}{mod}3)$ is complicated by the fact that we cannot take $e=(g+1)/3$, for then $e\not \equiv g({\operatorname}{mod}2)$. For example, if $g=5$, then the only possibility is $e=1$. In the notation of Section 12, all trigonal curves have the lowest Maroni invariant $1$, and there is no Maroni locus.
For now, in this case, we have not been able to construct a trigonal family with singular general member whose slope ratio is $36(g+1)/(5g+1)$. 8. Local Calculation of $\lambda,\delta$ and $\kappa$ in the General Case {#local-calculation-of-lambdadelta-and-kappa-in-the-general-case .unnumbered} ========================================================================= \[generalcase\] Notation and conventions {#Conventions} ------------------------ In this section we consider the general case of a trigonal family $X\rightarrow B$. For convenience of notation, we shall assume that the base curve $B$ intersects the boundary divisors of $\overline{\mathfrak{T}}_g$ transversally and in general points (cf. Fig. \[Delta-k,i\]). We will call such a base curve [*general*]{}, and use this definition throughout Sections 8-10. Since we work in the rational Picard group of $\overline{\mathfrak{M}}_g$, the remaining cases are treated similarly in Sect. 11. From Prop. 6.1, we may assume that modulo a base change, our family $X\!\rightarrow\! B$ fits in the following commutative diagram: $$\begin{array}{ccc} \widehat{X}&\stackrel{\widehat{\phi}}{\longrightarrow}&\widehat{Y}\\ \big\downarrow& &\big\downarrow\\ \widetilde{X}&\stackrel{\phi}{\longrightarrow}&\widetilde{Y}\\ \big\downarrow& &\big\downarrow\\ X&\longrightarrow&B \end{array} \qquad\qquad \widehat{X}\subset{{\mathbf P}}V\stackrel{\pi}{\longrightarrow}\widehat{Y}.$$ ### Relations in ${\rm Pic}_{\mathbb{Q}}\widehat{Y}$ and ${\rm Pic}_{\mathbb{Q}}{{\mathbf P}}V$ {#relations} The special fibers of $\widehat{X}$ and of the birationally ruled surface $\widehat{Y}$ over $B$ are described in Fig. \[coef1.fig\]–\[coef3.fig\]. Since each such fiber in $\widehat{Y}$ is a [*chain*]{} $T$ of rational components, we can fix one of the end components to be the [*root*]{} $R$.
We keep the notation $E^-$ ($E^+$, respectively) for the ancestor (descendants, respectively) of a component $E$ in $T$. We also fix a general fiber $F_{\widehat{Y}}\cong {{{\mathbf P}}}^1$ of $\widehat{Y}$, and a section $B_{\widehat{Y}}$, which is the pullback of the corresponding section $B_0$ in $\widetilde{Y}$ (cf. (\[normalize\])). The rational Picard group of $\widehat{Y}$ is generated by $F_{\widehat{Y}}$, $B_{\widehat{Y}}$ and all non-root components $E$ of the special fibers of $\widehat{Y}$: $${\operatorname}{Pic}_{\mathbb{Q}}\widehat{Y}=\mathbb{Q} B_{\widehat{Y}}\bigoplus \mathbb{Q}F_{\widehat{Y}}\!\!\!\bigoplus _{E-{\operatorname}{not}\,{\operatorname}{root}}\!\!\!\mathbb{Q}E.$$ The intersection numbers of these generators are as follows: $B_{\widehat{Y}}^2=0=F_{\widehat{Y}}^2,\,\,\, B_{\widehat{Y}}\cdot F_{\widehat{Y}}=1,\,\,{\operatorname}{and}\,\, E\cdot B_{\widehat{Y}}=E\cdot F_{\widehat{Y}}=0.$ [*(Figures, omitted here, depict the local configurations of a component $E$ together with its ancestor $E^-$, its descendants $E^+$, and the root $R$ in the special fibers of $\widehat{Y}$.)*] We also set $m_{\!\stackrel{\phantom{.}}{E}}=E\cdot E^-$ (cf. Fig.
\[m E\]): $$m_{\!\stackrel{\phantom{.}}{E}}=\left\{\begin{array}{l} 0\,\,{\operatorname}{if}\,\,E=R\,\,{\operatorname}{root},\\ 1\,\,{\operatorname}{if}\,\,E\,\,{\operatorname}{and}\,\,E^-\,\,{\operatorname}{reduced},\\ 2\,\,{\operatorname}{if}\,\,E\,\,{\operatorname}{or}\,\,E^-\,\,{\operatorname}{nonreduced}. \end{array}\right.$$ In this notation, due to the fact that $E\cdot T=E\cdot F_{\widehat{Y}}=0$, the self-intersection of any $E$ is computed by (cf. Fig. \[E\^2\]): $$E^2=-\sum_{\!\stackrel{\scriptstyle{E^{\prime}\not= E}} {E^{\prime}\cap E\not= \emptyset}} E\cdot E^{\prime}=-\sum_{\!\stackrel{\scriptstyle{E^{\prime}=E^+}} {{\operatorname}{or}\,\,E^{\prime}=E}}m_{\!\stackrel{\phantom{.}}{E^{\prime}}}.$$ In order to express the dualizing sheaf $K_{\widehat{Y}}$ in terms of the above generators of ${\operatorname}{Pic}_{\mathbb{Q}}\widehat{Y}$, for each component $E$ in $\widehat{Y}$ we denote by $\theta_E$ the length of the path $\stackrel{\longrightarrow} {RE}$, omitting any nonreduced components except for $E$ itself. For example, in the two cases in Fig. \[theta\_E\] we have $\theta_E=1$ and $\theta_E=2$. Note that $\theta_R=0$. Considering the “effective” blow-ups on $\widetilde{Y}$ necessary to construct $\widehat{Y}$, we immediately obtain the following identities (compare with (\[K\_PV\]) and (\[K\_Yglobal\])). In ${\operatorname}{Pic}_{\mathbb{Q}}\widehat{Y}$ and ${\operatorname}{Pic}_{\mathbb{Q}}{{\mathbf P}}V$: $$\begin{aligned} {\operatorname}{(a)}&\!\!\!\!&\!\!
\displaystyle{K_{\widehat{Y}} \equiv -2B_{\widehat{Y}}+(2g_B-2)F_{\widehat{Y}}+\sum_E \theta_EE},\\ \vspace*{-2mm} {\operatorname}{(b)}&\!\!\!\!&\!\!K_{{{\mathbf P}}V}\equiv -2\zeta -\pi^* c_1(V)+\pi^*K_{\widehat{Y}},\\ {\operatorname}{(c)}&\!\!\!\!&\!\!K_{{{\mathbf P}}V/\!_{\scriptstyle{B}}} \equiv -2\zeta -\pi^*c_1(V)-2\pi^*B_{\widehat{Y}}+\sum_E \theta_E\pi^*E.\end{aligned}$$ \[Kdivisors\] The hyperplane section $\zeta$ of ${{\mathbf P}}V$ and the rank 2 vector bundle $V$ on $\widehat{Y}$ are defined as in Section 7. Thus, in ${\operatorname}{Pic}_{\mathbb{Q}}{{\mathbf P}}V$ we have $\widehat{X}\sim 3\zeta +\pi^*D$ for a certain divisor $D$ on $\widehat{Y}$. By analogy with Lemma \[D=2c\_1(V)\], one shows that $D\equiv 2c_1(V)$ in ${\operatorname}{Pic}_{\mathbb{Q}}\widehat{Y}$, so that $$\widehat{X}\equiv 3\zeta+ 2\pi^*c_1(V). \label{genX}$$ Using the above notation for ${\operatorname}{Pic}_{\mathbb{Q}}\widehat{Y}$ we can write for some half-integers $c,d,\gamma_{\!\stackrel{\phantom{.}}{E}}$: $$\displaystyle{c_1(V)\equiv cB_{\widehat{Y}}+dF_{\widehat{Y}}+ \sum_{E}\gamma_{\!\stackrel{\phantom{.}}{E}}E}. \label{genc_1(V)}$$ Here we can assume that $\gamma_{\!\stackrel{\phantom{.}}{R}}=0$ by replacing $R$ with a linear combination of the remaining components $E$ in its chain $T$ (compare with (\[define D,c1(V)\])). Finally, we need the top Chern classes of $\widehat{Y}$ and ${{\mathbf P}}V$ in terms of intersections of known divisors and other known invariants of the two surfaces (compare with (\[c\_2(PV)\]) and (\[c\_2(Y)global\])).
In the Chow rings $\mathbb{A}(\widehat{Y})$ and $\mathbb{A}({{\mathbf P}}V)$ the following equalities are true: $$\begin{aligned} {\operatorname}{(a)}&\!\!&\!\!\!\!c_2(\widehat{Y})=c_2(Y)+\sum_{E\not =R} 1=4(1-g_B)+\sum_ {E\not = R} 1, \\ {\operatorname}{(b)}&\!\!&\!\!\!\!c_2({{\mathbf P}}V)= c_2(\widehat{Y})-\pi^*K_{\widehat{Y}}(2\zeta+\pi^*c_1(V)),\\ {\operatorname}{(c)}&\!\!&\!\!\!\!\displaystyle{ c_2({{\mathbf P}}V/\!_{\displaystyle{B}})= -\pi^*K_{\widehat{Y}/B}(2\zeta+\pi^*c_1(V))+\sum_{E\not =R} 1}.\end{aligned}$$ \[conventions\] ### A technical lemma {#technicallemma} In the sequel, we will work with several functions defined on the set of components $\{E\}$ in $\widehat{Y}$. For easier calculations, to any such function $f$ we associate the [*difference function*]{} $F$ by setting $F_{\!\stackrel{\phantom{.}}{E}}:= f_{\!\stackrel{\phantom{.}}{E}}-f_{\!\stackrel{\phantom{.}} {E^-}}$ for all $E$. Since $R^-$ does not exist, we define $f_{R^-}=0$ for all roots $R$ in $\widehat{Y}$. For any functions $f$ and $h$ defined on the set of components $\{E\}$ in $\widehat{Y}$, the following identity holds true: $$\sum_E f_{\!\stackrel{\phantom{.}}{E}}E \cdot \sum_E h_{\!\stackrel{\phantom{.}}{E}}E=-\sum_E (m\!\cdot\! F\!\cdot\! 
H)_{\!\stackrel{\phantom{.}}{E}}.$$ \[technical\] [*Proof.*]{} We rewrite the left-hand side as $\displaystyle{\sum_{E_1\not= E_2}f_{\!\stackrel{\phantom{.}}{E_1}} h_{\!\stackrel{\phantom{.}}{E_2}}E_1E_2+\sum_E f_{\!\stackrel{\phantom{.}}{E}}h_{\!\stackrel{\phantom{.}}{E}}E^2=}$ $$=\sum_{E}\left(f_{\!\stackrel{\phantom{.}}{E^-}} h_{\!\stackrel{\phantom{.}}{E}}+f_{\!\stackrel{\phantom{.}}{E}} h_{\!\stackrel{\phantom{.}}{E^-}}\right)m_{\!\stackrel{\phantom{.}}{E}}- \sum_E \left(f_{\!\stackrel{\phantom{.}}{E}}h_{\!\stackrel{\phantom{.}}{E}}+ f_{\!\stackrel{\phantom{.}}{E^-}} h_{\!\stackrel{\phantom{.}}{E^-}}\right)m_{\!\stackrel{\phantom{.}}{E}}=$$ $$=\sum_E\left(f_{\!\stackrel{\phantom{.}}{E^-}}- f_{\!\stackrel{\phantom{.}}{E}}\right)\left(h_{\!\stackrel{\phantom{.}}{E}} -h_{\!\stackrel{\phantom{.}}{E^-}}\right)m_{\!\stackrel{\phantom{.}}{E}} =-\sum_E(m\!\cdot\! F\!\cdot\! H) _{\!\stackrel{\phantom{.}}{E}}.\,\,\,\qed$$ We have noted that all three functions $m,\theta$ and $\gamma$ are zero on the roots $R$ in $\widehat{Y}$. Since we shall be working specifically with these three functions, it makes sense to restrict from now on all sums $\sum_E$ only to the non-roots $E$ in $\widehat{Y}$. With this in mind, in every application of Lemma \[technical\] one must check that the corresponding functions $f$ and $h$ have the same property: $f_R=0=h_R$, so that we can restrict the sums in Lemma \[technical\] also to all [*non-roots*]{} $E$ in $\widehat{Y}$. In fact, in all cases this verification will be obvious as $f$ and $h$ will be, for the most part, linear combinations of $\theta$ and $\gamma$. [**Example 8.1.**]{} From expression (\[genc\_1(V)\]) for $c_1(V)$ as a divisor on $\widehat{Y}$, and Lemma \[technical\]: $$c_1^2(V)=2cd+ \sum_{E}\gamma_{\!\stackrel{\phantom{.}}{E}}E\cdot \sum_{E}\gamma_{\!\stackrel{\phantom{.}}{E}}E= 2cd-\sum_{E}m_{\!\stackrel{\phantom{.}}{E}} \Gamma^2_{\!\stackrel{\phantom{.}}{E}}.
\label{c^2_1(V)}$$ Computation of the invariants $\lambda_{\widehat{X}}, \kappa_{\widehat{X}}$ and $\delta$ {#computation} ----------------------------------------------------- The following proposition \[hatlambda\] is a generalization of the corresponding statement in Section 7 (cf. Prop. \[lambda\_X,kappa\_X\]). We set $\Gamma_{ \!\stackrel{\phantom{.}}{E}}=\gamma_{\!\stackrel {\phantom{.}}{E}}-\gamma_{\!\stackrel{\phantom{.}}{E^-}}$ and $\Theta_{\!\stackrel{\phantom{.}}{E}} =\theta_{\!\stackrel{\phantom{.}}{E}} -\theta_{\!\stackrel{\phantom{.}}{E^-}}$ to be the difference functions of $\gamma$ and $\theta$. The degrees of the invariants $\lambda_{\widehat{X}}$ and $\kappa_{\widehat{X}}$ on $\widehat{X}$ are given by $$\lambda_{\widehat{X}}=d(g+1)-c_2(V)-\frac{1}{4}\sum_E \left\{m_{\!\stackrel{\phantom{.}}{E}}\cdot\left(2\Gamma^2+2\Gamma\cdot \Theta +\Theta^2\right)_{\!\stackrel{\phantom{.}}{E}}-1\right\},$$ $$\,\,\,\,\kappa_{\widehat{X}}=4dg-3c_2(V)-\sum_Em_{\!\stackrel{\phantom{.}}{E}}\left (2\Gamma^2+4\Gamma\Theta+3\Theta^2\right)_{\!\stackrel{\phantom{.}}{E}}.$$ \[hatlambda\] One starts with the Euler characteristic formula $\lambda_{\widehat{X}}=\chi({\mathcal}{O}_{\widehat{X}})- \chi({\mathcal}{O}_C)\cdot \chi({\mathcal}{O}_B),$ and the adjunction formula $\kappa_{\widehat{X}}=\big(\widehat{X}+K_{{{\mathbf P}}V/\!_{\scriptstyle{B}}}\big) ^2\widehat{X}.$ The rest of the proof is a straightforward calculation, which uses the equalities given in \[conventions\], and is substantially simplified by Lemma \[technical\]. The degree $\delta$ on the original family $X$ is given by $$\delta=4d(2g+3)-9c_2(V)-\!\sum_T\mu(T)-\!\sum_{{\operatorname}{ram}1} 1-\!
\sum_{{\operatorname}{ram}2} 3-\!\sum_E\left\{m_{\!\stackrel{\phantom{.}}{E}} \left(4\Gamma^2+2\Gamma\Theta\right)_{\!\stackrel{\phantom{.}}{E}}-3\right\}$$ \[hatdelta\] Here $\mu(T)$ stands for the quasi-admissible contribution to $\kappa_{\widehat{X}}$ of the preimage $C=\widehat{\phi}^*T$ in $\widehat{X}$, as defined in Lemma \[mu(C)\]. Since $\lambda=\lambda_{\widehat{X}}$, $\kappa=\kappa_{\widehat{X}}+\sum_T \mu(T)+ \sum_{{\operatorname}{ram}1} 1+\sum_{{\operatorname}{ram}2}3$, and $\delta=12\lambda-\kappa$, the statement immediately follows from Prop. \[hatlambda\]. The arithmetic genus $p_{{E}}$, and the invariants $\Gamma_{{E^{\prime}}}$ and $\Theta_{{E^{\prime}}}$ {#arithmetic} ------------------------------------------------------------------------------------------------------ For a component $E$ in a special fiber $T$ of $\widehat{Y}$, we define $T(E)$ to be the subtree of $T$ generated by the component $E$. In other words, $T(E)$ is the union of all components $E^{\prime}\in T$ such that $E^{\prime}\geq E$ (cf. Fig. \[subtree\]). For simplicity, we set $p_{\!\stackrel{\phantom{.}}{E}}:=p_a\big(\phi^*(T(E))\big)$ to be the arithmetic genus of the inverse image $\phi^*(T(E))$ in $\widehat{X}$. It can be easily computed via the following analog of Lemma \[adjunction\], where $T$ consisted of a single component $E=R$. $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=ex5.ps,width=1.2in,height=1.2in}} \hspace{-1mm}\end{array}}$$ For a general base curve $B$ and for any non-root component $E\in T$: $$\displaystyle{p_{\!\stackrel{\phantom{.}}{E}}=-m_{\!\stackrel {\phantom{.}}{E}}\left(\Gamma_E+\frac{3(\Theta_E+1)}{2}\right)+1}. 
\label{arithmetic equation}$$ \[arithgenus\] [*Proof.*]{} From the adjunction formula for the divisor $\phi^*(T(E))$ in $\widehat{X}$: $$2p_{\!\stackrel{\phantom{.}}{E}}-2=\left(K_{\widehat{X}}+\phi^*(T(E))\right)\phi^*(T(E))= \left((K_{{{\mathbf P}}V}+\widehat{X})|_{\widehat{X}}+\sum_{E^{\prime}} \delta_{E^{\prime}}\widehat{\phi}^*E^{\prime}\right)\sum_{E^{\prime}} \delta_{E^{\prime}}\widehat{\phi}^*E^{\prime}.$$ Here $\delta_{E^{\prime}}=0$ if $E^{\prime}<E$, and $\delta_{E^{\prime}}=1$ otherwise. Thus, the sums above are effectively taken over all $E^{\prime}\geq E$. Substituting the expressions for $K_{{{\mathbf P}}V}$ and $\widehat{X}$ as divisors in ${{{\mathbf P}}V}$ from Lemma \[Kdivisors\] and (\[genX\]), we arrive at $$2p_{\!\stackrel{\phantom{.}}{E}}-2=\sum_{E^{\prime}}\left(2 \gamma_{\!\stackrel{\phantom{.}}{E^{\prime}}}+ 3\theta_{\!\stackrel{\phantom{.}}{E^{\prime}}}+ 3\delta_{\!\stackrel{\phantom{.}}{E^{\prime}}} \right)E^{\prime} \sum_{E^{\prime}} \delta_{\!\stackrel{\phantom{.}}{E^{\prime}}}E^{\prime}.$$ Set $\Delta_{\!\stackrel{\phantom{.}}{E}} =\delta_{\!\stackrel{\phantom{.}}{E}} -\delta_{\!\stackrel{\phantom{.}}{E^-}}$, i.e. $\Delta_{\!\stackrel{\phantom{.}}{E^{\prime}}}=1$ only if $E^{\prime}=E$; otherwise, $\Delta_{\!\stackrel{\phantom{.}}{E^{\prime}}}=0$. 
By Lemma \[technical\], $$2p_{\!\stackrel{\phantom{.}}{E}}-2=-\sum_{E^{\prime}} m_ {\!\stackrel{\phantom{.}}{E^{\prime}}} \left(2\Gamma_{\!\stackrel{\phantom{.}}{E^{\prime}}}+ 3\Theta_{\!\stackrel{\phantom{.}}{E^{\prime}}}+3 \Delta_{\!\stackrel{\phantom{.}}{E^{\prime}}}\right)\Delta_{\!\stackrel{\phantom{.}}{E^{\prime}}}\,\, \Rightarrow\,\,2p_{\!\stackrel{\phantom{.}}{E}}-2= -m_{\!\stackrel{\phantom{.}}{E}}\left(2\Gamma_{\!\stackrel{\phantom{.}}{E}} +3\Theta_{\!\stackrel{\phantom{.}}{E}}+3\right).\,\,\,\qed$$ Now we can easily compute the invariants $m_{\!\stackrel{\phantom{.}}{E}}$, $\Theta_{\!\stackrel{\phantom{.}}{E^{\prime}}}$ and $\Gamma_{\!\stackrel{\phantom{.}}{E^{\prime}}}$, appearing in the formulas for $\lambda_{X}$ and $\kappa_{X}$. There are three possibilities for the triple $(m_{\!\stackrel{\phantom{.}}{E}}, \Theta_{\!\stackrel{\phantom{.}}{E^{\prime}}}, \Gamma_{\!\stackrel{\phantom{.}}{E^{\prime}}})$, depending on whether the components $E$ and $E^-$ of $T$ are reduced: $$\begin{aligned} {\operatorname}{(a)}\,{\operatorname}{if}\,E,E^-\,{\operatorname}{reduced},\,{\operatorname}{then}&\!\!\!&\!\!\!\! m_{\!\stackrel{\phantom{.}}{E}}=1,\,\,\Theta_{\!\stackrel{\phantom{.}}{E}} =1,\,\,\Gamma_{\!\stackrel{\phantom{.}}{E}}= -(p_{\!\stackrel{\phantom{.}}{E}}+2).\\ {\operatorname}{(b)}\, {\operatorname}{if}\,E\,\,{\operatorname}{nonreduced}, \,{\operatorname}{then}&\!\!\!&\!\!\!\! m_{\!\stackrel{\phantom{.}}{E}}=2,\,\,\Theta_{\!\stackrel{\phantom{.}}{E}} =1,\,\,\Gamma_{\!\stackrel{\phantom{.}}{E}}= -({p_{\!\stackrel{\phantom{.}}{E}}+5})/{2}.\\ {\operatorname}{(c)}\,{\operatorname}{if}\,E^-\!{\operatorname}{nonreduced}, \,{\operatorname}{then}&\!\!\!&\!\!\!\! 
m_{\!\stackrel{\phantom{.}}{E}}=2,\,\,\Theta_{\!\stackrel{\phantom{.}}{E}} =0,\,\,\Gamma_{\!\stackrel{\phantom{.}}{E}}= -({p_{\!\stackrel{\phantom{.}}{E}}+2})/{2}.\end{aligned}$$ \[constants\] Note that, in the list of all possible special fibers $T$ of $\widehat{Y}$, each component $E$ fits in exactly one of the three cases above (cf. Fig. \[coef1.fig\]–\[coef3.fig\]). The proof of the statement is immediate from the definitions of $m_{\!\stackrel{\phantom{.}}{E}}$ and $\Theta_{\!\stackrel{\phantom{.}}{E^{\prime}}}$, and from Lemma \[arithgenus\]. 9. The Bogomolov Condition $4c_2-c_1^2$ and the $7+6/g$ Bound in $\overline{\mathfrak{T}}_g$ {#the-bogomolov-condition-4c_2-c_12-and-the-76g-bound-in-overlinemathfrakt_g .unnumbered} ============================================================================================ \[Bogomolov1\] With the conventions of Section 8, we state the main proposition of the section. There exists an effective $\mathbb Q$-linear combination ${\mathcal}{E}$ of boundary divisors $\Delta{\mathfrak{T}}_{k,i}$, not containing $\Delta{\mathfrak{T}}_0$, such that for a general base curve $B$ in $\overline{\mathfrak{T}}_g$: $$(7g+6)\lambda|_B=g\delta|_B+{\mathcal}{E}|_B+\frac{g-3}{2}\left(4c_2(V)-c_1^2(V) \right).$$ \[bogomolov1\] For a shorthand notation, we denote by $\mathfrak{S}$ the difference $$\mathfrak{S}:=(7g+6)\lambda|_B-g\delta|_B -\frac{g-3}{2}\left( 4c_2(V)-c_1^2(V)\right).$$ Using the results of the previous section, we can write: $$\begin{aligned} \mathfrak{S}&=& -\frac{1}{4}\sum_E\left\{m_{\!\stackrel{\phantom{.}}{E}}\left(6\Gamma^2+ 6(g+2)\Gamma\Theta+(7g+6)\Theta^2\right)_{\!\stackrel{\phantom{.}}{E}}+5g-6 \right\}\\ &&+\sum_T g\mu(T)+\sum_{{\operatorname}{ram}1} g +\sum_{{\operatorname}{ram}2} 3g.\end{aligned}$$ We defer the proof of Prop. \[bogomolov1\] until the end of this section, when all of the terms in this sum will have been computed.
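The bookkeeping behind this expression for $\mathfrak{S}$ can be checked mechanically. The following Python sketch is an editorial addition, not part of the original argument: it treats $g$, $d$, $c_2(V)$ and the per-component data $(m_E,\Gamma_E,\Theta_E)$ as free integer parameters, abbreviates the quasi-admissible and ramification sums by the counters `mu`, `r1`, `r2`, takes $c=g+2$ (cf. Lemma \[adjunction\]) inside the formula of Example 8.1, and verifies both the closed form for $\delta$ in Prop. \[hatdelta\] and the grouped expression for $\mathfrak{S}$ above.

```python
# Numeric sanity check: the closed form for delta and the grouped
# expression for S hold identically in the free parameters below.
import random
from fractions import Fraction

def check_S(trials=200):
    rng = random.Random(0)
    for _ in range(trials):
        g = rng.randint(3, 20)
        d, c2 = rng.randint(-9, 9), rng.randint(-9, 9)
        mu, r1, r2 = (rng.randint(0, 4) for _ in range(3))
        # per-component data (m_E, Gamma_E, Theta_E), possibly empty
        comps = [(rng.choice([1, 2]), rng.randint(-4, 4), rng.randint(-4, 4))
                 for _ in range(rng.randint(0, 6))]
        lam = d*(g + 1) - c2 - Fraction(1, 4)*sum(
            m*(2*G*G + 2*G*T + T*T) - 1 for m, G, T in comps)
        kappa_hat = 4*d*g - 3*c2 - sum(
            m*(2*G*G + 4*G*T + 3*T*T) for m, G, T in comps)
        # delta = 12*lambda - kappa, compared with Prop. [hatdelta]
        delta = 12*lam - (kappa_hat + mu + r1 + 3*r2)
        assert delta == 4*d*(2*g + 3) - 9*c2 - mu - r1 - 3*r2 - sum(
            m*(4*G*G + 2*G*T) - 3 for m, G, T in comps)
        c1sq = 2*(g + 2)*d - sum(m*G*G for m, G, _ in comps)  # Example 8.1
        S = (7*g + 6)*lam - g*delta - Fraction(g - 3, 2)*(4*c2 - c1sq)
        grouped = -Fraction(1, 4)*sum(
            m*(6*G*G + 6*(g + 2)*G*T + (7*g + 6)*T*T) + 5*g - 6
            for m, G, T in comps) + g*(mu + r1 + 3*r2)
        assert S == grouped   # the global terms d and c_2(V) cancel
    return True
```

Both assertions amount to the cancellation of the coefficients of $d$ and $c_2(V)$ and to the stated per-component coefficients of $\Gamma^2$, $\Gamma\Theta$ and $\Theta^2$.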
Grouping the contributions of each $\Delta{\mathfrak{T}}_{k,i}$ in $\mathfrak{S}$ {#Grouping} --------------------------------------------------------------------------------- Substituting the results of Corollary \[constants\] in the expression for $\mathfrak{S}$, we eliminate $m_{\!\stackrel{\phantom{.}}{E}}$, $\Theta_{\!\stackrel{\phantom{.}}{E^{\prime}}}$ and $\Gamma_{\!\stackrel{\phantom{.}}{E^{\prime}}}$: $$\begin{aligned} \mathfrak{S}&\!\!=\!\!&\sum_T g\mu(T)+\sum_{{\operatorname}{ram}1} g+\sum_{{\operatorname}{ram}2} 3g+ \frac{1}{4}\sum_{E,E^-{\operatorname}{red}}\!\!\left(6(2+p_{\!\stackrel{\phantom{.}}{E}}) (g-p_{\!\stackrel{\phantom{.}}{E}})-12g\right)\\ &\!\!-\!\! &\frac{1}{4}\sum_{E^-{\operatorname}{nonred}}\!\!\!\!\!\left(3(p_{\!\stackrel{\phantom{.}}{E}}+2)^2+5g-6\right)+\frac{1}{4}\sum_{E{\operatorname}{nonred}}\!\!\left(3(p_{\!\stackrel{\phantom{.}}{E}}+5)(2g-p_{\!\stackrel{\phantom{.}}{E}}-1)-19g-6\right).\end{aligned}$$ For each chain $T$ in $\widehat{Y}$, the inverse image $\widehat{\phi}^*(T)$ in $\widehat{X}$ is a member (or a blow-up of a member) of exactly one boundary divisor $\Delta{\mathfrak{T}}_{k,i}$. Consequently, to find the contribution to $\mathfrak{S}$ of a specific $\Delta{\mathfrak{T}}_{k,i}$, we calculate the sum in $\mathfrak{S}$ corresponding to all types of special fibers $\widehat{\phi}^*(T)$. $$\hspace*{-7mm}{\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=coef1.ps,width=4.5in,height=1.3in}} \hspace{-1mm}\end{array}}$$ ### Contributions of $\Delta{\mathfrak{T}}_{1,i}, \Delta{\mathfrak{T}}_{2,i}$ and $\Delta{\mathfrak{T}}_{3,i}$ {#contribution1} Fig. \[coef1.fig\] presents the special fibers corresponding to the boundary divisors $\Delta{\mathfrak{T}}_{1,i},\,\,\Delta{\mathfrak{T}}_{2,i},\,\,\Delta{\mathfrak{T}}_{3,i}$. In each of these cases, there is only one component $E$ in $T$ besides the root $R=E^-$. Thus, the subchain $T(E)$ in $T$ is trivial – it consists only of $E$.
Its inverse image $\widehat{\phi}^*(E)$ is connected for $\Delta{\mathfrak{T}}_{1,i}$, and consists of two connected curves for $\Delta{\mathfrak{T}}_{2,i}$ and $\Delta{\mathfrak{T}}_{3,i}$. Setting the genus of the inverse image of $R$ to be $i$, it is easy to see that the genus $p_{\!\stackrel{\phantom{.}}{E}}$ of $\phi^*(E)$ is $g-i-2$ in the first two cases, and $g-i-1$ in the third case. (The total genus of the original fiber of $X$, drawn in full lines, must be $g$.) Finally, counting the number of “quasi-admissible” blow-ups (drawn by dashed lines), we see that $\mu(T)=0$ for $\Delta{\mathfrak{T}}_{1,i}$, $\mu(T)=1$ for $\Delta{\mathfrak{T}}_{2,i}$, and $\mu(T)=2$ for $\Delta{\mathfrak{T}}_{3,i}$ (cf. Lemma \[mu(C)\]). Note that there are no ramification modifications. The contribution of each such fiber $\widehat{\phi}^*T$ to the sum $\mathfrak{S}$ is only one summand of the first type ($E,E^-$ reduced), plus the quasi-admissible adjustment $g\mu(T)$. If $\widehat{\phi}^*T$ corresponds to the boundary divisor $\Delta{\mathfrak{T}}_{k,i}$, we denote this contribution by $c_{k,i}$. In conclusion, $$c_{k,i}= \frac{1}{4}\big(6(2+p_{\!\stackrel{\phantom{.}}{E}})(g-p_{\!\stackrel {\phantom{.}}{E}})-12g\big)+g\mu(T)\,\,\Rightarrow\,\, c_{k,i}=\frac{3}{2}(i+2)(g-i)-(4-k)g\,\,{\operatorname}{for}\,\,k=1,2,\,\,{\operatorname}{and}\,\, c_{3,i}=\frac{3}{2}(i+1)(g-i+1)-g.$$ ### Contributions of $\Delta{\mathfrak{T}}_{4,i}$ and $\Delta{\mathfrak{T}}_{5,i}$: ramification index 1 {#contribution2} In each of these cases, the fiber $T$ of $\widehat{Y}$ consists of two rational curves $E_1$ and $E_2$, and the root $R=E_1^-$ (cf. Fig. \[coef2.fig\]).
There are no nonreduced components in $T$, so the contribution to $\mathfrak{S}$ consists of two summands of the first type ($E,E^-$ reduced), plus a quasi-admissible adjustment of $\mu(T)=2$ for $\Delta{\mathfrak{T}}_{5,i}$, and a ramification adjustment of $g$ in both cases: $$c_{k,i}=\frac{1}{4}\sum_{j=1,2} \big(6(2+p_{\!\stackrel{\phantom{.}}{E_j}})(g-p_ {\!\stackrel{\phantom{.}}{E_j}}) -12g\big)+g\mu(T)+g\,\,{\operatorname}{for}\,\,k=4,5.$$ $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=coef2.ps,width=3.25in,height=2in}} \hspace{-1mm}\end{array}}$$ The arithmetic genus of the nonreduced component of $\widehat{X}$ is $-2$, and its intersection number with each of the neighboring components is 2. Setting $p_a(\widehat{\phi}^*R)=i$ forces $p_a(\widehat{\phi}^*E_2)= g-i-1$. Hence, $p_{\!\stackrel{\phantom{.}}{E_1}}=g-i-1$ and $p_{\!\stackrel{\phantom{.}}{E_2}}=g-i-2$. Substituting: $$c_{k,i}=3(g-i)(i+1)-\frac{7g-3}{2}+g\mu(T),$$ $$\vspace*{-5mm} c_{4,i}=3(i+1)(g-i)-\frac{7g-3}{2},\,c_{5,i}=3(i+1)(g-i)-\frac{7g-3}{2}+2g.$$ ### Contribution of $\Delta{\mathfrak{T}}_{6,i}$: ramification index 2 {#contribution3} It remains to consider the case of ramification index 2. Here there are four components $E$ besides the root $R$ in the special fiber $T\subset \widehat{Y}$. Consequently, there are four summands in $\mathfrak{S}$ corresponding to the $E_i$’s: $E_1$ and $E_4$ yield summands of the first type ($E,E^-$ reduced), $E_2$ yields a summand of the second type ($E$ nonreduced), and $E_3$ yields a summand of the third type ($E^-$ nonreduced).
$${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=coef3.ps,width=3.5in,height=1.5in}} \hspace{-1mm}\end{array}}$$ Since $\mu(T)=0$, and the ramification adjustment is $3g$, we obtain for the contribution of $\Delta{\mathfrak{T}}_{6,i}$ to $\mathfrak{S}$ the following expression: $$\begin{aligned} c_{6,i}\!\!&\!\!=\!\!&\!\!\frac{1}{4}\big(6(2+ p_{\!\stackrel{\phantom{.}}{E_1}})(g- p_{\!\stackrel{\phantom{.}}{E_1}})-12g\big)+ \frac{1}{4}\big(6(2+p_{\!\stackrel{\phantom{.}}{E_4}})(g- p_{\!\stackrel{\phantom{.}}{E_4}})-12g\big)+\\ \!\!&\!\!+\!\!&\!\!\frac{1}{4}\left(3( p_{\!\stackrel{\phantom{.}}{E_2}}+5)(2g- p_{\!\stackrel{\phantom{.}}{E_2}}-1)-19g-6\right) -\frac{1}{4}\left(3(p_{\!\stackrel{\phantom{.}}{E_3}}+2)^2+5g-6\right)+3g.\end{aligned}$$ The arithmetic genera of the components in $\widehat{X}$ are indicated in Fig. \[coef3.fig\]. It is easy to see that $p_{\!\stackrel{\phantom{.}}{E_4}} =i$, $p_{\!\stackrel{\phantom{.}}{E_3}}=i-3$, $p_{\!\stackrel{\phantom{.}}{E_2}}=i-2$, $p_{\!\stackrel{\phantom{.}}{E_1}}=i-2$. Finally, $$c_{6,i}=\frac{9}{2}i(g-i)-\frac{3}{2}(g-1).$$ Proof of Proposition \[bogomolov1\] {#Proof} ----------------------------------- In the above discussion we calculated the contributions of the boundary divisors $\Delta{\mathfrak{T}}_{k,i}$ to the sum $\mathfrak{S}$, so that $\mathfrak{S}=\sum_{k,i}c_{k,i}$ with $k=1,...,6$, and the corresponding limits for the index $i$ (cf. Prop. \[boundary\]). It is now clear what the divisor ${\mathcal}{E}$ should be. We set ${\mathcal}{E}:=\sum_{k,i}c_{k,i}\Delta{\mathfrak{T}}_{k,i}$, and thus, $\mathfrak{S}={\mathcal}{E}|_B$, $$\Rightarrow\,\,\, (7g+6)\lambda|_B=g\delta|_B+{\mathcal}{E}|_B+\frac{g-3}{2}(4c_2(V)-c_1^2(V)).$$ Using the restrictions on the index $i$ for each type of boundary divisor $\Delta{\mathfrak{T}}_{k,i}$, one can easily deduce that all coefficients $c_{k,i}> 0$.
For instance, when $i=1,...,[g/2]$: $$c_{6,i}=\frac{9}{2}i(g-i)-\frac{3}{2}(g-1)\geq\frac{9}{2}\cdot 1\cdot (g-1)-\frac{3}{2}(g-1)=3(g-1)>0.$$ In other words, ${\mathcal}{E}$ is an effective rational linear combination of boundary divisors in $\overline{\mathfrak{T}}_g$, which by construction does not contain $\Delta{\mathfrak{T}}_0.\,\,\,\qed$ The slope bound $7+6/g$ and a relation restricted to the base curve $B$ {#slopebound} ----------------------------------------------------------------------- Recall that a vector bundle $V$ of rank 2 is [*Bogomolov semistable*]{} if $4c_2(V)\geq c^2_1(V)$. For a general base curve $B$, if the canonically associated vector bundle $V$ is Bogomolov semistable, then the slope of $X/\!_{\displaystyle{B}}$ is bounded by $$\frac{\delta|_B}{\lambda|_B}\leq 7+\frac{6}{g}\cdot$$ \[7+6/g Bogomolov\] The statement follows directly from Prop. \[bogomolov1\]. Indeed, since ${\mathcal}{E}$ is effective, ${\mathcal}{E}|_B\geq 0$. By hypothesis, $4c_2(V)-c^2_1(V)\geq 0$, and $g\geq 3$. Hence, $(7g+6)\lambda|_B\geq g\delta|_B.$ For a general base curve $B$ the following relation holds true: $$\begin{aligned} \!(7g+6)\lambda|_B&\!\!\!=\!\!\!&g\delta_0|_B+ \frac{g-3}{2}\left(4c_2(V)-c_1^2(V)\right)\\ &\!\!\!+\!\!\!\!\!\!& \sum_{i=1}^{[(g-2)/2]}\frac{3}{2}(i+2)(g-i)\delta_{1,i}|_B+ \sum_{i=1}^{g-2}\frac{3}{2}(i+2)(g-i)\delta_{2,i}|_B\\ &\!\!\!+&\sum_{i=1}^{[g/2]}\frac{3}{2}(i+1)(g-i+1)\delta _{3,i}|_B+\sum_{i=1}^{[(g-1)/2]} \big(3(i+1)(g-i)-\frac{g-3}{2}\big)\delta_{4,i}|_B\\ &\!\!\!+&\sum_{i=1}^{g-1}\big(3(i+1)(g-i)-\frac{g-3}{2}\big) \delta_{5,i}|_B+\sum_{i=1}^{[g/2]} \big(\frac{9}{2}i(g-i)-{\frac{g-3}{2}}\big)\delta_{6,i}|_B.\end{aligned}$$ \[analog1\] [*Proof.*]{} This is an immediate consequence of the established relation in Prop. \[bogomolov1\].
We replace $\delta$ by the linear combination (\[divisorrel\]) of the boundary classes of $\overline{\mathfrak{T}}_g$, and write $$(7g+6)\lambda|_B=g\delta_0|_B+\sum_{k,i}\widetilde{c}_{k,i}\delta_{k,i}|_B +\frac{g-3}{2}(4c_2(V)-c_1^2(V)),$$ for some new coefficients $\widetilde{c}_{k,i}$. Recall that ${\operatorname}{mult}_{\delta}(\delta_{k,i})$ denotes the [*multiplicity*]{} of $\delta_{k,i}$ in $\delta$, so that $\widetilde{c}_{k,i}=c_{k,i}+{\operatorname}{mult}_{\delta}(\delta_{k,i})g$. For example, the coefficient of $\delta_{1,i}$ is $$\widetilde{c}_{1,i}=\left\{\frac{3}{2}(i+2)(g-i)-3g\right\}+3g= \frac{3}{2}(i+2)(g-i),$$ or the coefficient of $\delta_{5,i}$ is $$\widetilde{c}_{5,i}=\left\{3(i+1)(g-i)-\frac{7g-3}{2}+2g\right\}+g= 3(i+1)(g-i)-\frac{g-3}{2}.\,\,\,\qed$$ 10. Generalized Index Theorem and Upper Bound {#generalized-index-theorem-and-upper-bound .unnumbered} ============================================= \[Index1\] For a general base curve $B$ and for the rank 2 vector bundle $V$ on $\widehat{Y}$, we have $9c_2(V)-2c_1^2(V)\geq 0.$ \[genindex\] The proof is identical to that of Theorem \[indextheorem\]. One considers the divisor $\eta$ on $\widehat{X}$ defined by $$\eta:=\left(\zeta+\frac{1}{3}\pi^*c_1(V)\right)\big|_ {\widehat{X}},$$ and shows that $\eta$ kills the pullback of any divisor on $\widehat{Y}$. In particular, $\eta$ kills an ample divisor on $\widehat{X}$. By the index theorem on $\widehat{X}$, $\eta^2 \leq 0$. From expression (\[genX\]), this can also be written as $9c_2(V)-2c_1^2(V)\geq 0.$ As in Section 7, the index theorem on $\widehat{X}$ suggests replacing the Bogomolov difference $4c_2(V)-c^2_1(V)$ by another linear combination of $c_2(V)$ and $c^2_1(V)$, which would behave in a more “predictable” way, namely, by $9c_2(V)-2c_1^2(V)$.
In the process of doing so, the only way to eliminate the unnecessary global terms $d$ and $c$ from a relation among $\lambda|_B$ and $\delta|_B$ is to form the difference $36(g+1)\lambda|_B-(5g+1)\delta|_B$. For a general base curve $B$ and an effective rational combination ${\mathcal}{E}^{\prime}$ of the boundary divisors $\Delta{\mathfrak{T}}_{k,i}$, not containing $\Delta{\mathfrak{T}}_0$, we have: $$36(g+1)\lambda|_B=(5g+1)\delta|_B+{\mathcal}{E}^{\prime}|_B+(g-3) \big(9c_2(V)-2c_1^2(V)\big).$$ \[indexrelation\] Note the apparent similarity between this relation and Prop. \[bogomolov1\]. One may use the latter to prove the former, but the calculations are not simpler than if one starts from scratch. We will sketch this proof, leaving the details to the reader, and referring to the proof of Prop. \[bogomolov1\] for comparison. We denote by $\mathfrak{S}^{\prime}$ the difference $$\mathfrak{S}^{\prime}:= 36(g+1)\lambda|_B-(5g+1)\delta|_B -(g-3)\left(9c_2(V)-2c_1^2(V)\right).$$ Substituting for $\delta|_B,\lambda|_B$ and $c_1^2(V)$ the corresponding identities from Prop. \[hatlambda\] and Example 8.1, and recalling that $c=g+2$ (cf. Lemma \[adjunction\]), we write $\mathfrak{S}^{\prime}$ as $$\begin{aligned} \mathfrak{S}^{\prime}&= &-\sum_E\left\{m_{\!\stackrel{\phantom{.}}{E}}\left(8\Gamma^2+ 8(g+2)\Gamma\Theta+9(g+1)\Theta^2\right)_{\!\stackrel{\phantom{.}}{E}}+6(g-1) \right\}\\ &&+\,(5g+1)\left(\sum_T \mu(T)+\sum_{{\operatorname}{ram}1}1 +\sum_{{\operatorname}{ram}2} 3\right).\end{aligned}$$ As in Lemma \[bogomolov1\], we group the above summands for every special fiber in $\widehat{X}$, and correspondingly, for every chain $T$ in $\widehat{Y}$.
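The cancellation of the global terms in $\mathfrak{S}^{\prime}$, together with the displayed per-component expression, can also be checked mechanically. The Python sketch below is an editorial addition, not part of the original argument: it uses the formula for $\lambda_{\widehat{X}}$ from Prop. \[hatlambda\], the closed form for $\delta$ from Prop. \[hatdelta\], and $c_1^2(V)=2(g+2)d-\sum_E m_E\Gamma_E^2$ from Example 8.1, with all remaining data treated as free integer parameters and the quasi-admissible and ramification sums abbreviated by `mu`, `r1`, `r2`.

```python
# Numeric sanity check: in S' the global terms d, c = g+2 and c_2(V)
# cancel, leaving the grouped per-component expression.
import random
from fractions import Fraction

def check_S_prime(trials=200):
    rng = random.Random(1)
    for _ in range(trials):
        g = rng.randint(3, 20)
        d, c2 = rng.randint(-9, 9), rng.randint(-9, 9)
        mu, r1, r2 = (rng.randint(0, 4) for _ in range(3))
        # per-component data (m_E, Gamma_E, Theta_E), possibly empty
        comps = [(rng.choice([1, 2]), rng.randint(-4, 4), rng.randint(-4, 4))
                 for _ in range(rng.randint(0, 6))]
        lam = d*(g + 1) - c2 - Fraction(1, 4)*sum(
            m*(2*G*G + 2*G*T + T*T) - 1 for m, G, T in comps)
        delta = 4*d*(2*g + 3) - 9*c2 - mu - r1 - 3*r2 - sum(
            m*(4*G*G + 2*G*T) - 3 for m, G, T in comps)
        c1sq = 2*(g + 2)*d - sum(m*G*G for m, G, _ in comps)  # Example 8.1
        s_prime = 36*(g + 1)*lam - (5*g + 1)*delta - (g - 3)*(9*c2 - 2*c1sq)
        grouped = -sum(
            m*(8*G*G + 8*(g + 2)*G*T + 9*(g + 1)*T*T) + 6*(g - 1)
            for m, G, T in comps) + (5*g + 1)*(mu + r1 + 3*r2)
        assert s_prime == grouped   # d, c and c_2(V) drop out
    return True
```

The check reduces to the identities $36(g+1)^2-4(5g+1)(2g+3)+4(g-3)(g+2)=0$ for the coefficient of $d$ and $-36(g+1)+9(5g+1)-9(g-3)=0$ for the coefficient of $c_2(V)$.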
Recall Corollary \[constants\], and the computations of the arithmetic genera $p_{\!\stackrel{\phantom{.}}{E}}$ in the previous section: $$\begin{aligned} \mathfrak{S}^{\prime} &\!\!=\!\!&(5g+1)\big(\sum_T \mu(T)+\sum_{{\operatorname}{ram}1} 1+\sum_{{\operatorname}{ram}2} 3\big)+\!\!\sum_{E,E^-{\operatorname}{red}}\!\!\left(8(p_{\!\stackrel{\phantom{.}}{E}}+2) (g-p_{\!\stackrel{\phantom{.}}{E}})-3(5g+1)\right)\\ &\!\!-\!\!\!\!&\sum_{E^-{\operatorname}{nonred}}\!\!\!\!\!\left(4 (p_{\!\stackrel{\phantom{.}}{E}}+2)^2+6(g-1)\right)\,+ \sum_{E{\operatorname}{nonred}}\!\!\left(4(p_{\!\stackrel{\phantom{.}}{E}}+5) (2g-1-p_{\!\stackrel{\phantom{.}}{E}})-12(2g+1)\right).\end{aligned}$$ With this at hand, it is not hard to calculate the contributions $d_{k,i}$ of each boundary component $\Delta{\mathfrak{T}}_{k,i}$ to the sum $\mathfrak{S}^{\prime}$: $$\begin{array}{|l|l|}\hline \!d_{1,i}\stackrel{\phantom{l}}{=}8(i+2)(g-i)\phantom{+1}\!-3(5g+1)\!& \!d_{4,i}=16(i+1)(g-i)-2(g-3)-3(5g+1)\\ \!d_{2,i}\stackrel{\phantom{l}}{=}8(i+2)(g-i)\phantom{+1}-\!2(5g+1)\!& \!d_{5,i}\stackrel{\phantom{l}}{=}16(i+1)(g-i)-2(g-3)-\phantom{3}(5g+1)\\ \!d_{3,i}\stackrel{\phantom{l}}{=}8(i+1)(g-i+1)-(5g+1)\!& \!d_{6,i}\stackrel{\phantom{l}}{=}24i(g-i)-(5g+1).\phantom{\big)}\\\hline \end{array}$$ \[d\_[k,i]{}table\] Let ${\mathcal}{E}^{\prime}=\sum_{k,i}d_{k,i}\Delta{\mathfrak{T}}_{k,i}$. Then $\mathfrak{S}^{\prime}={\mathcal}{E}^{\prime}|_B$, and the desired relation would be established if ${\mathcal}{E}^{\prime}$ is effective. Given the restrictions on the indices $i$ of the coefficients $d_{k,i}$ in Prop. \[boundary\], one easily shows that all $d_{k,i}>0.$ For a general base curve $B$, the slope satisfies: $$\frac{\delta}{\lambda}\leq \frac{36(g+1)}{5g+1},$$ with equality if and only if all fibers of $X$ are irreducible curves, and either $g=3$ or the divisor $\eta$ on the total space of ${X}$ is numerically zero.
\[genmaximal\] From the Index Theorem on $\widehat{X}$, it follows that $9c_2(V)-2c_1^2(V)\geq 0$. Since ${\mathcal}{E}^{\prime}$ is effective, ${\mathcal}{E}^{\prime}|_B\geq 0$. Then Prop. \[indexrelation\] implies $36(g+1)\lambda|_B\geq(5g+1)\delta|_B$, with equality exactly when $9c_2(V)-2c_1^2(V)=0$ and ${\mathcal}{E}^{\prime}|_B=0$. The latter means that $B\cap \Delta{\mathfrak{T}}_{k,i}=\emptyset$ because all coefficients $d_{k,i}$ of ${\mathcal}{E}^{\prime}$ are strictly positive. In other words, the family $\widehat{X}$ has only [*irreducible*]{} fibers (though $B$ may still meet $\Delta{\mathfrak{T}}_{0}$). This takes us back to Section 7, where we presented the global calculation on the triple cover $X\rightarrow Y$. There we concluded that the [*index condition*]{} $9c_2(V)-2c_1^2(V)=0$ was equivalent to $\eta\equiv 0$ on $X$ ($=\widehat{X}$), or the genus $g=3$. For a general base curve $B$, $$\begin{aligned} \vspace*{-1mm} \!36(g+1)\lambda|_B&\!\!\!\!=\!\!\!\!&(5g+1)\delta_0|_B+ (g-3)\left(9c_2(V)-2c_1^2(V)\right)\\ &\!\!\!\!+\!\!\!\!\!&\sum_{i=0}^{[(g-2)/2]}8(i+2)(g-i)\delta_{1,i}|_B+ \sum_{i=1}^{g-2}8(i+2)(g-i)\delta_{2,i}|_B\\ &\!\!\!\!+\!\!\!\!\!&\sum_{i=1}^{[g/2]}8(i+1)(g-i+1)\delta_{3,i}|_B+ \!\!\!\sum_{i=1}^{[(g-1)/2]}\!\!\!\!\! \big(16(i+1)(g-i)-2(g-3)\big)\delta_{4,i}|_B\\ &\!\!\!\!+\! \!\!\!&\sum_{i=1}^{g-1}\big(16(i+1)(g-i)-2(g-3)\big)\delta_{5,i}|_B+ \sum_{i=1}^{[g/2]}24i(g-i)\delta_{6,i}|_B.\end{aligned}$$ \[analog2\] \[page analog2\] [*Proof.*]{} We only need to substitute the known expressions for the divisors ${\mathcal}{E}^{\prime}$ and $\delta$ into Prop. \[indexrelation\]: $$36(g+1)\lambda|_B=(5g+1)\delta_0|_B+\sum_{k,i}\big((5g+1){\operatorname}{mult}_{\delta} (\delta_{k,i})+d_{k,i}\big)\delta_{k,i}|_B+(g-3)\big(9c_2(V)-2c_1^2(V)\big).$$ The rest is a simple calculation. 
For example, the total coefficient $\widetilde{d}_{3,i}$ of $\delta_{3,i}$ equals $$\begin{aligned} d_{3,i}+(5g+1){\operatorname}{mult}_{\delta}(\delta_{3,i})&=&\{8(i+1)(g-i+1)-(5g+1)\}+ (5g+1)\cdot 1\\ &=&8(i+1)(g-i+1).\,\,\,\qed\end{aligned}$$ 11. Extension to an Arbitrary Base $B$ {#extension-to-an-arbitrary-base-b .unnumbered} ====================================== \[arbitrary\] We now extend the results of Sect. 8-10 to arbitrary nonisotrivial families $X\!\rightarrow \!B$ with smooth trigonal general member. The essential case is when $B$ is [*not*]{} tangent to the boundary $\Delta{\mathfrak{T}}$, from which the remaining cases easily follow. The base curve $B$ not tangent to $\Delta\mathfrak{T}$ {#nontangentB} ------------------------------------------------------ We now drop the hypothesis of the base curve $B$ intersecting the boundary divisors in general points. Instead, for now we only assume that the base curve $B$ is not tangent to the boundary $\Delta{\mathfrak{T}}$. This means that all special fibers of $X$ locally look like the general ones (cf. Fig. \[coef1.fig\]–\[coef3.fig\]). Therefore, from the quasiadmissible cover $\widetilde{X}\rightarrow \widetilde{Y}$ we can construct an effective cover $\widehat{\phi}:\widehat{X}\rightarrow \widehat{Y}$ of [*smooth*]{} surfaces $\widehat{X}$ and $\widehat{Y}$. (The [*smoothness*]{} indicates that $B$ is [*not*]{} tangent to any $\Delta{\mathfrak{T}}_{k,i}$. Otherwise, there would be a higher local multiplicity $xy=t^n$ near a node of a special fiber $C_{X}$, $n> 1$. Hence $\widehat{X}$ would be obtained locally via a base change from a smooth surface, and $\widehat{X}$ would have a singular total space.) Now the special fibers of $\widehat{Y}$ are [*trees*]{} $T$ (rather than just chains) of reduced smooth rational curves with occasional nonreduced rational components of multiplicity 2. 
The latter occur again exactly for each singular point in $C_{\widetilde{X}}$ of ramification index 2 under the quasiadmissible cover $\widetilde{\phi}: \widetilde{X}\rightarrow\widetilde{Y}$ (cf. Fig. \[coef3.fig\]). The notation and conventions from Section \[conventions\] are also valid here. In particular, for any tree $T$, we fix one of its end (nonreduced) components to be its root $R$, and we define as before the functions $m,\theta,\gamma$ on the components $E$ of $T$. Moreover, since Lemma \[technical\] applies also to any tree $T$, the calculations of $\lambda_{\widehat{X}},\kappa_{\widehat{X}}$ and $\delta$ in Prop. \[hatlambda\] and Cor. \[hatdelta\] go through without any modifications. Finally, we wish to extend all results of Sections 8-10 over the new base $B$. The only difference arises in the final calculation of the coefficients $c_{k,i}$ and $d_{k,i}$. The fiber $C_X$ in $X$, corresponding to a tree $T$, may now lie in the intersection of [*several*]{} boundary divisors $\Delta{\mathfrak{T}}_{k,i}$. Such a trigonal curve $C_X$ is called a [*special boundary*]{} curve. Accordingly, its contribution $c_{\!\stackrel{\phantom{.}}{T}}$ to $\mathfrak{S}$ (or $d_{\!\stackrel{\phantom{.}}{T}}$ to $\mathfrak{S}^{\prime}$) will be distributed among these divisors $\Delta{\mathfrak{T}}_{k,i}$’s, rather than just yielding a single coefficient $c_{k,i}$ (or $d_{k,i}$) as before. $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=arbitrary.ps,width=1.6in,height=1.5in}} \hspace{-1mm}\end{array}}$$ This can be easily resolved. The idea is to replace any special singular fiber in $\widehat{X}$ by a suitable combination of [*general*]{} fibers, without changing the sums $\mathfrak{S}$ and $\mathfrak{S}^{\prime}$. 
We can imagine this as “moving” the base curve $B$ in $\overline{\mathfrak{T}}_g$ [*away from*]{} the special singular locus of $\overline{\mathfrak{T}}_g$, and replacing it with a [*general*]{} base curve $B^{\prime}$, as defined in Section 8. For example, in Fig. \[arbitrary1\] the base $B$ passes through a point $p$ in the intersection of two boundary divisors $\Delta\mathfrak{T}_{k,i}$. Two new general points $p_1$ and $p_2$, each lying on a single $\Delta\mathfrak{T}_{k,i}$, replace the special point $p$, and thus $B$ moves to a [*general*]{} curve $B^{\prime}$. Let $C_X$ be a special boundary curve in $\overline{\mathfrak{T}}_g$. Denote by $\alpha_{k,i}$ the degree of the point $[C_X]$ in the intersection $\Delta{\mathfrak{T}}_{k,i}\cdot B$. Then the contributions of $T=\widehat{\phi}(C_{\widehat{X}})$ to $\mathfrak{S}$ and to $\mathfrak{S}^{\prime}$ are $c_{\!\stackrel{\phantom{.}}{T}}=\sum_{k,i}\alpha_{k,i}c_{k,i}$ and $d_{\!\stackrel{\phantom{.}}{T}}=\sum_{k,i}\alpha_{k,i}d_{k,i}$, respectively. \[contributions d\_Tc\_T\] [*Proof:*]{} Rewrite $\mathfrak{S}$ and $\mathfrak{S}^{\prime}$ as sums over the non-root components $E$ of the special trees $T$: $$\begin{aligned} \mathfrak{S}&\!\!=\!\!& \sum_{E,E^-{\operatorname}{red}}F_1(p_{\!\stackrel{\phantom{.}}{E}})+ \sum_{E^-{\operatorname}{nonred}}\!\!\!\!\!F_2(p_{\!\stackrel{\phantom{.}}{E}}) +\sum_{E{\operatorname}{nonred}}\!\!F_3(p_{\!\stackrel{\phantom{.}}{E}}) +gH,\\ \mathfrak{S}^{\prime}&\!\!=\!\!& \sum_{E,E^-{\operatorname}{red}}G_1(p_{\!\stackrel{\phantom{.}}{E}})+ \sum_{E^-{\operatorname}{nonred}}\!\!\!\!\!G_2(p_{\!\stackrel{\phantom{.}}{E}}) +\sum_{E{\operatorname}{nonred}}\!\!G_3(p_{\!\stackrel{\phantom{.}}{E}})+(5g+1)H,\end{aligned}$$ where $H=\sum_T\mu(T)+\sum_{{\operatorname}{ram}1}1+\sum_{{\operatorname}{ram}2}3$ is the quasi-admissible and effective adjustment, and the functions $F_i$ and $G_j$ are quadratic polynomials in $p_{\!\stackrel{\phantom{.}}{E}}$ with linear coefficients in $g$. 
Recall that in these sums each non-root component $E$ appears exactly once, and $p_{\!\stackrel{\phantom{.}}{E}}$ is the arithmetic genus of the inverse image $\widehat{\phi}^*(T(E))$ of the subtree $T(E)$ generated by $E$. There is a simple way to recognize the boundary divisors $\Delta{\mathfrak{T}}_{k,i}$ in which a special trigonal fiber $C_X$ lies. Consider the corresponding “effective” fiber $C_{\widehat{X}}=\widehat{\phi}^*T$ in $\widehat{X}$. For any non-root component $E$ in $T$ there are two possibilities: either $\widehat{\phi}^*E$ and $\widehat{\phi}^*{E^-}$ are both reduced, or $E$ is part of a chain of length 3 or 5, constructed to resolve ramifications in the quasi-admissible fiber $C_{\widetilde{X}}$. $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=notchain.ps,width=3.8in,height=1.8in}} \hspace{-1mm}\end{array}}$$ ### Contributions to the degrees $\alpha_{1,i},\alpha_{2,i},\alpha_{3,i}$ Consider the first situation, and denote by $C^{\prime}$ the preimage $\widehat{\phi}^*E\cup \widehat{\phi}^*{E^-}$ in $\widehat{X}$. Thus, $C^{\prime}$ corresponds to a general member of $\Delta{\mathfrak{T}}_{1,i}$, $\Delta{\mathfrak{T}}_{2,i}$, or $\Delta{\mathfrak{T}}_{3,i}$, possibly of lower genus (cf. Fig. \[notchain\]). As part of the fiber $C_{\widehat{X}}$, the curve $C^{\prime}$ is represented for simplicity by two [*smooth*]{} trigonal curves meeting in three points (in $\Delta{\mathfrak{T}}_{1,i}$), but it could have corresponded to any general member of $\Delta{\mathfrak{T}}_{2,i}$ or $\Delta{\mathfrak{T}}_{3,i}$. The solid box encompasses the preimage $\widehat{\phi}^*T(E)$, and the dashed box encompasses the preimage of the rest, $\widehat{\phi}^*\big(T-T(E)\big)$. Each of these boxes represents a limit of a quasi-admissible curve, $C_1$ or $C_2$, which is naturally a triple cover of ${{\mathbf P}}^1$. Thus, we can [*“smoothen”*]{} each box to such a curve $C_i$. 
As a result we obtain a quasiadmissible curve $C_1\cup C_2$ of total genus $g$, which corresponds to a general member of $\Delta{\mathfrak{T}}_{1,i},\Delta{\mathfrak{T}}_{2,i}$ or $\Delta{\mathfrak{T}}_{3,i}$. Depending on which divisor $\Delta{\mathfrak{T}}_{k,i}$ is invoked, there is a corresponding contribution of $1$ to the coefficient $\alpha_{k,i}$: $[C_X]\in\Delta{\mathfrak{T}}_{k,i}$. Note that the arithmetic genus of $C_2$ is the previously defined $p_{\!\stackrel{\phantom{.}}{E}}$. The contribution of $E$ to $\mathfrak{S}$ is $F_1(p_{\!\stackrel{\phantom{.}}{E}})$ plus the possible quasi-admissible adjustment in $\mu(T)$ needed to obtain $\widetilde {\phi}^*(E\cup E^-)$. In view of the above “smoothening”, this can be thought of as the contribution of $C_2$ to the effective curve $C_1\cup C_2$, and by Prop. \[bogomolov1\] this is exactly the coefficient $c_{k,i}$. The same argument works in the case of $\mathfrak{S}^{\prime}$ from Prop. \[indexrelation\]. We conclude that $\alpha_{k,i}$ (for $k=1,2,3$) equals the number of $c_{k,i}$’s and $d_{k,i}$’s in $\mathfrak{S}$ and $\mathfrak{S}^{\prime}$, respectively. ### Contributions to the degrees $\alpha_{4,i},\alpha_{5,i},\alpha_{6,i}$ We treat analogously the remaining case when the component $E$ is part of a chain of length 3 or 5. Here, however, one must consider [*simultaneously all*]{} the components $E$ of $T$ participating in such a chain, and take a quasi-admissible limit only [*over the reduced*]{} curves in $C_{\widehat{X}}$. In Fig. \[chain\] one can see all three ramification cases, or equivalently, the boundary divisors $\Delta{\mathfrak{T}}_{4,i},\Delta{\mathfrak{T}}_{5,i}$ and $\Delta{\mathfrak{T}}_{6,i}$. For simplicity, we have again depicted the reduced components in $\widehat{X}$ by smooth trigonal curves, which may not always be true for every tree $T$: they could, for instance, be singular or reducible, but they will keep the ramification index 1 or 2 at the appropriate points. 
$${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=chain.ps,width=5.5in,height=2in}} \hspace{-1mm}\end{array}}$$ To see how $c_{k,i}$ and $d_{k,i}$ are obtained, let us calculate, for example, the contributions of $E_1,E_2,E_3$ and $E_4$ in the case of $\Delta{\mathfrak{T}}_{6,i}$. The inverse images in $\widehat{X}$ of $T-T(E_1)$ and $T(E_4)$ are marked by dashed and solid boxes, respectively. We [*“smoothen”*]{} each box by a smooth trigonal curve, $C_1$ or $C_2$, and keep the inverse images of $E_1$, $E_2$ and $E_3$. Thus, we obtain a general member ${C}^{\prime\prime}$ of $\Delta{\mathfrak{T}}_{6,i}$. The arithmetic genera, necessary to calculate the contribution of ${C}^{\prime\prime}$ to $\mathfrak{S}$, are given from right to left by: $$p_a(C_2)=p_{\!\stackrel{\phantom{.}}{E_4}},\,\, p_{\!\stackrel{\phantom{.}}{E_3}}=p_{\!\stackrel{\phantom{.}}{E_4}}\!\!-3,\,\, p_{\!\stackrel{\phantom{.}}{E_2}}=p_{\!\stackrel{\phantom{.}}{E_4}}\!\!-2,\,\, p_{\!\stackrel{\phantom{.}}{E_1}}=p_{\!\stackrel{\phantom{.}}{E_4}}\!\!-2.$$ As in the proof of Prop. \[bogomolov1\], we substitute these in the sum $\mathfrak{S}$, and for $i=p_{\!\stackrel{\phantom{.}}{E_4}}$ we obtain $$F_1(p_{\!\stackrel{\phantom{.}}{E_1}})+F_1(p_{\!\stackrel{\phantom{.}}{E_4}})+F_2(p_{\!\stackrel{\phantom{.}}{E_2}})+F_3(p_{\!\stackrel{\phantom{.}}{E_3}})+3g=\frac{9}{2} p_{\!\stackrel{\phantom{.}}{E_4}}(g-p_{\!\stackrel{\phantom{.}}{E_4}})- \frac{3}{2}(g-1)=c_{6,i}.$$ Combining all of the above results, we conclude that the contributions of any tree $T$ to the sums $\mathfrak{S}$ and $\mathfrak{S}^{\prime}$ are $c_{\!\stackrel{\phantom{.}}{T}}=\sum_{k,i}\alpha_{k,i}c_{k,i}\,\,{\operatorname}{and}\,\, d_{\!\stackrel{\phantom{.}}{T}}=\sum_{k,i}\alpha_{k,i}d_{k,i}.$ This allows us to extend all results of Sect. \[Bogomolov1\]–\[Index1\] to the case of a base curve $B$ meeting the boundary $\Delta{\mathfrak{T}}_g$ transversally. 
Extension to an arbitrary base $B$, not contained in $\Delta{\mathfrak{T}}_g$ {#extension} ----------------------------------------------------------------------------- If the base curve $B$ happens to be [*tangent*]{} to a boundary divisor $\Delta\mathfrak{T}_{k,i}$ at a point $[C_{X}]$, then over some node $p$ of the corresponding tree $T=\widehat{\phi}(C_{\widehat{X}})$ [*all*]{} local analytic multiplicities $m_q$ (cf. Sect. \[definition\]) will be multiplied by the degree of tangency of $B$ and $\Delta\mathfrak{T}_{k,i}$. Fig. \[Local multiplicities\] presents a few examples of possible fibers in $\widetilde{X}$: $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=tangent.ps,width=5.4in,height=1.2in}} \hspace{-1mm}\end{array}}$$ In the nonramification cases of $\Delta\mathfrak{T}_{1,i},\Delta\mathfrak{T}_{2,i}$ and $\Delta\mathfrak{T}_{3,i}$, this would force rational double points as singularities on the total spaces of $\widehat{X}$ and $\widehat{Y}$, whereas in the ramification cases of $\Delta\mathfrak{T}_{4,i},\Delta\mathfrak{T}_{5,i}$ and $\Delta\mathfrak{T}_{6,i}$, one may arrive at surfaces $\widehat{X}$ and $\widehat{Y}$ that are nonnormal over some nonreduced fibers. But in both cases, one can roughly view the corresponding fibers as being obtained by a base change from the general or special fibers of Sect. 8 and Sect. 11.1. Alternatively, one can go through the arguments of the paper for the new surfaces $\widehat{X}$ and $\widehat{Y}$ (normalizing, if necessary), and notice that all formulas (e.g. Euler characteristic formula for $\lambda$, Index theorem on $\widehat{X}$, adjunction formula in ${\mathbf P}V$, etc.) hold for surfaces with double point singularities. Thus, in effect, one may replace a given singular fiber $C_X$ by a collection of general boundary curves $C$, following the procedure described in Section \[nontangentB\]. Furthermore, if some of these general curves $C$ are “multiple” (i.e. 
$B$ is tangent to $\Delta\mathfrak{T}_{k,i}$ at $[C]$), one may in turn replace each $C$ by several “transversal” general boundary curves, and refer to the statements in Sections 8.3 and 9.3. The only notational difference in this approach will appear in the definition of the invariants $m,\theta$ and $\gamma$ from Sect. 8: now we have to allow them to be [*rational*]{}, rather than integral, due to possible rational intersections $E\cdot E^-$. This will be “compensated” in the final calculations, which will take into account the multiplicity of the corresponding fibers, and roughly speaking, will “multiply back” our invariants $\delta,\lambda$ and $\kappa$ by what they were divided by in the beginning of the calculations. This concludes the proof of our results for all families of stable curves $X\rightarrow B$ with general smooth trigonal members. Statements of the results for any family $X\rightarrow B$ {#results} --------------------------------------------------------- In the following list of results, Theorems \[7+6/g relation2\] and \[maximal relation2\] can be viewed as local trigonal analogs of Cornalba-Harris’s relation in the Picard group of the hyperelliptic locus $\overline{\mathfrak{I}}_g$ (cf. Theorem \[CHPic\]). Similarly, Theorem \[maximal bound2\] is the analog of the $8+4/g$ maximal bound in the hyperelliptic case (cf. Theorem \[CHX\]). For any family $X\rightarrow B$ of stable curves with smooth trigonal general member, if $V$ is the rank-2 vector bundle canonically associated to $X$, then the following relation holds: $$(7g+6)\lambda|_B=g\delta|_B+{\mathcal}{E}|_B+\frac{g-3}{2}\left(4c_2(V)-c_1^2(V) \right),$$ where ${\mathcal}{E}$ is an effective rational linear combination of boundary components of $\overline{\mathfrak{T}}_g$, not containing $\Delta{\mathfrak{T}}_0$. 
In particular, $$(7g+6)\lambda|_B=g\delta_0|_B+\sum_{k,i}\widetilde{c}_{k,i} \delta_{k,i}|_B+\frac{g-3}{2}\left(4c_2(V)-c_1^2(V)\right),$$ where $\widetilde{c}_{k,i}$ is a quadratic polynomial in $i$ with linear coefficients in $g$, and it is determined by the geometry of $\delta_{k,i}$ (cf. p. ). \[7+6/g relation2\] For any nonisotrivial family $X\rightarrow B$ of stable curves with smooth trigonal general member, if the vector bundle $V$ canonically associated to $X$ is Bogomolov semistable, then the slope of $X/\!_{\displaystyle{B}}$ is bounded from above by $$\frac{\delta}{\lambda}\leq 7+\frac{6}{g}\cdot\vspace*{-5mm}$$ \[7+6/g Bogomolov2\] For any family $X\rightarrow B$ of stable curves with smooth trigonal general member, if $V$ is the rank-2 vector bundle canonically associated to $X$, then the following relation holds: $$36(g+1)\lambda|_B=(5g+1)\delta|_B+{\mathcal}{E}^{\prime}|_B+(g-3) \big(9c_2(V)-2c_1^2(V)\big),$$ where ${\mathcal}{E}^{\prime}$ is an effective rational combination of the boundary divisors $\Delta{\mathfrak{T}}_{k,i}$, not containing $\Delta{\mathfrak{T}}_0$. In particular, $$36(g+1)\lambda|_B=(5g+1)\delta_0|_B+\sum_{k,i}\widetilde{d}_{k,i} \delta_{k,i}|_B+(g-3)\left(9c_2(V)-2c_1^2(V)\right),$$ where $\widetilde{d}_{k,i}$ is a quadratic polynomial in $i$ with linear coefficients in $g$, and it is determined by the geometry of $\delta_{k,i}$ (cf. p. ). \[maximal relation2\] For any nonisotrivial family $X\rightarrow B$ of stable curves with smooth trigonal general member, the slope of $X/\!_{\displaystyle{B}}$ satisfies: $$\frac{\delta}{\lambda}\leq \frac{36(g+1)}{5g+1},$$ with equality if and only if all fibers of $X$ are irreducible curves, and either $g=3$ or the divisor $\eta$ on the total space of ${X}$ is numerically zero. 
\[maximal bound2\] \[list of theorems\] What happens with the hyperelliptic locus $\overline{\mathfrak{I}}_g$ --------------------------------------------------------------------- As we promised in Section \[hyperelliptic locus\], we consider the contribution of the hyperelliptic locus to the above theorems. For any hyperelliptic curve $C$, we need to blow up a point on $C$ before it starts “behaving” like a trigonal curve in the quasi-admissible and effective covers. Below we have shown what happens to a smooth or general singular hyperelliptic curve (cf. Fig. \[hyperboundary\] for the admissible classification of the boundary locus $\Delta\mathfrak{I}_g$). $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=smoothhyper.ps,width=1in,height=1in}} \hspace{-1mm}\end{array}}$$ ### Smooth hyperelliptic curves We blow up $C$ at a point, and thus add a smooth rational component ${\mathbf P}^1$ to make it a triple cover $C^{\prime}$ (cf. Fig. \[smoothhyper\]). The quasi-admissible adjustment of $C$ is $\mu(C^{\prime})=1$. From here on, $C$ will behave essentially like a smooth trigonal curve. Therefore, in all relations $C$ is going to contribute $g$ or $(5g+1)$, depending on what $\delta$ is multiplied by. ### Singular hyperelliptic curves in $\Delta\mathfrak{T}_{2,i}$ and $\Delta\mathfrak{T}_{5,i}$ The necessary effective and quasi-admissible modifications are shown in Fig. \[singularhyper\]–47. In the first case, there are two hyperelliptic components intersecting transversally in two points. For the quasi-admissible cover, we need two “smooth” blow-ups, which makes $\mu=2$. From now on, this curve will behave like an element of $\Delta\mathfrak{T}_{2,i}$, where $\mu_{2,i}=1$. Thus, the coefficient in, say, the maximal bound relation will be: $\widetilde{d}_{2,i}+(5g+1)$, due to the extra blow-up in $\mu$. 
$${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=singularhyper.ps,width=5in,height=1.1in}} \hspace{-1mm}\end{array}}$$ In the second case, two hyperelliptic components meet transversally in one point, but have a ramification index 1 at this point when viewed as double covers. Fig. 46 presents first the quasi-admissible modification: as in the case of $\Delta\mathfrak{T}_{5,i}$, the local analytic multiplicity between the two rational components is $2$, which means that we must have made three “smooth” blow-ups and one “singular” blow-down. As a result, $\mu=3$. From here on, this curve behaves exactly as a general member of $\Delta\mathfrak{T}_{5,i}$. Recall that $\mu_{5,i}=2$, and the extra $1$ in the hyperelliptic case accounts for the one extra blow-up. Therefore, the coefficient of this fiber $C$, say, in the maximal bound relation, will be $\widetilde{d}_{5,i}+(5g+1)$. We conclude that a base curve $B$, passing through the hyperelliptic locus, will contribute to the results listed in Section \[results\] roughly $g$, or $(5g+1)$, times the number of elements in $B\cap\overline{\mathfrak{I}}_g$. We cannot write the latter in the form of a scheme-theoretic intersection, since $\overline{\mathfrak{I}}_g$ is of a larger codimension in $\overline{\mathfrak{T}}_g$. \[hypercalculations\] One can explain these extra summands in the expressions for $\lambda$ in the following way. Recall the projection map $pr_1:\overline{{\mathcal}{H}}_{3,g} \rightarrow \overline{\mathfrak{T}}_g$. The exceptional locus of $pr_1$ is the admissible boundary divisor $\Delta{{\mathcal}{H}}_{3,0}$, which is blown down to the codimension 2 hyperelliptic locus $\overline{\mathfrak{I}}_g$ inside $\overline{\mathfrak{T}}_g$. For calculation purposes, it will be easier to work instead with the space of minimal quasi-admissible covers $\overline{{\mathcal}Q}_{3,g}$, which replaces $\overline{{\mathcal}{H}}_{3,g}$. 
The same situation of a blow-down occurs, where the exceptional divisor in $\overline{{\mathcal}Q}_{3,g}$ consists of reducible curves $C^{\prime}$, as shown in Fig. \[smoothhyper\]. $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=exceptional.ps,width=1.3in,height=1.8in}} \hspace{-1mm}\end{array}}$$ Let $D$ be the linear combination of divisors in $\overline{\mathfrak{T}}_g$ given by the restriction $\Delta|_{\overline{\mathfrak{T}}_g}$, and consider a curve $B\subset \overline{\mathfrak{T}}_g$, intersecting the hyperelliptic locus in finitely many points. By abuse of notation, we denote by $pr_1$ the projection from $\overline{{\mathcal}Q}_{3,g}$ to $\overline{\mathfrak{T}}_g$. Then for the intersection $D\cdot B$ we have: $$D\cdot B=pr_1^*(D)\cdot pr_1^*(B)=pr_1^*(D)\cdot(\overline{B}+\sum E_j),$$ where $\overline{B}$ is the proper transform of $B$, and the $E_j$’s are the corresponding exceptional curves above $B$. Note that each $E_j$ is in fact a line ${\mathbf P}^1$ representing all possible quasi-admissible covers, arising from a hyperelliptic curve $[C]\in B\cap \overline{\mathfrak{I}}_g$. From Fig. \[smoothhyper\], these are the blow-ups of $C$ at a point, one for each involution pair $\{p_1,p_2\}\in g^1_2$, and that is why $E_j\cong{\mathbf P}^1$. The extra summands on p. , induced by the base curve $B$, are the result of the extra intersections $pr_1^*(D)\cdot E_j$ from above. Indeed, the relations, as they stand, compute only $pr_1^*(D)\cdot\overline{B}$, the component corresponding to families with general smooth members. From the calculations on p. , we expect that each $pr_1^*(D)\cdot E_j=1$, and this will account for the extra $1$ appearing in all $\mu$’s. To verify this, we only need to show $\delta|_{E_j}=1$. 
Since we cannot pick out canonically one point $p_i$ from each hyperelliptic pair $\{p_1,p_2\}$ on $C$, and thus construct a family of blow-ups at $p_i$ of $C$ over $E_j\cong {\mathbf P}^1$, we make a base change of degree two $C\rightarrow E_j$. $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=deltahyper.ps,width=1.1in,height=1.3in}} \hspace{-1mm}\end{array}}$$ We construct a family over $C$, corresponding to [*all*]{} blow-ups of $C$ at a point $p\in C$. This is simply the products $C\times C$ and ${\mathbf P}^1\times C$, identified along two sections $S_i$: $S_1$ is the diagonal on $C\times C$, and $S_2$ is a trivial section of ${\mathbf P}^1\times C$ over $C$ (cf. Fig. \[deltahyper.fig\]). From [@CH], for the base curve $C$ of this family, the degree $\delta|_C$ is computed as $$\delta|_C=\delta_{C\times C}+\delta_{{\mathbf P}^1\times C}+S_1^2+S_2^2= 0+0+2+0=2.$$ Taking into account the base change $C\rightarrow E_j$, $\delta|_{E_j}=1$. Finally, if we allow our families to have finitely many hyperelliptic fibers, we adjust the relation in Theorem \[7+6/g relation2\] by $g\Delta{{\mathcal}H}_{3,0}\cdot B$, and the relation in Theorem \[maximal relation2\] by $(5g+1)\Delta{{\mathcal}H}_{3,0}\cdot B$. The two bounds in Theorems \[7+6/g Bogomolov2\]-\[maximal bound2\] are unaffected by the above discussion. 12. Interpretation of the Bogomolov Index $4c_2-c_1^2$ via the Maroni Divisor {#interpretation-of-the-bogomolov-index-4c_2-c_12-via-the-maroni-divisor .unnumbered} ============================================================================= \[Bog-Maroni\] The Maroni invariant of trigonal curves {#Maroniinvariant} --------------------------------------- For any smooth trigonal curve $C$, consider the triple cover $f:C\rightarrow {{{\mathbf P}}^1}$. 
The pushforward $f_*({\mathcal}{O}_{C})$, as we noted before, is a locally free sheaf of rank 3 on ${{{\mathbf P}}^1}$, and hence decomposes into a direct sum of three invertible sheaves on ${{{\mathbf P}}}^1$: $$f_*({\mathcal}{O}_{C})={\mathcal}{O}_{{{\mathbf P}}^1}\oplus {\mathcal}{O}_{{{\mathbf P}}^1}(a)\oplus {\mathcal}{O}_{{{\mathbf P}}^1}(b).$$ The first summand is trivial due to the split exact sequence $$0\rightarrow {V}\rightarrow {f}_*{{\mathcal}O}_{C}\stackrel {{\operatorname}{tr}}{\rightarrow}{{\mathcal}O}_{{{\mathbf P}}^1}\rightarrow 0,$$ where $V={\mathcal}{O}_{{{\mathbf P}}^1}(a)\oplus{\mathcal}{O}_{{{\mathbf P}}^1}(b)$. From GRR, $a+b=g+2$. We have observed in Section 6 that $C$ embeds in the rational ruled surface ${{\mathbf P}}V={\mathbf F}_k$, for $k=|b-a|$. [**Definition 12.1.**]{} The [*Maroni invariant*]{} of an irreducible trigonal curve $C$ is the difference $|b-a|$. The [*Maroni locus*]{} in $\overline{\mathfrak{T}}_g$ is the closure of the set of curves with Maroni invariants $\geq 2$ (cf. \[Ma\]). For a general trigonal curve $C$ the vector bundle $V$ is [*balanced*]{}, i.e. the integers $a$ and $b$ are equal or differ by 1, according to the parity of $g$. \[gentrig\] Let $a\leq b$. The statement follows from a dimension count of the linear system $L=|3B_0+\frac{g+2}{2}F|$ on the ruled surface ${\mathbf F}_{b-a}={\mathbf F}_k$. Indeed, all trigonal curves with Maroni invariant $b-a$ are elements of $L$. If $p:{\mathbf F}_k\rightarrow {{\mathbf P}}^1$ is the projection map, the projective dimension of $L$ equals $$r(L)=h^0\big(p_*{\mathcal}{O}_{{\mathbf F}_k}(3B_0+\textstyle{\frac{g+2}{2}} F)\big)-1.$$ Denoting by $\widetilde{B}=B_0-\frac{k}{2}F$ the section of ${\mathbf F}_k$ with smallest self-intersection $-k$, we have $p_*{\mathcal}{O}_{{\mathbf F}_k}(\widetilde{B})\cong {\mathcal}{O}_{{{\mathbf P}}^1}\oplus {\mathcal}{O}_{{{\mathbf P}}^1}(-k)$. 
The necessary pushforward from above is: $$p_*{\mathcal}{O}_{{\mathbf F}_k}(3\widetilde{B}+\textstyle{\frac{g+2+3k}{2}}F)= {\operatorname}{Sym}^3({\mathcal}{O}_{{{\mathbf P}}^1}\!\!\oplus{\mathcal}{O}_{{{\mathbf P}}^1}(-k)) \otimes {\mathcal}{O}_{{{\mathbf P}}^1}({\textstyle{\frac{g+2+3k}{2}}})= \!\!\!\!\displaystyle{\bigoplus_{j=\pm 1,\pm 3}} \!\!\!\!{\mathcal}{O}_{{{\mathbf P}}^1}({\textstyle{\frac{g+2+jk}{2}}}).$$ Since an irreducible trigonal curve $C$ lies in $L$, we have $C\cdot \widetilde{B}\geq 0$, hence $g+2-3k\geq 0$ and $g\equiv k ({\operatorname}{mod}2)$. Evaluating the sections of this sum of sheaves, we obtain $r(L)=2g+7.$ The ruled surface ${\mathbf F}_k$ has automorphisms, inducing automorphisms of the linear system $L$. We need to mod out by these in order to obtain the dimension of the space of trigonal curves embedded in ${\mathbf F}_k$. The group ${\operatorname}{Aut}{\mathbf F}_k$ is a product (not necessarily direct) of the base automorphisms ${\operatorname}{Aut}{{\mathbf P}}^1={\operatorname}{PGL}_2$, and the projective automorphisms of the vector bundle $V$. The latter is an open set (up to projectivity) of the homomorphisms of $V$ into $V$, and hence has the same dimension as: $${\operatorname}{Hom}(V,V)\cong H^0(V\otimes V^{\vee})= H^0\big({\mathcal}{O}_{{{\mathbf P}}^1}(-k)\oplus{\mathcal}{O}_{{{\mathbf P}}^1}\oplus {\mathcal}{O}_{{{\mathbf P}}^1}\oplus{\mathcal}{O}_{{{\mathbf P}}^1}(k)\big).$$ For $k>0$, ${\operatorname}{dim}{\operatorname}{Aut}V=k+3$, while for $k=0$, ${\operatorname}{dim}{\operatorname}{Aut}V=4$. We conclude that the dimension of the set of trigonal curves with Maroni invariant $k$ is $$r(L)-{\operatorname}{dim}{\operatorname}{Aut}{\mathbf F}_k= \left\{\begin{array}{l} 2g+1\,\,{\operatorname}{if}\,\,k=0,\\ 2g+2-k\,\,{\operatorname}{if}\,\,k>0. \end{array}\right.$$ When $k=0$ or $k=1$, this space corresponds to an open dense set of $\overline{\mathfrak{T}}_g$. 
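For completeness, the evaluation of sections yielding $r(L)=2g+7$ is the following one-line count (only the arithmetic below is added; all ingredients appear in the computation above):

```latex
r(L)+1=h^0\Big(\bigoplus_{j=\pm 1,\pm 3}\mathcal{O}_{\mathbf{P}^1}\big(\tfrac{g+2+jk}{2}\big)\Big)
      =\sum_{j=\pm 1,\pm 3}\Big(\frac{g+2+jk}{2}+1\Big)
      =\frac{4(g+2)}{2}+4=2g+8,
```

since the terms $jk/2$ cancel in pairs and each twist is effective thanks to $g+2-3k\geq 0$; hence $r(L)=2g+7$.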
For an even $g$ a general trigonal curve has Maroni invariant $0$ and therefore embeds in ${\mathbf F}_0= {{\mathbf P}}^1\times{{\mathbf P}}^1$, while for an odd $g$ a general trigonal curve has Maroni invariant $1$ and embeds in ${\mathbf F}_1={\operatorname}{Bl}_{{\operatorname}{pt}}({\mathbf P}^2)$. In both cases, the vector bundle $V$ is balanced. For $g$ even, the Maroni locus is a divisor in $\overline{\mathfrak{T}}_g$ whose general member embeds in ${\mathbf F}_2$. For $g$ odd, the Maroni locus has codimension 2 in $\overline{\mathfrak{T}}_g$ and its general member embeds in ${\mathbf F}_3$. \[maronilocus\] [**Remark 12.1.**]{} It will be useful to identify precisely the group of automorphisms of the linear system $L$ for $k=0,1$. We have ${\operatorname}{Aut}({\mathbf P}^1\!\times {\mathbf P}^1)\cong PGL_2\times PGL_2\times {\mathbb Z}/2{\mathbb Z}$. The last factor comes from the exchange of the fiber and the base of ${\mathbf P}^1\times {\mathbf P}^1$ and it is relevant only for $g=4$: then $L=|3B_0+3F|$. Otherwise, for any even $g>4$: $${\operatorname}{Aut}L\cong PGL_2\times PGL_2.$$ When $g$ is odd, the ruled surface ${\mathbf F}_1$ can be thought of as the blow-up of ${\mathbf P}^2$ at the point $q=[0,0,1]$. Any automorphism of ${\operatorname}{Bl}_q{{\mathbf P}^2}$ carries the exceptional divisor $E_q$ of $\mathbf F_1$ to itself, and hence is induced by an automorphism of the plane preserving the point $q$. The group of such automorphisms of ${\mathbf P}^2$ is the subgroup of $PGL_3$ corresponding to matrices: $$\left(\begin{array}{ccc} a_{11} & a_{12} & 0\\ a_{21} & a_{22} & 0\\ a_{31} & a_{32} & a_{33} \end{array}\right).$$ Normalizing these matrices by the projective scaling (say, setting $a_{33}=1$), we easily identify for odd $g$: $${\operatorname}{Aut}L\cong \mathbf A^2\times GL_2.$$ Note that all of the above groups ${\operatorname}{Aut}L$ have dimension $6$, which was claimed already in Lemma \[gentrig\]. 
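As a cross-check of Lemma \[gentrig\] for $k=0,1$, the six-dimensional groups ${\operatorname}{Aut}L$ recover the expected dimension of the trigonal locus (the Riemann–Hurwitz count below is standard and is included here only for comparison):

```latex
r(L)-\dim\operatorname{Aut}L=(2g+7)-6=2g+1=(2g+4)-3=\dim\mathfrak{T}_g,
```

where $2g+4$ is the number of branch points of a simply branched triple cover $C\rightarrow{\mathbf P}^1$, and $3=\dim PGL_2$ accounts for the automorphisms of the base.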
Generators of Pic$_{\mathbb{Q}}\overline{\mathfrak{T}}_g$ {#generators} --------------------------------------------------------- The rational Picard group of $\overline{\mathfrak{T}}_g$, ${\operatorname}{Pic}_{\mathbb{Q}}\overline{\mathfrak{T}}_g$, is freely generated by the boundary classes $\delta_0$, $\delta_{k,i}$, and one additional class, which for even genus $g$ coincides with the Maroni class $\mu$. \[genPic\] Since a general trigonal curve $C$ embeds in the ruled surface ${\mathbf F}_k$ ($k=0,1$), $C$ is a member of the linear system $L=|3B_0+ \frac{g+2}{2}F|$ on ${\mathbf F}_k$. Let $U$ be the open set inside ${{\mathbf P}}L\cong {\mathbf P}^{2g+7}$ corresponding to the [*smooth trigonal*]{} members of $L$. The surjection $${\mathbb Z}={\operatorname}{Pic}{\mathbf P}^{2g+7}\twoheadrightarrow {\operatorname}{Pic}U$$ has a nontrivial kernel, because the set of singular trigonal curves in ${\mathbf F}_k$ is a divisor in ${{\mathbf P}}L$. Hence ${\operatorname}{Pic}U={\mathbb Z}/n{\mathbb Z}$ for some integer $n\!>\!0$, and ${\operatorname}{Pic}_{\mathbb{Q}}U\!=\!0$. The image of the natural projection map $p:U\rightarrow \overline{{\mathfrak{T}}}_g$ is the open dense set $W$ of smooth trigonal curves with lowest Maroni invariant $0$ or $1$. Let $F$ denote the fiber of $p$. From Remark 12.1, $$F\cong\left\{\begin{array}{l} PGL_2\times PGL_2\,\,{\operatorname}{if}\,\,g-{\operatorname}{even},g>4;\\ PGL_2\times PGL_2\times {\mathbb Z}/2{\mathbb Z}\,\,{\operatorname}{if}\,\,g=4;\\ {\operatorname}{Aut}L\cong {\mathbf A}^2\times GL_2\,\,{\operatorname}{if}\,\,g-{\operatorname}{odd}. \end{array}\right.$$ The Leray spectral sequence or other methods (cf. 
[@Gr-Ha; @Milne]) yield: $$H^1(W,f_*{{\mathcal}O}^*_U)\hookrightarrow H^1(U,{{\mathcal}O}^*_U).$$ Pushing the exponential sequence on $U$ to $W$: $$0\rightarrow {\mathbb Z}\rightarrow {{\mathcal}O}_U \rightarrow {{\mathcal}O}^*_U \rightarrow 0\,\, \Rightarrow \,\,0\rightarrow {\mathbb Z}\rightarrow {{\mathcal}O}_{W} \rightarrow f_*{{\mathcal}O}^*_U\rightarrow R^1\!\!f_*{\mathbb Z}.$$ Combining with the exponential sequence on $W$: $$0\rightarrow {{\mathcal}O}^*_{W} \rightarrow f_*{{\mathcal}O}_U^* \rightarrow R^1\!\!f_*{\mathbb Z}\,\, \Rightarrow\,\,H^1(W,{{\mathcal}O}^*_{W}) \stackrel{p^*}{\rightarrow}H^1(U,{{\mathcal}O}_U^*),$$ with ${\operatorname}{ker}p^*\subset H^0(W,R^1\!\!f_*{\mathbb Z})\subset H^1(F,{\mathbb Z})$. For even $g$, $H^1(F,{\mathbb Z})$ is torsion (a direct sum of copies of ${\mathbb Z}/2{\mathbb Z}$), but for odd $g$ it is isomorphic to ${\mathbb Z}$. Hence, for even $g$ we have the natural embedding $p^*:{\operatorname}{Pic}_{\mathbb Q}W\hookrightarrow {\operatorname}{Pic}_{\mathbb Q}U$, and in view of ${\operatorname}{Pic}_{\mathbb{Q}}U=0$, it follows that ${\operatorname}{Pic}_{\mathbb{Q}} W=0$. The complement of $W$ in $\overline{\mathfrak{T}}_g$ is the union of the boundary of $\overline{\mathfrak{T}}_g$ and the Maroni divisor. Therefore, $\delta_0$, $\delta_{k,i}$ and $\mu$ generate ${\operatorname}{Pic}_{\mathbb Q}\overline{\mathfrak{T}}_g$. Since the class of the Hodge bundle $\lambda$ is [*not*]{} a linear combination of the boundary classes (cf. p. ), the boundary divisors are [*not*]{} sufficient to generate the rational Picard group of $\overline{\mathfrak{T}}_g$, and $\mu$ must be linearly independent of them. We conclude that $\delta_0$, $\delta_{k,i}$, and $\mu$ freely generate ${\operatorname}{Pic}_{\mathbb Q}\overline{\mathfrak{T}}_g$ for even genus $g$. For odd $g$, $p^*:{\operatorname}{Pic}_{\mathbb{Q}}W\rightarrow {\operatorname}{Pic}_{\mathbb{Q}}U$ is either an inclusion, or has a kernel with one generator. 
Since the Maroni locus for odd $g$ is not a divisor, an inclusion would imply as above that $\lambda$ is a linear combination of the boundary classes, which is not true. Hence ${\operatorname}{ker}p^*={\mathbb Q}$, and ${\operatorname}{Pic}_{\mathbb{Q}}\overline{\mathfrak{T}}_g$ is freely generated by the boundary classes $\delta_0$ and $\delta_{k,i}$, together with one additional class. The Bogomolov condition and the Maroni divisor {#interpretation} ---------------------------------------------- For even genus $g$ and a base curve $B$, not contained in $\Delta{\mathfrak{T}}_g$: $$(7g+6)\lambda=g\delta_0+ \sum_{k,i}\widehat{c}_{k,i}\delta_{k,i}+2(g-3)\mu,$$ where $\widehat{c}_{k,i}$ are certain polynomial coefficients computed similarly to $\widetilde{c}_{k,i}$ (cf. p. ). \[Maroni2\] We set $g=2(m-1)$. Let us consider for now only families with irreducible trigonal fibers, i.e. the base curve $B$ intersects only the boundary component $\Delta_0{\mathfrak{T}}_g$. [*Case 1.*]{} If $B$ does not intersect the Maroni divisor $\mu$, then the Maroni invariant of the fibers in $X$ is constant, and equal to $0$. The fibers $C$ of $X$ embed in the projectivization ${{\mathbf P}}(V|_{F_Y})\cong {{\mathbf P}}^1\times {{\mathbf P}}^1$. Since ${\operatorname}{deg}V|_{F_Y} =g+2$ and $V$ is balanced, the restriction of $V$ to the fiber $F_Y$ on the ruled surface $Y$ is $$V|_{F_Y}={\mathcal}{O}_{{{\mathbf P}}^1}(m)\oplus {\mathcal}{O}_{{{\mathbf P}}^1}(m).$$ Moreover, $V|_{F_Y}$ does not jump as $F_Y$ moves, so that $V$ can be written as: $$V\cong h^*M\otimes{\mathcal}{O}_{Y}\left(mB_0\right)$$ for some vector bundle $M$ of rank 2 on $B$. But the Bogomolov index $4c_2(V)-c_1^2(V)$ is independent of twisting $V$ by line bundles, in particular, by ${\mathcal}{O}_{Y}\left(mB_0\right)$, so that $$4c_2(V)-c_1^2(V)=4c_2(h^*M)-c_1^2(h^*M)= 4c_2(M)-c_1^2(M)=0.$$ The last equality follows from $c_2(M)=0=c_1^2(M)$ for any bundle on the curve $B$. We conclude that $4c_2(V)-c_1^2(V)=4\mu|_B=0$. 
$${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=marone1.ps,width=1.5in,height=1.3in}} \hspace{-1mm}\end{array}}$$ [*Case 2.*]{} Now let $B$ intersect the Maroni divisor $\mu$ in [*finitely*]{} many points. Assume also that these points are [*general*]{} in $\mu$, i.e. they correspond to trigonal curves $C$ embeddable in the ruled surface ${\mathbf F}_2$. We twist $V$ by a line bundle $M= {\mathcal}{O}_{Y}\left(mB_0\right)$, and set $\widetilde{V}=V\otimes M$, so that ${\operatorname}{deg}\widetilde{V}|_{F_Y}=0$ and $$\,\,\,\,\left\{\begin{array}{l} \widetilde{V}|_{F_Y}= {\mathcal}{O}_{{{\mathbf P}}^1}\oplus {\mathcal}{O}_{{{\mathbf P}}^1}\,\,\, \phantom{(-1)(1)}{\operatorname}{when}\,\,F_Y\,\,{\operatorname}{is\,\,generic,}\\ \widetilde{V}|_{F_Y}={\mathcal}{O}_{{{\mathbf P}}^1}(-1)\oplus {\mathcal}{O}_{{{\mathbf P}}^1}(1)\,\,\,{\operatorname}{when}\,\,F_Y\,\,{\operatorname}{is\,\,special}. \end{array}\right.$$ Then $\widetilde{V}$ is the middle term of a short exact sequence on $Y$ $$0\rightarrow \widetilde{V}^{\prime} \rightarrow \widetilde{V} \rightarrow {\mathcal}{I} \rightarrow 0,$$ where $\widetilde{V}^{\prime}=h^*(h_*\widetilde{V})$ is a vector bundle of rank 2 on $Y$. In the notation of [@Br], let $W$ be the sum of the special fibers of $Y$, and let $Z$ be the union of certain isolated points on each member of $W$, so that ${\mathcal}{I}={\mathcal}{I}_{Z\subset W}$ is the ideal sheaf of $Z$ inside $W$. Note that the number of the special fibers, which comprise $W$, equals ${\operatorname}{deg}Z=\mu|_B$. We can now compute the Chern classes of $\widetilde{V}$: $$\left\{\begin{array}{l} c_1(\widetilde{V})=c_1(\widetilde{V}^{\prime})+W={\operatorname}{a\,\,sum\,\,of\,\, fibers\,\,of\,\,Y},\\ c_2(\widetilde{V})=c_2(\widetilde{V}^{\prime})+{\operatorname}{deg}Z={\operatorname}{deg}Z. 
\end{array}\right.$$ The last equality follows from the fact that $\widetilde{V}^{\prime}$ is the pull-back of a bundle on the curve $B$, hence of zero higher Chern classes. We conclude that $c_1^2(\widetilde{V})=0$, and $$4c_2({V})-c_1^2({V})= 4c_2(\widetilde{V})-c_1^2(\widetilde{V})=4{\operatorname}{deg}Z=4\mu|_B.$$ Putting the above two cases together, we have for any family with irreducible trigonal members, not entirely contained in the Maroni locus: $$4c_2({V})-c_1^2({V})=4\mu|_B.$$ Prop. \[genPic\] then implies that $\lambda$ is a linear combination of the boundary and the Maroni class: $$(7g+6)\lambda|_{\overline{\mathfrak{T}}_g}=g\delta_0+\sum_{k,i} \widehat{c}_{k,i}\delta_{k,i}+2(g-3)\mu,$$ where the coefficients $\widehat{c}_{k,i}$ are computed in a similar way, or by direct computation with families of singular trigonal curves (cf. [@CH]). We can combine the above results in the following For even $g$, ${\operatorname}{Pic}_{\mathbb{Q}}\overline{\mathfrak{T}}_g$ is freely generated by all boundary classes $\delta_0$ and $\delta_{k,i}$, and the Maroni class $\mu$. The class of the Hodge bundle on $\overline{\mathfrak{T}}_g$ is expressed in terms of these generators as the following linear combination: $$(7g+6)\lambda|_{\overline{\mathfrak{T}}_g}=g\delta_0+ \sum_{k,i}\widehat{c}_{k,i}\delta_{k,i}+2(g-3){\mu}.$$ \[Pic trigonal\] [**Remark 12.2.**]{} Note that the coefficients $\widehat{c}_{k,i}$ depend on the specific descriptions of the Maroni curves that appear in the boundary divisors $\Delta{\mathfrak{T}}_{k,i}$, and they are [*not*]{} always equal to the corresponding coefficients $\widetilde{c}_{k,i}$ in Theorem \[7+6/g relation2\]. Indeed, in the above Proposition, we have shown that $$4c_2({V})-c_1^2({V})=4\mu|_B+ \sum_{k,i}\alpha_{k,i}\delta_{k,i}, \label{alpha-coef}$$ for some $\alpha_{k,i}$, which may be non-zero. Hence, $\widehat{c}_{k,i}=\widetilde{c}_{k,i}+\frac{g-3}{2}\alpha_{k,i}$. 
For example, consider the case of $\Delta{\mathfrak{T}}_{1,i}$, and let $C=C_1\cup C_2$ be a general member of it. If $C$ is also Maroni, then there exists a family $X\rightarrow B$, whose general fiber is an irreducible Maroni curve, and one of whose special fibers is our $C$. We can assume, modulo a base change and certain blow-ups not affecting $C$, that this family fits in the basic construction diagram (cf.  Fig. \[general B\]). Let ${\mathbf R}_1$ and ${\mathbf R}_2$ be the two ruled surfaces in which $C_1$ and $C_2$ are embedded, and let $E_1$ and $E_2$ be the projections of $C_1$ and $C_2$ in the birationally ruled surface $\widehat{Y}$. Then $F=E_1+E_2$ is a special fiber of $\widehat{Y}$, with self-intersections $E_1^2=E_2^2=-1$. Now, the general member of $X$, being Maroni, is embedded in a ruled surface $\mathbf F_2$ with a section $L$ of self-intersection $-2$. The union of such $L$’s forms a surface in the 3-fold $\mathbf PV$, whose closure we denote by $S$. Evidently, $S\cong \widehat{Y}$, at least outside their special fibers. Let $S$ intersect ${\mathbf R}_1$ and ${\mathbf R}_2$ in curves $L_1$ and $L_2$ (over $E_1$ and $E_2$). We claim that at least one of ${\mathbf R}_1$ and ${\mathbf R}_2$ is [*not*]{} isomorphic to $\mathbf F_0=\mathbf P^1\times \mathbf P^1$. It will suffice to show that $L_1$ or $L_2$ has negative self-intersection. Indeed, suppose to the contrary that $L_m^2\geq 0$ in ${\mathbf R}_m$ ($m=1,2$). Note that $S\cdot \mathbf R_m=L_m$ in $\mathbf PV$, so that $$L_m^2=S|_{\mathbf R_m}\cdot S|_{\mathbf R_m}= S^2\cdot \mathbf R_m\,\, \Rightarrow\,\,S^2(\mathbf R_1+\mathbf R_2)\geq 0.$$ On the other hand, $\mathbf R_1+\mathbf R_2$ is the fiber of the projection $\mathbf PV\rightarrow \widehat{Y}$, and as such it is linearly equivalent to the general fiber $\mathbf F_2$. Hence $$0\leq S^2\cdot \mathbf F_2=S|_{\mathbf F_2}\cdot S|_{\mathbf F_2}=L^2=-2,$$ a contradiction. 
We conclude that if $C=C_1\cup C_2$ is a Maroni curve of boundary type $\Delta{\mathfrak{T}}_{1,i}$, then either $C_1$ or $C_2$ (or both) is embedded in a ruled surface $\mathbf F_k$ with $k\geq 1$. This already distinguishes the cases of odd and even genus $i$. When $i=g(C_1)$ is even (and hence $j=g(C_2)=g-i-2$ is also even), the general member of $\Delta{\mathfrak{T}}_{1,i}$ is embedded in a join of two $\mathbf F_0$’s (each $C_m\subset \mathbf F_0$), and hence it is [*not*]{} Maroni. Based on this observation, one can easily find the coefficient $\alpha_{1,i}$ for $i$-even. To do this, consider the birationally ruled surface $Y$ which is the blow-up of $\mathbf F_0$ at one point. Let again the two components of the special fiber of $Y$ be $E_1$ and $E_2$, and projectivize the trivial vector bundle $V=\mathcal{O}_Y\oplus \mathcal{O}_Y$: ${\mathbf P}V=Y\times \mathbf P^1$. By taking an appropriate linear system in ${\mathbf P}V$, one obtains a family of trigonal curves $X$, whose fibers are all irreducible and embedded in $\mathbf F_0$, except for a special reducible curve $C$ over $E_1\cup E_2$ of the type specified above. Hence none of $X$’s members are Maroni, and so $\mu|_B=0$. Further, $4c_2(V)-c_1^2(V)=0$, and $\delta_{1,i}|_B=1$, so that equation (\[alpha-coef\]) implies $\alpha_{1,i}=0$, and hence $\widehat{c}_{1,i}=\widetilde{c}_{1,i}$ for $i$-even. The situation is quite different when the genus $i$ is odd. Then both components of the general member $C$ of $\Delta{\mathfrak{T}}_{1,i}$ are embedded in $\mathbf F_1$’s, and hence $C$ is potentially Maroni. One can carry the above general intersection-theoretic argument on ${\mathbf P}V$ further, and show that the curves $L_1$ and $L_2$ are in fact both sections of negative self-intersection $-1$ in these $\mathbf F_1$’s: consider the product $S\cdot X \cdot {\mathbf F}_2$ and its variation over the special fiber of $\widehat{Y}$. 
But we know that $L_1$ and $L_2$ intersect, as the fiber of $S$ over $\widehat{Y}$ is connected. Thus, the curve $C$ would be Maroni if and only if the two corresponding ruled surfaces $\mathbf F_1$ are glued along one of their fibers so that their negative sections intersect on that fiber. (This description can be alternatively derived by considering the degenerations of the $g^1_3$’s on the irreducible Maroni curves.) To find $\alpha_{1,i}$ in this case, we construct an example similar to the one above, only changing $V$ to $\mathcal O_Y\oplus \mathcal O_Y(E_1)$. This, while keeping the general fiber embedded in $\mathbf F_0$, has the effect of embedding the special one in a “Maroni” gluing of two $\mathbf F_1$’s. We have $4c_2(V)-c_1^2(V)=-E_1^2=1$, $\mu|_B=1$, and $\delta_{1,i}|_B=1$, so that equation (\[alpha-coef\]) implies $\alpha_{1,i}=-3$ for $i$-odd, and hence $\widehat{c}_{1,i}=\widetilde{c}_{1,i}-\frac{3(g-3)}{2}$. $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=maroni.ps,width=1.5in,height=1.5in}} \hspace{-1mm}\end{array}}$$ One can similarly compute the remaining coefficients $\alpha_{k,i}$, by first figuring out which boundary curves in $\Delta_{k,i}\mathfrak{T}_g$ are Maroni, then constructing an appropriate vector bundle $V$, and finally using equation (\[alpha-coef\]) to compute $\alpha_{k,i}$, and hence $\widehat{c}_{k,i}$. 
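The Chern-class arithmetic behind the two values of $\alpha_{1,i}$ can be checked mechanically; a sketch (not part of the argument), with the intersection numbers $E_1^2=-1$, $\mu|_B$ and $\delta_{1,i}|_B$ taken from the two constructions above:

```python
def bogomolov_index(c1_sq, c2):
    """4c_2(V) - c_1^2(V) for a rank-2 bundle with the given Chern numbers."""
    return 4 * c2 - c1_sq

def alpha(c1_sq, c2, mu_B, delta_B):
    # restrict 4c_2(V) - c_1^2(V) = 4*mu + alpha_{1,i}*delta_{1,i} to B
    return (bogomolov_index(c1_sq, c2) - 4 * mu_B) // delta_B

# i even: V = O_Y + O_Y is trivial (c_1 = c_2 = 0), mu|_B = 0, delta|_B = 1
assert alpha(0, 0, 0, 1) == 0

# i odd: V = O_Y + O_Y(E_1) with E_1^2 = -1, so c_1^2 = -1, c_2 = 0,
# and the special fiber is a "Maroni" gluing: mu|_B = 1, delta|_B = 1
assert alpha(-1, 0, 1, 1) == -3
```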
For even $g$, if the base curve $B$ is not entirely contained in the Maroni divisor, and the singular members of $X$ belong only to $\Delta_0\mathfrak{T}_g \cup \Delta_{1,i}\mathfrak{T}_g$, then the slope of the family $X/_B$ satisfies: $$\frac{\delta}{\lambda}\leq 7+\frac{6}{g}.$$ \[Maroni inequality\] For even $g$, if the base curve $B$ is not entirely contained in the Maroni divisor, then the slope of the family $X/_B$ satisfies: $$\frac{\delta}{\lambda}\leq 7+\frac{6}{g}.$$ \[Maroni-conj\] The Maroni divisor and the maximal bound {#Maroni-maximal} ---------------------------------------- Even though for odd genus $g$ the Maroni locus is not large enough to be a divisor in $\overline{\mathfrak{T}}_g$, we can define a [*generalized Maroni*]{} divisor class by extending the relation from the even genus case. [**Definition 12.2.**]{} For any genus, we define the [*generalized Maroni*]{} class $\mu$ in ${\operatorname}{Pic}_{\mathbb{Q}}\overline{\mathfrak{T}}_g$ by $$\mu:=\frac{1}{2(g-3)}\big\{(7g+6)\lambda-g\delta_0- \sum_{k,i}\widehat{c}_{k,i}\delta_{k,i}\big\}.$$ The maximal bound $36(g+1)/(5g+1)$ is attained for a trigonal family of curves $X\rightarrow B$ if and only if all fibers of $X$ are irreducible and $$\delta_0|_B=-\frac{72(g+1)}{g+2}\mu|_B.$$ \[maximalmaroni\] The fact that $X$ must have only irreducible fibers in order to attain the maximal bound is already known from Theorem \[genmaximal\]. This means $\delta_{k,i}|_B=0$ for all $k,i$. Then, Theorem \[bogomolov1\] implies: $$(7g+6)\lambda|_B=g\delta_0|_B+2(g-3)\mu|_B.$$ Assume that the maximal bound is attained, i.e. $36(g+1)\lambda|_B=(5g+1)\delta_0|_B$. Substituting for $\lambda|_B$ in the above equation yields the desired equality. The converse follows similarly. [**Remark 12.3.**]{} In the even genus case, this equality has a specific meaning. 
Since the Maroni class $\mu$ corresponds to an effective divisor on $\overline{\mathfrak{T}}_g$, the equality (and hence the maximal bound) is achieved only for base curves $B$ entirely contained in the Maroni divisor, so that the restriction $\mu|_B$ can be negative. In fact, in all known examples, the base $B$ is contained in a very small sublocus of the Maroni locus, defined by the highest possible Maroni invariant. [**Remark 12.4.**]{} Theorem \[Pic trigonal\] and Prop. \[maximalmaroni\] do not have analogs in the hyperelliptic case: there is no additional Maroni divisor to generate ${\operatorname}{Pic}_{\mathbb Q}\overline{\mathfrak{I}}_g$ together with the boundary $\Delta\mathfrak{I}_g$. [**Remark 12.5.**]{} When $g=3$, there is no Maroni locus in $\overline{\mathfrak{T}}_3$ either. Indeed, since an irreducible trigonal curve of genus $3$ embeds only in ruled surfaces ${\mathbf F}_k$ with $k$ odd and $k\leq (g+2)/3=5/3$, then [*all*]{} irreducible trigonal curves embed in ${\mathbf F}_1$, and correspondingly they all have the lowest possible Maroni invariant $k=1$. However, ${\operatorname}{Pic}_{\mathbb Q}\overline{\mathfrak{T}}_3$ is not generated by the boundary classes of $\overline{\mathfrak{T}}_3$: as Prop. \[genPic\] asserts, in the odd genus case there is always one additional generating class. On the other hand, the results on p.  yield a priori [*two*]{} relations among $\lambda$ and the $\delta_{k,i}$’s. This would contradict the [*freeness*]{} of the generators above, unless these two relations are the same. This is in fact what happens: $$9\lambda=\delta_0+3\delta_{2,1}+3\delta_{3,1}+4\delta_{4,1}+4\delta_{5,1} +3\delta_{5,2}+3\delta_{6,1},$$ as restricted to any base curve $B\not\subset\Delta\overline{\mathfrak{T}}_3$. Note the convenient disappearance of the “extra” $(g-3)$–summands in the coefficients of $\delta_{4,i},\delta_{5,i}, \delta_{6,i}$. 
Then the maximal and the semistable ratios both equal $9$, and are attained for families with irreducible trigonal members. 13. Further Results and Conjectures {#further-results-and-conjectures .unnumbered} =================================== \[furtherresults\] Results and conjectures for $d$-gonal families, $d\geq 4$ --------------------------------------------------------- I have carried out some preliminary research in the $d$-gonal case, and while the methods and ideas for the trigonal case are in principle extendable, this appears to be a substantially more subtle and complex problem. More precisely, let $\overline{{\mathcal}{D}}_d$ be the closure in $\overline{\mathfrak{M}}_g$ of the stable curves expressible as $d$-sheeted covers of ${{\mathbf P}}^1$. One possible goal is to complete the program of describing generators and relations for the rational Picard groups ${\operatorname}{Pic}_{\mathbb{Q}}\overline{{\mathcal}{D}}_d$, and to find the exact maximal bounds for the slopes of $d$-gonal families. For example, I have obtained the following bound for the slope of a general tetragonal family with smooth general member (for odd genus $g$): $$\frac{\delta}{\lambda}\leq 6\frac{2}{3}+\frac{64}{3(3g+1)}= \frac{4(5g+7)}{3g+1}.$$ I have also conjectured formulas for the maximal and general bounds for any $d$-gonal and other families of stable curves. Entering these formulas are the [*Clifford index*]{} of curves, [*Bogomolov semistability*]{} conditions for higher rank bundles, and some new geometrically described loci in $\overline{{\mathcal}{D}}_d$. Generalizing the idea of the Maroni locus in the trigonal case, these loci are characterized, for example, in the tetragonal case by the dimensions of the multiples of the $g^1_4$-series. In particular, there will be another generator of ${\operatorname}{Pic}_{\mathbb Q}\overline{{\mathcal}{D}}_4$ besides the boundary and Maroni divisors. 
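The algebraic identity defining the tetragonal bound, and the numerical coincidences among the bounds discussed below, can be verified mechanically; a sketch (not part of the argument), with the bound formulas transcribed from the text:

```python
from fractions import Fraction

general = lambda g: Fraction(6) + Fraction(12, g + 1)
hyperelliptic = lambda g: Fraction(8) + Fraction(4, g)
trigonal = lambda g: Fraction(36 * (g + 1), 5 * g + 1)
tetragonal = lambda g: Fraction(4 * (5 * g + 7), 3 * g + 1)

# the two forms of the tetragonal bound agree:
# 6 + 2/3 + 64/(3(3g+1)) == 4(5g+7)/(3g+1)
for g in range(1, 50):
    assert Fraction(20, 3) + Fraction(64, 3 * (3 * g + 1)) == tetragonal(g)

# each special bound meets the general one at g = 1 (value 12) ...
assert general(1) == hyperelliptic(1) == trigonal(1) == tetragonal(1) == 12
# ... and exactly once more, at the genera g_d = 2, 3, 5 respectively:
assert hyperelliptic(2) == general(2) == 10
assert trigonal(3) == general(3) == 9
assert tetragonal(5) == general(5) == 8
```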
In the following I present some of these conjectures on the upper bounds for $\overline{{\mathcal}{D}}_d$. We start by comparing all known maximal and general bounds as functions of the genus $g$: $$\begin{array}{|c|c|c|c|c|c|} \hline\hline \stackrel{\vspace*{1mm}}{{\operatorname}{locus\,\, in \,\,}\overline{\mathfrak{M}}_g} &{\operatorname}{bound}& g=1& g=2& g=3& g=5\\ \hline\hline\vspace*{1mm} {\operatorname}{general}\,\overline{\mathfrak{M}}_g& \stackrel{\vspace*{1mm}}{\displaystyle{ 6+\frac{12}{g+1\vspace*{1mm}}}} & 12 & 10 &9 &8\\ \hline\vspace*{1mm} {\operatorname}{hyperelliptic}\,\overline{{\mathcal}{H}}_g=\overline{{\mathcal}{D}}_2& \stackrel{\vspace*{1mm}}{\displaystyle{8+ \frac{4}{g\vspace*{1mm}}}} & 12 & 10 & - &-\\ \hline\vspace*{1mm} {\operatorname}{trigonal}\,\overline{{\mathcal}{T}}_g=\overline{{\mathcal}{D}}_3 & \stackrel{\vspace*{1mm}}{\displaystyle{ \frac{36(g+1)}{5g+1 \vspace*{1mm}}}} & 12 & - & 9 &- \\ \hline\vspace*{1mm} {\operatorname}{gen.\,\,tetragonal}\,\,\overline{{\mathcal}{D}}_4 &\stackrel{\vspace*{1mm}}{\displaystyle{ \frac{4(5g+7)}{3g+1\vspace*{1mm}}}}& 12 & - & -&8\\ \hline \end{array}$$ The pattern appearing in this table is clear: the general bound $6+\displaystyle{12/(g+1)}$ coincides with each of the other bounds exactly twice for some special values of the genus $g$. Evidently, $g=1$ is one of these special values, yielding 12 everywhere. (I owe this observation to Benedict Gross.) Let $g_d$ be the other genus $g$ for which the general formula in $\overline{\mathfrak{M}}_g$ and the maximal formula for $\overline{{\mathcal}{D}}_d$ coincide, i.e. $g_2=2$, $g_3=3$, $g_4=5$. We notice that for these genera $g_d$ the moduli spaces $\overline{\mathfrak{M}}_2,\overline{\mathfrak{M}}_3$ and $\overline{\mathfrak{M}}_5$ consist only of hyperelliptic, trigonal or tetragonal curves, respectively. In general, [*Brill-Noether*]{} theory (cf. 
[@ACGH]) asserts that for a complete linear series $g^r_d$ with $r=1$ the expected dimension of the variety of $g^1_d$’s on a smooth curve of genus $g$ is $\rho=g-(r+1)(g-d+r)=2(d-1)-g,$ and hence the smallest genus $g$ for which $\overline{\mathfrak{M}}_g=\overline{{\mathcal}{D}}_d\supsetneq \overline{{\mathcal}{D}}_{d-1}$ is $g=2d-3$. Thus we set $g_d=2d-3$ for $d\geq 3$ and $g_2=2$. Note that this coincides with the previously found $g_3=3$ and $g_4=5$. If ${\mathcal}{F}_d(g)$ is an exact upper bound for the slopes of families of stable curves with smooth $d$-gonal general member (locus $\overline{{\mathcal}{D}}_d$), then $$\begin{aligned} &&(a)\,\,{\mathcal}{F}_d(1)=12.\\ &&(b)\,\,{\mathcal}{F}_d(g_d)=6+\displaystyle{\frac{12}{g_d+1}}\cdot\end{aligned}$$ \[conj2\] It is reasonable to expect that the upper bounds for $\overline{{\mathcal}{D}}_d$ will be ratios of linear functions of the genus $g$: ${\mathcal}{F}_d(g)=(Ag+B)/(Cg+D)$. Conjecture \[conj2\] then estimates the difference between ${\mathcal}{F}_d(g)$ and the general bound for $\overline{\mathfrak{M}}_g$ up to a factor $f_d=D/C$. The exact upper bounds ${\mathcal}{F}_d(g)$ are given by $${\mathcal}{F}_d(g)=6+\frac{12}{g+1}+6\frac{(1-f_d)(g-g_d)(g-1)}{(g+f_d)(g_d+1)(g+1)},$$ or equivalently, $${\mathcal}{F}_d(g)=6+\frac{6}{g+f_d}\left(1+f_d+\frac{1-f_d}{g_d+1}(g-1)\right).$$ I have a conjecture on how to determine the remaining factor $f_d$, which seems to be closely related to the coefficients of the linear expression in \[EMH\] for the divisor $\overline{{\mathcal}{D}}_{\frac{g+1}{2}}$ in terms of the Hodge bundle $\lambda$ and the boundary classes $\delta_i$ on $\overline{\mathfrak{M}}_g$. 
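The equivalence of the two expressions for ${\mathcal}{F}_d(g)$, and the fact that properties (a) and (b) of Conjecture \[conj2\] are built into them, can be checked on sample values; a sketch in which the chosen $g_d$ and $f_d$ are merely illustrative:

```python
from fractions import Fraction

def F_first(g, g_d, f_d):
    # F_d(g) = 6 + 12/(g+1) + 6(1-f_d)(g-g_d)(g-1)/((g+f_d)(g_d+1)(g+1))
    f_d = Fraction(f_d)
    return (6 + Fraction(12, g + 1)
            + 6 * (1 - f_d) * (g - g_d) * (g - 1)
            / ((g + f_d) * (g_d + 1) * (g + 1)))

def F_second(g, g_d, f_d):
    # equivalent form: 6 + 6/(g+f_d) * (1 + f_d + (1-f_d)(g-1)/(g_d+1))
    f_d = Fraction(f_d)
    return (6 + Fraction(6) / (g + f_d)
            * (1 + f_d + (1 - f_d) * (g - 1) / (g_d + 1)))

for g_d, f_d in [(2, 2), (3, Fraction(1, 2)), (5, Fraction(5, 3)), (7, 3)]:
    for g in range(1, 30):
        assert F_first(g, g_d, f_d) == F_second(g, g_d, f_d)
    # properties (a) and (b) of Conjecture [conj2] hold automatically:
    assert F_first(1, g_d, f_d) == 12
    assert F_first(g_d, g_d, f_d) == 6 + Fraction(12, g_d + 1)
```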
These conjectures are supported by the work of Cornalba-Harris on the [*hyperelliptic locus*]{} $\overline{{\mathcal}{H}}_g=\overline{{\mathcal}{D}}_2$, by the results of this paper on the [*trigonal locus*]{} $\overline{{\mathcal}{T}}_g=\overline{{\mathcal}{D}}_3$, and by partial results on the tetragonal locus $\overline{{\mathcal}{D}}_4$. In view of Remark 12.5, the equality between the maximal and semistable trigonal bounds for $g=3$ suggests that a similar situation might occur for other $d$-gonal families. It is reasonable to expect two or more “semistable” bounds, depending on the number of extra generators in ${\operatorname}{Pic}_{\mathbb Q}{\overline{{\mathcal}D}}_d$. One of these “semistable” bounds relates to families obtained as blow-ups of pencils of $d$-gonal curves on a ruled surface ${\mathbf F}_k$. Example 2.1 yields the maximal bound $8+4/g$ for hyperelliptic families (no extra generator besides the boundary classes), and a similar example in the trigonal case yields the $7+6/g$ semistable bound (one extra generator, the Maroni locus). We generalize this to any $d$-gonal family of curves embedded in an arbitrary ruled surface ${\mathbf F}_k$. Invariably, the slope of $X/\!_{\displaystyle{B}}$ is: $$\frac{\delta|_B}{\lambda|_B}=\left(6+\frac{2}{d-1}\right)+\frac{2d}{g}\cdot$$ Let $X$ be a family of $d$-gonal curves of genus $g$ whose base $B$ is not contained in a certain codimension 1 closed subset of $\overline{{\mathcal}D}_d$. Then the slope of $X/\!_{\displaystyle{B}}$ satisfies: $$\frac{\delta|_B}{\lambda|_B}\leq \left(6+\frac{2}{d-1}\right)+\frac{2d}{g}\cdot$$ \[clifford\] Conjectures \[clifford\]–4 are modifications of earlier conjectures of Joe Harris. A look at families with special $g^r_d$’s, $r\geq 2$ ---------------------------------------------------- The discussion so far was primarily concerned with the loci $\overline{{\mathcal}{D}}_d\subset \overline{\mathfrak{M}}_g$ corresponding to linear series $g^1_d$. 
But all of our problems are well-defined and quite interesting to solve for curves with series $g^r_d$ of dimension $r>1$. Equivalently, we consider the loci $\overline{{\mathcal}{D}}^r_d$ of curves mapping with degree $d$ to ${{\mathbf P}}^r$, $r\geq 1$. [**Definition 13.1.**]{} The [*Clifford index*]{} $\mathfrak {c}$ of a smooth curve $C$ is defined as $$\mathfrak{c}={\operatorname}{min}_L\left\{{\operatorname}{deg} {L} -2{\operatorname}{dim}{L}\right\}$$ where $L$ runs over all effective special linear series on $C$. Clifford’s theorem implies ${\mathfrak{c}}\geq 0$, with equality if and only if $C$ is hyperelliptic, i.e. ${L}=g^1_2$ (cf. [@ACGH]). On the other hand, ${\mathfrak{c}}=1$ means that there exists a $g^r_d$ on $C$ with $d-2r=1$. From Martens’ Theorem, ${\operatorname}{dim}W^r_d(C)\leq d-2r-1=0$, where $W^r_d$ is the variety parametrizing complete linear series on $C$ of degree $d$ and dimension at least $r$. Therefore, we must have ${\operatorname}{dim}W^r_d=0$. But then Mumford’s theorem asserts that $C$ is either trigonal, or bi-elliptic, or a smooth plane quintic. The bi-elliptic case would mean that $W^r_d$ consists of $g^2_6$’s, which contradicts ${\operatorname}{dim}W^r_d=0$. In short, $\mathfrak{c}=1$ if and only if $C$ is not hyperelliptic and possesses a $g^1_3$ or a $g^2_5$. Thus, according to the Clifford index, the first case with $r\geq 2$ is the space of plane quintics. Consider a general pencil of such quintics, and blow up the plane at its 25 base points. The resulting family $X={\operatorname}{Bl}_{25}{{\mathbf P}^2}\rightarrow{\mathbf P}^1$ is easily seen to have slope $8=7+6/g$ (here $g=6$), which corresponds to the bound in Conjecture \[clifford\] with $d-2$ replaced by the Clifford index $\mathfrak{c}=1$. 
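The specializations of the Clifford-index bound and the plane-quintic computation can be verified numerically; a sketch using standard facts about a general pencil of degree-$n$ plane curves ($\delta=3(n-1)^2$ singular fibers, $K_X^2=9-n^2$ for $X={\operatorname}{Bl}_{n^2}{\mathbf P}^2$, and Noether's formula $12\lambda=\kappa+\delta$ with $\kappa=K_X^2+4(2g-2)$ over a rational base):

```python
from fractions import Fraction

def clifford_bound(c, g):
    # (6 + 2/(c+1)) + (2c+4)/g, with c = d - 2 for a d-gonal curve
    return 6 + Fraction(2, c + 1) + Fraction(2 * c + 4, g)

# c = 0 recovers the hyperelliptic bound 8 + 4/g,
# c = 1 the trigonal bound 7 + 6/g:
for g in range(2, 30):
    assert clifford_bound(0, g) == 8 + Fraction(4, g)
    assert clifford_bound(1, g) == 7 + Fraction(6, g)

def plane_pencil_slope(n):
    # general pencil of degree-n plane curves, X = Bl_{n^2}(P^2) -> P^1:
    g = (n - 1) * (n - 2) // 2          # genus of a smooth plane curve
    delta = 3 * (n - 1) ** 2            # number of singular fibers
    kappa = (9 - n * n) + 4 * (2 * g - 2)
    lam = Fraction(kappa + delta, 12)   # Noether: 12*lambda = kappa + delta
    return Fraction(delta) / lam, g

# plane quintics: g = 6 and slope 8 = 7 + 6/6, as claimed above
assert plane_pencil_slope(5) == (8, 6)
assert plane_pencil_slope(5)[0] == clifford_bound(1, 6)
```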
Finally, note that for a $d$-gonal curve $C$ of genus $g$, by definition $\mathfrak{c}\leq d-2$, so that when $g\gg d$ we may generalize to: For a general family $X\rightarrow B$ of genus $g$ stable curves whose general member has Clifford index $\mathfrak{c}$ and whose base $B$ is a general curve in $\overline{{\mathcal}D}^r_d$, the slope of $X/\!_{\displaystyle{B}}$ satisfies: $$\frac{\delta_X}{\lambda_X}\leq\left(6+\frac{2}{\mathfrak{c}+1}\right)+ \frac{2\mathfrak{c}+4}{g}\,\,\,{\operatorname}{for}\,\,\mathfrak{c}<\!<g\cdot \label{clifford1}$$ [**Remark 13.1.**]{} It is worth noting that the stratification of $\overline{\mathfrak{M}}_g$, which we asked for in the Introduction, is not obtained via the Clifford index $\mathfrak{c}$. For example, Xiao constructs families of bi–elliptic curves $C$ with slope $8$ (cf. [@Xiao]), which is between the hyperelliptic and the trigonal maximal bounds. Since a bi–elliptic curve $C$ carries a $g^1_4$, this slope already exceeds the conjectured maximal bounds for the tetragonal case. This shows that in some of the above conjectures we have to exclude the subset of bi–elliptic curves from the tetragonal locus $\overline{{\mathcal}D}_4$, and that similar modifications might be necessary for the other loci $\overline{{\mathcal}D}_d$. More precisely, it seems plausible that the stratification of $\overline{\mathfrak{M}}_g$ according to successively lower slope bounds is related not just to the existence of a specific linear series $g^r_d$, but also to the number, dimension and description of the irreducible components of corresponding varieties $W^r_d$. Other methods via the moduli space $\overline{{\mathcal}{M}}_{g,n}({{\mathbf P}}^r,d)$ -------------------------------------------------------------------------------------- The approach in the $g^1_d$-cases is based on a modification of Harris-Mumford’s \[EHM\] [*Hurwitz scheme of admissible covers*]{}, which parametrizes the $d$-sheeted covers of stable pointed rational curves. 
However, in the more general situation for linear series with larger dimensions $r>1$, such a compactification via admissible covers does not exist, so we have to look for a different solution. Consider moduli spaces of stable maps $\overline{{\mathcal}{M}}_{g,n}({{\mathbf P}}^r,d)$. They parametrize [*stable*]{} maps $(C,p_1,p_2,...,p_n;\mu)$, where $C$ is a projective, connected nodal curve of arithmetic genus $g$, the $p_i$’s are marked distinct nonsingular points on $C$, and the map $\mu:C\rightarrow{{\mathbf P}}^r$ has image $\mu_*([C])=d[{\operatorname}{line}]$ and satisfies certain stability conditions (cf. [@K; @KM]). The space $\overline{{\mathcal}{M}}_{g,n}({{\mathbf P}}^r,d)$ seems to be the right compactification which we need in order to extend our results to families with $g^r_d$-series on the fibers: the moduli space of stable maps is somewhat more “sensitive” in describing our loci $\overline{{\mathcal}{D}}^r_d$ in terms of their geometry. Going back to the $g^1_d$-problems, one can also see the combinatorial flavor that stands in the background of these questions. It is probably not coincidental that the spaces $\overline{{\mathcal}{M}}_{g,n}({{\mathbf P}}^r,d)$ are also combinatorially defined and give rise to many enumerative problems. It will be useful to understand better the loci $\overline{{\mathcal}{D}}^r_d$ via their connection with the Kontsevich spaces $\overline{{\mathcal}{M}}_{g,n}({{\mathbf P}}^r,d)$, and ultimately to solve the remaining questions on ${\operatorname}{Pic}_{\mathbb{Q}}\overline{{\mathcal}{D}}^r_d$ for any $d,r$, as well as related interesting enumerative problems that will inevitably arise from such considerations. 14. 
Appendix: The Hyperelliptic Locus $\overline{\mathfrak{I}}_g$ {#appendix-the-hyperelliptic-locus-overlinemathfraki_g .unnumbered} ================================================================= \[hyperelliptic\] In this section we give a proof of Theorems \[theoremCHPic\] and \[CHX\], following the same ideas and methods as in the trigonal case. We refer the reader to previous sections for detailed proofs of certain statements. Boundary locus of $\overline{\mathfrak{I}}_g$ {#hyperellipticboundary} --------------------------------------------- Cornalba-Harris describe the boundary of $\overline{\mathfrak{I}}_g$ as consisting of several boundary components, whose general members and indexing are shown in Fig. \[hyperboundary\] (cf. [@CH]). The restriction of the divisor class $\delta$ to $\overline{\mathfrak{I}}_g$ is the following linear combination: $$\delta\big|_{\overline{\mathfrak{I}}_g}=\delta_0+2\sum_{i=1}^ {[(g-1)/2]}\xi_i+ \sum_{j=1}^{\left[g/2\right]}\delta_j, \label{boundaryrel}$$ where $\xi_i$ and $\delta_j$ are the classes in ${\operatorname}{Pic}_{\mathbb{Q}}\overline{\mathfrak{I}}_g$ of the boundary divisors $\Xi_i$ and $\Delta_j$. $${\begin{array}{c} \hspace{-1mm} \raisebox{-4pt}{\psfig{figure=hyper1.ps,width=4.5in,height=0.9in}} \hspace{-1mm}\end{array}}$$ $$\Xi_0;\,\,\Xi_i,\,{\scriptstyle {i=1,...,[(g-1)/2]}} ;\,\,\Delta_j,\,{\scriptstyle{j=1,...,[g/2]}}$$ Effective covers and embedding for hyperelliptic families {#embeddinghyperelliptic} --------------------------------------------------------- In the case of a hyperelliptic family $f:X\rightarrow B$, a minimal quasi-admissible cover coincides with the original family $X$, because no blow-ups need to be performed on the fibers of $X$: these are already quasi-admissible double covers. Thus, we have a degree 2 map $\phi=\widetilde{\phi}:X\rightarrow Y$ for some birationally ruled surface $Y$ over $B$. 
As for an effective cover $\widehat{\phi}:\widehat{X}\rightarrow \widehat{Y}$, only the boundary divisors $\Delta_i$ require blow-ups (cf. Fig. \[hyperboundary\]). This is analogous to the “ramification index 1” discussion in Fig. \[ram\]–\[resolve1\]. Thus, while in $\widehat{X}$ the special fibers may have occasional nonreduced rational components of multiplicity 2, the fibers of $\widehat{Y}$ are always trees of reduced smooth ${{\mathbf P}}^1$’s. In the case of a smooth hyperelliptic curve $C$, we consider the natural degree-2 map $f:C\rightarrow {{\mathbf P}}^1$. The pushforward $f_*{{\mathcal}{O}_C}$ is a rank 2 vector bundle on ${{\mathbf P}}^1$, which fits into the short exact sequence $$0\rightarrow {{\mathcal}{O}_{{{\mathbf P}}^1}(-g-1)}\rightarrow {f}_*{{\mathcal}O}_{C}\stackrel {{\operatorname}{tr}}{\rightarrow}{{\mathcal}O}_{{{\mathbf P}}^1}\rightarrow 0.$$ We can embed $C$ in the rational ruled surface ${{\mathbf P}}((f_*{\mathcal}{O}_C)\,\,\hat{})$. We generalize this construction to the effective cover $\widehat{\phi}:\widehat{X}\rightarrow \widehat{Y}$ by setting $V:=({\widehat{\phi}}_*{{\mathcal}O}_{\widehat{X}})\,\,\hat{}$. For some line bundle $E$ on $\widehat{Y}$: $$0\rightarrow {E}\rightarrow {\widehat{\phi}}_*{{\mathcal}O}_{\widehat{X}}\stackrel {{\operatorname}{tr}}{\rightarrow}{{\mathcal}O}_{\widehat{Y}}\rightarrow 0.$$ Then $\widehat{X}$ naturally embeds in the threefold ${{\mathbf P}}V$. Let $\pi:{{\mathbf P}}V\rightarrow \widehat{Y}$ be the corresponding projection map. The invariants $\lambda,\delta$ and $\kappa$ {#Hyperinvariants} -------------------------------------------- As a divisor in ${{\mathbf P}}V$, $\widehat{X}\equiv 2\zeta+\pi^*D$, for some divisor $D$ on $\widehat{Y}$. From the adjunction formula, $g={\operatorname}{deg}c_1(V)|_{F_{\widehat{Y}}}-1=c-1$, where $c_1(V)=cB_0+dF_Y$. 
The arithmetic genus of the inverse image $\widehat{\phi}^*T(E)$ is given by $$p_{E}=-m_{E} \left(\Gamma_{E}+ \Theta_{E}\right).$$ It turns out that these are the only differences between the set-up of the hyperelliptic and the trigonal case. The definitions of the functions $m$, $\Theta$ and $\Gamma$, as well as the formulas for $c_1(V)$, $K_{{{\mathbf P}}V}$, $c_2({{\mathbf P}}V)$ and the congruence $D\equiv 2c_1(V)$, are valid without any modifications. As in the trigonal case, it will be sufficient to consider only the cases when the base curve $B$ intersects [*transversally*]{} the boundary divisors of $\overline{\mathfrak{I}}_g$. But then for all non-root components $E$ in $\widehat{Y}$: $$m_{E}=1=\Theta_{E}\quad\text{and}\quad \Gamma_{E}=-(p_{E}+1).$$ We can now easily calculate the invariants of $X$. For any family $f:X\rightarrow B$ of hyperelliptic curves with smooth general member and a base curve $B$ intersecting transversally the boundary of $\overline{\mathfrak{I}}_g$:$$\begin{aligned} \lambda_X&\!\!=\!\!&dg+\frac{1}{2}\sum_{E\not =R}\Gamma_{E} (\Gamma_{E}+1),\\ \kappa_X&\!\!=\!\!&4d(g-1)-2\sum_{E\not =R}(\Gamma_{E}+1)^2+\sum_{{\operatorname{ram}}1}1,\\ \delta_X&\!\!=\!\!&4d(2g+1)+2\sum_{E\not = R} (\Gamma_{E}+1)(1-2\Gamma_{E})+\sum_{{\operatorname{ram}}1}1.\end{aligned}$$ \[hyperinvariants\] With this, we are ready to establish the linear relations among $\lambda|_B$ and the boundary restrictions $\delta_j|_B$ and $\xi_i|_B$. It is evident that in order to cancel the “global” term $d$, one must consider the difference $(8g+4)\lambda_X|_B-g\delta|_B$, which is the main idea of the next theorem.
There exists an effective linear combination ${\mathcal{E}}_h$ of the boundary divisors of $\overline{\mathfrak{I}}_g$, not containing $\Xi_0$, such that for any family $f:X\rightarrow B$ of hyperelliptic curves with smooth general member: $$(8g+4)\lambda_X|_B=g\delta|_B+{\mathcal{E}}_h|_B.$$ \[hyperrelation\] We consider the difference $$\begin{aligned} \mathfrak{S}_h&\!\!=\!\!&(8g+4)\lambda_X|_B-g\delta|_B= -2\sum_{E\not = R}(1+\Gamma_{E})(g+\Gamma_{E})+\sum_{{\operatorname{ram}}1}g\\ &\!\!=\!\!&2\sum_{E\not = R}p_{E}(g-1-p_{E})+\sum_{{\operatorname{ram}}1}g.\end{aligned}$$ In the hyperelliptic case, as opposed to the trigonal case, there is only [*one type*]{} of non-root components $E$, namely those for which both $E$ and $E^-$ are reduced. That is why there is just one type of summand in $\mathfrak{S}_h$. As in Section \[arbitrary\], it is sufficient to calculate the above sum for general members of $\Xi_{i}$ and $\Delta_j$, as described in Prop. \[Delta-k,i\], i.e. for a [*general*]{} base curve $B$. ### Contribution of the boundary divisors $\Xi_{i}$ {#hypercontribution1} This case is analogous to the case of $\Delta_{3,i}$ (cf.  Subsection \[contribution1\]). The arithmetic genus is $p_{E}=g-i-1$, and the corresponding summand in $\mathfrak{S}_h$ is $$e_i=2p_{E}(g-1-p_{E}) =2i(g-i-1)>0,$$ where $i=1,...,[(g-1)/2]$. ### Contribution of the boundary divisors $\Delta_{j}$ {#hypercontribution2} Compare this with the contribution of $\Delta_{5,j}$ (Subsection \[contribution2\]). There are two non-root components $E_1$ and $E_2$ in the special fiber of $\widehat{Y}$ ($E_1^-=R$), whose invariants are $p_{E_1}=g-j-1$ and $p_{E_2}=g-j$.
With the ramification adjustment of $g$, the contribution of $\Delta_j$ to the sum $\mathfrak{S}_h$ is $$f_j= 2p_{E_1}(g-1-p_{E_1})+ 2p_{E_2}(g-1-p_{E_2})+g =4j(g-j)-g>0,$$ where $j=1,...,[g/2]$. Finally, for the appropriate indices $i$ and $j$ we set $\displaystyle{{\mathcal{E}}_h:=\sum_{i>0}e_i\Xi_i+\sum_{j>0}f_j\Delta_j.}$ This is an effective combination of boundary divisors in $\overline{\mathfrak{I}}_g$, not containing $\Xi_0$ by construction, and satisfying $\mathfrak{S}_h={\mathcal{E}}_h|_B$. Theorem \[hyperrelation\] immediately implies the following. Let $f:X\rightarrow B$ be a nonisotrivial family with smooth general member. Then the slope of the family satisfies: $$\frac{\delta|_B}{\lambda|_B}\leq 8+\frac{4}{g}. \label{second8+4/g}$$ Equality holds if and only if the general fiber of $f$ is hyperelliptic, and all singular fibers are irreducible. It is now straightforward to prove the fundamental relation in ${\operatorname{Pic}}_{\mathbb{Q}}\overline{\mathfrak{I}}_g$, shown first in [@CH].
In Theorem \[hyperrelation\], we add to the coefficients $e_i$ and $f_j$ the corresponding multiplicities ${\operatorname{mult}}_{\delta}\xi_i$ and ${\operatorname{mult}}_{\delta}\delta_j$: $$\widetilde{e}_i=e_i+2\cdot g=2(i+1)(g-i),\,\,\, \widetilde{f}_j=f_j+1\cdot g=4j(g-j).$$ Using the fact that ${\operatorname{Pic}}_{\mathbb{Q}}\overline{\mathfrak{I}}_g$ is generated freely by the boundary classes $\xi_i$ and $\delta_j$ (see [@CH]), we obtain $$(8g+4)\lambda=g\xi_0+\sum_{i>0}\widetilde{e}_i\xi_i+ \sum_{j>0}\widetilde{f}_j\delta_j.$$ In the Picard group of the hyperelliptic locus, ${\operatorname{Pic}}_{\mathbb{Q}}\overline{\mathfrak{I}}_g$, the class of the Hodge bundle $\lambda$ is expressible in terms of the boundary divisor classes of $\overline{\mathfrak{I}}_g$ as: $$(8g+4)\lambda=g\xi_0+\sum_{i=1}^{[(g-1)/2]}2(i+1)(g-i)\xi_i +\sum_{j=1}^{[g/2]}4j(g-j)\delta_j.$$ \[CHPic2\] [\[ACGH\]]{} E. Arbarello, M. Cornalba, P.A. Griffiths, J. Harris, [*Geometry of Algebraic Curves*]{}, Vol. I, Springer-Verlag, New York, Berlin, Heidelberg, Tokyo, 1985. A. Beauville, [*L’Inégalité $p_g\geq 2q-4$ pour les Surfaces de Type Général,*]{} Bull. Soc. Math. France [**110**]{} (1982) 319-346. F. Bogomolov, [*Holomorphic Tensors on Projective Varieties,*]{} Math. USSR-Izv. [**13**]{} (1978) 499-555. W. Barth, C. Peters and A. Van de Ven, [*Compact Complex Surfaces,*]{} Springer-Verlag, New York, Berlin, Heidelberg, 1984. E. Brosius, [*Rank-2 Vector Bundles on a Ruled Surface I–II*]{}, Math. Ann. [**265-266**]{} (1983) 155-168/199-214. G. Casnati and T. Ekedahl, [*Covers of Algebraic Varieties I. A General Structure Theorem, Covers of Degree $3$, $4$ and Enriques Surfaces,*]{} J. Algebraic Geometry [**5**]{} (1996) 439-460. Z. Chen, [*On the Lower Bound of the Slope of a Non-hyperelliptic Fibration of Genus 4,*]{} Int. J. Math. [**4**]{}, No. 3 (1993) 367-378. M. Cornalba and J.
Harris, [*Divisor Classes Associated to Families of Stable Varieties, with Application to the Moduli Space of Curves,*]{} Ann. Sc. Ec. Norm. Sup. [**21**]{} (1988) 455-475. J. Harris and D. Mumford, [*On the Kodaira Dimension of the Moduli Space of Curves,*]{} Invent. Math. [**67**]{} (1982) 23-86. J. Harris, [*On the Kodaira Dimension of the Moduli Space of Curves, II. The Even Genus Case,*]{} Invent. Math. [**75**]{} (1984) 437-466. D. Eisenbud and J. Harris, [*The Kodaira Dimension of the Moduli Space of Curves of Genus $\geq 23$,*]{} Invent. Math. [**90**]{} (1987) 359-387. R. Friedman and D. Morrison, [*The Birational Geometry of Degenerations*]{}, Progress in Mathematics, Vol. 29, Birkhäuser, 1983. T. Fujita, [*On Kähler Fiber Spaces over Curves,*]{} J. Math. Soc. Japan [**30**]{} (1978) 779-794. P. Griffiths and J. Harris, [*Principles of Algebraic Geometry,*]{} John Wiley & Sons, Inc., New York, 1978. G. Harder and M. S. Narasimhan, [*On the Cohomology Groups of Moduli Spaces of Vector Bundles on Curves,*]{} Math. Ann. [**212**]{} (1975) 215-248. J. Harer, [*The Second Homology Group of the Mapping Class Group of an Orientable Surface,*]{} Invent. Math. [**72**]{} (1983) 221-239. J. Harer, [*The Cohomology of the Moduli Space of Curves,*]{} in Theory of Moduli, E. Sernesi, Ed., Lecture Notes in Math. [**1337**]{}, Springer-Verlag, Berlin, (1988) 138-221. J. Harris and I. Morrison, [*Notes on Moduli and Parameter Spaces of Curves*]{}, Springer-Verlag, 1997. R. Hartshorne, [*Algebraic Geometry,*]{} Graduate Texts in Mathematics 52, Springer-Verlag, New York, Heidelberg, Berlin, 1977. K. Konno, [*Non-hyperelliptic Fibrations of small genus and certain irregular canonical surfaces,*]{} Ann. Sc. Norm. Sup. [**20**]{} (1993) 575-595. K. Konno, [*A Lower Bound of the Slope of Trigonal Fibrations,*]{} preprint. K. Konno, [*Even Canonical Surfaces with small $K^2$*]{}, I, Nagoya Math. J., [**290**]{} (1993) 115-146. M.
Kontsevich, [*Enumeration of Rational Curves via Torus Action*]{}, in [*The Moduli Space of Curves*]{}, R. Dijkgraaf, C. Faber, and G. van der Geer, eds., Birkhäuser (1995) 335-368. M. Kontsevich and Yu. Manin, [*Gromov-Witten classes, Quantum Cohomology, and Enumerative Geometry*]{}, Commun. Math. Phys. [**164**]{} (1994) 525-562. A. Lascu, D. Mumford, D. Scott, [*The Self-Intersection Formula and the “formule-clef”,*]{} Math. Proc. Cambridge Philos. Soc. [**78**]{} (1975) 117-123. A. Maroni, [*Le Serie Lineari Speciali Sulle Curve Trigonali*]{}, Ann. Mat. Pura Appl. (4) [**25**]{} (1946) 343-354. J. Milne, [*Étale Cohomology,*]{} Princeton University Press, Princeton, NJ, 1980. D. Mumford, [*Stability of Projective Varieties,*]{} L’Ens. Math. [**23**]{} (1977) 39-110. D. Mumford, [*Towards Enumerative Geometry of the Moduli Space of Curves,*]{} in [*Arithmetic and Geometry*]{} [**II**]{}, M. Artin and J. Tate, Eds., Birkhäuser, Boston (1983) 271-328. M. Reid, [*Bogomolov’s Theorem $c_1^2\leq 4c_2$,*]{} Intl. Symp. on Alg. Geometry, Kyoto (1977) 623-642. G. Xiao, [*Fibered Algebraic Surfaces with Low Slope,*]{} Math. Ann. [**276**]{} (1987) 449-466. [Zvezdelina Stankova-Frenkel]{}\ Mathematical Sciences Research Institute\ 1000 Centennial Dr., Berkeley, CA 94720\ e-mail address: [stankova@msri.org]{}
--- author: - Alina Vdovina title: ' K-theory of some C\*-algebras and buildings.' --- Introduction {#introduction .unnumbered} ============ The class of the identity \[$\mathbf{1}$\] in $K_0$ of various classes of crossed product $C^*$-algebras has been widely investigated in the literature, see [@Co1], [@N], [@HN], [@AD], [@RS2], [@M]. We will concentrate on the case associated with two-dimensional Euclidean buildings. Let a group $\Gamma$ act simply transitively on the vertices of an $\Tilde{A}_2$ building $\Delta$. Then there is an induced action on the boundary $\Omega$ of $\Delta$; the crossed product algebra $C(\Omega) \rtimes \Gamma$ depends only on $\Gamma$ and is classified by its $K$-groups together with the class \[$\mathbf{1}$\] in $K_0$ of the identity element $\mathbf{1}$ of $C(\Omega) \rtimes \Gamma$. It is therefore interesting to identify this class. We will consider a special class of $\Tilde{A}_2$ groups $\Gamma_{\mathcal{T}_0}$, described in [@Cart], which embed as arithmetic subgroups of $PGL(3,\mathbf{F}_q(X))$. For this class of groups we prove the following result, which was conjectured for all $\Tilde{A}_2$ groups in [@RS2]. [**Theorem.**]{} The order of the class \[$\mathbf{1}$\] of the identity element $\mathbf{1}$ of $C(\Omega) \rtimes \Gamma$ in $K_0(C(\Omega) \rtimes \Gamma)$ is $q-1$, where $\Gamma$ is a $\Gamma_{\mathcal{T}_0}$ group and $q \not\equiv 1 \pmod 3$. Polygonal presentation and construction of polyhedra. {#polygonal-presentation-and-construction-of-polyhedra. .unnumbered} ===================================================== A [*polyhedron*]{} is a two-dimensional complex which is obtained from several oriented $p$-gons by identification of corresponding sides. Take a sphere of small radius centered at a point of the polyhedron. The intersection of the sphere with the polyhedron is a graph, which is called the [*link*]{} at this point.
We consider the case when all faces of the polyhedron are regular Euclidean triangles and the links at all vertices are incidence graphs of finite projective planes. The universal covering of such a polyhedron is a Euclidean $\Tilde{A}_2$ building [@BB], [@Ba], and with the metric introduced in [@BBr, p. 165] it is a complete metric space of non-positive curvature in the sense of Alexandrov and Busemann. It follows from [@BS] that the fundamental groups of our polyhedra satisfy property (T) of Kazhdan. (Another relevant reference is [@Z].) [**Definition.**]{} Let $\mathcal{P}$ be a tessellation of the Euclidean plane by regular triangles, with angle $\pi/3$ at each vertex. A [*Euclidean*]{} $\Tilde{A}_2$ [*building*]{} is a polygonal complex $X$ which can be expressed as the union of subcomplexes called apartments such that: - Every apartment is isomorphic to $\mathcal{P}$. - For any two polygons of $X$, there is an apartment containing both of them. - For any two apartments $A_1, A_2 \in X$ containing the same polygon, there exists an isomorphism $ A_1 \to A_2$ fixing $A_1 \cap A_2$. Recall that a [*generalized $m$-gon*]{} is a connected, bipartite graph of diameter $m$ and girth $2m$, in which each vertex lies on at least two edges. A graph is [*bipartite*]{} if its set of vertices can be partitioned into two disjoint subsets such that no two vertices in the same subset lie on a common edge. The vertices of one subset we will call black vertices and the vertices of the other subset the white ones. The [*diameter*]{} is the maximum distance between two vertices and the [*girth*]{} is the length of a shortest circuit. Incidence graphs of finite projective planes are exactly the generalized triangles. We recall the definition of a polygonal presentation introduced in [@V]. [**Definition.**]{} Suppose we have $n$ disjoint connected bipartite graphs $G_1, G_2, \ldots G_n$.
Let $P_i$ and $Q_i$ be the sets of black and white vertices respectively in $G_i$, $i=1,\dots,n$; let $P=\bigcup P_i$, $Q=\bigcup Q_i$, $P_i \cap P_j = \emptyset$, $Q_i \cap Q_j = \emptyset$ for $i \neq j$ and let $\lambda$ be a bijection $\lambda: P\to Q$. A set $\mathcal{K}$ of $k$-tuples $(x_1,x_2, \ldots, x_k)$, $x_i \in P$, will be called a [*polygonal presentation*]{} over $P$ compatible with $\lambda$ if - $(x_1,x_2,x_3, \ldots ,x_k) \in \mathcal{K}$ implies that $(x_2,x_3,\ldots,x_k,x_1) \in \mathcal{K}$; - given $x_1,x_2 \in P$, then $(x_1,x_2,x_3, \ldots,x_k) \in \mathcal{K}$ for some $x_3,\ldots,x_k$ if and only if $x_2$ and $\lambda(x_1)$ are incident in some $G_i$; - given $x_1,x_2 \in P$, then $(x_1,x_2,x_3, \ldots ,x_k) \in \mathcal{K}$ for at most one $x_3 \in P$. If there exists such a $\mathcal{K}$, we will call $\lambda$ a [*basic bijection*]{}. We can associate a polyhedron $K$ on $n$ vertices with each polygonal presentation $\mathcal{K}$ as follows: for every cyclic $k$-tuple $(x_1,x_2,x_3,\ldots,x_k)$ we take an oriented $k$-gon with the word $x_1 x_2 x_3\ldots x_k$ written on its boundary. To obtain the polyhedron we identify the corresponding sides of our polygons, respecting orientation. [**Lemma [@V].**]{} A polyhedron $K$ which corresponds to a polygonal presentation $\mathcal{K}$ has the graphs $G_1, G_2, \ldots, G_n$ as links. Balanced polygonal presentation =============================== We will use a particular case of polygonal presentation, the so-called triangle presentation, described in [@Cart]. We now repeat the construction from [@Cart] for completeness. Consider the Desarguesian projective plane $(P,L)=PG(2,q)$ of prime power order $q$, in which the points and lines are 1- and 2-dimensional subspaces, respectively, of a 3-dimensional vector space $V$ over $\mathbf{F}_q$, with incidence being inclusion. We may take $V=\mathbf{F}_{q^3}$.
Consider the nondegenerate bilinear form $(x_0,y_0) \mapsto Tr(x_0y_0)$ on $\mathbf{F}_{q^3}$, where $Tr$ is the trace of the field extension $\mathbf{F}_{q^3}/\mathbf{F}_q$. For $x \in P$, set $$\lambda_0(x)=\{y \in P: Tr(xy)=0\}.$$ This defines a point-line correspondence $\lambda_0: P \to L$. The following set of triples is a triangle presentation $\mathcal{T}_0$ compatible with $\lambda_0$: $$\mathcal{T}_0=\{(x, x\xi, x\xi^{q+1})\,|\,x, \xi \in P,\, Tr(\xi)=0\}.$$ It is convenient to denote the elements of $P$ by letters of an alphabet $\mathcal{X}=\{x_1, \dots, x_{q^2+q+1}\}$. We now describe a new polygonal presentation $\mathcal{T}_1$. Take an alphabet $\mathcal{Y}=\mathcal{A}\cup\mathcal{B}\cup\mathcal{C}$, where every subalphabet $\mathcal{A}, \mathcal{B}, \mathcal{C}$ contains $q^2+q+1$ elements: $\mathcal{A}=\{a_i\}$, $\mathcal{B}=\{b_i\}$, $\mathcal{C}=\{c_i\}$, $i=1, \dots, q^2+q+1$. Define $\mathcal{T}_1$ as the following set of triples: $$\{(a_k,b_l,c_m),\, (b_k, c_l, a_m),\, (c_k,a_l, b_m) \in \mathcal{T}_1 \iff (x_k, x_l, x_m) \in \mathcal{T}_0\}$$ Define bijections $\lambda_1$, $\lambda_2$, $\lambda_3$ by $\lambda_1(x_i)=a_i$, $\lambda_2(x_i)=b_i$, $\lambda_3(x_i)=c_i$. [**Lemma 1.**]{} There exists a subset of $\mathcal{T}_1$ in which every element of $\mathcal{Y}$ occurs exactly once. We will call such a subset $\mathcal{S}$ a [*basic*]{} subset. A polygonal presentation containing a basic subset will be called a [*balanced*]{} presentation. [**Proof.**]{} Consider an element $x \in P$ which generates $P$ as a cyclic group. Fix $\xi \in P$ such that $Tr(\xi)=0$. Now consider the following set of triples: $(a_i,b_i,c_i)$, $i=1,...,q^2+q+1$, where $a_i=\lambda_1(x^i)$, $b_i=\lambda_2(x^i\xi)$, $c_i=\lambda_3(x^i \xi^{q+1})$.
Subshift of a balanced polygonal presentation ============================================= Let $\mathcal{T}$ be a polygonal presentation with $n=3$, $k=3$, where all three graphs $G_1$, $G_2$ and $G_3$ are incidence graphs of finite projective planes of order $q$. The polyhedron which corresponds to $\mathcal{T}$ has triangular faces and three vertices. We will consider polyhedra such that all three vertices of each triangle have different graphs as links. In this case we can give a Euclidean metric to every face. In this metric all sides of the triangles are geodesics of the same length. The universal covering of the polyhedron is a Euclidean building $\Delta$, see [@BB], [@Ba]. Each element of $\mathcal{T}$ may be identified with an oriented basepointed triangle in $\Delta$. We now construct a 2-dimensional shift system associated with $\mathcal{T}$. The transition matrices $M_1, M_2$ are defined as in [@RS2, p. 828]: if $\alpha=(x_1,x_2,x_3), \beta=(y_1, y_2, y_3) \in \mathcal{T}$, say that $M_1( \beta, \alpha)=1$ if and only if there exists $\psi=(x_3,z,y_1)$, and $M_1( \beta, \alpha)=0$ otherwise (Figure \[fig-1\]). In a similar way, $M_2(\gamma, \alpha)=1$ for $\alpha=(x_1,x_2,x_3), \gamma=(y_1,y_2,y_3)$ if and only if there exists $\psi=(x_2,y_1,z)$, and $M_2(\gamma, \alpha)=0$ otherwise.
(Figure \[fig-1\]: the basepointed triangles $\alpha=(x_1,x_2,x_3)$ and $\beta=(y_1,y_2,y_3)$, resp. $\gamma=(y_1,y_2,y_3)$, together with the intermediate triangle through the side $z$, illustrating the conditions $M_1(\beta,\alpha)=1$ and $M_2(\gamma,\alpha)=1$.) \[fig-1\] The matrices $M_1, M_2$ of order $3(q+1)(q^2+q+1) \times 3(q+1)(q^2+q+1)$ are nonzero $\{0,1\}$ matrices. We will use $\mathcal{T}$ as an alphabet and $M_1, M_2$ as transition matrices to build up 2-dimensional words as in [@RS1]. Let $[m,n]$ denote $\{m, m+1,...,n\}$, where $m \leq n$ are integers. If $m,n \in \mathbb{Z}^2$, say that $m \leq n$ if $m_j \leq n_j$ for $j=1,2$, and when $m \leq n$ let $[m,n]=[m_1,n_1] \times [m_2,n_2]$. In $ \mathbb{Z}^2$, let $0$ denote the zero vector and let $e_j$ denote the $j$-th standard unit basis vector. If $m \in \mathbb{Z}_+^2=\{m \in \mathbb{Z}^2; m \geq 0\}$, let $W_m=\{w:[0,m] \to \mathcal{T};\ M_j(w(l+e_j),w(l))=1$ whenever $l,l+e_j \in [0,m]\}$ and call the elements of $W_m$ words. In order to apply the theory from [@RS1] we need the matrices $M_1,M_2$ to satisfy the following conditions: (H0) Each $M_i$ is a nonzero $\{0,1\}$-matrix. (H1a) $M_1M_2=M_2M_1$. (H1b) $M_1M_2$ is a $\{0,1\}$-matrix.
(H2) The directed graph with vertices $\alpha \in \mathcal{T}$ and directed edges $(\alpha,\beta)$ whenever $M_i(\alpha, \beta)=1$ for some $i$ is irreducible. (H3) For any nonzero $p \in \mathbb{Z}^2$, there exists a word $w \in W$ which is not $p$-periodic, i.e., there exists $l$ so that $w(l)$ and $w(l+p)$ are both defined but not equal. In [@RS1] a $C^*$-algebra is defined by partial isometries associated with the system of words $W_m$, $m \in \mathbb{Z}_+^2$. It is proved there that if the matrices $M_1,M_2$ satisfy the conditions (H0), (H1a,b), (H2), (H3), then this algebra is simple, purely infinite and nuclear. Now we prove the conditions (H0), (H1a,b), (H2), (H3) for our two-dimensional shift. By definition, the matrices $M_1,M_2$ are nonzero $\{0,1\}$ matrices, so (H0) holds. If we have $\alpha$, $\beta$, $\psi$ such that $M_1(\alpha, \beta)=1$, $M_2(\beta, \psi)=1$, then the $\gamma$ such that $M_2(\alpha, \gamma)=1$, $M_1(\gamma, \psi)=1$ is uniquely defined because of the properties of finite projective planes. Conditions (H1a,b) follow. To prove (H2) we color the sides of the triangles in three different colors. This is possible since there are three vertices in the polyhedron with different graphs as links. So, all triangles from $\mathcal{T}$ have one of three possible colorings. We need to show that for any $\alpha, \beta \in \mathcal{T}$ we can choose $r>0$ such that $M_j^r(\alpha, \beta)>0$, where $j=1,2$. Geometrically this means that any $\alpha, \beta \in \mathcal{T}$ can be realized so that $\beta$ lies in some sector with base $\alpha$ (for more details see [@RS1]). Without loss of generality we can assume that $j=1$. We will say that $\beta \in \mathcal{T}$ is reachable from $\alpha \in \mathcal{T}$ in $r$ steps if there is $r>0$ such that $M_1^r(\alpha, \beta)>0$. It is easy to see that every triangle is reachable from some triangle of another color in one or two steps.
So, to prove (H2) we need to show that any triangle is reachable from another one of the same color. Now we can use the proof of Theorem 1.3 from [@RS1], since at each step of that proof it is only used that the link at each vertex of the building is an incidence graph of a finite projective plane, which is true in our case too. The proof of (H3) is identical to the proof of (H3) in the case of the subshift considered in [@RS3]. Now, as the set of triangles we consider all elements of $\mathcal{T}_1$; every cyclic word $(a_i, b_j, c_k) \in \mathcal{T}_1$ gives rise to three basepointed triangles (Figure \[fig-2\]). (Figure \[fig-2\]: the three basepointed triangles with boundary words $a_ib_jc_k$, $b_jc_ka_i$ and $c_ka_ib_j$ arising from the cyclic word $(a_i,b_j,c_k)$.) \[fig-2\] [**Lemma 2.**]{} The set $M(\mathcal{S}^a)$ consists of $q-1$ copies of every element of $\mathcal{T}_1^b$ and one copy of $\mathcal{S}^b$. Denote by $\mathcal{S}^a$ the tiles of $\mathcal{S}$ starting with $a_i$, $i=1,...,q^2+q+1$, and let us analyse which elements of $\mathcal{T}_1$, and how many of them, can be obtained by one left shift from $\mathcal{S}^a$. In general, in the $\Tilde{A}_2$ case one can obtain $q^2$ tiles from each tile by one left (right) shift, as a consequence of the properties of finite projective planes (see \[RS\] for details). Now, each tile of $\mathcal{S}^a$ yields $q^2$ tiles $\gamma \in \mathcal{T}_1^b$ under one left shift. So, the total number of tiles which can be obtained from $\mathcal{S}^a$, counted with multiplicity, is $q^2(q^2+q+1)$.
Since every $c_i$ appears exactly once, every $\gamma \in \mathcal{T}_1^b$ will appear in $M(\mathcal{S}^a)$ exactly $q$ times if $\gamma \in \mathcal{S}^b$ and $q-1$ times otherwise. So, the set $M(\mathcal{S}^a)$ consists of $q-1$ copies of every element of $\mathcal{T}_1^b$ and one copy of $\mathcal{S}^b$. Two analogous versions of this lemma are obtained by substituting $a$ by $b$, $b$ by $c$, and $a$ by $c$, $b$ by $a$. The class of the identity in $K$-theory. ======================================== [**Proof of the Theorem.**]{} It was shown in [@RS2] that the $K$-theory of the $C^*$-algebra can be computed as the abelian group whose generators are the elements of the alphabet $\mathcal{T}_1$, with the following relations: $$t=\sum_{s \in \mathcal{T}_1} M_1(s,t)\,s.$$ It follows from [@CMS], [@RS2] that the identity function in $C(\Omega)$ can be expressed as the sum of all tiles of the alphabet $\mathcal{T}_1$, $$\sum_{t \in \mathcal{T}_1} t \,\,\, .$$ So, we will use the system of relations $$t=\sum_{s \in \mathcal{T}_1} M_1(s,t)\,s$$ to express $$\sum_{t \in \mathcal{T}_1} t \,\,\, .$$ It follows from Lemma 2 that $$\sum_{t \in \mathcal{S}^a} t = (q-1)\sum_{t \in \mathcal{T}_1^b} t + \sum_{t \in \mathcal{S}^b} t,$$ $$\sum_{t \in \mathcal{S}^b} t = (q-1)\sum_{t \in \mathcal{T}_1^c} t + \sum_{t \in \mathcal{S}^c} t,$$ $$\sum_{t \in \mathcal{S}^c} t = (q-1)\sum_{t \in \mathcal{T}_1^a} t + \sum_{t \in \mathcal{S}^a} t \,\,\, .$$ Adding these three equalities we get $$(q-1)\sum_{t \in \mathcal{T}_1} t = 0 \, ,$$ so $(q-1)[\mathbf{1}]=0$. But it was shown in [@RS2] that the order of \[$\mathbf{1}$\] is at least $q-1$ in the case when $q \not\equiv 1 \pmod 3$, which completes the proof. [*Acknowledgement*]{}. I would like to thank G. Robertson for useful discussions and comments. [99]{} C.
Anantharaman-Delaroche, [*$C^*$-algèbres de Cuntz–Krieger et groupes fuchsiens*]{}, In: Operator Theory, Operator Algebras and Related Topics (Timisoara 1996), The Theta Foundation, Bucharest, 1997, pp. 17-35. W. Ballmann, M. Brin, [*Orbihedra of nonpositive curvature*]{}, Publications Mathématiques IHES 82 (1995), 169-209. W. Ballmann, M. Brin, [*Polygonal complexes and combinatorial group theory*]{}, Geometriae Dedicata 50 (1994), 165–191. W. Ballmann, J. Światkowski, [*On $L^2$-cohomology and property (T) for automorphism groups of polyhedral cell complexes*]{}, Geom. Funct. Anal. 7 (1997), no. 4, 615–645. S. Barre, [*Polyèdres finis de dimension 2 à courbure $\leq 0$ et de rang 2*]{}, Ann. Inst. Fourier 45, 4 (1995), 1037-1059. M. Bourdon, [*Sur les immeubles fuchsiens et leur type de quasi-isometrie*]{}, Ergodic Theory Dynam. Systems 20 (2000), no. 2, 343–364. D. Cartwright, A. Mantero, T. Steger, A. Zappa, [*Groups acting simply transitively on the vertices of a building of type $\Tilde{A}_2$*]{}, Geometriae Dedicata 47 (1993), 143–166. D. Cartwright, W. Mlotkowski, T. Steger, [*Property (T) and $\Tilde{A}_2$ groups*]{}, Ann. Inst. Fourier 44 (1994), 213-248. A. Connes, [*Cyclic cohomology and the transverse fundamental class of a foliation*]{}, Geometric methods in operator algebras (Kyoto, 1983), 52–144, Pitman Res. Notes Math. Ser. 123, Longman, Harlow, 1986. G. Cornelissen, O. Lorscheid, M. Marcolli, [*On the K-theory of graph $C^*$-algebras*]{}, arXiv math.OA/0606582, 2006. M. Gromov, [*Hyperbolic groups*]{}, Essays in Group Theory (ed. M. Gersten), MSRI Publ. 8, Springer, 1987, 75–263. M. Gromov, [*Random walk in random groups*]{}, Geom. Funct. Anal. 13 (2003), no. 1, 73–146. H. Moriyoshi, T. Natsume, [*The Godbillon-Vey cocycle and longitudinal Dirac operators*]{}, Pacific J. Math. 172 (1996), 483–539. T. Natsume, [*Euler characteristic and the class of unit in K-theory*]{}, Math. Z. 194 (1987), 237–243. A. Yu.
Olshanskii, [*On residualing homomorphisms and $G$-subgroups of hyperbolic groups*]{}, Internat. J. Algebra Comput. 3 (1993), no. 4, 365–409. N. Ozawa, [*There is no separable universal $\rm II\sb 1$-factor*]{}, Proc. Amer. Math. Soc. 132 (2004), no. 2, 487–490 (electronic). G. Robertson, T. Steger, [*Asymptotic $K$-theory for groups acting on $\Tilde{A}_2$ buildings*]{}, Canad. J. Math. 53 (2001), 809–833. G. Robertson, T. Steger, [*Affine buildings, tiling systems and higher rank Cuntz-Krieger algebras*]{}, J. Reine Angew. Math. 513 (1999), 115–144. G. Robertson, T. Steger, [*Irreducible subshifts associated with $\Tilde{A}_2$ buildings*]{}, J. Comb. Theory Ser. A 103 (2003), 91–104. A. Vdovina, [*Combinatorial structure of some hyperbolic buildings*]{}, Math. Z. 241 (2002), no. 3, 471–478. A. Zuk, [*La propriété de Kazhdan pour les groupes agissant sur les polyèdres*]{}, C. R. Acad. Sci. Paris Sér. I Math. (1996), no. 5, 453-458.
--- abstract: 'In this paper we improve the upper bound of the third order Hankel determinant for the class of Ozaki close-to-convex functions. The sharp bound is conjectured.' address: - 'Department of Mathematics, Faculty of Civil Engineering, University of Belgrade, Bulevar Kralja Aleksandra 73, 11000, Belgrade, Serbia' - 'Department of Mathematics and Informatics, Faculty of Mechanical Engineering, Ss. Cyril and Methodius University in Skopje, Karpoš II b.b., 1000 Skopje, Republic of North Macedonia.' author: - Milutin Obradović - Nikola Tuneski title: 'Improved upper bound of third order Hankel determinant for Ozaki close-to-convex functions' --- Introduction and preliminaries ============================== Univalent functions are functions which are analytic, one-to-one and onto on a certain domain. Their study over more than a century shows that problems are significantly harder to solve for the general class than for its subclasses. This is also the case for the upper bound of the Hankel determinant, a problem rediscovered and extensively studied in recent years. Over the class ${{\mathcal A}}$ of functions $f(z)=z+a_2z^2+a_3z^3+\cdots$ analytic on the unit disk, this determinant is defined by $$H_{q}(n) = \left | \begin{array}{cccc} a_{n} & a_{n+1}& \ldots& a_{n+q-1}\\ a_{n+1}&a_{n+2}& \ldots& a_{n+q}\\ \vdots&\vdots&~&\vdots \\ a_{n+q-1}& a_{n+q}&\ldots&a_{n+2q-2}\\ \end{array} \right |,$$ where $q\geq 1$ and $n\geq 1$. The second order Hankel determinant is $$H_2(2) = \left | \begin{array}{cc} a_2 & a_3\\ a_3 & a_4\\ \end{array} \right | = a_2a_4-a_{3}^2,$$ and the third order one is $$H_3(1) = \left | \begin{array}{ccc} 1 & a_2& a_3\\ a_2 & a_3& a_4\\ a_3 & a_4& a_5\\ \end{array} \right | = a_3(a_2a_4-a_{3}^2)-a_4(a_4-a_2a_3)+a_5(a_3-a_2^2).$$ For the general class ${{\mathcal S}}$ of univalent functions in the class ${{\mathcal A}}$ there are very few results concerning the Hankel determinant.
The best known result for the second order case is due to Hayman ([@hayman-68]), saying that $|H_2(n)|\le An^{1/2}$, where $A$ is an absolute constant, and that this rate of growth is the best possible. Another one is [@OT-S], where it was proven that $|H_{2}(2)|\leq A$, where $1\leq A\leq \frac{11}{3}=3.66\ldots$, and $|H_{3}(1)|\leq B$, where $\frac49\leq B\leq \frac{32+\sqrt{285}}{15} = 3.258796\cdots$. There are many more results for the subclasses of ${{\mathcal S}}$. Namely, for starlike functions the upper bounds for the second and the third order Hankel determinant are 1 ([@janteng-07]) and $\frac47=0.5714\ldots$ ([@MONT-2019-3]), respectively, while for convex functions the same bounds are $1/8$ ([@janteng-07]) and $\frac{4}{135}=0.0296\ldots$ ([@Kowalczyk-18]). The estimates for the second order case are sharp, while those for the third order are not sharp but are the best known. For the class ${{\mathcal R}}\subset{{\mathcal A}}$ of functions with bounded turning, satisfying ${{\operatorname{Re}\,}}f'(z)>0$, $z\in{{\mathbb D}}$, we have the sharp estimate $|H_2(2)|\le \frac{4}{9} = 0.444\ldots$ ([@janteng-06]) and the probably non-sharp $|H_3(1)| \le \frac{1249}{3840} = 0.32526\ldots$ ([@OT-R]). In this paper we study two classes introduced by Ozaki. The first one is the class of Ozaki close-to-convex functions $${{\mathcal F}}= \{f\in{{\mathcal A}}: {{\operatorname{Re}\,}}\left[1+\frac{zf''(z)}{f'(z)}\right]>-\frac12,\, z\in{{\mathbb D}}\}$$ introduced by Ozaki in 1941 ([@ozaki-1941]); it is a subclass of the class of close-to-convex functions. For this class the following non-sharp estimates are known: $|H_{2}(2)|\leq \frac{21}{64}$ ([@MONT-2018-1]) and $|H_{3}(1)|\leq \frac{180+69\sqrt{15}}{32\sqrt{15}}=3.6086187\ldots$ ([@ind-1]). We will significantly improve the second estimate to the value $0.08802\ldots$. More about this class can be found in [@DTV-book, Sect. 9.5].
The other class that we will consider is $${{\mathcal G}}= \{f\in{{\mathcal A}}: {{\operatorname{Re}\,}}\left[1+\frac{zf''(z)}{f'(z)}\right]<\frac32,\, z\in{{\mathbb D}}\}.$$ Ozaki introduced this class in [@ozaki-1941] and proved that it is a subclass of ${{\mathcal S}}$. Later, Sakaguchi in [@saka] and R. Singh and S. Singh in [@singh] showed, respectively, that functions in $\mathcal{G}$ are close-to-convex and starlike. Again in [@MONT-2018-1] it was shown that $ |H_{2}(2)|\leq \frac{9}{320}=0.028125\ldots$. Here we will give an estimate of the third order Hankel determinant. In the studies given in this paper we use an approach based on the estimates of the coefficients of Schwarz functions due to Prokhorov and Szynal (Lemma \[lem-prok\] given below). This approach is essentially different from the commonly used one and is the main reason for the improvement in the estimate for the class ${{\mathcal F}}$ mentioned above. Usually the research is done using a result on the coefficients of Carathéodory functions (functions with positive real part on the unit disk) that involves Toeplitz determinants (see [@DTV-book Theorem 3.1.4, p.26] and [@granader]). Here is the result of Prokhorov and Szynal that we will need. In more general form it can be found in [@Prokhorov-1984 Lemma 2]. \[lem-prok\] Let $\omega(z)=c_{1}z+c_{2}z^{2}+\cdots $ be a Schwarz function, i.e., analytic in the unit disk with $|\omega(z)|<1$ for $z\in{{\mathbb D}}$, and let $\mu$ and $\nu$ be real numbers. - If $|\mu|\le\frac12$ and $-1\le\nu\le1$, then $$\left|c_{3}+\mu c_{1}c_{2}+\nu c_{1}^{3}\right|\leq 1.$$ - If $|\mu|\ge2$ and $-\frac23(|\mu|+1)\le\nu\le \frac{2|\mu|(|\mu|+1)}{\mu^2+2|\mu|+4}$, then $$\left|c_{3}+\mu c_{1}c_{2}+\nu c_{1}^{3}\right|\leq \frac23(|\mu|+1)\sqrt{\frac{|\mu|+1}{3(|\mu|+1+\nu)}}.$$ We will also need the following, almost forgotten result of Carlson ([@carlson]) that can be found also in [@good-2 Problem 16, p.78].
\[lem-carl\] Let $\omega(z)=c_{1}z+c_{2}z^{2}+\cdots $ be a Schwarz function. Then $$|c_2|\le1-|c_1|^2 \quad\mbox{and}\quad |c_4|\le1-|c_1|^2 -|c_2|^2-|c_3|^2.$$ Main results ============ We begin with an improvement of the upper bound of the third Hankel determinant for the class ${{\mathcal F}}$ of Ozaki close-to-convex functions. \[main-thm\] Let $f\in{{\mathcal F}}$ be of the form $f(z)=z+a_2z^2+a_3z^3+\cdots$. Then $$|H_3(1)| \le \frac{1}{30}\left(\frac{13}{8}\right)^2 = 0.08802\ldots$$ For a function $f\in{{\mathcal F}}$ there exists a Schwarz function $\omega(z) = c_1z+c_2z^2+\cdots$ such that $$\label{e4} 1+\frac{zf''(z)}{f'(z)} = -\frac12+\frac32\cdot\frac{1+\omega(z)}{1-\omega(z)},$$ i.e., $$[zf'(z)]' \cdot [1-\omega(z)] = [1+2\omega(z)]\cdot f'(z).$$ By equating the coefficients in the above expression we obtain $$\label{e6} \begin{split} a_2 &= \frac32 c_1,\\ a_3 &= \frac12(4c_1^2+c_2),\\ a_4 &= \frac18(2c_3+13c_1c_2+20c_1^3),\\ a_5 &= \frac{3}{40}(2c_4+12c_1c_3+46c_1^2c_2+40c_1^4+5c_2^2). \end{split}$$ Using (\[e6\]) we have $$\begin{split} H_3(1) &= \frac{1}{320} \left[4 c_1 ^4 c_2 +8 c_1 ^3 c_3 + 4 c_1 c_2 c_3 -23 c_1^2 c_2 ^2 \right.\\ &\quad \left.- 12c_1^2 c_4 + 20c_2^3 -20c_3^2 +24 c_2 c_4 \right] \\ &=\frac{1}{320} \Bigg[-20c_3 \left(c_3-\frac15c_1c_2-\frac25c_1^3\right) +12 c_4(2c_2-c_1^2) \\ &\quad -23c_1^2 c_2 ^2 + 20 c_2 ^3 + 4c_1^4c_2 \Bigg], \end{split}$$ and from here $$\label{e7} |H_3(1)| \le \frac{1}{320} \left[ 20|c_3|\left|c_3-\frac15c_1c_2-\frac25c_1^3\right| + 12|c_4|\left|2c_2-c_1^2\right| + 23|c_1|^2|c_2|^2 + 20|c_2|^3 + 4|c_1|^4|c_2| \right].$$ Lemma \[lem-prok\](i) for $\mu=-\frac15$ and $\nu=-\frac25$ gives $\left|c_3-\frac15c_1c_2-\frac25c_1^3\right| \le1$. This inequality, together with the inequalities for the function $\omega$ given in Lemma \[lem-carl\], applied in (\[e7\]), implies $$\begin{split} |H_3(1)| &\le \frac{1}{320} \left[20|c_3| +12 (2-|c_1|^2) \left(1-|c_1|^2-|c_2|^2-|c_3|^2\right)\right.\\ & \quad\left.+ 23|c_1|^2|c_2|^2+20|c_2|^2(1-|c_1|^2) \right]\\ & = \frac{1}{320} \left[ 24+20|c_3|-24|c_3|^2 -12|c_1|^2 (1-|c_3|^2) \right. \\ & \quad\left. -4|c_2|^2 - 24|c_1|^2 +12|c_1|^4 + 15 |c_1|^2 |c_2|^2\right]\\ & \le \frac{1}{320} \left[ 24+20|c_3|-24|c_3|^2 -12|c_1|^2 (1-|c_3|^2) \right. \\ & \quad\left.
-4|c_2|^2 - 24|c_1|^2 +12|c_1|^4 + 15 |c_1|^2 (1-|c_1|^2)^2\right]\\ & = \frac{1}{320} \left[ 24+20|c_3|-24|c_3|^2 -12|c_1|^2 (1-|c_3|^2) \right. \\ & \quad\left. -4|c_2|^2 - 3|c_1|^2(3+6|c_1|^2-5|c_1|^4)\right]\\ & \le \frac{1}{320} \left( 24+20|c_3|-24|c_3|^2 \right) = \frac{3}{40} \left( 1+\frac56|c_3|-|c_3|^2 \right)\\ & \le \frac{3}{40} \left( 1+\frac56\cdot\frac{5}{12}-\left(\frac{5}{12}\right)^2 \right) =\frac{1}{30}\left(\frac{13}{8}\right)^2. \end{split}$$ The previous result, although it significantly improves the one from [@ind-1], is still not sharp, as is the following one dealing with the class ${{\mathcal G}}$. \[th2\] Let $f\in{{\mathcal G}}$ be of the form $f(z)=z+a_2z^2+a_3z^3+\cdots$. Then $$|H_3(1)| \le \frac{3589}{291600} = 0.0123\ldots.$$ As in the proof of the previous theorem, for each function $f$ from ${{\mathcal G}}$, there exists a function $\omega(z)= c_1z+c_2z^2+\cdots$, analytic in ${{\mathbb D}}$, such that $|\omega(z)|<1$ for all $z$ in ${{\mathbb D}}$, and $$\label{eeq} 1+\frac{zf''(z)}{f'(z)} = \frac32-\frac12\cdot\frac{1+\omega(z)}{1-\omega(z)},$$ i.e., $$[zf'(z)]' \cdot [1-\omega(z)] = [1-2\omega(z)]\cdot f'(z).$$ From here, by equating the coefficients we obtain $$\begin{split} a_2 &= -\frac{1}{2}c_1,\\ a_3 &= -\frac16c_2,\\ a_4 &= -\frac{1}{24}(2c_3+c_1c_2),\\ a_5 &= -\frac{1}{120}(6c_4+4c_1c_3+3c_2^2+2c_1^2c_2). \end{split}$$ From here, after some calculations we obtain $$\begin{split} H_3(1) &= \frac{1}{8640} \left[-60c_3^2 - 132 c_1c_2c_3 + 72c_1^3c_3 + 36c_4(2c_2+3c_1^2) \right.\\ &\quad \left. + 36c_1^4c_2 + 76c_2^3 + 3c_1^2c_2^2 \right]\\ &= \frac{1}{8640} \left[-60c_3\left( c_3 +\frac{11}{5} c_1c_2 - \frac65c_1^3\right) + 36c_4(2c_2+3c_1^2) \right.\\ &\quad \left. + 36c_1^4c_2 + 76c_2^3 + 3c_1^2c_2^2 \right] \end{split}$$ and further $$\begin{split} |H_3(1)| &\le \frac{1}{8640} \Bigg[ 60|c_3|\left| c_3 +\frac{11}{5} c_1c_2 - \frac65c_1^3\right| + 36|c_4|(2|c_2|+3|c_1|^2) \\ & + 36|c_1|^4|c_2| + 76|c_2|^3 + 3|c_1|^2|c_2|^2 \Bigg].
\end{split}$$ Now, Lemma \[lem-prok\](ii) for $\mu=\frac{11}{5}$ and $\nu=-\frac65$ gives $\left|c_3+\frac{11}{5}c_1c_2-\frac65c_1^3\right| \le \frac{128}{15\sqrt{30}}$, which, together with the inequalities from Lemma \[lem-carl\], implies $$\begin{split} |H_3(1)| &\le \frac{1}{8640} \Bigg[ \frac{512}{\sqrt{30}}|c_3| + 36(1-|c_1|^2-|c_2|^2-|c_3|^2)(2+|c_1|^2) \\ & + 36|c_1|^4(1-|c_1|^2) + 76|c_2|^2(1-|c_1|^2) + 3|c_1|^2|c_2|^2 \Bigg] \\ &\le \frac{1}{8640} \Bigg[ 76 + \frac{512}{\sqrt{30}}|c_3| -72|c_3|^2 + B \Bigg], \end{split}$$ where $$B = -|c_1|^2 ( 44-4|c_1|^2+109|c_2|^2+36|c_3|^2 +36|c_1|^4) \le 0.$$ Therefore, for $|c_3|\le1$ we have $$\begin{split} |H_3(1)| &\le \frac{1}{8640} \left[ 76 + \frac{512}{\sqrt{30}}|c_3| -72|c_3|^2 \right] \\ &\le \frac{1}{8640} \left[ 76 + \frac{512}{\sqrt{30}}\cdot \frac{16}{9}\sqrt{\frac{2}{15}} -72\cdot\left(\frac{16}{9}\sqrt{\frac{2}{15}}\right)^2 \right]\\ & = \frac{3589}{291600} = 0.0123\ldots. \end{split}$$ The estimates of the third Hankel determinant given in Theorem \[main-thm\] and Theorem \[th2\] are probably not sharp. Here is a conjecture of the sharp values. Let $f\in{{\mathcal A}}$ be of the form $f(z)=z+a_2z^2+a_3z^3+\cdots$. - If $f\in{{\mathcal F}}$, then $|H_{3}(1)|\leq \frac{1}{16} = 0.0625$; - If $f\in{{\mathcal G}}$, then $|H_{3}(1)|\leq \frac{19}{2160}=0.00879\ldots$. Both estimates are sharp with extremal functions $\frac{1+2z^2}{1-z^2}$ and $\frac{1}{2} \left(z \sqrt{1-z^2}+\arcsin{z}\right)$, respectively, obtained for $\omega(z)=z^2$ in (\[e4\]) and (\[eeq\]). [99]{} D. Bansal, S. Maharana, J.K. Prajapat, Third order Hankel determinant for certain univalent functions. *J. Korean Math. Soc.* **52**(6) (2015), 1139–1148. F. Carlson, Sur les coefficients d’une fonction bornée dans le cercle unité, *Ark. Mat. Astr. Fys.* **27A**(1) (1940), 8 pp. A.W. Goodman, *Univalent functions. Vol. II.*, Mariner Publishing Co., Inc., Tampa, FL, 1983. U. Grenander, G.
Szegő, *Toeplitz forms and their applications*, California Monographs in Mathematical Sciences. University of California Press, Berkeley-Los Angeles, 1958. W.K. Hayman, On the second Hankel determinant of mean univalent functions, *Proc. London Math. Soc.* **3**(18) (1968), 77–94. A. Janteng, S.A. Halim, M. Darus, Coefficient inequality for a function whose derivative has a positive real part. *J. Inequal. Pure Appl. Math.* **7**(2) (2006), Article 50, 5 pp. A. Janteng, S.A. Halim, M. Darus, Hankel determinant for starlike and convex functions, *Int. J. Math. Anal. (Ruse).* **1**(13-16) (2007), 619–625. B. Kowalczyk, A. Lecko, Y.J. Sim, The sharp bound of the Hankel determinant of the third kind for convex functions. *Bull. Aust. Math. Soc.* **97**(3) (2018), 435–445. M. Obradović, N. Tuneski, Hankel determinant of second order for some classes of analytic functions. *preprint*, arXiv:1903.08069. M. Obradović, N. Tuneski, New upper bounds of the third Hankel determinant for some classes of univalent functions. *preprint*, arXiv:1911.10770v2. M. Obradović, N. Tuneski, Upper bounds of the third Hankel determinant for classes of univalent functions with bounded turning. *preprint*, arXiv:2004.04960. M. Obradović, N. Tuneski, Hankel determinants of second and third order for the class ${{\mathcal S}}$ of univalent functions, arXiv:1912.06439. S. Ozaki, On the theory of multivalent functions. II. *Sci. Rep. Tokyo Bunrika Daigaku. Sect. A.* **4** (1941), 45–87. D.V. Prokhorov, J. Szynal, Inverse coefficients for $(\alpha ,\beta )$-convex functions. *Ann. Univ. Mariae Curie-Sk[ł]{}odowska Sect. A*. **35**(1981) (1984), 125–143. K. Sakaguchi, A property of convex functions and an application to criteria for univalence, *Bull. Nara Univ. Ed. Natur. Sci.* **22** (2) (1973), 1–5. R. Singh, S. Singh, Some sufficient conditions for univalence and starlikeness, *Colloq. Math.* **47** (2) (1982), 309–314 (1983). D.K. Thomas, N. Tuneski, A.
Vasudevarao, *Univalent Functions: A Primer*, De Gruyter Studies in Mathematics [**69**]{}, De Gruyter, Berlin, Boston, 2018. D. Vamshee Krishna, B. Venkateswarlu, T. RamReddy, Third Hankel determinant for bounded turning functions of order alpha. *J. Nigerian Math. Soc.* **34**(2) (2015), 121–127.
--- author: - | Mengzhou Xia, Antonios Anastasopoulos, Ruochen Xu, Yiming Yang, Graham Neubig\ Language Technologies Institute, Carnegie Mellon University\ `{mengzhox,aanastas,yiming,gneubig}@cs.cmu.edu`\ `ruochenx@gmail.com` bibliography: - 'acl2020.bib' title: Predicting Performance for Natural Language Processing Tasks --- Introduction {#sec:intro} ============ Problem Formulation {#sec:formulation} =================== NLP Task Instantiations {#sec:featuring} ======================= Can We Predict NLP Performance? {#sec:perform} =============================== What Datasets Should We Test On? {#sec:representativeness} ================================ Can We Extrapolate Performance for New Models? {#sec:model} ============================================== Related Work {#sec:related} ============ Conclusion and Future Work {#sec:conclusion} ========================== Acknowledgement {#sec:acknowledgement .unnumbered} ===============
--- abstract: | A relation between O$(n)$ models and Ising models has been recently conjectured \[L. Casetti, C. Nardini, and R. Nerattini, Phys. Rev. Lett. [**106**]{}, 057208 (2011)\]. Such a relation, inspired by an energy landscape analysis, implies that the microcanonical density of states of an O$(n)$ spin model on a lattice can be effectively approximated in terms of the density of states of an Ising model defined on the same lattice and with the same interactions. Were this relation exact, it would imply that the critical energy densities of all the O$(n)$ models (i.e., the average values per spin of the O$(n)$ Hamiltonians at their respective critical temperatures) should be equal to that of the corresponding Ising model; it is therefore worth investigating how different the critical energies are and how this difference depends on $n$. We compare the critical energy densities of O$(n)$ models in three dimensions in some specific cases: the O$(1)$ or Ising model, the O$(2)$ or $XY$ model, the O$(3)$ or Heisenberg model, the O$(4)$ model and the O$(\infty)$ or spherical model, all defined on regular cubic lattices and with ferromagnetic nearest-neighbor interactions. The values of the critical energy density in the $n=2$, $n=3$, and $n=4$ cases are derived through a finite-size scaling analysis of data produced by means of Monte Carlo simulations on lattices with up to $128^3$ sites. For $n=2$ and $n=3$ the accuracy of previously known results has been improved. We also derive an interpolation formula showing that the difference between the critical energy densities of O$(n)$ models and that of the Ising model is smaller than $1\%$ if $n<8$ and never exceeds $3\%$ for any $n$. 
author: - Rachele Nerattini - Andrea Trombettoni - Lapo Casetti bibliography: - '/Users/casetti/Work/Scripta/papers/bib/mybiblio.bib' - '/Users/casetti/Work/Scripta/papers/bib/statmech.bib' title: 'Critical energy density of O$(n)$ models in $d=3$' --- Introduction ============ Simple models are important tools in theoretical physics, and especially in statistical mechanics, where O$(n)$ Hamiltonians are often used to describe, in highly simplified yet significant models, realistic interactions between particles or spins. Finding links or relations between different simple and paradigmatic models often results in a deeper understanding of the models themselves and of the physics they describe: from this point of view it is highly desirable to identify and characterize exact (or even approximate) properties and quantities shared by them. In [@prl2011] a relation between the microcanonical densities of states of continuous and discrete spin models was conjectured, and further discussed in [@jstat2012; @analyticalpaper]. It was suggested that the density of states of an O$(n)$ classical spin model on a given lattice can be approximated in terms of the density of states of the corresponding Ising model. By “corresponding” Ising model we mean an Ising model defined on the same lattice and with the same interactions. Such a relation was inspired by an energy landscape approach [@Wales:book] to the microcanonical thermodynamics of these models, the key observation being that all the configurations of an Ising model on a lattice are stationary points of an O$(n)$ model Hamiltonian defined on the same lattice with the same interactions, for any $n$.
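This key observation is easy to verify numerically: for an Ising configuration embedded along the first spin component, the local field at each site is parallel to the spin there, so the gradient of the O$(n)$ Hamiltonian tangential to the unit spheres vanishes. A minimal sketch of ours (nearest-neighbor ferromagnet on a periodic cubic lattice, arbitrary random configuration):

```python
import numpy as np

# Any Ising configuration, embedded as O(n) spins along the first component,
# is a stationary point of H^(n): the local field h_i (sum of the nearest
# neighbors of S_i) is parallel to S_i, so the on-sphere gradient vanishes.
rng = np.random.default_rng(1)
L, n = 4, 3
sigma = rng.choice([-1.0, 1.0], size=(L, L, L))     # random Ising configuration
spins = np.zeros((L, L, L, n))
spins[..., 0] = sigma

h = np.zeros_like(spins)                            # local field on each site
for axis in range(3):                               # periodic cubic lattice
    h += np.roll(spins, 1, axis=axis) + np.roll(spins, -1, axis=axis)

# component of h tangential to the unit sphere at each spin
tangential = h - np.sum(h * spins, axis=-1, keepdims=True) * spins
print(np.max(np.abs(tangential)))  # -> 0.0
```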
The relation between the densities of states can be written as $$\label{omega_appr} \omega^{(n)}(\varepsilon) \approx \omega^{(1)}(\varepsilon) \, g^{(n)}(\varepsilon)\, ,$$ where $\varepsilon$ is the energy density of the system, i.e., $\varepsilon = E/N$ with $E$ and $N$ denoting the total energy and the number of spins, respectively; furthermore $\omega^{(n)}$ is the density of states of the O$(n)$ model, $\omega^{(1)}$ the density of states of the corresponding Ising model and $g^{(n)}$ is a function representing the volume of a neighborhood of the Ising configuration in the phase space of the O$(n)$ model. The function $g^{(n)}$ is typically unknown. However, since it comes from local integrals over a neighborhood of the phase space, one expects it to be regular. Eq. (\[omega\_appr\]) is an approximate one and the approximations involved are not easily controlled in general [^1]. However, as discussed in [@prl2011], were it exact there would be a very interesting consequence: the critical energy densities $\varepsilon^{(n)}_{c}$ of the phase transitions of all the O$(n)$ models on a given lattice would be the same and equal to $\varepsilon^{(1)}_{c}$, that is to the critical energy density of the corresponding Ising model. Rather surprisingly, according to available analytical and numerical calculations the critical energy densities are indeed very close to each other whenever a phase transition is known to take place, at least for ferromagnetic models on $d$-dimensional hypercubic lattices. More precisely, the critical energy densities are the same and equal to the Ising one for all the O$(n)$ models with long-range mean-field interactions as shown by the exact solution [@CampaGiansantiMoroni:jpa2003], and the same happens for all the O$(n)$ models on a one-dimensional lattice with nearest-neighbor interactions.
Making use of the microcanonical solutions of the models, an expression analogous to (\[omega\_appr\]) can be exactly computed for the mean-field and for the one-dimensional nearest-neighbor $XY$ models ($n=2$) [@jstat2012]: such expression implies the equality of the critical energies in the limit $\varepsilon\rightarrow\varepsilon^{(n)}_{c}$. Hence the equality of the critical energies is rooted in the expression (\[omega\_appr\]) for the density of states. In $d=2$ the critical energies of the ferromagnetic transition of the Ising model and of the Berežinskii-Kosterlitz-Thouless (BKT) transition of the $XY$ model are only slightly different, the difference being about 2% (see Ref. [@prl2011] and references therein). The thermodynamics of the two-dimensional $XY$ model has been analytically studied in [@analyticalpaper] assuming Eq. (\[omega\_appr\]) as an ansatz on the form of its density of states and then computing $g^{(2)}$ with suitable approximations. The results were compared with numerical simulations and a very good agreement was found over almost the whole energy density range. This confirms the soundness of the hypotheses behind Eq. (\[omega\_appr\]) also in the two-dimensional case. It is also worth noticing that despite the difference in the nature of the Ising and of the BKT transitions in $d=2$, the two-dimensional Ising and $XY$ models share a “weak universality”: indeed, the critical exponent ratio $\beta/\nu$ and the exponent $\delta$ are equal in the two cases [@Archambault_etal:jpa1997]. It is tempting to think that energy landscape arguments like those discussed above may explain such a relation between the features of phase transitions so different from each other.
The very different nature, due to the Mermin-Wagner theorem, of the Ising and BKT phase transitions in two dimensions together with the fact that the comparison is between an exact result for $\varepsilon^{(1)}_{c}$ (for the Ising model) and numerical results for $\varepsilon^{(2)}_{c}$ (for the XY model) prevents the two-dimensional case from being a good test case to quantify the accuracy of the prediction on the equality of critical energy densities. From this point of view the O$(n)$ model in three dimensions ($d=3$) provides a very promising and clear-cut case study to test the equality of the critical energy densities since a phase transition occurs for all $n$ and in all cases a local order parameter becomes non-vanishing at a finite critical temperature. For nearest-neighbor interacting O$(n)$ models in $d = 3$ the comparison has to be based on the outcomes of numerical simulations or on approximate methods, since no exact solution (in particular for the critical energy) exists even for the Ising case. Although typically overlooked, results reported in the literature clearly show that the critical energies measured for three-dimensional O$(n)$ spin systems with $n = 1$, $2$ and $3$ are very close to each other: see [@prl2011] for a discussion on this point and [@BradyMoreira:prb1993; @GottlobHasenbusch:physicaa1993; @BrownCiftan:prb2006] for the critical values of the energy densities for $n=1$, $n=2$ and $n=3$, respectively. Inspired by these results, the aim of this paper is to quantify the difference between the critical energy densities of nearest-neighbor O$(n)$ models defined on regular cubic lattices in $d=3$ and to study the dependence on $n$ of the O$(n)$ critical energy densities. This study also entails an assessment of the accuracy of the prediction of equal critical energy densities following from Eq. (\[omega\_appr\]).
As shown in the following Sections, the already existing numerical estimates of the critical energy densities for three-dimensional O$(n)$ models with $n=2$ and $3$ will be improved; in the case $n=4$ we obtain a result having the same accuracy as, and being in good agreement with, a very recent one given in [@EngelsKarsch:prd2012]. Using these results together with the exact result for the critical energy density of the $n=\infty$ model (i.e., the spherical model [@Stanley:physrev1968]) and with the first term of the $1/n$ expansion [@CampostriniEtAl:npb1996], an interpolation formula for the critical energy densities $\varepsilon^{(n)}_{c}$ will be derived, valid in the whole range $n=1,2,\ldots,\infty$. It will turn out that the difference between the critical energy densities of the O$(n)$ models and that of the corresponding Ising model is smaller than $1\%$ for O$(n)$ models with $n<8$ and never exceeds $3\%$. The paper is organized as follows: In Sec. \[Onmodels\] the definition of O$(n)$ models is recalled and the notation used in the next Section introduced. Assuming the critical energy density of the Ising model in three dimensions to be known with enough accuracy [@HasenbuschPinn:jphysa1998], in Sec. \[SecFSS\] we estimate the critical energy densities of the O$(2)$, O$(3)$ and O$(4)$ models in $d=3$ via a finite-size scaling (FSS) analysis whose basic relations are presented in Sec. \[SecFSS\]. In Sec. \[numericalSphericalModel\] the spherical model in $d=3$ is discussed since its thermodynamics is equivalent to the one of an O$(n)$ model in the $n\rightarrow\infty$ limit. The spherical model can be solved analytically in any spatial dimension $d$ and, in particular, in $d=3$: hence it provides the value of $\varepsilon^{(\infty)}_{c}$. In Sec. \[Sec\_NumericalTest\] a careful comparison between the critical values of the energy densities of the above mentioned models is performed and an interpolation formula for $\varepsilon^{(n)}_{c}$ defined.
Some conclusions are drawn in Sec. \[ConclusionsNumericalTest\]. O$(n)$ spin models {#Onmodels} ================== In the following we are going to consider classical O$(n)$ spin models defined on a regular cubic lattice in $d=3$ and with periodic boundary conditions. To each lattice site $i$ an $n$-component classical spin vector $\mathbf{S}_i = (S_i^1,\ldots,S_i^n)$ of unit length is assigned. The energy of the model is given by the Hamiltonian $$\label{H-On} H^{(n)} = - J \sum_{\langle i,j \rangle} \mathbf{S}_i \cdot \mathbf{S}_j= - J \sum_{\langle i,j \rangle} \sum_{a = 1}^n S^a_i S^a_j\, ,$$ where the angular brackets denote a sum over all distinct pairs of nearest-neighbor lattice sites. The exchange coupling $J$ will be assumed positive, resulting in ferromagnetic interactions. The Hamiltonian (\[H-On\]) is globally invariant under the $O(n)$ group. In the special cases $n=1$, $n=2$, and $n=3$, one obtains the Ising, $XY$, and Heisenberg models, respectively. The case $n=1$ is even more special because O$(1) \equiv \mathbb{Z}_2$ is a discrete symmetry group. In this special case the Hamiltonian (\[H-On\]) becomes the Ising Hamiltonian $$H^{(1)} = - J \sum_{\langle i,j \rangle} \sigma_i \sigma_j\, , \label{H_1}$$ where $\sigma_i = \pm 1$ $\forall i$. In all the other cases $n \geq 2$ the O$(n)$ group is continuous. Without loss of generality we shall set $J=1$ in the following (and $k_B=1$). The energy density $\varepsilon = H^{(n)}/N$ lies in the energy range $[-d,d]$ where $d$ is the lattice dimension. In $d=3$ and for any $n$ the models exhibit a phase transition at $\varepsilon=\varepsilon^{(n)}_{c}$ from a paramagnetic phase, for $\varepsilon>\varepsilon^{(n)}_{c}$, to a ferromagnetic phase, for $\varepsilon<\varepsilon^{(n)}_{c}$, with a spontaneous breaking of the O$(n)$ symmetry.
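As an illustration of ours (not from the paper), the energy density of a configuration on an $L^3$ periodic lattice can be evaluated with forward shifts, so that every nearest-neighbor bond is counted once (here $J=1$):

```python
import numpy as np

def energy_density(spins):
    """H^(n)/N for spins of shape (L, L, L, n), with J = 1 and periodic
    boundaries; each nearest-neighbor pair enters once via forward shifts."""
    E = 0.0
    for axis in range(3):
        E -= np.sum(spins * np.roll(spins, -1, axis=axis))
    return E / spins[..., 0].size

L, n = 8, 3
aligned = np.zeros((L, L, L, n))
aligned[..., 0] = 1.0                 # ferromagnetic ground state
print(energy_density(aligned))        # -> -3.0, the lower edge of [-d, d]
```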
The models are not exactly solvable and estimates of critical temperatures, critical exponents and other quantities at criticality have been mainly derived by means of numerical simulations, see e.g.  [@BradyMoreira:prb1993; @GottlobHasenbusch:physicaa1993; @BrownCiftan:prb2006]. Determination of the critical energy densities {#SecNumericalIntro} ============================================== The aim of this work is to answer the following question: what is the difference between the critical value $\varepsilon^{(n)}_{c}$ of the energy density of the O$(n)$ model (\[H-On\]) and the critical value $\varepsilon^{(1)}_{c}$ of the energy density of the Ising model (\[H\_1\])? And how does it depend on $n\in[2,\infty]$? Some preliminary observations are necessary. As mentioned in the Introduction, three-dimensional O$(n)$ models are not exactly solvable [^2] and the values of thermodynamic functions at criticality are typically estimated numerically. Most numerical simulations have so far been limited to small $n$: see e.g. [@BradyMoreira:prb1993; @GottlobHasenbusch:physicaa1993; @BrownCiftan:prb2006; @EngelsKarsch:prd2012] for $n=1$, $2$, $3$ and $4$, respectively. This is clearly understandable since these are the most relevant cases for physical applications [@CampostriniEtAl:npb1996]. On the other hand, different approaches like $1/n$ and strong-coupling expansions have been used for large $n$, see Ref.  [@CampostriniEtAl:npb1996]. The common feature of these studies is that they have been performed in the canonical ensemble.
Hence, especially before the suggestion that critical energy densities might be very close or even equal [@prl2011], an accurate evaluation of the critical energy densities $\varepsilon^{(n)}_{c}$ was outside the scope of those works, and the computation of $\varepsilon^{(n)}_{c}$ was usually a byproduct of a more general task possibly focused on the determination of other parameters, such as the critical temperatures $T^{(n)}_{c}$ or the critical exponents or the free energies at the critical point. In the following we shall use Monte Carlo simulations and FSS to determine improved estimates of $\varepsilon^{(n)}_{c}$ for $n=2$ and $3$, our estimate of $\varepsilon^{(4)}_{c}$ being as accurate as the most recent one in the literature [@EngelsKarsch:prd2012]. The case $n=1$ has already been studied with high accuracy by Hasenbusch and Pinn in [@HasenbuschPinn:jphysa1998] and we will simply recall their results in Sec. \[numericalO1model\]. The FSS analyses rely on numerical data computed by means of canonical Monte Carlo simulations using the optimized cluster algorithm for classical O$(n)$ spin models provided by the ALPS project [@ALPS]. Most of the simulations have been performed on the PLX machine at the CINECA in Casalecchio di Reno (Bologna, Italy). A small subset of the simulations has been performed with the same algorithm on the PC-farm of the Dipartimento di Fisica e Astronomia of the Università di Firenze, Italy. We typically used $5\times 10^6$ Monte Carlo sweeps (MCS) plus $5\times 10^5$ MCS of thermalization for the simulations of the O$(2)$ model and $10^7$ MCS plus $2.5\times 10^6$ MCS of thermalization for the simulations of the O$(3)$ and of the O$(4)$ model. The total cluster CPU time spent on PLX for the simulations has been more than $40000$ hours. For each O$(n)$ model, the simulations have been performed at the value of the critical temperature $T^{(n)}_{c}$ given in the literature with an uncertainty $\varDelta T^{(n)}_{c}$.
This quantity has to be taken into account in the computation of the uncertainty $\varDelta \varepsilon^{(n)}_{c}$ associated to the estimate of $\varepsilon^{(n)}_{c}$ and the uncertainty propagation procedure needs the evaluation of the critical value of the specific heat. For this reason, in the Monte Carlo simulations, besides collecting the values of the energy densities, we also computed the specific heat. The FSS procedure and the uncertainty propagation procedure will be discussed in the following section. Finite-size scaling analysis {#SecFSS} ---------------------------- Let us denote by $\varepsilon^{(n)}_{c}(L)$ and $c^{(n)}(L)$ the critical values of the energy density and of the specific heat, respectively, of an O$(n)$ model defined on a regular cubic lattice of edge $L=\sqrt[3]{N}$. The relation between $\varepsilon^{(n)}_{c}(L)$ and $\varepsilon^{(n)}_{c}(\infty) \equiv \varepsilon^{(n)}_{c}$ is given by the FSS equation $$\label{energy_FSS} \varepsilon^{(n)}_{c}(L) = \varepsilon^{(n)}_{c}+\varepsilon_{n}\;L^\frac{\alpha_{n}-1}{\nu_{n}}\, :$$ in the following we use the notation $$\label{definition_Dn} D_{n}=\frac{\alpha_{n}-1}{\nu_{n}}\, .$$ An analogous expression holds for the specific heat, and it is given by $$\label{cv_FSS} c^{(n)}(L)=c^{(n)}_{c}+c_{n}\;L^\frac{\alpha_{n}}{\nu_{n}}\, ,$$ where $c^{(n)}_{c} \equiv c^{(n)}_{c}(\infty)$ denotes the critical value of the specific heat in the thermodynamic limit. In Eqs. (\[energy\_FSS\]) and (\[cv\_FSS\]), $\varepsilon_{n}$ and $c_{n}$ are model dependent fit parameters, while $\alpha_{n}$ and $\nu_{n}$ are the specific heat and the correlation length critical exponents, respectively. We do not discuss here the derivation of Eqs. (\[energy\_FSS\]) and (\[cv\_FSS\]), referring the reader to the existing literature for an in-depth analysis on the subject, see e.g. [@Fisher:rmp1974; @Brezin:jphys1982; @Stanley:rmp1999] for reviews and [@SchultkaManousakis:prb1995] for an explicit derivation of Eqs. 
(\[energy\_FSS\]) and (\[cv\_FSS\]) in the case $n=2$. For each O$(n)$ model, the estimate of the critical energy density $\varepsilon^{(n)}_{c}\pm\varDelta \varepsilon^{(n),stat}_{c}$ can be determined with a fit of the Monte Carlo data $\varepsilon^{(n)}_{c}(L)$ according to Eq. (\[energy\_FSS\]); here and in the following $\varDelta \varepsilon^{(n),stat}_{c}$ will denote the statistical uncertainty on $\varepsilon^{(n)}_{c}$ due to the fitting procedure. Since our purpose is to compare the values of $\varepsilon^{(n)}_{c}$ for different $n$, any source of error in the determination of $\varDelta \varepsilon^{(n)}_{c}$ has to be considered separately. The fact that the energy data $\varepsilon^{(n)}_{c}(L)$ are computed with Monte Carlo simulations performed at $T^{(n)}_{c}$ becomes important. Indeed, the critical temperatures $T^{(n)}_{c}$ of O$(n)$ models are provided in the literature with an uncertainty $\varDelta T^{(n)}_{c}$ whose effect in the determination of $\varDelta \varepsilon^{(n)}_{c}$ has to be checked with special care. As a matter of fact, $\varDelta T^{(n)}_{c}$ can be seen as the analog of a systematic source of error in an experimental setting; we will then denote by $\varDelta \varepsilon^{(n),syst}_{c}$ its contribution to $\varDelta \varepsilon^{(n)}_{c}$. The two contributions $\varDelta \varepsilon^{(n),stat}_{c}$ and $\varDelta \varepsilon^{(n),syst}_{c}$ to the uncertainty $\varDelta\varepsilon^{(n)}_{c}$ of $\varepsilon^{(n)}_{c}$ will be discussed separately in the following, and the final estimate of $\varepsilon^{(n)}_{c}$ will be given in the form $$\label{final-e-estimation} \varepsilon^{(n)}_{c}\pm \varDelta \varepsilon^{(n)}_{c} \equiv \varepsilon^{(n)}_{c}\pm \varDelta \varepsilon^{(n),stat}_{c} \pm \varDelta \varepsilon^{(n),syst}_{c}\, .$$ The systematic uncertainty $\varDelta \varepsilon^{(n),syst}_{c}$ can be estimated with two different methods.
In both cases the critical value $c^{(n)}_{c}$ of the specific heat is necessary and will be computed with a fit [^3] of the Monte Carlo data $c^{(n)}_{c}(L)$ according to Eq. (\[cv\_FSS\]). The two methods we used to compute $\varDelta \varepsilon^{(n),syst}_{c}$ are the following: - *Method 1.* $$\label{second-energy} \varDelta \bar{\varepsilon}^{(n),syst}_{c} = |\varepsilon^{(n)}_{c} - \bar{\varepsilon}^{(n)}_{+} | = |\varepsilon^{(n)}_{c} - \bar{\varepsilon}^{(n)}_{-} |\, :$$ $\bar{\varepsilon}^{(n)}_{\pm}$ denote the energy densities at $T^{(n)}_{\pm}=T^{(n)}_{c}\pm\varDelta T^{(n)}_{c}$, computed with a first order Taylor expansion around $\varepsilon^{(n)}_{c}$; that is, $$\label{Taylor-espansion} \begin{split} \bar{\varepsilon}^{(n)}_{\pm}= &\varepsilon^{(n)}_{c}\Big|_{T=T^{(n)}_{c}} + \frac{d\varepsilon}{dT}\Big|_{T=T^{(n)}_{c}}\;\left[\left(T^{(n)}_{c}\pm\varDelta T^{(n)}_{c} \right)- T^{(n)}_{c}\right]=\\ =& \varepsilon^{(n)}_{c}\pm c^{(n)}_{c}\;\varDelta T^{(n)}_{c}\, . \end{split}$$ - *Method 2.* $$\varDelta \tilde{\varepsilon}^{(n),syst}_{c}= \max\left\{ |\varepsilon^{(n)}_{c} - \tilde{\varepsilon}^{(n)}_{+}|,\; |\varepsilon^{(n)}_{c} - \tilde{\varepsilon}^{(n)}_{-}| \right\}\, ,$$ with $\tilde{\varepsilon}^{(n)}_{\pm}$ denoting again the energy density values at $T^{(n)}_{\pm}$; at variance with $\bar{\varepsilon}^{(n)}_{\pm}$, the $\tilde{\varepsilon}^{(n)}_{\pm}$ are computed with a fit of the energy density data $\tilde{\varepsilon}^{(n)}_{\pm}(L)$ at $T^{(n)}_{\pm}$.
The values of $\tilde{\varepsilon}^{(n)}_{\pm}(L)$ are computed in part with a first-order Taylor expansion of the numerical data for $\varepsilon^{(n)}_{c}(L)$, through the relation $$\label{Taylor-espansion-2} \begin{split} \tilde{\varepsilon}^{(n)}_{\pm}(L) &= \varepsilon^{(n)}(L) \Big|_{T=T^{(n)}_{c}}+ c^{(n)}_{c}(L)\Big|_{T=T^{(n)}_{c}} \left[\left(T^{(n)}_{c}\pm\varDelta T^{(n)}_{c}\right)-T^{(n)}_{c}\right]\\ &=\varepsilon^{(n)}(L)\pm c^{(n)}_{c}(L)\;\varDelta T^{(n)}_{c}\, , \end{split}$$ and in part (namely, for $L=32$, $64$ and $128$) numerically, by performing Monte Carlo simulations of the systems at $T^{(n)}_{\pm}$; the two procedures give results for $\tilde{\varepsilon}^{(n)}_{\pm}(L)$ in excellent agreement. In the end, the fitting procedure is applied according to the relation [^4] $$\label{energy_FSS_out} \tilde{\varepsilon}^{(n)}_{\pm}(L)=\tilde{\varepsilon}^{(n)}_{\pm} +\varepsilon_{n,\pm} L^{D_{n}}\, ,$$ with $D_{n}$ given in Eq. (\[definition\_Dn\]). At the end of the analysis, $\varDelta \bar{\varepsilon}^{(n),syst}_{c}$ and $\varDelta \tilde{\varepsilon}^{(n),syst}_{c}$ will be compared and one of them will be chosen as the final estimate of $\varDelta \varepsilon^{(n),syst}_{c}$.

$n=1$, the Ising model {#numericalO1model}
------------------------------------------

The derivation of the critical energy density $\varepsilon^{(1)}_{c}$ for the three-dimensional Ising model can be found in Ref. [@HasenbuschPinn:jphysa1998]: the authors performed a FSS analysis of data computed with canonical Monte Carlo simulations of the system, considering lattices of up to $112^{3}$ spins. The critical coupling $\beta^{(1)}_{c} \equiv 1/T^{(1)}_{c}$ reported in [@HasenbuschPinn:jphysa1998; @TalapovBlote:jphysa1996] is $\beta^{(1)}_{c}=0.2216544(6)$ \[see as well the discussion in [@Binder:book], p. 265 (Chapter 7), and references therein\].
The best final estimate of the critical energy density is given by $$\label{Ising_best_energy} \varepsilon^{(1)}_{c} \pm \varDelta\varepsilon^{(1)}_{c} =-0.99063 \pm 0.00004\, .$$ The above result has been computed considering system sizes close to the maximum achievable with our tools and represents one of the most accurate estimates of $\varepsilon^{(1)}_{c}$ available in the literature (see, e.g., [@BradyMoreira:prb1993] for a comparison). Moreover, the uncertainty $\varDelta \varepsilon^{(1)}_{c}$ in Eq. (\[Ising\_best\_energy\]) has been computed by combining the statistical and the systematic error as discussed in the previous Section. These facts led us not to repeat the analysis for the Ising model and to consider Eq. (\[Ising\_best\_energy\]) as the best final estimate of $\varepsilon^{(1)}_{c}$. A further comment on this point can be found in Sec. \[ConclusionsNumericalTest\].

$n=2$, the XY model {#numericalO2model}
---------------------------------------

We performed canonical Monte Carlo simulations of the $XY$ model defined on regular cubic lattices with edges $L=32,40,50,64,80,100$ and $128$. The simulations have been performed at the temperature $T = 2.201673$, according to the critical value $T^{(2)}_{c}=2.201673(97)$ reported in [@GottlobHasenbusch:physicaa1993]. The values of $\varepsilon^{(2)}_{c}(L)$ and $c^{(2)}_{c}(L)$ obtained from the simulations are reported in Table \[table:xy\_data\_ec\]; the statistical errors are in parentheses.
  ------- ---------------------------- ------------------
  $L$     $\varepsilon^{(2)}_{c}(L)$   $c^{(2)}_{c}(L)$
  32      -0.9982(3)                   2.611(31)
  40      -0.99589(12)                 2.709(18)
  50      -0.99382(9)                  2.825(24)
  64      -0.99233(14)                 2.923(59)
  80      -0.99137(6)                  3.074(34)
  100     -0.99067(4)                  3.199(38)
  128     -0.99020(4)                  3.282(54)
  ------- ---------------------------- ------------------

  : [Monte Carlo results for the energy density $\varepsilon^{(2)}_{c}(L)$ and for the specific heat $c^{(2)}_{c}(L)$ at the critical temperature $T^{(2)}_{c}=2.201673$. The statistical errors are in parentheses.]{} \[table:xy\_data\_ec\]

We fitted the energy density data reported in Table \[table:xy\_data\_ec\] according to relation (\[energy\_FSS\]), considering different choices for the critical exponents. In particular we chose: (*i*) the experimental values $\nu_2 = 0.6705(6)$ and $\alpha_2 = -0.0115(18)$ as reported in [@GoldnerAhlers:prb1992]; (*ii*) $\nu_2=0.662(7)$ obtained in [@GottlobHasenbusch:physicaa1993] at the same critical value of the temperature as in our case, and $\alpha_2=-0.014(21)$ as derived from the scaling relation $\alpha_2=2-d\nu_2$ with $d=3$; (*iii*) $\nu_2=0.6723(3)$ obtained in [@HasenbuschTorok:jphysa1999] with a high-statistics simulation performed at a slightly different value of the temperature, and $\alpha_2=-0.017(3)$ as derived from the scaling relation $\alpha=2-d\nu$ with $d=3$; (*iv*) $\alpha_{2}/\nu_{2}=-0.0258(75)$ and $1/\nu_{2}=1.487(81)$ as obtained in [@SchultkaManousakis:prb1995] with a similar analysis. The results of the fits for $\varepsilon^{(2)}_{c}$ and for the fitting parameter $\varepsilon_{2}$ are reported in Table \[table:xy\_fit\_results\]. We also performed a four-parameter fit considering $\alpha_2$, $\nu_2$, $\varepsilon^{(2)}_{c}$ and $\varepsilon_{2}$ as free parameters. However, no meaningful results could be extracted from the fit, the relative error on the critical exponents being larger than $100\%$ (data not shown).
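For a fixed exponent combination $D_2=(\alpha_2-1)/\nu_2$, the fit of Eq. (\[energy\_FSS\]) is linear in $\varepsilon^{(2)}_{c}$ and $\varepsilon_{2}$, so the normal equations of a weighted least-squares fit can be solved in closed form. The following is a minimal Python sketch of this step (not the code actually used in the analysis), applied to the data of Table \[table:xy\_data\_ec\] with the exponents of choice (*ii*):

```python
import math

# Weighted least-squares fit of the FSS relation
#   eps(L) = eps_c + eps_2 * L**D2,   D2 = (alpha_2 - 1)/nu_2,
# to the XY-model data of Table [table:xy_data_ec].  With D2 fixed, the
# model is linear in (eps_c, eps_2), so the normal equations can be
# solved in closed form; no fitting library is needed.

alpha2, nu2 = -0.014, 0.662            # exponent choice (ii) in the text
D2 = (alpha2 - 1.0) / nu2              # ~= -1.5317

L     = [32, 40, 50, 64, 80, 100, 128]
eps   = [-0.9982, -0.99589, -0.99382, -0.99233, -0.99137, -0.99067, -0.99020]
sigma = [0.0003, 0.00012, 0.00009, 0.00014, 0.00006, 0.00004, 0.00004]

x = [l ** D2 for l in L]               # scaling variable L^{D2}
w = [1.0 / s ** 2 for s in sigma]      # statistical weights 1/sigma^2

S   = sum(w)
Sx  = sum(wi * xi for wi, xi in zip(w, x))
Sy  = sum(wi * yi for wi, yi in zip(w, eps))
Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, eps))

det   = S * Sxx - Sx * Sx
eps_c = (Sxx * Sy - Sx * Sxy) / det    # intercept: critical energy density
eps_2 = (S * Sxy - Sx * Sy) / det      # amplitude of the finite-size term
sig_eps_c = math.sqrt(Sxx / det)       # statistical error on the intercept

print(round(eps_c, 5), round(eps_2, 2))   # close to -0.98904 and -1.92
```

The intercept reproduces the value quoted in the second row of Table \[table:xy\_fit\_results\], and `sig_eps_c` comes out at the level of a few $10^{-5}$, compatible with the quoted statistical error.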
  ------------------------------------------- -----------------------------------------------------------------
  $\nu_2$ and $\alpha_2$                      results
  $\nu_2=0.6705$, $\alpha_2=-0.0115$          $\varepsilon^{(2)}_{c}=-0.98900(3)$, $\varepsilon_{2}=-1.77(2)$
  $\nu_2=0.662$, $\alpha_2=-0.014$            $\varepsilon^{(2)}_{c}=-0.98904(3)$, $\varepsilon_{2}=-1.92(2)$
  $\nu_2=0.6723$, $\alpha_2=-0.017$           $\varepsilon^{(2)}_{c}=-0.98901(3)$, $\varepsilon_{2}=-1.79(2)$
  $\alpha_2/\nu_2=-0.0258$, $1/\nu_2=1.487$   $\varepsilon^{(2)}_{c}=-0.98901(3)$, $\varepsilon_{2}=-1.79(2)$
  ------------------------------------------- -----------------------------------------------------------------

  : [Fitting values of the parameters $\varepsilon^{(2)}_{c}$ and $\varepsilon_{2}$ entering expression (\[energy\_FSS\]).]{} \[table:xy\_fit\_results\]

All the results reported in Table \[table:xy\_fit\_results\] have a $\chi^2/\text{d.o.f.}\simeq 0.6$, and all the values of the critical energy density $\varepsilon^{(2)}_{c}$ are consistent with each other. This implies that $\varepsilon^{(2)}_{c}$ is rather insensitive to the choice of the critical exponents $\nu_2$ and $\alpha_2$ (and thus to the values of the critical temperatures at which they have been computed). In any case, as the best estimate of the fitting parameters we chose $$\label{XY_energy_I_best_result} \begin{split} \varepsilon^{(2)}_{c} \pm \varDelta \varepsilon^{(2),stat}_{c} &=-0.98904\;\pm\;0.00003\, , \\ \varepsilon_{2} &=-1.92\;\pm\;0.02\,, \end{split}$$ as reported in the second row of Table \[table:xy\_fit\_results\]. These values correspond to the critical exponents $\nu_2=0.662$ and $\alpha_2=-0.014$ derived in [@GottlobHasenbusch:physicaa1993] assuming the same value of $T^{(2)}_{c}$ as in our case.
The curve $\varepsilon^{(2)}_{c}(L)$ given by Eq. (\[energy\_FSS\]) for $n=2$, with the values of $\varepsilon^{(2)}_{c}$ and $\varepsilon_{2}$ as in Eq. (\[XY\_energy\_I\_best\_result\]), is shown in Fig. \[xy\_energy\_plot\] together with the simulation data. The values of $\varepsilon^{(2)}_{c}$ and $\varepsilon_{2}$ in Eq. (\[XY\_energy\_I\_best\_result\]) are consistent with those reported in [@SchultkaManousakis:prb1995], where the authors found $\varepsilon^{(2)}_{c}=-0.9890(4)$ and $\varepsilon_{2}=-1.81(38)$. It is worth noticing that our result $\varepsilon^{(2)}_{c}=-0.98904(3)$ given in Eq. (\[XY\_energy\_I\_best\_result\]) has one digit of precision more than previous results obtained with analogous techniques, see e.g. [@SchultkaManousakis:prb1995]. We fitted the data for $c^{(2)}_{c}(L)$ reported in Table \[table:xy\_data\_ec\] according to the scaling relation given in Eq. (\[cv\_FSS\]), keeping the ratio $\alpha_2/\nu_2$ fixed to $\alpha_2/\nu_2=-0.02$, as given in [@GottlobHasenbusch:physicaa1993]. The result of the fit is reported in the first row of Table \[table:xy\_cv\_fit\_results\]. To check the dependence of the specific heat on the value of the ratio $\alpha_2/\nu_2$, we also performed the same fit for different values of the critical exponents: (*i*) $\alpha_2/\nu_2=-0.0285$ as reported in [@SchultkaManousakis:prb1995]; (*ii*) $\alpha_2/\nu_2=-0.025$ as obtained from data in [@HasenbuschTorok:jphysa1999]; (*iii*) $\alpha_2/\nu_2=-0.0172$ as obtained from the experimental values of the critical exponents reported in [@GoldnerAhlers:prb1992]. The results of the fits for $c^{(2)}_{c}$ and $c_{2}$ with these choices of the critical exponents are reported in the second, third and fourth rows of Table \[table:xy\_cv\_fit\_results\], respectively. Although the values of $c^{(2)}_{c}$ reported in Table \[table:xy\_cv\_fit\_results\] are not all consistent with each other, the results in the first three rows are comparable.
Moreover, our results assuming $\alpha_2/\nu_2=-0.0285$ are in agreement with those computed in [@SchultkaManousakis:prb1995] with the same choice of the ratio of the critical exponents: there, the authors found $c^{(2)}_{c}=20.45(66)$ and $c_{2}=-19.61(72)$ with a fit based on data derived from Monte Carlo simulations at a different value of the critical temperature. Interestingly, the values of the fitting parameters $c^{(2)}_{c}$ and $c_{2}$ are slightly larger when the experimentally determined critical exponents $\nu_2=0.6705$ and $\alpha_2=-0.0115$ [@GoldnerAhlers:prb1992] are considered, see the last row of Table \[table:xy\_cv\_fit\_results\]. This fact was already pointed out in [@SchultkaManousakis:prb1995], where the authors found $c^{(2)}_{c}=30.3\pm1.0$ and $c_{2}=-29.4\pm1.1$ for the same choice of the critical exponents. These results suggest that the value of $c^{(2)}_{c}$ depends strongly on the value of the ratio $\alpha_2/\nu_2$. In [@SchultkaManousakis:prb1995] the authors considered lattice sizes up to $L=80$ and suggested that a wider range of lattice sizes would be necessary to determine the asymptotic value of $c^{(2)}_{c}$. In our analysis we considered lattice sizes up to $L=128$, corresponding to a number of spins almost $4$ times larger than in [@SchultkaManousakis:prb1995], but the discrepancy is still visible. Lattice sizes larger than $128^3$ spins may be needed to improve the estimate of $c^{(2)}_{c}$. For our purposes, we can consider $$\label{XY_cv_best_result} \begin{split} c^{(2)}_{c}\pm \varDelta c^{(2)}_{c} &=28.4\pm0.6\,,\\ c_{2}&=-27.7\pm0.7 \end{split}$$ as the best final estimates of the fitting parameters. These quantities derive from the fit with $\alpha_2/\nu_2=-0.02$ as obtained in [@GottlobHasenbusch:physicaa1993], assuming the same value of the critical temperature $T^{(2)}_{c}=2.201673$ as in our case. We refer the reader to [@SchultkaManousakis:prb1995] for a more detailed discussion of this problem.
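The strong dependence of $c^{(2)}_{c}$ on the ratio $\alpha_2/\nu_2$ discussed above can be checked directly: once the ratio is fixed, Eq. (\[cv\_FSS\]) is again linear in $c^{(2)}_{c}$ and $c_{2}$. A minimal Python sketch (not the code used for the quoted numbers), with the specific-heat data of Table \[table:xy\_data\_ec\]:

```python
# Fit c(L) = c_c + c_2 * L**(alpha_2/nu_2) to the XY specific-heat data
# of Table [table:xy_data_ec] for several fixed values of alpha_2/nu_2,
# to expose the strong dependence of the asymptotic value c_c on the
# exponent ratio.

L     = [32, 40, 50, 64, 80, 100, 128]
cv    = [2.611, 2.709, 2.825, 2.923, 3.074, 3.199, 3.282]
sigma = [0.031, 0.018, 0.024, 0.059, 0.034, 0.038, 0.054]
w     = [1.0 / s ** 2 for s in sigma]

def weighted_fit(ratio):
    """Closed-form weighted least squares for (c_c, c_2) at fixed ratio."""
    x   = [l ** ratio for l in L]
    S   = sum(w)
    Sx  = sum(wi * xi for wi, xi in zip(w, x))
    Sy  = sum(wi * yi for wi, yi in zip(w, cv))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, cv))
    det = S * Sxx - Sx * Sx
    return (Sxx * Sy - Sx * Sxy) / det, (S * Sxy - Sx * Sy) / det

fits = {r: weighted_fit(r) for r in (-0.0285, -0.025, -0.02, -0.0172)}
for r, (c_c, c_2) in fits.items():
    print(r, round(c_c, 1), round(c_2, 1))
```

The fitted asymptotic value changes by roughly $40\%$ across the quoted range of $\alpha_2/\nu_2$, in line with the entries of Table \[table:xy\_cv\_fit\_results\].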
  ------------------ ------------------------------------------------
  $\alpha_2/\nu_2$   results
  $-0.02$            $c^{(2)}_{c}=28.4\pm0.6$, $c_{2}=-27.7\pm0.7$
  $-0.0285$          $c^{(2)}_{c}=22.7\pm0.5$, $c_{2}=-21.9\pm0.5$
  $-0.025$           $c^{(2)}_{c}=23.3\pm0.5$, $c_{2}=-22.6\pm0.6$
  $-0.0172$          $c^{(2)}_{c}=32.5\pm0.7$, $c_{2}=-31.8\pm0.8$
  ------------------ ------------------------------------------------

  : [Fitting values of the parameters $c^{(2)}_{c}$ and $c_{2}$ entering expression (\[cv\_FSS\]).]{} \[table:xy\_cv\_fit\_results\]

The curve $c^{(2)}_{c}(L)$ given by Eq. (\[cv\_FSS\]) for $n=2$, with $c^{(2)}_{c}$ and $c_{2}$ as in Eq. (\[XY\_cv\_best\_result\]), is plotted in Fig. \[xy\_cv\_plot\] together with the simulation data.

![ [Energy density $\varepsilon^{(2)}_{c}$ at the critical temperature $T^{(2)}_{c}=2.201673$ as a function of $L$. The solid curve is the fit to (\[XY\_energy\_I\_best\_result\]) with $\nu_2=0.662$ and $\alpha_2=-0.014$.]{}[]{data-label="xy_energy_plot"}](xy-fit-energie.eps){width="10cm"}

![ [Specific heat $c^{(2)}_{c}$ at the critical temperature $T^{(2)}_{c}=2.201673$ as a function of $L$. The solid curve represents the fit to (\[XY\_cv\_best\_result\]) with $\alpha_2/\nu_2=-0.02$.]{}[]{data-label="xy_cv_plot"}](xy-fit-cv.eps)

In order to evaluate the systematic contribution to the uncertainty, $\varDelta \varepsilon^{(2),syst}_{c}$, we applied the two methods presented in Sec. \[SecFSS\]:

- *Method 1.* From Eq. (\[Taylor-espansion\]) we computed $\bar{\varepsilon}^{(2)}_{+}$ and $\bar{\varepsilon}^{(2)}_{-}$ at $T^{(2)}_{+}=2.20177$ and $T^{(2)}_{-}= 2.201576$, respectively, assuming $\varepsilon^{(2)}_{c}=-0.98904$ as reported in Eq. (\[XY\_energy\_I\_best\_result\]).
These quantities are given by $\bar{\varepsilon}^{(2)}_{+}=-0.98629$ and $\bar{\varepsilon}^{(2)}_{-}=-0.99180$, and are such that $|\varepsilon^{(2)}_{c} - \bar{\varepsilon}^{(2)}_{+}| = |\varepsilon^{(2)}_{c}-\bar{\varepsilon}^{(2)}_{-}|\simeq 0.003$. In this way, we get $$\label{XY_energy_II_best_result} \varDelta \bar{\varepsilon}^{(2),syst}_{c}= |\varepsilon^{(2)}_{c} - \bar{\varepsilon}^{(2)}_{\pm}|=0.003\, .$$

- *Method 2.* We computed $\tilde{\varepsilon}^{(2)}_{\pm}$ with a fit of the energy density data $\tilde{\varepsilon}^{(2)}_{\pm}(L)$ for $L=40,50,80$ and $100$ at $T^{(2)}_{+}=2.20177$ and $T^{(2)}_{-}= 2.201576$, respectively, according to Eq. (\[energy\_FSS\_out\]) with $n=2$ and $D_2=-1.5317$ as derived from data in [@GottlobHasenbusch:physicaa1993]. The values $\tilde{\varepsilon}^{(2)}_{\pm}(L)$ for these lattice sizes are computed with Eq. (\[Taylor-espansion-2\]) from the data given in Table \[table:xy\_data\_ec\]. For the remaining values of $L$, namely $L=32, 64$ and $128$, we performed Monte Carlo simulations at $T^{(2)}_{+}$ and $T^{(2)}_{-}$, respectively, to compute the numerical values $\mathbf{\varepsilon}^{(2)}_{\pm}(32)$, $\mathbf{\varepsilon}^{(2)}_{\pm}(64)$ and $\mathbf{\varepsilon}^{(2)}_{\pm}(128)$. These numerical results have been compared with the same quantities as derived with the Taylor expansion (\[Taylor-espansion-2\]) and turned out to be consistent with them. This reinforces the robustness of the analytical procedure used to derive $\varDelta \tilde{\varepsilon}^{(2),syst}_{c}$, and we included the simulation values $\mathbf{\varepsilon}^{(2)}_{\pm}(32)$, $\mathbf{\varepsilon}^{(2)}_{\pm}(64)$ and $\mathbf{\varepsilon}^{(2)}_{\pm}(128)$ in the fitting procedure for the derivation of $\tilde{\varepsilon}^{(2)}_{\pm}$.
The data used in the analysis are given in Table \[table:xy\_data\_tc\_piu\_meno\_delta\], in which data derived from Monte Carlo simulations are in **bold** and data derived with the Taylor expansion (\[Taylor-espansion-2\]) are in plain text. The results of the fits are reported in Table \[xy\_data\_fit\_delta\_t\]; we get $$\label{XY_energy_III_best_result} \varDelta \tilde{\varepsilon}^{(2),syst}_{c}= {}^{\,|\varepsilon^{(2)}_{c}-\tilde{\varepsilon}^{(2)}_{+}|}_{\,|\varepsilon^{(2)}_{c}-\tilde{\varepsilon}^{(2)}_{-}|}= {}^{+0.0003}_{-0.0003}=0.0003\, .$$

  ------- ---------------------------- ----------------------------
  $L$     $\varepsilon^{(2)}_{+}(L)$   $\varepsilon^{(2)}_{-}(L)$
  32      **-0.99854(15)**             **-0.9984(3)**
  40      -0.99563(12)                 -0.99615(12)
  50      -0.99355(9)                  -0.99409(9)
  64      **-0.99197(7)**              **-0.99270(7)**
  80      -0.99107(6)                  -0.99167(6)
  100     -0.99036(4)                  -0.99098(4)
  128     **-0.98994(4)**              **-0.99049(4)**
  ------- ---------------------------- ----------------------------

  : [Energy density data $\varepsilon^{(2)}_{+}(L)$ and $\varepsilon^{(2)}_{-}(L)$ obtained via Taylor expansion (plain text) and numerical Monte Carlo simulations (**bold**), at $T^{(2)}_{+}=2.20177$ and $T^{(2)}_{-}=2.201576$, respectively.]{} \[table:xy\_data\_tc\_piu\_meno\_delta\]

  ------------------------- ---------------------------------------------------------------------
  constants                 results
  $T^{(2)}_{+}=2.20177$     $\varepsilon^{(2)}_{+}=-0.98871(5)$, $\varepsilon_{2,+}=-1.95(3)$
  $T^{(2)}_{-}=2.201576$    $\varepsilon^{(2)}_{-}=-0.98935(4)$, $\varepsilon_{2,-}=-1.91(3)$
  ------------------------- ---------------------------------------------------------------------

  : [Fitting values of the parameters $\varepsilon^{(2)}_{\pm}$ and $\varepsilon_{2,\pm}$. In parentheses are the statistical errors due to the fitting procedure.]{} \[xy\_data\_fit\_delta\_t\]

In Sec.
\[Sec\_NumericalTest\] we are going to compare the critical values of the energy density of different O$(n)$ models, both in the limit of small $n$ and in the limit $n\rightarrow\infty$; we should then consider $\varDelta \varepsilon^{(2),syst}_{c}=\varDelta \bar{\varepsilon}^{(2),syst}_{c}$ given in Eq. (\[XY\_energy\_II\_best\_result\]), as it is the larger of the two estimates of the systematic uncertainty reported in Eqs. (\[XY\_energy\_II\_best\_result\]) and (\[XY\_energy\_III\_best\_result\]). However, this result depends on the value of $c^{(2)}_{c}$ given in Eq. (\[XY\_cv\_best\_result\]), which, in turn, is strongly affected by the choice of the ratio $\alpha_2/\nu_2$. For this reason we prefer to consider $\varDelta \tilde{\varepsilon}^{(2),syst}_{c}$ given in Eq. (\[XY\_energy\_III\_best\_result\]) as the best estimate of $\varDelta \varepsilon^{(2),syst}_{c}$. We finally have $$\label{XY_energy_best_result} \varepsilon^{(2)}_{c}\pm \varDelta \varepsilon^{(2),stat}_{c}\pm \varDelta \varepsilon^{(2),syst}_{c}= -0.98904\;\pm\;0.00003\;\pm\;0.0003$$ as the final best estimate for the critical energy density of the O$(2)$ model in three dimensions. The uncertainty $\varDelta \varepsilon^{(2),syst}_{c}$ due to $\varDelta T^{(2)}_{c}$ is an order of magnitude larger than the statistical error: this feature will be common to all the other models considered.

$n=3$, the Heisenberg model {#numericalO3model}
-----------------------------------------------

We performed canonical Monte Carlo simulations of the Heisenberg model defined on regular cubic lattices with edges $L=32,40,50,64,80,100$ and $128$. As the best estimate of the critical temperature of the system we considered the value $T^{(3)}_{c}=1.44298(2)$ given in [@BrownCiftan:prb2006]. The values of $\varepsilon^{(3)}_{c}(L)$ and $c^{(3)}_{c}(L)$ obtained from the simulations are reported in Table \[table:O3\_data\_ec\]; the statistical errors are in parentheses.
  ------- ---------------------------- ------------------
  $L$     $\varepsilon^{(3)}_{c}(L)$   $c^{(3)}_{c}(L)$
  32      -0.99646(7)                  2.863(15)
  40      -0.99437(6)                  2.938(19)
  50      -0.99289(5)                  3.030(19)
  64      -0.99183(4)                  3.126(23)
  80      -0.99116(3)                  3.197(28)
  100     -0.99064(3)                  3.259(32)
  128     -0.990312(14)                3.367(28)
  ------- ---------------------------- ------------------

  : [Monte Carlo results for the energy density $\varepsilon^{(3)}_{c}(L)$ and for the specific heat $c^{(3)}_{c}(L)$ at the critical temperature $T^{(3)}_{c}=1.44298$.]{} \[table:O3\_data\_ec\]

We fitted the data reported in Table \[table:O3\_data\_ec\] according to relation (\[energy\_FSS\]) with $n=3$, considering $\varepsilon^{(3)}_{c}$ and $\varepsilon_{3}$ as fitting parameters. For the values of the critical exponents we considered different choices: (*i*) the best theoretical estimates $\nu_3=0.705(3)$ and $\alpha_3=-0.115(9)$, coming from a re-summed perturbation series analysis [@LeGuillouZinnJustin:prb1980]; (*ii*) $(\alpha_{3}-1)/\nu_{3}=-1.586(19)$ as obtained in [@HolmJanke:jphysa1994] from a similar analysis performed using a slightly different value of the critical temperature, namely $T_{c}=1.4430$; (*iii*) $(\alpha_{3}-1)/\nu_{3}=-1.5974$ as derived in [@BrownCiftan:prb2006] from a similar analysis performed using the same value of $T^{(3)}_{c}$ as in our case. The results of these fits for $\varepsilon^{(3)}_{c}$ and $\varepsilon_{3}$ are reported in Table \[table:O3\_fit\_results\].
  ---------------------------------- -----------------------------------------------------------------------
  $\nu_3$ and $\alpha_3$             results
  $\nu_3=0.705$, $\alpha_3=-0.115$   $\varepsilon^{(3)}_{c}= -0.989537(12)$, $\varepsilon_{3}=-1.652(10)$
  $(\alpha_3-1)/\nu_3=-1.586$        $\varepsilon^{(3)}_{c}=-0.989542(11)$, $\varepsilon_{3}=-1.677(10)$
  $(\alpha_3-1)/\nu_3=-1.5974$       $\varepsilon^{(3)}_{c}=-0.989556(10)$, $\varepsilon_{3}=-1.744(9)$
  ---------------------------------- -----------------------------------------------------------------------

  : [Fitting values of the parameters $\varepsilon^{(3)}_{c}$ and $\varepsilon_{3}$ entering expression (\[energy\_FSS\]).]{} \[table:O3\_fit\_results\]

We also performed a fit of all the parameters $\varepsilon^{(3)}_{c}$, $\varepsilon_{3}$ and $D_{3} = (\alpha_3 - 1)/\nu_3$ with the scaling relation $\varepsilon^{(3)}_{c}(L)=\varepsilon^{(3)}_{c}+\varepsilon_{3}L^{D_{3}}$. The results are $\varepsilon^{(3)}_{c}=-0.98958(3)$, $\varepsilon_{3}=-1.88(17)$ and $D_{3}=-1.62(2)$, with a $\chi^2/\text{d.o.f.}\simeq 0.43$. These results are in agreement with those reported in Table \[table:O3\_fit\_results\] and with the results reported in the literature, see e.g. [@HolmJanke:jphysa1994; @BrownCiftan:prb2006]. However, as they come from a three-parameter fit of a relatively small set of data, we chose to neglect them and to consider only the results reported in Table \[table:O3\_fit\_results\] in our study. The values of the parameters reported in the second row of Table \[table:O3\_fit\_results\] are consistent with the corresponding quantities reported in [@HolmJanke:jphysa1994], where the authors obtain $\varepsilon^{(3)}_{c}=-0.9894(1)$, $\varepsilon_{3}=-1.68(8)$ and $D_{3}=-1.586(19)$.
These values come from a three-parameter fit of the scaling relation $\varepsilon^{(3)}_{c}(L) = \varepsilon^{(3)}_{c} + \varepsilon_{3} L^{D_{3}}$ with $D_{3}=(\alpha_3-1)/\nu_3$, performed at $T_{c}=1.4430\neq T^{(3)}_{c}$. Besides supporting our results, this fact seems to suggest that $\varepsilon^{(3)}_{c}$ does not depend appreciably on the value of the critical temperature. As for the third row of Table \[table:O3\_fit\_results\], the results of the fit have to be compared with the results computed in [@BrownCiftan:prb2006] at the same value of $T^{(3)}_{c}$ as in our case. Therein, the authors find $$\varepsilon^{(3)}_{c}(L)=\varepsilon^{(3)}_{c}+\varepsilon_{3} L^{D_{3}} = -0.9896-1.7225\, L^{-1.5974} \, ,$$ the relative precision of the fit being $0.001\%$ or better. Also in this case our results, obtained for $D_{3}=-1.5974$, are perfectly consistent. The values of the parameter $\varepsilon^{(3)}_{c}$ reported in Table \[table:O3\_fit\_results\] are consistent with each other. The results reported in the third row of Table \[table:O3\_fit\_results\] have been determined considering the combination of critical exponents $D_{3}$ derived in [@BrownCiftan:prb2006] at the same value of the critical temperature as in our case. Since the numerical value of $\alpha_3/\nu_{3}$ is needed in the following to determine $c^{(3)}_{c}$, we give $$\label{H_energy_I_best_result} \begin{split} \varepsilon^{(3)}_{c}\;\pm\;\varDelta \varepsilon^{(3),stat}_{c} &=-0.989556\;\pm\;0.000010\,,\\ \varepsilon_{3}&=-1.744\;\pm\;0.009 \end{split}$$ as the best estimate of the critical energy density $\varepsilon^{(3)}_{c}$. The curve $\varepsilon^{(3)}_{c}(L)$ given by Eq. (\[energy\_FSS\]) for $n=3$, with the values of $\varepsilon^{(3)}_{c}$ and $\varepsilon_{3}$ as in Eq. (\[H\_energy\_I\_best\_result\]), is shown in Fig. \[H\_energy\_plot\] together with the simulation data. It is worth noticing that the value of $\varepsilon^{(3)}_{c}$ in Eq.
(\[H\_energy\_I\_best\_result\]) is given with one digit of precision more than previous results in the literature obtained with similar techniques [@HolmJanke:jphysa1994; @BrownCiftan:prb2006].

![ [Energy density $\varepsilon^{(3)}_{c}$ at the critical temperature $T^{(3)}_{c}=1.44298$ as a function of $L$. The solid curve is the fit to (\[H\_energy\_I\_best\_result\]) with $(\alpha_3-1)/\nu_3=-1.5974$.]{}[]{data-label="H_energy_plot"}](Heisenberg-fit-energie.eps){width="10cm"}

![ [Specific heat $c^{(3)}_{c}$ at the critical temperature $T^{(3)}_{c}=1.44298$ as a function of $L$. The solid curve is the fit to (\[H\_cv\_best\_results\]) with $\alpha_3/\nu_3=-0.1991$. ]{}[]{data-label="H_cv_plot"}](Heisenberg-fit-cv.eps){width="10cm"}

We fitted the data for $c^{(3)}_{c}(L)$ reported in Table \[table:O3\_data\_ec\] according to the scaling relation given in Eq. (\[cv\_FSS\]) with $\alpha_3/\nu_3=-0.1991$, as in [@BrownCiftan:prb2006]. The results of the fit are reported in the first row of Table \[table:O3\_cv\_fit\_results\]. To check the dependence of our results on the ratio $\alpha_3/\nu_3$, we performed the same fit for two different choices of $\alpha_3/\nu_3$: (*i*) $\alpha_3/\nu_3=-0.1631$ as derived in [@LeGuillouZinnJustin:prb1980], and (*ii*) $\alpha_3/\nu_3=-0.166$ as derived in [@HolmJanke:jphysa1994]. The results of these fits are reported in the second and third rows of Table \[table:O3\_cv\_fit\_results\], respectively. We chose $$\label{H_cv_best_results} \begin{split} c^{(3)}_{c}&=4.91\;\pm\;0.03\,,\\ c_{3} &=-4.09\;\pm\;0.09 \end{split}$$ as the best choice of the fitting parameters, as it corresponds to the choice of the critical exponents made in [@BrownCiftan:prb2006] at the same value of $T^{(3)}_{c}$ as in our case. The curve $c^{(3)}_{c}(L)$ given by Eq. (\[cv\_FSS\]) for $n=3$, with the values of the fitting parameters $c^{(3)}_{c}$ and $c_{3}$ as in Eq. (\[H\_cv\_best\_results\]), is shown in Fig. \[H\_cv\_plot\] together with the simulation data.
  ------------------ -------------------------------------------
  $\alpha_3/\nu_3$   results
  $-0.1991$          $c^{(3)}_{c}=4.91(3)$, $c_{3}=-4.09(9)$
  $-0.1631$          $c^{(3)}_{c}=5.31(5)$, $c_{3}=-4.32(8)$
  $-0.166$           $c^{(3)}_{c}=5.27(4)$, $c_{3}=-4.29(8)$
  ------------------ -------------------------------------------

  : [Fitting values of the parameters $c^{(3)}_{c}$ and $c_{3}$ entering expression (\[cv\_FSS\]) with $n=3$.]{} \[table:O3\_cv\_fit\_results\]

In order to evaluate $\varDelta \varepsilon^{(3),syst}_{c}$, we applied the two methods presented in Sec. \[SecFSS\], specialized to $n=3$:

- *Method 1.* From Eq. (\[Taylor-espansion\]) we computed the values of $\bar{\varepsilon}^{(3)}_{+}$ and $\bar{\varepsilon}^{(3)}_{-}$ at $T^{(3)}_{+}=1.44300$ and $T^{(3)}_{-}= 1.44296$, respectively, assuming $\varepsilon^{(3)}_{c}=-0.989556$ as reported in Eq. (\[H\_energy\_I\_best\_result\]). These quantities are given by $\bar{\varepsilon}^{(3)}_{+}=-0.989458$ and $\bar{\varepsilon}^{(3)}_{-}=-0.989654$, and are such that $|\varepsilon^{(3)}_{c} - \bar{\varepsilon}^{(3)}_{+}| = |\varepsilon^{(3)}_{c}-\bar{\varepsilon}^{(3)}_{-}|\simeq 0.00010$. In this way, we get $$\label{H_energy_II_best_result} \varDelta \bar{\varepsilon}^{(3),syst}_{c}= |\varepsilon^{(3)}_{c} - \bar{\varepsilon}^{(3)}_{\pm}|=0.00010\, .$$

- *Method 2.* We computed $\tilde{\varepsilon}^{(3)}_{\pm}$ with a fit of the energy density data $\tilde{\varepsilon}^{(3)}_{\pm}(L)$ for $L=32,40,50,64,80,100$ and $128$ at $T^{(3)}_{+}=1.44300$ and $T^{(3)}_{-}= 1.44296$, respectively, according to Eq. (\[energy\_FSS\_out\]) with $n=3$ and $D_{3}=-1.5974$ as in [@BrownCiftan:prb2006]. For $L=40,50,80,100$ we computed $\tilde{\varepsilon}^{(3)}_{\pm}(L)$ by applying Eq. (\[Taylor-espansion-2\]) to the data given in Table \[table:O3\_data\_ec\].
As in the case of the $XY$ model, the values of $\tilde{\varepsilon}^{(3)}_{\pm}(L)$ for $L=32,64$ and $128$ are obtained with Monte Carlo simulations performed at $T^{(3)}_{+}$ and $T^{(3)}_{-}$, respectively; these numerical values are consistent with the same quantities computed with Eq. (\[Taylor-espansion-2\]) (not shown here). The data involved in the analysis are shown in Table \[table:H\_data\_tc\_piu\_meno\_delta\]; data arising from the Monte Carlo simulations are printed in **bold** and data computed using Eq. (\[Taylor-espansion-2\]) are printed in plain text. From the fits we get $$\label{H_energy_III_best_result} \varDelta \tilde{\varepsilon}^{(3),syst}_{c}= {}^{\,|\tilde{\varepsilon}^{(3)}_{+} - \varepsilon^{(3)}_{c}|}_{\,|\tilde{\varepsilon}^{(3)}_{-}-\varepsilon^{(3)}_{c}|}= {}^{+0.00008}_{-0.00006}\, ,$$ as reported in Table \[O3\_data\_fit\_delta\_t\]. Since our purpose is to compare the values of the critical energy density for different O$(n)$ models, we choose $\varDelta \bar{\varepsilon}^{(3),syst}_{c}$ in Eq. (\[H\_energy\_II\_best\_result\]) as the best estimate of the systematic uncertainty on $\varepsilon^{(3)}_{c}$. From Eqs. (\[H\_energy\_I\_best\_result\]) and (\[H\_energy\_II\_best\_result\]) we finally get $$\label{H_energy_final_best_result} \varepsilon^{(3)}_{c}\;\pm\;\varDelta \varepsilon^{(3),stat}_{c}\;\pm\;\varDelta \varepsilon^{(3),syst}_{c}= -0.989556\;\pm\;0.000010\;\pm\;0.00010$$ as the best estimate of the critical energy density of the three-dimensional Heisenberg model in the thermodynamic limit.
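The numbers entering the two methods above involve only elementary arithmetic once the fits are done. A minimal Python sketch of the Method-1 propagation of Eq. (\[Taylor-espansion\]) and of the Taylor shift of Eq. (\[Taylor-espansion-2\]) for the $L$ values that were not re-simulated, using the entries of Table \[table:O3\_data\_ec\]:

```python
# Heisenberg model, Delta T_c = 0.00002 (from T_c = 1.44298(2)).
# Method 1: propagate Delta T_c via eps_pm = eps_c +/- c_c * Delta T_c.
# Method 2 (Taylor part): shift the finite-size data of
# Table [table:O3_data_ec] via eps_pm(L) = eps(L) +/- c(L) * Delta T_c.

dT = 0.00002

# Method 1 with the best-fit values of Eqs. (H_energy_I)/(H_cv):
eps_c, c_c = -0.989556, 4.91
delta_syst = c_c * dT          # ~= 0.00010, Eq. (H_energy_II_best_result)

# Method 2 Taylor shift for the L values not re-simulated:
table = {  # L: (eps(L), c(L)) at T_c = 1.44298
    40:  (-0.99437, 2.938),
    50:  (-0.99289, 3.030),
    80:  (-0.99116, 3.197),
    100: (-0.99064, 3.259),
}
eps_plus  = {l: e + c * dT for l, (e, c) in table.items()}
eps_minus = {l: e - c * dT for l, (e, c) in table.items()}

print(round(delta_syst, 5))
for l in sorted(table):
    print(l, round(eps_plus[l], 5), round(eps_minus[l], 5))
```

The shifted values agree with the plain-text entries of Table \[table:H\_data\_tc\_piu\_meno\_delta\] up to rounding in the last quoted digit.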
  ------- ---------------------------- ----------------------------
  $L$     $\varepsilon^{(3)}_{+}(L)$   $\varepsilon^{(3)}_{-}(L)$
  32      **-0.99636(7)**              **-0.99654(7)**
  40      -0.99431                     -0.99443
  50      -0.99283                     -0.99295
  64      **-0.99164(6)**              **-0.99182(4)**
  80      -0.99110                     -0.99122
  100     -0.99058                     -0.99071
  128     **-0.990232(19)**            **-0.99039(2)**
  ------- ---------------------------- ----------------------------

  : [Energy density data $\varepsilon^{(3)}_{+}(L)$ and $\varepsilon^{(3)}_{-}(L)$ obtained via Taylor expansion (plain text) and numerical Monte Carlo simulations (**bold**), at $T^{(3)}_{+}=1.44300$ and $T^{(3)}_{-}=1.44296$, respectively. The statistical errors are in parentheses.]{} \[table:H\_data\_tc\_piu\_meno\_delta\]

  ------------ -------------------------------------------------------------------------
  $D_3$        results
  $-1.5974$    $\varepsilon^{(3)}_{+}=-0.989479(19)$, $\varepsilon_{+,3}=-1.743(16)$
  $-1.5974$    $\varepsilon^{(3)}_{-}=-0.98962(2)$, $\varepsilon_{-,3}=-1.738(17)$
  ------------ -------------------------------------------------------------------------

  : [Fitting values of the parameters $\varepsilon^{(3)}_{\pm}$ and $\varepsilon_{\pm,3}$.]{} \[O3\_data\_fit\_delta\_t\]

$n=4$, the O$(4)$ model {#numericalO4model}
-------------------------------------------

We performed canonical Monte Carlo simulations of the O$(4)$ model on regular cubic lattices with edges $L=32,40,64,80,100$ and $128$. For the critical temperature of the system we chose the value $T^{(4)}_{c}=1.06835(13)$ given in [@KanayaKaya:prd1995]; the simulations were therefore performed at $T=1.06835$. Table \[table:O4\_data\_ec\] shows the values of $\varepsilon^{(4)}_{c}(L)$ and $c^{(4)}_{c}(L)$ involved in the analysis, with statistical errors in parentheses.
  ------- ---------------------------- ------------------
  $L$     $\varepsilon^{(4)}_{c}(L)$   $c^{(4)}_{c}(L)$
  32      -0.996930(67)                3.195(20)
  40      -0.995431(53)                3.282(21)
  64      -0.993374(35)                3.416(27)
  80      -0.992875(20)                3.470(39)
  100     -0.992482(23)                3.551(44)
  128     -0.992260(20)                3.617(43)
  ------- ---------------------------- ------------------

  : [Monte Carlo results for the energy density $\varepsilon^{(4)}_{c}(L)$ and for the specific heat $c^{(4)}_{c}(L)$ at the critical temperature $T^{(4)}_{c}=1.06835$.]{} \[table:O4\_data\_ec\]

We fitted the data reported in Table \[table:O4\_data\_ec\] according to Eq. (\[energy\_FSS\]) with $n=4$, considering $\varepsilon^{(4)}_{c}$ and $\varepsilon_{4}$ as fitting parameters. For the values of the critical exponents we considered two different cases: (*i*) $\nu_4=0.7479(80)$ as reported in [@KanayaKaya:prd1995] using the same value of the critical temperature as in our case, and $\alpha_4=-0.244(24)$ as obtained from the scaling relation $\alpha=2-d\nu$ with $d=3$; (*ii*) $\alpha_4=-0.21312$ and $\nu_4=0.73771$ as obtained from the scaling relations $\alpha=2 - \beta ( 1 + \delta )$ and $\nu=(2-\alpha)/d$ with $d=3$, from data reported in [@EngelsKarsch:prd2012] using $T_{c}=1.06849$. In [@EngelsKarsch:prd2012] the values of $\varepsilon^{(4)}_{c}$ and $c^{(4)}_{c}$ were determined with a finite-size scaling analysis in an external field $h$, followed by an extrapolation of the results in the limit $h\rightarrow 0$. As we shall see in the following, their results are in good agreement with ours although derived with a slightly different approach: this supports the validity of our analysis. The results of the fits for $\varepsilon^{(4)}_{c}$ and $\varepsilon_{4}$ are reported in Table \[table:O4\_fit\_results\].
| Fixed exponents | Fit results | $\chi^2/\text{d.o.f.}$ |
|-----------------|-------------|------------------------|
| $\nu_4=0.7479$, $\alpha_4=-0.244$ | $\varepsilon^{(4)}_{c}= -0.99174(2)$, $\varepsilon_{4}=-1.68(2)$ | |
| $\nu_4=0.73771$, $\alpha_4=-0.21312$ | $\varepsilon^{(4)}_{c}= -0.99170(2)$, $\varepsilon_{4}=-1.57(2)$ | |

: [Fitting values of the parameters $\varepsilon^{(4)}_{c}$ and $\varepsilon_{4}$ entering Eq. (\[energy\_FSS\]).]{} \[table:O4\_fit\_results\]

We also performed a four-parameter fit with $\alpha_4$, $\nu_4$, $\varepsilon^{(4)}_{c}$ and $\varepsilon_{4}$ as free parameters. However, as in the $n=2$ case, no meaningful results can be extracted from the fit, the relative error on the critical exponents being larger than $100\%$. The results of this fit are not shown here and will be neglected in the following. The results for the critical energy density $\varepsilon^{(4)}_{c}$ shown in Table \[table:O4\_fit\_results\] are consistent with each other. As anticipated, they are also in good agreement with the known results, see e.g. [@EngelsKarsch:prd2012], where the authors find $\varepsilon^{(4)}_{c}=-0.991792(28)$ from a FSS analysis in an external magnetic field. We chose $$\label{O4_energy_I_best_result} \begin{split} \varepsilon^{(4)}_{c}\;\pm\;\varDelta \varepsilon^{(4),stat}_{c}&= -0.99174\;\pm\;0.00002\,,\\ \varepsilon_{4}&=-1.69\;\pm\;0.02 \end{split}$$ as our best estimate of the critical energy density $\varepsilon^{(4)}_{c}$ and of the fitting parameter $\varepsilon_{4}$, as reported in the first row of Table \[table:O4\_fit\_results\].
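The two-parameter fit in the first row can be reproduced from the data of Table \[table:O4\_data\_ec\]. A minimal sketch, assuming the standard FSS form $\varepsilon_c(L) = \varepsilon_c + \varepsilon_4\, L^{(\alpha_4-1)/\nu_4}$ for Eq. (\[energy\_FSS\]) (an assumption, since that equation lies outside this excerpt, although it reproduces the tabulated data within errors):

```python
import numpy as np
from scipy.optimize import curve_fit

# Data of Table [table:O4_data_ec]
L   = np.array([32, 40, 64, 80, 100, 128], dtype=float)
eps = np.array([-0.996930, -0.995431, -0.993374,
                -0.992875, -0.992482, -0.992260])
err = np.array([67, 53, 35, 20, 23, 20]) * 1e-6  # statistical errors

# Fixed critical exponents, case (i): nu_4 = 0.7479, alpha_4 = -0.244
alpha, nu = -0.244, 0.7479

def fss(L, eps_c, eps4):
    # assumed form of Eq. (energy_FSS): eps_c(L) = eps_c + eps4 * L**((alpha-1)/nu)
    return eps_c + eps4 * L ** ((alpha - 1.0) / nu)

(eps_c, eps4), _ = curve_fit(fss, L, eps, p0=(-0.99, -1.0), sigma=err)
print(eps_c, eps4)  # close to the first row of Table [table:O4_fit_results]
```

With the exponents of case (*ii*) the second row is obtained analogously.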
Indeed, these results come from the choice of the critical exponents as in [@KanayaKaya:prd1995], where the same value of the critical temperature as in our case was used. The curve $\varepsilon^{(4)}_{c}(L)$ given by Eq. (\[energy\_FSS\]) for $n=4$ and for $\varepsilon^{(4)}_{c}$ and $\varepsilon_{4}$ as in Eq. (\[O4\_energy\_I\_best\_result\]) is shown in Fig. \[O4\_energy\_plot\] together with the simulation data used in the analysis. ![ [Energy density $\varepsilon^{(4)}_{c}$ at the critical temperature $T^{(4)}_{c}=1.06835$ as a function of $L$. The solid curve is the fit to Eq. (\[energy\_FSS\]) with $\alpha_4=-0.244$ and $\nu_4=0.7479$.]{}[]{data-label="O4_energy_plot"}](O4-fit-energie.eps){width="10cm"} ![ [Specific heat $c^{(4)}_{c}$ at the critical temperature $T^{(4)}_{c}=1.06835$ as a function of $L$. The solid curve is the fit to Eq. (\[cv\_FSS\]) with $\alpha_4/\nu_4=-0.326$.]{}[]{data-label="O4_cv_plot"}](O4-fit-cv.eps){width="10cm"} We fitted the data for $c^{(4)}_{c}(L)$ reported in Table \[table:O4\_data\_ec\] according to the scaling relation given in Eq. (\[cv\_FSS\]) with $n=4$, keeping the value of the ratio $\alpha_4/\nu_4$ fixed to $\alpha_4/\nu_4=-0.326$, as derived in [@KanayaKaya:prd1995] at the same value of $T^{(4)}_{c}$ as in our case. The results of the fit are given by $$\label{O4_cv_best_results} \begin{split} c^{(4)}_{c}&=4.32\;\pm\;0.03\,,\\ c_{4}&=-3.46\;\pm\;0.10\,, \end{split}$$ and are reported in the first row of Table \[table:O4\_cv\_fit\_results\]. To check the dependence of our results on the value of the ratio $\alpha_4/\nu_4$, we also performed the fit with a different choice, $\alpha_4/\nu_4=-0.289$, as derived from data reported in [@EngelsKarsch:prd2012]. The results of this fit are reported in the second row of Table \[table:O4\_cv\_fit\_results\]. The values of $c^{(4)}_{c}$ reported in Table \[table:O4\_cv\_fit\_results\] are in good agreement with each other.
Moreover, the value of $c^{(4)}_{c}$ in the second row of Table \[table:O4\_cv\_fit\_results\] is consistent with the corresponding quantity reported in [@EngelsKarsch:prd2012], derived with a rather different procedure.\

| Fixed ratio | Fit results | $\chi^2/\text{d.o.f.}$ |
|-------------|-------------|------------------------|
| $\alpha_4/\nu_4=-0.326$ | $c^{(4)}_{c}=4.32(3)$, $c_{4}=-3.46(10)$ | |
| $\alpha_4/\nu_4=-0.289$ | $c^{(4)}_{c}=4.43(3)$, $c_{4}=-3.37(9)$ | |

: [Fitting values of the parameters $c^{(4)}_{c}$ and $c_{4}$ entering Eq. (\[cv\_FSS\]) with $n=4$.]{} \[table:O4\_cv\_fit\_results\]

In order to estimate $\varDelta \varepsilon^{(4),syst}_{c}$ we applied the two methods presented in Sec. \[SecFSS\]:

- *Method 1.* From Eq. (\[Taylor-espansion\]), we computed the values of $\bar{\varepsilon}^{(4)}_{+}$ and $\bar{\varepsilon}^{(4)}_{-}$ at $T^{(4)}_{+}=1.06848$ and $T^{(4)}_{-}=1.06822$, respectively, assuming $\varepsilon^{(4)}_{c}=-0.99174$ as reported in Eq. (\[O4\_energy\_I\_best\_result\]). These quantities are given by $\bar{\varepsilon}^{(4)}_{+}=-0.991178$ and $\bar{\varepsilon}^{(4)}_{-}=-0.992302$ and are such that $|\varepsilon^{(4)}_{c} - \bar{\varepsilon}^{(4)}_{+}| = |\varepsilon^{(4)}_{c}-\bar{\varepsilon}^{(4)}_{-}|\simeq 0.0006$. In this way, we get $$\label{O4_energy_II_best_result} \varDelta \bar{\varepsilon}^{(4),syst}_{c}= |\varepsilon^{(4)}_{c} - \bar{\varepsilon}^{(4)}_{\pm}|=0.0006.$$

- *Method 2.* We computed $\tilde{\varepsilon}^{(4)}_{\pm}$ with a fit of the energy density data $\tilde{\varepsilon}^{(4)}_{\pm}(L)$ with $L=32, 64$ and $128$, derived with Monte Carlo simulations performed at $T^{(4)}_{+}=1.06848$ and $T^{(4)}_{-}=1.06822$, respectively; the fits were computed according to the relation in Eq. (\[energy\_FSS\_out\]) with $n=4$ and $D_{4}=-0.326$ as in [@KanayaKaya:prd1995].
At variance with what we have done for $n=2$ and $3$, in this case we did not consider the values of the critical energy density for other $L$-values, obtained with Eq. (\[Taylor-espansion-2\]). Indeed, in this case the fits produced extremely poor results when Taylor-expanded data were included. The Monte Carlo data involved in the analysis are given in Table \[table:O4\_data\_tc\_piu\_meno\_delta\]; the statistical errors are reported in parentheses. The results of the fit, shown in Table \[O4\_data\_fit\_delta\_t\], are such that $$\label{O4_energy_III_best_result} \varDelta \tilde{\varepsilon}^{(4),syst}_{c}= \begin{cases} |\varepsilon^{(4)}_{+} - \varepsilon^{(4)}_{c}| = 0.00006\,,\\ |\varepsilon^{(4)}_{-} - \varepsilon^{(4)}_{c}| = 0.00002\,. \end{cases}$$ As for the O$(2)$ and the O$(3)$ models, we are going to consider $\varDelta \varepsilon^{(4),syst}_{c}=\varDelta \bar{\varepsilon}^{(4),syst}_{c}=0.0006$ given by Eq. (\[O4\_energy\_II\_best\_result\]), being larger than $\varDelta \tilde{\varepsilon}^{(4),syst}_{c}$ reported in Eq. (\[O4\_energy\_III\_best\_result\]). We finally get $$\label{O4_energy_final_best_result} \varepsilon^{(4)}_{c}\;\pm\;\varDelta \varepsilon^{(4),stat}_{c}\;\pm\;\varDelta \varepsilon^{(4),syst}_{c}= -0.99174\;\pm\;0.00002 \;\pm\;0.0006$$ as the final value of the critical energy density of the three-dimensional O$(4)$ model in the thermodynamic limit. As for the O$(2)$ and the O$(3)$ models, the uncertainty on $\varepsilon^{(4)}_{c}$ due to $\varDelta T^{(4)}_{c}$ is larger than the statistical uncertainty.
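Method 1 above amounts to a first-order Taylor step in the temperature. A sketch of the arithmetic, assuming Eq. (\[Taylor-espansion\]) reduces to $\bar{\varepsilon}^{(4)}_{\pm} = \varepsilon^{(4)}_{c} \pm c^{(4)}_{c}\,\varDelta T^{(4)}_{c}$ (an assumption, but one consistent with all the numbers quoted in the text):

```python
eps_c = -0.99174   # best estimate, Eq. (O4_energy_I_best_result)
c_c   = 4.32       # specific heat at T_c, Eq. (O4_cv_best_results)
dT    = 0.00013    # uncertainty on T_c^(4) = 1.06835(13)

eps_plus  = eps_c + c_c * dT   # -> -0.9911784, i.e. the quoted -0.991178
eps_minus = eps_c - c_c * dT   # -> -0.9923016, i.e. the quoted -0.992302
syst      = c_c * dT           # -> 0.00056 ~ 0.0006, Eq. (O4_energy_II_best_result)
print(eps_plus, eps_minus, syst)
```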
| $L$ | $\varepsilon^{(4)}_{+}(L)$ | $\varepsilon^{(4)}_{-}(L)$ |
|-----|----------------------------|----------------------------|
| 32  | -0.996955(64) | -0.996962(67) |
| 64  | -0.993294(37) | -0.993383(36) |
| 128 | -0.992208(19) | -0.992275(18) |

: [Energy density data $\varepsilon^{(4)}_{+}(L)$ and $\varepsilon^{(4)}_{-}(L)$ obtained with numerical Monte Carlo simulations performed at $T^{(4)}_{+}=1.06848$ and $T^{(4)}_{-}=1.06822$, respectively.]{} \[table:O4\_data\_tc\_piu\_meno\_delta\]

| Fixed parameter | Fit results | $\chi^2/\text{d.o.f.}$ |
|-----------------|-------------|------------------------|
| $D_4$ | $\varepsilon^{(4)}_{+}=-0.99168(3)$, $\varepsilon_{4,+}=-1.67(3)$ | |
| $D_4$ | $\varepsilon^{(4)}_{-}=-0.991755(8)$, $\varepsilon_{4,-}=-1.657(9)$ | |

: [Fitting values of the parameters $\varepsilon^{(4)}_{\pm}$ and $\varepsilon_{4,\pm}$.]{} \[O4\_data\_fit\_delta\_t\]

$n=\infty$, the spherical model {#numericalSphericalModel}
-------------------------------

The spherical model was introduced by Berlin and Kac [@BerlinKac:physrev1952] as an exactly solvable model of a ferromagnet: its Hamiltonian reads $$\label{Ham_sph_model} H^{sph}= - \sum_{\langle i,j \rangle}^N T_i T_j\, ,$$ where the sum runs over all distinct pairs of nearest neighbors on a regular $d$-dimensional hypercubic lattice. At variance with the O$(n)$ models, the “spin variables” $T_i$ are real numbers and their modulus is not fixed to unity: instead, the spherical constraint $$\label{sph_constraint} \sum_{i=1}^{N} T^{2}_{i}=N$$ is imposed, allowing the modulus of the spin variables to fluctuate.
The spherical model is exactly solvable in any spatial dimension $d$ in the thermodynamic limit, both in the canonical and in the microcanonical ensembles: for the canonical solution see e.g. [@Binney:book] and references therein; for the microcanonical solution see [@Kastner:jstat2009]. Despite the long-range nature of the constraint in Eq. (\[sph\_constraint\]), the canonical and the microcanonical descriptions are equivalent, and the model shows a continuous phase transition from a low-energy (temperature) ferromagnetic phase to a high-energy (temperature) paramagnetic phase for all $d\geq 3$ [@Joyce:inDombGreen]. As pointed out in 1968 by H. E. Stanley, the free energy of a class of models described by the Hamiltonian $$\label{Ham_stanley_On} {\mathbb{H}}^{(n)}=-\sum_{\langle i,j\rangle}^N \mathbf{T}^{(n)}_{i} \cdot \mathbf{T}^{(n)}_{j}= -\sum_{\langle i,j\rangle}^{N} \sum_{a = 1}^n T^a_i T^a_j$$ (with $\mathbf{T}^{(n)}_i \equiv (T_i^1,\ldots,T_i^n)$ and $\,|\mathbf{T}_i|^2=n\;\forall i=1,\dots,N$) approaches the free energy of the spherical model (\[Ham\_sph\_model\]) in the $n\rightarrow\infty$ limit [@Stanley:physrev1968]. Moreover, some “critical properties” of ${\mathbb{H}}^{(n)}$, like the value of the critical temperature $T^{(n)}_{c}$ or the value of some critical exponents [@Stanley:prl1968], appear to be monotonic functions of $n$ [^5]. The class of models described by the Hamiltonian in Eq. (\[Ham\_stanley\_On\]) can be mapped onto classical O$(n)$ models defined by Eq.
(\[H-On\]), once the norm of the spins is properly scaled: $${\mathbb{H}}^{(n)}=-\sum_{\langle i,j\rangle}^{N} \mathbf{T}^{(n)}_{i} \cdot \mathbf{T}^{(n)}_{j}= - n \sum_{\langle i,j\rangle}^{N} \mathbf{S}_{i}\cdot \mathbf{S}_{j}= n\; H^{(n)}\, ,$$ so that $$\lim_{n,\; N \rightarrow \infty} \frac{1}{n\;N}{\mathbb{H}}^{(n)} = \lim_{N \rightarrow \infty} \frac{1}{N} H^{(n)} = \lim_{N \rightarrow \infty} \frac{1}{N} H^{sph}.$$ This implies that the thermodynamic properties of the continuous O$(n)$ models described by the Hamiltonian in Eq. (\[H-On\]) converge to those of the spherical model in the $n \rightarrow \infty$ limit. In particular, the discrete set of critical values of the energy density $\{\varepsilon^{(1)}_{c}$, $\varepsilon^{(2)}_{c}$, $\varepsilon^{(3)}_{c}$, $\varepsilon^{(4)}_{c},\dots\}$ should converge to $\varepsilon^{(\infty)}_{c}$ —that is, to the critical energy density of $H^{sph}$— in the $n\rightarrow\infty$ limit. This means that the spherical model has to be considered an O$(\infty)$ model in our analysis of the critical energy densities. The above property holds independently of the spatial dimensionality $d$ of the lattice, hence also in the case $d=3$. In [@Binney:book; @Kastner:jstat2009] an explicit expression for $\varepsilon^{(\infty)}_{c}$ is given: when adapted to our conventions in $d=3$, the result is $$\label{KastnerSMenergy} \varepsilon^{(\infty)}_{c}=-3\;\; \frac{a_3}{1+a_3}\, ,$$ where the coefficient $a_3$ is given by $$\label{coefficient_a3} a_{3}=\int_{[0,\pi]^{3}}\frac{d^{3}k}{\pi^{3}} \; \frac{\sum_{j=1}^{3}\cos{k_{j}}}{3-\sum_{j=1}^{3}\cos{k_{j}}}\, .$$ The coefficient $a_3$ is related to the Watson integral $W_3$ commonly used in the spherical model [@Joyce:inDombGreen; @JoyceZucker:jphysa2001]: some properties of the Watson integrals are recalled in Appendix \[Watson\].
The result for $a_3$ is $$\label{coefficient_a3_analitico} a_3=\frac{\sqrt{3} -1}{32 \pi^3} \left( \Gamma\left( \frac{1}{24} \right) \Gamma\left( \frac{11}{24} \right) \right)^2 -1\, ,$$ where $\Gamma$ denotes the gamma function. Using (\[coefficient\_a3\_analitico\]), the numerical value we get from Eq. (\[KastnerSMenergy\]) is $$\label{KastnerSMenergy2} \varepsilon^{(\infty)}_{c}=-1.0216119\dots$$ and we shall use it as the critical energy density of the O$(\infty)$ model in $d=3$.

Comparison of critical energy densities {#Sec_NumericalTest}
=======================================

The critical energy densities $\varepsilon^{(n)}_{c}$, discussed in the previous Sections for $n=1,2,3,4$ and $\infty$, are collected in Table \[table:energy\_summary\_results\] as a function of $1/n= 1/\infty, 1/4, 1/3, 1/2$ and $1$, together with their derivation method.

| $\frac{1}{n}$ | $\varepsilon^{(n)}_{c}$ | Derivation method |
|---------------|-------------------------|-------------------|
| $\frac{1}{\infty}\equiv 0$ | $-1.0216119\dots$ | Exact solution |
| $\frac{1}{4}$ | $-0.99174\;\pm\;0.00002 \;\pm\;0.0006$ | FSS, this work, Eq. (\[O4\_energy\_final\_best\_result\]) |
| $\frac{1}{3}$ | $-0.989556\;\pm\;0.000010\;\pm\;0.00010$ | FSS, this work, Eq. (\[H\_energy\_final\_best\_result\]) |
| $\frac{1}{2}$ | $-0.98904\;\pm\;0.00003\;\pm\;0.0003$ | FSS, this work, Eq. (\[XY\_energy\_best\_result\]) |
| $1$ | $-0.99063 \pm 0.00004$ | FSS [@HasenbuschPinn:jphysa1998] |

: [Critical energy densities $\varepsilon^{(n)}_{c}$ with their derivation method for $n=1,2,3,4$ and $n=\infty$.]{} \[table:energy\_summary\_results\]

Data in Table \[table:energy\_summary\_results\] can be interpolated to obtain an estimate of $\varepsilon^{(n)}_{c}$ for any $n$.
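The exact spherical-model entry in the table can be checked directly from Eqs. (\[KastnerSMenergy\]) and (\[coefficient\_a3\_analitico\]); a minimal numerical check:

```python
import math

# Closed-form coefficient a_3, Eq. (coefficient_a3_analitico):
# a_3 = (sqrt(3) - 1)/(32 pi^3) * (Gamma(1/24) * Gamma(11/24))^2 - 1
a3 = (math.sqrt(3) - 1) / (32 * math.pi ** 3) \
     * (math.gamma(1 / 24) * math.gamma(11 / 24)) ** 2 - 1

# Critical energy density of the O(infinity) model in d = 3,
# Eq. (KastnerSMenergy): eps = -3 a_3 / (1 + a_3)
eps_inf = -3 * a3 / (1 + a3)

print(a3, eps_inf)  # eps_inf ≈ -1.0216119, Eq. (KastnerSMenergy2)
```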
To make such an interpolation more reliable, we exploit a theoretical result by Campostrini et al. [@CampostriniEtAl:npb1996]. These authors performed an analysis of the four-point renormalized coupling constant in classical O$(n)$ models. Interestingly, an important byproduct of their study was an estimate of the critical energy density $\varepsilon^{(n)}_{c}$ for large values of $n$, i.e., at first order in a $1/n$ expansion. They found $$\label{CPRV} \varepsilon^{(n)}_{c}=\varepsilon^{(\infty)}_{c}+b_{1}\,\frac{1}{n}+O\left(\frac{1}{n^{2}}\right)\, ,$$ and the numerical result for the coefficient $b_1$ given in [@CampostriniEtAl:npb1996], once adapted to our conventions, is $b_1 = 0.21$. The accuracy of $b_1$ affects the accuracy of the interpolation, as we shall see below; hence we repeated the numerical calculation of $b_1$, increasing its precision. As reported in Appendix \[Watson\], we obtained $b_1=0.2182(8)$. This result dictates how the interpolation of the data in Table \[table:energy\_summary\_results\] has to be performed: $\varepsilon_{c}(n)$ should be a polynomial function of $\frac{1}{n}$ in which the zero-order term is the critical energy density $\varepsilon^{(\infty)}_{c}$ of the spherical model as given in Eq. (\[KastnerSMenergy2\]), and the coefficient of the linear term is fixed to $b_1$. Using these constraints and the data of Table \[table:energy\_summary\_results\], we numerically computed the interpolating function $$\label{polynomial-fit} \varepsilon_{c}({n}) = ~\varepsilon^{(\infty)}_{c} + b_1 \,\frac{1}{n} +b_{2} \, \frac{1}{n^{2}} +b_{3}\, \frac{1}{n^{3}} +b_{4}\, \frac{1}{n^{4}}\, ,$$ finding $b_2=-0.4762$, $b_3=0.3105$ and $b_4=0.0593$. In the interpolation procedure we did not consider the point $\{ 1, \varepsilon^{(1)}_{c} \}$, since our interest is in the comparison of $\varepsilon^{(n \geq 2)}_{c}$ and $\varepsilon^{(1)}_{c}$ for $\frac{1}{n}\in\left[ 0,\frac{1}{2} \right]$.
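With $\varepsilon^{(\infty)}_{c}$ and $b_1$ fixed, the three data points $n=2,3,4$ determine $b_2$, $b_3$, $b_4$ through a $3\times 3$ linear system. A minimal sketch using the central values only (the quoted uncertainties are ignored here, which is why the last digits differ slightly from the text):

```python
import numpy as np

# Constraints: constant term fixed to the exact spherical-model value,
# linear term fixed to b_1 = 0.2182 (Appendix).
eps_inf = -1.0216119
b1 = 0.2182

# Central values from Table [table:energy_summary_results], n = 2, 3, 4
ns = np.array([2.0, 3.0, 4.0])
eps_c = np.array([-0.98904, -0.989556, -0.99174])

# eps_c(n) - eps_inf - b1/n = b2/n^2 + b3/n^3 + b4/n^4:
# three points, three unknowns -> exact linear solve.
A = np.column_stack([ns ** -2, ns ** -3, ns ** -4])
rhs = eps_c - eps_inf - b1 / ns
b2, b3, b4 = np.linalg.solve(A, rhs)
print(b2, b3, b4)  # close to the quoted b2 = -0.4762, b3 = 0.3105, b4 = 0.0593
```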
Moreover, the function $\varepsilon_{c}(n)$ has to be computed with the lowest-order polynomial possible. If we force $\varepsilon_{c}(n)$ to pass through $\{1,\varepsilon^{(1)}_{c}\}$, the next-order term ($b_{5}\,\frac{1}{n^{5}}$) becomes necessary, although no useful information on $\varepsilon^{(n)}_{c}$ is present in the range $1/n\in[1/2,1]$. As a further check, we also performed a fit of the data presented in Table \[table:energy\_summary\_results\] (without the point $\{1,\varepsilon^{(1)}_{c}\}$) with a fourth-order polynomial, obtaining excellent agreement with the interpolation. However, the value of $b_1$ is known only to finite precision, and this affects the reliability of the numerical values of the coefficients $b_2$, $b_3$ and $b_4$. To estimate the accuracy of the coefficients of the interpolation formula, we thus repeated the procedure using $b_1 = 0.2190$ and $b_1 = 0.2174$, i.e., the upper and lower bounds for $b_1$, respectively. We can summarize the results as follows: the interpolation formula for the critical energy density is given by Eq. (\[polynomial-fit\]) with $\varepsilon^{(\infty)}_{c} = -1.0216119\dots$, $b_1 = 0.2182(8)$, $b_2 = -0.472(7)$, $b_3 = 0.31(2)$ and $b_4 = 0.06(2)$. In Fig. \[final-energy-plot\] we plot the following quantities: the interpolating curve given by Eq. (\[polynomial-fit\]) with the above coefficients (dashed blue line), the first-order approximation given by Eq. (\[CPRV\]) (solid green line), the horizontal line $\varepsilon^{(n)}_{c}=\varepsilon^{(1)}_{c}$ corresponding to the critical energy density of the Ising model (dot-dashed black line), and, with solid symbols, the critical energy densities $\varepsilon^{(1)}_{c}$, $\varepsilon^{(2)}_{c}$ (purple square), $\varepsilon^{(3)}_{c}$, $\varepsilon^{(4)}_{c}$ and $\varepsilon^{(\infty)}_{c}$ (blue down-pointing triangle).
For $1/n=1/2, 1/3, 1/4$ the uncertainties on the points are given by the systematic uncertainties shown in Table \[table:energy\_summary\_results\] and are hardly visible on the plot, being smaller than the symbols’ size. Simulation data for $n$ larger than 4 are not available. We therefore also report on the plot, with open symbols, the values of $\varepsilon^{(n)}_{c}$ for $n=8,16,24,32,48$ obtained in Ref. [@CampostriniEtAl:npb1996] with a strong-coupling expansion. Although these data are less accurate than the simulation data, they are in very good agreement with the interpolation formula. ![Critical energy densities $\varepsilon^{(n)}_{c}$ of 3-$d$ O$(n)$ models as a function of $1/n$: $\varepsilon^{(1)}_{c}$ (solid blue circle), $\varepsilon^{(2)}_{c}$ (solid purple square), $\varepsilon^{(3)}_{c}$ (solid yellow diamond), $\varepsilon^{(4)}_{c}$ (solid green up-pointing triangle) and $\varepsilon^{(\infty)}_{c}$ (solid blue down-pointing triangle) as given in Table \[table:energy\_summary\_results\]; uncertainties are smaller than or of the same order as the symbol sizes. The dashed blue line is the interpolating curve $\varepsilon_{c}(1/n)$ given in Eq. (\[polynomial-fit\]) with the coefficients given in the text, the solid green line represents the $\frac{1}{n}$ expansion up to first order as given in Eq. (\[CPRV\]), and the horizontal dot-dashed black line is the line of equation $\varepsilon_{c}(n)=\varepsilon^{(1)}_{c}$. Open symbols are the values of the critical energies found by strong-coupling expansion in Ref. [@CampostriniEtAl:npb1996] for $n = 8$ (open purple down-pointing triangle), $n = 16$ (open purple circle), $n = 24$ (open yellow square), $n = 32$ (open green diamond), and $n = 48$ (open blue up-pointing triangle).[]{data-label="final-energy-plot"}](plotfinale.eps){width="10cm"} The interpolating curve provides a practical test for the reliability of the approximation $\varepsilon^{(n)}_{c}\simeq \varepsilon^{(1)}_{c}$ discussed at the beginning. Indeed, assuming that Eq.
(\[polynomial-fit\]) yields good estimates of the values of $\varepsilon^{(n)}_{c}$, for any $n\in[2,\infty]$ the discrepancy between $\varepsilon^{(n)}_{c}$ and $\varepsilon^{(1)}_{c}$ can be easily quantified as $|\varepsilon_{c}(1/n)-\varepsilon^{(1)}_{c}|$. In particular: for $1/n\in[1,1/8)$, that is up to $n=8$, the error committed by replacing $\varepsilon^{(n)}_{c}$ with $\varepsilon^{(1)}_{c}$ is about $1\%$; for $1/n\in[1/8,1/18)$, that is up to $n=18$, the error is about $2\%$; for $1/n\in[1/18,0]$, that is up to $n=\infty$, the error is about $3\%$, and in any case smaller than $|\varepsilon^{(\infty)}_{c}-\varepsilon^{(1)}_{c}|\simeq 0.031$. We checked that the same conclusion is obtained by performing a fit of the form (\[polynomial-fit\]) using also the data for $\varepsilon^{(n)}_{c}$ with $n=8,16,24,32,48$ reported in [@CampostriniEtAl:npb1996] (and of course the data of Table \[table:energy\_summary\_results\]). Concluding remarks {#ConclusionsNumericalTest} ================== We have performed a numerical analysis of the $n$-dependence of the critical energy density of three-dimensional classical O$(n)$ models defined on regular cubic lattices and with nearest-neighbor ferromagnetic interactions: our results are summarized in Table \[table:energy\_summary\_results\]. For $n=2$ and $3$, our results for the critical energy densities —Eqs. (\[XY\_energy\_I\_best\_result\]) and (\[H\_energy\_I\_best\_result\])— improved the accuracy of the numerical estimates present in the literature. The critical energy densities of classical O$(n)$ models with $n=2,3$ and $4$ have been evaluated with a finite-size scaling (FSS) analysis together with their statistical and systematic uncertainties due to the FSS procedure and to the uncertainty on the critical temperature, respectively; the systematic uncertainties turned out to be much larger (an order of magnitude) than the statistical ones for every value of $n$. 
A possible way to further reduce these systematic uncertainties in future simulations would be to compute the critical temperature $T_c^{(n)}(L)$ at size $L$ [@Binder:book], vary $L$, and then proceed to the FSS analysis. Interpolating the data of $\varepsilon^{(n)}_{c}$ for $n=2,3,4$ and $n=\infty$, a polynomial function $\varepsilon_{c}(n)$ has been computed to estimate the critical energy density at any $n$. This function exploits the knowledge of the first-order term in the $1/n$-expansion of the critical energy density of O$(n)$ models computed in [@CampostriniEtAl:npb1996], and yields a practical way to quantify the error committed by replacing $\varepsilon^{(n)}_{c}$ with $\varepsilon^{(1)}_{c}$ for a generic O$(n)$ model. The latter is less than $1\%$ if $n\in[2,8)$, between $1\%$ and $2\%$ if $n\in[8,18]$ and less than $3\%$ for all larger $n$ up to $n=\infty$. The above analysis concludes the discussion started in [@prl2011] as to the values of the critical energy densities of classical O$(n)$ models with ferromagnetic interactions defined on regular cubic lattices in $d=3$, showing that the critical energy densities of these models are indeed very close to each other and quantifying their differences. Clearly this result alone does not mean that the rather crude approximations on the density of states put forward in [@prl2011] are reliable. However, as already recalled in the Introduction, such approximations can be controlled and a similar relation can be derived for two exactly solvable models, the mean-field and 1-$d$ $XY$ models [@jstat2012], and similar considerations can be effectively used to construct analytical or semi-analytical estimates of the density of states of O$(n)$ models that compare well with simulation data for $n=2$ in $d=2$ [@analyticalpaper]. Finally, a comment is in order on the critical energy densities for three-dimensional O$(n)$ models found in this paper. As briefly discussed in Sec.
\[numericalSphericalModel\], a monotonic behavior in $n$ is supposed to hold for some thermodynamic functions of classical O$(n)$ models defined on particular lattice geometries [@Stanley:prl1968]. It is unclear whether such considerations also apply to $\varepsilon^{(n)}_{c}$ of O$(n)$ models defined on regular cubic lattices. The interpolating function in Eq. (\[polynomial-fit\]) is a monotonically increasing function of $\frac{1}{n}$ from $n =\infty$ up to $n=2$, but this is no longer true for $n=1$, since —within the estimated errors— it is $\varepsilon^{(1)}_{c} < \varepsilon^{(2)}_{c}$. Monotonicity could be restored by admitting a higher value $\varepsilon^{(1)\prime}_{c}$ for $\varepsilon^{(1)}_{c}$, such that $\varepsilon^{(1)\prime}_{c}-\varepsilon^{(1)}_{c}\simeq10^{-3}$. The accuracy of the numerical value of $\varepsilon^{(1)}_{c}$ in Eq. (\[Ising\_best\_energy\]), derived in [@HasenbuschPinn:jphysa1998], clearly does not allow such a higher value of $\varepsilon^{(1)}_{c}$. Hence we conclude that monotonicity fails for $n=1$, unless the uncertainty quoted in [@HasenbuschPinn:jphysa1998] is underestimated. However, a possible increase of $10^{-3}$ in $\varepsilon^{(1)}_{c}$ would affect neither the considerations made at the end of Sec. \[Sec\_NumericalTest\] nor the form of Eq. (\[polynomial-fit\]). Discussions with E. Vicari and G. Gori are gratefully acknowledged.
Some properties of the Watson integrals and estimate of $b_1$ {#Watson} ============================================================= The Watson integrals appear in the theory of the spherical model [@Joyce:inDombGreen] and are related to the generalized Watson integrals $$W(d,z)=\frac{1}{\pi^d} \, \int_0^\pi \cdots \int_0^\pi \, \frac{dk_1 \cdots dk_d}{1-\frac{1}{dz} \left( \cos{k_1} + \cdots + \cos{k_d} \right)}\, .$$ The Watson integral in dimension $d$ is defined as $$\label{Watson_Wd} W_d=\frac{1}{\pi^d} \, \int_0^\pi \cdots \int_0^\pi \, \frac{dk_1 \cdots dk_d} {d-\left( \cos{k_1} + \cdots + \cos{k_d} \right)}\, ,$$ so that $$d\,W_d=W(d,1)\, .$$ Using the notation $$\label{Watson_notation} f_d(\mathbf{k}) \equiv d- \sum_{\alpha=1}^d \cos{k_\alpha}\,$$ with $\mathbf{k} = (k_1,\ldots,k_d)$, the Watson integral $W_d$ can be compactly written in the form $$\label{Watson_notation_W} W_d=\int_{[0,\pi]^d} \, \frac{d^dk}{\pi^d} \, \frac{1}{f_d(\mathbf{k})}\, .$$ The coefficient $a_d$ defined in Eq. (\[coefficient\_a3\]) for $d=3$ reads in dimension $d$ $$a_d=\int_{[0,\pi]^d} \, \frac{d^dk}{\pi^d} \, \frac{\sum_{\alpha=1}^d \cos{k_\alpha}}{f_d(\mathbf{k})} \, :$$ $a_d$ is related to the Watson integral $W_d$ according to the relation $$a_d=d\,W_d-1\, .$$ A major simplification in the evaluation of Watson integrals is obtained by using the identity [@Maradudin:book] $$\label{identity} \frac{1}{\lambda}=\int_0^{\infty} \, e^{-\lambda t} \, dt\, :$$ by putting $\lambda=f_d(\mathbf{k})=d-\sum_{\alpha=1}^d \cos{k_\alpha}$ in Eq. (\[Watson\_Wd\]) and integrating over the $k_\alpha$’s one gets the *single* integral $$\label{identity_Wd} W_d=\int_0^{\infty} \, e^{-d t} \, [I_0(t)]^d \, dt\, ,$$ where $I_0(t)=(1/\pi) \, \int_0^\pi e^{t \cos{k}} \, dk$ is a modified Bessel function of the first kind. 
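The single-integral representation in Eq. (\[identity\_Wd\]) makes a numerical evaluation of $W_3$ straightforward. A sketch: since the integrand equals the cube of the exponentially scaled Bessel function and decays only as $t^{-3/2}$, the tail beyond a cutoff $T$ is added analytically from the asymptotics $I_0(t)e^{-t}\sim(2\pi t)^{-1/2}(1+1/(8t))$, and the closed form in Eq. (\[identity\_Wd\_d3\]) serves as a cross-check:

```python
import math
from scipy.integrate import quad
from scipy.special import i0e  # exponentially scaled Bessel: i0e(t) = exp(-t) * I_0(t)

# W_3 = int_0^inf exp(-3t) I_0(t)^3 dt  (Eq. (identity_Wd) with d = 3);
# the integrand is exactly i0e(t)**3, so we integrate up to T and add the
# asymptotic tail (2*pi)**(-3/2) * (2/sqrt(T) + T**(-3/2)/4).
T = 200.0
head, _ = quad(lambda t: i0e(t) ** 3, 0.0, T, limit=200)
tail = (2.0 * math.pi) ** -1.5 * (2.0 / math.sqrt(T) + 0.25 * T ** -1.5)
W3_numeric = head + tail

# Closed form of Eq. (identity_Wd_d3)
W3_exact = (math.sqrt(3.0) - 1.0) / (96.0 * math.pi ** 3) \
           * (math.gamma(1.0 / 24.0) * math.gamma(11.0 / 24.0)) ** 2

print(W3_numeric, W3_exact)  # both ≈ 0.5054620
```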
In $d=3$ it is possible to write $W_d$ in terms of the gamma function [@Watson:QJMO1939; @Joyce:jphysa1972; @BorweinZucker:IMA1992] as $$\label{identity_Wd_d3} W_3=\frac{\sqrt{3} -1}{96 \pi^3} \left( \Gamma\left( \frac{1}{24} \right) \Gamma\left( \frac{11}{24} \right) \right)^2 \, ,$$ from which Eq. (\[coefficient\_a3\_analitico\]) follows. The Watson integral in $d=3$ and its generalizations also enter the coefficients of the $1/n$ expansion [@CampostriniEtAl:npb1996; @MullerRuhl:AnnPhys1986]: in particular, the coefficient $b_1$ defined in the expression (\[CPRV\]) for the critical energy density reads [@CampostriniEtAl:npb1996] $$\label{b1_app} b_1= 2 \left( \frac{b_1^{(a)}}{4} - \frac{1}{W_3} - \frac {b_1^{(b)}}{\left( W_3 \right)^2} \right) \, ,$$ where the coefficients $b_1^{(a)},b_1^{(b)}$ are computed as integrals of the function $\Delta(\mathbf{q})$ defined as $$\label{Delta_app} \frac{1}{\Delta(\mathbf{q})} = \frac{1}{8} \int_{[-\pi,\pi]^3} \frac{d^3k}{(2\pi)^3} \, \frac{1} {f_3\left( \mathbf{k} \right) \, f_3\left( \mathbf{k+q} \right)} \, ,$$ with $\mathbf{q}$ belonging to the first Brillouin zone ($\mathbf{q} \in [-\pi,\pi]^3$) and $f_3\left( \mathbf{k} \right)= 3-\sum_{\alpha=x,y,z} \cos{k_\alpha}$.
We observe that using twice the identity (\[identity\]) one can formally reduce the integral in (\[Delta\_app\]) to a double integral as $$\label{Delta_app_twice} \frac{1}{\Delta(\mathbf{q})} = \frac{1}{8} \int_0^\infty dt_1 \, e^{-3t_1} \int_0^\infty dt_2 \, e^{-3t_2} \, \left( \prod_{\alpha=x,y,z} {\cal I} \left(q_\alpha;t_1,t_2 \right) \right) \,$$ \[similarly to the re-writing (\[identity\_Wd\]) for $W_d$\] with $$\label{Delta_app_twice_I} {\cal I} \left(q;t_1,t_2 \right) = \int_{-\pi}^{\pi} \, \frac{dk}{2\pi} \, e^{\,t_1\,\cos{k}+t_2\,\cos{(k+q)}}\, .$$ The expressions for $b_1^{(a)}$ and $b_1^{(b)}$ are respectively given by $$\label{b1_a} b_1^{(a)}=\frac{1}{2} \, \int_{[-\pi,\pi]^3} \frac{d^3q}{(2\pi)^3} \, \frac{\Delta(\mathbf{q})} {f_3\left( \mathbf{q} \right) } \,$$ and $$\label{b1_b} b_1^{(b)}=-\frac{1}{16} \, \int_{[-\pi,\pi]^3} \frac{d^3q}{(2\pi)^3} \, \Delta(\mathbf{q}) \int_{[-\pi,\pi]^3} \frac{d^3p}{(2\pi)^3} \frac{1} {\left( f_3\left( \mathbf{p} \right) \right)^2} \left[ \frac{1}{f_3\left( \mathbf{p+q} \right)} + \frac{1}{f_3\left( \mathbf{p-q} \right)} - \frac{2}{f_3\left( \mathbf{q} \right)} \right]\, .$$ Numerically we obtained $b_1^{(a)}=6.49628(1)$ and $b_1^{(b)}=-0.1184(1)$, from which $b_1=0.2182(8)$. [^1]: The relation cannot be exact, at least in the form proposed in [@prl2011], because it would imply wrong —and $n$-independent— values of the critical exponent $\alpha$. Nevertheless Eq. (\[omega\_appr\]) yields the correct sign of $\alpha$, that is, correctly predicts a cusp in the specific heat at criticality and not a divergence: see Refs. [@prl2011] and especially [@jstat2012] for a more complete discussion on the problem. [^2]: But in the case $n\rightarrow\infty$, that will be discussed in Sec. \[numericalSphericalModel\]. [^3]: For $c^{(n)}_{c}$ only the statistical error $\varDelta c^{(n),stat}_{c}$ will be computed since this quantity is only used for the computation of $\varDelta \varepsilon^{(n),syst}_{c}$. [^4]: Notice that Eqs. 
(\[energy\_FSS\]) and (\[energy\_FSS\_out\]) hold for $T=T^{(n)}_{c}$; however, since $\frac{\varDelta T^{(n)}_{c}}{T^{(n)}_{c}}\sim 10^{-5}$ for the models considered, we assume Eq. (\[energy\_FSS\_out\]) to be valid in the whole range $T\in\left[T^{(n)}_{c}-\varDelta T^{(n)}_{c}, T^{(n)}_{c}+\varDelta T^{(n)}_{c}\right]$. [^5]: In [@Stanley:prl1968] the monotonicity is explicitly shown for the above quantities in $d=1,2,3$ and for particular lattice geometries, i.e., spin chains, triangular lattices and fcc lattices. These results are supposed to hold also in more general cases, but the generalization is not straightforward. In particular, it is not immediately clear whether the monotonicity is expected to hold also for the energy density $\varepsilon^{(n)}_{c}$ of models defined by Eq. (\[H-On\]) on regular cubic lattices in $d=3$.
--- abstract: 'Consider a random medium consisting of points randomly distributed so that there is no correlation among the distances. This is the random link model, which is the high-dimensionality limit (mean-field approximation) of the Euclidean random point structure. In the random link model, at discrete time steps, the walker moves to the nearest site that has not been visited in the last $\mu$ steps (memory), producing a deterministic partially self-avoiding walk (the tourist walk). We have obtained analytically the distribution of the number $n$ of points explored by a walker with memory $\mu = 2$, as well as the joint distribution of transient time and period. This result explains the abrupt change in the exploratory behavior between the cases $\mu = 1$ (memoryless, driven by extremal statistics) and $\mu = 2$ (with memory, driven by combinatorial statistics). In the $\mu = 1$ case, the mean number of newly visited points in the thermodynamic limit $(N \gg 1)$ is just $\langle n \rangle = e = 2.72 \ldots$, while in the $\mu = 2$ case the mean number $\langle n \rangle$ of visited points is proportional to $N^{1/2}$. This result also allows us to establish an equivalence between the random link model with $\mu=2$ and the random map (uncorrelated back and forth distances) with $\mu=0$, and to explain the drastic change between the cases of null and non-null transient times.'
author: - Augusto Sangaletti - Souto title: 'The influence of memory in deterministic walks in random media: analytical calculation within a mean field approximation' --- Introduction ============ Although not as thoroughly studied as random walks in disordered media [@fisher:1984] and complex media [@metzler_2000], which constitute an interesting problem for Physics, deterministic walks in regular [@grassberger:92; @gale:95] and disordered media [@bunimovich:2004; @lam_2006; @santos:061114] present very interesting results, with applications to foraging [@boyer_2004; @boyer_2005; @boyer_2006]. Memory in random walks has the effect of changing the behavior of the Gaussian displacement distribution [@cressoni:070603]. Here, we are interested in understanding fundamental aspects of a partially self-avoiding deterministic walk algorithm, known as the tourist walk (TW) [@lima_prl2001; @stanley_2001; @kinouchi:1:2002]. These walks, which are described below, have been applied to characterize thesauri [@kinouchi:1:2002], as a pattern recognition algorithm [@campiteli_2006] and in image analysis [@bruno_2006; @backes_2006]. Consider $N$ points (sites, cities) randomly distributed inside a $d$-dimensional hypercube with unitary edges. The distance $D_{i, j}$ between any two points $s_i$ and $s_j$ is calculated via the Euclidean metric. The walker leaves a given point and moves obeying the deterministic rule of going to the nearest point (shortest Euclidean distance) that has not been visited in the $\mu$ preceding steps. This rule produces trajectories with an initial transient part of $t$ steps and a cycle of $p$ steps as a final periodic part. Once trapped in a cycle, the walker does not visit new points any longer. Short transient times and short-period cycles limit exploration of the medium by the walker. 
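The deterministic rule above can be sketched directly in code. The following is a minimal illustrative simulation (the function names `tourist_walk` and `euclidean_points` are our own, not from the paper); it detects the cycle by noting that the walk's future is fully determined by the window of the last $\max(\mu, 1)$ visited sites, so a repeated window marks the attractor.

```python
import random

def tourist_walk(dist, start=0, mu=1):
    """Deterministic tourist walk with memory mu on a distance matrix.

    At each step the walker moves to the nearest site that was not
    visited in the last mu steps.  The walk is deterministic, so it
    always ends in a cycle; the cycle is detected when the state (the
    window of the last max(mu, 1) sites, which determines the future)
    repeats.  Returns (transient t, period p, distinct sites n)."""
    n_sites = len(dist)
    path = [start]
    first_seen = {}
    while True:
        state = tuple(path[-max(mu, 1):])
        if state in first_seen:
            t = first_seen[state]
            p = len(path) - 1 - t
            return t, p, len(set(path))
        first_seen[state] = len(path) - 1
        forbidden = set(path[-mu:]) if mu > 0 else set()
        cur = path[-1]
        nxt = min((j for j in range(n_sites) if j not in forbidden),
                  key=lambda j: dist[cur][j])
        path.append(nxt)

def euclidean_points(n, d, rng):
    """Distance matrix of n random points in the unit d-hypercube."""
    pts = [[rng.random() for _ in range(d)] for _ in range(n)]
    return [[sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 for q in pts]
            for p in pts]
```

With $\mu=0$ the walker stays put ($t=0$, $p=1$), with $\mu=1$ it always ends trapped in a mutually-nearest pair ($p=2$), and with $\mu=2$ the period satisfies $p \ge \mu + 1 = 3$, in line with the trajectories described above.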
Analytical results [@tercariol_2007_physa] could be obtained for (i) memoryless walkers in the deterministic [@tercariol_2005] and stochastic [@risaugusman:1:2003; @martinez:1:2004] versions of the TW and for (ii) deterministic walks with arbitrary memory in one-dimensional systems [@tercariol_2007]. Here we consider the memory effect in deterministic walks in a mean field approximation. The deterministic TW with memory $\mu = 0$ is trivial, since the walker does not move at each time step, so that the transient-time/period joint distribution is simply: $S^{(N)}_{0, d}(t, p) = \delta_{t, 0} \delta_{p, 1}$, where $\delta_{i, j}$ is the Kronecker delta. With memory $\mu = 1$, the walker must leave the current site at each time step. The joint distribution $S^{(N)}_{1, d}(t, p)$ is obtained considering the trajectories of a tourist leaving from all sites of a given map, and statistics is performed over different realizations (maps). For $N \gg 1$, the transient-time/period joint distribution has been obtained analytically for arbitrary dimensionality [@tercariol_2005]: $S^{(\infty)}_{1, d}(t, p) = [(t + I_d^{-1}) \Gamma(1 + I_d^{-1})/\Gamma(t + p + I_d^{-1})] \delta_{p, 2}$, where $\Gamma(z)$ is the gamma function and $I_d = I_{1/4}[1/2,(d+1)/2]$ is the normalized incomplete beta function. This case does not lead to exploration of the random medium, since after a short transient the tourist gets trapped in pairs of cities that are mutually nearest neighbors. Interesting phenomena occur when the memory is greater than or equal to two ($\mu \ge 2$). In this case, the cycle distribution is no longer peaked at $p_{min} = \mu + 1$, but presents a whole spectrum of cycles with period $p \ge p_{min}$, with possible power-law decay [@lima_prl2001; @kinouchi:1:2002], which favors exploration of the medium by the walker. The elucidation of this intriguing broadening of the cycle period distribution is our main objective. 
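As a sanity check of the $\mu = 1$ formula quoted above, note that for $d=1$ the normalized incomplete beta function evaluates exactly to $I_1 = I_{1/4}[1/2, 1] = (1/4)^{1/2} = 1/2$, so $I_1^{-1} = 2$ and the sum over $t$ telescopes to 1. A short numerical sketch (our own check, not from the paper):

```python
from math import gamma

# For d = 1: I_1 = I_{1/4}[1/2, 1] = (1/4)^{1/2} = 1/2 exactly, so a = 1/I_1 = 2.
a = 2.0

def S_inf(t):
    """S^(infty)_{1,d}(t, p = 2) = (t + a) Gamma(1 + a) / Gamma(t + 2 + a), d = 1."""
    return (t + a) * gamma(1 + a) / gamma(t + 2 + a)

# With g(t) = Gamma(1 + a)/Gamma(t + 1 + a) each term equals g(t) - g(t + 1),
# so the sum over t telescopes to g(0) - g(infinity) = 1.
total = sum(S_inf(t) for t in range(100))
print(total)  # -> 1.0 up to floating-point rounding
```

The terms decay factorially, so truncating the sum at $t = 100$ (which also keeps `math.gamma` far from overflow) loses a negligible tail.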
As the medium dimensionality $d$ increases, the correlations between the distances $D_{i, j}$ become weaker and weaker, so that, in the high dimensionality limit ($d \rightarrow \infty$), the distances can be considered independent random variables, uniformly distributed in the interval $[0, 1]$ [@mezard:1986; @percus:1996; @percus:1997; @percus:1998; @percus:1999]. This is the mean field model named Random Link (RL), where two Euclidean constraints still remain: (i) the distance from a point to itself is null, $D_{i, i} = 0$, and (ii) the forward and backward distances are equal, $D_{i, j} = D_{j, i}$. Breaking these constraints leads to the Random Map model (RM) [@derrida:2:1997], which is a mean field approximation for Kauffman’s model [@kauffman:1969]. The neighborhood statistics for these models have been analytically studied in Ref. . In this paper, we obtain analytical results for the TW with memory $\mu = 2$ in the $d \to \infty$ medium, i.e., the RL approximation. These results enable us to explain the main mechanism which makes the $\mu = 1$ and $\mu \ge 2$ situations so distinct. They also permit us to establish a relationship between the mean field RL and RM models. Walks with memory $\mu=2$ in the symmetric independent random distance case (RL model) are equivalent to memoryless ($\mu=0$) walks in the asymmetric independent random distance case (RM model), which has already been solved in Ref. . Through this relationship between the RL and RM models, we show that the decay of the cycle period distribution in the RL model is a power law $\propto p^{-1}$. We are also able to explain the reason for the numerically observed abrupt change in the transient/period joint distribution for null transient $t = 0$. The presentation of these results is briefly sketched in the following. In Sec. 
\[Sec:DistrTildeN\], we calculate the probability $\tilde{S}_{2, rl}^{(N)}(\tilde{n})$ for the tourist, with memory $\mu=2$, to visit $\tilde{n}$ distinct sites before the first passage to any already visited site, walking on the RL model with $N$ sites. We start by calculating the complementary cumulative distribution $\tilde{F}_{2, rl}^{(N)}(\tilde{n})$ (upper-tail distribution). Next, through an analogy with the geometric distribution, we obtain the revisit $\tilde{p}_{2, rl}^{(N)}(j)$ (first passage) and exploration $\tilde{q}_{2, rl}^{(N)}(j)$ probabilities. Using an alternative derivation, we obtain simpler expressions for these probabilities, which lead to a closed analytical expression for $\tilde{F}_{2, rl}^{(N)}(\tilde{n})$. In Sec. \[Sec:DistrN\], we show that the probability for the tourist to be trapped into a cycle when revisiting a site is 2/3, which is counterintuitive. This result (combined with the previous ones) allows us to obtain the complementary cumulative distribution $F_{2, rl}^{(N)}(n)$ for the total number $n$ of visited sites (until the walker enters an attractor). In Sec. \[Sec:JointDistr\], we obtain the joint distribution $S_{2, rl}^{(N)}(t, p)$ of transient time $t$ and cycle period $p$ and show the drastic difference between the $t=0$ and $t \ne 0$ cases. Final remarks are presented and future studies are proposed in Sec. \[Sec:Conclusion\]. Distribution for the number of explored sites before the first passage (revisit) {#Sec:DistrTildeN} ================================================================================ Consider that the tourist, who performs a walk with memory $\mu=2$ on the RL model with $N$ points, has visited $\tilde{n} \ge 3 = \mu+1 = \tilde{n}_{min}$ distinct sites and then revisits one of these sites. 
Aiming to obtain the distribution $\tilde{S}_{2, rl}^{(N)}(\tilde{n})$ of the number $\tilde{n}$ of sites visited before the first passage, we start by calculating the complementary cumulative (upper-tail) distribution $$\begin{aligned} \tilde{F}_{2, rl}^{(N)}(\tilde{n}) = \sum_{k=\tilde{n}}^N \tilde{S}_{2, rl}^{(N)}(k)\end{aligned}$$ i.e., the probability for the tourist to explore at least $\tilde{n}$ distinct sites before the first revisit. In the schema of Fig. \[Fig:Trajectory\], the tourist leaves from a given site $s_1$ (first step, $j=1$) and follows the trajectory $s_1$, $s_2$, …, $s_{\tilde{n}}$, exploring $\tilde{n}=9$ distinct sites, with no revisit. For $1 \le i \le \tilde{n}-1$, let us denote - $x_i$ the distance between the consecutive sites $s_i$ and $s_{i+1}$ in the trajectory (thick continuous lines of Fig. \[Fig:Trajectory\]), - $y_{i, k}$ the distances between the site $s_i$ in the trajectory and other sites outside the trajectory (thin continuous lines of Fig. \[Fig:Trajectory\]), - $z_{i, k}$ the distance between the non-consecutive sites $s_i$ and $s_k$ in the trajectory (slashed lines of Fig. \[Fig:Trajectory\]). By definition of the RL model, all these distances $x_i$, $y_{i, k}$ and $z_{i, k}$ are uniformly distributed in the interval $[0, 1]$. The conditions for the tourist to follow the trajectory $s_1$, $s_2$, …, $s_{\tilde{n}}$ in the first $\tilde{n}$ steps are 1. in the case $\mu=1$ (already solved in Ref. ), the distances $x_i$ must obey the relation $x_{\tilde{n}-1} < x_{\tilde{n}-2} < \cdots < x_1$, since the tourist stops exploring new sites when $x_{i+1} > x_i$, giving rise to a cycle of period $p=2$. For the case $\mu=2$ addressed here, however, each distance $x_i$ may vary without restriction in the interval $[0, 1]$, because the memory $\mu=2$ forbids the tourist to move backward from $s_{i+1}$ to $s_i$ (even if $x_{i+1}>x_i$). 2. 
when the tourist is about to walk the distance $x_i$ (and move from $s_i$ to $s_{i+1}$), there exist $N-i$ unexplored sites at his/her disposal. 3. for each site $s_i$, all $N-(\tilde{n}-1)$ distances $y_{i, k}$ must be greater than $x_i$. The probability for this to occur is $\left[ \int_{x_i}^1 \mbox{d}y_{i, k} \right]^{N-\tilde{n}+1} = (1-x_i)^{N-\tilde{n}+1}$. The only exception is the site $s_{\tilde{n}-1}$, which has $N-\tilde{n}$ distances $y_{\tilde{n}-1, k}$ connected to it (see Fig. \[Fig:Trajectory\], where $s_{\tilde{n}-1}$ corresponds to $s_8$). 4. to avoid shortcuts and revisits, each distance $z_{i, k}$ must be greater than both $x_i$ and $x_k$. These conditions lead to the following chained integrals: $$\begin{aligned} \tilde{F}_{2, rl}^{(N)}(\tilde{n}) & = & \prod_{i=1}^{\tilde{n}-2} \int_0^1 \mbox{d}x_i(N-i)(1-x_i)^{N-\tilde{n}+1} \nonumber \\ \nonumber \\ & & \int_0^1 \mbox{d}x_{\tilde{n}-1}(N-\tilde{n}+1)(1-x_{\tilde{n}-1})^{N-\tilde{n}} \nonumber \\ \nonumber \\ & & \prod_{i=1}^{\tilde{n}-3} \prod_{k=i+2}^{\tilde{n}-1} \int_{\mbox{max}(x_i, x_k)}^1 \mbox{d}z_{i, k} \; . \label{Eq:ChainedIntegrals}\end{aligned}$$ It is worthwhile to mention that we have made no approximation yet, hence Eq. \[Eq:ChainedIntegrals\] yields exact results even for small values of $N$, as Tab. \[Tab:DistrN6\] shows.

| $\tilde{n}$ | $\tilde{F}_{2, rl}^{(6)}(\tilde{n})$ | $\tilde{S}_{2, rl}^{(6)}(\tilde{n})$ | mean | standard error | difference | dif. (in std-errors) |
|---|---|---|---|---|---|---|
| 3 | $1$ | 0.15625 | 0.15624 | $1 \cdot 10^{-5}$ | $7 \cdot 10^{-6}$ | 0.62 |
| 4 | $\frac{27}{32}$ | 0.29534 | 0.29535 | $1 \cdot 10^{-5}$ | $2 \cdot 10^{-5}$ | 1.13 |
| 5 | $\frac{9\,459}{17\,248}$ | 0.33785 | 0.33784 | $1 \cdot 10^{-5}$ | $1 \cdot 10^{-5}$ | 0.82 |
| 6 | $\frac{107\,301}{509\,600}$ | 0.21056 | 0.21056 | $1 \cdot 10^{-5}$ | $3 \cdot 10^{-6}$ | 0.22 |

: Numerical validation of Eq. \[Eq:ChainedIntegrals\]. The columns $\tilde{F}_{2, rl}^{(6)}(\tilde{n})$ and $\tilde{S}_{2, rl}^{(6)}(\tilde{n})$ refer to analytical values; the mean and standard-error columns come from numerical simulation. Walks were performed on 300000000 maps with $N=6$ points each.[]{data-label="Tab:DistrN6"}

However, the function $\mbox{max}(x_i, x_k)$ in the lower limits of the integrals in $z_{i, k}$ makes Eq. \[Eq:ChainedIntegrals\] difficult to solve, since one would have to consider all $(\tilde{n}-1)!$ possible orderings of the distances $x_i$. In the following, we consider the thermodynamic limit ($N \gg 1$) and make some approximations to solve Eq. \[Eq:ChainedIntegrals\]. For a better visualization, notice that the integrals in $z_{i, k}$ refer to the slashed lines of Fig. \[Fig:Trajectory\]. Observe that exactly $\tilde{n}-4$ slashed lines leave from each site, except the sites $s_1$ and $s_{\tilde{n}-1}$, from which $\tilde{n}-3$ slashed lines leave, due to the additional distance $z_{1, \tilde{n}-1}$ (thick slashed line in Fig. \[Fig:Trajectory\]). To obtain a more regular expression, we can eliminate the integral in $z_{1, \tilde{n}-1}$ in Eq. \[Eq:ChainedIntegrals\] (without any harm), so that each variable $z_{i, k}$ appears exactly $\tilde{n}-4$ times. 
To justify this elimination notice that, due to the deterministic rule of the TW, each distance $x_i$ is the minimum of $N-2$ random variables uniformly distributed in the interval $[0, 1]$. Therefore, its pdf is given by [@tercariol_2005]: $g(x_i) = (N-2)(1-x_i)^{N-3}$ and its mean and standard deviation are: $\overline{x_i} = 1/(N-1) \approx 1/N$ and $\sigma_{x_i} = \sqrt{(N-2)/[N (N-1)^2]} \approx 1/N$, so that, in the limit $N \gg 1$, $x_i$ assumes values close to 0 and the value of the integral $\int_{\mbox{max}(x_1, x_{\tilde{n}-1})}^1 \mbox{d}z_{1, \tilde{n}-1}$ is typically close to 1. Changing the exponent of $x_{\tilde{n}-1}$ from $N-\tilde{n}$ to $N-\tilde{n}+1$, all the variables $x_i$ are raised to the same power. The resulting expression is algebraically symmetric with respect to the variables $x_1$, $x_2$, …, $x_{\tilde{n}-1}$, which means that all possible $(\tilde{n}-1)!$ orderings occur with the same probability. Thus, one can consider the specific ordering $x_1 < x_2 < \cdots < x_{\tilde{n}-1}$ and rewrite Eq. \[Eq:ChainedIntegrals\] without using the inconvenient function max(): $$\begin{aligned} \frac{\tilde{F}_{2, rl}^{(N)}(\tilde{n})}{(\tilde{n}-1)! } = \prod_{i=1}^{\tilde{n}-1} (N-i) \int_0^1 \mbox{d}x_1(1-x_1)^{N-\tilde{n}+1} \nonumber \\ \prod_{i=2}^{\tilde{n}-2} \int_{x_{i-1}}^1 \mbox{d}x_i(1-x_i)^{N-\tilde{n}+i-1} \nonumber \\ \int_{x_{\tilde{n}-2}}^1 \mbox{d}x_{\tilde{n}-1}(1-x_{\tilde{n}-1})^{N-3} \; , \label{Eq:IntegralsWithoutMax}\end{aligned}$$ where we emphasize that the extra factor $(\tilde{n}-1)!$ takes into account all possible orderings of the variables $x_i$. The exponent of $x_1$ may be changed from $N-\tilde{n}+1$ to $N-\tilde{n}$ so that the exponents of $x_1$, $x_2$, …, $x_{\tilde{n}-2}$ form an arithmetic progression. One then calculates the integrals of Eq. \[Eq:IntegralsWithoutMax\] to obtain at last: $$\begin{aligned} \nonumber \tilde{F}_{2, rl}^{(N)}(\tilde{n}) & = & \frac{(\tilde{n}-1)! 
(N-1)(N-2)(N-3)\cdots}{(N-2)(2N-4)(3N-7)(4N-11) \cdots } \nonumber \\ & & \frac{\cdots(N-\tilde{n}+1)}{\cdots \left\{ (\tilde{n}-1)N-[(\tilde{n}-1)\tilde{n}/2 + 1] \right\} } \nonumber \\ & = & \prod_{k=1}^{\tilde{n}-1} \frac{k(N-k)}{kN - k(k+1)/2 - 1} \nonumber \\ & = & \prod_{j=4}^{\tilde{n}} \frac{N-j+1}{N-j/2-1/(j-1)} \; , \label{Eq:DistrTildeNCumul}\end{aligned}$$ where we have set $j = k + 1$; the lower limit of the product was changed from $j=2$ to $j=4$ because the factors for $j=2$ and $j=3$ are physically meaningless, as we shall argue in Subsec. \[Subsec:AnalogyGeomDistr\]. Finally, the distribution of $\tilde{n}$ is calculated from the one-step difference of the upper-tail distribution: $$\begin{aligned} \tilde{S}_{2, rl}^{(N)}(\tilde{n}) & = & \tilde{F}_{2, rl}^{(N)}(\tilde{n}) - \tilde{F}_{2, rl}^{(N)}(\tilde{n}+1) \nonumber \\ & = & \left[ 1- \frac{N-\tilde{n}} {N-(\tilde{n}+1)/2 -1/\tilde{n}} \right] \nonumber \\ & & \prod_{j=4}^{\tilde{n}} \frac{N-j+1}{N - j/2 - 1/(j-1)} \; . \label{Eq:DistrTildeN}\end{aligned}$$ The expression of Eq. \[Eq:DistrTildeNCumul\] is similar to the one obtained for $\mu=1$ (using Eqs. 9 and 10 of Ref.  and calling $\tilde{n}=t+2$): $\tilde{F}_{1, rl}^{(N)}(\tilde{n}) = [\prod_{j=3}^{\tilde{n}} \frac{N-j+1}{N-j/2}]/(\tilde{n}-1)!$. The main difference is the presence of the factor $1/(\tilde{n}-1)!$, because, for $\mu=1$, one must consider only the specific ordering $x_{\tilde{n}-1} < x_{\tilde{n}-2} < \cdots < x_1$. At this point we are able to understand the major role played by the memory in this partially self-avoiding walk. For $\mu = 1$, the walker must go to the nearest neighbor; extremal statistics is behind this dynamics. Forbidding the walker to return to the last visited site, however, opens up the possibility of going to the first or second nearest neighbor, which transforms the extremal statistics into combinatorial statistics. 
Mathematically, this is expressed by the absence of $(\tilde{n} - 1)!$ in Eq. \[Eq:DistrTildeNCumul\]. Analogy to the geometric distribution {#Subsec:AnalogyGeomDistr} ------------------------------------- Making an analogy to the geometric distribution, we can write Eq. \[Eq:DistrTildeN\] as $\tilde{S}_{2, rl}^{(N)}(\tilde{n}) = \tilde{p}_{2, rl}^{(N)}(\tilde{n}+1) \prod_{j=4}^{\tilde{n}} \tilde{q}_{2, rl}^{(N)}(j)$ where $$\begin{aligned} \tilde{q}_{2, rl}^{(N)}(j) = \frac{N-j+1}{N-j/2-1/(j-1)} \label{Eq:TildeQ}\end{aligned}$$ is the exploration probability in the $j$th step and $\tilde{p}_{2, rl}^{(N)}(j) = 1 - \tilde{q}_{2, rl}^{(N)}(j)$ is the revisit probability in the $j$th step. We remark that the expression of Eq. \[Eq:TildeQ\] is similar to the one obtained for $\mu=1$ (adapting Eqs. 9 and 10 of Ref.  from their original concept of subsistence probability to the concept of exploration probability handled here): $(j-1)\tilde{q}_{1, rl}^{(N)}(j) = (N-j+1)/(N-j/2)$. The main difference is the extra factor $j-1$, which is a consequence of the restriction $x_{\tilde{n}-1} < x_{\tilde{n}-2} < \cdots < x_1$. This extra factor explains the abrupt change in the exploratory behavior between the $\mu=1$ and $\mu=2$ cases: on one hand, for $\mu=1$ the exploration probability (in the thermodynamic limit) decreases harmonically along the trajectory; on the other hand, for $\mu=2$ the exploration probability tends to 1 when $N \rightarrow \infty$. Since the memory $\mu=2$ ensures that the tourist explores at least $\tilde{n}_{min}=\mu+1=3$ sites, it only makes sense to define an exploration probability from the 4th step on. In fact, for the first step ($j=1$) Eq. \[Eq:TildeQ\] does not have a defined value, for the second step it yields $\tilde{q}_{2, rl}^{(N)}(2) = (N-1)/(N-2) > 1$, which is absurd, and for the third step $\tilde{q}_{2, rl}^{(N)}(3) = 1$. To take into account the proper physical content, we previously changed the lower limit of the products of Eq. 
\[Eq:DistrTildeNCumul\] from $j=2$ to $j=4$. It is interesting to mention that for the step $j=N+1$ (after the tourist explores all $N$ sites), Eq. \[Eq:TildeQ\] correctly yields $\tilde{q}_{2, rl}^{(N)}(N+1) = 0$. Since in the $j$th step there are $j-3$ sites equally likely to be revisited and $\tilde{p}_{2, rl}^{(N)}(j)$ is the probability for the tourist to revisit [**any one**]{} of these sites, in the limit $N \gg j \gg 1$ the probability $\tilde{p}_{rl}$ for the tourist to revisit [**a specific**]{} site $s_k$ is $$\begin{aligned} \tilde{p}_{rl} & = & \frac{1}{j-3} \; \tilde{p}_{2, rl}^{(N)}(j) = \frac{1}{j-3} \; \frac{j/2-1-1/(j-1)}{N-j/2-1/(j-1)} \nonumber \\ & \approx & \frac{1}{2N} \; , %\tilde{p}_{rl} = \frac{1}{j-3} \tilde{p}_{2, rl}^{(N)}(j) = \frac{1}{j-3} \frac{\frac{j}{2}-1-\frac{1}{j-1}}{N-\frac{j}{2}-\frac{1}{j-1}} \approx \frac{1}{2N} \label{Eq:TildePrl}\end{aligned}$$ which is half the probability for him/her to explore a specific new site \[namely $\tilde{q}_{rl} = 1/(N-j) \approx 1/N$\]. Alternative Derivation ---------------------- In the following we obtain, for $N \gg 1$, simpler expressions for the first passage and exploration probabilities, via an alternative reasoning. From these probabilities, we obtain closed analytical expressions for $\tilde{F}_{2, rl}^{(N)}(\tilde{n})$. ### First Passage and Exploration Probabilities Suppose that the tourist has traveled along the trajectory $s_1$, $s_2$, …, $s_{\tilde{n}}$ ($\tilde{n} \ge 3$) without any revisit. Let us first reobtain the probability $\tilde{p}_{rl}$ for the tourist to revisit a specific site $s_k$ (outside the exclusion window, i.e., $k \le \tilde{n}-2$) in the following step. To do this, consider the following constraints (see Fig. \[Fig:Trajectory\]): 1. the distance $z_{\tilde{n}, k}$ must be smaller than $x_{\tilde{n}}$. 2. since in the ($k+1$)th step the tourist moved from site $s_k$ to $s_{k+1}$, the distance $z_{\tilde{n}, k}$ must be greater than the distance $x_k$. 
In brief, $z_{\tilde{n}, k}$ must vary between $x_k$ and $x_{\tilde{n}}$, so that $0 < x_k < z_{\tilde{n}, k} < x_{\tilde{n}} < 1$. Since the pdf of each distance $x_i$ is $g(x_i) = (N-2)(1-x_i)^{N-3}$ and $z_{\tilde{n}, k}$ is uniformly distributed (by definition of the RL model), for $N \gg 1$ the probability $\tilde{p}_{rl}$ is given by: $\tilde{p}_{rl} = P(x_k < z_{\tilde{n}, k} < x_{\tilde{n}}) = \int_0^1 \mbox{d}x_k (N-2)(1-x_k)^{N-3} \int_{x_k}^1 \mbox{d}x_{\tilde{n}} (N-2)(1-x_{\tilde{n}})^{N-3} \int_{x_k}^{x_{\tilde{n}}} \mbox{d}z_{\tilde{n}, k} = (N-2)/[(N-1)(2N-3)] \approx 1/(2N)$, which agrees with Eq. \[Eq:TildePrl\]. For a generic step $j$ there are $j-3$ sites susceptible to being revisited, so that the first passage and exploration probabilities for this step are: $\tilde{p}_{2, rl}^{(N)}(j) = (j-3)/(2N) = 1-\tilde{q}_{2, rl}^{(N)}(j)$, which is an approximation to Eq. \[Eq:TildeQ\], leading to $$\begin{aligned} \tilde{F}_{2, rl}^{(N)}(\tilde{n}) & = & \prod_{j=4}^{\tilde{n}} \tilde{q}_{2, rl}^{(N)}(j) = \prod_{j=4}^{\tilde{n}} \left[ 1-\frac{j-3}{2N} \right] \nonumber \\ & = & \frac{\Gamma(2N)}{\Gamma(2N-\tilde{n}+3) (2N)^{\tilde{n}-3}} \; , \label{Eq:DistrTildeNGamaCumul}\end{aligned}$$ which is a closed analytical form for Eq. \[Eq:DistrTildeNCumul\]. ### Exponential Form (Cumulative Half Gaussian) In the limit $N \gg 1$, the exploration probability may be written as $\tilde{q}_{2, rl}^{(N)}(j) = [ 1-1/(2N) ]^{j-3}$, so that Eq. \[Eq:DistrTildeNGamaCumul\] assumes its exponential form $$\begin{aligned} \nonumber \tilde{F}_{2, rl}^{(N)}(\tilde{n}) & = & \prod_{j=4}^{\tilde{n}} \tilde{q}_{2, rl}^{(N)}(j) = \left( 1-\frac{1}{2N} \right)^{\tilde{\omega}} \\ & \approx & e^{-\tilde{\omega}/(2N)} = e^{-[(\tilde{n} - 3)^2/(4N) ][1 + 1/(\tilde{n}-3)]} \; , \label{Eq:DistrTildeNExp}\end{aligned}$$ where $$\tilde{\omega} = \sum_{j=4}^{\tilde{n}} (j-3) = \frac{(\tilde{n} - 2)(\tilde{n} - 3)}{2}$$ has a simple physical interpretation. 
It is just the number of distances $z_{i, k}$ between non-consecutive sites of the trajectory. Notice that the trajectory of Fig. \[Fig:Trajectory\] is topologically equivalent to an ($\tilde{n}-1$)-sided polygon, which has $(\tilde{n}-1)(\tilde{n}-4)/2$ diagonals. All these diagonals plus the side $s_1 s_{\tilde{n}-1}$ total $\tilde{\omega}=(\tilde{n}-2)(\tilde{n}-3)/2$ paths (slashed lines of Fig. \[Fig:Trajectory\]) which allow a revisit. For $\tilde{n}-3 \gg 1$, one can disregard $1/(\tilde{n}-3)$ in Eq. \[Eq:DistrTildeNExp\], leading to a half-Gaussian: $y = \tilde{F}_{2, rl}^{(N)}(\tilde{n}) = e^{-[(\tilde{n}-3)/\sqrt{2N}]^2/2}$, indicating that the scaled variable is $x = (\tilde{n}-\tilde{n}_{min})/\sqrt{2N}$ with $\tilde{n}_{min} = \mu+1 = 3$, leading to the universal curve $y = e^{-x^2/2}$, with $x \ge 0$. We have kept $\tilde{n}_{min}$ only for comparison with a possible generalization of these calculations to the case of short memory $\mu \ll N$. Distribution of the total number of explored sites {#Sec:DistrN} ================================================== Up to this point we have focused on the number $\tilde{n}$ of sites explored before the first revisit. In the TW with $\mu=1$, a revisit implies that the tourist has entered an attractor of period $p=2$ [@tercariol_2005], but with $\mu=2$, a revisit does not imply capture. In what follows we calculate the probability $p_t$ for the tourist to be trapped during a revisit and then obtain the capture $p_{2, rl}^{(N)}(j)$ and subsistence $q_{2, rl}^{(N)}(j)$ probabilities and the upper-tail distribution $F_{2, rl}^{(N)}(n)$ for the number $n$ of sites visited in the whole walk. Trapping Probability -------------------- Let us recall Fig. \[Fig:Trajectory\] and consider that the tourist has traveled along the trajectory $s_1$, $s_2$, …, $s_{\tilde{n}}$ without any revisit. Assume that in the following step he/she revisits site $s_k$ (outside the memory window, $k \le \tilde{n}-2$). 
Due to the deterministic rule, two situations may occur: (i) if $x_k < x_{k-1}$, the tourist moves forward to site $s_{k+1}$ and is trapped by an attractor of period $p=\tilde{n}-k+1$; (ii) if $x_{k-1} < x_k$, the tourist moves backward to site $s_{k-1}$ and escapes from the attractor. Therefore, whether the walker is trapped or escapes depends on which distance, $x_{k-1}$ or $x_k$, is shorter. The only exception is a revisit to $s_1$, when the tourist is unconditionally trapped, leading to a trajectory with a null transient time ($t=0$) and a cycle of period $p=\tilde{n}$. Taking into consideration that all $(\tilde{n}-1)!$ possible orderings of the distances $x_1$, $x_2$, …, $x_{\tilde{n}-1}$ are equally probable, one could naively conclude that the trapping probability would be $p_t = P(x_k < x_{k-1}) = 1/2$. Nonetheless, numerical simulations of this system have refuted this expectation, pointing out that this probability is in fact $p_t = 2/3$. To understand this result, we first show that the probability $P_v(r)$ for the tourist to revisit a specific site $s_k$ is proportional to the rank $r$ occupied by the associated distance $x_k$ (between sites $s_k$ and $s_{k+1}$) when one reorders the distances $x_1$, $x_2$, …, $x_{\tilde{n}-2}$ decreasingly (so that $x_k$ is the $r$th greatest one). Secondly, we show that the probability $P_t(r)$ for the tourist to be trapped when revisiting the site $s_k$ is proportional to $r-1$. Finally, from $P_v(r)$ and $P_t(r)$ we prove that $p_t = 2/3$. ### Order Statistics Let us recall a tool from order statistics. Given a sample of $M$ variates $X_1$, $X_2$, …, $X_M$, reorder them so that $X_{(1)} > X_{(2)} > \ldots > X_{(M)}$. If $X$ has pdf $g(x)$ and cumulative distribution $G(x) = \int_{-\infty}^x dx' g(x')$, then the pdf $h_r(x)$ of $X_{(r)}$ is $h_r(x) = M! [G(x)]^{M-r} [1-G(x)]^{r-1} g(x)/[(r-1)!(M-r)!] $, for $r=1, 2, \ldots, M$. 
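A useful distribution-free consequence of this machinery: if $Z$ is an independent draw from the same continuous distribution as the sample, then by exchangeability $Z$ is equally likely to occupy any of the $M+1$ ranks, so $P(Z > X_{(r)}) = r/(M+1)$, independently of $g$. A minimal Monte Carlo sketch of this identity (our own illustration, with uniform variates; the name `estimate` is ours):

```python
import random

def estimate(M, r, trials=200_000, seed=1):
    """Monte Carlo estimate of P(Z > X_(r)) for M iid Uniform(0,1)
    draws, X_(r) being the r-th greatest, and Z an independent draw
    from the same distribution."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xs = sorted((rng.random() for _ in range(M)), reverse=True)
        if rng.random() > xs[r - 1]:   # xs[r-1] is the r-th greatest
            hits += 1
    return hits / trials

# exchangeability gives P(Z > X_(r)) = r/(M + 1) for any continuous g
print(estimate(8, 3))  # expected close to 3/9 = 0.333...
```

The same counting argument is what produces rank-proportional revisit probabilities in what follows.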
Returning to the TW with $\mu=2$ on the RL model, each distance $x_i$ has pdf given by $g(x)=(N-2)(1-x)^{N-3}$, so its cumulative distribution is $G(x) =\int_0^x \mbox{d}x' \, g(x') = 1-(1-x)^{N-2}$ and the pdf of $x_{(r)}$ is $ h_r(x) = \tilde{n}! \left[1-(1-x)^{N-2}\right]^{\tilde{n}-r} [(1-x)^{N-2}]^{r-1} (N-2)(1-x)^{N-3}/[(r-1)!(\tilde{n}-r)!]$. ### Rank-revisit and Rank-trapping Probabilities Again, consider that the tourist has traveled along the trajectory $s_1$, $s_2$, …, $s_{\tilde{n}}$ (without any revisit). Let us calculate the probability $P_v(r)$ for him/her to revisit the site $s_{(r)}=s_k$ (with associated distance $x_{(r)}=x_k$) in the next step. Since $s_{(r)}$ must be the nearest site, the distance $z_{\tilde{n}, (r)}$ has pdf given by $g(x)=(N-2)(1-x)^{N-3}$ and, since the tourist moved from site $s_{(r)}=s_k$ to $s_{k+1}$ in the ($k+1$)th step, the distance $z_{\tilde{n}, (r)}$ is certainly greater than $x_{(r)}$. Thus $P_v(r) \propto P(z_{\tilde{n}, (r)} > x_{(r)}) = \tilde{n}!/ [(r-1)!(\tilde{n}-r)!] \int_0^1 \mbox{d}x \left[1-(1-x)^{N-2}\right]^{\tilde{n}-r} \left[(1-x)^{N-2}\right]^{r-1} (N-2)(1-x)^{N-3} \int_x^1 \mbox{d}z (N-2)(1-z)^{N-3}$. Evaluating the integral in $z$ and calling $y = (1-x)^{N-2}$, the above equation is rewritten as: $P_v(r) \propto \tilde{n}!/[(r-1)!(\tilde{n}-r)!] \mbox{B}(\tilde{n}-r+1, r+1) = \tilde{n}!/[(r-1)!(\tilde{n}-r)!] (\tilde{n}-r)!r!/(\tilde{n}+1)! = r/(\tilde{n}+1)$. This expression [*is not*]{} the probability $P_v$ itself. Instead, it only gives the dependence of $P_v$ on $r$. 
Normalizing $P_v$ over $1 \le r \le \tilde{n}-2$, one has $$\begin{aligned} P_v(r) = \frac{r}{\sum_{k=1}^{\tilde{n}-2} k} = \frac{2r}{(\tilde{n}-1)(\tilde{n}-2)} \; , \label{Eq:Pvr}\end{aligned}$$ where $\tilde{n}-2$ is the number of sites available for revisit (the sites $s_{\tilde{n}}$ and $s_{\tilde{n}-1}$ are forbidden by memory) and the normalization factor $\sum_{j=1}^{\tilde{n}-2} j = (\tilde{n}-2)(\tilde{n}-1)/2$ is simply the sum of all $\tilde{n}-2$ ranks. The result of Eq. \[Eq:Pvr\] does not contradict Eq. \[Eq:TildePrl\], since Eq. \[Eq:TildePrl\] gives an approximate probability for the tourist to revisit a specific site $s_k$, regardless of its associated distance $x_k=x_{(r)}$, while Eq. \[Eq:Pvr\] gives the conditional probability for the tourist to “choose” the $r$-ranked site $s_{(r)}$ during a revisit after exploring $\tilde{n}$ distinct sites. Once the tourist has revisited site $s_k$ (or equivalently $s_{(r)}$), the probability $P_t(r)$ for him/her to be trapped also depends on the rank $r$. The trapping condition is that $x_{k-1}$ must be greater than $x_k$. Since $x_k=x_{(r)}$ is the $r$th greatest distance, there are only $r-1$ remaining distances (out of $\tilde{n}-3$) greater than $x_k$. Thus, $$\begin{aligned} P_t(r) = \frac{r-1}{\tilde{n}-3} \; . \label{Eq:Ptr}\end{aligned}$$ Combining Eqs. \[Eq:Pvr\] and \[Eq:Ptr\], the probability for the tourist to be trapped when visiting a specific site $s_{(r)}$ is: $P_v(r) P_t(r) = 2r(r-1)/[(\tilde{n}-1)(\tilde{n}-2)(\tilde{n}-3)]$. Thus, the probability for the tourist to be trapped when revisiting any site is $p_t = \sum_{r=1}^{\tilde{n}-2} P_v(r) P_t(r)$. Setting $m=\tilde{n}-2$ and evaluating $\sum_{r=1}^m r(r-1) = m(m^2-1)/3$, one finds the trapping probability $$\begin{aligned} p_t = 2/3 \; . \label{Eq:Pt}\end{aligned}$$ We remark that this result has been obtained without any approximation, and numerical simulations agree with it even for small values of $N$. 
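The cancellation that makes $p_t$ independent of $\tilde{n}$ can be confirmed in exact rational arithmetic. A short sketch (the function name `trapping_probability` is ours) evaluating $p_t = \sum_r P_v(r) P_t(r)$ directly from Eqs. \[Eq:Pvr\] and \[Eq:Ptr\]:

```python
from fractions import Fraction

def trapping_probability(n_tilde):
    """Exact p_t = sum over r of P_v(r) * P_t(r), for a trajectory of
    n_tilde explored sites (n_tilde >= 4 so that P_t is defined)."""
    total = Fraction(0)
    for r in range(1, n_tilde - 1):  # r = 1, ..., n_tilde - 2
        P_v = Fraction(2 * r, (n_tilde - 1) * (n_tilde - 2))
        P_t = Fraction(r - 1, n_tilde - 3)
        total += P_v * P_t
    return total

for n in (4, 5, 10, 100):
    print(n, trapping_probability(n))  # always 2/3, independent of n
```

For instance, for $\tilde{n}=4$ only $r=2$ contributes: $P_v(2) P_t(2) = (2/3) \cdot 1 = 2/3$, already giving the universal value.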
Capture and Subsistence Probabilities ------------------------------------- Combining the probability $\tilde{p}_{rl}$ for the tourist to revisit a specific site $s_k$ (Eq. \[Eq:TildePrl\]) and the trapping probability $p_t$ (Eq. \[Eq:Pt\]), one obtains the probability $p_{rl}$ for the tourist to revisit $s_k$ [**and**]{} be trapped: $$\begin{aligned} p_{rl} = \tilde{p}_{rl} p_t = \frac{1}{2N} \frac{2}{3} = \frac{1}{3N} \; . \label{Eq:Prl}\end{aligned}$$ Since in the $j$th step there are $j-3$ sites available to revisit, the capture (i.e., revisiting any site [**and**]{} being trapped) and subsistence (i.e., exploring any new site [**or**]{} revisiting any site and not being trapped) probabilities in the $j$th step are: $p_{2, rl}^{(N)}(j) = (j-3)/(3N) = 1 - q_{2, rl}^{(N)}(j)$ and the upper-tail distribution for the number $n$ of sites explored by the tourist in the whole trajectory is $$\begin{aligned} F_{2, rl}^{(N)}(n) & = & \prod_{j=4}^n q_{2, rl}^{(N)}(j) \nonumber \\ % = \prod_{j=4}^n \left( 1-\frac{j-3}{3N} \right) \nonumber \\ % & = & \prod_{j=4}^n \frac{3N-j+3}{3N} = \prod_{j=1}^{n-3} \frac{3N-j}{3N} \nonumber \\ & = & \frac{\Gamma(3N)}{\Gamma(3N-n+3) (3N)^{n-3}} \; , \label{Eq:DistrNGamaCumul}\end{aligned}$$ which is analogous to Eq. \[Eq:DistrTildeNGamaCumul\]. ### Comparison to the RM model with $\mu=0$ The expression of Eq. \[Eq:DistrNGamaCumul\] is similar to the one obtained for the RM model with memory $\mu=0$ [@tercariol_2005]: $F_{0, rm}^{(N)}(n) = \Gamma(N)/[\Gamma(N-n) N^{n}]$. This result explains the non-trivial equivalence observed between the RL model with $N$ points and memory $\mu=2$ (memory effect) and the RM model with $3N$ points and memory $\mu=0$ (effect of distance symmetry breaking), when one compares the distributions for the total number $n$ of sites explored by the tourist. 
Notice that, taking both models with $N$ points each, in the RL with $\mu=2$, at each step, the probability for the tourist to revisit a specific site and be trapped is $p_{rl} \approx 1/(3N)$, while in the RM with $\mu=0$ this probability is $p_{rm} = 1/N$. Therefore, taking the RL with $N$ points and the RM with $3N$ points equalizes these probabilities and justifies the equivalence. ### Exponential Form In the limit $N \gg 1$, the subsistence probability is rewritten as $ q_{2, rl}^{(N)}(j) = [1- 1/(3N) ]^{j-3}$ and one obtains the exponential form of Eq. \[Eq:DistrNGamaCumul\], namely $F_{2, rl}^{(N)}(n) = \prod_{j=4}^n q_{2, rl}^{(N)}(j) = [1 - 1/(3N)]^\omega \approx e^{-\omega/(3N)}$, with $\omega = (n-2)(n-3)/2$. Rather than differentiating $F_{2, rl}^{(N)}(n)$, the distribution $S_{2, rl}^{(N)}(n)$ for the number $n$ of sites explored in the whole trajectory is more precisely obtained by requiring the tourist to explore $n$ distinct sites and then be captured in the next step (i.e., revisit [**any**]{} site and be trapped): $$\begin{aligned} S_{2, rl}^{(N)}(n) & = & F_{2, rl}^{(N)}(n) \, p_{2, rl}^{(N)}(n+1) \nonumber \\ & = & \frac{n-2}{3N} \, e^{-\frac{(n-2)(n-3)/2}{3N}} \;. \label{Eq:DistrNExp}\end{aligned}$$ For $n \gg 1$, calling $y = \sqrt{3N} S_{2, rl}^{(N)}(n)$ and $x = (n-n_{min})/\sqrt{3N}$ (with $n_{min}=\mu+1=3$), one obtains the universal plot for this system: $$y = x \, e^{-x^2/2} \; , \label{Eq:Universal}$$ with $x \ge 0$ and $m$th moment $\langle x^m \rangle = 2^{m/2} \Gamma(m/2+1)$, where we see that normalization is assured by $\langle x^0 \rangle = 1$. The mean value is $\langle x \rangle = \sqrt{\pi/2}$ and the variance is $\langle x^2 \rangle - \langle x \rangle^2 = 2 - \pi/2$. Fig. \[Fig:Universal\] exhibits a plot of Eq. \[Eq:Universal\] together with numerical data. From this figure, or by an analytical calculation, one obtains that the mode is at $x=1$. 
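The stated moments of the universal curve can be verified by direct numerical integration; below is a minimal stdlib-only sketch (our own check, using simple trapezoidal quadrature, truncated at $x=10$ where the Gaussian tail is negligible):

```python
from math import exp, pi, sqrt

def y(x):
    """Universal curve y = x exp(-x^2/2) for the scaled variable x >= 0."""
    return x * exp(-x * x / 2)

# trapezoidal moments of y over [0, 10] with step h
h = 1e-4
xs = [i * h for i in range(100_001)]  # 0.0 ... 10.0

def moment(m):
    vals = [x ** m * y(x) for x in xs]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

print(moment(0))                                # normalization -> 1
print(moment(1), sqrt(pi / 2))                  # mean -> sqrt(pi/2)
print(moment(2) - moment(1) ** 2, 2 - pi / 2)   # variance -> 2 - pi/2
# the mode: y'(x) = (1 - x^2) exp(-x^2/2) vanishes at x = 1
```

These agree with the closed form $\langle x^m \rangle = 2^{m/2}\Gamma(m/2+1)$: $m=0$ gives 1, $m=1$ gives $\sqrt{2}\,\Gamma(3/2)=\sqrt{\pi/2}$, and $m=2$ gives 2.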
Transient and period joint distribution {#Sec:JointDistr} ======================================= The transient/period joint distribution $S_{2, rl}^{(N)}(t, p)$ can be obtained similarly to Eq. \[Eq:DistrNExp\], by requiring the tourist to explore $n$ distinct sites and then revisit the [**specific**]{} site $s_k$ (instead of [**any**]{} site) and be trapped, giving rise to a trajectory with transient $t=k-1$ and period $p=n-k+1$. We notice that the relevant variable is $t+p=n$. Hence, $S_{2, rl}^{(N)}(t, p)$ is obtained by multiplying $F_{2, rl}^{(N)}(t+p)$ by $p_{rl}$ (Eq. \[Eq:Prl\]) \[or by $\tilde{p}_{rl}$ (Eq. \[Eq:TildePrl\]) in the case $t=0$, since the tourist is unconditionally captured when revisiting the site $s_1$\]: $$\begin{aligned} S_{2, rl}^{(N)}(t, p) = \frac{1}{(3-\delta_{t, 0})N} \, e^{-\frac{(t+p-2)(t+p-3)/2}{3N}} \label{Eq:JointDistr}\end{aligned}$$ where $\delta_{i, j}$ is the Kronecker delta. Fig. \[Fig:JointDistr\] exhibits a plot of Eq. \[Eq:JointDistr\] for $N=1000$ points. Transient time marginal distribution ------------------------------------ The transient time distribution is calculated by summing Eq. \[Eq:JointDistr\] over all possible periods, i.e. $S_{2, rl}^{(N)}(t) = \sum_{p=3}^N S_{2, rl}^{(N)}(t, p)$. In the limit $N \gg 1$, this summation can be approximated by the integral $$\begin{aligned} S_{2, rl}^{(N)}(t) & = & \int_{5/2}^\infty \mbox{d}p \; S_{2, rl}^{(N)}(t, p) \\ & = & \left( 1+\frac{\delta_{t, 0}}{2} \right) \sqrt{\frac{\pi}{6N}} \, \mbox{erfc}\left(\frac{t}{\sqrt{6N}} \right) \; ,\end{aligned}$$ where the lower limit $5/2$ comes from a Yates continuity correction (which, besides improving the integral approximation, makes the analytical form considerably simpler), the upper limit has been extended to infinity to ease the calculation (harmlessly, since Eq. \[Eq:JointDistr\] yields negligible values for $p>N$), and $\mbox{erfc}(x) = (2/\sqrt{\pi}) \int_x^\infty \mbox{d}u \, e^{-u^2}$ is the complementary error function.
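The erfc closed form for the transient marginal can be checked against a direct summation of Eq. \[Eq:JointDistr\] (an illustrative Python sketch; the function names and tolerance are our own):

```python
import math

def joint(t, p, N):
    """Joint transient/period distribution of Eq. (JointDistr)."""
    return math.exp(-((t + p - 2) * (t + p - 3) / 2.0) / (3 * N)) \
           / ((3 - (1 if t == 0 else 0)) * N)

def transient_closed(t, N):
    """erfc closed form for the transient marginal S(t)."""
    return (1.0 + (0.5 if t == 0 else 0.0)) \
           * math.sqrt(math.pi / (6 * N)) * math.erfc(t / math.sqrt(6 * N))

N = 1000
for t in (0, 1, 10, 50, 100):
    direct = sum(joint(t, p, N) for p in range(3, N + 1))
    assert abs(direct - transient_closed(t, N)) < 0.01 * transient_closed(t, N)
```

For $N=1000$ the discrete sum and the continuity-corrected integral agree to a fraction of a percent, which is the point of choosing $5/2$ as the lower limit.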
Cycle period marginal distribution ---------------------------------- Similarly, the period distribution is $$\begin{aligned} S_{2, rl}^{(N)}(p) & = &\sum_{t=0}^{N-3} S_{2, rl}^{(N)}(t, p) = \int_{-1}^\infty \mbox{d}t \; S_{2, rl}^{(N)}(t, p) \\ & = &\sqrt{\frac{\pi}{6N}} \, \mbox{erfc}\left(\frac{p-7/2}{\sqrt{6N}} \right) \approx \frac{e^{-p^2/(6N)}}{p} \; ,\end{aligned}$$ where the lower limit $-1$ accounts both for the Yates continuity correction and for the extra half weight of the $t=0$ term. The mean period is $\overline{p} = \sqrt{3 \pi N/8}$ and the standard deviation is $\sigma_p = \sqrt{(2 - 3 \pi/8) N}$. For $p \ll \sqrt{6N}$, the decay follows a power law, $S_{2, rl}^{(N)}(p) \propto p^{-1}$. Conclusion {#Sec:Conclusion} ========== In this paper, we have analytically obtained the statistical distributions for the deterministic tourist walk with memory $\mu=2$ on the random link model. The distribution for the number of sites explored before the first passage has been compared to the one previously obtained for the case $\mu=1$, elucidating the mechanism that strongly increases the tourist’s exploratory behavior. On one hand, for $\mu=1$ the distances travelled at each step must obey $x_1>x_2>\ldots$, leading to a localized exploration. In the thermodynamic limit, the mean number of explored sites is then $\overline{n}= e = 2.71828 \ldots$ and the exploration probability decreases harmonically along the trajectory. This dynamics is due to the underlying extremal statistics. On the other hand, for $\mu=2$ the distances $x_1, x_2, \ldots$ are unconstrained, leading to an extended exploration: $\overline{n}$ is proportional to $N^{1/2}$ and the exploration probability tends to 1 when $N \rightarrow \infty$. This dynamics is due to the underlying combinatorial statistics. The factor $(\tilde{n}-1)!$ in Eq.
\[Eq:IntegralsWithoutMax\] represents the change from the extremal statistics to the combinatorial one, which makes the $\delta_{p,2}$ distribution of $\mu = 1$ broaden to a wide ($1/p$) distribution for $\mu \ge 2$. Through the trapping probability $p_t=2/3$ (whose value is counterintuitive), we obtained the capture and subsistence probabilities and a closed form for the complementary cumulative distribution of the number of sites explored in the whole trajectory. This distribution is analogous to the one obtained for the random map model with $\mu=0$. This result explains the equivalence between these mean-field models (RL with $N$ points and memory $\mu=2$; and RM with $3N$ points and memory $\mu=0$). For a large number of sites ($N \gg 1$) in the random medium, the distribution $S^{(N)}_{2, rl}(n)$ of having $n$ distinct sites visited by the tourist with memory $\mu = 2$ in the random link model is universal: $y = x \, e^{-x^2/2}$, with $y = \sqrt{3N} S^{(N)}_{2,rl}(n)$ and $x = (n - 3)/\sqrt{3N}$. The transient time $t$ and cycle period $p$ joint distribution $S^{(N)}_{2, rl}(t, p) = e^{-(t+p-2)(t+p-3)/(6N)}/[(3 - \delta_{t, 0})N]$ has been obtained by noticing that the relevant variable is approximately given by $t+p=n$. The marginal distributions are also universal. For the transient time one has $y = [1 + \delta_{t, 0}/2]\,\mbox{erfc}(x)$, with $y = \sqrt{6N/\pi} S^{(N)}_{2,rl}(t)$ and $x = t/\sqrt{6N}$; and for the period distribution, $y = \mbox{erfc}(x)$, with $y = \sqrt{6N/\pi} S^{(N)}_{2,rl}(p)$ and $x = (p - 7/2)/\sqrt{6N}$. We have shown that the discrepancy of the null transient time distribution ($t=0$), when compared to the subsequent ones ($t>0$), is due to the higher capture probability of the starting site $s_1$ \[namely, $\tilde{p}_{rl}=1/(2N)$\] compared to the other sites \[$p_{rl}=1/(3N)$\]. We have also shown that the period distribution decays according to a power law, $S^{(N)}_{2,rl}(p) \propto p^{-1}$.
Future studies concern the consideration of higher memory values in the random link model and the understanding of the connection with the random map model. As the memory increases, we expect a transition from the closed periods to non-closed ones (chaotic phase). We are interested in understanding the role of finite dimensionality of the system. Acknowledgements {#acknowledgements .unnumbered} ================ The authors thank R. S. Gonzalez for fruitful discussions. ASM acknowledges the Brazilian agencies CNPq (303990/2007-4 and 476862/2007-8) for support.
--- abstract: 'We present a period-luminosity-amplitude analysis of 5899 red giant and binary stars in the Large Magellanic Cloud, using publicly available observations of the MACHO project. For each star, we determined new periods, which were double-checked in order to exclude aliases and false periods. The period-luminosity relations confirm the existence of a short-period, small-amplitude P–L sequence at periods shortward of Seq. A. We point out that the widely accepted sequence of eclipsing binaries between Seqs. C and D, known as Seq. E, does not exist. The correct position for Seq. E is at periods a factor of two greater, and the few stars genuinely lying between Seq. C and D are under-luminous Mira variables, presumably enshrouded in dust. The true Seq. E overlaps with the sequence of Long Secondary Periods (Seq. D) and their P–L relation is well described by a simple model assuming Roche geometry. The amplitudes of LSPs have properties that are different from both the pulsations and the ellipsoidal variations, but they are more similar to the former than the latter, arguing for pulsation rather than binarity as the origin of the LSP phenomenon.' author: - 'A. Derekas, L. L. Kiss, T. R. Bedding, H. Kjeldsen, P. Lah, Gy. M. Szabó' title: Ellipsoidal Variability and Long Secondary Periods in MACHO Red Giant Stars --- Introduction ============ The multiplicity of red giant period-luminosity (P–L) relations has been a major discovery on the road to interpreting complex light variations of these stars. Following the two seminal papers by @woo99 and @woo00, a picture has emerged that can be summarized as follows: large-amplitude Mira stars pulsate in the fundamental mode, whereas smaller-amplitude semiregulars are often multimode pulsators, in which various overtone modes can be excited [see also @bed98]. Besides the pulsating P–L sequences (Seq. A, B and C, as labeled by @woo99), two other sequences were suggested: Seq. 
E with red giants in eclipsing binaries and Seq. D with stars that have long secondary periods (LSPs). The latter pose a great mystery and the nature of their slow variations is still not understood, with several different mechanisms proposed [@oli03; @woo04]. The basic picture of multiple P–L relations has been confirmed by many independent studies, mostly based on $K$-band magnitudes. It has emerged that the original five sequences have further details, including a break at the tip of the Red Giant Branch (RGB), which is due to the existence of distinct RGB pulsators that are mixed with the more evolved AGB variables [e.g. @ita02; @kis03; @kis04; @ita04a; @ita04b; @sos04a; @fra05]. Almost all authors have accepted the existence of the distinct sequences of red giant binaries (Seq. E) and LSP stars (Seq. D). The only exception was @sos04b, who showed that Seqs. E and D seem to merge at a specific luminosity (as measured by the Wesenheit index) and suggested that this may imply the binary origin of LSPs. Here we report on a combined analysis of MACHO observations of eclipsing binaries and red giants in the Large Magellanic Cloud, which shed new light on these stars and on the LSP phenomenon. Data analysis and results ========================= Our results are based on two sets of publicly available almost eight-year long MACHO light curves. A detailed description of the MACHO project can be found in @coo95. Some of the data are offered for download through the MACHO website (http://wwwmacho.mcmaster.ca), where one can choose specific samples based on an automated classification of variability type. Using the Web interface, we individually downloaded all light curves classified as eclipsing binaries (6833 stars) and as red giant variables (classified as Wood A, B, C and D classes; 2868 stars). 
In the case of the eclipsing binaries, it became obvious very quickly that the classification was not perfect, and a large fraction of stars turned out to be Cepheids, RR Lyrae stars or long-period variables. We also found that the catalogued periods were incorrect for a significant number of stars. We therefore reclassified all 6833 stars and re-determined their periods, using the following procedure (more details will be given elsewhere). Periods were first estimated using the Phase Dispersion Minimization method [@ste78]. We then checked all the folded light curves by eye and refined the periods with the String–Length method [@laf65; @cla02], which is more reliable than PDM when the light curve contains long flat sections and very narrow minima, as is the case for many eclipsing binaries. Also, in many cases PDM gave harmonics or subharmonics of the true period, which was only recognized through the visual inspection of every phase diagram. We also examined the color variations to identify and exclude pulsating stars with sinusoidal light curves. After this analysis, 3031 stars remained as genuine eclipsing or ellipsoidal variables. Next, we classified the binary sample using Fourier decomposition of their phase diagrams. @ruc93 showed that light curves of W UMa systems (contact binaries) can be quantitatively described using only two coefficients, ${\rm a_{2}}$ and ${\rm a_{4}}$, of the cosine decomposition ${\rm \sum a_{i} \cos (2\pi i \varphi)}$. @poj02 tested the behavior of semi-detached and detached systems in the ${\rm a_{2} - a_{4}}$ plane by decomposing theoretical light curves into Fourier coefficients. We found that only stars with “W UMa-like” light curve shape composed the sequence (which is plotted in Fig.\[pl\]), while detached and semi-detached systems are spread everywhere in the P–L plane. Our second set of light curves were those of the 2868 publicly available MACHO red giant variables. 
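As an illustration of the period search described above, the String–Length criterion can be sketched in a few lines of Python (a minimal stand-in for the actual pipeline; the sampling, trial-period grid, and synthetic light curve are our own assumptions):

```python
import math
import random

def string_length(folded):
    """Sum of point-to-point distances of the phase-ordered light curve."""
    pts = sorted(folded)
    total = 0.0
    for (p1, m1), (p2, m2) in zip(pts, pts[1:] + pts[:1]):
        total += math.hypot((p2 - p1) % 1.0, m2 - m1)  # wrap phase 1 -> 0
    return total

def best_period(times, mags, trial_periods):
    """Trial period minimizing the string length of the folded curve."""
    return min(trial_periods,
               key=lambda period: string_length(
                   [((t / period) % 1.0, m) for t, m in zip(times, mags)]))

# Synthetic variable with a 2.5-day period, irregularly sampled over 100 days
rng = random.Random(1)
times = sorted(rng.uniform(0.0, 100.0) for _ in range(200))
mags = [math.sin(2 * math.pi * t / 2.5) for t in times]
trials = [2.0 + 0.002 * k for k in range(501)]
found = best_period(times, mags, trials)
```

A correctly folded curve traces a short, smooth path in the phase diagram, while a wrong trial period scatters the points and inflates the string length; this is why the method copes well with the narrow minima of eclipsing binaries.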
Since they often show multiply periodic light variations, we determined periods with iterative sine wave fitting. As a measure of significance, we also estimated the $S/N$ ratio of the peaks in the Fourier spectra. Since the noise in the Fourier spectrum increased toward lower frequencies, different values of the $S/N$ were used when determining whether a peak was real for different period-luminosity relations ($S/N$ cutoff was set to 3 for Seq. A$^\prime$, 4 for Seq. A, 6 for Seqs. B, C and 10 for Seq. D). We omitted periods close to 1 yr, because many light curves show variations with this period that are not real. As a result, a total of 4315 significant frequencies were identified for the 2868 stars. We studied these two samples in the P–L plane. In order to reduce the effects of interstellar extinction and allow a direct comparison with previous results in the literature, we plotted the period–$K$ magnitude relation. We obtained near infrared magnitudes by cross-correlation with the 2MASS All-Sky Point Source Catalog (http://irsa.ipac.caltech.edu), with a search radius of 3$^{\prime\prime}$. The resulting P–L diagram is shown in the left panel of Fig. \[pl\]. To our surprise, the P–L relation of the binary sample did not follow Seq. E, as we had expected. Instead, they overlapped with Seq. D, which at first sight appears to give strong evidence for the binary origin of Seq. D and prompted us to investigate the issue in more detail. Discussion ========== The combined P–L plot in Fig. \[pl\] shows the well-known complex structure of distinct sequences. Besides sequences A, B, C, D and E, we also detect the existence of the faintly visible new short-period P–L sequence (Seq. A$^\prime$) on the left-hand side boundary of the diagram (labeled as a$_{4}$ and b$_{4}$ in @sos04a and $P_4$ in @kis06). The eclipsing stars (pluses) seem to merge with Seq. D rather than forming Seq. 
E as adopted in the literature [@woo99; @woo00; @kis03; @kis04; @ita04a; @ita04b; @nod04; @fra05]. To clarify this issue, we re-checked periods for: (i) stars on Seq. E in Fig. 1 of @woo00, for which the identifiers and basic data were kindly provided by Peter Wood; (ii) stars on Seq. E in our Fig. \[pl\] (black dots). In both cases, it turned out that for most of the objects, the given periods were half of the true ones, as one might expect from a Fourier analysis of eclipsing binary light curves. We have carefully double-checked all individual light curves on Seq. E and D and corrected the periods (see Fig. \[example\]). The final P–L plot is shown in the right panel of Fig. \[pl\]. As a result of the period correction, the sequence between C and D, which is known as Seq. E in the literature, has completely disappeared. A few stars remain in the gap but practically none of them are eclipsing binaries. The majority turn out to be Mira stars that are very red ($J-K>2$ mag) and are presumably carbon-rich Miras that are dimmed by circumstellar dust clouds (bottom panel of Fig. \[example\]). We propose to retain the label E for the sequence of ellipsoidal variables in its corrected position (right panel of Fig. \[pl\]). In the literature, only @sos04b plotted the eclipsing binary sequence at the correct (doubled) period. However, they did not discuss this issue and one can still find more recent studies where Seq. E was shown at the wrong period. @sos04b have, however, shown that Roche geometry gives a good fit to the OGLE ellipsoidal variables. We have also checked this on the MACHO sample. We calculated the theoretical orbital periods of systems at mass ratios of 1, where the components fill their Roche lobes. For this, we used evolutionary models of @cas03 and applied equations of Section 4 in @sos04b. (Note, however, that the definition of $f(q)$ in their Eq. 1 actually gave $f(q)^{-1}$ as the filling factor.) 
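A back-of-the-envelope version of this period estimate can be written with the common Eggleton (1983) approximation for the Roche-lobe radius together with Kepler's third law (a sketch with illustrative masses and radii; this is not the paper's exact $f(q)$ prescription or its model tracks):

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30     # solar mass [kg]
R_SUN = 6.957e8      # solar radius [m]

def roche_lobe_fraction(q):
    """Eggleton (1983) approximation for R_L / a at mass ratio q = M1/M2."""
    q13 = q ** (1.0 / 3.0)
    return 0.49 * q13 ** 2 / (0.6 * q13 ** 2 + math.log(1.0 + q13))

def lobe_filling_period_days(m1, m2, r1):
    """Orbital period [days] at which star 1 (radius r1 in R_sun, masses
    in M_sun) exactly fills its Roche lobe."""
    a = r1 * R_SUN / roche_lobe_fraction(m1 / m2)        # separation [m]
    period = 2 * math.pi * math.sqrt(a ** 3 / (G * (m1 + m2) * M_SUN))
    return period / 86400.0

# An equal-mass system with a 0.85 M_sun, 100 R_sun giant (illustrative values)
p_days = lobe_filling_period_days(0.85, 0.85, 100.0)
```

For these illustrative numbers the lobe-filling period comes out at a few hundred days, i.e. in the period range of the sequences discussed here; a smaller filling fraction (an unequal mass ratio or a star well inside its lobe) changes the result accordingly.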
For the calculations we took the evolutionary tracks of the 0.85 ${\rm M}_{\odot}$ and 2.5 ${\rm M}_{\odot}$ models, since these masses represent the mass limits of stars that evolve through the RGB and AGB. The $K$ magnitudes of the models were determined from $T_{\rm eff}$ vs. $(V-K)$ calibrations, combined from @hou00 and @kuc06. In the right panel of Fig. \[pl\] we show these two limits (the shorter period line belongs to the higher mass). For any other mass ratios, the orbital periods shift towards smaller values. This simple approach with Roche geometry describes remarkably well the observed period-luminosity relation of ellipsoidal variables in the MACHO sample, in agreement with the study of @sos04b. However, the OGLE sample contains a larger fraction of ellipsoidals and most of them have longer periods than are predicted by the models. Those stars do not entirely fill the Roche lobe [@sos04b]. It is also worth mentioning that the low fraction of ellipsoidal/eclipsing RGB stars in the MACHO data (1.5%) was used by @woo04 as an argument against the binary origin of LSPs. However, the OGLE statistics clearly showed that there are at least 10 times more such RGB stars, and so that argument is no longer valid. Since the lower mass model fits Seq. D quite well, the question arises: how similar are the ellipsoidal and LSP variables? To assess this, we examined the amplitudes of these stars. @sos04b mentioned that amplitudes of LSPs are positively correlated with the brightness of the star. Compared to OGLE data, the MACHO observations have the advantage of giving information on the color variations. We examined the amplitudes in the MACHO blue and red bands of the binary and LSP stars, and also included Seq. C stars, which allow a comparison with stars that we know to be pulsating. The amplitudes were measured by fitting smooth spline functions to the phased light curves. The resulting blue peak-to-peak amplitudes are shown as a function of $K$ magnitude in Fig.
\[amlu\], where several features are apparent. Firstly, the amplitudes of ellipsoidal variables (circles in the upper panel) do not show any correlation with luminosity, as expected for variation caused by the geometry of a binary system. Secondly, the amplitudes of LSP variables (crosses in the upper panel) increase with luminosity, with their distribution forming a striking triangular envelope (the correlation coefficient is $r\sim0.51$). It is also very interesting that the points below the well-defined upper envelope seem to show a flat distribution. Whatever the cause of the LSP phenomenon, there appears to be a maximum possible amplitude at each luminosity, with an apparently uniform distribution of amplitudes beneath this maximum value. In comparison, stars on the pulsating P–L sequence C (lower panel in Fig. \[amlu\]) show a very different distribution. To quantify the difference between the amplitude-luminosity distributions of Seqs. C and D, we performed the multivariate $\varepsilon$-test of @sze04, applied in two dimensions. In this test we measure the effects of random permutations of the initial distributions via changes in the “information energy” of the two distributions. In our case, 1000 random permutations showed that the difference between the C and D samples is highly significant. Therefore, we conclude that the physical mechanism causing the LSP phenomenon must be different from both the ellipsoidal variations of Seq. E and the radial fundamental-mode pulsations of Seq. C. Note, however, that for Seqs. A and B, there is a positive correlation between amplitude and luminosity similar to that of Seq. D (see the upper three panels of Figs. 4 in @kis03 and @kis04). At the same time, comparing blue and red amplitudes for individual stars revealed further interesting information. In Fig. \[pamp\] we plot the blue-to-red amplitude ratios as a function of period for Seqs. B, C, D and E stars.
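A minimal two-sample permutation test of this kind can be sketched as follows (a schematic reimplementation based on the standard Székely–Rizzo energy statistic, not the authors' code; the 2-D samples here are synthetic stand-ins for the (amplitude, $K$) pairs of two sequences):

```python
import math
import random

def energy_stat(xs, ys):
    """Szekely-Rizzo two-sample energy statistic for 2-D points."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    n, m = len(xs), len(ys)
    between = sum(dist(a, b) for a in xs for b in ys) / (n * m)
    within_x = sum(dist(a, b) for a in xs for b in xs) / (n * n)
    within_y = sum(dist(a, b) for a in ys for b in ys) / (m * m)
    return (n * m / (n + m)) * (2 * between - within_x - within_y)

def permutation_pvalue(xs, ys, n_perm=200, seed=42):
    """p-value from random relabelings of the pooled sample."""
    rng = random.Random(seed)
    observed = energy_stat(xs, ys)
    pooled = list(xs) + list(ys)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if energy_stat(pooled[:len(xs)], pooled[len(xs):]) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)

# Two clearly different synthetic 2-D samples
rng = random.Random(0)
sample_c = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(40)]
sample_d = [(rng.gauss(3, 1), rng.gauss(3, 1)) for _ in range(40)]
p = permutation_pvalue(sample_c, sample_d)
```

The statistic vanishes for identical samples and grows with the separation of the two point clouds, so a small permutation p-value signals genuinely different distributions.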
For the pulsating objects the median of the ratio is 1.40, indicating strong color, and thus temperature, changes during the pulsations. For ellipsoidal/eclipsing binaries, the median ratio is 1.13, while the LSPs have a median ratio of 1.29, being more similar to the pulsating stars [see also @hub03]. This behavior agrees with the findings of @woo04 who, based on color-amplitude variations of single objects, argued for the pulsational origin of LSPs. The overall statistics of more than 700 LSP variables favors this argument over the binary hypothesis, also agreeing with @hin02, who concluded that the long-period velocity changes in their observed stars probably result from some kind of pulsation. Summary ======= The main results of this paper can be summarized as follows: [$\cdot$]{} the period-luminosity relations of $\sim$6000 stars based on MACHO data confirm the existence of the short-period, small-amplitude P–L sequence shortward of Seq. A, which belongs to a higher-overtone pulsation mode. We label this Seq. A$^{\prime}$. [$\cdot$]{} the widely accepted sequence of eclipsing binaries between C and D, known as Seq. E, does not exist. The correct position for Seq. E, which comprises contact binaries and ellipsoidal variables, is at periods a factor of two greater. The true Seq. E overlaps with the LSPs (Seq. D), which appears to suggest a binary origin for the LSP phenomenon (but see the last point). [$\cdot$]{} of the few stars that genuinely lie between Seq. C and D, most are under-luminous Mira variables, presumably enshrouded in dust. [$\cdot$]{} we confirmed that ellipsoidal variables have a similar P–L relation to LSP stars. Their P–L relation is well described by a simple model assuming Roche geometry. [$\cdot$]{} the amplitudes of LSPs have properties that are different from both the pulsations and the ellipsoidal variations, but they are more similar to the former than the latter, arguing for pulsation rather than binarity as the origin of the LSP phenomenon.
AD is supported by an Australian Postgraduate Research Award. LLK is supported by a University of Sydney Postdoctoral Research Fellowship. GyMSz was supported by the Magyary Zoltán Higher Educational Public Foundation. This paper utilizes public domain data obtained by the MACHO Project, jointly funded by the US Department of Energy through the University of California, Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48, by the National Science Foundation through the Center for Particle Astrophysics of the University of California under cooperative agreement AST-8809616, and by the Mount Stromlo and Siding Spring Observatory, part of the Australian National University. Bedding, T. R., Zijlstra, A. A., 1998, , 506, 47 Castellani, V., Degl’Innocenti, S., Marconi, M., Prada Moroni, P. G., & Sestito, P., 2003, , 404, 645 Clarke, D., 2002, , 386, 763 Cook, K. H., Alcock, C., Allsman, H. A., et al. 1995, ASP Conf. Ser., 83, 221 Fraser, O. J., Hawley, S. L., Cook, K. H., & Keller, S. C., 2005, , 129, 768 Hinkle, K. H., Lebzelter, T., Joyce, R. R., & Fekel, F. C., 2002, , 123, 1002 Houdashelt, M. L., Bell, R. A., Sweigart, A. V., & Wing, R. F., 2000, , 119, 1424 Huber, J. P., Bedding, T. R., O’Toole, S. J., 2003, aahd.conf., 421 Ita, Y., et al., 2002, , 337, L31 Ita, Y., et al., 2004a, , 347, 720 Ita, Y., et al., 2004b, , 353, 705 Kiss, L. L., & Bedding, T. R., 2003, , 343, L79 Kiss, L. L., & Bedding, T. R., 2004, , 347, L83 Kiss, L. L., & Lah, P., 2006, Mem. S. A. It., 77, 303 Kučinskas, A., et al., 2006, , 452, 1021 Lafler, J., & Kinman, T. D., 1965, , 11, 216 Noda, S., et al., 2004, , 348, 1120 Olivier, E. A., & Wood, P. R., 2003, , 584, 1035 Pojmański, G., 2002, Acta Astron., 52, 397 Rucinski, S. M., 1993, , 105, 1433 Soszy[ń]{}ski, I., et al., 2004a, Acta Astron., 54, 129 Soszy[ń]{}ski, I., et al., 2004b, Acta Astron., 54, 347 Stellingwerf, R.
F., 1978, , 224, 953 Székely G.J., & Rizzo M.L., 2004, InterStat, 2004 November, 5 Wood, P. R., et al., 1999, IAUS, 191, 151 Wood, P. R., 2000, PASA, 17, 18 Wood, P. R., Olivier, E. A., & Kawaler, S. D., 2004, , 604, 800